matrix-synapse / 0b5220a
Update upstream source from tag 'upstream/1.18.0'
Update to upstream version '1.18.0' with Debian dir c9edf440848c2869631bde7e8ecddc361ac9fa29
Author: Andrej Shadura
194 changed files with 8812 additions and 5379 deletions.
0 Synapse 1.18.0 (2020-07-30)
1 ===========================
2
3 Deprecation Warnings
4 --------------------
5
6 ### Docker Tags with `-py3` Suffix
7
8 From 10th August 2020, we will no longer publish Docker images with the `-py3` tag suffix. The images tagged with the `-py3` suffix have been identical to the non-suffixed tags since release 0.99.0, and the suffix is obsolete.
9
10 On 10th August, we will remove the `latest-py3` tag. Existing per-release tags (such as `v1.18.0-py3`) will not be removed, but no new `-py3` tags will be added.
11
12 Scripts relying on the `-py3` suffix will need to be updated.
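For example, a deployment pinned to a suffixed tag can switch to the plain tag with no functional change. A hypothetical `docker-compose.yml` excerpt, assuming the official `matrixdotorg/synapse` image:

```yaml
# Hypothetical excerpt: drop the obsolete -py3 suffix.
services:
  synapse:
    # image: matrixdotorg/synapse:v1.18.0-py3   # old suffixed tag (identical image since 0.99.0)
    image: matrixdotorg/synapse:v1.18.0          # plain per-release tag, which will continue to be published
```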
13
14
15 ### TCP-based Replication
16
17 When setting up worker processes, we now recommend the use of a Redis server for replication. The old direct TCP connection method is deprecated and will be removed in a future release. See [docs/workers.md](https://github.com/matrix-org/synapse/blob/release-v1.18.0/docs/workers.md) for more details.
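As a minimal sketch, the Redis side of such a setup in `homeserver.yaml` looks roughly like the following (host, port and password are placeholders; docs/workers.md is the authoritative reference):

```yaml
# Sketch only: enable Redis-based replication for worker deployments.
redis:
  enabled: true
  host: localhost
  port: 6379
  # password: <set this if your Redis instance requires authentication>
```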
18
19
20 Improved Documentation
21 ----------------------
22
23 - Update worker docs with latest enhancements. ([\#7969](https://github.com/matrix-org/synapse/issues/7969))
24
25
26 Synapse 1.18.0rc2 (2020-07-28)
27 ==============================
28
29 Bugfixes
30 --------
31
32 - Fix an `AssertionError` exception introduced in v1.18.0rc1. ([\#7876](https://github.com/matrix-org/synapse/issues/7876))
33 - Fix experimental support for moving typing off master when worker is restarted, which is broken in v1.18.0rc1. ([\#7967](https://github.com/matrix-org/synapse/issues/7967))
34
35
36 Internal Changes
37 ----------------
38
39 - Further optimise queueing of inbound replication commands. ([\#7876](https://github.com/matrix-org/synapse/issues/7876))
40
41
42 Synapse 1.18.0rc1 (2020-07-27)
43 ==============================
44
45 Features
46 --------
47
48 - Include room states on invite events that are sent to application services. Contributed by @Sorunome. ([\#6455](https://github.com/matrix-org/synapse/issues/6455))
49 - Add delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). Contributed by @dklimpel. ([\#7613](https://github.com/matrix-org/synapse/issues/7613), [\#7953](https://github.com/matrix-org/synapse/issues/7953))
50 - Add experimental support for running multiple federation sender processes. ([\#7798](https://github.com/matrix-org/synapse/issues/7798))
51 - Add the option to validate the `iss` and `aud` claims for JWT logins. ([\#7827](https://github.com/matrix-org/synapse/issues/7827))
52 - Add support for handling registration requests across multiple client reader workers. ([\#7830](https://github.com/matrix-org/synapse/issues/7830))
53 - Add an admin API to list the users in a room. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#7842](https://github.com/matrix-org/synapse/issues/7842))
54 - Allow email subjects to be customised through Synapse's configuration. ([\#7846](https://github.com/matrix-org/synapse/issues/7846))
55 - Add the ability to re-activate an account from the admin API. ([\#7847](https://github.com/matrix-org/synapse/issues/7847), [\#7908](https://github.com/matrix-org/synapse/issues/7908))
56 - Add experimental support for running multiple pusher workers. ([\#7855](https://github.com/matrix-org/synapse/issues/7855))
57 - Add experimental support for moving typing off master. ([\#7869](https://github.com/matrix-org/synapse/issues/7869), [\#7959](https://github.com/matrix-org/synapse/issues/7959))
58 - Report CPU metrics to prometheus for time spent processing replication commands. ([\#7879](https://github.com/matrix-org/synapse/issues/7879))
59 - Support oEmbed for media previews. ([\#7920](https://github.com/matrix-org/synapse/issues/7920))
60 - Abort federation requests where the client disconnects before the ratelimiter expires. ([\#7930](https://github.com/matrix-org/synapse/issues/7930))
61 - Cache responses to `/_matrix/federation/v1/state_ids` to reduce duplicated work. ([\#7931](https://github.com/matrix-org/synapse/issues/7931))
62
63
64 Bugfixes
65 --------
66
67 - Fix detection of out of sync remote device lists when receiving events from remote users. ([\#7815](https://github.com/matrix-org/synapse/issues/7815))
68 - Fix bug where Synapse fails to process an incoming event over federation if the server is missing too much of the event's auth chain. ([\#7817](https://github.com/matrix-org/synapse/issues/7817))
69 - Fix a bug causing Synapse to misinterpret the value `off` for `encryption_enabled_by_default_for_room_type` in its configuration file(s) if that value isn't surrounded by quotes. This bug was introduced in v1.16.0. ([\#7822](https://github.com/matrix-org/synapse/issues/7822))
70 - Fix bug where we did not always pass in `app_name` or `server_name` to email templates, including e.g. for registration emails. ([\#7829](https://github.com/matrix-org/synapse/issues/7829))
71 - Errors which occur while using the non-standard JWT login now return the proper error: `403 Forbidden` with an error code of `M_FORBIDDEN`. ([\#7844](https://github.com/matrix-org/synapse/issues/7844))
72 - Fix "AttributeError: 'str' object has no attribute 'get'" error message when applying per-room message retention policies. The bug was introduced in Synapse 1.7.0. ([\#7850](https://github.com/matrix-org/synapse/issues/7850))
73 - Fix a bug introduced in Synapse 1.10.0 which could cause a "no create event in auth events" error during room creation. ([\#7854](https://github.com/matrix-org/synapse/issues/7854))
74 - Fix a bug which allowed empty rooms to be rejoined over federation. ([\#7859](https://github.com/matrix-org/synapse/issues/7859))
75 - Fix 'Unable to find a suitable guest user ID' error when using multiple client_reader workers. ([\#7866](https://github.com/matrix-org/synapse/issues/7866))
76 - Fix a long standing bug where the tracing of async functions with opentracing was broken. ([\#7872](https://github.com/matrix-org/synapse/issues/7872), [\#7961](https://github.com/matrix-org/synapse/issues/7961))
77 - Fix "TypeError in `synapse.notifier`" exceptions. ([\#7880](https://github.com/matrix-org/synapse/issues/7880))
78 - Fix deprecation warning due to invalid escape sequences. ([\#7895](https://github.com/matrix-org/synapse/issues/7895))
79
80
81 Updates to the Docker image
82 ---------------------------
83
84 - Base docker image on Debian Buster rather than Alpine Linux. Contributed by @maquis196. ([\#7839](https://github.com/matrix-org/synapse/issues/7839))
85
86
87 Improved Documentation
88 ----------------------
89
90 - Provide instructions on using `register_new_matrix_user` via docker. ([\#7885](https://github.com/matrix-org/synapse/issues/7885))
91 - Change the sample config postgres user section to use `synapse_user` instead of `synapse` to align with the documentation. ([\#7889](https://github.com/matrix-org/synapse/issues/7889))
92 - Reorder database paragraphs to promote postgres over sqlite. ([\#7933](https://github.com/matrix-org/synapse/issues/7933))
93 - Update the dates of ACME v1's end of life in [`ACME.md`](https://github.com/matrix-org/synapse/blob/master/docs/ACME.md). ([\#7934](https://github.com/matrix-org/synapse/issues/7934))
94
95
96 Deprecations and Removals
97 -------------------------
98
99 - Remove unused `synapse_replication_tcp_resource_invalidate_cache` prometheus metric. ([\#7878](https://github.com/matrix-org/synapse/issues/7878))
100 - Remove Ubuntu Eoan from the list of `.deb` packages that we build as it is now end-of-life. Contributed by @gary-kim. ([\#7888](https://github.com/matrix-org/synapse/issues/7888))
101
102
103 Internal Changes
104 ----------------
105
106 - Switch parts of the codebase from `simplejson` to the standard library `json`. ([\#7802](https://github.com/matrix-org/synapse/issues/7802))
107 - Add type hints to the http server code and remove an unused parameter. ([\#7813](https://github.com/matrix-org/synapse/issues/7813))
108 - Add type hints to synapse.api.errors module. ([\#7820](https://github.com/matrix-org/synapse/issues/7820))
109 - Ensure that calls to `json.dumps` are compatible with the standard library json. ([\#7836](https://github.com/matrix-org/synapse/issues/7836))
110 - Remove redundant `retry_on_integrity_error` wrapper for event persistence code. ([\#7848](https://github.com/matrix-org/synapse/issues/7848))
111 - Consistently use `db_to_json` to convert from database values to JSON objects. ([\#7849](https://github.com/matrix-org/synapse/issues/7849))
112 - Convert various parts of the codebase to async/await. ([\#7851](https://github.com/matrix-org/synapse/issues/7851), [\#7860](https://github.com/matrix-org/synapse/issues/7860), [\#7868](https://github.com/matrix-org/synapse/issues/7868), [\#7871](https://github.com/matrix-org/synapse/issues/7871), [\#7873](https://github.com/matrix-org/synapse/issues/7873), [\#7874](https://github.com/matrix-org/synapse/issues/7874), [\#7884](https://github.com/matrix-org/synapse/issues/7884), [\#7912](https://github.com/matrix-org/synapse/issues/7912), [\#7935](https://github.com/matrix-org/synapse/issues/7935), [\#7939](https://github.com/matrix-org/synapse/issues/7939), [\#7942](https://github.com/matrix-org/synapse/issues/7942), [\#7944](https://github.com/matrix-org/synapse/issues/7944))
113 - Add support for handling registration requests across multiple client reader workers. ([\#7853](https://github.com/matrix-org/synapse/issues/7853))
114 - Small performance improvement in typing processing. ([\#7856](https://github.com/matrix-org/synapse/issues/7856))
115 - The default value of `filter_timeline_limit` was changed from -1 (no limit) to 100. ([\#7858](https://github.com/matrix-org/synapse/issues/7858))
116 - Optimise queueing of inbound replication commands. ([\#7861](https://github.com/matrix-org/synapse/issues/7861))
117 - Add some type annotations to `HomeServer` and `BaseHandler`. ([\#7870](https://github.com/matrix-org/synapse/issues/7870))
118 - Clean up `PreserveLoggingContext`. ([\#7877](https://github.com/matrix-org/synapse/issues/7877))
119 - Change "unknown room version" logging from 'error' to 'warning'. ([\#7881](https://github.com/matrix-org/synapse/issues/7881))
120 - Stop using `device_max_stream_id` table and just use `device_inbox.stream_id`. ([\#7882](https://github.com/matrix-org/synapse/issues/7882))
121 - Return an empty body for OPTIONS requests. ([\#7886](https://github.com/matrix-org/synapse/issues/7886))
122 - Fix typo in generated config file. Contributed by @ThiefMaster. ([\#7890](https://github.com/matrix-org/synapse/issues/7890))
123 - Import ABC from `collections.abc` for Python 3.10 compatibility. ([\#7892](https://github.com/matrix-org/synapse/issues/7892))
124 - Remove unused functions `time_function`, `trace_function`, `get_previous_frames`
125 and `get_previous_frame` from `synapse.logging.utils` module. ([\#7897](https://github.com/matrix-org/synapse/issues/7897))
126 - Lint the `contrib/` directory in CI and linting scripts, add `synctl` to the linting script for consistency with CI. ([\#7914](https://github.com/matrix-org/synapse/issues/7914))
127 - Use Element CSS and logo in notification emails when app name is Element. ([\#7919](https://github.com/matrix-org/synapse/issues/7919))
128 - Optimisation to /sync handling: skip serializing the response if the client has already disconnected. ([\#7927](https://github.com/matrix-org/synapse/issues/7927))
129 - When a client disconnects, don't log it as 'Error processing request'. ([\#7928](https://github.com/matrix-org/synapse/issues/7928))
130 - Add debugging to `/sync` response generation (disabled by default). ([\#7929](https://github.com/matrix-org/synapse/issues/7929))
131 - Update comments that refer to Deferreds for async functions. ([\#7945](https://github.com/matrix-org/synapse/issues/7945))
132 - Simplify error handling in federation handler. ([\#7950](https://github.com/matrix-org/synapse/issues/7950))
133
134
0135 Synapse 1.17.0 (2020-07-13)
1136 ===========================
2137
404404 ```
405405
406406 * You will also need to uncomment the `tls_certificate_path` and
407 `tls_private_key_path` lines under the `TLS` section. You can either
408 point these settings at an existing certificate and key, or you can
409 enable Synapse's built-in ACME (Let's Encrypt) support. Instructions
410 for having Synapse automatically provision and renew federation
411 certificates through ACME can be found at [ACME.md](docs/ACME.md).
412 Note that, as pointed out in that document, this feature will not
413 work with installs set up after November 2019.
407 `tls_private_key_path` lines under the `TLS` section. You will need to manage
408 provisioning of these certificates yourself — Synapse had built-in ACME
409 support, but the ACMEv1 protocol Synapse implements is deprecated, not
410 allowed by LetsEncrypt for new sites, and will break for existing sites in
411 late 2020. See [ACME.md](docs/ACME.md).
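A minimal sketch of those two settings, assuming certificates provisioned by external tooling (the paths are placeholders):

```yaml
# Sketch only: point Synapse at externally provisioned certificates.
tls_certificate_path: "/path/to/fullchain.pem"
tls_private_key_path: "/path/to/privkey.pem"
```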
414412
415413 If you are using your own certificate, be sure to use a `.pem` file that
416414 includes the full certificate chain including any intermediate certificates
187187 ================
188188
189189 Synapse offers two database engines:
190 * `PostgreSQL <https://www.postgresql.org>`_
190191 * `SQLite <https://sqlite.org/>`_
191 * `PostgreSQL <https://www.postgresql.org>`_
192
193 By default Synapse uses SQLite in and doing so trades performance for convenience.
194 SQLite is only recommended in Synapse for testing purposes or for servers with
195 light workloads.
196192
197193 Almost all installations should opt to use PostgreSQL. Advantages include:
198194
205201
206202 For information on how to install and use PostgreSQL, please see
207203 `docs/postgres.md <docs/postgres.md>`_.
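As a rough sketch, a PostgreSQL-backed ``database`` section in ``homeserver.yaml`` looks like the following (credentials and host are placeholders; see `docs/postgres.md <docs/postgres.md>`_ for the full instructions)::

    # Sketch only: placeholder credentials, not a drop-in configuration.
    database:
      name: psycopg2
      args:
        user: synapse_user
        password: secretpassword
        database: synapse
        host: localhost
        cp_min: 5
        cp_max: 10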
204
205 By default Synapse uses SQLite and in doing so trades performance for convenience.
206 SQLite is only recommended in Synapse for testing purposes or for servers with
207 light workloads.
208208
209209 .. _reverse-proxy:
210210
7373 # replace `1.3.0` and `stretch` accordingly:
7474 wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
7575 dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
76
77 Upgrading to v1.18.0
78 ====================
79
80 Docker `-py3` suffix will be removed in future versions
81 -------------------------------------------------------
82
83 From 10th August 2020, we will no longer publish Docker images with the `-py3` tag suffix. The images tagged with the `-py3` suffix have been identical to the non-suffixed tags since release 0.99.0, and the suffix is obsolete.
84
85 On 10th August, we will remove the `latest-py3` tag. Existing per-release tags (such as `v1.18.0-py3`) will not be removed, but no new `-py3` tags will be added.
86
87 Scripts relying on the `-py3` suffix will need to be updated.
88
89 Redis replication is now recommended in lieu of TCP replication
90 ---------------------------------------------------------------
91
92 When setting up worker processes, we now recommend the use of a Redis server for replication. **The old direct TCP connection method is deprecated and will be removed in a future release.**
93 See `docs/workers.md <docs/workers.md>`_ for more details.
7694
7795 Upgrading to v1.14.0
7896 ====================
1616 """ Starts a synapse client console. """
1717 from __future__ import print_function
1818
19 from twisted.internet import reactor, defer, threads
20 from http import TwistedHttpClient
21
2219 import argparse
2320 import cmd
2421 import getpass
2724 import sys
2825 import time
2926 import urllib
27 from http import TwistedHttpClient
28
29 import nacl.encoding
30 import nacl.signing
3031 import urlparse
31
32 import nacl.signing
33 import nacl.encoding
34
35 from signedjson.sign import verify_signed_json, SignatureVerifyException
32 from signedjson.sign import SignatureVerifyException, verify_signed_json
33
34 from twisted.internet import defer, reactor, threads
3635
3736 CONFIG_JSON = "cmdclient_config.json"
3837
492491 "list messages <roomid> from=END&to=START&limit=3"
493492 """
494493 args = self._parse(line, ["type", "roomid", "qp"])
495 if not "type" in args or not "roomid" in args:
494 if "type" not in args or "roomid" not in args:
496495 print("Must specify type and room ID.")
497496 return
498497 if args["type"] not in ["members", "messages"]:
507506 try:
508507 key_value = key_value_str.split("=")
509508 qp[key_value[0]] = key_value[1]
510 except:
509 except Exception:
511510 print("Bad query param: %s" % key_value)
512511 return
513512
584583 parsed_url = urlparse.urlparse(args["path"])
585584 qp.update(urlparse.parse_qs(parsed_url.query))
586585 args["path"] = parsed_url.path
587 except:
586 except Exception:
588587 pass
589588
590589 reactor.callFromThread(
771770 syn_cmd.config = json.load(config)
772771 try:
773772 http_client.verbose = "on" == syn_cmd.config["verbose"]
774 except:
773 except Exception:
775774 pass
776775 print("Loaded config from %s" % config_path)
777 except:
776 except Exception:
778777 pass
779778
780779 # Twisted-specific: Runs the command processor in Twisted's event loop
1313 # limitations under the License.
1414
1515 from __future__ import print_function
16
17 import json
18 import urllib
19 from pprint import pformat
20
21 from twisted.internet import defer, reactor
1622 from twisted.web.client import Agent, readBody
1723 from twisted.web.http_headers import Headers
18 from twisted.internet import defer, reactor
19
20 from pprint import pformat
21
22 import json
23 import urllib
2424
2525
2626 class HttpClient(object):
2727 """
2828
2929
30 from synapse.federation import ReplicationHandler
31
32 from synapse.federation.units import Pdu
33
34 from synapse.util import origin_from_ucid
35
36 from synapse.app.homeserver import SynapseHomeServer
37
38 # from synapse.logging.utils import log_function
39
40 from twisted.internet import reactor, defer
41 from twisted.python import log
42
4330 import argparse
31 import curses.wrapper
4432 import json
4533 import logging
4634 import os
4735 import re
4836
4937 import cursesio
50 import curses.wrapper
38
39 from twisted.internet import defer, reactor
40 from twisted.python import log
41
42 from synapse.app.homeserver import SynapseHomeServer
43 from synapse.federation import ReplicationHandler
44 from synapse.federation.units import Pdu
45 from synapse.util import origin_from_ucid
46
47 # from synapse.logging.utils import log_function
5148
5249
5350 logger = logging.getLogger("example")
7471 """
7572
7673 try:
77 m = re.match("^join (\S+)$", line)
74 m = re.match(r"^join (\S+)$", line)
7875 if m:
7976 # The `sender` wants to join a room.
8077 (room_name,) = m.groups()
8380 # self.print_line("OK.")
8481 return
8582
86 m = re.match("^invite (\S+) (\S+)$", line)
83 m = re.match(r"^invite (\S+) (\S+)$", line)
8784 if m:
8885 # `sender` wants to invite someone to a room
8986 room_name, invitee = m.groups()
9289 # self.print_line("OK.")
9390 return
9491
95 m = re.match("^send (\S+) (.*)$", line)
92 m = re.match(r"^send (\S+) (.*)$", line)
9693 if m:
9794 # `sender` wants to message a room
9895 room_name, body = m.groups()
10198 # self.print_line("OK.")
10299 return
103100
104 m = re.match("^backfill (\S+)$", line)
101 m = re.match(r"^backfill (\S+)$", line)
105102 if m:
106103 # we want to backfill a room
107104 (room_name,) = m.groups()
199196 "#%s (unrec) %s = %s"
200197 % (pdu.context, pdu.pdu_type, json.dumps(pdu.content))
201198 )
202
203 # def on_state_change(self, pdu):
204 ##self.output.print_line("#%s (state) %s *** %s" %
205 ##(pdu.context, pdu.state_key, pdu.pdu_type)
206 ##)
207
208 # if "joinee" in pdu.content:
209 # self._on_join(pdu.context, pdu.content["joinee"])
210 # elif "invitee" in pdu.content:
211 # self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"])
212199
213200 def _on_message(self, pdu):
214201 """ We received a message
313300 return self.replication_layer.backfill(dest, room_name, limit)
314301
315302 def _get_room_remote_servers(self, room_name):
316 return [i for i in self.joined_rooms.setdefault(room_name).servers]
303 return list(self.joined_rooms.setdefault(room_name).servers)
317304
318305 def _get_or_create_room(self, room_name):
319306 return self.joined_rooms.setdefault(room_name, Room(room_name))
333320 user = args.user
334321 server_name = origin_from_ucid(user)
335322
336 ## Set up logging ##
323 # Set up logging
337324
338325 root_logger = logging.getLogger()
339326
353340 observer = log.PythonLoggingObserver()
354341 observer.start()
355342
356 ## Set up synapse server
343 # Set up synapse server
357344
358345 curses_stdio = cursesio.CursesStdIO(stdscr)
359346 input_output = InputOutput(curses_stdio, user)
367354
368355 input_output.set_home_server(hs)
369356
370 ## Add input_output logger
357 # Add input_output logger
371358 io_logger = IOLoggerHandler(input_output)
372359 io_logger.setFormatter(formatter)
373360 root_logger.addHandler(io_logger)
374361
375 ## Start! ##
362 # Start!
376363
377364 try:
378365 port = int(server_name.split(":")[1])
379 except:
366 except Exception:
380367 port = 12345
381368
382369 app_hs.get_http_server().start_listening(port)
00 {
1 "__inputs": [
2 {
3 "name": "DS_PROMETHEUS",
4 "label": "Prometheus",
5 "description": "",
6 "type": "datasource",
7 "pluginId": "prometheus",
8 "pluginName": "Prometheus"
9 }
10 ],
11 "__requires": [
12 {
13 "type": "grafana",
14 "id": "grafana",
15 "name": "Grafana",
16 "version": "6.7.4"
17 },
18 {
19 "type": "panel",
20 "id": "graph",
21 "name": "Graph",
22 "version": ""
23 },
24 {
25 "type": "panel",
26 "id": "heatmap",
27 "name": "Heatmap",
28 "version": ""
29 },
30 {
31 "type": "datasource",
32 "id": "prometheus",
33 "name": "Prometheus",
34 "version": "1.0.0"
35 }
36 ],
137 "annotations": {
238 "list": [
339 {
40 "$$hashKey": "object:76",
441 "builtIn": 1,
542 "datasource": "$datasource",
643 "enable": false,
1653 "editable": true,
1754 "gnetId": null,
1855 "graphTooltip": 0,
19 "id": 1,
20 "iteration": 1591098104645,
56 "id": null,
57 "iteration": 1594646317221,
2158 "links": [
2259 {
2360 "asDropdown": true,
3370 "panels": [
3471 {
3572 "collapsed": false,
36 "datasource": null,
73 "datasource": "${DS_PROMETHEUS}",
3774 "gridPos": {
3875 "h": 1,
3976 "w": 24,
268305 "show": false
269306 },
270307 "links": [],
271 "options": {},
272308 "reverseYBuckets": false,
273309 "targets": [
274310 {
558594 },
559595 {
560596 "collapsed": true,
561 "datasource": null,
597 "datasource": "${DS_PROMETHEUS}",
562598 "gridPos": {
563599 "h": 1,
564600 "w": 24,
14221458 },
14231459 {
14241460 "collapsed": true,
1425 "datasource": null,
1461 "datasource": "${DS_PROMETHEUS}",
14261462 "gridPos": {
14271463 "h": 1,
14281464 "w": 24,
17941830 },
17951831 {
17961832 "collapsed": true,
1797 "datasource": null,
1833 "datasource": "${DS_PROMETHEUS}",
17981834 "gridPos": {
17991835 "h": 1,
18001836 "w": 24,
25302566 },
25312567 {
25322568 "collapsed": true,
2533 "datasource": null,
2569 "datasource": "${DS_PROMETHEUS}",
25342570 "gridPos": {
25352571 "h": 1,
25362572 "w": 24,
28222858 },
28232859 {
28242860 "collapsed": true,
2825 "datasource": null,
2861 "datasource": "${DS_PROMETHEUS}",
28262862 "gridPos": {
28272863 "h": 1,
28282864 "w": 24,
28432879 "h": 9,
28442880 "w": 12,
28452881 "x": 0,
2846 "y": 33
2882 "y": 6
28472883 },
28482884 "hiddenSeries": false,
28492885 "id": 79,
29392975 "h": 9,
29402976 "w": 12,
29412977 "x": 12,
2942 "y": 33
2978 "y": 6
29432979 },
29442980 "hiddenSeries": false,
29452981 "id": 83,
30373073 "h": 9,
30383074 "w": 12,
30393075 "x": 0,
3040 "y": 42
3076 "y": 15
30413077 },
30423078 "hiddenSeries": false,
30433079 "id": 109,
31363172 "h": 9,
31373173 "w": 12,
31383174 "x": 12,
3139 "y": 42
3175 "y": 15
31403176 },
31413177 "hiddenSeries": false,
31423178 "id": 111,
32223258 "dashLength": 10,
32233259 "dashes": false,
32243260 "datasource": "$datasource",
3225 "description": "",
3261 "description": "Number of events queued up on the master process for processing by the federation sender",
32263262 "fill": 1,
32273263 "fillGradient": 0,
32283264 "gridPos": {
32293265 "h": 9,
32303266 "w": 12,
32313267 "x": 0,
3232 "y": 51
3268 "y": 24
32333269 },
32343270 "hiddenSeries": false,
32353271 "id": 140,
33433379 {
33443380 "format": "short",
33453381 "label": null,
3382 "logBase": 1,
3383 "max": null,
3384 "min": null,
3385 "show": true
3386 }
3387 ],
3388 "yaxis": {
3389 "align": false,
3390 "alignLevel": null
3391 }
3392 },
3393 {
3394 "aliasColors": {},
3395 "bars": false,
3396 "dashLength": 10,
3397 "dashes": false,
3398 "datasource": "${DS_PROMETHEUS}",
3399 "description": "The number of events in the in-memory queues ",
3400 "fill": 1,
3401 "fillGradient": 0,
3402 "gridPos": {
3403 "h": 8,
3404 "w": 12,
3405 "x": 12,
3406 "y": 24
3407 },
3408 "hiddenSeries": false,
3409 "id": 142,
3410 "legend": {
3411 "avg": false,
3412 "current": false,
3413 "max": false,
3414 "min": false,
3415 "show": true,
3416 "total": false,
3417 "values": false
3418 },
3419 "lines": true,
3420 "linewidth": 1,
3421 "nullPointMode": "null",
3422 "options": {
3423 "dataLinks": []
3424 },
3425 "percentage": false,
3426 "pointradius": 2,
3427 "points": false,
3428 "renderer": "flot",
3429 "seriesOverrides": [],
3430 "spaceLength": 10,
3431 "stack": false,
3432 "steppedLine": false,
3433 "targets": [
3434 {
3435 "expr": "synapse_federation_transaction_queue_pending_pdus{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
3436 "interval": "",
3437 "legendFormat": "pending PDUs {{job}}-{{index}}",
3438 "refId": "A"
3439 },
3440 {
3441 "expr": "synapse_federation_transaction_queue_pending_edus{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
3442 "interval": "",
3443 "legendFormat": "pending EDUs {{job}}-{{index}}",
3444 "refId": "B"
3445 }
3446 ],
3447 "thresholds": [],
3448 "timeFrom": null,
3449 "timeRegions": [],
3450 "timeShift": null,
3451 "title": "In-memory federation transmission queues",
3452 "tooltip": {
3453 "shared": true,
3454 "sort": 0,
3455 "value_type": "individual"
3456 },
3457 "type": "graph",
3458 "xaxis": {
3459 "buckets": null,
3460 "mode": "time",
3461 "name": null,
3462 "show": true,
3463 "values": []
3464 },
3465 "yaxes": [
3466 {
3467 "$$hashKey": "object:317",
3468 "format": "short",
3469 "label": "events",
3470 "logBase": 1,
3471 "max": null,
3472 "min": "0",
3473 "show": true
3474 },
3475 {
3476 "$$hashKey": "object:318",
3477 "format": "short",
3478 "label": "",
33463479 "logBase": 1,
33473480 "max": null,
33483481 "min": null,
33603493 },
33613494 {
33623495 "collapsed": true,
3363 "datasource": null,
3496 "datasource": "${DS_PROMETHEUS}",
33643497 "gridPos": {
33653498 "h": 1,
33663499 "w": 24,
35663699 },
35673700 {
35683701 "collapsed": true,
3569 "datasource": null,
3702 "datasource": "${DS_PROMETHEUS}",
35703703 "gridPos": {
35713704 "h": 1,
35723705 "w": 24,
35873720 "h": 7,
35883721 "w": 12,
35893722 "x": 0,
3590 "y": 52
3723 "y": 79
35913724 },
35923725 "hiddenSeries": false,
35933726 "id": 48,
36813814 "h": 7,
36823815 "w": 12,
36833816 "x": 12,
3684 "y": 52
3817 "y": 79
36853818 },
36863819 "hiddenSeries": false,
36873820 "id": 104,
38013934 "h": 7,
38023935 "w": 12,
38033936 "x": 0,
3804 "y": 59
3937 "y": 86
38053938 },
38063939 "hiddenSeries": false,
38073940 "id": 10,
38974030 "h": 7,
38984031 "w": 12,
38994032 "x": 12,
3900 "y": 59
4033 "y": 86
39014034 },
39024035 "hiddenSeries": false,
39034036 "id": 11,
39864119 },
39874120 {
39884121 "collapsed": true,
3989 "datasource": null,
4122 "datasource": "${DS_PROMETHEUS}",
39904123 "gridPos": {
39914124 "h": 1,
39924125 "w": 24,
39954128 },
39964129 "id": 59,
39974130 "panels": [
3998 {
3999 "aliasColors": {},
4000 "bars": false,
4001 "dashLength": 10,
4002 "dashes": false,
4003 "datasource": "$datasource",
4004 "editable": true,
4005 "error": false,
4006 "fill": 1,
4007 "fillGradient": 0,
4008 "grid": {},
4009 "gridPos": {
4010 "h": 13,
4011 "w": 12,
4012 "x": 0,
4013 "y": 67
4014 },
4015 "hiddenSeries": false,
4016 "id": 12,
4017 "legend": {
4018 "alignAsTable": true,
4019 "avg": false,
4020 "current": false,
4021 "max": false,
4022 "min": false,
4023 "show": true,
4024 "total": false,
4025 "values": false
4026 },
4027 "lines": true,
4028 "linewidth": 2,
4029 "links": [],
4030 "nullPointMode": "null",
4031 "options": {
4032 "dataLinks": []
4033 },
4034 "paceLength": 10,
4035 "percentage": false,
4036 "pointradius": 5,
4037 "points": false,
4038 "renderer": "flot",
4039 "seriesOverrides": [],
4040 "spaceLength": 10,
4041 "stack": false,
4042 "steppedLine": false,
4043 "targets": [
4044 {
4045 "expr": "rate(synapse_util_metrics_block_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\",block_name!=\"wrapped_request_handler\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds[$bucket_size])",
4046 "format": "time_series",
4047 "interval": "",
4048 "intervalFactor": 2,
4049 "legendFormat": "{{job}}-{{index}} {{block_name}}",
4050 "refId": "A",
4051 "step": 20
4052 }
4053 ],
4054 "thresholds": [],
4055 "timeFrom": null,
4056 "timeRegions": [],
4057 "timeShift": null,
4058 "title": "Total CPU Usage by Block",
4059 "tooltip": {
4060 "shared": false,
4061 "sort": 0,
4062 "value_type": "cumulative"
4063 },
4064 "type": "graph",
4065 "xaxis": {
4066 "buckets": null,
4067 "mode": "time",
4068 "name": null,
4069 "show": true,
4070 "values": []
4071 },
4072 "yaxes": [
4073 {
4074 "format": "percentunit",
4075 "logBase": 1,
4076 "max": null,
4077 "min": null,
4078 "show": true
4079 },
4080 {
4081 "format": "short",
4082 "logBase": 1,
4083 "max": null,
4084 "min": null,
4085 "show": true
4086 }
4087 ],
4088 "yaxis": {
4089 "align": false,
4090 "alignLevel": null
4091 }
4092 },
4093 {
4094 "aliasColors": {},
4095 "bars": false,
4096 "dashLength": 10,
4097 "dashes": false,
4098 "datasource": "$datasource",
4099 "editable": true,
4100 "error": false,
4101 "fill": 1,
4102 "fillGradient": 0,
4103 "grid": {},
4104 "gridPos": {
4105 "h": 13,
4106 "w": 12,
4107 "x": 12,
4108 "y": 67
4109 },
4110 "hiddenSeries": false,
4111 "id": 26,
4112 "legend": {
4113 "alignAsTable": true,
4114 "avg": false,
4115 "current": false,
4116 "max": false,
4117 "min": false,
4118 "show": true,
4119 "total": false,
4120 "values": false
4121 },
4122 "lines": true,
4123 "linewidth": 2,
4124 "links": [],
4125 "nullPointMode": "null",
4126 "options": {
4127 "dataLinks": []
4128 },
4129 "paceLength": 10,
4130 "percentage": false,
4131 "pointradius": 5,
4132 "points": false,
4133 "renderer": "flot",
4134 "seriesOverrides": [],
4135 "spaceLength": 10,
4136 "stack": false,
4137 "steppedLine": false,
4138 "targets": [
4139 {
4140 "expr": "(rate(synapse_util_metrics_block_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds[$bucket_size])) / rate(synapse_util_metrics_block_count[$bucket_size])",
4141 "format": "time_series",
4142 "interval": "",
4143 "intervalFactor": 2,
4144 "legendFormat": "{{job}}-{{index}} {{block_name}}",
4145 "refId": "A",
4146 "step": 20
4147 }
4148 ],
4149 "thresholds": [],
4150 "timeFrom": null,
4151 "timeRegions": [],
4152 "timeShift": null,
4153 "title": "Average CPU Time per Block",
4154 "tooltip": {
4155 "shared": false,
4156 "sort": 0,
4157 "value_type": "cumulative"
4158 },
4159 "type": "graph",
4160 "xaxis": {
4161 "buckets": null,
4162 "mode": "time",
4163 "name": null,
4164 "show": true,
4165 "values": []
4166 },
4167 "yaxes": [
4168 {
4169 "format": "ms",
4170 "logBase": 1,
4171 "max": null,
4172 "min": null,
4173 "show": true
4174 },
4175 {
4176 "format": "short",
4177 "logBase": 1,
4178 "max": null,
4179 "min": null,
4180 "show": true
4181 }
4182 ],
4183 "yaxis": {
4184 "align": false,
4185 "alignLevel": null
4186 }
4187 },
41884131 {
41894132 "aliasColors": {},
41904133 "bars": false,
42034146 "y": 80
42044147 },
42054148 "hiddenSeries": false,
4149 "id": 12,
4150 "legend": {
4151 "alignAsTable": true,
4152 "avg": false,
4153 "current": false,
4154 "max": false,
4155 "min": false,
4156 "show": true,
4157 "total": false,
4158 "values": false
4159 },
4160 "lines": true,
4161 "linewidth": 2,
4162 "links": [],
4163 "nullPointMode": "null",
4164 "options": {
4165 "dataLinks": []
4166 },
4167 "paceLength": 10,
4168 "percentage": false,
4169 "pointradius": 5,
4170 "points": false,
4171 "renderer": "flot",
4172 "seriesOverrides": [],
4173 "spaceLength": 10,
4174 "stack": false,
4175 "steppedLine": false,
4176 "targets": [
4177 {
4178 "expr": "rate(synapse_util_metrics_block_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\",block_name!=\"wrapped_request_handler\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds[$bucket_size])",
4179 "format": "time_series",
4180 "interval": "",
4181 "intervalFactor": 2,
4182 "legendFormat": "{{job}}-{{index}} {{block_name}}",
4183 "refId": "A",
4184 "step": 20
4185 }
4186 ],
4187 "thresholds": [],
4188 "timeFrom": null,
4189 "timeRegions": [],
4190 "timeShift": null,
4191 "title": "Total CPU Usage by Block",
4192 "tooltip": {
4193 "shared": false,
4194 "sort": 0,
4195 "value_type": "cumulative"
4196 },
4197 "type": "graph",
4198 "xaxis": {
4199 "buckets": null,
4200 "mode": "time",
4201 "name": null,
4202 "show": true,
4203 "values": []
4204 },
4205 "yaxes": [
4206 {
4207 "format": "percentunit",
4208 "logBase": 1,
4209 "max": null,
4210 "min": null,
4211 "show": true
4212 },
4213 {
4214 "format": "short",
4215 "logBase": 1,
4216 "max": null,
4217 "min": null,
4218 "show": true
4219 }
4220 ],
4221 "yaxis": {
4222 "align": false,
4223 "alignLevel": null
4224 }
4225 },
4226 {
4227 "aliasColors": {},
4228 "bars": false,
4229 "dashLength": 10,
4230 "dashes": false,
4231 "datasource": "$datasource",
4232 "editable": true,
4233 "error": false,
4234 "fill": 1,
4235 "fillGradient": 0,
4236 "grid": {},
4237 "gridPos": {
4238 "h": 13,
4239 "w": 12,
4240 "x": 12,
4241 "y": 80
4242 },
4243 "hiddenSeries": false,
4244 "id": 26,
4245 "legend": {
4246 "alignAsTable": true,
4247 "avg": false,
4248 "current": false,
4249 "max": false,
4250 "min": false,
4251 "show": true,
4252 "total": false,
4253 "values": false
4254 },
4255 "lines": true,
4256 "linewidth": 2,
4257 "links": [],
4258 "nullPointMode": "null",
4259 "options": {
4260 "dataLinks": []
4261 },
4262 "paceLength": 10,
4263 "percentage": false,
4264 "pointradius": 5,
4265 "points": false,
4266 "renderer": "flot",
4267 "seriesOverrides": [],
4268 "spaceLength": 10,
4269 "stack": false,
4270 "steppedLine": false,
4271 "targets": [
4272 {
4273 "expr": "(rate(synapse_util_metrics_block_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds[$bucket_size])) / rate(synapse_util_metrics_block_count[$bucket_size])",
4274 "format": "time_series",
4275 "interval": "",
4276 "intervalFactor": 2,
4277 "legendFormat": "{{job}}-{{index}} {{block_name}}",
4278 "refId": "A",
4279 "step": 20
4280 }
4281 ],
4282 "thresholds": [],
4283 "timeFrom": null,
4284 "timeRegions": [],
4285 "timeShift": null,
4286 "title": "Average CPU Time per Block",
4287 "tooltip": {
4288 "shared": false,
4289 "sort": 0,
4290 "value_type": "cumulative"
4291 },
4292 "type": "graph",
4293 "xaxis": {
4294 "buckets": null,
4295 "mode": "time",
4296 "name": null,
4297 "show": true,
4298 "values": []
4299 },
4300 "yaxes": [
4301 {
4302 "format": "ms",
4303 "logBase": 1,
4304 "max": null,
4305 "min": null,
4306 "show": true
4307 },
4308 {
4309 "format": "short",
4310 "logBase": 1,
4311 "max": null,
4312 "min": null,
4313 "show": true
4314 }
4315 ],
4316 "yaxis": {
4317 "align": false,
4318 "alignLevel": null
4319 }
4320 },
4321 {
4322 "aliasColors": {},
4323 "bars": false,
4324 "dashLength": 10,
4325 "dashes": false,
4326 "datasource": "$datasource",
4327 "editable": true,
4328 "error": false,
4329 "fill": 1,
4330 "fillGradient": 0,
4331 "grid": {},
4332 "gridPos": {
4333 "h": 13,
4334 "w": 12,
4335 "x": 0,
4336 "y": 93
4337 },
4338 "hiddenSeries": false,
42064339 "id": 13,
42074340 "legend": {
42084341 "alignAsTable": true,
42964429 "h": 13,
42974430 "w": 12,
42984431 "x": 12,
4299 "y": 80
4432 "y": 93
43004433 },
43014434 "hiddenSeries": false,
43024435 "id": 27,
43914524 "h": 13,
43924525 "w": 12,
43934526 "x": 0,
4394 "y": 93
4527 "y": 106
43954528 },
43964529 "hiddenSeries": false,
43974530 "id": 28,
44854618 "h": 13,
44864619 "w": 12,
44874620 "x": 12,
4488 "y": 93
4621 "y": 106
44894622 },
44904623 "hiddenSeries": false,
44914624 "id": 25,
45714704 },
45724705 {
45734706 "collapsed": true,
4574 "datasource": null,
4707 "datasource": "${DS_PROMETHEUS}",
45754708 "gridPos": {
45764709 "h": 1,
45774710 "w": 24,
50615194 },
50625195 {
50635196 "collapsed": true,
5064 "datasource": null,
5197 "datasource": "${DS_PROMETHEUS}",
50655198 "gridPos": {
50665199 "h": 1,
50675200 "w": 24,
50825215 "h": 9,
50835216 "w": 12,
50845217 "x": 0,
5085 "y": 66
5218 "y": 121
50865219 },
50875220 "hiddenSeries": false,
50885221 "id": 91,
51785311 "h": 9,
51795312 "w": 12,
51805313 "x": 12,
5181 "y": 66
5314 "y": 121
51825315 },
51835316 "hiddenSeries": false,
51845317 "id": 21,
52705403 "h": 9,
52715404 "w": 12,
52725405 "x": 0,
5273 "y": 75
5406 "y": 130
52745407 },
52755408 "hiddenSeries": false,
52765409 "id": 89,
53685501 "h": 9,
53695502 "w": 12,
53705503 "x": 12,
5371 "y": 75
5504 "y": 130
53725505 },
53735506 "hiddenSeries": false,
53745507 "id": 93,
54585591 "h": 9,
54595592 "w": 12,
54605593 "x": 0,
5461 "y": 84
5594 "y": 139
54625595 },
54635596 "hiddenSeries": false,
54645597 "id": 95,
55515684 "mode": "spectrum"
55525685 },
55535686 "dataFormat": "tsbuckets",
5554 "datasource": "Prometheus",
5687 "datasource": "${DS_PROMETHEUS}",
55555688 "gridPos": {
55565689 "h": 9,
55575690 "w": 12,
55585691 "x": 12,
5559 "y": 84
5692 "y": 139
55605693 },
55615694 "heatmap": {},
55625695 "hideZeroBuckets": true,
55665699 "show": true
55675700 },
55685701 "links": [],
5569 "options": {},
55705702 "reverseYBuckets": false,
55715703 "targets": [
55725704 {
56085740 },
56095741 {
56105742 "collapsed": true,
5611 "datasource": null,
5743 "datasource": "${DS_PROMETHEUS}",
56125744 "gridPos": {
56135745 "h": 1,
56145746 "w": 24,
56295761 "h": 7,
56305762 "w": 12,
56315763 "x": 0,
5632 "y": 39
5764 "y": 66
56335765 },
56345766 "hiddenSeries": false,
56355767 "id": 2,
57535885 "h": 7,
57545886 "w": 12,
57555887 "x": 12,
5756 "y": 39
5888 "y": 66
57575889 },
57585890 "hiddenSeries": false,
57595891 "id": 41,
58465978 "h": 7,
58475979 "w": 12,
58485980 "x": 0,
5849 "y": 46
5981 "y": 73
58505982 },
58515983 "hiddenSeries": false,
58525984 "id": 42,
59386070 "h": 7,
59396071 "w": 12,
59406072 "x": 12,
5941 "y": 46
6073 "y": 73
59426074 },
59436075 "hiddenSeries": false,
59446076 "id": 43,
60306162 "h": 7,
60316163 "w": 12,
60326164 "x": 0,
6033 "y": 53
6165 "y": 80
60346166 },
60356167 "hiddenSeries": false,
60366168 "id": 113,
61286260 "h": 7,
61296261 "w": 12,
61306262 "x": 12,
6131 "y": 53
6263 "y": 80
61326264 },
61336265 "hiddenSeries": false,
61346266 "id": 115,
62146346 },
62156347 {
62166348 "collapsed": true,
6217 "datasource": null,
6349 "datasource": "${DS_PROMETHEUS}",
62186350 "gridPos": {
62196351 "h": 1,
62206352 "w": 24,
62356367 "h": 9,
62366368 "w": 12,
62376369 "x": 0,
6238 "y": 58
6370 "y": 40
62396371 },
62406372 "hiddenSeries": false,
62416373 "id": 67,
62666398 "steppedLine": false,
62676399 "targets": [
62686400 {
6269 "expr": " synapse_event_persisted_position{instance=\"$instance\",job=\"synapse\"} - ignoring(index, job, name) group_right() synapse_event_processing_positions{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
6401 "expr": "max(synapse_event_persisted_position{instance=\"$instance\"}) - ignoring(instance,index, job, name) group_right() synapse_event_processing_positions{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
62706402 "format": "time_series",
62716403 "interval": "",
62726404 "intervalFactor": 1,
63276459 "h": 9,
63286460 "w": 12,
63296461 "x": 12,
6330 "y": 58
6462 "y": 40
63316463 },
63326464 "hiddenSeries": false,
63336465 "id": 71,
63616493 "expr": "time()*1000-synapse_event_processing_last_ts{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
63626494 "format": "time_series",
63636495 "hide": false,
6496 "interval": "",
63646497 "intervalFactor": 1,
63656498 "legendFormat": "{{job}}-{{index}} {{name}}",
63666499 "refId": "B"
64196552 "h": 9,
64206553 "w": 12,
64216554 "x": 0,
6422 "y": 67
6555 "y": 49
64236556 },
64246557 "hiddenSeries": false,
64256558 "id": 121,
65086641 },
65096642 {
65106643 "collapsed": true,
6511 "datasource": null,
6644 "datasource": "${DS_PROMETHEUS}",
65126645 "gridPos": {
65136646 "h": 1,
65146647 "w": 24,
65386671 "h": 8,
65396672 "w": 12,
65406673 "x": 0,
6541 "y": 41
6674 "y": 86
65426675 },
65436676 "heatmap": {},
65446677 "hideZeroBuckets": true,
65486681 "show": true
65496682 },
65506683 "links": [],
6551 "options": {},
65526684 "reverseYBuckets": false,
65536685 "targets": [
65546686 {
65986730 "h": 8,
65996731 "w": 12,
66006732 "x": 12,
6601 "y": 41
6733 "y": 86
66026734 },
66036735 "hiddenSeries": false,
66046736 "id": 124,
66996831 "h": 8,
67006832 "w": 12,
67016833 "x": 0,
6702 "y": 49
6834 "y": 94
67036835 },
67046836 "heatmap": {},
67056837 "hideZeroBuckets": true,
67096841 "show": true
67106842 },
67116843 "links": [],
6712 "options": {},
67136844 "reverseYBuckets": false,
67146845 "targets": [
67156846 {
67596890 "h": 8,
67606891 "w": 12,
67616892 "x": 12,
6762 "y": 49
6893 "y": 94
67636894 },
67646895 "hiddenSeries": false,
67656896 "id": 128,
68787009 "h": 8,
68797010 "w": 12,
68807011 "x": 0,
6881 "y": 57
7012 "y": 102
68827013 },
68837014 "heatmap": {},
68847015 "hideZeroBuckets": true,
68887019 "show": true
68897020 },
68907021 "links": [],
6891 "options": {},
68927022 "reverseYBuckets": false,
68937023 "targets": [
68947024 {
69387068 "h": 8,
69397069 "w": 12,
69407070 "x": 12,
6941 "y": 57
7071 "y": 102
69427072 },
69437073 "hiddenSeries": false,
69447074 "id": 130,
70577187 "h": 8,
70587188 "w": 12,
70597189 "x": 0,
7060 "y": 65
7190 "y": 110
70617191 },
70627192 "heatmap": {},
70637193 "hideZeroBuckets": true,
70677197 "show": true
70687198 },
70697199 "links": [],
7070 "options": {},
70717200 "reverseYBuckets": false,
70727201 "targets": [
70737202 {
7074 "expr": "rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0)",
7203 "expr": "rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
70757204 "format": "heatmap",
7205 "interval": "",
70767206 "intervalFactor": 1,
70777207 "legendFormat": "{{le}}",
70787208 "refId": "A"
71177247 "h": 8,
71187248 "w": 12,
71197249 "x": 12,
7120 "y": 65
7250 "y": 110
71217251 },
71227252 "hiddenSeries": false,
71237253 "id": 132,
71487278 "steppedLine": false,
71497279 "targets": [
71507280 {
7151 "expr": "histogram_quantile(0.5, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0)) ",
7152 "format": "time_series",
7281 "expr": "histogram_quantile(0.5, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
7282 "format": "time_series",
7283 "interval": "",
71537284 "intervalFactor": 1,
71547285 "legendFormat": "50%",
71557286 "refId": "A"
71567287 },
71577288 {
7158 "expr": "histogram_quantile(0.75, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
7159 "format": "time_series",
7289 "expr": "histogram_quantile(0.75, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
7290 "format": "time_series",
7291 "interval": "",
71607292 "intervalFactor": 1,
71617293 "legendFormat": "75%",
71627294 "refId": "B"
71637295 },
71647296 {
7165 "expr": "histogram_quantile(0.90, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
7166 "format": "time_series",
7297 "expr": "histogram_quantile(0.90, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
7298 "format": "time_series",
7299 "interval": "",
71677300 "intervalFactor": 1,
71687301 "legendFormat": "90%",
71697302 "refId": "C"
71707303 },
71717304 {
7172 "expr": "histogram_quantile(0.99, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
7173 "format": "time_series",
7305 "expr": "histogram_quantile(0.99, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
7306 "format": "time_series",
7307 "interval": "",
71747308 "intervalFactor": 1,
71757309 "legendFormat": "99%",
71767310 "refId": "D"
71807314 "timeFrom": null,
71817315 "timeRegions": [],
71827316 "timeShift": null,
7183 "title": "Number of state resolution performed, by number of state groups involved (quantiles)",
7317 "title": "Number of state resolutions performed, by number of state groups involved (quantiles)",
71847318 "tooltip": {
71857319 "shared": true,
71867320 "sort": 0,
72327366 "list": [
72337367 {
72347368 "current": {
7369 "selected": false,
72357370 "text": "Prometheus",
72367371 "value": "Prometheus"
72377372 },
73087443 },
73097444 {
73107445 "allValue": null,
7311 "current": {
7312 "text": "matrix.org",
7313 "value": "matrix.org"
7314 },
7446 "current": {},
73157447 "datasource": "$datasource",
73167448 "definition": "",
73177449 "hide": 0,
73187450 "includeAll": false,
7451 "index": -1,
73197452 "label": null,
73207453 "multi": false,
73217454 "name": "instance",
73347467 {
73357468 "allFormat": "regex wildcard",
73367469 "allValue": "",
7337 "current": {
7338 "text": "synapse",
7339 "value": [
7340 "synapse"
7341 ]
7342 },
7470 "current": {},
73437471 "datasource": "$datasource",
73447472 "definition": "",
73457473 "hide": 0,
73467474 "hideLabel": false,
73477475 "includeAll": true,
7476 "index": -1,
73487477 "label": "Job",
73497478 "multi": true,
73507479 "multiFormat": "regex values",
73657494 {
73667495 "allFormat": "regex wildcard",
73677496 "allValue": ".*",
7368 "current": {
7369 "selected": false,
7370 "text": "All",
7371 "value": "$__all"
7372 },
7497 "current": {},
73737498 "datasource": "$datasource",
73747499 "definition": "",
73757500 "hide": 0,
73767501 "hideLabel": false,
73777502 "includeAll": true,
7503 "index": -1,
73787504 "label": "",
73797505 "multi": true,
73807506 "multiFormat": "regex values",
74277553 "timezone": "",
74287554 "title": "Synapse",
74297555 "uid": "000000012",
7430 "version": 29
7556 "variables": {
7557 "list": []
7558 },
7559 "version": 32
74317560 }
00 from __future__ import print_function
1
2 import argparse
3 import cgi
4 import datetime
5 import json
6
7 import pydot
8 import urllib2
19
210 # Copyright 2014-2016 OpenMarket Ltd
311 #
1422 # limitations under the License.
1523
1624
17 import sqlite3
18 import pydot
19 import cgi
20 import json
21 import datetime
22 import argparse
23 import urllib2
24
25
2625 def make_name(pdu_id, origin):
2726 return "%s@%s" % (pdu_id, origin)
2827
3231 node_map = {}
3332
3433 origins = set()
35 colors = set(("red", "green", "blue", "yellow", "purple"))
34 colors = {"red", "green", "blue", "yellow", "purple"}
3635
3736 for pdu in pdus:
3837 origins.add(pdu.get("origin"))
4847 try:
4948 c = colors.pop()
5049 color_map[o] = c
51 except:
50 except Exception:
5251 print("Run out of colours!")
5352 color_map[o] = "black"
5453
1212 # limitations under the License.
1313
1414
15 import argparse
16 import cgi
17 import datetime
18 import json
1519 import sqlite3
20
1621 import pydot
17 import cgi
18 import json
19 import datetime
20 import argparse
2122
2223 from synapse.events import FrozenEvent
2324 from synapse.util.frozenutils import unfreeze
9798 for prev_id, _ in event.prev_events:
9899 try:
99100 end_node = node_map[prev_id]
100 except:
101 except Exception:
101102 end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
102103
103104 node_map[prev_id] = end_node
00 from __future__ import print_function
1
2 import argparse
3 import cgi
4 import datetime
5
6 import pydot
7 import simplejson as json
8
9 from synapse.events import FrozenEvent
10 from synapse.util.frozenutils import unfreeze
111
212 # Copyright 2016 OpenMarket Ltd
313 #
1222 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1323 # See the License for the specific language governing permissions and
1424 # limitations under the License.
15
16
17 import pydot
18 import cgi
19 import simplejson as json
20 import datetime
21 import argparse
22
23 from synapse.events import FrozenEvent
24 from synapse.util.frozenutils import unfreeze
2525
2626
2727 def make_graph(file_name, room_id, file_prefix, limit):
105105 for prev_id, _ in event.prev_events:
106106 try:
107107 end_node = node_map[prev_id]
108 except:
108 except Exception:
109109 end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
110110
111111 node_map[prev_id] = end_node
1111 """
1212 from __future__ import print_function
1313
14 import json
15 import subprocess
16 import time
17
1418 import gevent
1519 import grequests
1620 from BeautifulSoup import BeautifulSoup
17 import json
18 import urllib
19 import subprocess
20 import time
21
22 # ACCESS_TOKEN="" #
21
22 ACCESS_TOKEN = ""
2323
2424 MATRIXBASE = "https://matrix.org/_matrix/client/api/v1/"
2525 MYUSERNAME = "@davetest:matrix.org"
00 #!/usr/bin/env python
11 from __future__ import print_function
2 from argparse import ArgumentParser
2
33 import json
4 import requests
54 import sys
65 import urllib
6 from argparse import ArgumentParser
7
8 import requests
79
810 try:
911 raw_input
1515 ###
1616 ### Stage 0: builder
1717 ###
18 FROM docker.io/python:${PYTHON_VERSION}-alpine3.11 as builder
18 FROM docker.io/python:${PYTHON_VERSION}-slim as builder
1919
2020 # install the OS build deps
2121
22 RUN apk add \
23 build-base \
24 libffi-dev \
25 libjpeg-turbo-dev \
26 libwebp-dev \
27 libressl-dev \
28 libxslt-dev \
29 linux-headers \
30 postgresql-dev \
31 zlib-dev
3222
33 # build things which have slow build steps, before we copy synapse, so that
34 # the layer can be cached.
35 #
36 # (we really just care about caching a wheel here, as the "pip install" below
37 # will install them again.)
23 RUN apt-get update && apt-get install -y \
24 build-essential \
25 libpq-dev \
26 && rm -rf /var/lib/apt/lists/*
3827
28 # Build dependencies that are not available as wheels, to speed up rebuilds
3929 RUN pip install --prefix="/install" --no-warn-script-location \
40 cryptography \
41 msgpack-python \
42 pillow \
43 pynacl
30 frozendict \
31 jaeger-client \
32 opentracing \
33 prometheus-client \
34 psycopg2 \
35 pycparser \
36 pyrsistent \
37 pyyaml \
38 simplejson \
39 threadloop \
40 thrift
4441
4542 # now install synapse and all of the python deps to /install.
46
4743 COPY synapse /synapse/synapse/
4844 COPY scripts /synapse/scripts/
4945 COPY MANIFEST.in README.rst setup.py synctl /synapse/
5551 ### Stage 1: runtime
5652 ###
5753
58 FROM docker.io/python:${PYTHON_VERSION}-alpine3.11
54 FROM docker.io/python:${PYTHON_VERSION}-slim
5955
60 # xmlsec is required for saml support
61 RUN apk add --no-cache --virtual .runtime_deps \
62 libffi \
63 libjpeg-turbo \
64 libwebp \
65 libressl \
66 libxslt \
67 libpq \
68 zlib \
69 su-exec \
70 tzdata \
71 xmlsec
56 RUN apt-get update && apt-get install -y \
57 libpq5 \
58 xmlsec1 \
59 gosu \
60 && rm -rf /var/lib/apt/lists/*
7261
7362 COPY --from=builder /install /usr/local
7463 COPY ./docker/start.py /start.py
9393 * `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
9494 * `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
9595
96 ## Generating an (admin) user
97
98 After synapse is running, you may wish to create a user via `register_new_matrix_user`.
99
100 This requires a `registration_shared_secret` to be set in your config file. Synapse
101 must be restarted to pick up this change.
102
103 You can then call the script:
104
105 ```
106 docker exec -it synapse register_new_matrix_user http://localhost:8008 -c /data/homeserver.yaml --help
107 ```
108
109 Remember to remove the `registration_shared_secret` and restart if you no longer need it.
110
96111 ## TLS support
97112
98113 The default configuration exposes a single HTTP port: http://localhost:8008. It
119119
120120 if ownership is not None:
121121 subprocess.check_output(["chown", "-R", ownership, "/data"])
122 args = ["su-exec", ownership] + args
122 args = ["gosu", ownership] + args
123123
124124 subprocess.check_output(args)
125125
171171 # make sure that synapse has perms to write to the data dir.
172172 subprocess.check_output(["chown", ownership, data_dir])
173173
174 args = ["su-exec", ownership] + args
175 os.execv("/sbin/su-exec", args)
174 args = ["gosu", ownership] + args
175 os.execv("/usr/sbin/gosu", args)
176176 else:
177177 os.execv("/usr/local/bin/python", args)
178178
188188 ownership = "{}:{}".format(desired_uid, desired_gid)
189189
190190 if ownership is None:
191 log("Will not perform chmod/su-exec as UserID already matches request")
191 log("Will not perform chmod/gosu as UserID already matches request")
192192
193193 # In generate mode, generate a configuration and missing keys, then exit
194194 if mode == "generate":
235235
236236 args = ["python", "-m", synapse_worker, "--config-path", config_path]
237237 if ownership is not None:
238 args = ["su-exec", ownership] + args
239 os.execv("/sbin/su-exec", args)
238 args = ["gosu", ownership] + args
239 os.execv("/usr/sbin/gosu", args)
240240 else:
241241 os.execv("/usr/local/bin/python", args)
242242
1111 In [March 2019](https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430),
1212 Let's Encrypt announced that they were deprecating version 1 of the ACME
1313 protocol, with the plan to disable the use of it for new accounts in
14 November 2019, and for existing accounts in June 2020.
14 November 2019, for new domains in June 2020, and for existing accounts and
15 domains in June 2021.
1516
1617 Synapse doesn't currently support version 2 of the ACME protocol, which
1718 means that:
1819
1920 * for existing installs, Synapse's built-in ACME support will continue
20 to work until June 2020.
21 to work until June 2021.
2122 * for new installs, this feature will not work at all.
2223
2324 Either way, it is recommended to move from Synapse's ACME support
33 This API will remove all trace of a room from your database.
44
55 All local users must have left the room before it can be removed.
6
7 See also: [Delete Room API](rooms.md#delete-room-api)
68
79 The API is:
810
317317 "state_events": 93534
318318 }
319319 ```
320
321 # Room Members API
322
323 The Room Members admin API allows server admins to get a list of all members of a room.
324
325 The response includes the following fields:
326
327 * `members` - A list of all the members that are present in the room, represented by their ids.
328 * `total` - Total number of members in the room.
329
330 ## Usage
331
332 A standard request:
333
334 ```
335 GET /_synapse/admin/v1/rooms/<room_id>/members
336
337 {}
338 ```
339
340 Response:
341
342 ```
343 {
344 "members": [
345 "@foo:matrix.org",
346 "@bar:matrix.org",
347 "@foobar:matrix.org
348 ],
349 "total": 3
350 }
351 ```
352
353 # Delete Room API
354
355 The Delete Room admin API allows server admins to remove rooms from the server
356 and block these rooms.
357 It is a combination and improvement of the "[Shutdown room](shutdown_room.md)"
358 and "[Purge room](purge_room.md)" APIs.
359
360 Shuts down a room. Moves all local users and room aliases automatically to a
361 new room if `new_room_user_id` is set. Otherwise local users only
362 leave the room without any information.
363
364 The new room will be created with the user specified by the `new_room_user_id` parameter
365 as room administrator and will contain a message explaining what happened. Users invited
366 to the new room will have power level `-10` by default, and thus be unable to speak.
367
368 If `block` is `true`, it prevents new joins to the old room.
369
370 This API will remove all trace of the old room from your database after removing
371 all local users.
372 Depending on the amount of history being purged a call to the API may take
373 several minutes or longer.
374
375 The local server will only have the power to move local users and room aliases to
376 the new room. Users on other servers will be unaffected.
377
378 The API is:
379
380 ```
381 POST /_synapse/admin/v1/rooms/<room_id>/delete
382 ```
383
384 with a body of:
385 ```json
386 {
387 "new_room_user_id": "@someuser:example.com",
388 "room_name": "Content Violation Notification",
389 "message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
390 "block": true
391 }
392 ```
393
394 To use it, you will need to authenticate by providing an ``access_token`` for a
395 server admin: see [README.rst](README.rst).
396
397 A response body like the following is returned:
398
399 ```json
400 {
401 "kicked_users": [
402 "@foobar:example.com"
403 ],
404 "failed_to_kick_users": [],
405 "local_aliases": [
406 "#badroom:example.com",
407 "#evilsaloon:example.com"
408 ],
409 "new_room_id": "!newroomid:example.com"
410 }
411 ```
412
413 ## Parameters
414
415 The following parameters should be set in the URL:
416
417 * `room_id` - The ID of the room.
418
419 The following JSON body parameters are available:
420
421 * `new_room_user_id` - Optional. If set, a new room will be created with this user ID
422 as the creator and admin, and all users in the old room will be moved into that
423 room. If not set, no new room will be created and the users will just be removed
424 from the old room. The user ID must be on the local server, but does not necessarily
425 have to belong to a registered user.
426 * `room_name` - Optional. A string representing the name of the room that new users will be
427 invited to. Defaults to `Content Violation Notification`
428 * `message` - Optional. A string containing the first message that will be sent as
429 `new_room_user_id` in the new room. Ideally this will clearly convey why the
430 original room was shut down. Defaults to `Sharing illegal content on this server
431 is not permitted and rooms in violation will be blocked.`
432 * `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing future attempts to
433 join the room. Defaults to `false`.
434
435 The JSON body must not be empty. The body must be at least `{}`.
436
437 ## Response
438
439 The following fields are returned in the JSON response body:
440
441 * `kicked_users` - An array of users (`user_id`) that were kicked.
442 * `failed_to_kick_users` - An array of users (`user_id`) that were not kicked.
443 * `local_aliases` - An array of strings representing the local aliases that were migrated from
444 the old room to the new.
445 * `new_room_id` - A string representing the room ID of the new room.
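
As a minimal sketch (again, not part of the upstream docs), the Delete Room API could be called from Python with `requests`; all identifiers and the token are placeholders:

```python
# Illustrative only: shut down and block a room via the Delete Room admin API.
# All identifiers and the access token are placeholders.
from urllib.parse import quote

import requests

HOMESERVER = "https://matrix.example.com"
ROOM_ID = "!badroom:example.com"
ADMIN_TOKEN = "ADMIN_ACCESS_TOKEN"  # an access token for a server admin

resp = requests.post(
    f"{HOMESERVER}/_synapse/admin/v1/rooms/{quote(ROOM_ID)}/delete",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={  # the body must be valid JSON, at the very least {}
        "new_room_user_id": "@abuse:example.com",
        "room_name": "Content Violation Notification",
        "message": "This room has been shut down for violating our Terms of Service.",
        "block": True,
    },
)
resp.raise_for_status()
print("users moved to", resp.json()["new_room_id"])
```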
88
99 The local server will only have the power to move local users and room aliases to
1010 the new room. Users on other servers will be unaffected.
11
12 See also: [Delete Room API](rooms.md#delete-room-api)
1113
1214 ## API
1315
9090
9191 - ``admin``, optional, defaults to ``false``.
9292
93 - ``deactivated``, optional, defaults to ``false``.
93 - ``deactivated``, optional. If unspecified, deactivation state will be left
94 unchanged on existing accounts and set to ``false`` for new accounts.
9495
9596 If the user already exists then optional parameters default to the current value.
97
98 In order to re-activate an account, ``deactivated`` must be set to ``false``. If
99 users do not log in via single sign-on, a new ``password`` must be provided.
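
For example, an illustrative sketch (assuming the ``PUT /_synapse/admin/v2/users/<user_id>`` endpoint described earlier in this file; the homeserver URL, user ID, token and password are placeholders):

```python
# Illustrative only: re-activate a deactivated account via the
# "Create or modify Account" endpoint (PUT /_synapse/admin/v2/users/<user_id>).
from urllib.parse import quote

import requests

HOMESERVER = "https://matrix.example.com"
USER_ID = "@alice:example.com"
ADMIN_TOKEN = "ADMIN_ACCESS_TOKEN"  # an access token for a server admin

resp = requests.put(
    f"{HOMESERVER}/_synapse/admin/v2/users/{quote(USER_ID)}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    # A new password is needed unless the user logs in via single sign-on.
    json={"deactivated": False, "password": "a-new-strong-password"},
)
resp.raise_for_status()
```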
96100
97101 List Accounts
98102 =============
1919 Note that the login type of `m.login.jwt` is supported, but is deprecated. This
2020 will be removed in a future version of Synapse.
2121
22 The `jwt` should encode the local part of the user ID as the standard `sub`
23 claim. In the case that the token is not valid, the homeserver must respond with
24 `401 Unauthorized` and an error code of `M_UNAUTHORIZED`.
22 The `token` field should include the JSON web token with the following claims:
2523
26 (Note that this differs from the token based logins which return a
27 `403 Forbidden` and an error code of `M_FORBIDDEN` if an error occurs.)
24 * The `sub` (subject) claim is required and should encode the local part of the
25 user ID.
26 * The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
27 claims are optional, but validated if present.
28 * The issuer (`iss`) claim is optional, but is required and validated if an issuer is configured.
29 * The audience (`aud`) claim is optional, but is required and validated if audiences are configured.
30 Providing an audience claim when no audiences are configured will cause validation to fail.
31
32 In the case that the token is not valid, the homeserver must respond with
33 `403 Forbidden` and an error code of `M_FORBIDDEN`.
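
As a minimal sketch (not part of the upstream text; it assumes a pre-shared secret with the HS256 algorithm, matching the example configuration further below), such a token could be generated with PyJWT:

```python
# Minimal sketch: mint a token carrying the claims described above.
# The shared secret, issuer and audience are placeholders and must match
# the homeserver's jwt_config.
import time

import jwt  # PyJWT

now = int(time.time())
token = jwt.encode(
    {
        "sub": "alice",                       # localpart of the user ID (required)
        "iat": now,                           # optional, validated if present
        "exp": now + 3600,                    # optional, validated if present
        "iss": "https://issuer.example.com",  # only if an issuer is configured
        "aud": "synapse",                     # only if audiences are configured
    },
    "my-shared-secret",
    algorithm="HS256",
)
```

The resulting string is then submitted as the `token` field of the `/login` request.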
2834
2935 As with other login types, there are additional fields (e.g. `device_id` and
3036 `initial_device_display_name`) which can be included in the above request.
5460 Although JSON Web Tokens are typically generated from an external server, the
5561 examples below use [PyJWT](https://pyjwt.readthedocs.io/en/latest/) directly.
5662
57 1. Configure Synapse with JWT logins:
63 1. Configure Synapse with JWT logins. Note that this example uses a pre-shared
64 secret and the HS256 algorithm:
5865
5966 ```yaml
6067 jwt_config:
1818
1919 Password auth provider classes must provide the following methods:
2020
21 *class* `SomeProvider.parse_config`(*config*)
21 * `parse_config(config)`
22 This method is passed the `config` object for this module from the
23 homeserver configuration file.
2224
23 > This method is passed the `config` object for this module from the
24 > homeserver configuration file.
25 >
26 > It should perform any appropriate sanity checks on the provided
27 > configuration, and return an object which is then passed into
28 > `__init__`.
25 It should perform any appropriate sanity checks on the provided
26 configuration, and return an object which is then passed into `__init__`.
2927
30 *class* `SomeProvider`(*config*, *account_handler*)
28 This method should be decorated with `@staticmethod`.
3129
32 > The constructor is passed the config object returned by
33 > `parse_config`, and a `synapse.module_api.ModuleApi` object which
34 > allows the password provider to check if accounts exist and/or create
35 > new ones.
30 * `__init__(self, config, account_handler)`
31
32 The constructor is passed the config object returned by
33 `parse_config`, and a `synapse.module_api.ModuleApi` object which
34 allows the password provider to check if accounts exist and/or create
35 new ones.
3636
3737 ## Optional methods
3838
39 Password auth provider classes may optionally provide the following
40 methods.
39 Password auth provider classes may optionally provide the following methods:
4140
42 *class* `SomeProvider.get_db_schema_files`()
41 * `get_db_schema_files(self)`
4342
44 > This method, if implemented, should return an Iterable of
45 > `(name, stream)` pairs of database schema files. Each file is applied
46 > in turn at initialisation, and a record is then made in the database
47 > so that it is not re-applied on the next start.
43 This method, if implemented, should return an Iterable of
44 `(name, stream)` pairs of database schema files. Each file is applied
45 in turn at initialisation, and a record is then made in the database
46 so that it is not re-applied on the next start.
4847
49 `someprovider.get_supported_login_types`()
48 * `get_supported_login_types(self)`
5049
51 > This method, if implemented, should return a `dict` mapping from a
52 > login type identifier (such as `m.login.password`) to an iterable
53 > giving the fields which must be provided by the user in the submission
54 > to the `/login` api. These fields are passed in the `login_dict`
55 > dictionary to `check_auth`.
56 >
57 > For example, if a password auth provider wants to implement a custom
58 > login type of `com.example.custom_login`, where the client is expected
59 > to pass the fields `secret1` and `secret2`, the provider should
60 > implement this method and return the following dict:
61 >
62 > {"com.example.custom_login": ("secret1", "secret2")}
50 This method, if implemented, should return a `dict` mapping from a
51 login type identifier (such as `m.login.password`) to an iterable
52 giving the fields which must be provided by the user in the submission
53 to [the `/login` API](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login).
54 These fields are passed in the `login_dict` dictionary to `check_auth`.
6355
64 `someprovider.check_auth`(*username*, *login_type*, *login_dict*)
56 For example, if a password auth provider wants to implement a custom
57 login type of `com.example.custom_login`, where the client is expected
58 to pass the fields `secret1` and `secret2`, the provider should
59 implement this method and return the following dict:
6560
66 > This method is the one that does the real work. If implemented, it
67 > will be called for each login attempt where the login type matches one
68 > of the keys returned by `get_supported_login_types`.
69 >
70 > It is passed the (possibly UNqualified) `user` provided by the client,
71 > the login type, and a dictionary of login secrets passed by the
72 > client.
73 >
74 > The method should return a Twisted `Deferred` object, which resolves
75 > to the canonical `@localpart:domain` user id if authentication is
76 > successful, and `None` if not.
77 >
78 > Alternatively, the `Deferred` can resolve to a `(str, func)` tuple, in
79 > which case the second field is a callback which will be called with
80 > the result from the `/login` call (including `access_token`,
81 > `device_id`, etc.)
61 ```python
62 {"com.example.custom_login": ("secret1", "secret2")}
63 ```
8264
83 `someprovider.check_3pid_auth`(*medium*, *address*, *password*)
65 * `check_auth(self, username, login_type, login_dict)`
8466
85 > This method, if implemented, is called when a user attempts to
86 > register or log in with a third party identifier, such as email. It is
87 > passed the medium (ex. "email"), an address (ex.
88 > "<jdoe@example.com>") and the user's password.
89 >
90 > The method should return a Twisted `Deferred` object, which resolves
91 > to a `str` containing the user's (canonical) User ID if
92 > authentication was successful, and `None` if not.
93 >
94 > As with `check_auth`, the `Deferred` may alternatively resolve to a
95 > `(user_id, callback)` tuple.
67 This method does the real work. If implemented, it
68 will be called for each login attempt where the login type matches one
69 of the keys returned by `get_supported_login_types`.
9670
97 `someprovider.check_password`(*user_id*, *password*)
71 It is passed the (possibly unqualified) `user` field provided by the client,
72 the login type, and a dictionary of login secrets passed by the
73 client.
9874
99 > This method provides a simpler interface than
100 > `get_supported_login_types` and `check_auth` for password auth
101 > providers that just want to provide a mechanism for validating
102 > `m.login.password` logins.
103 >
104 > Iif implemented, it will be called to check logins with an
105 > `m.login.password` login type. It is passed a qualified
106 > `@localpart:domain` user id, and the password provided by the user.
107 >
108 > The method should return a Twisted `Deferred` object, which resolves
109 > to `True` if authentication is successful, and `False` if not.
75 The method should return an `Awaitable` object, which resolves
76 to the canonical `@localpart:domain` user ID if authentication is
77 successful, and `None` if not.
11078
111 `someprovider.on_logged_out`(*user_id*, *device_id*, *access_token*)
79 Alternatively, the `Awaitable` can resolve to a `(str, func)` tuple, in
80 which case the second field is a callback which will be called with
81 the result from the `/login` call (including `access_token`,
82 `device_id`, etc.)
11283
113 > This method, if implemented, is called when a user logs out. It is
114 > passed the qualified user ID, the ID of the deactivated device (if
115 > any: access tokens are occasionally created without an associated
116 > device ID), and the (now deactivated) access token.
117 >
118 > It may return a Twisted `Deferred` object; the logout request will
119 > wait for the deferred to complete but the result is ignored.
84 * `check_3pid_auth(self, medium, address, password)`
85
86 This method, if implemented, is called when a user attempts to
87 register or log in with a third party identifier, such as email. It is
88 passed the medium (ex. "email"), an address (ex.
89 "<jdoe@example.com>") and the user's password.
90
91 The method should return an `Awaitable` object, which resolves
92 to a `str` containing the user's (canonical) user ID if
93 authentication was successful, and `None` if not.
94
95 As with `check_auth`, the `Awaitable` may alternatively resolve to a
96 `(user_id, callback)` tuple.
97
98 * `check_password(self, user_id, password)`
99
100 This method provides a simpler interface than
101 `get_supported_login_types` and `check_auth` for password auth
102 providers that just want to provide a mechanism for validating
103 `m.login.password` logins.
104
105 If implemented, it will be called to check logins with an
106 `m.login.password` login type. It is passed a qualified
107 `@localpart:domain` user id, and the password provided by the user.
108
109 The method should return an `Awaitable` object, which resolves
110 to `True` if authentication is successful, and `False` if not.
111
112 * `on_logged_out(self, user_id, device_id, access_token)`
113
114 This method, if implemented, is called when a user logs out. It is
115 passed the qualified user ID, the ID of the deactivated device (if
116 any: access tokens are occasionally created without an associated
117 device ID), and the (now deactivated) access token.
118
119 It may return an `Awaitable` object; the logout request will
120 wait for the `Awaitable` to complete, but the result is ignored.
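
Putting the interface above together, a minimal provider might look like the following sketch; the class name, the `api_url` config key and the external check are purely illustrative:

```python
# Minimal sketch of a password auth provider implementing the interface
# described above. The class name, config key and checking logic are
# illustrative, not part of Synapse itself.
class ExamplePasswordProvider:
    def __init__(self, config, account_handler):
        # `config` is whatever parse_config returned; `account_handler` is a
        # synapse.module_api.ModuleApi instance.
        self.api_url = config["api_url"]
        self.account_handler = account_handler

    @staticmethod
    def parse_config(config):
        # Sanity-check this module's section of the homeserver config and
        # return the object that will be passed into __init__.
        if "api_url" not in config:
            raise Exception("api_url is required")
        return config

    async def check_password(self, user_id, password):
        # Called for m.login.password logins with a qualified
        # @localpart:domain user ID; return True on success, False otherwise.
        return await self._check_with_external_service(user_id, password)

    async def on_logged_out(self, user_id, device_id, access_token):
        # Optional hook called on logout; the result is ignored.
        pass

    async def _check_with_external_service(self, user_id, password):
        # Placeholder for a call to some external authentication backend.
        return False
```

A module like this would then be enabled via the `password_providers` section of `homeserver.yaml`.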
3737 server {
3838 listen 443 ssl;
3939 listen [::]:443 ssl;
40
41 # For the federation port
42 listen 8448 ssl default_server;
43 listen [::]:8448 ssl default_server;
44
4045 server_name matrix.example.com;
4146
4247 location /_matrix {
4550 # Nginx by default only allows file uploads up to 1M in size
4651 # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
4752 client_max_body_size 10M;
48 }
49 }
50
51 server {
52 listen 8448 ssl default_server;
53 listen [::]:8448 ssl default_server;
54 server_name example.com;
55
56 location / {
57 proxy_pass http://localhost:8008;
58 proxy_set_header X-Forwarded-For $remote_addr;
5953 }
6054 }
6155 ```
101101 #gc_thresholds: [700, 10, 10]
102102
103103 # Set the limit on the returned events in the timeline in the get
104 # and sync operations. The default value is -1, means no upper limit.
104 # and sync operations. The default value is 100. -1 means no upper limit.
105 #
106 # Uncomment the following to increase the limit to 5000.
105107 #
106108 #filter_timeline_limit: 5000
107109
116118 # will receive errors when searching for messages. Defaults to enabled.
117119 #
118120 #enable_search: false
121
122 # List of ports that Synapse should listen on, their purpose and their
123 # configuration.
124 #
125 # Options for each listener include:
126 #
127 # port: the TCP port to bind to
128 #
129 # bind_addresses: a list of local addresses to listen on. The default is
130 # 'all local interfaces'.
131 #
132 # type: the type of listener. Normally 'http', but other valid options are:
133 # 'manhole' (see docs/manhole.md),
134 # 'metrics' (see docs/metrics-howto.md),
135 # 'replication' (see docs/workers.md).
136 #
137 # tls: set to true to enable TLS for this listener. Will use the TLS
138 # key/cert specified in tls_private_key_path / tls_certificate_path.
139 #
140 # x_forwarded: Only valid for an 'http' listener. Set to true to use the
141 # X-Forwarded-For header as the client IP. Useful when Synapse is
142 # behind a reverse-proxy.
143 #
144 # resources: Only valid for an 'http' listener. A list of resources to host
145 # on this port. Options for each resource are:
146 #
147 # names: a list of names of HTTP resources. See below for a list of
148 # valid resource names.
149 #
150 # compress: set to true to enable HTTP compression for this resource.
151 #
152 # additional_resources: Only valid for an 'http' listener. A map of
153 # additional endpoints which should be loaded via dynamic modules.
154 #
155 # Valid resource names are:
156 #
157 # client: the client-server API (/_matrix/client), and the synapse admin
158 # API (/_synapse/admin). Also implies 'media' and 'static'.
159 #
160 # consent: user consent forms (/_matrix/consent). See
161 # docs/consent_tracking.md.
162 #
163 # federation: the server-server API (/_matrix/federation). Also implies
164 # 'media', 'keys', 'openid'
165 #
166 # keys: the key discovery API (/_matrix/keys).
167 #
168 # media: the media API (/_matrix/media).
169 #
170 # metrics: the metrics interface. See docs/metrics-howto.md.
171 #
172 # openid: OpenID authentication.
173 #
174 # replication: the HTTP replication API (/_synapse/replication). See
175 # docs/workers.md.
176 #
177 # static: static resources under synapse/static (/_matrix/static). (Mostly
178 # useful for 'fallback authentication'.)
179 #
180 # webclient: A web client. Requires web_client_location to be set.
181 #
182 listeners:
183 # TLS-enabled listener: for when matrix traffic is sent directly to synapse.
184 #
185 # Disabled by default. To enable it, uncomment the following. (Note that you
186 # will also need to give Synapse a TLS key and certificate: see the TLS section
187 # below.)
188 #
189 #- port: 8448
190 # type: http
191 # tls: true
192 # resources:
193 # - names: [client, federation]
194
195 # Unsecure HTTP listener: for when matrix traffic passes through a reverse proxy
196 # that unwraps TLS.
197 #
198 # If you plan to use a reverse proxy, please see
199 # https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
200 #
201 - port: 8008
202 tls: false
203 type: http
204 x_forwarded: true
205 bind_addresses: ['::1', '127.0.0.1']
206
207 resources:
208 - names: [client, federation]
209 compress: false
210
211 # example additional_resources:
212 #
213 #additional_resources:
214 # "/_matrix/my/custom/endpoint":
215 # module: my_module.CustomRequestHandler
216 # config: {}
217
218 # Turn on the twisted ssh manhole service on localhost on the given
219 # port.
220 #
221 #- port: 9000
222 # bind_addresses: ['::1', '127.0.0.1']
223 # type: manhole
224
225 # Forward extremities can build up in a room due to networking delays between
226 # homeservers. Once this happens in a large room, calculation of the state of
227 # that room can become quite expensive. To mitigate this, once the number of
228 # forward extremities reaches a given threshold, Synapse will send an
229 # org.matrix.dummy_event event, which will reduce the forward extremities
230 # in the room.
231 #
232 # This setting defines the threshold (i.e. number of forward extremities in the
233 # room) at which dummy events are sent. The default value is 10.
234 #
235 #dummy_events_threshold: 5
236
237
238 ## Homeserver blocking ##
239
240 # How to reach the server admin, used in ResourceLimitError
241 #
242 #admin_contact: 'mailto:admin@server.com'
243
244 # Global blocking
245 #
246 #hs_disabled: false
247 #hs_disabled_message: 'Human readable reason for why the HS is blocked'
248
249 # Monthly Active User Blocking
250 #
251 # Used in cases where the admin or server owner wants to limit to the
252 # number of monthly active users.
253 #
254 # 'limit_usage_by_mau' disables/enables monthly active user blocking. When
255 # enabled and a limit is reached the server returns a 'ResourceLimitError'
256 # with error type Codes.RESOURCE_LIMIT_EXCEEDED
257 #
258 # 'max_mau_value' is the hard limit of monthly active users above which
259 # the server will start blocking user actions.
260 #
261 # 'mau_trial_days' is a means to add a grace period for active users. It
262 # means that users must be active for this number of days before they
263 # can be considered active and guards against the case where lots of users
264 # sign up in a short space of time never to return after their initial
265 # session.
266 #
267 # 'mau_limit_alerting' is a means of limiting client side alerting
268 # should the mau limit be reached. This is useful for small instances
269 # where the admin has 5 mau seats (say) for 5 specific people and no
270 # interest in increasing the mau limit further. Defaults to True, which
271 # means that alerting is enabled
272 #
273 #limit_usage_by_mau: false
274 #max_mau_value: 50
275 #mau_trial_days: 2
276 #mau_limit_alerting: false
277
278 # If enabled, the metrics for the number of monthly active users will
279 # be populated, however no one will be limited. If limit_usage_by_mau
280 # is true, this is implied to be true.
281 #
282 #mau_stats_only: false
283
284 # Sometimes the server admin will want to ensure certain accounts are
285 # never blocked by mau checking. These accounts are specified here.
286 #
287 #mau_limit_reserved_threepids:
288 # - medium: 'email'
289 # address: 'reserved_user@example.com'
290
291 # Used by phonehome stats to group together related servers.
292 #server_context: context
293
294 # Resource-constrained homeserver settings
295 #
296 # When this is enabled, the room "complexity" will be checked before a user
297 # joins a new remote room. If it is above the complexity limit, the server will
298 # disallow joining, or will instantly leave.
299 #
300 # Room complexity is an arbitrary measure based on factors such as the number of
301 # users in the room.
302 #
303 limit_remote_rooms:
304 # Uncomment to enable room complexity checking.
305 #
306 #enabled: true
307
308 # the limit above which rooms cannot be joined. The default is 1.0.
309 #
310 #complexity: 0.5
311
312 # override the error which is returned when the room is too complex.
313 #
314 #complexity_error: "This room is too complex."
315
316 # Whether to require a user to be in the room to add an alias to it.
317 # Defaults to 'true'.
318 #
319 #require_membership_for_aliases: false
320
321 # Whether to allow per-room membership profiles, by sending membership
322 # events with profile information that differs from the target's global profile.
323 # Defaults to 'true'.
324 #
325 #allow_per_room_profiles: false
326
327 # How long to keep redacted events in unredacted form in the database. After
328 # this period redacted events get replaced with their redacted form in the DB.
329 #
330 # Defaults to `7d`. Set to `null` to disable.
331 #
332 #redaction_retention_period: 28d
333
334 # How long to track users' last seen time and IPs in the database.
335 #
336 # Defaults to `28d`. Set to `null` to disable clearing out of old rows.
337 #
338 #user_ips_max_age: 14d
339
340 # Message retention policy at the server level.
341 #
342 # Room admins and mods can define a retention period for their rooms using the
343 # 'm.room.retention' state event, and server admins can cap this period by setting
344 # the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
345 #
346 # If this feature is enabled, Synapse will regularly look for and purge events
347 # which are older than the room's maximum retention period. Synapse will also
348 # filter events received over federation so that events that should have been
349 # purged are ignored and not stored again.
350 #
351 retention:
352 # The message retention policies feature is disabled by default. Uncomment the
353 # following line to enable it.
354 #
355 #enabled: true
356
357 # Default retention policy. If set, Synapse will apply it to rooms that lack the
358 # 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
359 # matter much because Synapse doesn't take it into account yet.
360 #
361 #default_policy:
362 # min_lifetime: 1d
363 # max_lifetime: 1y
364
365 # Retention policy limits. If set, a user won't be able to send a
366 # 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
367 # that's not within this range. This is especially useful in closed federations,
368 # in which server admins can make sure every federating server applies the same
369 # rules.
370 #
371 #allowed_lifetime_min: 1d
372 #allowed_lifetime_max: 1y
373
374 # Server admins can define the settings of the background jobs purging the
375 # events whose lifetime has expired under the 'purge_jobs' section.
376 #
377 # If no configuration is provided, a single job will be set up to delete expired
378 # events in every room daily.
379 #
380 # Each job's configuration defines which range of message lifetimes the job
381 # takes care of. For example, if 'shortest_max_lifetime' is '2d' and
382 # 'longest_max_lifetime' is '3d', the job will handle purging expired events in
383 # rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
384 # lower than or equal to 3 days. Both the minimum and the maximum value of a
385 # range are optional, e.g. a job with no 'shortest_max_lifetime' and a
386 # 'longest_max_lifetime' of '3d' will handle every room with a retention policy
387 # whose 'max_lifetime' is lower than or equal to three days.
388 #
389 # The rationale for this per-job configuration is that some rooms might have a
390 # retention policy with a low 'max_lifetime', where history needs to be purged
391 # of outdated messages on a more frequent basis than for the rest of the rooms
392 # (e.g. every 12h), but without that purge being performed by a job that's
393 # iterating over every room it knows, which could be heavy on the server.
394 #
395 #purge_jobs:
396 # - shortest_max_lifetime: 1d
397 # longest_max_lifetime: 3d
398 # interval: 12h
399 # - shortest_max_lifetime: 3d
400 # longest_max_lifetime: 1y
401 # interval: 1d
402
403 # Inhibits the /requestToken endpoints from returning an error that might leak
404 # information about whether an e-mail address is in use or not on this
405 # homeserver.
406 # Note that for some endpoints the error occurs when the e-mail is already in
407 # use, and for others when the e-mail is not in use.
408 # If this option is enabled, instead of returning an error, these endpoints will
409 # act as if no error happened and return a fake session ID ('sid') to clients.
410 #
411 #request_token_inhibit_3pid_errors: true
412
413
414 ## TLS ##
415
416 # PEM-encoded X509 certificate for TLS.
417 # This certificate, as of Synapse 1.0, will need to be a valid and verifiable
418 # certificate, signed by a recognised Certificate Authority.
419 #
420 # See 'ACME support' below to enable auto-provisioning this certificate via
421 # Let's Encrypt.
422 #
423 # If supplying your own, be sure to use a `.pem` file that includes the
424 # full certificate chain including any intermediate certificates (for
425 # instance, if using certbot, use `fullchain.pem` as your certificate,
426 # not `cert.pem`).
427 #
428 #tls_certificate_path: "CONFDIR/SERVERNAME.tls.crt"
429
430 # PEM-encoded private key for TLS
431 #
432 #tls_private_key_path: "CONFDIR/SERVERNAME.tls.key"
433
434 # Whether to verify TLS server certificates for outbound federation requests.
435 #
436 # Defaults to `true`. To disable certificate verification, uncomment the
437 # following line.
438 #
439 #federation_verify_certificates: false
440
441 # The minimum TLS version that will be used for outbound federation requests.
442 #
443 # Defaults to `1`. Configurable to `1`, `1.1`, `1.2`, or `1.3`. Note
444 # that setting this value higher than `1.2` will prevent federation to most
445 # of the public Matrix network: only configure it to `1.3` if you have an
446 # entirely private federation setup and you can ensure TLS 1.3 support.
447 #
448 #federation_client_minimum_tls_version: 1.2
449
450 # Skip federation certificate verification on the following whitelist
451 # of domains.
452 #
453 # This setting should only be used in very specific cases, such as
454 # federation over Tor hidden services and similar. For private networks
455 # of homeservers, you likely want to use a private CA instead.
456 #
457 # Only effective if federation_verify_certificates is `true`.
458 #
459 #federation_certificate_verification_whitelist:
460 # - lon.example.com
461 # - *.domain.com
462 # - *.onion
463
464 # List of custom certificate authorities for federation traffic.
465 #
466 # This setting should only normally be used within a private network of
467 # homeservers.
468 #
469 # Note that this list will replace those that are provided by your
470 # operating environment. Certificates must be in PEM format.
471 #
472 #federation_custom_ca_list:
473 # - myCA1.pem
474 # - myCA2.pem
475 # - myCA3.pem
476
477 # ACME support: This will configure Synapse to request a valid TLS certificate
478 # for your configured `server_name` via Let's Encrypt.
479 #
480 # Note that ACME v1 is now deprecated, and Synapse currently doesn't support
481 # ACME v2. This means that this feature currently won't work with installs set
482 # up after November 2019. For more info, and alternative solutions, see
483 # https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
484 #
485 # Note that provisioning a certificate in this way requires port 80 to be
486 # routed to Synapse so that it can complete the http-01 ACME challenge.
487 # By default, if you enable ACME support, Synapse will attempt to listen on
488 # port 80 for incoming http-01 challenges - however, this will likely fail
489 # with 'Permission denied' or a similar error.
490 #
491 # There are a couple of potential solutions to this:
492 #
493 # * If you already have an Apache, Nginx, or similar listening on port 80,
494 # you can configure Synapse to use an alternate port, and have your web
495 # server forward the requests. For example, assuming you set 'port: 8009'
496 # below, on Apache, you would write:
497 #
498 # ProxyPass /.well-known/acme-challenge http://localhost:8009/.well-known/acme-challenge
499 #
500 # * Alternatively, you can use something like `authbind` to give Synapse
501 # permission to listen on port 80.
502 #
503 acme:
504 # ACME support is disabled by default. Set this to `true` and uncomment
505 # tls_certificate_path and tls_private_key_path above to enable it.
506 #
507 enabled: false
508
509 # Endpoint to use to request certificates. If you only want to test,
510 # use Let's Encrypt's staging url:
511 # https://acme-staging.api.letsencrypt.org/directory
512 #
513 #url: https://acme-v01.api.letsencrypt.org/directory
514
515 # Port number to listen on for the HTTP-01 challenge. Change this if
516 # you are forwarding connections through Apache/Nginx/etc.
517 #
518 port: 80
519
520 # Local addresses to listen on for incoming connections.
521 # Again, you may want to change this if you are forwarding connections
522 # through Apache/Nginx/etc.
523 #
524 bind_addresses: ['::', '0.0.0.0']
525
526 # How many days remaining on a certificate before it is renewed.
527 #
528 reprovision_threshold: 30
529
530 # The domain that the certificate should be for. Normally this
531 # should be the same as your Matrix domain (i.e., 'server_name'), but,
532 # by putting a file at 'https://<server_name>/.well-known/matrix/server',
533 # you can delegate incoming traffic to another server. If you do that,
534 # you should give the target of the delegation here.
535 #
536 # For example: if your 'server_name' is 'example.com', but
537 # 'https://example.com/.well-known/matrix/server' delegates to
538 # 'matrix.example.com', you should put 'matrix.example.com' here.
539 #
540 # If not set, defaults to your 'server_name'.
541 #
542 domain: matrix.example.com
543
544 # file to use for the account key. This will be generated if it doesn't
545 # exist.
546 #
547 # If unspecified, we will use CONFDIR/client.key.
548 #
549 account_key_file: DATADIR/acme_account.key
550
551 # List of allowed TLS fingerprints for this server to publish along
552 # with the signing keys for this server. Other matrix servers that
553 # make HTTPS requests to this server will check that the TLS
554 # certificates returned by this server match one of the fingerprints.
555 #
556 # Synapse automatically adds the fingerprint of its own certificate
557 # to the list. So if federation traffic is handled directly by synapse
558 # then no modification to the list is required.
559 #
560 # If synapse is run behind a load balancer that handles the TLS then it
561 # will be necessary to add the fingerprints of the certificates used by
562 # the loadbalancers to this list if they are different to the one
563 # synapse is using.
564 #
565 # Homeservers are permitted to cache the list of TLS fingerprints
566 # returned in the key responses up to the "valid_until_ts" returned in
567 # key. It may be necessary to publish the fingerprints of a new
568 # certificate and wait until the "valid_until_ts" of the previous key
569 # responses have passed before deploying it.
570 #
571 # You can calculate a fingerprint from a given TLS listener via:
572 # openssl s_client -connect $host:$port < /dev/null 2> /dev/null |
573 # openssl x509 -outform DER | openssl sha256 -binary | base64 | tr -d '='
574 # or by checking matrix.org/federationtester/api/report?server_name=$host
575 #
576 #tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]
577
578
119579
120580 # Restrict federation to the following whitelist of domains.
121581 # N.B. we recommend also firewalling your federation listener to limit
148608 - '::1/128'
149609 - 'fe80::/64'
150610 - 'fc00::/7'
151
152 # List of ports that Synapse should listen on, their purpose and their
153 # configuration.
154 #
155 # Options for each listener include:
156 #
157 # port: the TCP port to bind to
158 #
159 # bind_addresses: a list of local addresses to listen on. The default is
160 # 'all local interfaces'.
161 #
162 # type: the type of listener. Normally 'http', but other valid options are:
163 # 'manhole' (see docs/manhole.md),
164 # 'metrics' (see docs/metrics-howto.md),
165 # 'replication' (see docs/workers.md).
166 #
167 # tls: set to true to enable TLS for this listener. Will use the TLS
168 # key/cert specified in tls_private_key_path / tls_certificate_path.
169 #
170 # x_forwarded: Only valid for an 'http' listener. Set to true to use the
171 # X-Forwarded-For header as the client IP. Useful when Synapse is
172 # behind a reverse-proxy.
173 #
174 # resources: Only valid for an 'http' listener. A list of resources to host
175 # on this port. Options for each resource are:
176 #
177 # names: a list of names of HTTP resources. See below for a list of
178 # valid resource names.
179 #
180 # compress: set to true to enable HTTP comression for this resource.
181 #
182 # additional_resources: Only valid for an 'http' listener. A map of
183 # additional endpoints which should be loaded via dynamic modules.
184 #
185 # Valid resource names are:
186 #
187 # client: the client-server API (/_matrix/client), and the synapse admin
188 # API (/_synapse/admin). Also implies 'media' and 'static'.
189 #
190 # consent: user consent forms (/_matrix/consent). See
191 # docs/consent_tracking.md.
192 #
193 # federation: the server-server API (/_matrix/federation). Also implies
194 # 'media', 'keys', 'openid'
195 #
196 # keys: the key discovery API (/_matrix/keys).
197 #
198 # media: the media API (/_matrix/media).
199 #
200 # metrics: the metrics interface. See docs/metrics-howto.md.
201 #
202 # openid: OpenID authentication.
203 #
204 # replication: the HTTP replication API (/_synapse/replication). See
205 # docs/workers.md.
206 #
207 # static: static resources under synapse/static (/_matrix/static). (Mostly
208 # useful for 'fallback authentication'.)
209 #
210 # webclient: A web client. Requires web_client_location to be set.
211 #
212 listeners:
213 # TLS-enabled listener: for when matrix traffic is sent directly to synapse.
214 #
215 # Disabled by default. To enable it, uncomment the following. (Note that you
216 # will also need to give Synapse a TLS key and certificate: see the TLS section
217 # below.)
218 #
219 #- port: 8448
220 # type: http
221 # tls: true
222 # resources:
223 # - names: [client, federation]
224
225 # Unsecure HTTP listener: for when matrix traffic passes through a reverse proxy
226 # that unwraps TLS.
227 #
228 # If you plan to use a reverse proxy, please see
229 # https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
230 #
231 - port: 8008
232 tls: false
233 type: http
234 x_forwarded: true
235 bind_addresses: ['::1', '127.0.0.1']
236
237 resources:
238 - names: [client, federation]
239 compress: false
240
241 # example additional_resources:
242 #
243 #additional_resources:
244 # "/_matrix/my/custom/endpoint":
245 # module: my_module.CustomRequestHandler
246 # config: {}
247
248 # Turn on the twisted ssh manhole service on localhost on the given
249 # port.
250 #
251 #- port: 9000
252 # bind_addresses: ['::1', '127.0.0.1']
253 # type: manhole
254
255 # Forward extremities can build up in a room due to networking delays between
256 # homeservers. Once this happens in a large room, calculation of the state of
257 # that room can become quite expensive. To mitigate this, once the number of
258 # forward extremities reaches a given threshold, Synapse will send an
259 # org.matrix.dummy_event event, which will reduce the forward extremities
260 # in the room.
261 #
262 # This setting defines the threshold (i.e. number of forward extremities in the
263 # room) at which dummy events are sent. The default value is 10.
264 #
265 #dummy_events_threshold: 5
266
267
268 ## Homeserver blocking ##
269
270 # How to reach the server admin, used in ResourceLimitError
271 #
272 #admin_contact: 'mailto:admin@server.com'
273
274 # Global blocking
275 #
276 #hs_disabled: false
277 #hs_disabled_message: 'Human readable reason for why the HS is blocked'
278
279 # Monthly Active User Blocking
280 #
281 # Used in cases where the admin or server owner wants to limit to the
282 # number of monthly active users.
283 #
284 # 'limit_usage_by_mau' disables/enables monthly active user blocking. When
285 # enabled and a limit is reached the server returns a 'ResourceLimitError'
286 # with error type Codes.RESOURCE_LIMIT_EXCEEDED
287 #
288 # 'max_mau_value' is the hard limit of monthly active users above which
289 # the server will start blocking user actions.
290 #
291 # 'mau_trial_days' is a means to add a grace period for active users. It
292 # means that users must be active for this number of days before they
293 # can be considered active and guards against the case where lots of users
294 # sign up in a short space of time never to return after their initial
295 # session.
296 #
297 # 'mau_limit_alerting' is a means of limiting client side alerting
298 # should the mau limit be reached. This is useful for small instances
299 # where the admin has 5 mau seats (say) for 5 specific people and no
300 # interest increasing the mau limit further. Defaults to True, which
301 # means that alerting is enabled
302 #
303 #limit_usage_by_mau: false
304 #max_mau_value: 50
305 #mau_trial_days: 2
306 #mau_limit_alerting: false
307
308 # If enabled, the metrics for the number of monthly active users will
309 # be populated, however no one will be limited. If limit_usage_by_mau
310 # is true, this is implied to be true.
311 #
312 #mau_stats_only: false
313
314 # Sometimes the server admin will want to ensure certain accounts are
315 # never blocked by mau checking. These accounts are specified here.
316 #
317 #mau_limit_reserved_threepids:
318 # - medium: 'email'
319 # address: 'reserved_user@example.com'
320
321 # Used by phonehome stats to group together related servers.
322 #server_context: context
323
324 # Resource-constrained homeserver settings
325 #
326 # When this is enabled, the room "complexity" will be checked before a user
327 # joins a new remote room. If it is above the complexity limit, the server will
328 # disallow joining, or will instantly leave.
329 #
330 # Room complexity is an arbitrary measure based on factors such as the number of
331 # users in the room.
332 #
333 limit_remote_rooms:
334 # Uncomment to enable room complexity checking.
335 #
336 #enabled: true
337
338 # the limit above which rooms cannot be joined. The default is 1.0.
339 #
340 #complexity: 0.5
341
342 # override the error which is returned when the room is too complex.
343 #
344 #complexity_error: "This room is too complex."
345
346 # Whether to require a user to be in the room to add an alias to it.
347 # Defaults to 'true'.
348 #
349 #require_membership_for_aliases: false
350
351 # Whether to allow per-room membership profiles through the send of membership
352 # events with profile information that differ from the target's global profile.
353 # Defaults to 'true'.
354 #
355 #allow_per_room_profiles: false
356
357 # How long to keep redacted events in unredacted form in the database. After
358 # this period redacted events get replaced with their redacted form in the DB.
359 #
360 # Defaults to `7d`. Set to `null` to disable.
361 #
362 #redaction_retention_period: 28d
363
364 # How long to track users' last seen time and IPs in the database.
365 #
366 # Defaults to `28d`. Set to `null` to disable clearing out of old rows.
367 #
368 #user_ips_max_age: 14d
369
370 # Message retention policy at the server level.
371 #
372 # Room admins and mods can define a retention period for their rooms using the
373 # 'm.room.retention' state event, and server admins can cap this period by setting
374 # the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
375 #
376 # If this feature is enabled, Synapse will regularly look for and purge events
377 # which are older than the room's maximum retention period. Synapse will also
378 # filter events received over federation so that events that should have been
379 # purged are ignored and not stored again.
380 #
381 retention:
382 # The message retention policies feature is disabled by default. Uncomment the
383 # following line to enable it.
384 #
385 #enabled: true
386
387 # Default retention policy. If set, Synapse will apply it to rooms that lack the
388 # 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
389 # matter much because Synapse doesn't take it into account yet.
390 #
391 #default_policy:
392 # min_lifetime: 1d
393 # max_lifetime: 1y
394
395 # Retention policy limits. If set, a user won't be able to send a
396 # 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
397 # that's not within this range. This is especially useful in closed federations,
398 # in which server admins can make sure every federating server applies the same
399 # rules.
400 #
401 #allowed_lifetime_min: 1d
402 #allowed_lifetime_max: 1y
403
404 # Server admins can define the settings of the background jobs purging the
405 # events which lifetime has expired under the 'purge_jobs' section.
406 #
407 # If no configuration is provided, a single job will be set up to delete expired
408 # events in every room daily.
409 #
410 # Each job's configuration defines which range of message lifetimes the job
411 # takes care of. For example, if 'shortest_max_lifetime' is '2d' and
412 # 'longest_max_lifetime' is '3d', the job will handle purging expired events in
413 # rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
414 # lower than or equal to 3 days. Both the minimum and the maximum value of a
415 # range are optional, e.g. a job with no 'shortest_max_lifetime' and a
416 # 'longest_max_lifetime' of '3d' will handle every room with a retention policy
417 # which 'max_lifetime' is lower than or equal to three days.
418 #
419 # The rationale for this per-job configuration is that some rooms might have a
420 # retention policy with a low 'max_lifetime', where history needs to be purged
421 # of outdated messages on a more frequent basis than for the rest of the rooms
422 # (e.g. every 12h), but not want that purge to be performed by a job that's
423 # iterating over every room it knows, which could be heavy on the server.
424 #
425 #purge_jobs:
426 # - shortest_max_lifetime: 1d
427 # longest_max_lifetime: 3d
428 # interval: 12h
429 # - shortest_max_lifetime: 3d
430 # longest_max_lifetime: 1y
431 # interval: 1d
432
433 # Inhibits the /requestToken endpoints from returning an error that might leak
434 # information about whether an e-mail address is in use or not on this
435 # homeserver.
436 # Note that for some endpoints the error situation is the e-mail already being
437 # used, and for others the error is entering the e-mail being unused.
438 # If this option is enabled, instead of returning an error, these endpoints will
439 # act as if no error happened and return a fake session ID ('sid') to clients.
440 #
441 #request_token_inhibit_3pid_errors: true
442
443
444 ## TLS ##
445
446 # PEM-encoded X509 certificate for TLS.
447 # This certificate, as of Synapse 1.0, will need to be a valid and verifiable
448 # certificate, signed by a recognised Certificate Authority.
449 #
450 # See 'ACME support' below to enable auto-provisioning this certificate via
451 # Let's Encrypt.
452 #
453 # If supplying your own, be sure to use a `.pem` file that includes the
454 # full certificate chain including any intermediate certificates (for
455 # instance, if using certbot, use `fullchain.pem` as your certificate,
456 # not `cert.pem`).
457 #
458 #tls_certificate_path: "CONFDIR/SERVERNAME.tls.crt"
459
460 # PEM-encoded private key for TLS
461 #
462 #tls_private_key_path: "CONFDIR/SERVERNAME.tls.key"
463
464 # Whether to verify TLS server certificates for outbound federation requests.
465 #
466 # Defaults to `true`. To disable certificate verification, uncomment the
467 # following line.
468 #
469 #federation_verify_certificates: false
470
471 # The minimum TLS version that will be used for outbound federation requests.
472 #
473 # Defaults to `1`. Configurable to `1`, `1.1`, `1.2`, or `1.3`. Note
474 # that setting this value higher than `1.2` will prevent federation to most
475 # of the public Matrix network: only configure it to `1.3` if you have an
476 # entirely private federation setup and you can ensure TLS 1.3 support.
477 #
478 #federation_client_minimum_tls_version: 1.2
479
480 # Skip federation certificate verification on the following whitelist
481 # of domains.
482 #
483 # This setting should only be used in very specific cases, such as
484 # federation over Tor hidden services and similar. For private networks
485 # of homeservers, you likely want to use a private CA instead.
486 #
487 # Only effective if federation_verify_certicates is `true`.
488 #
489 #federation_certificate_verification_whitelist:
490 # - lon.example.com
491 # - *.domain.com
492 # - *.onion
493
494 # List of custom certificate authorities for federation traffic.
495 #
496 # This setting should only normally be used within a private network of
497 # homeservers.
498 #
499 # Note that this list will replace those that are provided by your
500 # operating environment. Certificates must be in PEM format.
501 #
502 #federation_custom_ca_list:
503 # - myCA1.pem
504 # - myCA2.pem
505 # - myCA3.pem
506
507 # ACME support: This will configure Synapse to request a valid TLS certificate
508 # for your configured `server_name` via Let's Encrypt.
509 #
510 # Note that ACME v1 is now deprecated, and Synapse currently doesn't support
511 # ACME v2. This means that this feature currently won't work with installs set
512 # up after November 2019. For more info, and alternative solutions, see
513 # https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
514 #
515 # Note that provisioning a certificate in this way requires port 80 to be
516 # routed to Synapse so that it can complete the http-01 ACME challenge.
517 # By default, if you enable ACME support, Synapse will attempt to listen on
518 # port 80 for incoming http-01 challenges - however, this will likely fail
519 # with 'Permission denied' or a similar error.
520 #
521 # There are a couple of potential solutions to this:
522 #
523 # * If you already have an Apache, Nginx, or similar listening on port 80,
524 # you can configure Synapse to use an alternate port, and have your web
525 # server forward the requests. For example, assuming you set 'port: 8009'
526 # below, on Apache, you would write:
527 #
528 # ProxyPass /.well-known/acme-challenge http://localhost:8009/.well-known/acme-challenge
529 #
530 # * Alternatively, you can use something like `authbind` to give Synapse
531 # permission to listen on port 80.
532 #
533 acme:
534 # ACME support is disabled by default. Set this to `true` and uncomment
535 # tls_certificate_path and tls_private_key_path above to enable it.
536 #
537 enabled: false
538
539 # Endpoint to use to request certificates. If you only want to test,
540 # use Let's Encrypt's staging url:
541 # https://acme-staging.api.letsencrypt.org/directory
542 #
543 #url: https://acme-v01.api.letsencrypt.org/directory
544
545 # Port number to listen on for the HTTP-01 challenge. Change this if
546 # you are forwarding connections through Apache/Nginx/etc.
547 #
548 port: 80
549
550 # Local addresses to listen on for incoming connections.
551 # Again, you may want to change this if you are forwarding connections
552 # through Apache/Nginx/etc.
553 #
554 bind_addresses: ['::', '0.0.0.0']
555
556 # How many days remaining on a certificate before it is renewed.
557 #
558 reprovision_threshold: 30
559
560 # The domain that the certificate should be for. Normally this
561 # should be the same as your Matrix domain (i.e., 'server_name'), but,
562 # by putting a file at 'https://<server_name>/.well-known/matrix/server',
563 # you can delegate incoming traffic to another server. If you do that,
564 # you should give the target of the delegation here.
565 #
566 # For example: if your 'server_name' is 'example.com', but
567 # 'https://example.com/.well-known/matrix/server' delegates to
568 # 'matrix.example.com', you should put 'matrix.example.com' here.
569 #
570 # If not set, defaults to your 'server_name'.
571 #
572 domain: matrix.example.com
573
574 # file to use for the account key. This will be generated if it doesn't
575 # exist.
576 #
577 # If unspecified, we will use CONFDIR/client.key.
578 #
579 account_key_file: DATADIR/acme_account.key
580
581 # List of allowed TLS fingerprints for this server to publish along
582 # with the signing keys for this server. Other matrix servers that
583 # make HTTPS requests to this server will check that the TLS
584 # certificates returned by this server match one of the fingerprints.
585 #
586 # Synapse automatically adds the fingerprint of its own certificate
587 # to the list. So if federation traffic is handled directly by synapse
588 # then no modification to the list is required.
589 #
590 # If synapse is run behind a load balancer that handles the TLS then it
591 # will be necessary to add the fingerprints of the certificates used by
592 # the loadbalancers to this list if they are different to the one
593 # synapse is using.
594 #
595 # Homeservers are permitted to cache the list of TLS fingerprints
596 # returned in the key responses up to the "valid_until_ts" returned in
597 # key. It may be necessary to publish the fingerprints of a new
598 # certificate and wait until the "valid_until_ts" of the previous key
599 # responses have passed before deploying it.
600 #
601 # You can calculate a fingerprint from a given TLS listener via:
602 # openssl s_client -connect $host:$port < /dev/null 2> /dev/null |
603 # openssl x509 -outform DER | openssl sha256 -binary | base64 | tr -d '='
604 # or by checking matrix.org/federationtester/api/report?server_name=$host
605 #
606 #tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]
607
608611
609612
610613 ## Caching ##
681684 #database:
682685 # name: psycopg2
683686 # args:
684 # user: synapse
687 # user: synapse_user
685688 # password: secretpassword
686689 # database: synapse
687690 # host: localhost
18101813 # Each JSON Web Token needs to contain a "sub" (subject) claim, which is
18111814 # used as the localpart of the mxid.
18121815 #
1816 # Additionally, the expiration time ("exp"), not before time ("nbf"),
1817 # and issued at ("iat") claims are validated if present.
1818 #
18131819 # Note that this is a non-standard login type and client support is
18141820 # expected to be non-existent.
18151821 #
18361842 # Required if 'enabled' is true.
18371843 #
18381844 #algorithm: "provided-by-your-issuer"
1845
1846 # The issuer to validate the "iss" claim against.
1847 #
1848 # Optional, if provided the "iss" claim will be required and
1849 # validated for all JSON web tokens.
1850 #
1851 #issuer: "provided-by-your-issuer"
1852
1853 # A list of audiences to validate the "aud" claim against.
1854 #
1855 # Optional, if provided the "aud" claim will be required and
1856 # validated for all JSON web tokens.
1857 #
1858 # Note that if the "aud" claim is included in a JSON web token then
1859 # validation will fail without configuring audiences.
1860 #
1861 #audiences:
1862 # - "provided-by-your-issuer"
18391863
18401864
18411865 password_config:
19261950 #
19271951 #notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
19281952
1929 # app_name defines the default value for '%(app)s' in notif_from. It
1930 # defaults to 'Matrix'.
1953 # app_name defines the default value for '%(app)s' in notif_from and email
1954 # subjects. It defaults to 'Matrix'.
19311955 #
19321956 #app_name: my_branded_matrix_server
19331957
19952019 # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
19962020 #
19972021 #template_dir: "res/templates"
2022
2023 # Subjects to use when sending emails from Synapse.
2024 #
2025 # The placeholder '%(app)s' will be replaced with the value of the 'app_name'
2026 # setting above, or by a value dictated by the Matrix client application.
2027 #
2028 # If a subject isn't overridden in this configuration file, the example value
2029 # shown for it below will be used.
2030 #
2031 #subjects:
2032
2033 # Subjects for notification emails.
2034 #
2035 # On top of the '%(app)s' placeholder, these can use the following
2036 # placeholders:
2037 #
2038 # * '%(person)s', which will be replaced by the display name of the user(s)
2039 # that sent the message(s), e.g. "Alice and Bob".
2040 # * '%(room)s', which will be replaced by the name of the room the
2041 # message(s) have been sent to, e.g. "My super room".
2042 #
2043 # See the example provided for each setting to see which placeholder can be
2044 # used and how to use them.
2045 #
2046 # Subject to use to notify about one message from one or more user(s) in a
2047 # room which has a name.
2048 #message_from_person_in_room: "[%(app)s] You have a message on %(app)s from %(person)s in the %(room)s room..."
2049 #
2050 # Subject to use to notify about one message from one or more user(s) in a
2051 # room which doesn't have a name.
2052 #message_from_person: "[%(app)s] You have a message on %(app)s from %(person)s..."
2053 #
2054 # Subject to use to notify about multiple messages from one or more users in
2055 # a room which doesn't have a name.
2056 #messages_from_person: "[%(app)s] You have messages on %(app)s from %(person)s..."
2057 #
2058 # Subject to use to notify about multiple messages in a room which has a
2059 # name.
2060 #messages_in_room: "[%(app)s] You have messages on %(app)s in the %(room)s room..."
2061 #
2062 # Subject to use to notify about multiple messages in multiple rooms.
2063 #messages_in_room_and_others: "[%(app)s] You have messages on %(app)s in the %(room)s room and others..."
2064 #
2065 # Subject to use to notify about multiple messages from multiple persons in
2066 # multiple rooms. This is similar to the setting above except it's used when
2067 # the room in which the notification was triggered has no name.
2068 #messages_from_person_and_others: "[%(app)s] You have messages on %(app)s from %(person)s and others..."
2069 #
2070 # Subject to use to notify about an invite to a room which has a name.
2071 #invite_from_person_to_room: "[%(app)s] %(person)s has invited you to join the %(room)s room on %(app)s..."
2072 #
2073 # Subject to use to notify about an invite to a room which doesn't have a
2074 # name.
2075 #invite_from_person: "[%(app)s] %(person)s has invited you to chat on %(app)s..."
2076
2077 # Subject for emails related to account administration.
2078 #
2079 # On top of the '%(app)s' placeholder, these can also use the
2080 # '%(server_name)s' placeholder, which will be replaced by the value of the
2081 # 'server_name' setting in your Synapse configuration.
2082 #
2083 # Subject to use when sending a password reset email.
2084 #password_reset: "[%(server_name)s] Password reset"
2085 #
2086 # Subject to use when sending a verification email to assert an address's
2087 # ownership.
2088 #email_validation: "[%(server_name)s] Validate your email"
19982089
19992090
20002091 # Password providers allow homeserver administrators to integrate
23062397 #
23072398 # logging:
23082399 # false
2400
2401
2402 ## Workers ##
2403
2404 # Disables sending of outbound federation transactions on the main process.
2405 # Uncomment if using a federation sender worker.
2406 #
2407 #send_federation: false
2408
2409 # It is possible to run multiple federation sender workers, in which case the
2410 # work is balanced across them.
2411 #
2412 # This configuration must be shared between all federation sender workers, and if
2413 # changed, all federation sender workers must be stopped at the same time and then
2414 # started, to ensure that all instances are running with the same config (otherwise
2415 # events may be dropped).
2416 #
2417 #federation_sender_instances:
2418 # - federation_sender1
2419
2420 # When using workers this should be a map from `worker_name` to the
2421 # HTTP replication listener of the worker, if configured.
2422 #
2423 #instance_map:
2424 # worker1:
2425 # host: localhost
2426 # port: 8034
2427
2428 # Experimental: When using workers you can define which workers should
2429 # handle event persistence and typing notifications. Any worker
2430 # specified here must also be in the `instance_map`.
2431 #
2432 #stream_writers:
2433 # events: worker1
2434 # typing: worker1
2435
2436
2437 # Configuration for Redis when using workers. This *must* be enabled when
2438 # using workers (unless using old style direct TCP configuration).
2439 #
2440 redis:
2441 # Uncomment the below to enable Redis support.
2442 #
2443 #enabled: true
2444
2445 # Optional host and port to use to connect to Redis. Defaults to
2446 # localhost and 6379.
2447 #
2448 #host: localhost
2449 #port: 6379
2450
2451 # Optional password if configured on the Redis instance
2452 #
2453 #password: <secret_password>
0 ### Using synctl with workers
1
2 If you want to use `synctl` to manage your synapse processes, you will need to
3 create an additional configuration file for the main synapse process. That
4 configuration should look like this:
5
6 ```yaml
7 worker_app: synapse.app.homeserver
8 ```
9
10 Additionally, each worker app must be configured with the name of a "pid file",
11 to which it will write its process ID when it starts. For example, for a
12 synchrotron, you might write:
13
14 ```yaml
15 worker_pid_file: /home/matrix/synapse/worker1.pid
16 ```
17
18 Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
19 commandline option to tell it to operate on all the worker configurations found
20 in the given directory, e.g.:
21
22 synctl -a $CONFIG/workers start
23
24 Currently you should always restart all workers when restarting or upgrading
25 synapse, unless you know for certain it's safe not to. For instance, restarting
26 synapse without restarting all the synchrotrons may result in broken typing
27 notifications.
28
29 To manipulate a specific worker, you pass the `-w` option to synctl:
30
31 synctl -w $CONFIG/workers/worker1.yaml restart
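For illustration, here is a rough sketch of what a complete worker configuration file under `$CONFIG/workers` could look like, pulling the above settings together for a generic worker named `worker1`; the ports and paths are only examples and should match your own setup:

```yaml
# $CONFIG/workers/worker1.yaml -- illustrative sketch only
worker_app: synapse.app.generic_worker
worker_name: worker1

# The HTTP replication listener of the main process (see workers.md).
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

# The pid file that synctl will use to track this process.
worker_pid_file: /home/matrix/synapse/worker1.pid

worker_log_config: /home/matrix/synapse/config/worker1_log_config.yaml
```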
1515 be used for demo purposes and any admin considering workers should already be
1616 running PostgreSQL.
1717
18 ## Master/worker communication
19
20 The workers communicate with the master process via a Synapse-specific protocol
21 called 'replication' (analogous to MySQL- or Postgres-style database
22 replication) which feeds a stream of relevant data from the master to the
23 workers so they can be kept in sync with the master process and database state.
24
25 Additionally, workers may make HTTP requests to the master, to send information
26 in the other direction. Typically this is used for operations which need to
27 wait for a reply - such as sending an event.
28
29 ## Configuration
18 ## Main process/worker communication
19
20 The processes communicate with each other via a Synapse-specific protocol called
21 'replication' (analogous to MySQL- or Postgres-style database replication) which
22 feeds streams of newly written data between processes so they can be kept in
23 sync with the database state.
24
25 Additionally, processes may make HTTP requests to each other. Typically this is
26 used for operations which need to wait for a reply - such as sending an event.
27
28 As of Synapse v1.13.0, it is possible to configure Synapse to send replication
29 via a [Redis pub/sub channel](https://redis.io/topics/pubsub); this is now the
30 recommended way of configuring replication. It is an alternative to the old
31 direct TCP connections to the main process: rather than all the workers
32 connecting to the main process, all the workers and the main process connect to
33 Redis, which relays replication commands between processes. This can give a
34 significant CPU saving on the main process and will be a prerequisite for
35 upcoming performance improvements.
36
37 (See the [Architectural diagram](#architectural-diagram) section at the end for
38 a visualisation of what this looks like)
39
40
41 ## Setting up workers
42
43 A Redis server is required to manage the communication between the processes.
44 (The older direct TCP connections are now deprecated.) The Redis server
45 should be installed following the normal procedure for your distribution (e.g.
46 `apt install redis-server` on Debian). It is safe to use an existing Redis
47 deployment if you have one.
48
49 Once installed, check that Redis is running and accessible from the host running
50 Synapse, for example by executing `echo PING | nc -q1 localhost 6379` and seeing
51 a response of `+PONG`.
52
53 The appropriate dependencies must also be installed for Synapse. If using a
54 virtualenv, these can be installed with:
55
56 ```sh
57 pip install matrix-synapse[redis]
58 ```
59
60 Note that these dependencies are included when synapse is installed with `pip
61 install matrix-synapse[all]`. They are also included in the Debian packages from
62 `matrix.org` and in the Docker images at
63 https://hub.docker.com/r/matrixdotorg/synapse/.
3064
3165 To make effective use of the workers, you will need to configure an HTTP
3266 reverse-proxy such as nginx or haproxy, which will direct incoming requests to
33 the correct worker, or to the main synapse instance. Note that this includes
34 requests made to the federation port. See [reverse_proxy.md](reverse_proxy.md)
67 the correct worker, or to the main synapse instance. See [reverse_proxy.md](reverse_proxy.md)
3568 for information on setting up a reverse proxy.
3669
37 To enable workers, you need to add *two* replication listeners to the
38 main Synapse configuration file (`homeserver.yaml`). For example:
39
40 ```yaml
70 To enable workers, you should create a configuration file for each worker
71 process. Each worker configuration file inherits the configuration of the shared
72 homeserver configuration file. You can then override configuration specific to
73 that worker, e.g. the HTTP listener that it provides (if any); logging
74 configuration; etc. You should minimise the number of overrides though to
75 maintain a usable config.
76
77 Next you need to add both an HTTP replication listener and Redis config to the
78 shared Synapse configuration file (`homeserver.yaml`). For example:
79
80 ```yaml
81 # extend the existing `listeners` section. This defines the ports that the
82 # main process will listen on.
4183 listeners:
42 # The TCP replication port
43 - port: 9092
44 bind_address: '127.0.0.1'
45 type: replication
46
4784 # The HTTP replication port
4885 - port: 9093
4986 bind_address: '127.0.0.1'
5087 type: http
5188 resources:
5289 - names: [replication]
53 ```
54
55 Under **no circumstances** should these replication API listeners be exposed to
56 the public internet; they have no authentication and are unencrypted.
57
58 You should then create a set of configs for the various worker processes. Each
59 worker configuration file inherits the configuration of the main homeserver
60 configuration file. You can then override configuration specific to that
61 worker, e.g. the HTTP listener that it provides (if any); logging
62 configuration; etc. You should minimise the number of overrides though to
63 maintain a usable config.
90
91 redis:
92 enabled: true
93 ```
94
95 See the sample config for the full documentation of each option.
96
97 Under **no circumstances** should the replication listener be exposed to the
98 public internet; it has no authentication and is unencrypted.
6499
65100 In the config file for each worker, you must specify the type of worker
66 application (`worker_app`). The currently available worker applications are
67 listed below. You must also specify the replication endpoints that it should
68 talk to on the main synapse process. `worker_replication_host` should specify
69 the host of the main synapse, `worker_replication_port` should point to the TCP
70 replication listener port and `worker_replication_http_port` should point to
71 the HTTP replication port.
101 application (`worker_app`), and you should specify a unique name for the worker
102 (`worker_name`). The currently available worker applications are listed below.
103 You must also specify the HTTP replication endpoint that it should talk to on
104 the main synapse process. `worker_replication_host` should specify the host of
105 the main synapse and `worker_replication_http_port` should point to the HTTP
106 replication port. If the worker will handle HTTP requests, the
107 `worker_listeners` option should be set with an `http` listener, in the same way
108 as the `listeners` option in the shared config.
72109
73110 For example:
74111
75112 ```yaml
76 worker_app: synapse.app.synchrotron
77
78 # The replication listener on the synapse to talk to.
113 worker_app: synapse.app.generic_worker
114 worker_name: worker1
115
116 # The replication listener on the main synapse process.
79117 worker_replication_host: 127.0.0.1
80 worker_replication_port: 9092
81118 worker_replication_http_port: 9093
82119
83120 worker_listeners:
86123 resources:
87124 - names:
88125 - client
89
90 worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
91 ```
92
93 ...is a full configuration for a synchrotron worker instance, which will expose a
94 plain HTTP `/sync` endpoint on port 8083 separately from the `/sync` endpoint provided
95 by the main synapse.
126 - federation
127
128 worker_log_config: /home/matrix/synapse/config/worker1_log_config.yaml
129 ```
130
131 ...is a full configuration for a generic worker instance, which will expose a
132 plain HTTP endpoint on port 8083, separate from the main process, serving
133 various endpoints listed below, e.g. `/sync`.
96134
97135 Obviously you should configure your reverse-proxy to route the relevant
98136 endpoints to the worker (`localhost:8083` in the above example).
101139 `synctl` or your distribution's preferred service manager such as `systemd`. We
102140 recommend the use of `systemd` where available: for information on setting up
103141 `systemd` to start synapse workers, see
104 [systemd-with-workers](systemd-with-workers). To use `synctl`, see below.
105
106 ### **Experimental** support for replication over redis
107
108 As of Synapse v1.13.0, it is possible to configure Synapse to send replication
109 via a [Redis pub/sub channel](https://redis.io/topics/pubsub). This is an
110 alternative to direct TCP connections to the master: rather than all the
111 workers connecting to the master, all the workers and the master connect to
112 Redis, which relays replication commands between processes. This can give a
113 significant cpu saving on the master and will be a prerequisite for upcoming
114 performance improvements.
115
116 Note that this support is currently experimental; you may experience lost
117 messages and similar problems! It is strongly recommended that admins setting
118 up workers for the first time use direct TCP replication as above.
119
120 To configure Synapse to use Redis:
121
122 1. Install Redis following the normal procedure for your distribution - for
123 example, on Debian, `apt install redis-server`. (It is safe to use an
124 existing Redis deployment if you have one: we use a pub/sub stream named
125 according to the `server_name` of your synapse server.)
126 2. Check Redis is running and accessible: you should be able to `echo PING | nc -q1
127 localhost 6379` and get a response of `+PONG`.
128 3. Install the python prerequisites. If you installed synapse into a
129 virtualenv, this can be done with:
130 ```sh
131 pip install matrix-synapse[redis]
132 ```
133 The debian packages from matrix.org already include the required
134 dependencies.
135 4. Add config to the shared configuration (`homeserver.yaml`):
136 ```yaml
137 redis:
138 enabled: true
139 ```
140 Optional parameters which can go alongside `enabled` are `host`, `port`,
141 `password`. Normally none of these are required.
142 5. Restart master and all workers.
143
144 Once redis replication is in use, `worker_replication_port` is redundant and
145 can be removed from the worker configuration files. Similarly, the
146 configuration for the `listener` for the TCP replication port can be removed
147 from the main configuration file. Note that the HTTP replication port is
148 still required.
149
150 ### Using synctl
151
152 If you want to use `synctl` to manage your synapse processes, you will need to
153 create an an additional configuration file for the master synapse process. That
154 configuration should look like this:
155
156 ```yaml
157 worker_app: synapse.app.homeserver
158 ```
159
160 Additionally, each worker app must be configured with the name of a "pid file",
161 to which it will write its process ID when it starts. For example, for a
162 synchrotron, you might write:
163
164 ```yaml
165 worker_pid_file: /home/matrix/synapse/synchrotron.pid
166 ```
167
168 Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
169 commandline option to tell it to operate on all the worker configurations found
170 in the given directory, e.g.:
171
172 synctl -a $CONFIG/workers start
173
174 Currently one should always restart all workers when restarting or upgrading
175 synapse, unless you explicitly know it's safe not to. For instance, restarting
176 synapse without restarting all the synchrotrons may result in broken typing
177 notifications.
178
179 To manipulate a specific worker, you pass the -w option to synctl:
180
181 synctl -w $CONFIG/workers/synchrotron.yaml restart
142 [systemd-with-workers](systemd-with-workers). To use `synctl`, see
143 [synctl_workers.md](synctl_workers.md).
144
182145
183146 ## Available worker applications
184147
185 ### `synapse.app.pusher`
186
187 Handles sending push notifications to sygnal and email. Doesn't handle any
188 REST endpoints itself, but you should set `start_pushers: False` in the
189 shared configuration file to stop the main synapse sending these notifications.
190
191 Note this worker cannot be load-balanced: only one instance should be active.
192
193 ### `synapse.app.synchrotron`
194
195 The synchrotron handles `sync` requests from clients. In particular, it can
196 handle REST endpoints matching the following regular expressions:
197
148 ### `synapse.app.generic_worker`
149
150 This worker can handle API requests matching the following regular
151 expressions:
152
153 # Sync requests
198154 ^/_matrix/client/(v2_alpha|r0)/sync$
199155 ^/_matrix/client/(api/v1|v2_alpha|r0)/events$
200156 ^/_matrix/client/(api/v1|r0)/initialSync$
201157 ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
202158
203 The above endpoints should all be routed to the synchrotron worker by the
204 reverse-proxy configuration.
205
206 It is possible to run multiple instances of the synchrotron to scale
207 horizontally. In this case the reverse-proxy should be configured to
208 load-balance across the instances, though it will be more efficient if all
209 requests from a particular user are routed to a single instance. Extracting
210 a userid from the access token is currently left as an exercise for the reader.
211
212 ### `synapse.app.appservice`
213
214 Handles sending output traffic to Application Services. Doesn't handle any
215 REST endpoints itself, but you should set `notify_appservices: False` in the
216 shared configuration file to stop the main synapse sending these notifications.
217
218 Note this worker cannot be load-balanced: only one instance should be active.
219
220 ### `synapse.app.federation_reader`
221
222 Handles a subset of federation endpoints. In particular, it can handle REST
223 endpoints matching the following regular expressions:
224
159 # Federation requests
225160 ^/_matrix/federation/v1/event/
226161 ^/_matrix/federation/v1/state/
227162 ^/_matrix/federation/v1/state_ids/
241176 ^/_matrix/federation/v1/event_auth/
242177 ^/_matrix/federation/v1/exchange_third_party_invite/
243178 ^/_matrix/federation/v1/user/devices/
244 ^/_matrix/federation/v1/send/
245179 ^/_matrix/federation/v1/get_groups_publicised$
246180 ^/_matrix/key/v2/query
247181
248 Additionally, the following REST endpoints can be handled for GET requests:
249
250 ^/_matrix/federation/v1/groups/
251
252 The above endpoints should all be routed to the federation_reader worker by the
253 reverse-proxy configuration.
254
255 The `^/_matrix/federation/v1/send/` endpoint must only be handled by a single
256 instance.
257
258 Note that `federation` must be added to the listener resources in the worker config:
259
260 ```yaml
261 worker_app: synapse.app.federation_reader
262 ...
263 worker_listeners:
264 - type: http
265 port: <port>
266 resources:
267 - names:
268 - federation
269 ```
270
271 ### `synapse.app.federation_sender`
272
273 Handles sending federation traffic to other servers. Doesn't handle any
274 REST endpoints itself, but you should set `send_federation: False` in the
275 shared configuration file to stop the main synapse sending this traffic.
276
277 Note this worker cannot be load-balanced: only one instance should be active.
278
279 ### `synapse.app.media_repository`
280
281 Handles the media repository. It can handle all endpoints starting with:
282
283 /_matrix/media/
284
285 ... and the following regular expressions matching media-specific administration APIs:
286
287 ^/_synapse/admin/v1/purge_media_cache$
288 ^/_synapse/admin/v1/room/.*/media.*$
289 ^/_synapse/admin/v1/user/.*/media.*$
290 ^/_synapse/admin/v1/media/.*$
291 ^/_synapse/admin/v1/quarantine_media/.*$
292
293 You should also set `enable_media_repo: False` in the shared configuration
294 file to stop the main synapse running background jobs related to managing the
295 media repository.
296
297 In the `media_repository` worker configuration file, configure the http listener to
298 expose the `media` resource. For example:
299
300 ```yaml
301 worker_listeners:
302 - type: http
303 port: 8085
304 resources:
305 - names:
306 - media
307 ```
308
309 Note that if running multiple media repositories they must be on the same server
310 and you must configure a single instance to run the background tasks, e.g.:
311
312 ```yaml
313 media_instance_running_background_jobs: "media-repository-1"
314 ```
315
316 ### `synapse.app.client_reader`
317
318 Handles client API endpoints. It can handle REST endpoints matching the
319 following regular expressions:
320
182 # Inbound federation transaction request
183 ^/_matrix/federation/v1/send/
184
185 # Client API requests
321186 ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
322187 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
323188 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
324189 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
325190 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
326 ^/_matrix/client/(api/v1|r0|unstable)/login$
327191 ^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
328192 ^/_matrix/client/(api/v1|r0|unstable)/keys/query$
329193 ^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
333197 ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
334198 ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
335199
336 Additionally, the following REST endpoints can be handled for GET requests:
337
338 ^/_matrix/client/(api/v1|r0|unstable)/pushrules/.*$
339 ^/_matrix/client/(api/v1|r0|unstable)/groups/.*$
340 ^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/account_data/
341 ^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/rooms/[^/]*/account_data/
342
343 Additionally, the following REST endpoints can be handled, but all requests must
344 be routed to the same instance:
345
200 # Registration/login requests
201 ^/_matrix/client/(api/v1|r0|unstable)/login$
346202 ^/_matrix/client/(r0|unstable)/register$
347203 ^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
348204
349 Pagination requests can also be handled, but all requests with the same path
350 room must be routed to the same instance. Additionally, care must be taken to
351 ensure that the purge history admin API is not used while pagination requests
352 for the room are in flight:
353
354 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
355
356 ### `synapse.app.user_dir`
357
358 Handles searches in the user directory. It can handle REST endpoints matching
359 the following regular expressions:
360
361 ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
362
363 When using this worker you must also set `update_user_directory: False` in the
364 shared configuration file to stop the main synapse running background
365 jobs related to updating the user directory.
366
367 ### `synapse.app.frontend_proxy`
368
369 Proxies some frequently-requested client endpoints to add caching and remove
370 load from the main synapse. It can handle REST endpoints matching the following
371 regular expressions:
372
373 ^/_matrix/client/(api/v1|r0|unstable)/keys/upload
374
375 If `use_presence` is False in the homeserver config, it can also handle REST
376 endpoints matching the following regular expressions:
377
378 ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
379
380 This "stub" presence handler will pass through `GET` request but make the
381 `PUT` effectively a no-op.
382
383 It will proxy any requests it cannot handle to the main synapse instance. It
384 must therefore be configured with the location of the main instance, via
385 the `worker_main_http_uri` setting in the `frontend_proxy` worker configuration
386 file. For example:
387
388 worker_main_http_uri: http://127.0.0.1:8008
389
390 ### `synapse.app.event_creator`
391
392 Handles some event creation. It can handle REST endpoints matching:
393
205 # Event sending requests
394206 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
395207 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
396208 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
397209 ^/_matrix/client/(api/v1|r0|unstable)/join/
398210 ^/_matrix/client/(api/v1|r0|unstable)/profile/
399211
400 It will create events locally and then send them on to the main synapse
401 instance to be persisted and handled.
212
213 Additionally, the following REST endpoints can be handled for GET requests:
214
215 ^/_matrix/federation/v1/groups/
216
217 Pagination requests can also be handled, but all requests for a given
218 room must be routed to the same instance. Additionally, care must be taken to
219 ensure that the purge history admin API is not used while pagination requests
220 for the room are in flight:
221
222 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
223
224 Note that an HTTP listener with `client` and `federation` resources must be
225 configured in the `worker_listeners` option in the worker config.
226
227
228 #### Load balancing
229
230 It is possible to run multiple instances of this worker app, with incoming requests
231 being load-balanced between them by the reverse-proxy. However, different endpoints
232 have different characteristics and so admins
233 may wish to run multiple groups of workers handling different endpoints so that
234 load balancing can be done in different ways.
235
236 For `/sync` and `/initialSync` requests it will be more efficient if all
237 requests from a particular user are routed to a single instance. Extracting a
238 user ID from the access token or `Authorization` header is currently left as an
239 exercise for the reader. Admins may additionally wish to separate out `/sync`
240 requests that have a `since` query parameter from those that don't (and
241 `/initialSync`): requests without a `since` parameter are "initial syncs", which
242 happen when a user logs in on a new device and can be *very* resource intensive,
243 so isolating these requests stops them from interfering with other users'
244 ongoing syncs.
245
246 Federation and client requests can be balanced via simple round robin.
247
248 The inbound federation transaction request `^/_matrix/federation/v1/send/`
249 should be balanced by source IP so that transactions from the same remote server
250 go to the same process.
251
252 Registration/login requests can be handled separately purely to help ensure that
253 unexpected load doesn't affect new logins and sign ups.
254
255 Finally, event sending requests can be balanced by the room ID in the URI (or
256 the full URI, or even just round robin); the room ID is the path component after
257 `/rooms/`. If there is a large bridge connected that is sending or may send lots
258 of events, then a dedicated set of workers can be provisioned to limit the
259 effects of bursts of events from that bridge on events sent by normal users.
260
261 #### Stream writers
262
263 Additionally, there is *experimental* support for moving writing of specific
264 streams (such as events) off of the main process to a particular worker. (This
265 is only supported with Redis-based replication.)
266
267 Currently supported streams are `events` and `typing`.
268
269 To enable this, the worker must have a HTTP replication listener configured,
270 have a `worker_name` and be listed in the `instance_map` config. For example to
271 move event persistence off to a dedicated worker, the shared configuration would
272 include:
273
274 ```yaml
275 instance_map:
276 event_persister1:
277 host: localhost
278 port: 8034
279
280 stream_writers:
281 events: event_persister1
282 ```
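To make this concrete, the `event_persister1` worker's own configuration file might then look roughly like the sketch below; the replication listener port must match the `instance_map` entry above, while the remaining host, port and path values are illustrative:

```yaml
worker_app: synapse.app.generic_worker
worker_name: event_persister1

# The main process's HTTP replication listener.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  # The HTTP replication listener that other processes will reach via the
  # instance_map entry (localhost:8034 in the shared config above).
  - type: http
    port: 8034
    resources:
      - names:
          - replication

worker_log_config: /home/matrix/synapse/config/event_persister1_log_config.yaml
```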
283
284
285 ### `synapse.app.pusher`
286
287 Handles sending push notifications to sygnal and email. Doesn't handle any
288 REST endpoints itself, but you should set `start_pushers: False` in the
289 shared configuration file to stop the main synapse sending push notifications.
290
291 Note this worker cannot be load-balanced: only one instance should be active.
292
293 ### `synapse.app.appservice`
294
295 Handles sending output traffic to Application Services. Doesn't handle any
296 REST endpoints itself, but you should set `notify_appservices: False` in the
297 shared configuration file to stop the main synapse sending appservice notifications.
298
299 Note this worker cannot be load-balanced: only one instance should be active.
300
301
302 ### `synapse.app.federation_sender`
303
304 Handles sending federation traffic to other servers. Doesn't handle any
305 REST endpoints itself, but you should set `send_federation: False` in the
306 shared configuration file to stop the main synapse sending this traffic.
307
308 If running multiple federation senders then you must list each
309 instance in the `federation_sender_instances` option by their `worker_name`.
310 All instances must be stopped and started when adding or removing instances.
311 For example:
312
313 ```yaml
314 federation_sender_instances:
315 - federation_sender1
316 - federation_sender2
317 ```
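Each instance listed there also needs its own worker configuration file. A minimal sketch for the first one, with illustrative host, port and path values, might be:

```yaml
worker_app: synapse.app.federation_sender
worker_name: federation_sender1

# The main process's HTTP replication listener.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

# No worker_listeners: this worker serves no REST endpoints itself.
worker_log_config: /home/matrix/synapse/config/federation_sender1_log_config.yaml
```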
318
319 ### `synapse.app.media_repository`
320
321 Handles the media repository. It can handle all endpoints starting with:
322
323 /_matrix/media/
324
325 ... and the following regular expressions matching media-specific administration APIs:
326
327 ^/_synapse/admin/v1/purge_media_cache$
328 ^/_synapse/admin/v1/room/.*/media.*$
329 ^/_synapse/admin/v1/user/.*/media.*$
330 ^/_synapse/admin/v1/media/.*$
331 ^/_synapse/admin/v1/quarantine_media/.*$
332
333 You should also set `enable_media_repo: False` in the shared configuration
334 file to stop the main synapse running background jobs related to managing the
335 media repository.
336
337 In the `media_repository` worker configuration file, configure the http listener to
338 expose the `media` resource. For example:
339
340 ```yaml
341 worker_listeners:
342 - type: http
343 port: 8085
344 resources:
345 - names:
346 - media
347 ```
348
349 Note that if running multiple media repositories they must be on the same server
350 and you must configure a single instance to run the background tasks, e.g.:
351
352 ```yaml
353 media_instance_running_background_jobs: "media-repository-1"
354 ```
355
356 ### `synapse.app.user_dir`
357
358 Handles searches in the user directory. It can handle REST endpoints matching
359 the following regular expressions:
360
361 ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
362
363 When using this worker you must also set `update_user_directory: False` in the
364 shared configuration file to stop the main synapse running background
365 jobs related to updating the user directory.
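A rough sketch of a user directory worker's configuration, assuming the search endpoint is exposed via the `client` listener resource and using illustrative port and path values (remember to also set `update_user_directory: false` in the shared configuration as described above):

```yaml
worker_app: synapse.app.user_dir
worker_name: user_dir1

worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 8086
    resources:
      - names:
          - client

worker_log_config: /home/matrix/synapse/config/user_dir_log_config.yaml
```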
366
367 ### `synapse.app.frontend_proxy`
368
369 Proxies some frequently-requested client endpoints to add caching and remove
370 load from the main synapse. It can handle REST endpoints matching the following
371 regular expressions:
372
373 ^/_matrix/client/(api/v1|r0|unstable)/keys/upload
374
375 If `use_presence` is False in the homeserver config, it can also handle REST
376 endpoints matching the following regular expressions:
377
378 ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
379
380 This "stub" presence handler will pass through `GET` request but make the
381 `PUT` effectively a no-op.
382
383 It will proxy any requests it cannot handle to the main synapse instance. It
384 must therefore be configured with the location of the main instance, via
385 the `worker_main_http_uri` setting in the `frontend_proxy` worker configuration
386 file. For example:
387
388 worker_main_http_uri: http://127.0.0.1:8008
389
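Putting this section together, a frontend proxy worker's configuration might look roughly like the following sketch (the listener uses the `client` resource, and the port and paths are illustrative):

```yaml
worker_app: synapse.app.frontend_proxy
worker_name: frontend_proxy1

worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

# Requests this worker cannot handle are proxied to the main process here.
worker_main_http_uri: http://127.0.0.1:8008

worker_listeners:
  - type: http
    port: 8084
    resources:
      - names:
          - client

worker_log_config: /home/matrix/synapse/config/frontend_proxy_log_config.yaml
```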
390 ### Historical apps
391
392 *Note:* Historically there used to be more apps; however, they have been
393 amalgamated into a single `synapse.app.generic_worker` app. The remaining apps
394 are ones that do specific processing unrelated to requests, e.g. the `pusher`
395 that handles sending out push notifications for new events. The intention is for
396 all these to be folded into the `generic_worker` app and to use config to define
397 which processes handle the various types of processing, such as push notifications.
398
399
400 ## Architectural diagram
401
402 The following shows an example setup using Redis and a reverse proxy:
403
404 ```
405 Clients & Federation
406 |
407 v
408 +-----------+
409 | |
410 | Reverse |
411 | Proxy |
412 | |
413 +-----------+
414 | | |
415 | | | HTTP requests
416 +-------------------+ | +-----------+
417 | +---+ |
418 | | |
419 v v v
420 +--------------+ +--------------+ +--------------+ +--------------+
421 | Main | | Generic | | Generic | | Event |
422 | Process | | Worker 1 | | Worker 2 | | Persister |
423 +--------------+ +--------------+ +--------------+ +--------------+
424 ^ ^ | ^ | | ^ | ^ ^
425 | | | | | | | | | |
426 | | | | | HTTP | | | | |
427 | +----------+<--|---|---------+ | | | |
428 | | +-------------|-->+----------+ |
429 | | | |
430 | | | |
431 v v v v
432 ====================================================================
433 Redis pub/sub channel
434 ```
4747 )
4848 from synapse.storage.data_stores.main.registration import (
4949 RegistrationBackgroundUpdateStore,
50 find_max_generated_user_id_localpart,
5051 )
5152 from synapse.storage.data_stores.main.room import RoomBackgroundUpdateStore
5253 from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
621622 )
622623 )
623624
624 # Step 5. Do final post-processing
625 # Step 5. Set up sequences
626 self.progress.set_state("Setting up sequence generators")
625627 await self._setup_state_group_id_seq()
628 await self._setup_user_id_seq()
626629
627630 self.progress.done()
628631 except Exception as e:
791794 txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
792795
793796 return self.postgres_store.db.runInteraction("setup_state_group_id_seq", r)
797
798 def _setup_user_id_seq(self):
799 def r(txn):
800 next_id = find_max_generated_user_id_localpart(txn) + 1
801 txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
802
803 return self.postgres_store.db.runInteraction("setup_user_id_seq", r)
794804
795805
796806 ##############################################
2323 "debian:sid",
2424 "ubuntu:xenial",
2525 "ubuntu:bionic",
26 "ubuntu:eoan",
2726 "ubuntu:focal",
2827 )
2928
1010 then
1111 files=$*
1212 else
13 files="synapse tests scripts-dev scripts"
13 files="synapse tests scripts-dev scripts contrib synctl"
1414 fi
1515
1616 echo "Linting these locations: $files"
2121 def publish(self, channel: str, message: bytes): ...
2222
2323 class SubscriberProtocol:
24 def __init__(self, *args, **kwargs): ...
2425 password: Optional[str]
2526 def subscribe(self, channels: Union[str, List[str]]): ...
2627 def connectionMade(self): ...
3535 except ImportError:
3636 pass
3737
38 __version__ = "1.17.0"
38 __version__ = "1.18.0"
3939
4040 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
4141 # We import here so that we don't have to install a bunch of deps when
126126 if current_state:
127127 member = current_state.get((EventTypes.Member, user_id), None)
128128 else:
129 member = yield self.state.get_current_state(
130 room_id=room_id, event_type=EventTypes.Member, state_key=user_id
129 member = yield defer.ensureDeferred(
130 self.state.get_current_state(
131 room_id=room_id, event_type=EventTypes.Member, state_key=user_id
132 )
131133 )
132134 membership = member.membership if member else None
133135
664666 )
665667 return member_event.membership, member_event.event_id
666668 except AuthError:
667 visibility = yield self.state.get_current_state(
668 room_id, EventTypes.RoomHistoryVisibility, ""
669 visibility = yield defer.ensureDeferred(
670 self.state.get_current_state(
671 room_id, EventTypes.RoomHistoryVisibility, ""
672 )
669673 )
670674 if (
671675 visibility
1616 """Contains exceptions and error codes."""
1717
1818 import logging
19 import typing
1920 from http import HTTPStatus
20 from typing import Dict, List
21 from typing import Dict, List, Optional, Union
2122
2223 from canonicaljson import json
2324
2425 from twisted.web import http
26
27 if typing.TYPE_CHECKING:
28 from synapse.types import JsonDict
2529
2630 logger = logging.getLogger(__name__)
2731
7781 """An exception with integer code and message string attributes.
7882
7983 Attributes:
80 code (int): HTTP error code
81 msg (str): string describing the error
82 """
83
84 def __init__(self, code, msg):
84 code: HTTP error code
85 msg: string describing the error
86 """
87
88 def __init__(self, code: Union[int, HTTPStatus], msg: str):
8589 super(CodeMessageException, self).__init__("%d: %s" % (code, msg))
8690
8791 # Some calls to this method pass instances of http.HTTPStatus for `code`.
122126 message (as well as an HTTP status code).
123127
124128 Attributes:
125 errcode (str): Matrix error code e.g 'M_FORBIDDEN'
126 """
127
128 def __init__(self, code, msg, errcode=Codes.UNKNOWN):
129 errcode: Matrix error code e.g 'M_FORBIDDEN'
130 """
131
132 def __init__(self, code: int, msg: str, errcode: str = Codes.UNKNOWN):
129133 """Constructs a synapse error.
130134
131135 Args:
132 code (int): The integer error code (an HTTP response code)
133 msg (str): The human-readable error message.
134 errcode (str): The matrix error code e.g 'M_FORBIDDEN'
136 code: The integer error code (an HTTP response code)
137 msg: The human-readable error message.
138 errcode: The matrix error code e.g 'M_FORBIDDEN'
135139 """
136140 super(SynapseError, self).__init__(code, msg)
137141 self.errcode = errcode
144148 """An error from a general matrix endpoint, eg. from a proxied Matrix API call.
145149
146150 Attributes:
147 errcode (str): Matrix error code e.g 'M_FORBIDDEN'
148 """
149
150 def __init__(self, code, msg, errcode=Codes.UNKNOWN, additional_fields=None):
151 errcode: Matrix error code e.g 'M_FORBIDDEN'
152 """
153
154 def __init__(
155 self,
156 code: int,
157 msg: str,
158 errcode: str = Codes.UNKNOWN,
159 additional_fields: Optional[Dict] = None,
160 ):
151161 super(ProxiedRequestError, self).__init__(code, msg, errcode)
152162 if additional_fields is None:
153163 self._additional_fields = {} # type: Dict
163173 privacy policy.
164174 """
165175
166 def __init__(self, msg, consent_uri):
176 def __init__(self, msg: str, consent_uri: str):
167177 """Constructs a ConsentNotGivenError
168178
169179 Args:
170 msg (str): The human-readable error message
171 consent_url (str): The URL where the user can give their consent
180 msg: The human-readable error message
181 consent_url: The URL where the user can give their consent
172182 """
173183 super(ConsentNotGivenError, self).__init__(
174184 code=HTTPStatus.FORBIDDEN, msg=msg, errcode=Codes.CONSENT_NOT_GIVEN
184194 authenticated endpoint, but the account has been deactivated.
185195 """
186196
187 def __init__(self, msg):
197 def __init__(self, msg: str):
188198 """Constructs a UserDeactivatedError
189199
190200 Args:
191 msg (str): The human-readable error message
201 msg: The human-readable error message
192202 """
193203 super(UserDeactivatedError, self).__init__(
194204 code=HTTPStatus.FORBIDDEN, msg=msg, errcode=Codes.USER_DEACTIVATED
200210 is not on its federation whitelist.
201211
202212 Attributes:
203 destination (str): The destination which has been denied
204 """
205
206 def __init__(self, destination):
213 destination: The destination which has been denied
214 """
215
216 def __init__(self, destination: Optional[str]):
207217 """Raised by federation client or server to indicate that we are
208218 are deliberately not attempting to contact a given server because it is
209219 not on our federation whitelist.
210220
211221 Args:
212 destination (str): the domain in question
222 destination: the domain in question
213223 """
214224
215225 self.destination = destination
227237 (This indicates we should return a 401 with 'result' as the body)
228238
229239 Attributes:
230 result (dict): the server response to the request, which should be
240 result: the server response to the request, which should be
231241 passed back to the client
232242 """
233243
234 def __init__(self, result):
244 def __init__(self, result: "JsonDict"):
235245 super(InteractiveAuthIncompleteError, self).__init__(
236246 "Interactive auth not yet complete"
237247 )
244254 def __init__(self, *args, **kwargs):
245255 if "errcode" not in kwargs:
246256 kwargs["errcode"] = Codes.UNRECOGNIZED
247 message = None
248257 if len(args) == 0:
249258 message = "Unrecognized request"
250259 else:
255264 class NotFoundError(SynapseError):
256265 """An error indicating we can't find the thing you asked for"""
257266
258 def __init__(self, msg="Not found", errcode=Codes.NOT_FOUND):
267 def __init__(self, msg: str = "Not found", errcode: str = Codes.NOT_FOUND):
259268 super(NotFoundError, self).__init__(404, msg, errcode=errcode)
260269
261270
281290 M_UNKNOWN_TOKEN respectively.
282291 """
283292
284 def __init__(self, msg, errcode):
293 def __init__(self, msg: str, errcode: str):
285294 super().__init__(code=401, msg=msg, errcode=errcode)
286295
287296
288297 class MissingClientTokenError(InvalidClientCredentialsError):
289298 """Raised when we couldn't find the access token in a request"""
290299
291 def __init__(self, msg="Missing access token"):
300 def __init__(self, msg: str = "Missing access token"):
292301 super().__init__(msg=msg, errcode="M_MISSING_TOKEN")
293302
294303
295304 class InvalidClientTokenError(InvalidClientCredentialsError):
296305 """Raised when we didn't understand the access token in a request"""
297306
298 def __init__(self, msg="Unrecognised access token", soft_logout=False):
307 def __init__(
308 self, msg: str = "Unrecognised access token", soft_logout: bool = False
309 ):
299310 super().__init__(msg=msg, errcode="M_UNKNOWN_TOKEN")
300311 self._soft_logout = soft_logout
301312
313324
314325 def __init__(
315326 self,
316 code,
317 msg,
318 errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
319 admin_contact=None,
320 limit_type=None,
327 code: int,
328 msg: str,
329 errcode: str = Codes.RESOURCE_LIMIT_EXCEEDED,
330 admin_contact: Optional[str] = None,
331 limit_type: Optional[str] = None,
321332 ):
322333 self.admin_contact = admin_contact
323334 self.limit_type = limit_type
365376 class InvalidCaptchaError(SynapseError):
366377 def __init__(
367378 self,
368 code=400,
369 msg="Invalid captcha.",
370 error_url=None,
371 errcode=Codes.CAPTCHA_INVALID,
379 code: int = 400,
380 msg: str = "Invalid captcha.",
381 error_url: Optional[str] = None,
382 errcode: str = Codes.CAPTCHA_INVALID,
372383 ):
373384 super(InvalidCaptchaError, self).__init__(code, msg, errcode)
374385 self.error_url = error_url
383394
384395 def __init__(
385396 self,
386 code=429,
387 msg="Too Many Requests",
388 retry_after_ms=None,
389 errcode=Codes.LIMIT_EXCEEDED,
397 code: int = 429,
398 msg: str = "Too Many Requests",
399 retry_after_ms: Optional[int] = None,
400 errcode: str = Codes.LIMIT_EXCEEDED,
390401 ):
391402 super(LimitExceededError, self).__init__(code, msg, errcode)
392403 self.retry_after_ms = retry_after_ms
399410 """A client has tried to upload to a non-current version of the room_keys store
400411 """
401412
402 def __init__(self, current_version):
413 def __init__(self, current_version: str):
403414 """
404415 Args:
405 current_version (str): the current version of the store they should have used
416 current_version: the current version of the store they should have used
406417 """
407418 super(RoomKeysVersionError, self).__init__(
408419 403, "Wrong room_keys version", Codes.WRONG_ROOM_KEYS_VERSION
414425 """The client's request to create a room used a room version that the server does
415426 not support."""
416427
417 def __init__(self, msg="Homeserver does not support this room version"):
428 def __init__(self, msg: str = "Homeserver does not support this room version"):
418429 super(UnsupportedRoomVersionError, self).__init__(
419430 code=400, msg=msg, errcode=Codes.UNSUPPORTED_ROOM_VERSION,
420431 )
436447 failing.
437448 """
438449
439 def __init__(self, room_version):
450 def __init__(self, room_version: str):
440451 super(IncompatibleRoomVersionError, self).__init__(
441452 code=400,
442453 msg="Your homeserver does not support the features required to "
456467
457468 def __init__(
458469 self,
459 msg="This password doesn't comply with the server's policy",
460 errcode=Codes.WEAK_PASSWORD,
470 msg: str = "This password doesn't comply with the server's policy",
471 errcode: str = Codes.WEAK_PASSWORD,
461472 ):
462473 super(PasswordRefusedError, self).__init__(
463474 code=400, msg=msg, errcode=errcode,
482493 self.can_retry = can_retry
483494
484495
485 def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
496 def cs_error(msg: str, code: str = Codes.UNKNOWN, **kwargs):
486497 """ Utility method for constructing an error response for client-server
487498 interactions.
488499
489500 Args:
490 msg (str): The error message.
491 code (str): The error code.
492 kwargs : Additional keys to add to the response.
501 msg: The error message.
502 code: The error code.
503 kwargs: Additional keys to add to the response.
493504 Returns:
494505 A dict representing the error response JSON.
495506 """
511522 is wrong (e.g., it referred to an invalid event)
512523 """
513524
514 def __init__(self, level, code, reason, affected, source=None):
525 def __init__(
526 self,
527 level: str,
528 code: int,
529 reason: str,
530 affected: str,
531 source: Optional[str] = None,
532 ):
515533 if level not in ["FATAL", "ERROR", "WARN"]:
516534 raise ValueError("Level is not valid: %s" % (level,))
517535 self.level = level
538556 Represents an HTTP-level failure of an outbound request
539557
540558 Attributes:
541 response (bytes): body of response
542 """
543
544 def __init__(self, code, msg, response):
559 response: body of response
560 """
561
562 def __init__(self, code: int, msg: str, response: bytes):
545563 """
546564
547565 Args:
548 code (int): HTTP status code
549 msg (str): reason phrase from HTTP response status line
550 response (bytes): body of response
566 code: HTTP status code
567 msg: reason phrase from HTTP response status line
568 response: body of response
551569 """
552570 super(HttpResponseException, self).__init__(code, msg)
553571 self.response = response
572590 # try to parse the body as json, to get better errcode/msg, but
573591 # default to M_UNKNOWN with the HTTP status as the error text
574592 try:
575 j = json.loads(self.response)
593 j = json.loads(self.response.decode("utf-8"))
576594 except ValueError:
577595 j = {}
578596
2020
2121 from typing_extensions import ContextManager
2222
23 from twisted.internet import address, defer, reactor
23 from twisted.internet import address, reactor
2424
2525 import synapse
2626 import synapse.events
8686 ReceiptsStream,
8787 TagAccountDataStream,
8888 ToDeviceStream,
89 TypingStream,
9089 )
9190 from synapse.rest.admin import register_servlets_for_media_repo
9291 from synapse.rest.client.v1 import events
110109 RoomSendEventRestServlet,
111110 RoomStateEventRestServlet,
112111 RoomStateRestServlet,
112 RoomTypingRestServlet,
113113 )
114114 from synapse.rest.client.v1.voip import VoipRestServlet
115115 from synapse.rest.client.v2_alpha import groups, sync, user_directory
373373
374374 return _user_syncing()
375375
376 @defer.inlineCallbacks
377 def notify_from_replication(self, states, stream_id):
378 parties = yield get_interested_parties(self.store, states)
376 async def notify_from_replication(self, states, stream_id):
377 parties = await get_interested_parties(self.store, states)
379378 room_ids_to_states, users_to_states = parties
380379
381380 self.notifier.on_new_event(
385384 users=users_to_states.keys(),
386385 )
387386
388 @defer.inlineCallbacks
389 def process_replication_rows(self, token, rows):
387 async def process_replication_rows(self, token, rows):
390388 states = [
391389 UserPresenceState(
392390 row.user_id,
404402 self.user_to_current_state[state.user_id] = state
405403
406404 stream_id = token
407 yield self.notify_from_replication(states, stream_id)
405 await self.notify_from_replication(states, stream_id)
408406
409407 def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
410408 return [
448446 # Proxy request to master
449447 user_id = user.to_string()
450448 await self._bump_active_client(user_id=user_id)
451
452
453 class GenericWorkerTyping(object):
454 def __init__(self, hs):
455 self._latest_room_serial = 0
456 self._reset()
457
458 def _reset(self):
459 """
460 Reset the typing handler's data caches.
461 """
462 # map room IDs to serial numbers
463 self._room_serials = {}
464 # map room IDs to sets of users currently typing
465 self._room_typing = {}
466
467 def process_replication_rows(self, token, rows):
468 if self._latest_room_serial > token:
469 # The master has gone backwards. To prevent inconsistent data, just
470 # clear everything.
471 self._reset()
472
473 # Set the latest serial token to whatever the server gave us.
474 self._latest_room_serial = token
475
476 for row in rows:
477 self._room_serials[row.room_id] = token
478 self._room_typing[row.room_id] = row.user_ids
479
480 def get_current_token(self) -> int:
481 return self._latest_room_serial
482449
483450
484451 class GenericWorkerSlavedStore(
510477 SearchWorkerStore,
511478 BaseSlavedStore,
512479 ):
513 def __init__(self, database, db_conn, hs):
514 super(GenericWorkerSlavedStore, self).__init__(database, db_conn, hs)
515
516 # We pull out the current federation stream position now so that we
517 # always have a known value for the federation position in memory so
518 # that we don't have to bounce via a deferred once when we start the
519 # replication streams.
520 self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
521
522 def _get_federation_out_pos(self, db_conn):
523 sql = "SELECT stream_id FROM federation_stream_position WHERE type = ?"
524 sql = self.database_engine.convert_param_style(sql)
525
526 txn = db_conn.cursor()
527 txn.execute(sql, ("federation",))
528 rows = txn.fetchall()
529 txn.close()
530
531 return rows[0][0] if rows else -1
480 pass
532481
533482
534483 class GenericWorkerServer(HomeServer):
575524 KeyUploadServlet(self).register(resource)
576525 AccountDataServlet(self).register(resource)
577526 RoomAccountDataServlet(self).register(resource)
527 RoomTypingRestServlet(self).register(resource)
578528
579529 sync.register_servlets(self, resource)
580530 events.register_servlets(self, resource)
686636 def build_presence_handler(self):
687637 return GenericWorkerPresence(self)
688638
689 def build_typing_handler(self):
690 return GenericWorkerTyping(self)
691
692639
693640 class GenericWorkerReplicationHandler(ReplicationDataHandler):
694641 def __init__(self, hs):
695642 super(GenericWorkerReplicationHandler, self).__init__(hs)
696643
697644 self.store = hs.get_datastore()
698 self.typing_handler = hs.get_typing_handler()
699645 self.presence_handler = hs.get_presence_handler() # type: GenericWorkerPresence
700646 self.notifier = hs.get_notifier()
701647
731677 )
732678 await self.pusher_pool.on_new_receipts(
733679 token, token, {row.room_id for row in rows}
734 )
735 elif stream_name == TypingStream.NAME:
736 self.typing_handler.process_replication_rows(token, rows)
737 self.notifier.on_new_event(
738 "typing_key", token, rooms=[row.room_id for row in rows]
739680 )
740681 elif stream_name == ToDeviceStream.NAME:
741682 entities = [row.entity for row in rows if row.entity.startswith("@")]
811752 self.federation_sender = hs.get_federation_sender()
812753 self._hs = hs
813754
814 # if the worker is restarted, we want to pick up where we left off in
815 # the replication stream, so load the position from the database.
816 #
817 # XXX is this actually worthwhile? Whenever the master is restarted, we'll
818 # drop some rows anyway (which is mostly fine because we're only dropping
819 # typing and presence notifications). If the replication stream is
820 # unreliable, why do we do all this hoop-jumping to store the position in the
821 # database? See also https://github.com/matrix-org/synapse/issues/7535.
822 #
823 self.federation_position = self.store.federation_out_pos_startup
755 # Stores the latest position in the federation stream we've gotten up
756 # to. This is always set before we use it.
757 self.federation_position = None
824758
825759 self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
826 self._last_ack = self.federation_position
827760
828761 def on_start(self):
829762 # There may be some events that are persisted but haven't been sent,
931864 # We ACK this token over replication so that the master can drop
932865 # its in memory queues
933866 self._hs.get_tcp_replication().send_federation_ack(current_position)
934 self._last_ack = current_position
935867 except Exception:
936868 logger.exception("Error updating federation stream position")
937869
959891 )
960892
961893 if config.worker_app == "synapse.app.appservice":
962 if config.notify_appservices:
894 if config.appservice.notify_appservices:
963895 sys.stderr.write(
964896 "\nThe appservices must be disabled in the main synapse process"
965897 "\nbefore they can be run in a separate worker."
969901 sys.exit(1)
970902
971903 # Force the appservice to start since they will be disabled in the main config
972 config.notify_appservices = True
904 config.appservice.notify_appservices = True
973905 else:
974906 # For other worker types we force this to off.
975 config.notify_appservices = False
907 config.appservice.notify_appservices = False
976908
977909 if config.worker_app == "synapse.app.pusher":
978 if config.start_pushers:
910 if config.server.start_pushers:
979911 sys.stderr.write(
980912 "\nThe pushers must be disabled in the main synapse process"
981913 "\nbefore they can be run in a separate worker."
985917 sys.exit(1)
986918
987919 # Force the pushers to start since they will be disabled in the main config
988 config.start_pushers = True
920 config.server.start_pushers = True
989921 else:
990922 # For other worker types we force this to off.
991 config.start_pushers = False
923 config.server.start_pushers = False
992924
993925 if config.worker_app == "synapse.app.user_dir":
994 if config.update_user_directory:
926 if config.server.update_user_directory:
995927 sys.stderr.write(
996928 "\nThe update_user_directory must be disabled in the main synapse process"
997929 "\nbefore they can be run in a separate worker."
1001933 sys.exit(1)
1002934
1003935 # Force the pushers to start since they will be disabled in the main config
1004 config.update_user_directory = True
936 config.server.update_user_directory = True
1005937 else:
1006938 # For other worker types we force this to off.
1007 config.update_user_directory = False
939 config.server.update_user_directory = False
1008940
1009941 if config.worker_app == "synapse.app.federation_sender":
1010 if config.send_federation:
942 if config.worker.send_federation:
1011943 sys.stderr.write(
1012944 "\nThe send_federation must be disabled in the main synapse process"
1013945 "\nbefore they can be run in a separate worker."
1017949 sys.exit(1)
1018950
1019951 # Force the pushers to start since they will be disabled in the main config
1020 config.send_federation = True
952 config.worker.send_federation = True
1021953 else:
1022954 # For other worker types we force this to off.
1023 config.send_federation = False
955 config.worker.send_federation = False
1024956
1025957 synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
1026958
482482 _stats_process = []
483483
484484
485 @defer.inlineCallbacks
486 def phone_stats_home(hs, stats, stats_process=_stats_process):
485 async def phone_stats_home(hs, stats, stats_process=_stats_process):
487486 logger.info("Gathering stats for reporting")
488487 now = int(hs.get_clock().time())
489488 uptime = int(now - hs.start_time)
521520 stats["python_version"] = "{}.{}.{}".format(
522521 version.major, version.minor, version.micro
523522 )
524 stats["total_users"] = yield hs.get_datastore().count_all_users()
525
526 total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
523 stats["total_users"] = await hs.get_datastore().count_all_users()
524
525 total_nonbridged_users = await hs.get_datastore().count_nonbridged_users()
527526 stats["total_nonbridged_users"] = total_nonbridged_users
528527
529 daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
528 daily_user_type_results = await hs.get_datastore().count_daily_user_type()
530529 for name, count in daily_user_type_results.items():
531530 stats["daily_user_type_" + name] = count
532531
533 room_count = yield hs.get_datastore().get_room_count()
532 room_count = await hs.get_datastore().get_room_count()
534533 stats["total_room_count"] = room_count
535534
536 stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
537 stats["monthly_active_users"] = yield hs.get_datastore().count_monthly_users()
538 stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
539 stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
540
541 r30_results = yield hs.get_datastore().count_r30_users()
535 stats["daily_active_users"] = await hs.get_datastore().count_daily_users()
536 stats["monthly_active_users"] = await hs.get_datastore().count_monthly_users()
537 stats["daily_active_rooms"] = await hs.get_datastore().count_daily_active_rooms()
538 stats["daily_messages"] = await hs.get_datastore().count_daily_messages()
539
540 r30_results = await hs.get_datastore().count_r30_users()
542541 for name, count in r30_results.items():
543542 stats["r30_users_" + name] = count
544543
545 daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
544 daily_sent_messages = await hs.get_datastore().count_daily_sent_messages()
546545 stats["daily_sent_messages"] = daily_sent_messages
547546 stats["cache_factor"] = hs.config.caches.global_factor
548547 stats["event_cache_size"] = hs.config.caches.event_cache_size
557556
558557 logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
559558 try:
560 yield hs.get_proxied_http_client().put_json(
559 await hs.get_proxied_http_client().put_json(
561560 hs.config.report_stats_endpoint, stats
562561 )
563562 except Exception as e:
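For context, the pattern in this hunk recurs throughout the diff: Twisted `@defer.inlineCallbacks` generators that `yield` Deferreds are converted into native `async def` coroutines that `await` them. A minimal sketch, assuming `store` exposes the same `count_all_users` call used above:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def old_style(store):
        # Old style: a generator decorated with inlineCallbacks, yielding Deferreds.
        count = yield store.count_all_users()
        return count

    async def new_style(store):
        # New style: a native coroutine; Twisted allows awaiting Deferreds directly.
        count = await store.count_all_users()
        return count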
1818
1919 from twisted.internet import defer
2020
21 from synapse.api.constants import ThirdPartyEntityKind
21 from synapse.api.constants import EventTypes, ThirdPartyEntityKind
2222 from synapse.api.errors import CodeMessageException
2323 from synapse.events.utils import serialize_event
2424 from synapse.http.client import SimpleHttpClient
206206 if service.url is None:
207207 return True
208208
209 events = self._serialize(events)
209 events = self._serialize(service, events)
210210
211211 if txn_id is None:
212212 logger.warning(
232232 failed_transactions_counter.labels(service.id).inc()
233233 return False
234234
235 def _serialize(self, events):
235 def _serialize(self, service, events):
236236 time_now = self.clock.time_msec()
237 return [serialize_event(e, time_now, as_client_event=True) for e in events]
237 return [
238 serialize_event(
239 e,
240 time_now,
241 as_client_event=True,
242 is_invite=(
243 e.type == EventTypes.Member
244 and e.membership == "invite"
245 and service.is_interested_in_user(e.state_key)
246 ),
247 )
248 for e in events
249 ]
1818 import errno
1919 import os
2020 from collections import OrderedDict
21 from hashlib import sha256
2122 from textwrap import dedent
22 from typing import Any, MutableMapping, Optional
23
23 from typing import Any, List, MutableMapping, Optional
24
25 import attr
2426 import yaml
2527
2628
716718 return config_files
717719
718720
719 __all__ = ["Config", "RootConfig"]
721 @attr.s
722 class ShardedWorkerHandlingConfig:
723 """Algorithm for choosing which instance is responsible for handling some
724 sharded work.
725
726 For example, the federation senders use this to determine which instance
727 handles sending stuff to a given destination (which is used as the `key`
728 below).
729 """
730
731 instances = attr.ib(type=List[str])
732
733 def should_handle(self, instance_name: str, key: str) -> bool:
734 """Whether this instance is responsible for handling the given key.
735 """
736
737 # If multiple instances are not defined we always return true.
738 if not self.instances or len(self.instances) == 1:
739 return True
740
741 # We shard by taking the hash, modulo it by the number of instances and
742 # then checking whether this instance matches the instance at that
743 # index.
744 #
745 # (Technically this introduces some bias and is not entirely uniform,
746 # but since the hash is so large the bias is ridiculously small).
747 dest_hash = sha256(key.encode("utf8")).digest()
748 dest_int = int.from_bytes(dest_hash, byteorder="little")
749 remainder = dest_int % (len(self.instances))
750 return self.instances[remainder] == instance_name
751
752
753 __all__ = ["Config", "RootConfig", "ShardedWorkerHandlingConfig"]
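To illustrate the sharding scheme above, here is a standalone sketch that mirrors `should_handle` (it re-implements the hash rather than importing Synapse): hashing the key and taking the remainder modulo the number of instances pins each key, such as a federation destination, to exactly one instance.

    from hashlib import sha256

    def pick_instance(instances, key):
        # Mirrors ShardedWorkerHandlingConfig.should_handle: hash the key and
        # use the remainder to index into the configured instances.
        dest_hash = sha256(key.encode("utf8")).digest()
        dest_int = int.from_bytes(dest_hash, byteorder="little")
        return instances[dest_int % len(instances)]

    senders = ["sender1", "sender2"]
    for destination in ("matrix.org", "example.com", "chat.example.org"):
        print(destination, "->", pick_instance(senders, destination))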
136136
137137 def read_config_files(config_files: List[str]): ...
138138 def find_config_files(search_paths: List[str]): ...
139
140 class ShardedWorkerHandlingConfig:
141 instances: List[str]
142 def __init__(self, instances: List[str]) -> None: ...
143 def should_handle(self, instance_name: str, key: str) -> bool: ...
5454 #database:
5555 # name: psycopg2
5656 # args:
57 # user: synapse
57 # user: synapse_user
5858 # password: secretpassword
5959 # database: synapse
6060 # host: localhost
2121 from enum import Enum
2222 from typing import Optional
2323
24 import attr
2425 import pkg_resources
2526
2627 from ._base import Config, ConfigError
3031 'email' block. However, the following required keys are missing:
3132 %s
3233 """
34
35 DEFAULT_SUBJECTS = {
36 "message_from_person_in_room": "[%(app)s] You have a message on %(app)s from %(person)s in the %(room)s room...",
37 "message_from_person": "[%(app)s] You have a message on %(app)s from %(person)s...",
38 "messages_from_person": "[%(app)s] You have messages on %(app)s from %(person)s...",
39 "messages_in_room": "[%(app)s] You have messages on %(app)s in the %(room)s room...",
40 "messages_in_room_and_others": "[%(app)s] You have messages on %(app)s in the %(room)s room and others...",
41 "messages_from_person_and_others": "[%(app)s] You have messages on %(app)s from %(person)s and others...",
42 "invite_from_person": "[%(app)s] %(person)s has invited you to chat on %(app)s...",
43 "invite_from_person_to_room": "[%(app)s] %(person)s has invited you to join the %(room)s room on %(app)s...",
44 "password_reset": "[%(server_name)s] Password reset",
45 "email_validation": "[%(server_name)s] Validate your email",
46 }
47
48
49 @attr.s
50 class EmailSubjectConfig:
51 message_from_person_in_room = attr.ib(type=str)
52 message_from_person = attr.ib(type=str)
53 messages_from_person = attr.ib(type=str)
54 messages_in_room = attr.ib(type=str)
55 messages_in_room_and_others = attr.ib(type=str)
56 messages_from_person_and_others = attr.ib(type=str)
57 invite_from_person = attr.ib(type=str)
58 invite_from_person_to_room = attr.ib(type=str)
59 password_reset = attr.ib(type=str)
60 email_validation = attr.ib(type=str)
3361
3462
3563 class EmailConfig(Config):
293321 if not os.path.isfile(p):
294322 raise ConfigError("Unable to find email template file %s" % (p,))
295323
324 subjects_config = email_config.get("subjects", {})
325 subjects = {}
326
327 for key, default in DEFAULT_SUBJECTS.items():
328 subjects[key] = subjects_config.get(key, default)
329
330 self.email_subjects = EmailSubjectConfig(**subjects)
331
296332 def generate_config_section(self, config_dir_path, server_name, **kwargs):
297 return """\
333 return (
334 """\
298335 # Configuration for sending emails from Synapse.
299336 #
300337 email:
322359 # notif_from defines the "From" address to use when sending emails.
323360 # It must be set if email sending is enabled.
324361 #
325 # The placeholder '%(app)s' will be replaced by the application name,
362 # The placeholder '%%(app)s' will be replaced by the application name,
326363 # which is normally 'app_name' (below), but may be overridden by the
327364 # Matrix client application.
328365 #
329 # Note that the placeholder must be written '%(app)s', including the
366 # Note that the placeholder must be written '%%(app)s', including the
330367 # trailing 's'.
331368 #
332 #notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
333
334 # app_name defines the default value for '%(app)s' in notif_from. It
335 # defaults to 'Matrix'.
369 #notif_from: "Your Friendly %%(app)s homeserver <noreply@example.com>"
370
371 # app_name defines the default value for '%%(app)s' in notif_from and email
372 # subjects. It defaults to 'Matrix'.
336373 #
337374 #app_name: my_branded_matrix_server
338375
400437 # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
401438 #
402439 #template_dir: "res/templates"
440
441 # Subjects to use when sending emails from Synapse.
442 #
443 # The placeholder '%%(app)s' will be replaced with the value of the 'app_name'
444 # setting above, or by a value dictated by the Matrix client application.
445 #
446 # If a subject isn't overridden in this configuration file, the value shown in
447 # its example below (which is the default) will be used.
448 #
449 #subjects:
450
451 # Subjects for notification emails.
452 #
453 # On top of the '%%(app)s' placeholder, these can use the following
454 # placeholders:
455 #
456 # * '%%(person)s', which will be replaced by the display name of the user(s)
457 # that sent the message(s), e.g. "Alice and Bob".
458 # * '%%(room)s', which will be replaced by the name of the room the
459 # message(s) have been sent to, e.g. "My super room".
460 #
461 # See the example provided for each setting to see which placeholders can be
462 # used and how to use them.
463 #
464 # Subject to use to notify about one message from one or more user(s) in a
465 # room which has a name.
466 #message_from_person_in_room: "%(message_from_person_in_room)s"
467 #
468 # Subject to use to notify about one message from one or more user(s) in a
469 # room which doesn't have a name.
470 #message_from_person: "%(message_from_person)s"
471 #
472 # Subject to use to notify about multiple messages from one or more users in
473 # a room which doesn't have a name.
474 #messages_from_person: "%(messages_from_person)s"
475 #
476 # Subject to use to notify about multiple messages in a room which has a
477 # name.
478 #messages_in_room: "%(messages_in_room)s"
479 #
480 # Subject to use to notify about multiple messages in multiple rooms.
481 #messages_in_room_and_others: "%(messages_in_room_and_others)s"
482 #
483 # Subject to use to notify about multiple messages from multiple persons in
484 # multiple rooms. This is similar to the setting above except it's used when
485 # the room in which the notification was triggered has no name.
486 #messages_from_person_and_others: "%(messages_from_person_and_others)s"
487 #
488 # Subject to use to notify about an invite to a room which has a name.
489 #invite_from_person_to_room: "%(invite_from_person_to_room)s"
490 #
491 # Subject to use to notify about an invite to a room which doesn't have a
492 # name.
493 #invite_from_person: "%(invite_from_person)s"
494
495 # Subject for emails related to account administration.
496 #
497 # On top of the '%%(app)s' placeholder, these can also use the
498 # '%%(server_name)s' placeholder, which will be replaced by the value of the
499 # 'server_name' setting in your Synapse configuration.
500 #
501 # Subject to use when sending a password reset email.
502 #password_reset: "%(password_reset)s"
503 #
504 # Subject to use when sending a verification email to assert an address's
505 # ownership.
506 #email_validation: "%(email_validation)s"
403507 """
508 % DEFAULT_SUBJECTS
509 )
404510
405511
406512 class ThreepidBehaviour(Enum):
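The doubled `%%` in the sample section above exists only because the generated text is itself passed through `%`-formatting against `DEFAULT_SUBJECTS`, so literal placeholders have to be escaped. As a rough sketch of how a configured subject presumably ends up being interpolated at notification time (values here are illustrative):

    # One of the default subject templates from DEFAULT_SUBJECTS above.
    subject_template = "[%(app)s] You have a message on %(app)s from %(person)s..."

    # The placeholders are filled in with ordinary %-formatting.
    print(subject_template % {"app": "Matrix", "person": "Alice"})
    # [Matrix] You have a message on Matrix from Alice...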
0 # -*- coding: utf-8 -*-
1 # Copyright 2020 The Matrix.org Foundation C.I.C.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Optional
16
17 from netaddr import IPSet
18
19 from ._base import Config, ConfigError
20
21
22 class FederationConfig(Config):
23 section = "federation"
24
25 def read_config(self, config, **kwargs):
26 # FIXME: federation_domain_whitelist needs sytests
27 self.federation_domain_whitelist = None # type: Optional[dict]
28 federation_domain_whitelist = config.get("federation_domain_whitelist", None)
29
30 if federation_domain_whitelist is not None:
31 # turn the whitelist into a hash for speed of lookup
32 self.federation_domain_whitelist = {}
33
34 for domain in federation_domain_whitelist:
35 self.federation_domain_whitelist[domain] = True
36
37 self.federation_ip_range_blacklist = config.get(
38 "federation_ip_range_blacklist", []
39 )
40
41 # Attempt to create an IPSet from the given ranges
42 try:
43 self.federation_ip_range_blacklist = IPSet(
44 self.federation_ip_range_blacklist
45 )
46
47 # Always blacklist 0.0.0.0, ::
48 self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
49 except Exception as e:
50 raise ConfigError(
51 "Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
52 )
53
54 def generate_config_section(self, config_dir_path, server_name, **kwargs):
55 return """\
56 # Restrict federation to the following whitelist of domains.
57 # N.B. we recommend also firewalling your federation listener to limit
58 # inbound federation traffic as early as possible, rather than relying
59 # purely on this application-layer restriction. If not specified, the
60 # default is to whitelist everything.
61 #
62 #federation_domain_whitelist:
63 # - lon.example.com
64 # - nyc.example.com
65 # - syd.example.com
66
67 # Prevent federation requests from being sent to the following
68 # blacklist IP address CIDR ranges. If this option is not specified, or
69 # specified with an empty list, no ip range blacklist will be enforced.
70 #
71 # As of Synapse v1.4.0 this option also affects any outbound requests to identity
72 # servers provided by user input.
73 #
74 # (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
75 # listed here, since they correspond to unroutable addresses.)
76 #
77 federation_ip_range_blacklist:
78 - '127.0.0.0/8'
79 - '10.0.0.0/8'
80 - '172.16.0.0/12'
81 - '192.168.0.0/16'
82 - '100.64.0.0/10'
83 - '169.254.0.0/16'
84 - '::1/128'
85 - 'fe80::/64'
86 - 'fc00::/7'
87 """
2222 from .consent_config import ConsentConfig
2323 from .database import DatabaseConfig
2424 from .emailconfig import EmailConfig
25 from .federation import FederationConfig
2526 from .groups import GroupsConfig
2627 from .jwt_config import JWTConfig
2728 from .key import KeyConfig
5657 config_classes = [
5758 ServerConfig,
5859 TlsConfig,
60 FederationConfig,
5961 CacheConfig,
6062 DatabaseConfig,
6163 LoggingConfig,
7577 JWTConfig,
7678 PasswordConfig,
7779 EmailConfig,
78 WorkerConfig,
7980 PasswordAuthProviderConfig,
8081 PushConfig,
8182 SpamCheckerConfig,
8889 RoomDirectoryConfig,
8990 ThirdPartyRulesConfig,
9091 TracerConfig,
92 WorkerConfig,
9193 RedisConfig,
94 FederationConfig,
9295 ]
3131 self.jwt_secret = jwt_config["secret"]
3232 self.jwt_algorithm = jwt_config["algorithm"]
3333
34 # The issuer and audiences are optional, if provided, it is asserted
35 # that the claims exist on the JWT.
36 self.jwt_issuer = jwt_config.get("issuer")
37 self.jwt_audiences = jwt_config.get("audiences")
38
3439 try:
3540 import jwt
3641
4146 self.jwt_enabled = False
4247 self.jwt_secret = None
4348 self.jwt_algorithm = None
49 self.jwt_issuer = None
50 self.jwt_audiences = None
4451
4552 def generate_config_section(self, **kwargs):
4653 return """\
5057 #
5158 # Each JSON Web Token needs to contain a "sub" (subject) claim, which is
5259 # used as the localpart of the mxid.
60 #
61 # Additionally, the expiration time ("exp"), not before time ("nbf"),
62 # and issued at ("iat") claims are validated if present.
5363 #
5464 # Note that this is a non-standard login type and client support is
5565 # expected to be non-existent.
7787 # Required if 'enabled' is true.
7888 #
7989 #algorithm: "provided-by-your-issuer"
90
91 # The issuer to validate the "iss" claim against.
92 #
93 # Optional, if provided the "iss" claim will be required and
94 # validated for all JSON web tokens.
95 #
96 #issuer: "provided-by-your-issuer"
97
98 # A list of audiences to validate the "aud" claim against.
99 #
100 # Optional, if provided the "aud" claim will be required and
101 # validated for all JSON web tokens.
102 #
103 # Note that if the "aud" claim is included in a JSON web token then
104 # validation will fail without configuring audiences.
105 #
106 #audiences:
107 # - "provided-by-your-issuer"
80108 """
213213 Set up the logging subsystem.
214214
215215 Args:
216 config (LoggingConfig | synapse.config.workers.WorkerConfig):
216 config (LoggingConfig | synapse.config.worker.WorkerConfig):
217217 configuration data
218218
219219 use_worker_options (bool): True to use the 'worker_log_config' option
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515
16 from ._base import Config
16 from ._base import Config, ShardedWorkerHandlingConfig
1717
1818
1919 class PushConfig(Config):
2222 def read_config(self, config, **kwargs):
2323 push_config = config.get("push", {})
2424 self.push_include_content = push_config.get("include_content", True)
25
26 pusher_instances = config.get("pusher_instances") or []
27 self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances)
2528
2629 # There was a 'redact_content' setting but it was mistakenly read from the
2730 # 'email' section. Check for the flag in the 'push' section, and log,
2020 section = "redis"
2121
2222 def read_config(self, config, **kwargs):
23 redis_config = config.get("redis", {})
23 redis_config = config.get("redis") or {}
2424 self.redis_enabled = redis_config.get("enabled", False)
2525
2626 if not self.redis_enabled:
3131 self.redis_host = redis_config.get("host", "localhost")
3232 self.redis_port = redis_config.get("port", 6379)
3333 self.redis_password = redis_config.get("password")
34
35 def generate_config_section(self, config_dir_path, server_name, **kwargs):
36 return """\
37 # Configuration for Redis when using workers. This *must* be enabled when
38 # using workers (unless using old style direct TCP configuration).
39 #
40 redis:
41 # Uncomment the below to enable Redis support.
42 #
43 #enabled: true
44
45 # Optional host and port to use to connect to redis. Defaults to
46 # localhost and 6379
47 #
48 #host: localhost
49 #port: 6379
50
51 # Optional password if configured on the Redis instance
52 #
53 #password: <secret_password>
54 """
4949 RoomCreationPreset.PRIVATE_CHAT,
5050 RoomCreationPreset.TRUSTED_PRIVATE_CHAT,
5151 ]
52 elif encryption_for_room_type == RoomDefaultEncryptionTypes.OFF:
52 elif (
53 encryption_for_room_type == RoomDefaultEncryptionTypes.OFF
54 or encryption_for_room_type is False
55 ):
56 # PyYAML translates "off" into False if it's unquoted, so we also need to
57 # check for encryption_for_room_type being False.
5358 self.encryption_enabled_by_default_for_room_presets = []
5459 else:
5560 raise ConfigError(
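The extra `is False` branch above is needed because PyYAML follows YAML 1.1 boolean rules, so an unquoted `off` in the config file never reaches Synapse as the string "off". For example:

    import yaml

    print(yaml.safe_load("encryption_enabled_by_default_for_room_type: off"))
    # {'encryption_enabled_by_default_for_room_type': False}

    print(yaml.safe_load("encryption_enabled_by_default_for_room_type: 'off'"))
    # {'encryption_enabled_by_default_for_room_type': 'off'}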
2222
2323 import attr
2424 import yaml
25 from netaddr import IPSet
2625
2726 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
2827 from synapse.http.endpoint import parse_and_validate_server_name
135134 self.use_frozen_dicts = config.get("use_frozen_dicts", False)
136135 self.public_baseurl = config.get("public_baseurl")
137136
138 # Whether to send federation traffic out in this process. This only
139 # applies to some federation traffic, and so shouldn't be used to
140 # "disable" federation
141 self.send_federation = config.get("send_federation", True)
142
143137 # Whether to enable user presence.
144138 self.use_presence = config.get("use_presence", True)
145139
212206 # errors when attempting to search for messages.
213207 self.enable_search = config.get("enable_search", True)
214208
215 self.filter_timeline_limit = config.get("filter_timeline_limit", -1)
209 self.filter_timeline_limit = config.get("filter_timeline_limit", 100)
216210
217211 # Whether we should block invites sent to users on this server
218212 # (other than those sent by local server admins)
261255 # Admin uri to direct users at should their instance become blocked
262256 # due to resource constraints
263257 self.admin_contact = config.get("admin_contact", None)
264
265 # FIXME: federation_domain_whitelist needs sytests
266 self.federation_domain_whitelist = None # type: Optional[dict]
267 federation_domain_whitelist = config.get("federation_domain_whitelist", None)
268
269 if federation_domain_whitelist is not None:
270 # turn the whitelist into a hash for speed of lookup
271 self.federation_domain_whitelist = {}
272
273 for domain in federation_domain_whitelist:
274 self.federation_domain_whitelist[domain] = True
275
276 self.federation_ip_range_blacklist = config.get(
277 "federation_ip_range_blacklist", []
278 )
279
280 # Attempt to create an IPSet from the given ranges
281 try:
282 self.federation_ip_range_blacklist = IPSet(
283 self.federation_ip_range_blacklist
284 )
285
286 # Always blacklist 0.0.0.0, ::
287 self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
288 except Exception as e:
289 raise ConfigError(
290 "Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
291 )
292258
293259 if self.public_baseurl is not None:
294260 if self.public_baseurl[-1] != "/":
726692 #gc_thresholds: [700, 10, 10]
727693
728694 # Set the limit on the returned events in the timeline in the get
729 # and sync operations. The default value is -1, means no upper limit.
695 # and sync operations. The default value is 100. -1 means no upper limit.
696 #
697 # Uncomment the following to increase the limit to 5000.
730698 #
731699 #filter_timeline_limit: 5000
732700
741709 # will receive errors when searching for messages. Defaults to enabled.
742710 #
743711 #enable_search: false
744
745 # Restrict federation to the following whitelist of domains.
746 # N.B. we recommend also firewalling your federation listener to limit
747 # inbound federation traffic as early as possible, rather than relying
748 # purely on this application-layer restriction. If not specified, the
749 # default is to whitelist everything.
750 #
751 #federation_domain_whitelist:
752 # - lon.example.com
753 # - nyc.example.com
754 # - syd.example.com
755
756 # Prevent federation requests from being sent to the following
757 # blacklist IP address CIDR ranges. If this option is not specified, or
758 # specified with an empty list, no ip range blacklist will be enforced.
759 #
760 # As of Synapse v1.4.0 this option also affects any outbound requests to identity
761 # servers provided by user input.
762 #
763 # (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
764 # listed here, since they correspond to unroutable addresses.)
765 #
766 federation_ip_range_blacklist:
767 - '127.0.0.0/8'
768 - '10.0.0.0/8'
769 - '172.16.0.0/12'
770 - '192.168.0.0/16'
771 - '100.64.0.0/10'
772 - '169.254.0.0/16'
773 - '::1/128'
774 - 'fe80::/64'
775 - 'fc00::/7'
776712
777713 # List of ports that Synapse should listen on, their purpose and their
778714 # configuration.
802738 # names: a list of names of HTTP resources. See below for a list of
803739 # valid resource names.
804740 #
805 # compress: set to true to enable HTTP comression for this resource.
741 # compress: set to true to enable HTTP compression for this resource.
806742 #
807743 # additional_resources: Only valid for an 'http' listener. A map of
808744 # additional endpoints which should be loaded via dynamic modules.
1414
1515 import attr
1616
17 from ._base import Config, ConfigError
17 from ._base import Config, ConfigError, ShardedWorkerHandlingConfig
1818 from .server import ListenerConfig, parse_listener_def
1919
2020
3333
3434 Attributes:
3535 events: The instance that writes to the event and backfill streams.
36 typing: The instance that writes to the typing stream.
3637 """
3738
3839 events = attr.ib(default="master", type=str)
40 typing = attr.ib(default="master", type=str)
3941
4042
4143 class WorkerConfig(Config):
8284 )
8385 )
8486
87 # Whether to send federation traffic out in this process. This only
88 # applies to some federation traffic, and so shouldn't be used to
89 # "disable" federation
90 self.send_federation = config.get("send_federation", True)
91
92 federation_sender_instances = config.get("federation_sender_instances") or []
93 self.federation_shard_config = ShardedWorkerHandlingConfig(
94 federation_sender_instances
95 )
96
8597 # A map from instance name to host/port of their HTTP replication endpoint.
8698 instance_map = config.get("instance_map") or {}
8799 self.instance_map = {
92104 writers = config.get("stream_writers") or {}
93105 self.writers = WriterLocations(**writers)
94106
95 # Check that the configured writer for events also appears in
107 # Check that the configured writer for events and typing also appears in
96108 # `instance_map`.
97 if (
98 self.writers.events != "master"
99 and self.writers.events not in self.instance_map
100 ):
101 raise ConfigError(
102 "Instance %r is configured to write events but does not appear in `instance_map` config."
103 % (self.writers.events,)
104 )
109 for stream in ("events", "typing"):
110 instance = getattr(self.writers, stream)
111 if instance != "master" and instance not in self.instance_map:
112 raise ConfigError(
113 "Instance %r is configured to write %s but does not appear in `instance_map` config."
114 % (instance, stream)
115 )
116
117 def generate_config_section(self, config_dir_path, server_name, **kwargs):
118 return """\
119 ## Workers ##
120
121 # Disables sending of outbound federation transactions on the main process.
122 # Uncomment if using a federation sender worker.
123 #
124 #send_federation: false
125
126 # It is possible to run multiple federation sender workers, in which case the
127 # work is balanced across them.
128 #
129 # This configuration must be shared between all federation sender workers, and if
130 # changed all federation sender workers must be stopped at the same time and then
131 # started, to ensure that all instances are running with the same config (otherwise
132 # events may be dropped).
133 #
134 #federation_sender_instances:
135 # - federation_sender1
136
137 # When using workers this should be a map from `worker_name` to the
138 # HTTP replication listener of the worker, if configured.
139 #
140 #instance_map:
141 # worker1:
142 # host: localhost
143 # port: 8034
144
145 # Experimental: When using workers you can define which workers should
146 # handle event persistence and typing notifications. Any worker
147 # specified here must also be in the `instance_map`.
148 #
149 #stream_writers:
150 # events: worker1
151 # typing: worker1
152 """
105153
106154 def read_arguments(self, args):
107155 # We support a bunch of command line arguments that override options in
6464
6565 room_id = event.room_id
6666
67 # I'm not really expecting to get auth events in the wrong room, but let's
68 # sanity-check it
67 # We need to ensure that the auth events are actually for the same room, to
68 # stop people from using powers they've been granted in other rooms for
69 # example.
6970 for auth_event in auth_events.values():
7071 if auth_event.room_id != room_id:
71 raise Exception(
72 raise AuthError(
73 403,
7274 "During auth for event %s in room %s, found event %s in the state "
7375 "which is in room %s"
74 % (event.event_id, room_id, auth_event.event_id, auth_event.room_id)
76 % (event.event_id, room_id, auth_event.event_id, auth_event.room_id),
7577 )
7678
7779 if do_sig_check:
105105 Deferred[FrozenEvent]
106106 """
107107
108 state_ids = yield self._state.get_current_state_ids(
109 self.room_id, prev_event_ids
108 state_ids = yield defer.ensureDeferred(
109 self._state.get_current_state_ids(self.room_id, prev_event_ids)
110110 )
111111 auth_ids = yield self._auth.compute_auth_events(self, state_ids)
112112
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 import collections
14 import collections.abc
1515 import re
1616 from typing import Any, Mapping, Union
1717
423423 Raises:
424424 TypeError if the input does not look like a valid power levels event content
425425 """
426 if not isinstance(old_power_levels, collections.Mapping):
426 if not isinstance(old_power_levels, collections.abc.Mapping):
427427 raise TypeError("Not a valid power-levels content: %r" % (old_power_levels,))
428428
429429 power_levels = {}
433433 power_levels[k] = v
434434 continue
435435
436 if isinstance(v, collections.Mapping):
436 if isinstance(v, collections.abc.Mapping):
437437 power_levels[k] = h = {}
438438 for k1, v1 in v.items():
439439 # we should only have one level of nesting
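The `collections` → `collections.abc` switch above is a compatibility fix rather than a behaviour change: the ABC aliases in the top-level `collections` module were deprecated in Python 3.3 and are removed in Python 3.10, while the check itself is unchanged:

    import collections.abc

    # Same semantics as the old isinstance(..., collections.Mapping) check.
    print(isinstance({"users": {"@alice:example.com": 100}}, collections.abc.Mapping))  # True
    print(isinstance(["not", "a", "mapping"], collections.abc.Mapping))                 # False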
373373 """
374374 deferreds = self._check_sigs_and_hashes(room_version, pdus)
375375
376 @defer.inlineCallbacks
377 def handle_check_result(pdu: EventBase, deferred: Deferred):
376 async def handle_check_result(pdu: EventBase, deferred: Deferred):
378377 try:
379 res = yield make_deferred_yieldable(deferred)
378 res = await make_deferred_yieldable(deferred)
380379 except SynapseError:
381380 res = None
382381
383382 if not res:
384383 # Check local db.
385 res = yield self.store.get_event(
384 res = await self.store.get_event(
386385 pdu.event_id, allow_rejected=True, allow_none=True
387386 )
388387
389388 if not res and pdu.origin != origin:
390389 try:
391 res = yield defer.ensureDeferred(
392 self.get_pdu(
393 destinations=[pdu.origin],
394 event_id=pdu.event_id,
395 room_version=room_version,
396 outlier=outlier,
397 timeout=10000,
398 )
390 res = await self.get_pdu(
391 destinations=[pdu.origin],
392 event_id=pdu.event_id,
393 room_version=room_version,
394 outlier=outlier,
395 timeout=10000,
399396 )
400397 except SynapseError:
401398 pass
994991
995992 raise RuntimeError("Failed to send to any server.")
996993
997 @defer.inlineCallbacks
998 def get_room_complexity(self, destination, room_id):
994 async def get_room_complexity(
995 self, destination: str, room_id: str
996 ) -> Optional[dict]:
999997 """
1000998 Fetch the complexity of a remote room from another server.
1001999
10021000 Args:
1003 destination (str): The remote server
1004 room_id (str): The room ID to ask about.
1005
1006 Returns:
1007 Deferred[dict] or Deferred[None]: Dict contains the complexity
1008 metric versions, while None means we could not fetch the complexity.
1001 destination: The remote server
1002 room_id: The room ID to ask about.
1003
1004 Returns:
1005 A dict containing the complexity metric versions, or None if we
1006 could not fetch the complexity.
10091007 """
10101008 try:
1011 complexity = yield self.transport_layer.get_room_complexity(
1009 complexity = await self.transport_layer.get_room_complexity(
10121010 destination=destination, room_id=room_id
10131011 )
1014 defer.returnValue(complexity)
1012 return complexity
10151013 except CodeMessageException as e:
10161014 # We didn't manage to get it -- probably a 404. We are okay if other
10171015 # servers don't give it to us.
10281026
10291027 # If we don't manage to find it, return None. It's not an error if a
10301028 # server doesn't give it to us.
1031 defer.returnValue(None)
1029 return None
1414 # See the License for the specific language governing permissions and
1515 # limitations under the License.
1616 import logging
17 from typing import Any, Callable, Dict, List, Match, Optional, Tuple, Union
17 from typing import (
18 TYPE_CHECKING,
19 Any,
20 Awaitable,
21 Callable,
22 Dict,
23 List,
24 Match,
25 Optional,
26 Tuple,
27 Union,
28 )
1829
1930 from canonicaljson import json
2031 from prometheus_client import Counter, Histogram
5566 from synapse.util.async_helpers import Linearizer, concurrently_execute
5667 from synapse.util.caches.response_cache import ResponseCache
5768
69 if TYPE_CHECKING:
70 from synapse.server import HomeServer
71
5872 # when processing incoming transactions, we try to handle multiple rooms in
5973 # parallel, up to this limit.
6074 TRANSACTION_CONCURRENCY_LIMIT = 10
94108 # We cache responses to state queries, as they take a while and often
95109 # come in waves.
96110 self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)
111 self._state_ids_resp_cache = ResponseCache(
112 hs, "state_ids_resp", timeout_ms=30000
113 )
97114
98115 async def on_backfill_request(
99116 self, origin: str, room_id: str, versions: List[str], limit: int
361378 if not in_room:
362379 raise AuthError(403, "Host not in room.")
363380
381 resp = await self._state_ids_resp_cache.wrap(
382 (room_id, event_id), self._on_state_ids_request_compute, room_id, event_id,
383 )
384
385 return 200, resp
386
387 async def _on_state_ids_request_compute(self, room_id, event_id):
364388 state_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
365389 auth_chain_ids = await self.store.get_auth_chain_ids(state_ids)
366
367 return 200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
390 return {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
368391
369392 async def _on_context_state_request_compute(
370393 self, room_id: str, event_id: str
525548 json_result = {} # type: Dict[str, Dict[str, dict]]
526549 for user_id, device_keys in results.items():
527550 for device_id, keys in device_keys.items():
528 for key_id, json_bytes in keys.items():
551 for key_id, json_str in keys.items():
529552 json_result.setdefault(user_id, {})[device_id] = {
530 key_id: json.loads(json_bytes)
553 key_id: json.loads(json_str)
531554 }
532555
533556 logger.info(
767790 query type for incoming federation traffic.
768791 """
769792
770 def __init__(self):
771 self.edu_handlers = {}
772 self.query_handlers = {}
773
774 def register_edu_handler(self, edu_type: str, handler: Callable[[str, dict], None]):
793 def __init__(self, hs: "HomeServer"):
794 self.config = hs.config
795 self.http_client = hs.get_simple_http_client()
796 self.clock = hs.get_clock()
797 self._instance_name = hs.get_instance_name()
798
799 # These are safe to load in monolith mode, but will explode if we try
800 # and use them. However we have guards before we use them to ensure that
801 # we don't route to ourselves, and in monolith mode that will always be
802 # the case.
803 self._get_query_client = ReplicationGetQueryRestServlet.make_client(hs)
804 self._send_edu = ReplicationFederationSendEduRestServlet.make_client(hs)
805
806 self.edu_handlers = (
807 {}
808 ) # type: Dict[str, Callable[[str, dict], Awaitable[None]]]
809 self.query_handlers = {} # type: Dict[str, Callable[[dict], Awaitable[None]]]
810
811 # Map from type to instance name that we should route EDU handling to.
812 self._edu_type_to_instance = {} # type: Dict[str, str]
813
814 def register_edu_handler(
815 self, edu_type: str, handler: Callable[[str, dict], Awaitable[None]]
816 ):
775817 """Sets the handler callable that will be used to handle an incoming
776818 federation EDU of the given type.
777819
808850
809851 self.query_handlers[query_type] = handler
810852
853 def register_instance_for_edu(self, edu_type: str, instance_name: str):
854 """Register that the EDU handler is on a different instance than master.
855 """
856 self._edu_type_to_instance[edu_type] = instance_name
857
811858 async def on_edu(self, edu_type: str, origin: str, content: dict):
859 if not self.config.use_presence and edu_type == "m.presence":
860 return
861
862 # Check if we have a handler on this instance
812863 handler = self.edu_handlers.get(edu_type)
813 if not handler:
814 logger.warning("No handler registered for EDU type %s", edu_type)
864 if handler:
865 with start_active_span_from_edu(content, "handle_edu"):
866 try:
867 await handler(origin, content)
868 except SynapseError as e:
869 logger.info("Failed to handle edu %r: %r", edu_type, e)
870 except Exception:
871 logger.exception("Failed to handle edu %r", edu_type)
815872 return
816873
817 with start_active_span_from_edu(content, "handle_edu"):
874 # Check if we can route it somewhere else that isn't us
875 route_to = self._edu_type_to_instance.get(edu_type, "master")
876 if route_to != self._instance_name:
818877 try:
819 await handler(origin, content)
878 await self._send_edu(
879 instance_name=route_to,
880 edu_type=edu_type,
881 origin=origin,
882 content=content,
883 )
820884 except SynapseError as e:
821885 logger.info("Failed to handle edu %r: %r", edu_type, e)
822886 except Exception:
823887 logger.exception("Failed to handle edu %r", edu_type)
824
825 def on_query(self, query_type: str, args: dict) -> defer.Deferred:
826 handler = self.query_handlers.get(query_type)
827 if not handler:
828 logger.warning("No handler registered for query type %s", query_type)
829 raise NotFoundError("No handler for Query type '%s'" % (query_type,))
830
831 return handler(args)
832
833
834 class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
835 """A FederationHandlerRegistry for worker processes.
836
837 When receiving EDU or queries it will check if an appropriate handler has
838 been registered on the worker, if there isn't one then it calls off to the
839 master process.
840 """
841
842 def __init__(self, hs):
843 self.config = hs.config
844 self.http_client = hs.get_simple_http_client()
845 self.clock = hs.get_clock()
846
847 self._get_query_client = ReplicationGetQueryRestServlet.make_client(hs)
848 self._send_edu = ReplicationFederationSendEduRestServlet.make_client(hs)
849
850 super(ReplicationFederationHandlerRegistry, self).__init__()
851
852 async def on_edu(self, edu_type: str, origin: str, content: dict):
853 """Overrides FederationHandlerRegistry
854 """
855 if not self.config.use_presence and edu_type == "m.presence":
856888 return
857889
858 handler = self.edu_handlers.get(edu_type)
859 if handler:
860 return await super(ReplicationFederationHandlerRegistry, self).on_edu(
861 edu_type, origin, content
862 )
863
864 return await self._send_edu(edu_type=edu_type, origin=origin, content=content)
890 # Oh well, let's just log and move on.
891 logger.warning("No handler registered for EDU type %s", edu_type)
865892
866893 async def on_query(self, query_type: str, args: dict):
867 """Overrides FederationHandlerRegistry
868 """
869894 handler = self.query_handlers.get(query_type)
870895 if handler:
871896 return await handler(args)
872897
873 return await self._get_query_client(query_type=query_type, args=args)
898 # Check if we can route it somewhere else that isn't us
899 if self._instance_name == "master":
900 return await self._get_query_client(query_type=query_type, args=args)
901
902 # Uh oh, no handler! Let's raise an exception so the request returns an
903 # error.
904 logger.warning("No handler registered for query type %s", query_type)
905 raise NotFoundError("No handler for Query type '%s'" % (query_type,))
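To summarise the routing performed by the reworked `on_edu`/`on_query` above (a simplified sketch, not the actual class): an EDU is handled locally when a handler is registered on this instance, otherwise it is forwarded to whichever instance `register_instance_for_edu` nominated (defaulting to "master"), and only if neither applies is it dropped with a warning.

    def route_edu(edu_type, local_handlers, edu_type_to_instance, instance_name):
        # Simplified decision logic mirroring FederationHandlerRegistry.on_edu.
        if edu_type in local_handlers:
            return "handle locally"
        route_to = edu_type_to_instance.get(edu_type, "master")
        if route_to != instance_name:
            return "forward to " + route_to
        return "drop: no handler registered"

    # e.g. on the master, with typing handling moved to a worker:
    print(route_edu("m.typing", {}, {"m.typing": "worker1"}, "master"))   # forward to worker1
    print(route_edu("m.receipt", {"m.receipt": object()}, {}, "master"))  # handle locally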
5454 self.notifier = hs.get_notifier()
5555 self.is_mine_id = hs.is_mine_id
5656
57 # We may have multiple federation sender instances, so we need to track
58 # their positions separately.
59 self._sender_instances = hs.config.worker.federation_shard_config.instances
60 self._sender_positions = {}
61
5762 # Pending presence map user_id -> UserPresenceState
5863 self.presence_map = {} # type: Dict[str, UserPresenceState]
5964
260265 def get_current_token(self):
261266 return self.pos - 1
262267
263 def federation_ack(self, token):
268 def federation_ack(self, instance_name, token):
269 if self._sender_instances:
270 # If we have configured multiple federation sender instances we need
271 # to track their positions separately, and only clear the queue up
272 # to the token all instances have acked.
273 self._sender_positions[instance_name] = token
274 token = min(self._sender_positions.values())
275
264276 self._clear_queue_before_pos(token)
265277
266278 async def get_replication_rows(
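The bookkeeping added to `federation_ack` above boils down to this: with several federation sender instances, the in-memory queue may only be trimmed up to the lowest token that the senders have acknowledged so far. A toy sketch of that invariant:

    positions = {}

    def ack(instance_name, token):
        # Record this sender's position; return how far the queue can be cleared.
        positions[instance_name] = token
        return min(positions.values())

    print(ack("sender1", 120))  # 120: only one sender has acked so far
    print(ack("sender2", 100))  # 100: sender2 lags behind, so clear only up to 100
    print(ack("sender2", 150))  # 120: still limited by sender1's last ack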
6868
6969 self._transaction_manager = TransactionManager(hs)
7070
71 self._instance_name = hs.get_instance_name()
72 self._federation_shard_config = hs.config.worker.federation_shard_config
73
7174 # map from destination to PerDestinationQueue
7275 self._per_destination_queues = {} # type: Dict[str, PerDestinationQueue]
7376
190193 )
191194 return
192195
193 destinations = set(destinations)
196 destinations = {
197 d
198 for d in destinations
199 if self._federation_shard_config.should_handle(
200 self._instance_name, d
201 )
202 }
194203
195204 if send_on_behalf_of is not None:
196205 # If we are sending the event on behalf of another server
320329 room_id = receipt.room_id
321330
322331 # Work out which remote servers should be poked and poke them.
323 domains = yield self.state.get_current_hosts_in_room(room_id)
324 domains = [d for d in domains if d != self.server_name]
332 domains = yield defer.ensureDeferred(
333 self.state.get_current_hosts_in_room(room_id)
334 )
335 domains = [
336 d
337 for d in domains
338 if d != self.server_name
339 and self._federation_shard_config.should_handle(self._instance_name, d)
340 ]
325341 if not domains:
326342 return
327343
426442 for destination in destinations:
427443 if destination == self.server_name:
428444 continue
445 if not self._federation_shard_config.should_handle(
446 self._instance_name, destination
447 ):
448 continue
429449 self._get_per_destination_queue(destination).send_presence(states)
430450
431451 @measure_func("txnqueue._process_presence")
434454 """Given a list of states populate self.pending_presence_by_dest and
435455 poke to send a new transaction to each destination
436456 """
437 hosts_and_states = yield get_interested_remotes(self.store, states, self.state)
457 hosts_and_states = yield defer.ensureDeferred(
458 get_interested_remotes(self.store, states, self.state)
459 )
438460
439461 for destinations, states in hosts_and_states:
440462 for destination in destinations:
441463 if destination == self.server_name:
442464 continue
465
466 if not self._federation_shard_config.should_handle(
467 self._instance_name, destination
468 ):
469 continue
470
443471 self._get_per_destination_queue(destination).send_presence(states)
444472
445473 def build_and_send_edu(
461489 logger.info("Not sending EDU to ourselves")
462490 return
463491
492 if not self._federation_shard_config.should_handle(
493 self._instance_name, destination
494 ):
495 return
496
464497 edu = Edu(
465498 origin=self.server_name,
466499 destination=destination,
477510 edu: edu to send
478511 key: clobbering key for this edu
479512 """
513 if not self._federation_shard_config.should_handle(
514 self._instance_name, edu.destination
515 ):
516 return
517
480518 queue = self._get_per_destination_queue(edu.destination)
481519 if key:
482520 queue.send_keyed_edu(edu, key)
488526 logger.warning("Not sending device update to ourselves")
489527 return
490528
529 if not self._federation_shard_config.should_handle(
530 self._instance_name, destination
531 ):
532 return
533
491534 self._get_per_destination_queue(destination).attempt_new_transaction()
492535
493536 def wake_destination(self, destination: str):
499542
500543 if destination == self.server_name:
501544 logger.warning("Not waking up ourselves")
545 return
546
547 if not self._federation_shard_config.should_handle(
548 self._instance_name, destination
549 ):
502550 return
503551
504552 self._get_per_destination_queue(destination).attempt_new_transaction()
7373 self._clock = hs.get_clock()
7474 self._store = hs.get_datastore()
7575 self._transaction_manager = transaction_manager
76 self._instance_name = hs.get_instance_name()
77 self._federation_shard_config = hs.config.worker.federation_shard_config
78
79 self._should_send_on_this_instance = True
80 if not self._federation_shard_config.should_handle(
81 self._instance_name, destination
82 ):
83 # We don't raise an exception here to avoid taking out any other
84 # processing. We have a guard in `attempt_new_transaction` that
85 # ensures we don't start sending stuff.
86 logger.error(
87 "Create a per destination queue for %s on wrong worker", destination,
88 )
89 self._should_send_on_this_instance = False
7690
7791 self._destination = destination
7892 self.transmission_loop_running = False
177191 # we need application-layer timeouts of some flavour of these
178192 # requests
179193 logger.debug("TX [%s] Transaction already in progress", self._destination)
194 return
195
196 if not self._should_send_on_this_instance:
197 # We don't raise an exception here to avoid taking out any other
198 # processing.
199 logger.error(
200 "Trying to start a transaction to %s on wrong worker", self._destination
201 )
180202 return
181203
182204 logger.debug("TX [%s] Starting transaction loop", self._destination)
6060 # all the edus in that transaction. This needs to be done since there is
6161 # no active span here, so if the edus were not received by the remote the
6262 # span would have no causality and it would be forgotten.
63 # The span_contexts is a generator so that it won't be evaluated if
64 # opentracing is disabled. (Yay speed!)
6563
6664 span_contexts = []
6765 keep_destination = whitelisted_homeserver(destination)
1818 import logging
1919 import re
2020 from typing import Optional, Tuple, Type
21
22 from twisted.internet.defer import maybeDeferred
2321
2422 import synapse
2523 from synapse.api.errors import Codes, FederationDeniedError, SynapseError
339337 if origin:
340338 with ratelimiter.ratelimit(origin) as d:
341339 await d
340 if request._disconnected:
341 logger.warning(
342 "client disconnected before we started processing "
343 "request"
344 )
345 return -1, None
342346 response = await func(
343347 origin, content, request.args, *args, **kwargs
344348 )
794798 # zero is a special value which corresponds to no limit.
795799 limit = None
796800
797 data = await maybeDeferred(
798 self.handler.get_local_public_room_list,
799 limit,
800 since_token,
801 network_tuple=network_tuple,
802 from_federation=True,
801 data = await self.handler.get_local_public_room_list(
802 limit, since_token, network_tuple=network_tuple, from_federation=True
803803 )
804804 return 200, data
805805
1414
1515 import logging
1616
17 from twisted.internet import defer
18
17 import synapse.state
18 import synapse.storage
1919 import synapse.types
2020 from synapse.api.constants import EventTypes, Membership
2121 from synapse.api.ratelimiting import Ratelimiter
2727 class BaseHandler(object):
2828 """
2929 Common base class for the event handlers.
30
31 Attributes:
32 store (synapse.storage.DataStore):
33 state_handler (synapse.state.StateHandler):
3430 """
3531
3632 def __init__(self, hs):
3834 Args:
3935 hs (synapse.server.HomeServer):
4036 """
41 self.store = hs.get_datastore()
37 self.store = hs.get_datastore() # type: synapse.storage.DataStore
4238 self.auth = hs.get_auth()
4339 self.notifier = hs.get_notifier()
44 self.state_handler = hs.get_state_handler()
40 self.state_handler = hs.get_state_handler() # type: synapse.state.StateHandler
4541 self.distributor = hs.get_distributor()
4642 self.clock = hs.get_clock()
4743 self.hs = hs
6763
6864 self.event_builder_factory = hs.get_event_builder_factory()
6965
70 @defer.inlineCallbacks
71 def ratelimit(self, requester, update=True, is_admin_redaction=False):
66 async def ratelimit(self, requester, update=True, is_admin_redaction=False):
7267 """Ratelimits requests.
7368
7469 Args:
10095 burst_count = self._rc_message.burst_count
10196
10297 # Check if there is a per user override in the DB.
103 override = yield self.store.get_ratelimit_for_user(user_id)
98 override = await self.store.get_ratelimit_for_user(user_id)
10499 if override:
105100 # If overridden with a null Hz then ratelimiting has been entirely
106101 # disabled for the user
7171 writer (ExfiltrationWriter)
7272
7373 Returns:
74 defer.Deferred: Resolves when all data for a user has been written.
74 Resolves when all data for a user has been written.
7575 The returned value is that returned by `writer.finished()`.
7676 """
7777 # Get all rooms the user is in or has been in
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15 import inspect
1516 import logging
1617 import time
1718 import unicodedata
862863 # see if any of our auth providers want to know about this
863864 for provider in self.password_providers:
864865 if hasattr(provider, "on_logged_out"):
865 await provider.on_logged_out(
866 # This might return an awaitable; if it does, block the log out
867 # until it completes.
868 result = provider.on_logged_out(
866869 user_id=str(user_info["user"]),
867870 device_id=user_info["device_id"],
868871 access_token=access_token,
869872 )
873 if inspect.isawaitable(result):
874 await result
870875
871876 # delete pushers associated with this access token
872877 if user_info["token_id"] is not None:
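The `inspect.isawaitable` check above preserves backwards compatibility: a password provider's `on_logged_out` may be a plain function or a coroutine, and only in the latter case is there anything to await. A self-contained sketch of the same pattern (hook names here are illustrative):

    import asyncio
    import inspect

    def sync_hook(user_id, device_id, access_token):
        pass  # a provider implemented as a plain function

    async def async_hook(user_id, device_id, access_token):
        pass  # a provider implemented as a coroutine

    async def call_logout_hook(hook):
        result = hook("@alice:example.com", "ADEVICE", "a_token")
        if inspect.isawaitable(result):
            # Only block on the result if the hook actually returned an awaitable.
            await result

    asyncio.run(call_logout_hook(sync_hook))
    asyncio.run(call_logout_hook(async_hook))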
103103 return user, displayname
104104
105105 def _parse_cas_response(
106 self, cas_response_body: str
106 self, cas_response_body: bytes
107107 ) -> Tuple[str, Dict[str, Optional[str]]]:
108108 """
109109 Retrieve the user and other parameters from the CAS response.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515 import logging
16 from typing import Optional
1617
1718 from synapse.api.errors import SynapseError
1819 from synapse.metrics.background_process_metrics import run_as_background_process
2829
2930 def __init__(self, hs):
3031 super(DeactivateAccountHandler, self).__init__(hs)
32 self.hs = hs
3133 self._auth_handler = hs.get_auth_handler()
3234 self._device_handler = hs.get_device_handler()
3335 self._room_member_handler = hs.get_room_member_handler()
3941
4042 # Start the user parter loop so it can resume parting users from rooms where
4143 # it left off (if it has work left to do).
42 hs.get_reactor().callWhenRunning(self._start_user_parting)
44 if hs.config.worker_app is None:
45 hs.get_reactor().callWhenRunning(self._start_user_parting)
4346
4447 self._account_validity_enabled = hs.config.account_validity.enabled
4548
46 async def deactivate_account(self, user_id, erase_data, id_server=None):
49 async def deactivate_account(
50 self, user_id: str, erase_data: bool, id_server: Optional[str] = None
51 ) -> bool:
4752 """Deactivate a user's account
4853
4954 Args:
50 user_id (str): ID of user to be deactivated
51 erase_data (bool): whether to GDPR-erase the user's data
52 id_server (str|None): Use the given identity server when unbinding
55 user_id: ID of user to be deactivated
56 erase_data: whether to GDPR-erase the user's data
57 id_server: Use the given identity server when unbinding
5358 any threepids. If None then will attempt to unbind using the
5459 identity server specified when binding (if known).
5560
5661 Returns:
57 Deferred[bool]: True if identity server supports removing
58 threepids, otherwise False.
62 True if identity server supports removing threepids, otherwise False.
5963 """
6064 # FIXME: Theoretically there is a race here wherein user resets
6165 # password using threepid.
132136
133137 return identity_server_supports_unbinding
134138
135 async def _reject_pending_invites_for_user(self, user_id):
139 async def _reject_pending_invites_for_user(self, user_id: str):
136140 """Reject pending invites addressed to a given user ID.
137141
138142 Args:
139 user_id (str): The user ID to reject pending invites for.
143 user_id: The user ID to reject pending invites for.
140144 """
141145 user = UserID.from_string(user_id)
142146 pending_invites = await self.store.get_invited_rooms_for_local_user(user_id)
164168 room.room_id,
165169 )
166170
167 def _start_user_parting(self):
171 def _start_user_parting(self) -> None:
168172 """
169173 Start the process that goes through the table of users
170174 pending deactivation, if it isn't already running.
171
172 Returns:
173 None
174175 """
175176 if not self._user_parter_running:
176177 run_as_background_process("user_parter_loop", self._user_parter_loop)
177178
178 async def _user_parter_loop(self):
179 async def _user_parter_loop(self) -> None:
179180 """Loop that parts deactivated users from rooms
180
181 Returns:
182 None
183181 """
184182 self._user_parter_running = True
185183 logger.info("Starting user parter")
196194 finally:
197195 self._user_parter_running = False
198196
199 async def _part_user(self, user_id):
197 async def _part_user(self, user_id: str) -> None:
200198 """Causes the given user_id to leave all the rooms they're joined to
201
202 Returns:
203 None
204199 """
205200 user = UserID.from_string(user_id)
206201
222217 user_id,
223218 room_id,
224219 )
220
221 async def activate_account(self, user_id: str) -> None:
222 """
223 Activate an account that was previously deactivated.
224
225 This marks the user as active and not erased in the database, but does
226 not attempt to rejoin rooms, re-add threepids, etc.
227
228 If enabled, the user will be re-added to the user directory.
229
230 The user will also need a password hash set to actually log in.
231
232 Args:
233 user_id: ID of user to be re-activated
234 """
235 # Add the user to the directory, if necessary.
236 user = UserID.from_string(user_id)
237 if self.hs.config.user_directory_search_all_users:
238 profile = await self.store.get_profileinfo(user.localpart)
239 await self.user_directory_handler.handle_local_profile_change(
240 user_id, profile
241 )
242
243 # Ensure the user is not marked as erased.
244 await self.store.mark_user_not_erased(user_id)
245
246 # Mark the user as active.
247 await self.store.set_user_deactivated_status(user_id, False)
1414 # See the License for the specific language governing permissions and
1515 # limitations under the License.
1616 import logging
17 from typing import Any, Dict, Optional
18
19 from twisted.internet import defer
17 from typing import Any, Dict, List, Optional
2018
2119 from synapse.api import errors
2220 from synapse.api.constants import EventTypes
5654 self._auth_handler = hs.get_auth_handler()
5755
5856 @trace
59 @defer.inlineCallbacks
60 def get_devices_by_user(self, user_id):
57 async def get_devices_by_user(self, user_id: str) -> List[Dict[str, Any]]:
6158 """
6259 Retrieve the given user's devices
6360
6461 Args:
65 user_id (str):
62 user_id: The user ID to query for devices.
6663 Returns:
67 defer.Deferred: list[dict[str, X]]: info on each device
64 info on each device
6865 """
6966
7067 set_tag("user_id", user_id)
71 device_map = yield self.store.get_devices_by_user(user_id)
72
73 ips = yield self.store.get_last_client_ip_by_device(user_id, device_id=None)
68 device_map = await self.store.get_devices_by_user(user_id)
69
70 ips = await self.store.get_last_client_ip_by_device(user_id, device_id=None)
7471
7572 devices = list(device_map.values())
7673 for device in devices:
8077 return devices
8178
8279 @trace
83 @defer.inlineCallbacks
84 def get_device(self, user_id, device_id):
80 async def get_device(self, user_id: str, device_id: str) -> Dict[str, Any]:
8581 """ Retrieve the given device
8682
8783 Args:
88 user_id (str):
89 device_id (str):
84 user_id: The user to get the device from
85 device_id: The device to fetch.
9086
9187 Returns:
92 defer.Deferred: dict[str, X]: info on the device
88 info on the device
9389 Raises:
9490 errors.NotFoundError: if the device was not found
9591 """
9692 try:
97 device = yield self.store.get_device(user_id, device_id)
93 device = await self.store.get_device(user_id, device_id)
9894 except errors.StoreError:
9995 raise errors.NotFoundError
100 ips = yield self.store.get_last_client_ip_by_device(user_id, device_id)
96 ips = await self.store.get_last_client_ip_by_device(user_id, device_id)
10197 _update_device_from_client_ips(device, ips)
10298
10399 set_tag("device", device)
105101
106102 return device
107103
104 @trace
108105 @measure_func("device.get_user_ids_changed")
109 @trace
110 @defer.inlineCallbacks
111 def get_user_ids_changed(self, user_id, from_token):
106 async def get_user_ids_changed(self, user_id, from_token):
112107 """Get list of users that have had the devices updated, or have newly
113108 joined a room, that `user_id` may be interested in.
114109
119114
120115 set_tag("user_id", user_id)
121116 set_tag("from_token", from_token)
122 now_room_key = yield self.store.get_room_events_max_id()
123
124 room_ids = yield self.store.get_rooms_for_user(user_id)
117 now_room_key = await self.store.get_room_events_max_id()
118
119 room_ids = await self.store.get_rooms_for_user(user_id)
125120
126121 # First we check if any devices have changed for users that we share
127122 # rooms with.
128 users_who_share_room = yield self.store.get_users_who_share_room_with_user(
123 users_who_share_room = await self.store.get_users_who_share_room_with_user(
129124 user_id
130125 )
131126
134129 # Always tell the user about their own devices
135130 tracked_users.add(user_id)
136131
137 changed = yield self.store.get_users_whose_devices_changed(
132 changed = await self.store.get_users_whose_devices_changed(
138133 from_token.device_list_key, tracked_users
139134 )
140135
141136 # Then work out if any users have since joined
142137 rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key)
143138
144 member_events = yield self.store.get_membership_changes_for_user(
139 member_events = await self.store.get_membership_changes_for_user(
145140 user_id, from_token.room_key, now_room_key
146141 )
147142 rooms_changed.update(event.room_id for event in member_events)
151146 possibly_changed = set(changed)
152147 possibly_left = set()
153148 for room_id in rooms_changed:
154 current_state_ids = yield self.store.get_current_state_ids(room_id)
149 current_state_ids = await self.store.get_current_state_ids(room_id)
155150
156151 # The user may have left the room
157152 # TODO: Check if they actually did or if we were just invited.
165160
166161 # Fetch the current state at the time.
167162 try:
168 event_ids = yield self.store.get_forward_extremeties_for_room(
163 event_ids = await self.store.get_forward_extremeties_for_room(
169164 room_id, stream_ordering=stream_ordering
170165 )
171166 except errors.StoreError:
191186 continue
192187
193188 # mapping from event_id -> state_dict
194 prev_state_ids = yield self.state_store.get_state_ids_for_events(event_ids)
189 prev_state_ids = await self.state_store.get_state_ids_for_events(event_ids)
195190
196191 # Check if we've joined the room? If so we just blindly add all the users to
197192 # the "possibly changed" users.
237232
238233 return result
239234
240 @defer.inlineCallbacks
241 def on_federation_query_user_devices(self, user_id):
242 stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
243 master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master")
244 self_signing_key = yield self.store.get_e2e_cross_signing_key(
235 async def on_federation_query_user_devices(self, user_id):
236 stream_id, devices = await self.store.get_devices_with_keys_by_user(user_id)
237 master_key = await self.store.get_e2e_cross_signing_key(user_id, "master")
238 self_signing_key = await self.store.get_e2e_cross_signing_key(
245239 user_id, "self_signing"
246240 )
247241
270264
271265 hs.get_distributor().observe("user_left_room", self.user_left_room)
272266
273 @defer.inlineCallbacks
274 def check_device_registered(
267 async def check_device_registered(
275268 self, user_id, device_id, initial_device_display_name=None
276269 ):
277270 """
289282 str: device id (generated if none was supplied)
290283 """
291284 if device_id is not None:
292 new_device = yield self.store.store_device(
285 new_device = await self.store.store_device(
293286 user_id=user_id,
294287 device_id=device_id,
295288 initial_device_display_name=initial_device_display_name,
296289 )
297290 if new_device:
298 yield self.notify_device_update(user_id, [device_id])
291 await self.notify_device_update(user_id, [device_id])
299292 return device_id
300293
301294 # if the device id is not specified, we'll autogen one, but loop a few
303296 attempts = 0
304297 while attempts < 5:
305298 device_id = stringutils.random_string(10).upper()
306 new_device = yield self.store.store_device(
299 new_device = await self.store.store_device(
307300 user_id=user_id,
308301 device_id=device_id,
309302 initial_device_display_name=initial_device_display_name,
310303 )
311304 if new_device:
312 yield self.notify_device_update(user_id, [device_id])
305 await self.notify_device_update(user_id, [device_id])
313306 return device_id
314307 attempts += 1
315308
316309 raise errors.StoreError(500, "Couldn't generate a device ID.")
317310
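`check_device_registered` keeps its retry loop after the conversion: when no device ID is supplied, it generates a random ten-character ID and retries up to five times before giving up, in case the generated ID collides with an existing one. A rough standalone sketch of that idea (the character set and the collision check are illustrative, not Synapse's `stringutils` helper):

```python
import secrets
import string


def generate_device_id(existing_ids: set, attempts: int = 5) -> str:
    # Try a few random candidates and give up with an error if they all
    # collide, mirroring the "loop a few times" behaviour above.
    for _ in range(attempts):
        candidate = "".join(secrets.choice(string.ascii_uppercase) for _ in range(10))
        if candidate not in existing_ids:
            return candidate
    raise RuntimeError("Couldn't generate a device ID.")
```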
318311 @trace
319 @defer.inlineCallbacks
320 def delete_device(self, user_id, device_id):
312 async def delete_device(self, user_id: str, device_id: str) -> None:
321313 """ Delete the given device
322314
323315 Args:
324 user_id (str):
325 device_id (str):
326
327 Returns:
328 defer.Deferred:
316 user_id: The user to delete the device from.
317 device_id: The device to delete.
329318 """
330319
331320 try:
332 yield self.store.delete_device(user_id, device_id)
321 await self.store.delete_device(user_id, device_id)
333322 except errors.StoreError as e:
334323 if e.code == 404:
335324 # no match
341330 else:
342331 raise
343332
344 yield defer.ensureDeferred(
345 self._auth_handler.delete_access_tokens_for_user(
346 user_id, device_id=device_id
347 )
348 )
349
350 yield self.store.delete_e2e_keys_by_device(user_id=user_id, device_id=device_id)
351
352 yield self.notify_device_update(user_id, [device_id])
333 await self._auth_handler.delete_access_tokens_for_user(
334 user_id, device_id=device_id
335 )
336
337 await self.store.delete_e2e_keys_by_device(user_id=user_id, device_id=device_id)
338
339 await self.notify_device_update(user_id, [device_id])
353340
354341 @trace
355 @defer.inlineCallbacks
356 def delete_all_devices_for_user(self, user_id, except_device_id=None):
342 async def delete_all_devices_for_user(
343 self, user_id: str, except_device_id: Optional[str] = None
344 ) -> None:
357345 """Delete all of the user's devices
358346
359347 Args:
360 user_id (str):
361 except_device_id (str|None): optional device id which should not
362 be deleted
363
364 Returns:
365 defer.Deferred:
366 """
367 device_map = yield self.store.get_devices_by_user(user_id)
348 user_id: The user to remove all devices from
349 except_device_id: optional device id which should not be deleted
350 """
351 device_map = await self.store.get_devices_by_user(user_id)
368352 device_ids = list(device_map)
369353 if except_device_id is not None:
370354 device_ids = [d for d in device_ids if d != except_device_id]
371 yield self.delete_devices(user_id, device_ids)
372
373 @defer.inlineCallbacks
374 def delete_devices(self, user_id, device_ids):
355 await self.delete_devices(user_id, device_ids)
356
357 async def delete_devices(self, user_id: str, device_ids: List[str]) -> None:
375358 """ Delete several devices
376359
377360 Args:
378 user_id (str):
379 device_ids (List[str]): The list of device IDs to delete
380
381 Returns:
382 defer.Deferred:
361 user_id: The user to delete devices from.
362 device_ids: The list of device IDs to delete
383363 """
384364
385365 try:
386 yield self.store.delete_devices(user_id, device_ids)
366 await self.store.delete_devices(user_id, device_ids)
387367 except errors.StoreError as e:
388368 if e.code == 404:
389369 # no match
396376 # Delete access tokens and e2e keys for each device. Not optimised as it is not
397377 # considered as part of a critical path.
398378 for device_id in device_ids:
399 yield defer.ensureDeferred(
400 self._auth_handler.delete_access_tokens_for_user(
401 user_id, device_id=device_id
402 )
403 )
404 yield self.store.delete_e2e_keys_by_device(
379 await self._auth_handler.delete_access_tokens_for_user(
380 user_id, device_id=device_id
381 )
382 await self.store.delete_e2e_keys_by_device(
405383 user_id=user_id, device_id=device_id
406384 )
407385
408 yield self.notify_device_update(user_id, device_ids)
409
410 @defer.inlineCallbacks
411 def update_device(self, user_id, device_id, content):
386 await self.notify_device_update(user_id, device_ids)
387
388 async def update_device(self, user_id: str, device_id: str, content: dict) -> None:
412389 """ Update the given device
413390
414391 Args:
415 user_id (str):
416 device_id (str):
417 content (dict): body of update request
418
419 Returns:
420 defer.Deferred:
392 user_id: The user to update devices of.
393 device_id: The device to update.
394 content: body of update request
421395 """
422396
423397 # Reject a new displayname which is too long.
430404 )
431405
432406 try:
433 yield self.store.update_device(
407 await self.store.update_device(
434408 user_id, device_id, new_display_name=new_display_name
435409 )
436 yield self.notify_device_update(user_id, [device_id])
410 await self.notify_device_update(user_id, [device_id])
437411 except errors.StoreError as e:
438412 if e.code == 404:
439413 raise errors.NotFoundError()
442416
443417 @trace
444418 @measure_func("notify_device_update")
445 @defer.inlineCallbacks
446 def notify_device_update(self, user_id, device_ids):
419 async def notify_device_update(self, user_id, device_ids):
447420 """Notify that a user's device(s) has changed. Pokes the notifier, and
448421 remote servers if the user is local.
449422 """
450 users_who_share_room = yield self.store.get_users_who_share_room_with_user(
423 if not device_ids:
424 # No changes to notify about, so this is a no-op.
425 return
426
427 users_who_share_room = await self.store.get_users_who_share_room_with_user(
451428 user_id
452429 )
453430
458435
459436 set_tag("target_hosts", hosts)
460437
461 position = yield self.store.add_device_change_to_streams(
438 position = await self.store.add_device_change_to_streams(
462439 user_id, device_ids, list(hosts)
463440 )
441
442 if not position:
443 # This should only happen if there are no updates, so we bail.
444 return
464445
465446 for device_id in device_ids:
466447 logger.debug(
467448 "Notifying about update %r/%r, ID: %r", user_id, device_id, position
468449 )
469450
470 room_ids = yield self.store.get_rooms_for_user(user_id)
451 room_ids = await self.store.get_rooms_for_user(user_id)
471452
472453 # specify the user ID too since the user should always get their own device list
473454 # updates, even if they aren't in any rooms.
474 yield self.notifier.on_new_event(
455 self.notifier.on_new_event(
475456 "device_list_key", position, users=[user_id], rooms=room_ids
476457 )
477458
483464 self.federation_sender.send_device_messages(host)
484465 log_kv({"message": "sent device update to host", "host": host})
485466
486 @defer.inlineCallbacks
487 def notify_user_signature_update(self, from_user_id, user_ids):
467 async def notify_user_signature_update(
468 self, from_user_id: str, user_ids: List[str]
469 ) -> None:
488470 """Notify a user that they have made new signatures of other users.
489471
490472 Args:
491 from_user_id (str): the user who made the signature
492 user_ids (list[str]): the users IDs that have new signatures
493 """
494
495 position = yield self.store.add_user_signature_change_to_streams(
473 from_user_id: the user who made the signature
474 user_ids: the users IDs that have new signatures
475 """
476
477 position = await self.store.add_user_signature_change_to_streams(
496478 from_user_id, user_ids
497479 )
498480
499481 self.notifier.on_new_event("device_list_key", position, users=[from_user_id])
500482
501 @defer.inlineCallbacks
502 def user_left_room(self, user, room_id):
483 async def user_left_room(self, user, room_id):
503484 user_id = user.to_string()
504 room_ids = yield self.store.get_rooms_for_user(user_id)
485 room_ids = await self.store.get_rooms_for_user(user_id)
505486 if not room_ids:
506487 # We no longer share rooms with this user, so we'll no longer
507488 # receive device updates. Mark this in DB.
508 yield self.store.mark_remote_user_device_list_as_unsubscribed(user_id)
489 await self.store.mark_remote_user_device_list_as_unsubscribed(user_id)
509490
510491
511492 def _update_device_from_client_ips(device, client_ips):
548529 )
549530
550531 @trace
551 @defer.inlineCallbacks
552 def incoming_device_list_update(self, origin, edu_content):
532 async def incoming_device_list_update(self, origin, edu_content):
553533 """Called on incoming device list update from federation. Responsible
554534 for parsing the EDU and adding to pending updates list.
555535 """
582562 )
583563 return
584564
585 room_ids = yield self.store.get_rooms_for_user(user_id)
565 room_ids = await self.store.get_rooms_for_user(user_id)
586566 if not room_ids:
587567 # We don't share any rooms with this user. Ignore update, as we
588568 # probably won't get any further updates.
607587 (device_id, stream_id, prev_ids, edu_content)
608588 )
609589
610 yield self._handle_device_updates(user_id)
590 await self._handle_device_updates(user_id)
611591
612592 @measure_func("_incoming_device_list_update")
613 @defer.inlineCallbacks
614 def _handle_device_updates(self, user_id):
593 async def _handle_device_updates(self, user_id):
615594 "Actually handle pending updates."
616595
617 with (yield self._remote_edu_linearizer.queue(user_id)):
596 with (await self._remote_edu_linearizer.queue(user_id)):
618597 pending_updates = self._pending_updates.pop(user_id, [])
619598 if not pending_updates:
620599 # This can happen since we batch updates
631610
632611 # Given a list of updates we check if we need to resync. This
633612 # happens if we've missed updates.
634 resync = yield self._need_to_do_resync(user_id, pending_updates)
613 resync = await self._need_to_do_resync(user_id, pending_updates)
635614
636615 if logger.isEnabledFor(logging.INFO):
637616 logger.info(
642621 )
643622
644623 if resync:
645 yield self.user_device_resync(user_id)
624 await self.user_device_resync(user_id)
646625 else:
647626 # Simply update the single device, since we know that is the only
648627 # change (because of the single prev_id matching the current cache)
649628 for device_id, stream_id, prev_ids, content in pending_updates:
650 yield self.store.update_remote_device_list_cache_entry(
629 await self.store.update_remote_device_list_cache_entry(
651630 user_id, device_id, content, stream_id
652631 )
653632
654 yield self.device_handler.notify_device_update(
633 await self.device_handler.notify_device_update(
655634 user_id, [device_id for device_id, _, _, _ in pending_updates]
656635 )
657636
659638 stream_id for _, stream_id, _, _ in pending_updates
660639 )
661640
662 @defer.inlineCallbacks
663 def _need_to_do_resync(self, user_id, updates):
641 async def _need_to_do_resync(self, user_id, updates):
664642 """Given a list of updates for a user figure out if we need to do a full
665643 resync, or whether we have enough data that we can just apply the delta.
666644 """
667645 seen_updates = self._seen_updates.get(user_id, set())
668646
669 extremity = yield self.store.get_device_list_last_stream_id_for_remote(user_id)
647 extremity = await self.store.get_device_list_last_stream_id_for_remote(user_id)
670648
671649 logger.debug("Current extremity for %r: %r", user_id, extremity)
672650
691669 return False
692670
693671 @trace
694 @defer.inlineCallbacks
695 def _maybe_retry_device_resync(self):
672 async def _maybe_retry_device_resync(self):
696673 """Retry to resync device lists that are out of sync, except if another retry is
697674 in progress.
698675 """
704681 # we don't send too many requests.
705682 self._resync_retry_in_progress = True
706683 # Get all of the users that need resyncing.
707 need_resync = yield self.store.get_user_ids_requiring_device_list_resync()
684 need_resync = await self.store.get_user_ids_requiring_device_list_resync()
708685 # Iterate over the set of user IDs.
709686 for user_id in need_resync:
710687 try:
711688 # Try to resync the current user's devices list.
712 result = yield self.user_device_resync(
689 result = await self.user_device_resync(
713690 user_id=user_id, mark_failed_as_stale=False,
714691 )
715692
733710 # Allow future calls to retry resyncing out-of-sync device lists.
734711 self._resync_retry_in_progress = False
735712
736 @defer.inlineCallbacks
737 def user_device_resync(self, user_id, mark_failed_as_stale=True):
713 async def user_device_resync(
714 self, user_id: str, mark_failed_as_stale: bool = True
715 ) -> Optional[dict]:
738716 """Fetches all devices for a user and updates the device cache with them.
739717
740718 Args:
741 user_id (str): The user's id whose device_list will be updated.
742 mark_failed_as_stale (bool): Whether to mark the user's device list as stale
719 user_id: The user's id whose device_list will be updated.
720 mark_failed_as_stale: Whether to mark the user's device list as stale
743721 if the attempt to resync failed.
744722 Returns:
745 Deferred[dict]: a dict with device info as under the "devices" in the result of this
723 A dict with device info as under the "devices" in the result of this
746724 request:
747725 https://matrix.org/docs/spec/server_server/r0.1.2#get-matrix-federation-v1-user-devices-userid
748726 """
751729 # Fetch all devices for the user.
752730 origin = get_domain_from_id(user_id)
753731 try:
754 result = yield self.federation.query_user_devices(origin, user_id)
732 result = await self.federation.query_user_devices(origin, user_id)
755733 except NotRetryingDestination:
756734 if mark_failed_as_stale:
757735 # Mark the remote user's device list as stale so we know we need to retry
758736 # it later.
759 yield self.store.mark_remote_user_device_cache_as_stale(user_id)
737 await self.store.mark_remote_user_device_cache_as_stale(user_id)
760738
761739 return
762740 except (RequestSendFailed, HttpResponseException) as e:
767745 if mark_failed_as_stale:
768746 # Mark the remote user's device list as stale so we know we need to retry
769747 # it later.
770 yield self.store.mark_remote_user_device_cache_as_stale(user_id)
748 await self.store.mark_remote_user_device_cache_as_stale(user_id)
771749
772750 # We abort on exceptions rather than accepting the update
773751 # as otherwise synapse will 'forget' that its device list
791769 if mark_failed_as_stale:
792770 # Mark the remote user's device list as stale so we know we need to retry
793771 # it later.
794 yield self.store.mark_remote_user_device_cache_as_stale(user_id)
772 await self.store.mark_remote_user_device_cache_as_stale(user_id)
795773
796774 return
797775 log_kv({"result": result})
832810 stream_id,
833811 )
834812
835 yield self.store.update_remote_device_list_cache(user_id, devices, stream_id)
813 await self.store.update_remote_device_list_cache(user_id, devices, stream_id)
836814 device_ids = [device["device_id"] for device in devices]
837815
838816 # Handle cross-signing keys.
839 cross_signing_device_ids = yield self.process_cross_signing_key_update(
817 cross_signing_device_ids = await self.process_cross_signing_key_update(
840818 user_id, master_key, self_signing_key,
841819 )
842820 device_ids = device_ids + cross_signing_device_ids
843821
844 yield self.device_handler.notify_device_update(user_id, device_ids)
822 await self.device_handler.notify_device_update(user_id, device_ids)
845823
846824 # We clobber the seen updates since we've re-synced from a given
847825 # point.
848826 self._seen_updates[user_id] = {stream_id}
849827
850 defer.returnValue(result)
851
852 @defer.inlineCallbacks
853 def process_cross_signing_key_update(
828 return result
829
830 async def process_cross_signing_key_update(
854831 self,
855832 user_id: str,
856833 master_key: Optional[Dict[str, Any]],
871848 device_ids = []
872849
873850 if master_key:
874 yield self.store.set_e2e_cross_signing_key(user_id, "master", master_key)
851 await self.store.set_e2e_cross_signing_key(user_id, "master", master_key)
875852 _, verify_key = get_verify_key_from_cross_signing_key(master_key)
876853 # verify_key is a VerifyKey from signedjson, which uses
877854 # .version to denote the portion of the key ID after the
878855 # algorithm and colon, which is the device ID
879856 device_ids.append(verify_key.version)
880857 if self_signing_key:
881 yield self.store.set_e2e_cross_signing_key(
858 await self.store.set_e2e_cross_signing_key(
882859 user_id, "self_signing", self_signing_key
883860 )
884861 _, verify_key = get_verify_key_from_cross_signing_key(self_signing_key)
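`process_cross_signing_key_update` above treats a user's cross-signing keys as additional "devices": a cross-signing key ID has the form `<algorithm>:<unpadded base64 public key>`, and the portion after the colon (exposed as `verify_key.version` by signedjson) is appended to the device ID list so that key changes flow through the normal device-update notifications. A small illustration, assuming the key dict layout used above:

```python
def device_id_from_cross_signing_key(key: dict) -> str:
    # `key["keys"]` maps a single key ID such as "ed25519:<base64>" to the
    # public key; the part after the colon doubles as the device ID.
    key_id = next(iter(key["keys"]))
    _algorithm, _, device_id = key_id.partition(":")
    return device_id


# For example, {"keys": {"ed25519:base64masterkey": "base64masterkey"}}
# yields the device ID "base64masterkey".
```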
1515 # limitations under the License.
1616
1717 import logging
18 from typing import Dict, List, Optional, Tuple
1819
1920 import attr
2021 from canonicaljson import encode_canonical_json, json
21 from signedjson.key import decode_verify_key_bytes
22 from signedjson.key import VerifyKey, decode_verify_key_bytes
2223 from signedjson.sign import SignatureVerifyException, verify_signed_json
2324 from unpaddedbase64 import decode_base64
2425
7677 )
7778
7879 @trace
79 @defer.inlineCallbacks
80 def query_devices(self, query_body, timeout, from_user_id):
80 async def query_devices(self, query_body, timeout, from_user_id):
8181 """ Handle a device key query from a client
8282
8383 {
123123 failures = {}
124124 results = {}
125125 if local_query:
126 local_result = yield self.query_local_devices(local_query)
126 local_result = await self.query_local_devices(local_query)
127127 for user_id, keys in local_result.items():
128128 if user_id in local_query:
129129 results[user_id] = keys
141141 (
142142 user_ids_not_in_cache,
143143 remote_results,
144 ) = yield self.store.get_user_devices_from_cache(query_list)
144 ) = await self.store.get_user_devices_from_cache(query_list)
145145 for user_id, devices in remote_results.items():
146146 user_devices = results.setdefault(user_id, {})
147147 for device_id, device in devices.items():
160160 r[user_id] = remote_queries[user_id]
161161
162162 # Get cached cross-signing keys
163 cross_signing_keys = yield self.get_cross_signing_keys_from_cache(
163 cross_signing_keys = await self.get_cross_signing_keys_from_cache(
164164 device_keys_query, from_user_id
165165 )
166166
167167 # Now fetch any devices that we don't have in our cache
168168 @trace
169 @defer.inlineCallbacks
170 def do_remote_query(destination):
169 async def do_remote_query(destination):
171170 """This is called when we are querying the device list of a user on
172171 a remote homeserver and their device list is not in the device list
173172 cache. If we share a room with this user and we're not querying for
191190 if device_list:
192191 continue
193192
194 room_ids = yield self.store.get_rooms_for_user(user_id)
193 room_ids = await self.store.get_rooms_for_user(user_id)
195194 if not room_ids:
196195 continue
197196
200199 # done an initial sync on the device list so we do it now.
201200 try:
202201 if self._is_master:
203 user_devices = yield self.device_handler.device_list_updater.user_device_resync(
202 user_devices = await self.device_handler.device_list_updater.user_device_resync(
204203 user_id
205204 )
206205 else:
207 user_devices = yield self._user_device_resync_client(
206 user_devices = await self._user_device_resync_client(
208207 user_id=user_id
209208 )
210209
226225 destination_query.pop(user_id)
227226
228227 try:
229 remote_result = yield self.federation.query_client_keys(
228 remote_result = await self.federation.query_client_keys(
230229 destination, {"device_keys": destination_query}, timeout=timeout
231230 )
232231
250249 set_tag("error", True)
251250 set_tag("reason", failure)
252251
253 yield make_deferred_yieldable(
252 await make_deferred_yieldable(
254253 defer.gatherResults(
255254 [
256255 run_in_background(do_remote_query, destination)
266265
267266 return ret
268267
269 @defer.inlineCallbacks
270 def get_cross_signing_keys_from_cache(self, query, from_user_id):
268 async def get_cross_signing_keys_from_cache(
269 self, query, from_user_id
270 ) -> Dict[str, Dict[str, dict]]:
271271 """Get cross-signing keys for users from the database
272272
273273 Args:
279279 can see.
280280
281281 Returns:
282 defer.Deferred[dict[str, dict[str, dict]]]: map from
283 (master_keys|self_signing_keys|user_signing_keys) -> user_id -> key
282 A map from (master_keys|self_signing_keys|user_signing_keys) -> user_id -> key
284283 """
285284 master_keys = {}
286285 self_signing_keys = {}
288287
289288 user_ids = list(query)
290289
291 keys = yield self.store.get_e2e_cross_signing_keys_bulk(user_ids, from_user_id)
290 keys = await self.store.get_e2e_cross_signing_keys_bulk(user_ids, from_user_id)
292291
293292 for user_id, user_info in keys.items():
294293 if user_info is None:
314313 }
315314
316315 @trace
317 @defer.inlineCallbacks
318 def query_local_devices(self, query):
316 async def query_local_devices(
317 self, query: Dict[str, Optional[List[str]]]
318 ) -> Dict[str, Dict[str, dict]]:
319319 """Get E2E device keys for local users
320320
321321 Args:
322 query (dict[string, list[string]|None): map from user_id to a list
322 query: map from user_id to a list
323323 of devices to query (None for all devices)
324324
325325 Returns:
326 defer.Deferred: (resolves to dict[string, dict[string, dict]]):
327 map from user_id -> device_id -> device details
326 A map from user_id -> device_id -> device details
328327 """
329328 set_tag("local_query", query)
330329 local_query = []
353352 # make sure that each queried user appears in the result dict
354353 result_dict[user_id] = {}
355354
356 results = yield self.store.get_e2e_device_keys(local_query)
355 results = await self.store.get_e2e_device_keys(local_query)
357356
358357 # Build the result structure
359358 for user_id, device_keys in results.items():
363362 log_kv(results)
364363 return result_dict
365364
366 @defer.inlineCallbacks
367 def on_federation_query_client_keys(self, query_body):
365 async def on_federation_query_client_keys(self, query_body):
368366 """ Handle a device key query from a federated server
369367 """
370368 device_keys_query = query_body.get("device_keys", {})
371 res = yield self.query_local_devices(device_keys_query)
369 res = await self.query_local_devices(device_keys_query)
372370 ret = {"device_keys": res}
373371
374372 # add in the cross-signing keys
375 cross_signing_keys = yield self.get_cross_signing_keys_from_cache(
373 cross_signing_keys = await self.get_cross_signing_keys_from_cache(
376374 device_keys_query, None
377375 )
378376
381379 return ret
382380
383381 @trace
384 @defer.inlineCallbacks
385 def claim_one_time_keys(self, query, timeout):
382 async def claim_one_time_keys(self, query, timeout):
386383 local_query = []
387384 remote_queries = {}
388385
398395 set_tag("local_key_query", local_query)
399396 set_tag("remote_key_query", remote_queries)
400397
401 results = yield self.store.claim_e2e_one_time_keys(local_query)
398 results = await self.store.claim_e2e_one_time_keys(local_query)
402399
403400 json_result = {}
404401 failures = {}
410407 }
411408
412409 @trace
413 @defer.inlineCallbacks
414 def claim_client_keys(destination):
410 async def claim_client_keys(destination):
415411 set_tag("destination", destination)
416412 device_keys = remote_queries[destination]
417413 try:
418 remote_result = yield self.federation.claim_client_keys(
414 remote_result = await self.federation.claim_client_keys(
419415 destination, {"one_time_keys": device_keys}, timeout=timeout
420416 )
421417 for user_id, keys in remote_result["one_time_keys"].items():
428424 set_tag("error", True)
429425 set_tag("reason", failure)
430426
431 yield make_deferred_yieldable(
427 await make_deferred_yieldable(
432428 defer.gatherResults(
433429 [
434430 run_in_background(claim_client_keys, destination)
453449 log_kv({"one_time_keys": json_result, "failures": failures})
454450 return {"one_time_keys": json_result, "failures": failures}
455451
456 @defer.inlineCallbacks
457452 @tag_args
458 def upload_keys_for_user(self, user_id, device_id, keys):
453 async def upload_keys_for_user(self, user_id, device_id, keys):
459454
460455 time_now = self.clock.time_msec()
461456
476471 }
477472 )
478473 # TODO: Sign the JSON with the server key
479 changed = yield self.store.set_e2e_device_keys(
474 changed = await self.store.set_e2e_device_keys(
480475 user_id, device_id, time_now, device_keys
481476 )
482477 if changed:
483478 # Only notify about device updates *if* the keys actually changed
484 yield self.device_handler.notify_device_update(user_id, [device_id])
479 await self.device_handler.notify_device_update(user_id, [device_id])
485480 else:
486481 log_kv({"message": "Not updating device_keys for user", "user_id": user_id})
487482 one_time_keys = keys.get("one_time_keys", None)
493488 "device_id": device_id,
494489 }
495490 )
496 yield self._upload_one_time_keys_for_user(
491 await self._upload_one_time_keys_for_user(
497492 user_id, device_id, time_now, one_time_keys
498493 )
499494 else:
506501 # old access_token without an associated device_id. Either way, we
507502 # need to double-check the device is registered to avoid ending up with
508503 # keys without a corresponding device.
509 yield self.device_handler.check_device_registered(user_id, device_id)
510
511 result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
504 await self.device_handler.check_device_registered(user_id, device_id)
505
506 result = await self.store.count_e2e_one_time_keys(user_id, device_id)
512507
513508 set_tag("one_time_key_counts", result)
514509 return {"one_time_key_counts": result}
515510
516 @defer.inlineCallbacks
517 def _upload_one_time_keys_for_user(
511 async def _upload_one_time_keys_for_user(
518512 self, user_id, device_id, time_now, one_time_keys
519513 ):
520514 logger.info(
532526 key_list.append((algorithm, key_id, key_obj))
533527
534528 # First we check if we have already persisted any of the keys.
535 existing_key_map = yield self.store.get_e2e_one_time_keys(
529 existing_key_map = await self.store.get_e2e_one_time_keys(
536530 user_id, device_id, [k_id for _, k_id, _ in key_list]
537531 )
538532
555549 )
556550
557551 log_kv({"message": "Inserting new one_time_keys.", "keys": new_keys})
558 yield self.store.add_e2e_one_time_keys(user_id, device_id, time_now, new_keys)
559
560 @defer.inlineCallbacks
561 def upload_signing_keys_for_user(self, user_id, keys):
552 await self.store.add_e2e_one_time_keys(user_id, device_id, time_now, new_keys)
553
554 async def upload_signing_keys_for_user(self, user_id, keys):
562555 """Upload signing keys for cross-signing
563556
564557 Args:
573566
574567 _check_cross_signing_key(master_key, user_id, "master")
575568 else:
576 master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master")
569 master_key = await self.store.get_e2e_cross_signing_key(user_id, "master")
577570
578571 # if there is no master key, then we can't do anything, because all the
579572 # other cross-signing keys need to be signed by the master key
612605 # if everything checks out, then store the keys and send notifications
613606 deviceids = []
614607 if "master_key" in keys:
615 yield self.store.set_e2e_cross_signing_key(user_id, "master", master_key)
608 await self.store.set_e2e_cross_signing_key(user_id, "master", master_key)
616609 deviceids.append(master_verify_key.version)
617610 if "self_signing_key" in keys:
618 yield self.store.set_e2e_cross_signing_key(
611 await self.store.set_e2e_cross_signing_key(
619612 user_id, "self_signing", self_signing_key
620613 )
621614 try:
625618 except ValueError:
626619 raise SynapseError(400, "Invalid self-signing key", Codes.INVALID_PARAM)
627620 if "user_signing_key" in keys:
628 yield self.store.set_e2e_cross_signing_key(
621 await self.store.set_e2e_cross_signing_key(
629622 user_id, "user_signing", user_signing_key
630623 )
631624 # the signature stream matches the semantics that we want for
632625 # user-signing key updates: only the user themselves is notified of
633626 # their own user-signing key updates
634 yield self.device_handler.notify_user_signature_update(user_id, [user_id])
627 await self.device_handler.notify_user_signature_update(user_id, [user_id])
635628
636629 # master key and self-signing key updates match the semantics of device
637630 # list updates: all users who share an encrypted room are notified
638631 if len(deviceids):
639 yield self.device_handler.notify_device_update(user_id, deviceids)
632 await self.device_handler.notify_device_update(user_id, deviceids)
640633
641634 return {}
642635
643 @defer.inlineCallbacks
644 def upload_signatures_for_device_keys(self, user_id, signatures):
636 async def upload_signatures_for_device_keys(self, user_id, signatures):
645637 """Upload device signatures for cross-signing
646638
647639 Args:
666658 self_signatures = signatures.get(user_id, {})
667659 other_signatures = {k: v for k, v in signatures.items() if k != user_id}
668660
669 self_signature_list, self_failures = yield self._process_self_signatures(
661 self_signature_list, self_failures = await self._process_self_signatures(
670662 user_id, self_signatures
671663 )
672664 signature_list.extend(self_signature_list)
673665 failures.update(self_failures)
674666
675 other_signature_list, other_failures = yield self._process_other_signatures(
667 other_signature_list, other_failures = await self._process_other_signatures(
676668 user_id, other_signatures
677669 )
678670 signature_list.extend(other_signature_list)
680672
681673 # store the signature, and send the appropriate notifications for sync
682674 logger.debug("upload signature failures: %r", failures)
683 yield self.store.store_e2e_cross_signing_signatures(user_id, signature_list)
675 await self.store.store_e2e_cross_signing_signatures(user_id, signature_list)
684676
685677 self_device_ids = [item.target_device_id for item in self_signature_list]
686678 if self_device_ids:
687 yield self.device_handler.notify_device_update(user_id, self_device_ids)
679 await self.device_handler.notify_device_update(user_id, self_device_ids)
688680 signed_users = [item.target_user_id for item in other_signature_list]
689681 if signed_users:
690 yield self.device_handler.notify_user_signature_update(
682 await self.device_handler.notify_user_signature_update(
691683 user_id, signed_users
692684 )
693685
694686 return {"failures": failures}
695687
696 @defer.inlineCallbacks
697 def _process_self_signatures(self, user_id, signatures):
688 async def _process_self_signatures(self, user_id, signatures):
698689 """Process uploaded signatures of the user's own keys.
699690
700691 Signatures of the user's own keys from this API come in two forms:
727718 _,
728719 self_signing_key_id,
729720 self_signing_verify_key,
730 ) = yield self._get_e2e_cross_signing_verify_key(user_id, "self_signing")
721 ) = await self._get_e2e_cross_signing_verify_key(user_id, "self_signing")
731722
732723 # get our master key, since we may have received a signature of it.
733724 # We need to fetch it here so that we know what its key ID is, so
737728 master_key,
738729 _,
739730 master_verify_key,
740 ) = yield self._get_e2e_cross_signing_verify_key(user_id, "master")
731 ) = await self._get_e2e_cross_signing_verify_key(user_id, "master")
741732
742733 # fetch our stored devices. This is used to 1. verify
743734 # signatures on the master key, and 2. to compare with what
744735 # was sent if the device was signed
745 devices = yield self.store.get_e2e_device_keys([(user_id, None)])
736 devices = await self.store.get_e2e_device_keys([(user_id, None)])
746737
747738 if user_id not in devices:
748739 raise NotFoundError("No device keys found")
852843
853844 return master_key_signature_list
854845
855 @defer.inlineCallbacks
856 def _process_other_signatures(self, user_id, signatures):
846 async def _process_other_signatures(self, user_id, signatures):
857847 """Process uploaded signatures of other users' keys. These will be the
858848 target user's master keys, signed by the uploading user's user-signing
859849 key.
881871 user_signing_key,
882872 user_signing_key_id,
883873 user_signing_verify_key,
884 ) = yield self._get_e2e_cross_signing_verify_key(user_id, "user_signing")
874 ) = await self._get_e2e_cross_signing_verify_key(user_id, "user_signing")
885875 except SynapseError as e:
886876 failure = _exception_to_failure(e)
887877 for user, devicemap in signatures.items():
904894 master_key,
905895 master_key_id,
906896 _,
907 ) = yield self._get_e2e_cross_signing_verify_key(
897 ) = await self._get_e2e_cross_signing_verify_key(
908898 target_user, "master", user_id
909899 )
910900
957947
958948 return signature_list, failures
959949
960 @defer.inlineCallbacks
961 def _get_e2e_cross_signing_verify_key(
950 async def _get_e2e_cross_signing_verify_key(
962951 self, user_id: str, key_type: str, from_user_id: str = None
963952 ):
964953 """Fetch locally or remotely query for a cross-signing public key.
982971 SynapseError: if `user_id` is invalid
983972 """
984973 user = UserID.from_string(user_id)
985 key = yield self.store.get_e2e_cross_signing_key(
974 key = await self.store.get_e2e_cross_signing_key(
986975 user_id, key_type, from_user_id
987976 )
988977
1008997 key,
1009998 key_id,
1010999 verify_key,
1011 ) = yield self._retrieve_cross_signing_keys_for_remote_user(user, key_type)
1000 ) = await self._retrieve_cross_signing_keys_for_remote_user(user, key_type)
10121001
10131002 if key is None:
10141003 raise NotFoundError("No %s key found for %s" % (key_type, user_id))
10151004
10161005 return key, key_id, verify_key
10171006
1018 @defer.inlineCallbacks
1019 def _retrieve_cross_signing_keys_for_remote_user(
1007 async def _retrieve_cross_signing_keys_for_remote_user(
10201008 self, user: UserID, desired_key_type: str,
1021 ):
1009 ) -> Tuple[Optional[dict], Optional[str], Optional[VerifyKey]]:
10221010 """Queries cross-signing keys for a remote user and saves them to the database
10231011
10241012 Only the key specified by `key_type` will be returned, while all retrieved keys
10291017 desired_key_type: The type of key to receive. One of "master", "self_signing"
10301018
10311019 Returns:
1032 Deferred[Tuple[Optional[Dict], Optional[str], Optional[VerifyKey]]]: A tuple
1033 of the retrieved key content, the key's ID and the matching VerifyKey.
1020 A tuple of the retrieved key content, the key's ID and the matching VerifyKey.
10341021 If the key cannot be retrieved, all values in the tuple will instead be None.
10351022 """
10361023 try:
1037 remote_result = yield self.federation.query_user_devices(
1024 remote_result = await self.federation.query_user_devices(
10381025 user.domain, user.to_string()
10391026 )
10401027 except Exception as e:
11001087 desired_key_id = key_id
11011088
11021089 # At the same time, store this key in the db for subsequent queries
1103 yield self.store.set_e2e_cross_signing_key(
1090 await self.store.set_e2e_cross_signing_key(
11041091 user.to_string(), key_type, key_content
11051092 )
11061093
11071094 # Notify clients that new devices for this user have been discovered
11081095 if retrieved_device_ids:
11091096 # XXX is this necessary?
1110 yield self.device_handler.notify_device_update(
1097 await self.device_handler.notify_device_update(
11111098 user.to_string(), retrieved_device_ids
11121099 )
11131100
12491236 iterable=True,
12501237 )
12511238
1252 @defer.inlineCallbacks
1253 def incoming_signing_key_update(self, origin, edu_content):
1239 async def incoming_signing_key_update(self, origin, edu_content):
12541240 """Called on incoming signing key update from federation. Responsible for
12551241 parsing the EDU and adding it to the pending updates list.
12561242
12671253 logger.warning("Got signing key update edu for %r from %r", user_id, origin)
12681254 return
12691255
1270 room_ids = yield self.store.get_rooms_for_user(user_id)
1256 room_ids = await self.store.get_rooms_for_user(user_id)
12711257 if not room_ids:
12721258 # We don't share any rooms with this user. Ignore update, as we
12731259 # probably won't get any further updates.
12771263 (master_key, self_signing_key)
12781264 )
12791265
1280 yield self._handle_signing_key_updates(user_id)
1281
1282 @defer.inlineCallbacks
1283 def _handle_signing_key_updates(self, user_id):
1266 await self._handle_signing_key_updates(user_id)
1267
1268 async def _handle_signing_key_updates(self, user_id):
12841269 """Actually handle pending updates.
12851270
12861271 Args:
12901275 device_handler = self.e2e_keys_handler.device_handler
12911276 device_list_updater = device_handler.device_list_updater
12921277
1293 with (yield self._remote_edu_linearizer.queue(user_id)):
1278 with (await self._remote_edu_linearizer.queue(user_id)):
12941279 pending_updates = self._pending_updates.pop(user_id, [])
12951280 if not pending_updates:
12961281 # This can happen since we batch updates
13011286 logger.info("pending updates: %r", pending_updates)
13021287
13031288 for master_key, self_signing_key in pending_updates:
1304 new_device_ids = yield device_list_updater.process_cross_signing_key_update(
1289 new_device_ids = await device_list_updater.process_cross_signing_key_update(
13051290 user_id, master_key, self_signing_key,
13061291 )
13071292 device_ids = device_ids + new_device_ids
13081293
1309 yield device_handler.notify_device_update(user_id, device_ids)
1294 await device_handler.notify_device_update(user_id, device_ids)
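Both `query_devices` and `claim_one_time_keys` above keep the same fan-out shape after the conversion: one background task per remote destination, gathered into a single Deferred that the coroutine then awaits. A condensed sketch of that pattern, assuming Synapse's log-context helpers (`do_query` is a placeholder per-destination coroutine):

```python
from twisted.internet import defer

from synapse.logging.context import make_deferred_yieldable, run_in_background


async def query_destinations(destinations, do_query):
    # Start one query per destination without waiting, then wait for them all.
    # consumeErrors=True keeps one failed destination from leaving an
    # unhandled error in the gathered Deferred.
    await make_deferred_yieldable(
        defer.gatherResults(
            [run_in_background(do_query, destination) for destination in destinations],
            consumeErrors=True,
        )
    )
```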
1515
1616 import logging
1717
18 from twisted.internet import defer
19
2018 from synapse.api.errors import (
2119 Codes,
2220 NotFoundError,
4947 self._upload_linearizer = Linearizer("upload_room_keys_lock")
5048
5149 @trace
52 @defer.inlineCallbacks
53 def get_room_keys(self, user_id, version, room_id=None, session_id=None):
50 async def get_room_keys(self, user_id, version, room_id=None, session_id=None):
5451 """Bulk get the E2E room keys for a given backup, optionally filtered to a given
5552 room, or a given session.
5653 See EndToEndRoomKeyStore.get_e2e_room_keys for full details.
7067
7168 # we deliberately take the lock to get keys so that changing the version
7269 # works atomically
73 with (yield self._upload_linearizer.queue(user_id)):
70 with (await self._upload_linearizer.queue(user_id)):
7471 # make sure the backup version exists
7572 try:
76 yield self.store.get_e2e_room_keys_version_info(user_id, version)
73 await self.store.get_e2e_room_keys_version_info(user_id, version)
7774 except StoreError as e:
7875 if e.code == 404:
7976 raise NotFoundError("Unknown backup version")
8077 else:
8178 raise
8279
83 results = yield self.store.get_e2e_room_keys(
80 results = await self.store.get_e2e_room_keys(
8481 user_id, version, room_id, session_id
8582 )
8683
8885 return results
8986
9087 @trace
91 @defer.inlineCallbacks
92 def delete_room_keys(self, user_id, version, room_id=None, session_id=None):
88 async def delete_room_keys(self, user_id, version, room_id=None, session_id=None):
9389 """Bulk delete the E2E room keys for a given backup, optionally filtered to a given
9490 room or a given session.
9591 See EndToEndRoomKeyStore.delete_e2e_room_keys for full details.
108104 """
109105
110106 # lock for consistency with uploading
111 with (yield self._upload_linearizer.queue(user_id)):
107 with (await self._upload_linearizer.queue(user_id)):
112108 # make sure the backup version exists
113109 try:
114 version_info = yield self.store.get_e2e_room_keys_version_info(
110 version_info = await self.store.get_e2e_room_keys_version_info(
115111 user_id, version
116112 )
117113 except StoreError as e:
120116 else:
121117 raise
122118
123 yield self.store.delete_e2e_room_keys(user_id, version, room_id, session_id)
119 await self.store.delete_e2e_room_keys(user_id, version, room_id, session_id)
124120
125121 version_etag = version_info["etag"] + 1
126 yield self.store.update_e2e_room_keys_version(
122 await self.store.update_e2e_room_keys_version(
127123 user_id, version, None, version_etag
128124 )
129125
130 count = yield self.store.count_e2e_room_keys(user_id, version)
126 count = await self.store.count_e2e_room_keys(user_id, version)
131127 return {"etag": str(version_etag), "count": count}
132128
133129 @trace
134 @defer.inlineCallbacks
135 def upload_room_keys(self, user_id, version, room_keys):
130 async def upload_room_keys(self, user_id, version, room_keys):
136131 """Bulk upload a list of room keys into a given backup version, asserting
137132 that the given version is the current backup version. room_keys are merged
138133 into the current backup as described in RoomKeysServlet.on_PUT().
168163 # TODO: Validate the JSON to make sure it has the right keys.
169164
170165 # XXX: perhaps we should use a finer grained lock here?
171 with (yield self._upload_linearizer.queue(user_id)):
166 with (await self._upload_linearizer.queue(user_id)):
172167
173168 # Check that the version we're trying to upload is the current version
174169 try:
175 version_info = yield self.store.get_e2e_room_keys_version_info(user_id)
170 version_info = await self.store.get_e2e_room_keys_version_info(user_id)
176171 except StoreError as e:
177172 if e.code == 404:
178173 raise NotFoundError("Version '%s' not found" % (version,))
182177 if version_info["version"] != version:
183178 # Check that the version we're trying to upload actually exists
184179 try:
185 version_info = yield self.store.get_e2e_room_keys_version_info(
180 version_info = await self.store.get_e2e_room_keys_version_info(
186181 user_id, version
187182 )
188183 # if we get this far, the version must exist
197192 # submitted. Then compare them with the submitted keys. If the
198193 # key is new, insert it; if the key should be updated, then update
199194 # it; otherwise, drop it.
200 existing_keys = yield self.store.get_e2e_room_keys_multi(
195 existing_keys = await self.store.get_e2e_room_keys_multi(
201196 user_id, version, room_keys["rooms"]
202197 )
203198 to_insert = [] # batch the inserts together
226221 # updates are done one at a time in the DB, so send
227222 # updates right away rather than batching them up,
228223 # like we do with the inserts
229 yield self.store.update_e2e_room_key(
224 await self.store.update_e2e_room_key(
230225 user_id, version, room_id, session_id, room_key
231226 )
232227 changed = True
245240 changed = True
246241
247242 if len(to_insert):
248 yield self.store.add_e2e_room_keys(user_id, version, to_insert)
243 await self.store.add_e2e_room_keys(user_id, version, to_insert)
249244
250245 version_etag = version_info["etag"]
251246 if changed:
252247 version_etag = version_etag + 1
253 yield self.store.update_e2e_room_keys_version(
248 await self.store.update_e2e_room_keys_version(
254249 user_id, version, None, version_etag
255250 )
256251
257 count = yield self.store.count_e2e_room_keys(user_id, version)
252 count = await self.store.count_e2e_room_keys(user_id, version)
258253 return {"etag": str(version_etag), "count": count}
259254
260255 @staticmethod
290285 return True
291286
292287 @trace
293 @defer.inlineCallbacks
294 def create_version(self, user_id, version_info):
288 async def create_version(self, user_id, version_info):
295289 """Create a new backup version. This automatically becomes the new
296290 backup version for the user's keys; previous backups will no longer be
297291 writeable.
312306 # TODO: Validate the JSON to make sure it has the right keys.
313307
314308 # lock everyone out until we've switched version
315 with (yield self._upload_linearizer.queue(user_id)):
316 new_version = yield self.store.create_e2e_room_keys_version(
309 with (await self._upload_linearizer.queue(user_id)):
310 new_version = await self.store.create_e2e_room_keys_version(
317311 user_id, version_info
318312 )
319313 return new_version
320314
321 @defer.inlineCallbacks
322 def get_version_info(self, user_id, version=None):
315 async def get_version_info(self, user_id, version=None):
323316 """Get the info about a given version of the user's backup
324317
325318 Args:
338331 }
339332 """
340333
341 with (yield self._upload_linearizer.queue(user_id)):
342 try:
343 res = yield self.store.get_e2e_room_keys_version_info(user_id, version)
334 with (await self._upload_linearizer.queue(user_id)):
335 try:
336 res = await self.store.get_e2e_room_keys_version_info(user_id, version)
344337 except StoreError as e:
345338 if e.code == 404:
346339 raise NotFoundError("Unknown backup version")
347340 else:
348341 raise
349342
350 res["count"] = yield self.store.count_e2e_room_keys(user_id, res["version"])
343 res["count"] = await self.store.count_e2e_room_keys(user_id, res["version"])
351344 res["etag"] = str(res["etag"])
352345 return res
353346
354347 @trace
355 @defer.inlineCallbacks
356 def delete_version(self, user_id, version=None):
348 async def delete_version(self, user_id, version=None):
357349 """Deletes a given version of the user's e2e_room_keys backup
358350
359351 Args:
363355 NotFoundError: if this backup version doesn't exist
364356 """
365357
366 with (yield self._upload_linearizer.queue(user_id)):
367 try:
368 yield self.store.delete_e2e_room_keys_version(user_id, version)
358 with (await self._upload_linearizer.queue(user_id)):
359 try:
360 await self.store.delete_e2e_room_keys_version(user_id, version)
369361 except StoreError as e:
370362 if e.code == 404:
371363 raise NotFoundError("Unknown backup version")
373365 raise
374366
375367 @trace
376 @defer.inlineCallbacks
377 def update_version(self, user_id, version, version_info):
368 async def update_version(self, user_id, version, version_info):
378369 """Update the info about a given version of the user's backup
379370
380371 Args:
392383 raise SynapseError(
393384 400, "Version in body does not match", Codes.INVALID_PARAM
394385 )
395 with (yield self._upload_linearizer.queue(user_id)):
396 try:
397 old_info = yield self.store.get_e2e_room_keys_version_info(
386 with (await self._upload_linearizer.queue(user_id)):
387 try:
388 old_info = await self.store.get_e2e_room_keys_version_info(
398389 user_id, version
399390 )
400391 except StoreError as e:
405396 if old_info["algorithm"] != version_info["algorithm"]:
406397 raise SynapseError(400, "Algorithm does not match", Codes.INVALID_PARAM)
407398
408 yield self.store.update_e2e_room_keys_version(
399 await self.store.update_e2e_room_keys_version(
409400 user_id, version, version_info
410401 )
411402
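The room-key backup handler above serialises work with a per-user `Linearizer` rather than a global lock: reads, uploads, and version changes for the same user queue behind the same key, so a version bump can never interleave with an in-flight upload, while different users still proceed in parallel. A minimal sketch of that usage, following the `with (await ...queue(...))` idiom of this release:

```python
from synapse.util.async_helpers import Linearizer


class RoomKeyWorker:
    def __init__(self):
        # One logical queue per user_id; the name is just a label for logging.
        self._upload_linearizer = Linearizer("upload_room_keys_lock")

    async def update_backup(self, user_id: str):
        with (await self._upload_linearizer.queue(user_id)):
            # Everything inside this block runs serialised per user_id.
            ...
```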
1818
1919 import itertools
2020 import logging
21 from collections import Container
21 from collections.abc import Container
2222 from http import HTTPStatus
2323 from typing import Dict, Iterable, List, Optional, Sequence, Tuple, Union
2424
4343 FederationDeniedError,
4444 FederationError,
4545 HttpResponseException,
46 NotFoundError,
4647 RequestSendFailed,
4748 SynapseError,
4849 )
6061 run_in_background,
6162 )
6263 from synapse.logging.utils import log_function
64 from synapse.metrics.background_process_metrics import run_as_background_process
6365 from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
6466 from synapse.replication.http.federation import (
6567 ReplicationCleanRoomRestServlet,
617619 will be omitted from the result. Likewise, any events which turn out not to
618620 be in the given room.
619621
622 This function *does not* automatically get missing auth events of the
623 newly fetched events. Callers must include the full auth chain of
624 the missing events in the `event_ids` argument, to ensure that any
625 missing auth events are correctly fetched.
626
620627 Returns:
621628 map from event_id to event
622629 """
783790 resync = True
784791
785792 if resync:
786 await self.store.mark_remote_user_device_cache_as_stale(event.sender)
787
788 # Immediately attempt a resync in the background
789 if self.config.worker_app:
790 return run_in_background(self._user_device_resync, event.sender)
791 else:
792 return run_in_background(
793 self._device_list_updater.user_device_resync, event.sender
794 )
793 run_as_background_process(
794 "resync_device_due_to_pdu", self._resync_device, event.sender
795 )
796
797 async def _resync_device(self, sender: str) -> None:
798 """We have detected that the device list for the given user may be out
799 of sync, so we try and resync them.
800 """
801
802 try:
803 await self.store.mark_remote_user_device_cache_as_stale(sender)
804
805 # Immediately attempt a resync in the background
806 if self.config.worker_app:
807 await self._user_device_resync(user_id=sender)
808 else:
809 await self._device_list_updater.user_device_resync(sender)
810 except Exception:
811 logger.exception("Failed to resync device for %s", sender)
795812
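The device-resync kick-off above swaps `run_in_background` for `run_as_background_process` plus a dedicated `_resync_device` coroutine: the resync now runs in its own named logging context with background-process metrics, and any failure is caught and logged inside `_resync_device` rather than surfacing as an unhandled Deferred. In sketch form (the handler attribute is the coroutine introduced above):

```python
from synapse.metrics.background_process_metrics import run_as_background_process


def kick_off_device_resync(handler, sender: str) -> None:
    # Fire and forget: the caller does not await the resync, and the wrapper
    # gives the task its own log context and metrics under the given name.
    run_as_background_process(
        "resync_device_due_to_pdu", handler._resync_device, sender
    )
```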
796813 @log_function
797814 async def backfill(self, dest, room_id, limit, extremities):
11301147 ):
11311148 """Fetch the given events from a server, and persist them as outliers.
11321149
1150 This function *does not* recursively get missing auth events of the
1151 newly fetched events. Callers must include in the `events` argument
1152 any missing events from the auth chain.
1153
11331154 Logs a warning if we can't find the given event.
11341155 """
11351156
11361157 room_version = await self.store.get_room_version(room_id)
11371158
1138 event_infos = []
1159 event_map = {} # type: Dict[str, EventBase]
11391160
11401161 async def get_event(event_id: str):
11411162 with nested_logging_context(event_id):
11491170 )
11501171 return
11511172
1152 # recursively fetch the auth events for this event
1153 auth_events = await self._get_events_from_store_or_dest(
1154 destination, room_id, event.auth_event_ids()
1155 )
1156 auth = {}
1157 for auth_event_id in event.auth_event_ids():
1158 ae = auth_events.get(auth_event_id)
1159 if ae:
1160 auth[(ae.type, ae.state_key)] = ae
1161
1162 event_infos.append(_NewEventInfo(event, None, auth))
1173 event_map[event.event_id] = event
11631174
11641175 except Exception as e:
11651176 logger.warning(
11701181 )
11711182
11721183 await concurrently_execute(get_event, events, 5)
1184
1185 # Make a map of auth events for each event. We do this after fetching
1186 # all the events as some of the events' auth events will be in the list
1187 # of requested events.
1188
1189 auth_events = [
1190 aid
1191 for event in event_map.values()
1192 for aid in event.auth_event_ids()
1193 if aid not in event_map
1194 ]
1195 persisted_events = await self.store.get_events(
1196 auth_events, allow_rejected=True,
1197 )
1198
1199 event_infos = []
1200 for event in event_map.values():
1201 auth = {}
1202 for auth_event_id in event.auth_event_ids():
1203 ae = persisted_events.get(auth_event_id) or event_map.get(auth_event_id)
1204 if ae:
1205 auth[(ae.type, ae.state_key)] = ae
1206 else:
1207 logger.info("Missing auth event %s", auth_event_id)
1208
1209 event_infos.append(_NewEventInfo(event, None, auth))
11731210
11741211 await self._handle_new_events(
11751212 destination, event_infos,
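The rewritten `_get_events_and_persist` above no longer walks each event's auth chain over federation; it fetches the requested events first, then resolves their auth events with a single local `get_events` call, falling back to the freshly fetched batch, which is why callers must now include the auth chain in the requested event IDs. The core of that batching step, roughly:

```python
async def resolve_auth_events(store, event_map):
    # Auth event IDs referenced by the batch but not contained in it are
    # looked up locally in one call, instead of one fetch per event.
    missing_ids = {
        auth_id
        for event in event_map.values()
        for auth_id in event.auth_event_ids()
        if auth_id not in event_map
    }
    return await store.get_events(missing_ids, allow_rejected=True)
```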
13561393 # it's just a best-effort thing at this point. We do want to do
13571394 # them roughly in order, though, otherwise we'll end up making
13581395 # lots of requests for missing prev_events which we do actually
1359 # have. Hence we fire off the deferred, but don't wait for it.
1396 # have. Hence we fire off the background task, but don't wait for it.
13601397
13611398 run_in_background(self._handle_queued_pdus, room_queue)
13621399
14021439 )
14031440 raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
14041441
1442 # checking the room version will check that we've actually heard of the room
1443 # (and return a 404 otherwise)
1444 room_version = await self.store.get_room_version_id(room_id)
1445
1446 # now check that we are *still* in the room
1447 is_in_room = await self.auth.check_host_in_room(room_id, self.server_name)
1448 if not is_in_room:
1449 logger.info(
1450 "Got /make_join request for room %s we are no longer in", room_id,
1451 )
1452 raise NotFoundError("Not an active room on this server")
1453
14051454 event_content = {"membership": Membership.JOIN}
1406
1407 room_version = await self.store.get_room_version_id(room_id)
14081455
14091456 builder = self.event_builder_factory.new(
14101457 room_version,
18391886 origin, event, state=state, auth_events=auth_events, backfilled=backfilled
18401887 )
18411888
1842 # reraise does not allow inlineCallbacks to preserve the stacktrace, so we
1843 # hack around with a try/finally instead.
1844 success = False
18451889 try:
18461890 if (
18471891 not event.internal_metadata.is_outlier()
18551899 await self.persist_events_and_notify(
18561900 [(event, context)], backfilled=backfilled
18571901 )
1858 success = True
1859 finally:
1860 if not success:
1861 run_in_background(
1862 self.store.remove_push_actions_from_staging, event.event_id
1863 )
1902 except Exception:
1903 run_in_background(
1904 self.store.remove_push_actions_from_staging, event.event_id
1905 )
1906 raise
18641907
18651908 return context
18661909
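The persistence error path above also simplifies: under `@defer.inlineCallbacks` a bare re-raise lost the stack trace, so the old code tracked a `success` flag and cleaned up in `finally`; a native coroutine can simply catch, schedule the cleanup, and `raise`. Side by side, the two shapes look like this (`do_work` and `cleanup` are placeholders):

```python
# Old shape: a success flag plus finally, to avoid re-raising under inlineCallbacks.
def persist_old(do_work, cleanup):
    success = False
    try:
        do_work()
        success = True
    finally:
        if not success:
            cleanup()


# New shape: catch, clean up, and re-raise; the traceback is preserved.
def persist_new(do_work, cleanup):
    try:
        do_work()
    except Exception:
        cleanup()
        raise
```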
29462989 else:
29472990 user_joined_room(self.distributor, user, room_id)
29482991
2949 async def get_room_complexity(self, remote_room_hosts, room_id):
2992 async def get_room_complexity(
2993 self, remote_room_hosts: List[str], room_id: str
2994 ) -> Optional[dict]:
29502995 """
29512996 Fetch the complexity of a remote room over federation.
29522997
29553000 room_id (str): The room ID to ask about.
29563001
29573002 Returns:
2958 Deferred[dict] or Deferred[None]: Dict contains the complexity
3003 Dict contains the complexity
29593004 metric versions, while None means we could not fetch the complexity.
29603005 """
29613006
1818
1919 import logging
2020 import urllib.parse
21 from typing import Awaitable, Callable, Dict, List, Optional, Tuple
2122
2223 from canonicaljson import json
2324 from signedjson.key import decode_verify_key_bytes
3536 )
3637 from synapse.config.emailconfig import ThreepidBehaviour
3738 from synapse.http.client import SimpleHttpClient
39 from synapse.types import JsonDict, Requester
3840 from synapse.util.hash import sha256_and_url_safe_base64
3941 from synapse.util.stringutils import assert_valid_client_secret, random_string
4042
5860 self.federation_http_client = hs.get_http_client()
5961 self.hs = hs
6062
61 async def threepid_from_creds(self, id_server, creds):
63 async def threepid_from_creds(
64 self, id_server: str, creds: Dict[str, str]
65 ) -> Optional[JsonDict]:
6266 """
6367 Retrieve and validate a threepid identifier from a "credentials" dictionary against a
6468 given identity server
6569
6670 Args:
67 id_server (str): The identity server to validate 3PIDs against. Must be a
71 id_server: The identity server to validate 3PIDs against. Must be a
6872 complete URL including the protocol (http(s)://)
69
70 creds (dict[str, str]): Dictionary containing the following keys:
73 creds: Dictionary containing the following keys:
7174 * client_secret|clientSecret: A unique secret str provided by the client
7275 * sid: The ID of the validation session
7376
7477 Returns:
75 Deferred[dict[str,str|int]|None]: A dictionary consisting of response params to
76 the /getValidated3pid endpoint of the Identity Service API, or None if the
77 threepid was not found
78 A dictionary consisting of response params to the /getValidated3pid
79 endpoint of the Identity Service API, or None if the threepid was not found
7880 """
7981 client_secret = creds.get("client_secret") or creds.get("clientSecret")
8082 if not client_secret:
118120 return None
119121
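As the docstring notes, the credentials dictionary may spell the secret either `client_secret` or `clientSecret`, and the handler returns `None` when the required keys are missing. A small standalone sketch of that normalisation; the helper name is hypothetical, not part of Synapse:

```python
from typing import Dict, Optional


def extract_threepid_creds(creds: Dict[str, str]) -> Optional[Dict[str, str]]:
    """Pull the client secret and session ID out of a creds dict,
    accepting either the snake_case or camelCase spelling of the secret."""
    client_secret = creds.get("client_secret") or creds.get("clientSecret")
    session_id = creds.get("sid")
    if not client_secret or not session_id:
        return None
    return {"client_secret": client_secret, "sid": session_id}


assert extract_threepid_creds({"clientSecret": "s3cr3t", "sid": "1"}) == {
    "client_secret": "s3cr3t",
    "sid": "1",
}
```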
120122 async def bind_threepid(
121 self, client_secret, sid, mxid, id_server, id_access_token=None, use_v2=True
122 ):
123 self,
124 client_secret: str,
125 sid: str,
126 mxid: str,
127 id_server: str,
128 id_access_token: Optional[str] = None,
129 use_v2: bool = True,
130 ) -> JsonDict:
123131 """Bind a 3PID to an identity server
124132
125133 Args:
126 client_secret (str): A unique secret provided by the client
127
128 sid (str): The ID of the validation session
129
130 mxid (str): The MXID to bind the 3PID to
131
132 id_server (str): The domain of the identity server to query
133
134 id_access_token (str): The access token to authenticate to the identity
134 client_secret: A unique secret provided by the client
135 sid: The ID of the validation session
136 mxid: The MXID to bind the 3PID to
137 id_server: The domain of the identity server to query
138 id_access_token: The access token to authenticate to the identity
135139 server with, if necessary. Required if use_v2 is true
136
137 use_v2 (bool): Whether to use v2 Identity Service API endpoints. Defaults to True
138
139 Returns:
140 Deferred[dict]: The response from the identity server
140 use_v2: Whether to use v2 Identity Service API endpoints. Defaults to True
141
142 Returns:
143 The response from the identity server
141144 """
142145 logger.debug("Proxying threepid bind request for %s to %s", mxid, id_server)
143146
150153 bind_data = {"sid": sid, "client_secret": client_secret, "mxid": mxid}
151154 if use_v2:
152155 bind_url = "https://%s/_matrix/identity/v2/3pid/bind" % (id_server,)
153 headers["Authorization"] = create_id_access_token_header(id_access_token)
156 headers["Authorization"] = create_id_access_token_header(id_access_token) # type: ignore
154157 else:
155158 bind_url = "https://%s/_matrix/identity/api/v1/3pid/bind" % (id_server,)
156159
186189 )
187190 return res
188191
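The bind path above picks either the v2 or the legacy v1 identity-service endpoint, and only the v2 request carries an `Authorization` header built from the identity-server access token. A hedged sketch of just the URL/header selection (a standalone helper with assumed names; the `Content-Type` header is included purely for illustration):

```python
from typing import Dict, Optional, Tuple


def build_bind_request(
    id_server: str, id_access_token: Optional[str], use_v2: bool = True
) -> Tuple[str, Dict[str, str]]:
    headers: Dict[str, str] = {"Content-Type": "application/json"}
    if use_v2:
        # v2 requires authenticating to the identity server.
        bind_url = "https://%s/_matrix/identity/v2/3pid/bind" % (id_server,)
        headers["Authorization"] = "Bearer %s" % (id_access_token,)
    else:
        bind_url = "https://%s/_matrix/identity/api/v1/3pid/bind" % (id_server,)
    return bind_url, headers


url, hdrs = build_bind_request("id.example.com", "tok", use_v2=True)
assert url.endswith("/_matrix/identity/v2/3pid/bind")
```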
189 async def try_unbind_threepid(self, mxid, threepid):
192 async def try_unbind_threepid(self, mxid: str, threepid: dict) -> bool:
190193 """Attempt to remove a 3PID from an identity server, or if one is not provided, all
191194 identity servers we're aware the binding is present on
192195
193196 Args:
194 mxid (str): Matrix user ID of binding to be removed
195 threepid (dict): Dict with medium & address of binding to be
197 mxid: Matrix user ID of binding to be removed
198 threepid: Dict with medium & address of binding to be
196199 removed, and an optional id_server.
197200
198201 Raises:
199202 SynapseError: If we failed to contact the identity server
200203
201204 Returns:
202 Deferred[bool]: True on success, otherwise False if the identity
205 True on success, otherwise False if the identity
203206 server doesn't support unbinding (or no identity server found to
204207 contact).
205208 """
222225
223226 return changed
224227
225 async def try_unbind_threepid_with_id_server(self, mxid, threepid, id_server):
228 async def try_unbind_threepid_with_id_server(
229 self, mxid: str, threepid: dict, id_server: str
230 ) -> bool:
226231 """Removes a binding from an identity server
227232
228233 Args:
229 mxid (str): Matrix user ID of binding to be removed
230 threepid (dict): Dict with medium & address of binding to be removed
231 id_server (str): Identity server to unbind from
234 mxid: Matrix user ID of binding to be removed
235 threepid: Dict with medium & address of binding to be removed
236 id_server: Identity server to unbind from
232237
233238 Raises:
234239 SynapseError: If we failed to contact the identity server
235240
236241 Returns:
237 Deferred[bool]: True on success, otherwise False if the identity
242 True on success, otherwise False if the identity
238243 server doesn't support unbinding
239244 """
240245 url = "https://%s/_matrix/identity/api/v1/3pid/unbind" % (id_server,)
286291
287292 async def send_threepid_validation(
288293 self,
289 email_address,
290 client_secret,
291 send_attempt,
292 send_email_func,
293 next_link=None,
294 ):
294 email_address: str,
295 client_secret: str,
296 send_attempt: int,
297 send_email_func: Callable[[str, str, str, str], Awaitable],
298 next_link: Optional[str] = None,
299 ) -> str:
295300 """Send a threepid validation email for password reset or
296301 registration purposes
297302
298303 Args:
299 email_address (str): The user's email address
300 client_secret (str): The provided client secret
301 send_attempt (int): Which send attempt this is
302 send_email_func (func): A function that takes an email address, token,
303 client_secret and session_id, sends an email
304 and returns a Deferred.
305 next_link (str|None): The URL to redirect the user to after validation
304 email_address: The user's email address
305 client_secret: The provided client secret
306 send_attempt: Which send attempt this is
307 send_email_func: A function that takes an email address, token,
308 client_secret and session_id, sends an email
309 and returns an Awaitable.
310 next_link: The URL to redirect the user to after validation
306311
307312 Returns:
308313 The new session_id upon success
371376 return session_id
372377
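The new annotation `Callable[[str, str, str, str], Awaitable]` pins down the shape of `send_email_func`: a callable taking the email address, token, client secret and session ID and returning something awaitable. A minimal example of a function satisfying that signature (purely illustrative; it only logs):

```python
from typing import Awaitable, Callable


async def send_validation_email(
    email_address: str, token: str, client_secret: str, session_id: str
) -> None:
    # A real implementation would render a template and hand the message to
    # an SMTP sender; this stub just records what it would have sent.
    print("Would email %s a token for session %s" % (email_address, session_id))


# The coroutine function above satisfies the documented shape:
send_email_func: Callable[[str, str, str, str], Awaitable] = send_validation_email
```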
373378 async def requestEmailToken(
374 self, id_server, email, client_secret, send_attempt, next_link=None
375 ):
379 self,
380 id_server: str,
381 email: str,
382 client_secret: str,
383 send_attempt: int,
384 next_link: Optional[str] = None,
385 ) -> JsonDict:
376386 """
377387 Request an external server send an email on our behalf for the purposes of threepid
378388 validation.
379389
380390 Args:
381 id_server (str): The identity server to proxy to
382 email (str): The email to send the message to
383 client_secret (str): The unique client_secret sends by the user
384 send_attempt (int): Which attempt this is
391 id_server: The identity server to proxy to
392 email: The email to send the message to
393 client_secret: The unique client_secret sent by the user

394 send_attempt: Which attempt this is
385395 next_link: A link to redirect the user to once they submit the token
386396
387397 Returns:
418428
419429 async def requestMsisdnToken(
420430 self,
421 id_server,
422 country,
423 phone_number,
424 client_secret,
425 send_attempt,
426 next_link=None,
427 ):
431 id_server: str,
432 country: str,
433 phone_number: str,
434 client_secret: str,
435 send_attempt: int,
436 next_link: Optional[str] = None,
437 ) -> JsonDict:
428438 """
429439 Request an external server send an SMS message on our behalf for the purposes of
430440 threepid validation.
431441 Args:
432 id_server (str): The identity server to proxy to
433 country (str): The country code of the phone number
434 phone_number (str): The number to send the message to
435 client_secret (str): The unique client_secret sends by the user
436 send_attempt (int): Which attempt this is
442 id_server: The identity server to proxy to
443 country: The country code of the phone number
444 phone_number: The number to send the message to
445 client_secret: The unique client_secret sent by the user
446 send_attempt: Which attempt this is
437447 next_link: A link to redirect the user to once they submit the token
438448
439449 Returns:
479489 )
480490 return data
481491
482 async def validate_threepid_session(self, client_secret, sid):
492 async def validate_threepid_session(
493 self, client_secret: str, sid: str
494 ) -> Optional[JsonDict]:
483495 """Validates a threepid session with only the client secret and session ID
484496 Tries validating against any configured account_threepid_delegates as well as locally.
485497
486498 Args:
487 client_secret (str): A secret provided by the client
488
489 sid (str): The ID of the session
490
491 Returns:
492 Dict[str, str|int] if validation was successful, otherwise None
499 client_secret: A secret provided by the client
500 sid: The ID of the session
501
502 Returns:
503 The json response if validation was successful, otherwise None
493504 """
494505 # XXX: We shouldn't need to keep wrapping and unwrapping this value
495506 threepid_creds = {"client_secret": client_secret, "sid": sid}
522533
523534 return validation_session
524535
525 async def proxy_msisdn_submit_token(self, id_server, client_secret, sid, token):
536 async def proxy_msisdn_submit_token(
537 self, id_server: str, client_secret: str, sid: str, token: str
538 ) -> JsonDict:
526539 """Proxy a POST submitToken request to an identity server for verification purposes
527540
528541 Args:
529 id_server (str): The identity server URL to contact
530
531 client_secret (str): Secret provided by the client
532
533 sid (str): The ID of the session
534
535 token (str): The verification token
542 id_server: The identity server URL to contact
543 client_secret: Secret provided by the client
544 sid: The ID of the session
545 token: The verification token
536546
537547 Raises:
538548 SynapseError: If we failed to contact the identity server
539549
540550 Returns:
541 Deferred[dict]: The response dict from the identity server
551 The response dict from the identity server
542552 """
543553 body = {"client_secret": client_secret, "sid": sid, "token": token}
544554
553563 logger.warning("Error contacting msisdn account_threepid_delegate: %s", e)
554564 raise SynapseError(400, "Error contacting the identity server")
555565
556 async def lookup_3pid(self, id_server, medium, address, id_access_token=None):
566 async def lookup_3pid(
567 self,
568 id_server: str,
569 medium: str,
570 address: str,
571 id_access_token: Optional[str] = None,
572 ) -> Optional[str]:
557573 """Looks up a 3pid in the passed identity server.
558574
559575 Args:
560 id_server (str): The server name (including port, if required)
576 id_server: The server name (including port, if required)
561577 of the identity server to use.
562 medium (str): The type of the third party identifier (e.g. "email").
563 address (str): The third party identifier (e.g. "foo@example.com").
564 id_access_token (str|None): The access token to authenticate to the identity
578 medium: The type of the third party identifier (e.g. "email").
579 address: The third party identifier (e.g. "foo@example.com").
580 id_access_token: The access token to authenticate to the identity
565581 server with
566582
567583 Returns:
568 str|None: the matrix ID of the 3pid, or None if it is not recognized.
584 the matrix ID of the 3pid, or None if it is not recognized.
569585 """
570586 if id_access_token is not None:
571587 try:
590606
591607 return await self._lookup_3pid_v1(id_server, medium, address)
592608
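`lookup_3pid` now documents the fallback explicitly: when an identity-server access token was supplied it tries the v2 (authenticated) lookup first, and otherwise, or if that attempt fails, it falls back to the legacy v1 lookup. A rough standalone sketch of that control flow; `lookup_v2`/`lookup_v1` are hypothetical callables (the real v2 lookup also takes the access token), and the blanket `except` stands in for the narrower error handling in the real handler:

```python
from typing import Awaitable, Callable, Optional

LookupFn = Callable[[str, str, str], Awaitable[Optional[str]]]


async def lookup_3pid(
    id_server: str,
    medium: str,
    address: str,
    id_access_token: Optional[str],
    lookup_v2: LookupFn,
    lookup_v1: LookupFn,
) -> Optional[str]:
    # Prefer the authenticated v2 lookup when we hold an access token for
    # the identity server; fall back to v1 if v2 is unavailable.
    if id_access_token is not None:
        try:
            return await lookup_v2(id_server, medium, address)
        except Exception:
            pass
    return await lookup_v1(id_server, medium, address)
```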
593 async def _lookup_3pid_v1(self, id_server, medium, address):
609 async def _lookup_3pid_v1(
610 self, id_server: str, medium: str, address: str
611 ) -> Optional[str]:
594612 """Looks up a 3pid in the passed identity server using v1 lookup.
595613
596614 Args:
597 id_server (str): The server name (including port, if required)
615 id_server: The server name (including port, if required)
598616 of the identity server to use.
599 medium (str): The type of the third party identifier (e.g. "email").
600 address (str): The third party identifier (e.g. "foo@example.com").
601
602 Returns:
603 str: the matrix ID of the 3pid, or None if it is not recognized.
617 medium: The type of the third party identifier (e.g. "email").
618 address: The third party identifier (e.g. "foo@example.com").
619
620 Returns:
621 the matrix ID of the 3pid, or None if it is not recognized.
604622 """
605623 try:
606624 data = await self.blacklisting_http_client.get_json(
620638
621639 return None
622640
623 async def _lookup_3pid_v2(self, id_server, id_access_token, medium, address):
641 async def _lookup_3pid_v2(
642 self, id_server: str, id_access_token: str, medium: str, address: str
643 ) -> Optional[str]:
624644 """Looks up a 3pid in the passed identity server using v2 lookup.
625645
626646 Args:
627 id_server (str): The server name (including port, if required)
647 id_server: The server name (including port, if required)
628648 of the identity server to use.
629 id_access_token (str): The access token to authenticate to the identity server with
630 medium (str): The type of the third party identifier (e.g. "email").
631 address (str): The third party identifier (e.g. "foo@example.com").
632
633 Returns:
634 Deferred[str|None]: the matrix ID of the 3pid, or None if it is not recognised.
649 id_access_token: The access token to authenticate to the identity server with
650 medium: The type of the third party identifier (e.g. "email").
651 address: The third party identifier (e.g. "foo@example.com").
652
653 Returns:
654 the matrix ID of the 3pid, or None if it is not recognised.
635655 """
636656 # Check what hashing details are supported by this identity server
637657 try:
756776
757777 async def ask_id_server_for_third_party_invite(
758778 self,
759 requester,
760 id_server,
761 medium,
762 address,
763 room_id,
764 inviter_user_id,
765 room_alias,
766 room_avatar_url,
767 room_join_rules,
768 room_name,
769 inviter_display_name,
770 inviter_avatar_url,
771 id_access_token=None,
772 ):
779 requester: Requester,
780 id_server: str,
781 medium: str,
782 address: str,
783 room_id: str,
784 inviter_user_id: str,
785 room_alias: str,
786 room_avatar_url: str,
787 room_join_rules: str,
788 room_name: str,
789 inviter_display_name: str,
790 inviter_avatar_url: str,
791 id_access_token: Optional[str] = None,
792 ) -> Tuple[str, List[Dict[str, str]], Dict[str, str], str]:
773793 """
774794 Asks an identity server for a third party invite.
775795
776796 Args:
777 requester (Requester)
778 id_server (str): hostname + optional port for the identity server.
779 medium (str): The literal string "email".
780 address (str): The third party address being invited.
781 room_id (str): The ID of the room to which the user is invited.
782 inviter_user_id (str): The user ID of the inviter.
783 room_alias (str): An alias for the room, for cosmetic notifications.
784 room_avatar_url (str): The URL of the room's avatar, for cosmetic
797 requester
798 id_server: hostname + optional port for the identity server.
799 medium: The literal string "email".
800 address: The third party address being invited.
801 room_id: The ID of the room to which the user is invited.
802 inviter_user_id: The user ID of the inviter.
803 room_alias: An alias for the room, for cosmetic notifications.
804 room_avatar_url: The URL of the room's avatar, for cosmetic
785805 notifications.
786 room_join_rules (str): The join rules of the email (e.g. "public").
787 room_name (str): The m.room.name of the room.
788 inviter_display_name (str): The current display name of the
806 room_join_rules: The join rules of the room (e.g. "public").
807 room_name: The m.room.name of the room.
808 inviter_display_name: The current display name of the
789809 inviter.
790 inviter_avatar_url (str): The URL of the inviter's avatar.
810 inviter_avatar_url: The URL of the inviter's avatar.
791811 id_access_token (str|None): The access token to authenticate to the identity
792812 server with
793813
794814 Returns:
795 A deferred tuple containing:
796 token (str): The token which must be signed to prove authenticity.
815 A tuple containing:
816 token: The token which must be signed to prove authenticity.
797817 public_keys ([{"public_key": str, "key_validity_url": str}]):
798818 public_key is a base64-encoded ed25519 public key.
799819 fallback_public_key: One element from public_keys.
800 display_name (str): A user-friendly name to represent the invited
801 user.
820 display_name: A user-friendly name to represent the invited user.
802821 """
803822 invite_config = {
804823 "medium": medium,
895914 return token, public_keys, fallback_public_key, display_name
896915
897916
898 def create_id_access_token_header(id_access_token):
917 def create_id_access_token_header(id_access_token: str) -> List[str]:
899918 """Create an Authorization header for passing to SimpleHttpClient as the header value
900919 of an HTTP request.
901920
902921 Args:
903 id_access_token (str): An identity server access token.
922 id_access_token: An identity server access token.
904923
905924 Returns:
906 list[str]: The ascii-encoded bearer token encased in a list.
925 The ascii-encoded bearer token encased in a list.
907926 """
908927 # Prefix with Bearer
909928 bearer_token = "Bearer %s" % id_access_token
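Given the new `-> List[str]` annotation, the helper's contract is: prefix the identity-server token with the `Bearer` scheme and wrap it in a single-element list, the shape `SimpleHttpClient` expects for a header value. A minimal sketch consistent with that docstring (not a verbatim copy of the Synapse implementation):

```python
from typing import List


def create_id_access_token_header(id_access_token: str) -> List[str]:
    # "Bearer <token>", encased in a list so it can be used directly as the
    # value of an HTTP header (headers may carry multiple values).
    return ["Bearer %s" % (id_access_token,)]


assert create_id_access_token_header("abc") == ["Bearer abc"]
```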
1414 # See the License for the specific language governing permissions and
1515 # limitations under the License.
1616 import logging
17 from typing import TYPE_CHECKING, Optional, Tuple
17 from typing import TYPE_CHECKING, List, Optional, Tuple
1818
1919 from canonicaljson import encode_canonical_json, json
2020
21 from twisted.internet import defer
22 from twisted.internet.defer import succeed
2321 from twisted.internet.interfaces import IDelayedCall
2422
2523 from synapse import event_auth
4038 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
4139 from synapse.api.urls import ConsentURIBuilder
4240 from synapse.events import EventBase
41 from synapse.events.builder import EventBuilder
42 from synapse.events.snapshot import EventContext
4343 from synapse.events.validator import EventValidator
4444 from synapse.logging.context import run_in_background
4545 from synapse.metrics.background_process_metrics import run_as_background_process
4646 from synapse.replication.http.send_event import ReplicationSendEventRestServlet
4747 from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour
4848 from synapse.storage.state import StateFilter
49 from synapse.types import Collection, RoomAlias, UserID, create_requester
49 from synapse.types import (
50 Collection,
51 Requester,
52 RoomAlias,
53 StreamToken,
54 UserID,
55 create_requester,
56 )
5057 from synapse.util.async_helpers import Linearizer
5158 from synapse.util.frozenutils import frozendict_json_encoder
5259 from synapse.util.metrics import measure_func
8390 "_schedule_next_expiry", self._schedule_next_expiry
8491 )
8592
86 @defer.inlineCallbacks
87 def get_room_data(
88 self, user_id=None, room_id=None, event_type=None, state_key="", is_guest=False
89 ):
93 async def get_room_data(
94 self,
95 user_id: str = None,
96 room_id: str = None,
97 event_type: Optional[str] = None,
98 state_key: str = "",
99 is_guest: bool = False,
100 ) -> dict:
90101 """ Get data from a room.
91102
92103 Args:
93 event : The room path event
104 user_id
105 room_id
106 event_type
107 state_key
108 is_guest
94109 Returns:
95110 The path data content.
96111 Raises:
99114 (
100115 membership,
101116 membership_event_id,
102 ) = yield self.auth.check_user_in_room_or_world_readable(
117 ) = await self.auth.check_user_in_room_or_world_readable(
103118 room_id, user_id, allow_departed_users=True
104119 )
105120
106121 if membership == Membership.JOIN:
107 data = yield self.state.get_current_state(room_id, event_type, state_key)
122 data = await self.state.get_current_state(room_id, event_type, state_key)
108123 elif membership == Membership.LEAVE:
109124 key = (event_type, state_key)
110 room_state = yield self.state_store.get_state_for_events(
125 room_state = await self.state_store.get_state_for_events(
111126 [membership_event_id], StateFilter.from_types([key])
112127 )
113128 data = room_state[membership_event_id].get(key)
114129
115130 return data
116131
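This file continues the repository-wide move from Twisted's `@defer.inlineCallbacks` generators to native coroutines: the decorator disappears and each `yield` on a Deferred becomes an `await`. As a generic before/after of the mechanical transformation, using asyncio and made-up names rather than Synapse's own handlers:

```python
import asyncio


# Old shape (Twisted):
#
#     @defer.inlineCallbacks
#     def get_room_data(self, room_id):
#         data = yield self.store.fetch(room_id)
#         return data
#
# New shape (native coroutine):
async def get_room_data(store, room_id: str) -> dict:
    data = await store.fetch(room_id)
    return data


class FakeStore:
    async def fetch(self, room_id: str) -> dict:
        return {"room_id": room_id}


print(asyncio.run(get_room_data(FakeStore(), "!room:example.com")))
```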
117 @defer.inlineCallbacks
118 def get_state_events(
132 async def get_state_events(
119133 self,
120 user_id,
121 room_id,
122 state_filter=StateFilter.all(),
123 at_token=None,
124 is_guest=False,
125 ):
134 user_id: str,
135 room_id: str,
136 state_filter: StateFilter = StateFilter.all(),
137 at_token: Optional[StreamToken] = None,
138 is_guest: bool = False,
139 ) -> List[dict]:
126140 """Retrieve all state events for a given room. If the user is
127141 joined to the room then return the current state. If the user has
128142 left the room return the state events from when they left. If an explicit
130144 visible.
131145
132146 Args:
133 user_id(str): The user requesting state events.
134 room_id(str): The room ID to get all state events from.
135 state_filter (StateFilter): The state filter used to fetch state
136 from the database.
137 at_token(StreamToken|None): the stream token of the at which we are requesting
147 user_id: The user requesting state events.
148 room_id: The room ID to get all state events from.
149 state_filter: The state filter used to fetch state from the database.
150 at_token: the stream token at which we are requesting
138151 the state. If the user is not allowed to view the state as of that
139152 stream token, we raise a 403 SynapseError. If None, returns the current
140153 state based on the current_state_events table.
141 is_guest(bool): whether this user is a guest
154 is_guest: whether this user is a guest
142155 Returns:
143156 A list of dicts representing state events. [{}, {}, {}]
144157 Raises:
152165 # get_recent_events_for_room operates by topo ordering. This therefore
153166 # does not reliably give you the state at the given stream position.
154167 # (https://github.com/matrix-org/synapse/issues/3305)
155 last_events, _ = yield self.store.get_recent_events_for_room(
168 last_events, _ = await self.store.get_recent_events_for_room(
156169 room_id, end_token=at_token.room_key, limit=1
157170 )
158171
159172 if not last_events:
160173 raise NotFoundError("Can't find event for token %s" % (at_token,))
161174
162 visible_events = yield filter_events_for_client(
175 visible_events = await filter_events_for_client(
163176 self.storage, user_id, last_events, filter_send_to_client=False
164177 )
165178
166179 event = last_events[0]
167180 if visible_events:
168 room_state = yield self.state_store.get_state_for_events(
181 room_state = await self.state_store.get_state_for_events(
169182 [event.event_id], state_filter=state_filter
170183 )
171184 room_state = room_state[event.event_id]
179192 (
180193 membership,
181194 membership_event_id,
182 ) = yield self.auth.check_user_in_room_or_world_readable(
195 ) = await self.auth.check_user_in_room_or_world_readable(
183196 room_id, user_id, allow_departed_users=True
184197 )
185198
186199 if membership == Membership.JOIN:
187 state_ids = yield self.store.get_filtered_current_state_ids(
200 state_ids = await self.store.get_filtered_current_state_ids(
188201 room_id, state_filter=state_filter
189202 )
190 room_state = yield self.store.get_events(state_ids.values())
203 room_state = await self.store.get_events(state_ids.values())
191204 elif membership == Membership.LEAVE:
192 room_state = yield self.state_store.get_state_for_events(
205 room_state = await self.state_store.get_state_for_events(
193206 [membership_event_id], state_filter=state_filter
194207 )
195208 room_state = room_state[membership_event_id]
196209
197210 now = self.clock.time_msec()
198 events = yield self._event_serializer.serialize_events(
211 events = await self._event_serializer.serialize_events(
199212 room_state.values(),
200213 now,
201214 # We don't bother bundling aggregations in when asked for state
204217 )
205218 return events
206219
207 @defer.inlineCallbacks
208 def get_joined_members(self, requester, room_id):
220 async def get_joined_members(self, requester: Requester, room_id: str) -> dict:
209221 """Get all the joined members in the room and their profile information.
210222
211223 If the user has left the room return the state events from when they left.
212224
213225 Args:
214 requester(Requester): The user requesting state events.
215 room_id(str): The room ID to get all state events from.
226 requester: The user requesting state events.
227 room_id: The room ID to get all state events from.
216228 Returns:
217229 A dict of user_id to profile info
218230 """
220232 if not requester.app_service:
221233 # We check AS auth after fetching the room membership, as it
222234 # requires us to pull out all joined members anyway.
223 membership, _ = yield self.auth.check_user_in_room_or_world_readable(
235 membership, _ = await self.auth.check_user_in_room_or_world_readable(
224236 room_id, user_id, allow_departed_users=True
225237 )
226238 if membership != Membership.JOIN:
228240 "Getting joined members after leaving is not implemented"
229241 )
230242
231 users_with_profile = yield self.state.get_current_users_in_room(room_id)
243 users_with_profile = await self.state.get_current_users_in_room(room_id)
232244
233245 # If this is an AS, double check that they are allowed to see the members.
234246 # This can either be because the AS user is in the room or because there
249261 for user_id, profile in users_with_profile.items()
250262 }
251263
252 def maybe_schedule_expiry(self, event):
264 def maybe_schedule_expiry(self, event: EventBase):
253265 """Schedule the expiry of an event if there's not already one scheduled,
254266 or if the one running is for an event that will expire after the provided
255267 timestamp.
258270 the master process, and therefore needs to be run on there.
259271
260272 Args:
261 event (EventBase): The event to schedule the expiry of.
273 event: The event to schedule the expiry of.
262274 """
263275
264276 expiry_ts = event.content.get(EventContentFields.SELF_DESTRUCT_AFTER)
269281 # a task scheduled for a timestamp that's sooner than the provided one.
270282 self._schedule_expiry_for_event(event.event_id, expiry_ts)
271283
272 @defer.inlineCallbacks
273 def _schedule_next_expiry(self):
284 async def _schedule_next_expiry(self):
274285 """Retrieve the ID and the expiry timestamp of the next event to be expired,
275286 and schedule an expiry task for it.
276287
278289 future call to save_expiry_ts can schedule a new expiry task.
279290 """
280291 # Try to get the expiry timestamp of the next event to expire.
281 res = yield self.store.get_next_event_to_expire()
292 res = await self.store.get_next_event_to_expire()
282293 if res:
283294 event_id, expiry_ts = res
284295 self._schedule_expiry_for_event(event_id, expiry_ts)
285296
286 def _schedule_expiry_for_event(self, event_id, expiry_ts):
297 def _schedule_expiry_for_event(self, event_id: str, expiry_ts: int):
287298 """Schedule an expiry task for the provided event if there's not already one
288299 scheduled at a timestamp that's sooner than the provided one.
289300
290301 Args:
291 event_id (str): The ID of the event to expire.
292 expiry_ts (int): The timestamp at which to expire the event.
302 event_id: The ID of the event to expire.
303 expiry_ts: The timestamp at which to expire the event.
293304 """
294305 if self._scheduled_expiry:
295306 # If the provided timestamp refers to a time before the scheduled time of the
319330 event_id,
320331 )
321332
322 @defer.inlineCallbacks
323 def _expire_event(self, event_id):
333 async def _expire_event(self, event_id: str):
324334 """Retrieve and expire an event that needs to be expired from the database.
325335
326336 If the event doesn't exist in the database, log it and delete the expiry date
335345 try:
336346 # Expire the event if we know about it. This function also deletes the expiry
337347 # date from the database in the same database transaction.
338 yield self.store.expire_event(event_id)
348 await self.store.expire_event(event_id)
339349 except Exception as e:
340350 logger.error("Could not expire event %s: %r", event_id, e)
341351
342352 # Schedule the expiry of the next event to expire.
343 yield self._schedule_next_expiry()
353 await self._schedule_next_expiry()
344354
345355
346356 # The duration (in ms) after which rooms should be removed
422432
423433 self._dummy_events_threshold = hs.config.dummy_events_threshold
424434
425 @defer.inlineCallbacks
426 def create_event(
435 async def create_event(
427436 self,
428 requester,
429 event_dict,
430 token_id=None,
431 txn_id=None,
437 requester: Requester,
438 event_dict: dict,
439 token_id: Optional[str] = None,
440 txn_id: Optional[str] = None,
432441 prev_event_ids: Optional[Collection[str]] = None,
433 require_consent=True,
434 ):
442 require_consent: bool = True,
443 ) -> Tuple[EventBase, EventContext]:
435444 """
436445 Given a dict from a client, create a new event.
437446
442451
443452 Args:
444453 requester
445 event_dict (dict): An entire event
446 token_id (str)
447 txn_id (str)
448
454 event_dict: An entire event
455 token_id
456 txn_id
449457 prev_event_ids:
450458 the forward extremities to use as the prev_events for the
451459 new event.
452460
453461 If None, they will be requested from the database.
454
455 require_consent (bool): Whether to check if the requester has
456 consented to privacy policy.
462 require_consent: Whether to check if the requester has
463 consented to the privacy policy.
457464 Raises:
458465 ResourceLimitError if server is blocked to some resource being
459466 exceeded
460467 Returns:
461 Tuple of created event (FrozenEvent), Context
462 """
463 yield self.auth.check_auth_blocking(requester.user.to_string())
468 Tuple of created event, Context
469 """
470 await self.auth.check_auth_blocking(requester.user.to_string())
464471
465472 if event_dict["type"] == EventTypes.Create and event_dict["state_key"] == "":
466473 room_version = event_dict["content"]["room_version"]
467474 else:
468475 try:
469 room_version = yield self.store.get_room_version_id(
476 room_version = await self.store.get_room_version_id(
470477 event_dict["room_id"]
471478 )
472479 except NotFoundError:
487494
488495 try:
489496 if "displayname" not in content:
490 displayname = yield profile.get_displayname(target)
497 displayname = await profile.get_displayname(target)
491498 if displayname is not None:
492499 content["displayname"] = displayname
493500 if "avatar_url" not in content:
494 avatar_url = yield profile.get_avatar_url(target)
501 avatar_url = await profile.get_avatar_url(target)
495502 if avatar_url is not None:
496503 content["avatar_url"] = avatar_url
497504 except Exception as e:
499506 "Failed to get profile information for %r: %s", target, e
500507 )
501508
502 is_exempt = yield self._is_exempt_from_privacy_policy(builder, requester)
509 is_exempt = await self._is_exempt_from_privacy_policy(builder, requester)
503510 if require_consent and not is_exempt:
504 yield self.assert_accepted_privacy_policy(requester)
511 await self.assert_accepted_privacy_policy(requester)
505512
506513 if token_id is not None:
507514 builder.internal_metadata.token_id = token_id
509516 if txn_id is not None:
510517 builder.internal_metadata.txn_id = txn_id
511518
512 event, context = yield self.create_new_client_event(
519 event, context = await self.create_new_client_event(
513520 builder=builder, requester=requester, prev_event_ids=prev_event_ids,
514521 )
515522
525532 # federation as well as those created locally. As of room v3, aliases events
526533 # can be created by users that are not in the room, therefore we have to
527534 # tolerate them in event_auth.check().
528 prev_state_ids = yield context.get_prev_state_ids()
535 prev_state_ids = await context.get_prev_state_ids()
529536 prev_event_id = prev_state_ids.get((EventTypes.Member, event.sender))
530537 prev_event = (
531 yield self.store.get_event(prev_event_id, allow_none=True)
538 await self.store.get_event(prev_event_id, allow_none=True)
532539 if prev_event_id
533540 else None
534541 )
551558
552559 return (event, context)
553560
554 def _is_exempt_from_privacy_policy(self, builder, requester):
561 async def _is_exempt_from_privacy_policy(
562 self, builder: EventBuilder, requester: Requester
563 ) -> bool:
555564 """"Determine if an event to be sent is exempt from having to consent
556565 to the privacy policy
557566
558567 Args:
559 builder (synapse.events.builder.EventBuilder): event being created
560 requester (Requster): user requesting this event
568 builder: event being created
569 requester: user requesting this event
561570
562571 Returns:
563 Deferred[bool]: true if the event can be sent without the user
564 consenting
572 true if the event can be sent without the user consenting
565573 """
566574 # the only thing the user can do is join the server notices room.
567575 if builder.type == EventTypes.Member:
568576 membership = builder.content.get("membership", None)
569577 if membership == Membership.JOIN:
570 return self._is_server_notices_room(builder.room_id)
578 return await self._is_server_notices_room(builder.room_id)
571579 elif membership == Membership.LEAVE:
572580 # the user is always allowed to leave (but not kick people)
573581 return builder.state_key == requester.user.to_string()
574 return succeed(False)
575
576 @defer.inlineCallbacks
577 def _is_server_notices_room(self, room_id):
582 return False
583
584 async def _is_server_notices_room(self, room_id: str) -> bool:
578585 if self.config.server_notices_mxid is None:
579586 return False
580 user_ids = yield self.store.get_users_in_room(room_id)
587 user_ids = await self.store.get_users_in_room(room_id)
581588 return self.config.server_notices_mxid in user_ids
582589
583 @defer.inlineCallbacks
584 def assert_accepted_privacy_policy(self, requester):
590 async def assert_accepted_privacy_policy(self, requester: Requester) -> None:
585591 """Check if a user has accepted the privacy policy
586592
587593 Called when the given user is about to do something that requires
590596 raised.
591597
592598 Args:
593 requester (synapse.types.Requester):
594 The user making the request
599 requester: The user making the request
595600
596601 Returns:
597 Deferred[None]: returns normally if the user has consented or is
598 exempt
602 Returns normally if the user has consented or is exempt
599603
600604 Raises:
601605 ConsentNotGivenError: if the user has not given consent yet
616620 ):
617621 return
618622
619 u = yield self.store.get_user_by_id(user_id)
623 u = await self.store.get_user_by_id(user_id)
620624 assert u is not None
621625 if u["user_type"] in (UserTypes.SUPPORT, UserTypes.BOT):
622626 # support and bot users are not required to consent
634638 raise ConsentNotGivenError(msg=msg, consent_uri=consent_uri)
635639
636640 async def send_nonmember_event(
637 self, requester, event, context, ratelimit=True
641 self,
642 requester: Requester,
643 event: EventBase,
644 context: EventContext,
645 ratelimit: bool = True,
638646 ) -> int:
639647 """
640648 Persists and notifies local clients and federation of an event.
641649
642650 Args:
643 event (FrozenEvent) the event to send.
644 context (Context) the context of the event.
645 ratelimit (bool): Whether to rate limit this send.
646 is_guest (bool): Whether the sender is a guest.
651 requester
652 event the event to send.
653 context: the context of the event.
654 ratelimit: Whether to rate limit this send.
647655
648656 Return:
649657 The stream_id of the persisted event.
671679 requester=requester, event=event, context=context, ratelimit=ratelimit
672680 )
673681
674 @defer.inlineCallbacks
675 def deduplicate_state_event(self, event, context):
682 async def deduplicate_state_event(
683 self, event: EventBase, context: EventContext
684 ) -> None:
676685 """
677686 Checks whether event is in the latest resolved state in context.
678687
679688 If so, returns the version of the event in context.
680689 Otherwise, returns None.
681690 """
682 prev_state_ids = yield context.get_prev_state_ids()
691 prev_state_ids = await context.get_prev_state_ids()
683692 prev_event_id = prev_state_ids.get((event.type, event.state_key))
684693 if not prev_event_id:
685694 return
686 prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
695 prev_event = await self.store.get_event(prev_event_id, allow_none=True)
687696 if not prev_event:
688697 return
689698
695704 return
696705
697706 async def create_and_send_nonmember_event(
698 self, requester, event_dict, ratelimit=True, txn_id=None
707 self,
708 requester: Requester,
709 event_dict: EventBase,
710 ratelimit: bool = True,
711 txn_id: Optional[str] = None,
699712 ) -> Tuple[EventBase, int]:
700713 """
701714 Creates an event, then sends it.
725738 return event, stream_id
726739
727740 @measure_func("create_new_client_event")
728 @defer.inlineCallbacks
729 def create_new_client_event(
730 self, builder, requester=None, prev_event_ids: Optional[Collection[str]] = None
731 ):
741 async def create_new_client_event(
742 self,
743 builder: EventBuilder,
744 requester: Optional[Requester] = None,
745 prev_event_ids: Optional[Collection[str]] = None,
746 ) -> Tuple[EventBase, EventContext]:
732747 """Create a new event for a local client
733748
734749 Args:
735 builder (EventBuilder):
736
737 requester (synapse.types.Requester|None):
738
750 builder:
751 requester:
739752 prev_event_ids:
740753 the forward extremities to use as the prev_events for the
741754 new event.
743756 If None, they will be requested from the database.
744757
745758 Returns:
746 Deferred[(synapse.events.EventBase, synapse.events.snapshot.EventContext)]
759 Tuple of created event, context
747760 """
748761
749762 if prev_event_ids is not None:
752765 % (len(prev_event_ids),)
753766 )
754767 else:
755 prev_event_ids = yield self.store.get_prev_events_for_room(builder.room_id)
756
757 event = yield builder.build(prev_event_ids=prev_event_ids)
758 context = yield self.state.compute_event_context(event)
768 prev_event_ids = await self.store.get_prev_events_for_room(builder.room_id)
769
770 event = await builder.build(prev_event_ids=prev_event_ids)
771 context = await self.state.compute_event_context(event)
759772 if requester:
760773 context.app_service = requester.app_service
761774
769782 relates_to = relation["event_id"]
770783 aggregation_key = relation["key"]
771784
772 already_exists = yield self.store.has_user_annotated_event(
785 already_exists = await self.store.has_user_annotated_event(
773786 relates_to, event.type, aggregation_key, event.sender
774787 )
775788 if already_exists:
781794
782795 @measure_func("handle_new_client_event")
783796 async def handle_new_client_event(
784 self, requester, event, context, ratelimit=True, extra_users=[]
797 self,
798 requester: Requester,
799 event: EventBase,
800 context: EventContext,
801 ratelimit: bool = True,
802 extra_users: List[UserID] = [],
785803 ) -> int:
786804 """Processes a new event. This includes checking auth, persisting it,
787805 notifying users, sending to remote servers, etc.
790808 processing.
791809
792810 Args:
793 requester (Requester)
794 event (FrozenEvent)
795 context (EventContext)
796 ratelimit (bool)
797 extra_users (list(UserID)): Any extra users to notify about event
811 requester
812 event
813 context
814 ratelimit
815 extra_users: Any extra users to notify about event
798816
799817 Return:
800818 The stream_id of the persisted event.
838856
839857 await self.action_generator.handle_push_actions_for_event(event, context)
840858
841 # reraise does not allow inlineCallbacks to preserve the stacktrace, so we
842 # hack around with a try/finally instead.
843 success = False
844859 try:
845860 # If we're a worker we need to hit out to the master.
846861 if not self._is_event_writer:
856871 )
857872 stream_id = result["stream_id"]
858873 event.internal_metadata.stream_ordering = stream_id
859 success = True
860874 return stream_id
861875
862876 stream_id = await self.persist_and_notify_client_event(
863877 requester, event, context, ratelimit=ratelimit, extra_users=extra_users
864878 )
865879
866 success = True
867880 return stream_id
868 finally:
869 if not success:
870 # Ensure that we actually remove the entries in the push actions
871 # staging area, if we calculated them.
872 run_in_background(
873 self.store.remove_push_actions_from_staging, event.event_id
874 )
875
876 @defer.inlineCallbacks
877 def _validate_canonical_alias(
878 self, directory_handler, room_alias_str, expected_room_id
879 ):
881 except Exception:
882 # Ensure that we actually remove the entries in the push actions
883 # staging area, if we calculated them.
884 run_in_background(
885 self.store.remove_push_actions_from_staging, event.event_id
886 )
887 raise
888
889 async def _validate_canonical_alias(
890 self, directory_handler, room_alias_str: str, expected_room_id: str
891 ) -> None:
880892 """
881893 Ensure that the given room alias points to the expected room ID.
882894
887899 """
888900 room_alias = RoomAlias.from_string(room_alias_str)
889901 try:
890 mapping = yield defer.ensureDeferred(
891 directory_handler.get_association(room_alias)
892 )
902 mapping = await directory_handler.get_association(room_alias)
893903 except SynapseError as e:
894904 # Turn M_NOT_FOUND errors into M_BAD_ALIAS errors.
895905 if e.errcode == Codes.NOT_FOUND:
908918 )
909919
910920 async def persist_and_notify_client_event(
911 self, requester, event, context, ratelimit=True, extra_users=[]
921 self,
922 requester: Requester,
923 event: EventBase,
924 context: EventContext,
925 ratelimit: bool = True,
926 extra_users: List[UserID] = [],
912927 ) -> int:
913928 """Called when we have fully built the event, have already
914929 calculated the push actions for the event, and checked auth.
11011116
11021117 return event_stream_id
11031118
1104 async def _bump_active_time(self, user):
1119 async def _bump_active_time(self, user: UserID) -> None:
11051120 try:
11061121 presence = self.hs.get_presence_handler()
11071122 await presence.bump_presence_active_time(user)
2929 from prometheus_client import Counter
3030 from typing_extensions import ContextManager
3131
32 from twisted.internet import defer
33
3432 import synapse.metrics
3533 from synapse.api.constants import EventTypes, Membership, PresenceState
3634 from synapse.api.errors import SynapseError
3836 from synapse.logging.utils import log_function
3937 from synapse.metrics import LaterGauge
4038 from synapse.metrics.background_process_metrics import run_as_background_process
39 from synapse.state import StateHandler
40 from synapse.storage.data_stores.main import DataStore
4141 from synapse.storage.presence import UserPresenceState
4242 from synapse.types import JsonDict, UserID, get_domain_from_id
4343 from synapse.util.async_helpers import Linearizer
894894
895895 await self._on_user_joined_room(room_id, state_key)
896896
897 async def _on_user_joined_room(self, room_id, user_id):
897 async def _on_user_joined_room(self, room_id: str, user_id: str) -> None:
898898 """Called when we detect a user joining the room via the current state
899899 delta stream.
900
901 Args:
902 room_id (str)
903 user_id (str)
904
905 Returns:
906 Deferred
907900 """
908901
909902 if self.is_mine_id(user_id):
934927 # TODO: Check that this is actually a new server joining the
935928 # room.
936929
937 user_ids = await self.state.get_current_users_in_room(room_id)
938 user_ids = list(filter(self.is_mine_id, user_ids))
930 users = await self.state.get_current_users_in_room(room_id)
931 user_ids = list(filter(self.is_mine_id, users))
939932
940933 states_d = await self.current_state_for_users(user_ids)
941934
12951288 return new_state, persist_and_notify, federation_ping
12961289
12971290
1298 @defer.inlineCallbacks
1299 def get_interested_parties(store, states):
1291 async def get_interested_parties(
1292 store: DataStore, states: List[UserPresenceState]
1293 ) -> Tuple[Dict[str, List[UserPresenceState]], Dict[str, List[UserPresenceState]]]:
13001294 """Given a list of states return which entities (rooms, users)
13011295 are interested in the given states.
13021296
13031297 Args:
1304 states (list(UserPresenceState))
1298 store
1299 states
13051300
13061301 Returns:
1307 2-tuple: `(room_ids_to_states, users_to_states)`,
1302 A 2-tuple of `(room_ids_to_states, users_to_states)`,
13081303 with each item being a dict of `entity_name` -> `[UserPresenceState]`
13091304 """
13101305 room_ids_to_states = {} # type: Dict[str, List[UserPresenceState]]
13111306 users_to_states = {} # type: Dict[str, List[UserPresenceState]]
13121307 for state in states:
1313 room_ids = yield store.get_rooms_for_user(state.user_id)
1308 room_ids = await store.get_rooms_for_user(state.user_id)
13141309 for room_id in room_ids:
13151310 room_ids_to_states.setdefault(room_id, []).append(state)
13161311
13201315 return room_ids_to_states, users_to_states
13211316
13221317
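`get_interested_parties` groups a batch of presence states by the rooms each user is in (and, separately, by explicit user subscriptions) so callers can fan the states out per entity. A small standalone sketch of the room-grouping half, with a hypothetical `rooms_for_user` mapping and a simplified `UserPresenceState` standing in for the store lookup and the real type:

```python
from typing import Dict, List, NamedTuple


class UserPresenceState(NamedTuple):
    user_id: str
    state: str


def group_states_by_room(
    states: List[UserPresenceState], rooms_for_user: Dict[str, List[str]]
) -> Dict[str, List[UserPresenceState]]:
    room_ids_to_states: Dict[str, List[UserPresenceState]] = {}
    for state in states:
        for room_id in rooms_for_user.get(state.user_id, []):
            room_ids_to_states.setdefault(room_id, []).append(state)
    return room_ids_to_states


grouped = group_states_by_room(
    [UserPresenceState("@a:hs", "online")], {"@a:hs": ["!r1:hs", "!r2:hs"]}
)
assert set(grouped) == {"!r1:hs", "!r2:hs"}
```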
1323 @defer.inlineCallbacks
1324 def get_interested_remotes(store, states, state_handler):
1318 async def get_interested_remotes(
1319 store: DataStore, states: List[UserPresenceState], state_handler: StateHandler
1320 ) -> List[Tuple[List[str], List[UserPresenceState]]]:
13251321 """Given a list of presence states figure out which remote servers
13261322 should be sent which.
13271323
13281324 All the presence states should be for local users only.
13291325
13301326 Args:
1331 store (DataStore)
1332 states (list(UserPresenceState))
1327 store
1328 states
1329 state_handler
13331330
13341331 Returns:
1335 Deferred list of ([destinations], [UserPresenceState]), where for
1336 each row the list of UserPresenceState should be sent to each
1332 A list of 2-tuples of destinations and states, where for
1333 each tuple the list of UserPresenceState should be sent to each
13371334 destination
13381335 """
13391336 hosts_and_states = []
13411338 # First we look up the rooms each user is in (as well as any explicit
13421339 # subscriptions), then for each distinct room we look up the remote
13431340 # hosts in those rooms.
1344 room_ids_to_states, users_to_states = yield get_interested_parties(store, states)
1341 room_ids_to_states, users_to_states = await get_interested_parties(store, states)
13451342
13461343 for room_id, states in room_ids_to_states.items():
1347 hosts = yield state_handler.get_current_hosts_in_room(room_id)
1344 hosts = await state_handler.get_current_hosts_in_room(room_id)
13481345 hosts_and_states.append((hosts, states))
13491346
13501347 for user_id, states in users_to_states.items():
1414
1515 import logging
1616
17 from twisted.internet import defer
18
1917 from synapse.api.errors import (
2018 AuthError,
2119 Codes,
5351
5452 self.user_directory_handler = hs.get_user_directory_handler()
5553
56 @defer.inlineCallbacks
57 def get_profile(self, user_id):
54 async def get_profile(self, user_id):
5855 target_user = UserID.from_string(user_id)
5956
6057 if self.hs.is_mine(target_user):
6158 try:
62 displayname = yield self.store.get_profile_displayname(
63 target_user.localpart
64 )
65 avatar_url = yield self.store.get_profile_avatar_url(
59 displayname = await self.store.get_profile_displayname(
60 target_user.localpart
61 )
62 avatar_url = await self.store.get_profile_avatar_url(
6663 target_user.localpart
6764 )
6865 except StoreError as e:
7370 return {"displayname": displayname, "avatar_url": avatar_url}
7471 else:
7572 try:
76 result = yield self.federation.make_query(
73 result = await self.federation.make_query(
7774 destination=target_user.domain,
7875 query_type="profile",
7976 args={"user_id": user_id},
8582 except HttpResponseException as e:
8683 raise e.to_synapse_error()
8784
88 @defer.inlineCallbacks
89 def get_profile_from_cache(self, user_id):
85 async def get_profile_from_cache(self, user_id):
9086 """Get the profile information from our local cache. If the user is
9187 ours then the profile information will always be correct. Otherwise,
9288 it may be out of date/missing.
9490 target_user = UserID.from_string(user_id)
9591 if self.hs.is_mine(target_user):
9692 try:
97 displayname = yield self.store.get_profile_displayname(
98 target_user.localpart
99 )
100 avatar_url = yield self.store.get_profile_avatar_url(
93 displayname = await self.store.get_profile_displayname(
94 target_user.localpart
95 )
96 avatar_url = await self.store.get_profile_avatar_url(
10197 target_user.localpart
10298 )
10399 except StoreError as e:
107103
108104 return {"displayname": displayname, "avatar_url": avatar_url}
109105 else:
110 profile = yield self.store.get_from_remote_profile_cache(user_id)
106 profile = await self.store.get_from_remote_profile_cache(user_id)
111107 return profile or {}
112108
113 @defer.inlineCallbacks
114 def get_displayname(self, target_user):
109 async def get_displayname(self, target_user):
115110 if self.hs.is_mine(target_user):
116111 try:
117 displayname = yield self.store.get_profile_displayname(
112 displayname = await self.store.get_profile_displayname(
118113 target_user.localpart
119114 )
120115 except StoreError as e:
125120 return displayname
126121 else:
127122 try:
128 result = yield self.federation.make_query(
123 result = await self.federation.make_query(
129124 destination=target_user.domain,
130125 query_type="profile",
131126 args={"user_id": target_user.to_string(), "field": "displayname"},
188183
189184 await self._update_join_states(requester, target_user)
190185
191 @defer.inlineCallbacks
192 def get_avatar_url(self, target_user):
186 async def get_avatar_url(self, target_user):
193187 if self.hs.is_mine(target_user):
194188 try:
195 avatar_url = yield self.store.get_profile_avatar_url(
189 avatar_url = await self.store.get_profile_avatar_url(
196190 target_user.localpart
197191 )
198192 except StoreError as e:
202196 return avatar_url
203197 else:
204198 try:
205 result = yield self.federation.make_query(
199 result = await self.federation.make_query(
206200 destination=target_user.domain,
207201 query_type="profile",
208202 args={"user_id": target_user.to_string(), "field": "avatar_url"},
252246
253247 await self._update_join_states(requester, target_user)
254248
255 @defer.inlineCallbacks
256 def on_profile_query(self, args):
249 async def on_profile_query(self, args):
257250 user = UserID.from_string(args["user_id"])
258251 if not self.hs.is_mine(user):
259252 raise SynapseError(400, "User is not hosted on this homeserver")
263256 response = {}
264257 try:
265258 if just_field is None or just_field == "displayname":
266 response["displayname"] = yield self.store.get_profile_displayname(
259 response["displayname"] = await self.store.get_profile_displayname(
267260 user.localpart
268261 )
269262
270263 if just_field is None or just_field == "avatar_url":
271 response["avatar_url"] = yield self.store.get_profile_avatar_url(
264 response["avatar_url"] = await self.store.get_profile_avatar_url(
272265 user.localpart
273266 )
274267 except StoreError as e:
303296 "Failed to update join event for room %s - %s", room_id, str(e)
304297 )
305298
306 @defer.inlineCallbacks
307 def check_profile_query_allowed(self, target_user, requester=None):
299 async def check_profile_query_allowed(self, target_user, requester=None):
308300 """Checks whether a profile query is allowed. If the
309301 'require_auth_for_profile_requests' config flag is set to True and a
310302 'requester' is provided, the query is only allowed if the two users
336328 return
337329
338330 try:
339 requester_rooms = yield self.store.get_rooms_for_user(requester.to_string())
340 target_user_rooms = yield self.store.get_rooms_for_user(
331 requester_rooms = await self.store.get_rooms_for_user(requester.to_string())
332 target_user_rooms = await self.store.get_rooms_for_user(
341333 target_user.to_string()
342334 )
343335
370362 "Update remote profile", self._update_remote_profile_cache
371363 )
372364
373 @defer.inlineCallbacks
374 def _update_remote_profile_cache(self):
365 async def _update_remote_profile_cache(self):
375366 """Called periodically to check profiles of remote users we haven't
376367 checked in a while.
377368 """
378 entries = yield self.store.get_remote_profile_cache_entries_that_expire(
369 entries = await self.store.get_remote_profile_cache_entries_that_expire(
379370 last_checked=self.clock.time_msec() - self.PROFILE_UPDATE_EVERY_MS
380371 )
381372
382373 for user_id, displayname, avatar_url in entries:
383 is_subscribed = yield self.store.is_subscribed_remote_profile_for_user(
374 is_subscribed = await self.store.is_subscribed_remote_profile_for_user(
384375 user_id
385376 )
386377 if not is_subscribed:
387 yield self.store.maybe_delete_remote_profile_cache(user_id)
378 await self.store.maybe_delete_remote_profile_cache(user_id)
388379 continue
389380
390381 try:
391 profile = yield self.federation.make_query(
382 profile = await self.federation.make_query(
392383 destination=get_domain_from_id(user_id),
393384 query_type="profile",
394385 args={"user_id": user_id},
397388 except Exception:
398389 logger.exception("Failed to get avatar_url")
399390
400 yield self.store.update_remote_profile_cache(
391 await self.store.update_remote_profile_cache(
401392 user_id, displayname, avatar_url
402393 )
403394 continue
406397 new_avatar = profile.get("avatar_url")
407398
408399 # We always hit update to update the last_check timestamp
409 yield self.store.update_remote_profile_cache(user_id, new_name, new_avatar)
400 await self.store.update_remote_profile_cache(user_id, new_name, new_avatar)
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15
16 from twisted.internet import defer
1715
1816 from synapse.handlers._base import BaseHandler
1917 from synapse.types import ReadReceipt, get_domain_from_id
128126 def __init__(self, hs):
129127 self.store = hs.get_datastore()
130128
131 @defer.inlineCallbacks
132 def get_new_events(self, from_key, room_ids, **kwargs):
129 async def get_new_events(self, from_key, room_ids, **kwargs):
133130 from_key = int(from_key)
134 to_key = yield self.get_current_key()
131 to_key = self.get_current_key()
135132
136133 if from_key == to_key:
137134 return [], to_key
138135
139 events = yield self.store.get_linearized_receipts_for_rooms(
136 events = await self.store.get_linearized_receipts_for_rooms(
140137 room_ids, from_key=from_key, to_key=to_key
141138 )
142139
145142 def get_current_key(self, direction="f"):
146143 return self.store.get_max_receipt_stream_id()
147144
148 @defer.inlineCallbacks
149 def get_pagination_rows(self, user, config, key):
145 async def get_pagination_rows(self, user, config, key):
150146 to_key = int(config.from_key)
151147
152148 if config.to_key:
154150 else:
155151 from_key = None
156152
157 room_ids = yield self.store.get_rooms_for_user(user.to_string())
158 events = yield self.store.get_linearized_receipts_for_rooms(
153 room_ids = await self.store.get_rooms_for_user(user.to_string())
154 events = await self.store.get_linearized_receipts_for_rooms(
159155 room_ids, from_key=from_key, to_key=to_key
160156 )
161157
2727 )
2828 from synapse.storage.state import StateFilter
2929 from synapse.types import RoomAlias, UserID, create_requester
30 from synapse.util.async_helpers import Linearizer
3130
3231 from ._base import BaseHandler
3332
4948 self.user_directory_handler = hs.get_user_directory_handler()
5049 self.identity_handler = self.hs.get_handlers().identity_handler
5150 self.ratelimiter = hs.get_registration_ratelimiter()
52
53 self._next_generated_user_id = None
54
5551 self.macaroon_gen = hs.get_macaroon_generator()
56
57 self._generate_user_id_linearizer = Linearizer(
58 name="_generate_user_id_linearizer"
59 )
6052 self._server_notices_mxid = hs.config.server_notices_mxid
6153
6254 if hs.config.worker_app:
218210 if fail_count > 10:
219211 raise SynapseError(500, "Unable to find a suitable guest user ID")
220212
221 localpart = await self._generate_user_id()
213 localpart = await self.store.generate_user_id()
222214 user = UserID(localpart, self.hs.hostname)
223215 user_id = user.to_string()
224216 self.check_user_id_not_appservice_exclusive(user_id)
508500 "This user ID is reserved by an application service.",
509501 errcode=Codes.EXCLUSIVE,
510502 )
511
512 async def _generate_user_id(self):
513 if self._next_generated_user_id is None:
514 with await self._generate_user_id_linearizer.queue(()):
515 if self._next_generated_user_id is None:
516 self._next_generated_user_id = (
517 await self.store.find_next_generated_user_id_localpart()
518 )
519
520 id = self._next_generated_user_id
521 self._next_generated_user_id += 1
522 return str(id)
523503
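The per-process counter plus `Linearizer` is dropped here in favour of `self.store.generate_user_id()`: caching the next numeric localpart in the handler only works when a single process performs registrations, so the allocation moves behind the datastore where it can be serialised for all workers. A rough sketch of the delegated shape; `FakeStore` and its method are hypothetical stand-ins for the datastore, not the Synapse implementation:

```python
import asyncio
import itertools


class FakeStore:
    """Stand-in for the datastore: the single shared source of user IDs."""

    def __init__(self) -> None:
        self._counter = itertools.count(1)

    async def generate_user_id(self) -> str:
        # A real implementation would back this with a database sequence (or
        # an atomically incremented row), so every worker sees the same
        # counter instead of each process caching its own next value.
        return str(next(self._counter))


async def register_guest(store: FakeStore) -> str:
    localpart = await store.generate_user_id()
    return "@%s:example.com" % (localpart,)


print(asyncio.run(register_guest(FakeStore())))  # e.g. "@1:example.com"
```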
524504 def check_registration_ratelimit(self, address):
525505 """A simple helper method to check whether the registration rate limit has been hit
2121 import math
2222 import string
2323 from collections import OrderedDict
24 from typing import Tuple
24 from typing import Optional, Tuple
2525
2626 from synapse.api.constants import (
2727 EventTypes,
2828 JoinRules,
29 Membership,
2930 RoomCreationPreset,
3031 RoomEncryptionAlgorithms,
3132 )
4243 StateMap,
4344 StreamToken,
4445 UserID,
46 create_requester,
4547 )
4648 from synapse.util import stringutils
47 from synapse.util.async_helpers import Linearizer
49 from synapse.util.async_helpers import Linearizer, maybe_awaitable
4850 from synapse.util.caches.response_cache import ResponseCache
4951 from synapse.visibility import filter_events_for_client
5052
116118
117119 async def upgrade_room(
118120 self, requester: Requester, old_room_id: str, new_version: RoomVersion
119 ):
121 ) -> str:
120122 """Replace a room with a new room with a different version
121123
122124 Args:
125127 new_version: the new room version to use
126128
127129 Returns:
128 Deferred[unicode]: the new room id
130 the new room id
129131 """
130132 await self.ratelimit(requester)
131133
236238 old_room_id: str,
237239 new_room_id: str,
238240 old_room_state: StateMap[str],
239 ):
241 ) -> None:
240242 """Send updated power levels in both rooms after an upgrade
241243
242244 Args:
244246 old_room_id: the id of the room to be replaced
245247 new_room_id: the id of the replacement room
246248 old_room_state: the state map for the old room
247
248 Returns:
249 Deferred
250249 """
251250 old_room_pl_event_id = old_room_state.get((EventTypes.PowerLevels, ""))
252251
319318 new_room_id: str,
320319 new_room_version: RoomVersion,
321320 tombstone_event_id: str,
322 ):
321 ) -> None:
323322 """Populate a new room based on an old room
324323
325324 Args:
329328 created with _generate_room_id())
330329 new_room_version: the new room version to use
331330 tombstone_event_id: the ID of the tombstone event in the old room.
332 Returns:
333 Deferred
334331 """
335332 user_id = requester.user.to_string()
336333
10881085
10891086 def get_current_key_for_room(self, room_id):
10901087 return self.store.get_room_events_max_id(room_id)
1088
1089
1090 class RoomShutdownHandler(object):
1091
1092 DEFAULT_MESSAGE = (
1093 "Sharing illegal content on this server is not permitted and rooms in"
1094 " violation will be blocked."
1095 )
1096 DEFAULT_ROOM_NAME = "Content Violation Notification"
1097
1098 def __init__(self, hs):
1099 self.hs = hs
1100 self.room_member_handler = hs.get_room_member_handler()
1101 self._room_creation_handler = hs.get_room_creation_handler()
1102 self._replication = hs.get_replication_data_handler()
1103 self.event_creation_handler = hs.get_event_creation_handler()
1104 self.state = hs.get_state_handler()
1105 self.store = hs.get_datastore()
1106
1107 async def shutdown_room(
1108 self,
1109 room_id: str,
1110 requester_user_id: str,
1111 new_room_user_id: Optional[str] = None,
1112 new_room_name: Optional[str] = None,
1113 message: Optional[str] = None,
1114 block: bool = False,
1115 ) -> dict:
1116 """
1117 Shuts down a room. Moves all local users and room aliases automatically
1118 to a new room if `new_room_user_id` is set. Otherwise local users only
1119 leave the room without any information.
1120
1121 The new room will be created with the user specified by the
1122 `new_room_user_id` parameter as room administrator and will contain a
1123 message explaining what happened. Users invited to the new room will
1124 have power level `-10` by default, and thus be unable to speak.
1125
1126 The local server will only have the power to move local users and room
1127 aliases to the new room. Users on other servers will be unaffected.
1128
1129 Args:
1130 room_id: The ID of the room to shut down.
1131 requester_user_id:
1132 User who requested the action and put the room on the
1133 blocking list.
1134 new_room_user_id:
1135 If set, a new room will be created with this user ID
1136 as the creator and admin, and all users in the old room will be
1137 moved into that room. If not set, no new room will be created
1138 and the users will just be removed from the old room.
1139 new_room_name:
1140 A string representing the name of the room that new users will
1141 be invited to. Defaults to `Content Violation Notification`
1142 message:
1143 A string containing the first message that will be sent as
1144 `new_room_user_id` in the new room. Ideally this will clearly
1145 convey why the original room was shut down.
1146 Defaults to `Sharing illegal content on this server is not
1147 permitted and rooms in violation will be blocked.`
1148 block:
1149 If set to `true`, this room will be added to a blocking list,
1150 preventing future attempts to join the room. Defaults to `false`.
1151
1152 Returns: a dict containing the following keys:
1153 kicked_users: An array of users (`user_id`) that were kicked.
1154 failed_to_kick_users:
1155 An array of users (`user_id`) that were not kicked.
1156 local_aliases:
1157 An array of strings representing the local aliases that were
1158 migrated from the old room to the new.
1159 new_room_id: A string representing the room ID of the new room.
1160 """
1161
1162 if not new_room_name:
1163 new_room_name = self.DEFAULT_ROOM_NAME
1164 if not message:
1165 message = self.DEFAULT_MESSAGE
1166
1167 if not RoomID.is_valid(room_id):
1168 raise SynapseError(400, "%s is not a legal room ID" % (room_id,))
1169
1170 if not await self.store.get_room(room_id):
1171 raise NotFoundError("Unknown room id %s" % (room_id,))
1172
1173 # This will work even if the room is already blocked, but that is
1174 # desirable in case the first attempt at blocking the room failed below.
1175 if block:
1176 await self.store.block_room(room_id, requester_user_id)
1177
1178 if new_room_user_id is not None:
1179 if not self.hs.is_mine_id(new_room_user_id):
1180 raise SynapseError(
1181 400, "User must be our own: %s" % (new_room_user_id,)
1182 )
1183
1184 room_creator_requester = create_requester(new_room_user_id)
1185
1186 info, stream_id = await self._room_creation_handler.create_room(
1187 room_creator_requester,
1188 config={
1189 "preset": RoomCreationPreset.PUBLIC_CHAT,
1190 "name": new_room_name,
1191 "power_level_content_override": {"users_default": -10},
1192 },
1193 ratelimit=False,
1194 )
1195 new_room_id = info["room_id"]
1196
1197 logger.info(
1198 "Shutting down room %r, joining to new room: %r", room_id, new_room_id
1199 )
1200
1201 # We now wait for the create room to come back in via replication so
1202 # that we can assume that all the joins/invites have propagated before
1203 # we try and auto join below.
1204 #
1205 # TODO: Currently the events stream is written to from master
1206 await self._replication.wait_for_stream_position(
1207 self.hs.config.worker.writers.events, "events", stream_id
1208 )
1209 else:
1210 new_room_id = None
1211 logger.info("Shutting down room %r", room_id)
1212
1213 users = await self.state.get_current_users_in_room(room_id)
1214 kicked_users = []
1215 failed_to_kick_users = []
1216 for user_id in users:
1217 if not self.hs.is_mine_id(user_id):
1218 continue
1219
1220 logger.info("Kicking %r from %r...", user_id, room_id)
1221
1222 try:
1223 # Kick users from room
1224 target_requester = create_requester(user_id)
1225 _, stream_id = await self.room_member_handler.update_membership(
1226 requester=target_requester,
1227 target=target_requester.user,
1228 room_id=room_id,
1229 action=Membership.LEAVE,
1230 content={},
1231 ratelimit=False,
1232 require_consent=False,
1233 )
1234
1235 # Wait for leave to come in over replication before trying to forget.
1236 await self._replication.wait_for_stream_position(
1237 self.hs.config.worker.writers.events, "events", stream_id
1238 )
1239
1240 await self.room_member_handler.forget(target_requester.user, room_id)
1241
1242 # Join users to new room
1243 if new_room_user_id:
1244 await self.room_member_handler.update_membership(
1245 requester=target_requester,
1246 target=target_requester.user,
1247 room_id=new_room_id,
1248 action=Membership.JOIN,
1249 content={},
1250 ratelimit=False,
1251 require_consent=False,
1252 )
1253
1254 kicked_users.append(user_id)
1255 except Exception:
1256 logger.exception(
1257 "Failed to leave old room and join new room for %r", user_id
1258 )
1259 failed_to_kick_users.append(user_id)
1260
1261 # Send message in new room and move aliases
1262 if new_room_user_id:
1263 await self.event_creation_handler.create_and_send_nonmember_event(
1264 room_creator_requester,
1265 {
1266 "type": "m.room.message",
1267 "content": {"body": message, "msgtype": "m.text"},
1268 "room_id": new_room_id,
1269 "sender": new_room_user_id,
1270 },
1271 ratelimit=False,
1272 )
1273
1274 aliases_for_room = await maybe_awaitable(
1275 self.store.get_aliases_for_room(room_id)
1276 )
1277
1278 await self.store.update_aliases_for_room(
1279 room_id, new_room_id, requester_user_id
1280 )
1281 else:
1282 aliases_for_room = []
1283
1284 return {
1285 "kicked_users": kicked_users,
1286 "failed_to_kick_users": failed_to_kick_users,
1287 "local_aliases": aliases_for_room,
1288 "new_room_id": new_room_id,
1289 }
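The hunk above adds `RoomShutdownHandler` as a reusable handler rather than servlet-local logic. A minimal sketch of how admin code might call it, assuming `hs` is an initialised `HomeServer`; the accessor name `get_room_shutdown_handler()` and the wrapper function are assumptions for illustration, not confirmed API:

```python
# Hypothetical caller for the RoomShutdownHandler shown above.
# `hs` is assumed to be a configured HomeServer; the accessor name
# `get_room_shutdown_handler()` is an assumption made for illustration.
async def shut_down_abusive_room(hs, room_id: str, admin_user_id: str) -> dict:
    handler = hs.get_room_shutdown_handler()
    result = await handler.shutdown_room(
        room_id=room_id,
        requester_user_id=admin_user_id,
        new_room_user_id=admin_user_id,   # move local members into a new room
        new_room_name="Content Violation Notification",
        message="This room has been shut down by the server admins.",
        block=True,                       # also prevent future joins
    )
    # `result` matches the dict documented in the docstring above:
    # kicked_users, failed_to_kick_users, local_aliases, new_room_id.
    return result
```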
1919 import msgpack
2020 from unpaddedbase64 import decode_base64, encode_base64
2121
22 from twisted.internet import defer
23
2422 from synapse.api.constants import EventTypes, JoinRules
2523 from synapse.api.errors import Codes, HttpResponseException
2624 from synapse.types import ThirdPartyInstanceID
27 from synapse.util.caches.descriptors import cachedInlineCallbacks
25 from synapse.util.caches.descriptors import cached
2826 from synapse.util.caches.response_cache import ResponseCache
2927
3028 from ._base import BaseHandler
4644 hs, "remote_room_list", timeout_ms=30 * 1000
4745 )
4846
49 def get_local_public_room_list(
47 async def get_local_public_room_list(
5048 self,
5149 limit=None,
5250 since_token=None,
7169 API
7270 """
7371 if not self.enable_room_list_search:
74 return defer.succeed({"chunk": [], "total_room_count_estimate": 0})
72 return {"chunk": [], "total_room_count_estimate": 0}
7573
7674 logger.info(
7775 "Getting public room list: limit=%r, since=%r, search=%r, network=%r",
8684 # appservice specific lists.
8785 logger.info("Bypassing cache as search request.")
8886
89 return self._get_public_room_list(
87 return await self._get_public_room_list(
9088 limit,
9189 since_token,
9290 search_filter,
9593 )
9694
9795 key = (limit, since_token, network_tuple)
98 return self.response_cache.wrap(
96 return await self.response_cache.wrap(
9997 key,
10098 self._get_public_room_list,
10199 limit,
104102 from_federation=from_federation,
105103 )
106104
107 @defer.inlineCallbacks
108 def _get_public_room_list(
105 async def _get_public_room_list(
109106 self,
110107 limit: Optional[int] = None,
111108 since_token: Optional[str] = None,
144141 # we request one more than wanted to see if there are more pages to come
145142 probing_limit = limit + 1 if limit is not None else None
146143
147 results = yield self.store.get_largest_public_rooms(
144 results = await self.store.get_largest_public_rooms(
148145 network_tuple,
149146 search_filter,
150147 probing_limit,
220217
221218 response["chunk"] = results
222219
223 response["total_room_count_estimate"] = yield self.store.count_public_rooms(
220 response["total_room_count_estimate"] = await self.store.count_public_rooms(
224221 network_tuple, ignore_non_federatable=from_federation
225222 )
226223
227224 return response
228225
229 @cachedInlineCallbacks(num_args=1, cache_context=True)
230 def generate_room_entry(
226 @cached(num_args=1, cache_context=True)
227 async def generate_room_entry(
231228 self,
232 room_id,
233 num_joined_users,
229 room_id: str,
230 num_joined_users: int,
234231 cache_context,
235 with_alias=True,
236 allow_private=False,
237 ):
232 with_alias: bool = True,
233 allow_private: bool = False,
234 ) -> Optional[dict]:
238235 """Returns the entry for a room
239236
240237 Args:
241 room_id (str): The room's ID.
242 num_joined_users (int): Number of users in the room.
238 room_id: The room's ID.
239 num_joined_users: Number of users in the room.
243240 cache_context: Information for cached responses.
244 with_alias (bool): Whether to return the room's aliases in the result.
245 allow_private (bool): Whether invite-only rooms should be shown.
241 with_alias: Whether to return the room's aliases in the result.
242 allow_private: Whether invite-only rooms should be shown.
246243
247244 Returns:
248 Deferred[dict|None]: Returns a room entry as a dictionary, or None if this
245 Returns a room entry as a dictionary, or None if this
249246 room was determined not to be shown publicly.
250247 """
251248 result = {"room_id": room_id, "num_joined_members": num_joined_users}
252249
253250 if with_alias:
254 aliases = yield self.store.get_aliases_for_room(
251 aliases = await self.store.get_aliases_for_room(
255252 room_id, on_invalidate=cache_context.invalidate
256253 )
257254 if aliases:
258255 result["aliases"] = aliases
259256
260 current_state_ids = yield self.store.get_current_state_ids(
257 current_state_ids = await self.store.get_current_state_ids(
261258 room_id, on_invalidate=cache_context.invalidate
262259 )
263260
265262 # We're not in the room, so may as well bail out here.
266263 return result
267264
268 event_map = yield self.store.get_events(
265 event_map = await self.store.get_events(
269266 [
270267 event_id
271268 for key, event_id in current_state_ids.items()
335332
336333 return result
337334
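The switch from `@cachedInlineCallbacks` to `@cached` on an `async def` follows the same shape as `generate_room_entry` above. A hedged sketch of that decorator pattern; `ExampleHandler` is hypothetical, while `get_aliases_for_room` and the `on_invalidate` convention are taken from the hunk above:

```python
from synapse.util.caches.descriptors import cached


class ExampleHandler:
    def __init__(self, store):
        self.store = store

    # Mirrors the pattern used above: the first positional argument is the
    # cache key, and `cache_context=True` hands the method a `cache_context`
    # whose `invalidate` callback is passed to downstream cached lookups so
    # this entry is dropped when they change.
    @cached(num_args=1, cache_context=True)
    async def get_room_summary(self, room_id: str, cache_context):
        aliases = await self.store.get_aliases_for_room(
            room_id, on_invalidate=cache_context.invalidate
        )
        return {"room_id": room_id, "aliases": aliases}
```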
338 @defer.inlineCallbacks
339 def get_remote_public_room_list(
335 async def get_remote_public_room_list(
340336 self,
341337 server_name,
342338 limit=None,
355351 # to a locally-filtered search if we must.
356352
357353 try:
358 res = yield self._get_remote_list_cached(
354 res = await self._get_remote_list_cached(
359355 server_name,
360356 limit=limit,
361357 since_token=since_token,
380376 limit = None
381377 since_token = None
382378
383 res = yield self._get_remote_list_cached(
379 res = await self._get_remote_list_cached(
384380 server_name,
385381 limit=limit,
386382 since_token=since_token,
399395
400396 return res
401397
402 def _get_remote_list_cached(
398 async def _get_remote_list_cached(
403399 self,
404400 server_name,
405401 limit=None,
411407 repl_layer = self.hs.get_federation_client()
412408 if search_filter:
413409 # We can't cache when asking for search
414 return repl_layer.get_public_rooms(
410 return await repl_layer.get_public_rooms(
415411 server_name,
416412 limit=limit,
417413 since_token=since_token,
427423 include_all_networks,
428424 third_party_instance_id,
429425 )
430 return self.remote_response_cache.wrap(
426 return await self.remote_response_cache.wrap(
431427 key,
432428 repl_layer.get_public_rooms,
433429 server_name,
1414
1515 import itertools
1616 import logging
17 from typing import Iterable
1718
1819 from unpaddedbase64 import decode_base64, encode_base64
1920
3637 self.state_store = self.storage.state
3738 self.auth = hs.get_auth()
3839
39 async def get_old_rooms_from_upgraded_room(self, room_id):
40 async def get_old_rooms_from_upgraded_room(self, room_id: str) -> Iterable[str]:
4041 """Retrieves room IDs of old rooms in the history of an upgraded room.
4142
4243 We do so by checking the m.room.create event of the room for a
4748 The full list of all found rooms is then returned.
4849
4950 Args:
50 room_id (str): id of the room to search through.
51 room_id: id of the room to search through.
5152
5253 Returns:
53 Deferred[iterable[str]]: predecessor room ids
54 Predecessor room ids
5455 """
5556
5657 historical_room_ids = []
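For context, predecessor rooms are discovered by following the `predecessor` field in each room's `m.room.create` event content, as the docstring above describes. A standalone sketch of that walk, independent of Synapse's storage layer; `get_create_event_content` is a stand-in for a storage lookup:

```python
from typing import Callable, Iterable, Optional


# Standalone illustration of the predecessor walk described above.
# `get_create_event_content(room_id)` is a stand-in returning the content of
# the room's m.room.create event, or None if the room is unknown.
def old_rooms_from_upgraded_room(
    room_id: str,
    get_create_event_content: Callable[[str], Optional[dict]],
) -> Iterable[str]:
    historical_room_ids = []
    seen = {room_id}
    current = room_id
    while True:
        create_content = get_create_event_content(current) or {}
        predecessor = create_content.get("predecessor") or {}
        prev_room_id = predecessor.get("room_id")
        if not prev_room_id or prev_room_id in seen:
            break
        historical_room_ids.append(prev_room_id)
        seen.add(prev_room_id)
        current = prev_room_id
    return historical_room_ids
```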
282282 timeout,
283283 full_state,
284284 )
285 logger.debug("Returning sync response for %s", user_id)
285286 return res
286287
287288 async def _wait_for_sync_for_user(
419420 potential_recents: Optional[List[EventBase]] = None,
420421 newly_joined_room: bool = False,
421422 ) -> TimelineBatch:
422 """
423 Returns:
424 a Deferred TimelineBatch
425 """
426423 with Measure(self.clock, "load_filtered_recents"):
427424 timeline_limit = sync_config.filter_collection.timeline_limit()
428425 block_all_timeline = (
989986 joined_room_ids=joined_room_ids,
990987 )
991988
989 logger.debug("Fetching account data")
990
992991 account_data_by_room = await self._generate_sync_entry_for_account_data(
993992 sync_result_builder
994993 )
995994
995 logger.debug("Fetching room data")
996
996997 res = await self._generate_sync_entry_for_rooms(
997998 sync_result_builder, account_data_by_room
998999 )
10031004 since_token is None and sync_config.filter_collection.blocks_all_presence()
10041005 )
10051006 if self.hs_config.use_presence and not block_all_presence_data:
1007 logger.debug("Fetching presence data")
10061008 await self._generate_sync_entry_for_presence(
10071009 sync_result_builder, newly_joined_rooms, newly_joined_or_invited_users
10081010 )
10091011
1012 logger.debug("Fetching to-device data")
10101013 await self._generate_sync_entry_for_to_device(sync_result_builder)
10111014
10121015 device_lists = await self._generate_sync_entry_for_device_list(
10171020 newly_left_users=newly_left_users,
10181021 )
10191022
1023 logger.debug("Fetching OTK data")
10201024 device_id = sync_config.device_id
10211025 one_time_key_counts = {} # type: JsonDict
10221026 if device_id:
10241028 user_id, device_id
10251029 )
10261030
1031 logger.debug("Fetching group data")
10271032 await self._generate_sync_entry_for_groups(sync_result_builder)
10281033
10291034 # debug for https://github.com/matrix-org/synapse/issues/4422
10341039 "Sync result for newly joined room %s: %r", room_id, joined_room
10351040 )
10361041
1042 logger.debug("Sync response calculation complete")
10371043 return SyncResult(
10381044 presence=sync_result_builder.presence,
10391045 account_data=sync_result_builder.account_data,
14061412 newly_joined_rooms = room_changes.newly_joined_rooms
14071413 newly_left_rooms = room_changes.newly_left_rooms
14081414
1409 def handle_room_entries(room_entry):
1410 return self._generate_room_entry(
1415 async def handle_room_entries(room_entry):
1416 logger.debug("Generating room entry for %s", room_entry.room_id)
1417 res = await self._generate_room_entry(
14111418 sync_result_builder,
14121419 ignored_users,
14131420 room_entry,
14161423 account_data=account_data_by_room.get(room_entry.room_id, {}),
14171424 always_include=sync_result_builder.full_state,
14181425 )
1426 logger.debug("Generated room entry for %s", room_entry.room_id)
1427 return res
14191428
14201429 await concurrently_execute(handle_room_entries, room_entries, 10)
14211430
1414
1515 import logging
1616 from collections import namedtuple
17 from typing import List, Tuple
17 from typing import TYPE_CHECKING, List, Set, Tuple
1818
1919 from synapse.api.errors import AuthError, SynapseError
20 from synapse.logging.context import run_in_background
20 from synapse.metrics.background_process_metrics import run_as_background_process
21 from synapse.replication.tcp.streams import TypingStream
2122 from synapse.types import UserID, get_domain_from_id
2223 from synapse.util.caches.stream_change_cache import StreamChangeCache
2324 from synapse.util.metrics import Measure
2425 from synapse.util.wheel_timer import WheelTimer
2526
27 if TYPE_CHECKING:
28 from synapse.server import HomeServer
29
2630 logger = logging.getLogger(__name__)
2731
2832
3842 FEDERATION_PING_INTERVAL = 40 * 1000
3943
4044
41 class TypingHandler(object):
42 def __init__(self, hs):
45 class FollowerTypingHandler:
46 """A typing handler on a different process than the writer that is updated
47 via replication.
48 """
49
50 def __init__(self, hs: "HomeServer"):
4351 self.store = hs.get_datastore()
4452 self.server_name = hs.config.server_name
45 self.auth = hs.get_auth()
53 self.clock = hs.get_clock()
4654 self.is_mine_id = hs.is_mine_id
47 self.notifier = hs.get_notifier()
48 self.state = hs.get_state_handler()
49
50 self.hs = hs
51
52 self.clock = hs.get_clock()
55
56 self.federation = None
57 if hs.should_send_federation():
58 self.federation = hs.get_federation_sender()
59
60 if hs.config.worker.writers.typing != hs.get_instance_name():
61 hs.get_federation_registry().register_instance_for_edu(
62 "m.typing", hs.config.worker.writers.typing,
63 )
64
65 # map room IDs to serial numbers
66 self._room_serials = {}
67 # map room IDs to sets of users currently typing
68 self._room_typing = {}
69
70 self._member_last_federation_poke = {}
5371 self.wheel_timer = WheelTimer(bucket_size=5000)
54
55 self.federation = hs.get_federation_sender()
56
57 hs.get_federation_registry().register_edu_handler("m.typing", self._recv_edu)
58
59 hs.get_distributor().observe("user_left_room", self.user_left_room)
60
61 self._member_typing_until = {} # clock time we expect to stop
62 self._member_last_federation_poke = {}
63
6472 self._latest_room_serial = 0
65 self._reset()
66
67 # caches which room_ids changed at which serials
68 self._typing_stream_change_cache = StreamChangeCache(
69 "TypingStreamChangeCache", self._latest_room_serial
70 )
7173
7274 self.clock.looping_call(self._handle_timeouts, 5000)
7375
7476 def _reset(self):
75 """
76 Reset the typing handler's data caches.
77 """Reset the typing handler's data caches.
7778 """
7879 # map room IDs to serial numbers
7980 self._room_serials = {}
8081 # map room IDs to sets of users currently typing
8182 self._room_typing = {}
8283
84 self._member_last_federation_poke = {}
85 self.wheel_timer = WheelTimer(bucket_size=5000)
86
8387 def _handle_timeouts(self):
8488 logger.debug("Checking for typing timeouts")
8589
8892 members = set(self.wheel_timer.fetch(now))
8993
9094 for member in members:
91 if not self.is_typing(member):
92 # Nothing to do if they're no longer typing
93 continue
94
95 until = self._member_typing_until.get(member, None)
96 if not until or until <= now:
97 logger.info("Timing out typing for: %s", member.user_id)
98 self._stopped_typing(member)
99 continue
100
101 # Check if we need to resend a keep alive over federation for this
102 # user.
103 if self.hs.is_mine_id(member.user_id):
104 last_fed_poke = self._member_last_federation_poke.get(member, None)
105 if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL <= now:
106 run_in_background(self._push_remote, member=member, typing=True)
107
108 # Add a paranoia timer to ensure that we always have a timer for
109 # each person typing.
110 self.wheel_timer.insert(now=now, obj=member, then=now + 60 * 1000)
95 self._handle_timeout_for_member(now, member)
96
97 def _handle_timeout_for_member(self, now: int, member: RoomMember):
98 if not self.is_typing(member):
99 # Nothing to do if they're no longer typing
100 return
101
102 # Check if we need to resend a keep alive over federation for this
103 # user.
104 if self.federation and self.is_mine_id(member.user_id):
105 last_fed_poke = self._member_last_federation_poke.get(member, None)
106 if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL <= now:
107 run_as_background_process(
108 "typing._push_remote", self._push_remote, member=member, typing=True
109 )
110
111 # Add a paranoia timer to ensure that we always have a timer for
112 # each person typing.
113 self.wheel_timer.insert(now=now, obj=member, then=now + 60 * 1000)
111114
112115 def is_typing(self, member):
113116 return member.user_id in self._room_typing.get(member.room_id, [])
114117
115 async def started_typing(self, target_user, auth_user, room_id, timeout):
116 target_user_id = target_user.to_string()
117 auth_user_id = auth_user.to_string()
118
119 if not self.is_mine_id(target_user_id):
120 raise SynapseError(400, "User is not hosted on this homeserver")
121
122 if target_user_id != auth_user_id:
123 raise AuthError(400, "Cannot set another user's typing state")
124
125 await self.auth.check_user_in_room(room_id, target_user_id)
126
127 logger.debug("%s has started typing in %s", target_user_id, room_id)
128
129 member = RoomMember(room_id=room_id, user_id=target_user_id)
130
131 was_present = member.user_id in self._room_typing.get(room_id, set())
132
133 now = self.clock.time_msec()
134 self._member_typing_until[member] = now + timeout
135
136 self.wheel_timer.insert(now=now, obj=member, then=now + timeout)
137
138 if was_present:
139 # No point sending another notification
140 return None
141
142 self._push_update(member=member, typing=True)
143
144 async def stopped_typing(self, target_user, auth_user, room_id):
145 target_user_id = target_user.to_string()
146 auth_user_id = auth_user.to_string()
147
148 if not self.is_mine_id(target_user_id):
149 raise SynapseError(400, "User is not hosted on this homeserver")
150
151 if target_user_id != auth_user_id:
152 raise AuthError(400, "Cannot set another user's typing state")
153
154 await self.auth.check_user_in_room(room_id, target_user_id)
155
156 logger.debug("%s has stopped typing in %s", target_user_id, room_id)
157
158 member = RoomMember(room_id=room_id, user_id=target_user_id)
159
160 self._stopped_typing(member)
161
162 def user_left_room(self, user, room_id):
163 user_id = user.to_string()
164 if self.is_mine_id(user_id):
165 member = RoomMember(room_id=room_id, user_id=user_id)
166 self._stopped_typing(member)
167
168 def _stopped_typing(self, member):
169 if member.user_id not in self._room_typing.get(member.room_id, set()):
170 # No point
171 return None
172
173 self._member_typing_until.pop(member, None)
174 self._member_last_federation_poke.pop(member, None)
175
176 self._push_update(member=member, typing=False)
177
178 def _push_update(self, member, typing):
179 if self.hs.is_mine_id(member.user_id):
180 # Only send updates for changes to our own users.
181 run_in_background(self._push_remote, member, typing)
182
183 self._push_update_local(member=member, typing=typing)
184
185118 async def _push_remote(self, member, typing):
119 if not self.federation:
120 return
121
186122 try:
187 users = await self.state.get_current_users_in_room(member.room_id)
123 users = await self.store.get_users_in_room(member.room_id)
188124 self._member_last_federation_poke[member] = self.clock.time_msec()
189125
190126 now = self.clock.time_msec()
208144 except Exception:
209145 logger.exception("Error pushing typing notif to remotes")
210146
147 def process_replication_rows(
148 self, token: int, rows: List[TypingStream.TypingStreamRow]
149 ):
150 """Should be called whenever we receive updates for typing stream.
151 """
152
153 if self._latest_room_serial > token:
154 # The master has gone backwards. To prevent inconsistent data, just
155 # clear everything.
156 self._reset()
157
158 # Set the latest serial token to whatever the server gave us.
159 self._latest_room_serial = token
160
161 for row in rows:
162 self._room_serials[row.room_id] = token
163
164 prev_typing = set(self._room_typing.get(row.room_id, []))
165 now_typing = set(row.user_ids)
166 self._room_typing[row.room_id] = row.user_ids
167
168 run_as_background_process(
169 "_handle_change_in_typing",
170 self._handle_change_in_typing,
171 row.room_id,
172 prev_typing,
173 now_typing,
174 )
175
176 async def _handle_change_in_typing(
177 self, room_id: str, prev_typing: Set[str], now_typing: Set[str]
178 ):
179 """Process a change in typing of a room from replication, sending EDUs
180 for any local users.
181 """
182 for user_id in now_typing - prev_typing:
183 if self.is_mine_id(user_id):
184 await self._push_remote(RoomMember(room_id, user_id), True)
185
186 for user_id in prev_typing - now_typing:
187 if self.is_mine_id(user_id):
188 await self._push_remote(RoomMember(room_id, user_id), False)
189
190 def get_current_token(self):
191 return self._latest_room_serial
192
193
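The follower's replication path above boils down to diffing the previous and current typing sets for each room carried by a stream row. A simplified, standalone sketch of that bookkeeping, with no Synapse types involved:

```python
from typing import Dict, List, Set, Tuple


# Simplified illustration of the follower-side bookkeeping shown above: each
# replication row carries (room_id, user_ids), and the handler diffs it
# against what it previously knew to decide who started or stopped typing.
class TypingFollowerState:
    def __init__(self) -> None:
        self._latest_serial = 0
        self._room_typing: Dict[str, Set[str]] = {}

    def process_row(
        self, token: int, room_id: str, user_ids: List[str]
    ) -> Tuple[Set[str], Set[str]]:
        if self._latest_serial > token:
            # The writer went backwards: drop everything rather than serve
            # inconsistent data (mirrors the _reset() call above).
            self._room_typing.clear()
        self._latest_serial = token

        prev_typing = self._room_typing.get(room_id, set())
        now_typing = set(user_ids)
        self._room_typing[room_id] = now_typing

        started = now_typing - prev_typing
        stopped = prev_typing - now_typing
        return started, stopped
```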
194 class TypingWriterHandler(FollowerTypingHandler):
195 def __init__(self, hs):
196 super().__init__(hs)
197
198 assert hs.config.worker.writers.typing == hs.get_instance_name()
199
200 self.auth = hs.get_auth()
201 self.notifier = hs.get_notifier()
202
203 self.hs = hs
204
205 hs.get_federation_registry().register_edu_handler("m.typing", self._recv_edu)
206
207 hs.get_distributor().observe("user_left_room", self.user_left_room)
208
209 self._member_typing_until = {} # clock time we expect to stop
210
211 # caches which room_ids changed at which serials
212 self._typing_stream_change_cache = StreamChangeCache(
213 "TypingStreamChangeCache", self._latest_room_serial
214 )
215
216 def _handle_timeout_for_member(self, now: int, member: RoomMember):
217 super()._handle_timeout_for_member(now, member)
218
219 if not self.is_typing(member):
220 # Nothing to do if they're no longer typing
221 return
222
223 until = self._member_typing_until.get(member, None)
224 if not until or until <= now:
225 logger.info("Timing out typing for: %s", member.user_id)
226 self._stopped_typing(member)
227 return
228
229 async def started_typing(self, target_user, auth_user, room_id, timeout):
230 target_user_id = target_user.to_string()
231 auth_user_id = auth_user.to_string()
232
233 if not self.is_mine_id(target_user_id):
234 raise SynapseError(400, "User is not hosted on this homeserver")
235
236 if target_user_id != auth_user_id:
237 raise AuthError(400, "Cannot set another user's typing state")
238
239 await self.auth.check_user_in_room(room_id, target_user_id)
240
241 logger.debug("%s has started typing in %s", target_user_id, room_id)
242
243 member = RoomMember(room_id=room_id, user_id=target_user_id)
244
245 was_present = member.user_id in self._room_typing.get(room_id, set())
246
247 now = self.clock.time_msec()
248 self._member_typing_until[member] = now + timeout
249
250 self.wheel_timer.insert(now=now, obj=member, then=now + timeout)
251
252 if was_present:
253 # No point sending another notification
254 return None
255
256 self._push_update(member=member, typing=True)
257
258 async def stopped_typing(self, target_user, auth_user, room_id):
259 target_user_id = target_user.to_string()
260 auth_user_id = auth_user.to_string()
261
262 if not self.is_mine_id(target_user_id):
263 raise SynapseError(400, "User is not hosted on this homeserver")
264
265 if target_user_id != auth_user_id:
266 raise AuthError(400, "Cannot set another user's typing state")
267
268 await self.auth.check_user_in_room(room_id, target_user_id)
269
270 logger.debug("%s has stopped typing in %s", target_user_id, room_id)
271
272 member = RoomMember(room_id=room_id, user_id=target_user_id)
273
274 self._stopped_typing(member)
275
276 def user_left_room(self, user, room_id):
277 user_id = user.to_string()
278 if self.is_mine_id(user_id):
279 member = RoomMember(room_id=room_id, user_id=user_id)
280 self._stopped_typing(member)
281
282 def _stopped_typing(self, member):
283 if member.user_id not in self._room_typing.get(member.room_id, set()):
284 # No point
285 return None
286
287 self._member_typing_until.pop(member, None)
288 self._member_last_federation_poke.pop(member, None)
289
290 self._push_update(member=member, typing=False)
291
292 def _push_update(self, member, typing):
293 if self.hs.is_mine_id(member.user_id):
294 # Only send updates for changes to our own users.
295 run_as_background_process(
296 "typing._push_remote", self._push_remote, member, typing
297 )
298
299 self._push_update_local(member=member, typing=typing)
300
211301 async def _recv_edu(self, origin, content):
212302 room_id = content["room_id"]
213303 user_id = content["user_id"]
223313 )
224314 return
225315
226 users = await self.state.get_current_users_in_room(room_id)
316 users = await self.store.get_users_in_room(room_id)
227317 domains = {get_domain_from_id(u) for u in users}
228318
229319 if self.server_name in domains:
303393
304394 return rows, current_id, limited
305395
306 def get_current_token(self):
307 return self._latest_room_serial
396 def process_replication_rows(
397 self, token: int, rows: List[TypingStream.TypingStreamRow]
398 ):
399 # The writing process should never get updates from replication.
400 raise Exception("Typing writer instance got typing info over replication")
308401
309402
310403 class TypingNotificationEventSource(object):
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
1415 import logging
16 from typing import Any
1517
1618 from canonicaljson import json
1719
18 from twisted.internet import defer
1920 from twisted.web.client import PartialDownloadError
2021
2122 from synapse.api.constants import LoginType
3132 def __init__(self, hs):
3233 pass
3334
34 def is_enabled(self):
35 def is_enabled(self) -> bool:
3536 """Check if the configuration of the homeserver allows this checker to work
3637
3738 Returns:
38 bool: True if this login type is enabled.
39 True if this login type is enabled.
3940 """
4041
41 def check_auth(self, authdict, clientip):
42 async def check_auth(self, authdict: dict, clientip: str) -> Any:
4243 """Given the authentication dict from the client, attempt to check this step
4344
4445 Args:
45 authdict (dict): authentication dictionary from the client
46 clientip (str): The IP address of the client.
46 authdict: authentication dictionary from the client
47 clientip: The IP address of the client.
4748
4849 Raises:
4950 SynapseError if authentication failed
5051
5152 Returns:
52 Deferred: the result of authentication (to pass back to the client?)
53 The result of authentication (to pass back to the client?)
5354 """
5455 raise NotImplementedError()
5556
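With `check_auth` now a coroutine, a checker subclass only needs an async method that raises `SynapseError` on failure and returns a result on success, per the base-class docstring above. A hedged sketch of a custom checker; `ExampleTokenAuthChecker`, its config attribute, and the token scheme are hypothetical:

```python
from synapse.api.constants import LoginType
from synapse.api.errors import SynapseError
from synapse.handlers.ui_auth.checkers import UserInteractiveAuthChecker


# Hypothetical checker written against the UserInteractiveAuthChecker
# contract shown above; the config attribute and token scheme are
# illustrative only.
class ExampleTokenAuthChecker(UserInteractiveAuthChecker):
    AUTH_TYPE = LoginType.DUMMY  # a real checker would declare its own type

    def __init__(self, hs):
        super().__init__(hs)
        self._expected_token = getattr(hs.config, "example_auth_token", None)

    def is_enabled(self) -> bool:
        return self._expected_token is not None

    async def check_auth(self, authdict, clientip):
        # Per the base-class docstring: raise SynapseError on failure,
        # otherwise return the result to pass back to the client.
        if authdict.get("token") != self._expected_token:
            raise SynapseError(401, "Invalid token")
        return True
```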
6061 def is_enabled(self):
6162 return True
6263
63 def check_auth(self, authdict, clientip):
64 return defer.succeed(True)
64 async def check_auth(self, authdict, clientip):
65 return True
6566
6667
6768 class TermsAuthChecker(UserInteractiveAuthChecker):
7071 def is_enabled(self):
7172 return True
7273
73 def check_auth(self, authdict, clientip):
74 return defer.succeed(True)
74 async def check_auth(self, authdict, clientip):
75 return True
7576
7677
7778 class RecaptchaAuthChecker(UserInteractiveAuthChecker):
8788 def is_enabled(self):
8889 return self._enabled
8990
90 @defer.inlineCallbacks
91 def check_auth(self, authdict, clientip):
91 async def check_auth(self, authdict, clientip):
9292 try:
9393 user_response = authdict["response"]
9494 except KeyError:
105105 # TODO: get this from the homeserver rather than creating a new one for
106106 # each request
107107 try:
108 resp_body = yield self._http_client.post_urlencoded_get_json(
108 resp_body = await self._http_client.post_urlencoded_get_json(
109109 self._url,
110110 args={
111111 "secret": self._secret,
116116 except PartialDownloadError as pde:
117117 # Twisted is silly
118118 data = pde.response
119 resp_body = json.loads(data)
119 resp_body = json.loads(data.decode("utf-8"))
120120
121121 if "success" in resp_body:
122122 # Note that we do NOT check the hostname here: we explicitly
217217 ThreepidBehaviour.LOCAL,
218218 )
219219
220 def check_auth(self, authdict, clientip):
221 return defer.ensureDeferred(self._check_threepid("email", authdict))
220 async def check_auth(self, authdict, clientip):
221 return await self._check_threepid("email", authdict)
222222
223223
224224 class MsisdnAuthChecker(UserInteractiveAuthChecker, _BaseThreepidAuthChecker):
231231 def is_enabled(self):
232232 return bool(self.hs.config.account_threepid_delegate_msisdn)
233233
234 def check_auth(self, authdict, clientip):
235 return defer.ensureDeferred(self._check_threepid("msisdn", authdict))
234 async def check_auth(self, authdict, clientip):
235 return await self._check_threepid("msisdn", authdict)
236236
237237
238238 INTERACTIVE_AUTH_CHECKERS = [
3030 IReactorPluggableNameResolver,
3131 IResolutionReceiver,
3232 )
33 from twisted.internet.task import Cooperator
3334 from twisted.python.failure import Failure
3435 from twisted.web._newclient import ResponseDone
3536 from twisted.web.client import Agent, HTTPConnectionPool, readBody
6869 return False
6970
7071
72 _EPSILON = 0.00000001
73
74
75 def _make_scheduler(reactor):
76 """Makes a schedular suitable for a Cooperator using the given reactor.
77
78 (This is effectively just a copy from `twisted.internet.task`)
79 """
80
81 def _scheduler(x):
82 return reactor.callLater(_EPSILON, x)
83
84 return _scheduler
85
86
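`_make_scheduler` simply wraps `reactor.callLater` with a tiny delay so that a `Cooperator` schedules its work on the intended reactor. A hedged sketch of driving work through such a cooperator directly, assuming a Twisted reactor is already running:

```python
from twisted.internet.task import Cooperator


# Sketch of using a Cooperator built with the scheduler helper above;
# assumes `reactor` is the running Twisted reactor.
def iterate_in_background(reactor, items):
    cooperator = Cooperator(scheduler=_make_scheduler(reactor))

    def _work():
        for item in items:
            # Each yield gives the reactor a chance to run other work.
            yield item

    # cooperate() consumes the generator one step at a time via the scheduler.
    return cooperator.cooperate(_work())
```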
7187 class IPBlacklistingResolver(object):
7288 """
7389 A proxy for reactor.nameResolver which only produces non-blacklisted IP
210226 self.clock = hs.get_clock()
211227 if hs.config.user_agent_suffix:
212228 self.user_agent = "%s %s" % (self.user_agent, hs.config.user_agent_suffix)
229
230 # We use this for our body producers to ensure that they use the correct
231 # reactor.
232 self._cooperator = Cooperator(scheduler=_make_scheduler(hs.get_reactor()))
213233
214234 self.user_agent = self.user_agent.encode("ascii")
215235
291311 try:
292312 body_producer = None
293313 if data is not None:
294 body_producer = QuieterFileBodyProducer(BytesIO(data))
314 body_producer = QuieterFileBodyProducer(
315 BytesIO(data), cooperator=self._cooperator,
316 )
295317
296318 request_deferred = treq.request(
297319 method,
370392 body = yield make_deferred_yieldable(readBody(response))
371393
372394 if 200 <= response.code < 300:
373 return json.loads(body)
395 return json.loads(body.decode("utf-8"))
374396 else:
375397 raise HttpResponseException(response.code, response.phrase, body)
376398
411433 body = yield make_deferred_yieldable(readBody(response))
412434
413435 if 200 <= response.code < 300:
414 return json.loads(body)
436 return json.loads(body.decode("utf-8"))
415437 else:
416438 raise HttpResponseException(response.code, response.phrase, body)
417439
440462 actual_headers.update(headers)
441463
442464 body = yield self.get_raw(uri, args, headers=headers)
443 return json.loads(body)
465 return json.loads(body.decode("utf-8"))
444466
445467 @defer.inlineCallbacks
446468 def put_json(self, uri, json_body, args={}, headers=None):
484506 body = yield make_deferred_yieldable(readBody(response))
485507
486508 if 200 <= response.code < 300:
487 return json.loads(body)
509 return json.loads(body.decode("utf-8"))
488510 else:
489511 raise HttpResponseException(response.code, response.phrase, body)
490512
502524 header name to a list of values for that header
503525 Returns:
504526 Deferred: Succeeds when we get *any* 2xx HTTP response, with the
505 HTTP body at text.
527 HTTP body as bytes.
506528 Raises:
507529 HttpResponseException on a non-2xx HTTP response.
508530 """
1414
1515 import logging
1616 import urllib
17 from typing import List
1718
1819 from netaddr import AddrFormatError, IPAddress
1920 from zope.interface import implementer
235236
236237 return run_in_background(self._do_connect, protocol_factory)
237238
238 @defer.inlineCallbacks
239 def _do_connect(self, protocol_factory):
239 async def _do_connect(self, protocol_factory):
240240 first_exception = None
241241
242 server_list = yield self._resolve_server()
242 server_list = await self._resolve_server()
243243
244244 for server in server_list:
245245 host = server.host
250250 endpoint = HostnameEndpoint(self._reactor, host, port)
251251 if self._tls_options:
252252 endpoint = wrapClientTLS(self._tls_options, endpoint)
253 result = yield make_deferred_yieldable(
253 result = await make_deferred_yieldable(
254254 endpoint.connect(protocol_factory)
255255 )
256256
270270 # to try and if that doesn't work then we'll have an exception.
271271 raise Exception("Failed to resolve server %r" % (self._parsed_uri.netloc,))
272272
273 @defer.inlineCallbacks
274 def _resolve_server(self):
273 async def _resolve_server(self) -> List[Server]:
275274 """Resolves the server name to a list of hosts and ports to attempt to
276275 connect to.
277
278 Returns:
279 Deferred[list[Server]]
280276 """
281277
282278 if self._parsed_uri.scheme != b"matrix":
297293 if port or _is_ip_literal(host):
298294 return [Server(host, port or 8448)]
299295
300 server_list = yield self._srv_resolver.resolve_service(b"_matrix._tcp." + host)
296 server_list = await self._srv_resolver.resolve_service(b"_matrix._tcp." + host)
301297
302298 if server_list:
303299 return server_list
1616 import logging
1717 import random
1818 import time
19 from typing import List
1920
2021 import attr
2122
22 from twisted.internet import defer
2323 from twisted.internet.error import ConnectError
2424 from twisted.names import client, dns
2525 from twisted.names.error import DNSNameError, DomainError
112112 self._cache = cache
113113 self._get_time = get_time
114114
115 @defer.inlineCallbacks
116 def resolve_service(self, service_name):
115 async def resolve_service(self, service_name: bytes) -> List[Server]:
117116 """Look up a SRV record
118117
119118 Args:
120119 service_name (bytes): record to look up
121120
122121 Returns:
123 Deferred[list[Server]]:
124 a list of the SRV records, or an empty list if none found
122 a list of the SRV records, or an empty list if none found
125123 """
126124 now = int(self._get_time())
127125
135133 return _sort_server_list(servers)
136134
137135 try:
138 answers, _, _ = yield make_deferred_yieldable(
136 answers, _, _ = await make_deferred_yieldable(
139137 self._dns_client.lookupService(service_name)
140138 )
141139 except DNSNameError:
216216 return NOT_DONE_YET
217217
218218 @wrap_async_request_handler
219 async def _async_render_wrapper(self, request):
219 async def _async_render_wrapper(self, request: SynapseRequest):
220220 """This is a wrapper that delegates to `_async_render` and handles
221221 exceptions, return values, metrics, etc.
222222 """
236236 f = failure.Failure()
237237 self._send_error_response(f, request)
238238
239 async def _async_render(self, request):
239 async def _async_render(self, request: Request):
240240 """Delegates to `_async_render_<METHOD>` methods, or returns a 400 if
241241 no appropriate method exists. Can be overridden in subclasses for
242242 different routing.
277277 """
278278
279279 def _send_response(
280 self, request, code, response_object,
280 self, request: Request, code: int, response_object: Any,
281281 ):
282282 """Implements _AsyncResource._send_response
283283 """
441441 return super().render_GET(request)
442442
443443
444 def _options_handler(request):
445 """Request handler for OPTIONS requests
446
447 This is a request handler suitable for return from
448 _get_handler_for_request. It returns a 200 and an empty body.
449
450 Args:
451 request (twisted.web.http.Request):
452
453 Returns:
454 Tuple[int, dict]: http code, response body.
455 """
456 return 200, {}
457
458
459444 def _unrecognised_request_handler(request):
460445 """Request handler for unrecognised requests
461446
489474 """Responds to OPTION requests for itself and all children."""
490475
491476 def render_OPTIONS(self, request):
492 code, response_json_object = _options_handler(request)
493
494 return respond_with_json(
495 request, code, response_json_object, send_cors=True, canonical_json=False,
496 )
477 request.setResponseCode(204)
478 request.setHeader(b"Content-Length", b"0")
479
480 set_cors_headers(request)
481
482 return b""
497483
498484 def getChildWithDefault(self, path, request):
499485 if request.method == b"OPTIONS":
506492
507493
508494 def respond_with_json(
509 request,
510 code,
511 json_object,
512 send_cors=False,
513 response_code_message=None,
514 pretty_print=False,
515 canonical_json=True,
495 request: Request,
496 code: int,
497 json_object: Any,
498 send_cors: bool = False,
499 pretty_print: bool = False,
500 canonical_json: bool = True,
516501 ):
502 """Sends encoded JSON in response to the given request.
503
504 Args:
505 request: The http request to respond to.
506 code: The HTTP response code.
507 json_object: The object to serialize to JSON.
508 send_cors: Whether to send Cross-Origin Resource Sharing headers
509 https://fetch.spec.whatwg.org/#http-cors-protocol
510 pretty_print: Whether to include indentation and line-breaks in the
511 resulting JSON bytes.
512 canonical_json: Whether to use the canonicaljson algorithm when encoding
513 the JSON bytes.
514
515 Returns:
516 twisted.web.server.NOT_DONE_YET if the request is still active.
517 """
517518 # could alternatively use request.notifyFinish() and flip a flag when
518519 # the Deferred fires, but since the flag is RIGHT THERE it seems like
519520 # a waste.
521522 logger.warning(
522523 "Not sending response to request %s, already disconnected.", request
523524 )
524 return
525 return None
525526
526527 if pretty_print:
527528 json_bytes = encode_pretty_printed_json(json_object) + b"\n"
532533 else:
533534 json_bytes = json.dumps(json_object).encode("utf-8")
534535
535 return respond_with_json_bytes(
536 request,
537 code,
538 json_bytes,
539 send_cors=send_cors,
540 response_code_message=response_code_message,
541 )
536 return respond_with_json_bytes(request, code, json_bytes, send_cors=send_cors)
542537
543538
544539 def respond_with_json_bytes(
545 request, code, json_bytes, send_cors=False, response_code_message=None
540 request: Request, code: int, json_bytes: bytes, send_cors: bool = False,
546541 ):
547542 """Sends encoded JSON in response to the given request.
548543
549544 Args:
550 request (twisted.web.http.Request): The http request to respond to.
551 code (int): The HTTP response code.
552 json_bytes (bytes): The json bytes to use as the response body.
553 send_cors (bool): Whether to send Cross-Origin Resource Sharing headers
545 request: The http request to respond to.
546 code: The HTTP response code.
547 json_bytes: The json bytes to use as the response body.
548 send_cors: Whether to send Cross-Origin Resource Sharing headers
554549 https://fetch.spec.whatwg.org/#http-cors-protocol
550
555551 Returns:
556 twisted.web.server.NOT_DONE_YET"""
557
558 request.setResponseCode(code, message=response_code_message)
552 twisted.web.server.NOT_DONE_YET if the request is still active.
553 """
554
555 request.setResponseCode(code)
559556 request.setHeader(b"Content-Type", b"application/json")
560557 request.setHeader(b"Content-Length", b"%d" % (len(json_bytes),))
561558 request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")
563560 if send_cors:
564561 set_cors_headers(request)
565562
566 # todo: we can almost certainly avoid this copy and encode the json straight into
567 # the bytesIO, but it would involve faffing around with string->bytes wrappers.
563 # note that this is zero-copy (the bytesio shares a copy-on-write buffer with
564 # the original `bytes`).
568565 bytes_io = BytesIO(json_bytes)
569566
570567 producer = NoRangeStaticProducer(request, bytes_io)
572569 return NOT_DONE_YET
573570
574571
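A hedged sketch of how a resource's render method typically uses these helpers with the signatures typed above; the resource class itself is hypothetical:

```python
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET

from synapse.http.server import respond_with_json


# Hypothetical resource showing the call pattern for respond_with_json:
# it writes the response itself, so the render method returns NOT_DONE_YET.
class HealthCheckResource(Resource):
    isLeaf = True

    def render_GET(self, request):
        respond_with_json(
            request,
            200,
            {"status": "ok"},
            send_cors=True,
        )
        return NOT_DONE_YET
```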
575 def set_cors_headers(request):
576 """Set the CORs headers so that javascript running in a web browsers can
572 def set_cors_headers(request: Request):
573 """Set the CORS headers so that javascript running in a web browsers can
577574 use this API
578575
579576 Args:
580 request (twisted.web.http.Request): The http request to add CORs to.
577 request: The http request to add CORS to.
581578 """
582579 request.setHeader(b"Access-Control-Allow-Origin", b"*")
583580 request.setHeader(
642639 request.setHeader(b"Content-Security-Policy", b"frame-ancestors 'none';")
643640
644641
645 def finish_request(request):
642 def finish_request(request: Request):
646643 """ Finish writing the response to the request.
647644
648645 Twisted throws a RuntimeException if the connection closed before the
661658 logger.info("Connection disconnected before response was written: %r", e)
662659
663660
664 def _request_user_agent_is_curl(request):
661 def _request_user_agent_is_curl(request: Request) -> bool:
665662 user_agents = request.requestHeaders.getRawHeaders(b"User-Agent", default=[])
666663 for user_agent in user_agents:
667664 if b"curl" in user_agent:
213213 if not content_bytes and allow_empty_body:
214214 return None
215215
216 # Decode to Unicode so that simplejson will return Unicode strings on
217 # Python 2
218216 try:
219 content_unicode = content_bytes.decode("utf8")
220 except UnicodeDecodeError:
221 logger.warning("Unable to decode UTF-8")
222 raise SynapseError(400, "Content not JSON.", errcode=Codes.NOT_JSON)
223
224 try:
225 content = json.loads(content_unicode)
217 content = json.loads(content_bytes.decode("utf-8"))
226218 except Exception as e:
227219 logger.warning("Unable to parse JSON: %s", e)
228220 raise SynapseError(400, "Content not JSON.", errcode=Codes.NOT_JSON)
214214 # It's useful to log it here so that we can get an idea of when
215215 # the client disconnects.
216216 with PreserveLoggingContext(self.logcontext):
217 logger.warning(
218 "Error processing request %r: %s %s", self, reason.type, reason.value
219 )
217 logger.info("Connection from client lost before response was sent")
220218
221219 if not self._is_processing:
222220 self._finished_processing()
565565 return True
566566
567567
568 class PreserveLoggingContext(object):
569 """Captures the current logging context and restores it when the scope is
570 exited. Used to restore the context after a function using
571 @defer.inlineCallbacks is resumed by a callback from the reactor."""
572
573 __slots__ = ["current_context", "new_context", "has_parent"]
568 class PreserveLoggingContext:
569 """Context manager which replaces the logging context
570
571 The previous logging context is restored on exit."""
572
573 __slots__ = ["_old_context", "_new_context"]
574574
575575 def __init__(
576576 self, new_context: LoggingContextOrSentinel = SENTINEL_CONTEXT
577577 ) -> None:
578 self.new_context = new_context
578 self._new_context = new_context
579579
580580 def __enter__(self) -> None:
581 """Captures the current logging context"""
582 self.current_context = set_current_context(self.new_context)
583
584 if self.current_context:
585 self.has_parent = self.current_context.previous_context is not None
581 self._old_context = set_current_context(self._new_context)
586582
587583 def __exit__(self, type, value, traceback) -> None:
588 """Restores the current logging context"""
589 context = set_current_context(self.current_context)
590
591 if context != self.new_context:
584 context = set_current_context(self._old_context)
585
586 if context != self._new_context:
592587 if not context:
593 logger.warning("Expected logging context %s was lost", self.new_context)
588 logger.warning(
589 "Expected logging context %s was lost", self._new_context
590 )
594591 else:
595592 logger.warning(
596593 "Expected logging context %s but found %s",
597 self.new_context,
594 self._new_context,
598595 context,
599596 )
600597
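Usage is unchanged by the rename of the internal attributes: the class is still a plain context manager that restores whatever context was active on entry. A small hedged sketch:

```python
from synapse.logging.context import LoggingContext, PreserveLoggingContext


# Sketch of the context-manager behaviour described above: whatever logging
# context is active on entry is restored when the block exits.
def do_work_outside_request_context():
    with LoggingContext("request-handling"):
        # ... request-scoped work, logged against "request-handling" ...

        with PreserveLoggingContext():
            # Inside this block the sentinel (default) context is active,
            # e.g. while yielding control to the reactor.
            pass

        # The "request-handling" context is active again here.
        pass
```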
732732
733733 _opname = opname if opname else func.__name__
734734
735 @wraps(func)
736 def _trace_inner(*args, **kwargs):
737 if opentracing is None:
738 return func(*args, **kwargs)
739
740 scope = start_active_span(_opname)
741 scope.__enter__()
742
743 try:
744 result = func(*args, **kwargs)
745 if isinstance(result, defer.Deferred):
746
747 def call_back(result):
735 if inspect.iscoroutinefunction(func):
736
737 @wraps(func)
738 async def _trace_inner(*args, **kwargs):
739 with start_active_span(_opname):
740 return await func(*args, **kwargs)
741
742 else:
743 # The other case here handles both sync functions and those
744 # decorated with inlineDeferred.
745 @wraps(func)
746 def _trace_inner(*args, **kwargs):
747 scope = start_active_span(_opname)
748 scope.__enter__()
749
750 try:
751 result = func(*args, **kwargs)
752 if isinstance(result, defer.Deferred):
753
754 def call_back(result):
755 scope.__exit__(None, None, None)
756 return result
757
758 def err_back(result):
759 scope.__exit__(None, None, None)
760 return result
761
762 result.addCallbacks(call_back, err_back)
763
764 else:
748765 scope.__exit__(None, None, None)
749 return result
750
751 def err_back(result):
752 scope.span.set_tag(tags.ERROR, True)
753 scope.__exit__(None, None, None)
754 return result
755
756 result.addCallbacks(call_back, err_back)
757
758 else:
759 scope.__exit__(None, None, None)
760
761 return result
762
763 except Exception as e:
764 scope.__exit__(type(e), None, e.__traceback__)
765 raise
766
767 return result
768
769 except Exception as e:
770 scope.__exit__(type(e), None, e.__traceback__)
771 raise
766772
767773 return _trace_inner
768774
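With the coroutine branch added above, the decorator can wrap native `async def` functions as well as Deferred-returning ones, keeping the span open for the full `await`. A hedged usage sketch, assuming the decorator built here is the module's `trace` decorator; the HTTP client and its `get_json` method are stand-ins:

```python
from synapse.logging.opentracing import trace


# Assumes the wrapper built above is exposed as `trace`. With the new
# coroutine branch, the span around an async function stays open until the
# awaited work finishes, not just until a Deferred is handed back.
@trace
async def fetch_remote_profile(client, user_id: str) -> dict:
    # `client` is a stand-in for an HTTP client exposing an async get_json().
    return await client.get_json("/_matrix/profile/%s" % (user_id,))
```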
115115 if self._enter_logcontext:
116116 self.logcontext.__enter__()
117117
118 return self
119
118120 def __exit__(self, type, value, traceback):
119121 if type == twisted.internet.defer._DefGen_Return:
120122 super(_LogContextScope, self).__exit__(None, None, None)
1313 # limitations under the License.
1414
1515
16 import inspect
1716 import logging
18 import time
1917 from functools import wraps
2018 from inspect import getcallargs
2119
7371
7472 wrapped.__name__ = func_name
7573 return wrapped
76
77
78 def time_function(f):
79 func_name = f.__name__
80
81 @wraps(f)
82 def wrapped(*args, **kwargs):
83 global _TIME_FUNC_ID
84 id = _TIME_FUNC_ID
85 _TIME_FUNC_ID += 1
86
87 start = time.clock()
88
89 try:
90 _log_debug_as_f(f, "[FUNC START] {%s-%d}", (func_name, id))
91
92 r = f(*args, **kwargs)
93 finally:
94 end = time.clock()
95 _log_debug_as_f(
96 f, "[FUNC END] {%s-%d} %.3f sec", (func_name, id, end - start)
97 )
98
99 return r
100
101 return wrapped
102
103
104 def trace_function(f):
105 func_name = f.__name__
106 linenum = f.func_code.co_firstlineno
107 pathname = f.func_code.co_filename
108
109 @wraps(f)
110 def wrapped(*args, **kwargs):
111 name = f.__module__
112 logger = logging.getLogger(name)
113 level = logging.DEBUG
114
115 frame = inspect.currentframe()
116 if frame is None:
117 raise Exception("Can't get current frame!")
118
119 s = frame.f_back
120
121 to_print = [
122 "\t%s:%s %s. Args: args=%s, kwargs=%s"
123 % (pathname, linenum, func_name, args, kwargs)
124 ]
125 while s:
126 if True or s.f_globals["__name__"].startswith("synapse"):
127 filename, lineno, function, _, _ = inspect.getframeinfo(s)
128 args_string = inspect.formatargvalues(*inspect.getargvalues(s))
129
130 to_print.append(
131 "\t%s:%d %s. Args: %s" % (filename, lineno, function, args_string)
132 )
133
134 s = s.f_back
135
136 msg = "\nTraceback for %s:\n" % (func_name,) + "\n".join(to_print)
137
138 record = logging.LogRecord(
139 name=name,
140 level=level,
141 pathname=pathname,
142 lineno=lineno,
143 msg=msg,
144 args=(),
145 exc_info=None,
146 )
147
148 logger.handle(record)
149
150 return f(*args, **kwargs)
151
152 wrapped.__name__ = func_name
153 return wrapped
154
155
156 def get_previous_frames():
157
158 frame = inspect.currentframe()
159 if frame is None:
160 raise Exception("Can't get current frame!")
161
162 s = frame.f_back.f_back
163 to_return = []
164 while s:
165 if s.f_globals["__name__"].startswith("synapse"):
166 filename, lineno, function, _, _ = inspect.getframeinfo(s)
167 args_string = inspect.formatargvalues(*inspect.getargvalues(s))
168
169 to_return.append(
170 "{{ %s:%d %s - Args: %s }}" % (filename, lineno, function, args_string)
171 )
172
173 s = s.f_back
174
175 return ", ".join(to_return)
176
177
178 def get_previous_frame(ignore=[]):
179 frame = inspect.currentframe()
180 if frame is None:
181 raise Exception("Can't get current frame!")
182 s = frame.f_back.f_back
183
184 while s:
185 if s.f_globals["__name__"].startswith("synapse"):
186 if not any(s.f_globals["__name__"].startswith(ig) for ig in ignore):
187 filename, lineno, function, _, _ = inspect.getframeinfo(s)
188 args_string = inspect.formatargvalues(*inspect.getargvalues(s))
189
190 return "{{ %s:%d %s - Args: %s }}" % (
191 filename,
192 lineno,
193 function,
194 args_string,
195 )
196
197 s = s.f_back
198
199 return None
303303
304304 push_rules_delta_state_cache_metric.inc_hits()
305305 else:
306 current_state_ids = yield context.get_current_state_ids()
306 current_state_ids = yield defer.ensureDeferred(
307 context.get_current_state_ids()
308 )
307309 push_rules_delta_state_cache_metric.inc_misses()
308310
309311 push_rules_state_size_counter.inc(len(current_state_ids))
2626
2727 from synapse.api.constants import EventTypes
2828 from synapse.api.errors import StoreError
29 from synapse.config.emailconfig import EmailSubjectConfig
2930 from synapse.logging.context import make_deferred_yieldable
3031 from synapse.push.presentable_names import (
3132 calculate_room_name,
4041
4142 T = TypeVar("T")
4243
43
44 MESSAGE_FROM_PERSON_IN_ROOM = (
45 "You have a message on %(app)s from %(person)s in the %(room)s room..."
46 )
47 MESSAGE_FROM_PERSON = "You have a message on %(app)s from %(person)s..."
48 MESSAGES_FROM_PERSON = "You have messages on %(app)s from %(person)s..."
49 MESSAGES_IN_ROOM = "You have messages on %(app)s in the %(room)s room..."
50 MESSAGES_IN_ROOM_AND_OTHERS = (
51 "You have messages on %(app)s in the %(room)s room and others..."
52 )
53 MESSAGES_FROM_PERSON_AND_OTHERS = (
54 "You have messages on %(app)s from %(person)s and others..."
55 )
56 INVITE_FROM_PERSON_TO_ROOM = (
57 "%(person)s has invited you to join the %(room)s room on %(app)s..."
58 )
59 INVITE_FROM_PERSON = "%(person)s has invited you to chat on %(app)s..."
6044
6145 CONTEXT_BEFORE = 1
6246 CONTEXT_AFTER = 1
120104 self.state_handler = self.hs.get_state_handler()
121105 self.storage = hs.get_storage()
122106 self.app_name = app_name
107 self.email_subjects = hs.config.email_subjects # type: EmailSubjectConfig
123108
124109 logger.info("Created Mailer for app_name %s" % app_name)
125110
146131
147132 await self.send_email(
148133 email_address,
149 "[%s] Password Reset" % self.hs.config.server_name,
134 self.email_subjects.password_reset
135 % {"server_name": self.hs.config.server_name},
150136 template_vars,
151137 )
152138
173159
174160 await self.send_email(
175161 email_address,
176 "[%s] Register your Email Address" % self.hs.config.server_name,
162 self.email_subjects.email_validation
163 % {"server_name": self.hs.config.server_name},
177164 template_vars,
178165 )
179166
201188
202189 await self.send_email(
203190 email_address,
204 "[%s] Validate Your Email" % self.hs.config.server_name,
191 self.email_subjects.email_validation
192 % {"server_name": self.hs.config.server_name},
205193 template_vars,
206194 )
207195
268256 user_id, app_id, email_address
269257 ),
270258 "summary_text": summary_text,
271 "app_name": self.app_name,
272259 "rooms": rooms,
273260 "reason": reason,
274261 }
275262
276 await self.send_email(
277 email_address, "[%s] %s" % (self.app_name, summary_text), template_vars
278 )
279
280 async def send_email(self, email_address, subject, template_vars):
263 await self.send_email(email_address, summary_text, template_vars)
264
265 async def send_email(self, email_address, subject, extra_template_vars):
281266 """Send an email with the given information and template text"""
282267 try:
283268 from_string = self.hs.config.email_notif_from % {"app": self.app_name}
289274
290275 if raw_to == "":
291276 raise RuntimeError("Invalid 'to' address")
277
278 template_vars = {
279 "app_name": self.app_name,
280 "server_name": self.hs.config.server.server_name,
281 }
282
283 template_vars.update(extra_template_vars)
292284
293285 html_text = self.template_html.render(**template_vars)
294286 html_part = MIMEText(html_text, "html", "utf8")
475467 inviter_name = name_from_member_event(inviter_member_event)
476468
477469 if room_name is None:
478 return INVITE_FROM_PERSON % {
470 return self.email_subjects.invite_from_person % {
479471 "person": inviter_name,
480472 "app": self.app_name,
481473 }
482474 else:
483 return INVITE_FROM_PERSON_TO_ROOM % {
475 return self.email_subjects.invite_from_person_to_room % {
484476 "person": inviter_name,
485477 "room": room_name,
486478 "app": self.app_name,
498490 sender_name = name_from_member_event(state_event)
499491
500492 if sender_name is not None and room_name is not None:
501 return MESSAGE_FROM_PERSON_IN_ROOM % {
493 return self.email_subjects.message_from_person_in_room % {
502494 "person": sender_name,
503495 "room": room_name,
504496 "app": self.app_name,
505497 }
506498 elif sender_name is not None:
507 return MESSAGE_FROM_PERSON % {
499 return self.email_subjects.message_from_person % {
508500 "person": sender_name,
509501 "app": self.app_name,
510502 }
512504 # There's more than one notification for this room, so just
513505 # say there are several
514506 if room_name is not None:
515 return MESSAGES_IN_ROOM % {"room": room_name, "app": self.app_name}
507 return self.email_subjects.messages_in_room % {
508 "room": room_name,
509 "app": self.app_name,
510 }
516511 else:
517512 # If the room doesn't have a name, say who the messages
518513 # are from explicitly to avoid, "messages in the Bob room"
530525 ]
531526 )
532527
533 return MESSAGES_FROM_PERSON % {
528 return self.email_subjects.messages_from_person % {
534529 "person": descriptor_from_member_events(member_events.values()),
535530 "app": self.app_name,
536531 }
539534
540535 # ...but we still refer to the 'reason' room which triggered the mail
541536 if reason["room_name"] is not None:
542 return MESSAGES_IN_ROOM_AND_OTHERS % {
537 return self.email_subjects.messages_in_room_and_others % {
543538 "room": reason["room_name"],
544539 "app": self.app_name,
545540 }
559554 [room_state_ids[room_id][("m.room.member", s)] for s in sender_ids]
560555 )
561556
562 return MESSAGES_FROM_PERSON_AND_OTHERS % {
557 return self.email_subjects.messages_from_person_and_others % {
563558 "person": descriptor_from_member_events(member_events.values()),
564559 "app": self.app_name,
565560 }
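Editor's note: the mailer changes above replace the hardcoded subject constants with values read from `hs.config.email_subjects`, each rendered with `%`-style placeholders such as `%(app)s`, `%(person)s`, `%(room)s` and `%(server_name)s`. A small sketch of how such templates are expanded; the template strings below are illustrative and mirror the removed defaults rather than any particular configuration.

```python
# Hypothetical subject templates of the same shape as the configurable ones.
SUBJECTS = {
    "password_reset": "[%(server_name)s] Password reset",
    "message_from_person_in_room": (
        "You have a message on %(app)s from %(person)s in the %(room)s room..."
    ),
}


def render_subject(key: str, **params: str) -> str:
    """Expand a %-style subject template; a missing placeholder raises KeyError."""
    return SUBJECTS[key] % params


if __name__ == "__main__":
    print(render_subject("password_reset", server_name="example.com"))
    print(
        render_subject(
            "message_from_person_in_room",
            app="Matrix",
            person="Alice",
            room="Lounge",
        )
    )
```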
1414 # limitations under the License.
1515
1616 import logging
17 from collections import defaultdict
18 from threading import Lock
19 from typing import Dict, Tuple, Union
17 from typing import TYPE_CHECKING, Dict, Union
18
19 from prometheus_client import Gauge
2020
2121 from twisted.internet import defer
2222
23 from synapse.metrics import LaterGauge
2423 from synapse.metrics.background_process_metrics import run_as_background_process
2524 from synapse.push import PusherConfigException
2625 from synapse.push.emailpusher import EmailPusher
2827 from synapse.push.pusher import PusherFactory
2928 from synapse.util.async_helpers import concurrently_execute
3029
30 if TYPE_CHECKING:
31 from synapse.server import HomeServer
32
33
3134 logger = logging.getLogger(__name__)
35
36
37 synapse_pushers = Gauge(
38 "synapse_pushers", "Number of active synapse pushers", ["kind", "app_id"]
39 )
3240
3341
3442 class PusherPool:
4654 Pusher.on_new_receipts are not expected to return deferreds.
4755 """
4856
49 def __init__(self, _hs):
50 self.hs = _hs
51 self.pusher_factory = PusherFactory(_hs)
52 self._should_start_pushers = _hs.config.start_pushers
57 def __init__(self, hs: "HomeServer"):
58 self.hs = hs
59 self.pusher_factory = PusherFactory(hs)
60 self._should_start_pushers = hs.config.start_pushers
5361 self.store = self.hs.get_datastore()
5462 self.clock = self.hs.get_clock()
5563
64 # We shard the handling of push notifications by user ID.
65 self._pusher_shard_config = hs.config.push.pusher_shard_config
66 self._instance_name = hs.get_instance_name()
67
5668 # map from user id to app_id:pushkey to pusher
5769 self.pushers = {} # type: Dict[str, Dict[str, Union[HttpPusher, EmailPusher]]]
58
59 # a lock for the pushers dict, since `count_pushers` is called from a different
60 # thread and we otherwise get concurrent modification errors
61 self._pushers_lock = Lock()
62
63 def count_pushers():
64 results = defaultdict(int) # type: Dict[Tuple[str, str], int]
65 with self._pushers_lock:
66 for pushers in self.pushers.values():
67 for pusher in pushers.values():
68 k = (type(pusher).__name__, pusher.app_id)
69 results[k] += 1
70 return results
71
72 LaterGauge(
73 name="synapse_pushers",
74 desc="the number of active pushers",
75 labels=["kind", "app_id"],
76 caller=count_pushers,
77 )
7870
7971 def start(self):
8072 """Starts the pushers off in a background process.
10395 Returns:
10496 Deferred[EmailPusher|HttpPusher]
10597 """
98
10699 time_now_msec = self.clock.time_msec()
107100
108101 # we try to create the pusher just to validate the config: it
175168 access_tokens (Iterable[int]): access token *ids* to remove pushers
176169 for
177170 """
171 if not self._pusher_shard_config.should_handle(self._instance_name, user_id):
172 return
173
178174 tokens = set(access_tokens)
179175 for p in (yield self.store.get_pushers_by_user_id(user_id)):
180176 if p["access_token"] in tokens:
236232 if not self._should_start_pushers:
237233 return
238234
235 if not self._pusher_shard_config.should_handle(self._instance_name, user_id):
236 return
237
239238 resultlist = yield self.store.get_pushers_by_app_id_and_pushkey(app_id, pushkey)
240239
241240 pusher_dict = None
274273 Returns:
275274 Deferred[EmailPusher|HttpPusher]
276275 """
276 if not self._pusher_shard_config.should_handle(
277 self._instance_name, pusherdict["user_name"]
278 ):
279 return
280
277281 try:
278282 p = self.pusher_factory.create_pusher(pusherdict)
279283 except PusherConfigException as e:
297301
298302 appid_pushkey = "%s:%s" % (pusherdict["app_id"], pusherdict["pushkey"])
299303
300 with self._pushers_lock:
301 byuser = self.pushers.setdefault(pusherdict["user_name"], {})
302 if appid_pushkey in byuser:
303 byuser[appid_pushkey].on_stop()
304 byuser[appid_pushkey] = p
304 byuser = self.pushers.setdefault(pusherdict["user_name"], {})
305 if appid_pushkey in byuser:
306 byuser[appid_pushkey].on_stop()
307 byuser[appid_pushkey] = p
308
309 synapse_pushers.labels(type(p).__name__, p.app_id).inc()
305310
306311 # Check if there *may* be push to process. We do this as this check is a
307312 # lot cheaper to do than actually fetching the exact rows we need to
329334
330335 if appid_pushkey in byuser:
331336 logger.info("Stopping pusher %s / %s", user_id, appid_pushkey)
332 byuser[appid_pushkey].on_stop()
333 with self._pushers_lock:
334 del byuser[appid_pushkey]
337 pusher = byuser.pop(appid_pushkey)
338 pusher.on_stop()
339
340 synapse_pushers.labels(type(pusher).__name__, pusher.app_id).dec()
335341
336342 yield self.store.delete_pusher_by_app_id_pushkey_user_id(
337343 app_id, pushkey, user_id
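Editor's note: the pusher pool now increments and decrements a labelled `prometheus_client.Gauge` as pushers start and stop, instead of recomputing counts in a `LaterGauge` callback under a lock. A minimal sketch of that labelled-gauge pattern; the metric name, class and app id are made up.

```python
from prometheus_client import Gauge, generate_latest

# One time series per (kind, app_id), same shape as the new metric.
active_pushers = Gauge(
    "example_active_pushers", "Number of active pushers", ["kind", "app_id"]
)


class HttpPusher:
    app_id = "example.app"


def start_pusher(pusher) -> None:
    active_pushers.labels(type(pusher).__name__, pusher.app_id).inc()


def stop_pusher(pusher) -> None:
    active_pushers.labels(type(pusher).__name__, pusher.app_id).dec()


if __name__ == "__main__":
    p = HttpPusher()
    start_pusher(p)
    # The exposition now contains a sample like:
    #   example_active_pushers{kind="HttpPusher",app_id="example.app"} 1.0
    print(generate_latest().decode("utf-8"))
    stop_pusher(p)
```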
3838 federation.register_servlets(hs, self)
3939 presence.register_servlets(hs, self)
4040 membership.register_servlets(hs, self)
41 streams.register_servlets(hs, self)
4142
4243 # The following can't currently be instantiated on workers.
4344 if hs.config.worker.worker_app is None:
4445 login.register_servlets(hs, self)
4546 register.register_servlets(hs, self)
4647 devices.register_servlets(hs, self)
47 streams.register_servlets(hs, self)
2525 def __init__(self, database: Database, db_conn, hs):
2626 super(SlavedDeviceInboxStore, self).__init__(database, db_conn, hs)
2727 self._device_inbox_id_gen = SlavedIdTracker(
28 db_conn, "device_max_stream_id", "stream_id"
28 db_conn, "device_inbox", "stream_id"
2929 )
3030 self._device_inbox_stream_cache = StreamChangeCache(
3131 "DeviceInboxStreamChangeCache",
2323 from synapse.api.constants import EventTypes
2424 from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
2525 from synapse.replication.tcp.protocol import ClientReplicationStreamProtocol
26 from synapse.replication.tcp.streams import TypingStream
2627 from synapse.replication.tcp.streams.events import (
2728 EventsStream,
2829 EventsStreamEventRow,
103104 self._clock = hs.get_clock()
104105 self._streams = hs.get_replication_streams()
105106 self._instance_name = hs.get_instance_name()
107 self._typing_handler = hs.get_typing_handler()
106108
107109 # Map from stream to list of deferreds waiting for the stream to
108110 # arrive at a particular position. The lists are sorted by stream position.
125127 rows: a list of Stream.ROW_TYPE objects as returned by Stream.parse_row.
126128 """
127129 self.store.process_replication_rows(stream_name, instance_name, token, rows)
130
131 if stream_name == TypingStream.NAME:
132 self._typing_handler.process_replication_rows(token, rows)
133 self.notifier.on_new_event(
134 "typing_key", token, rooms=[row.room_id for row in rows]
135 )
128136
129137 if stream_name == EventsStream.NAME:
130138 # We shouldn't get multiple rows per token for events stream, so
292292
293293 Format::
294294
295 FEDERATION_ACK <token>
295 FEDERATION_ACK <instance_name> <token>
296296 """
297297
298298 NAME = "FEDERATION_ACK"
299299
300 def __init__(self, token):
300 def __init__(self, instance_name, token):
301 self.instance_name = instance_name
301302 self.token = token
302303
303304 @classmethod
304305 def from_line(cls, line):
305 return cls(int(line))
306
307 def to_line(self):
308 return str(self.token)
306 instance_name, token = line.split(" ")
307 return cls(instance_name, int(token))
308
309 def to_line(self):
310 return "%s %s" % (self.instance_name, self.token)
309311
310312
311313 class RemovePusherCommand(Command):
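Editor's note: `FEDERATION_ACK` now carries the sending instance's name as well as the token, so its wire format becomes `FEDERATION_ACK <instance_name> <token>`. A standalone sketch of the same parse/serialise round trip; it mirrors the shape of the command class shown above rather than being the Synapse implementation itself.

```python
class FederationAckCommand:
    """Replication command of the form 'FEDERATION_ACK <instance_name> <token>'."""

    NAME = "FEDERATION_ACK"

    def __init__(self, instance_name: str, token: int):
        self.instance_name = instance_name
        self.token = token

    @classmethod
    def from_line(cls, line: str) -> "FederationAckCommand":
        instance_name, token = line.split(" ")
        return cls(instance_name, int(token))

    def to_line(self) -> str:
        return "%s %s" % (self.instance_name, self.token)


if __name__ == "__main__":
    cmd = FederationAckCommand.from_line("federation_sender1 1234")
    assert cmd.to_line() == "federation_sender1 1234"
    print(cmd.NAME, cmd.instance_name, cmd.token)
```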
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515 import logging
16 from typing import Any, Dict, Iterable, Iterator, List, Optional, Set, Tuple, TypeVar
16 from typing import (
17 Any,
18 Awaitable,
19 Dict,
20 Iterable,
21 Iterator,
22 List,
23 Optional,
24 Set,
25 Tuple,
26 TypeVar,
27 Union,
28 )
1729
1830 from prometheus_client import Counter
31 from typing_extensions import Deque
1932
2033 from twisted.internet.protocol import ReconnectingClientFactory
2134
2235 from synapse.metrics import LaterGauge
36 from synapse.metrics.background_process_metrics import run_as_background_process
2337 from synapse.replication.tcp.client import DirectTcpReplicationClientFactory
2438 from synapse.replication.tcp.commands import (
2539 ClearUserSyncsCommand,
4155 EventsStream,
4256 FederationStream,
4357 Stream,
58 TypingStream,
4459 )
45 from synapse.util.async_helpers import Linearizer
4660
4761 logger = logging.getLogger(__name__)
4862
5468 user_sync_counter = Counter("synapse_replication_tcp_resource_user_sync", "")
5569 federation_ack_counter = Counter("synapse_replication_tcp_resource_federation_ack", "")
5670 remove_pusher_counter = Counter("synapse_replication_tcp_resource_remove_pusher", "")
57 invalidate_cache_counter = Counter(
58 "synapse_replication_tcp_resource_invalidate_cache", ""
59 )
71
6072 user_ip_cache_counter = Counter("synapse_replication_tcp_resource_user_ip_cache", "")
73
74
75 # the type of the entries in _command_queues_by_stream
76 _StreamCommandQueue = Deque[
77 Tuple[Union[RdataCommand, PositionCommand], AbstractConnection]
78 ]
6179
6280
6381 class ReplicationCommandHandler:
95113
96114 continue
97115
116 if isinstance(stream, TypingStream):
117 # Only add TypingStream as a source on the instance in charge of
118 # typing.
119 if hs.config.worker.writers.typing == hs.get_instance_name():
120 self._streams_to_replicate.append(stream)
121
122 continue
123
98124 # Only add any other streams if we're on master.
99125 if hs.config.worker_app is not None:
100126 continue
106132
107133 self._streams_to_replicate.append(stream)
108134
109 self._position_linearizer = Linearizer(
110 "replication_position", clock=self._clock
111 )
112
113135 # Map of stream name to batched updates. See RdataCommand for info on
114136 # how batching works.
115137 self._pending_batches = {} # type: Dict[str, List[Any]]
120142 # The currently connected connections. (The list of places we need to send
121143 # outgoing replication commands to.)
122144 self._connections = [] # type: List[AbstractConnection]
123
124 # For each connection, the incoming stream names that are coming from
125 # that connection.
126 self._streams_by_connection = {} # type: Dict[AbstractConnection, Set[str]]
127145
128146 LaterGauge(
129147 "synapse_replication_tcp_resource_total_connections",
132150 lambda: len(self._connections),
133151 )
134152
153 # When POSITION or RDATA commands arrive, we stick them in a queue and process
154 # them in order in a separate background process.
155
156 # the streams which are currently being processed by _unsafe_process_queue
157 self._processing_streams = set() # type: Set[str]
158
159 # for each stream, a queue of commands that are awaiting processing, and the
160 # connection that they arrived on.
161 self._command_queues_by_stream = {
162 stream_name: _StreamCommandQueue() for stream_name in self._streams
163 }
164
165 # For each connection, the incoming stream names that have received a POSITION
166 # from that connection.
167 self._streams_by_connection = {} # type: Dict[AbstractConnection, Set[str]]
168
169 LaterGauge(
170 "synapse_replication_tcp_command_queue",
171 "Number of inbound RDATA/POSITION commands queued for processing",
172 ["stream_name"],
173 lambda: {
174 (stream_name,): len(queue)
175 for stream_name, queue in self._command_queues_by_stream.items()
176 },
177 )
178
135179 self._is_master = hs.config.worker_app is None
136180
137181 self._federation_sender = None
141185 self._server_notices_sender = None
142186 if self._is_master:
143187 self._server_notices_sender = hs.get_server_notices_sender()
188
189 def _add_command_to_stream_queue(
190 self, conn: AbstractConnection, cmd: Union[RdataCommand, PositionCommand]
191 ) -> None:
192 """Queue the given received command for processing
193
194 Adds the given command to the per-stream queue, and processes the queue if
195 necessary
196 """
197 stream_name = cmd.stream_name
198 queue = self._command_queues_by_stream.get(stream_name)
199 if queue is None:
200 logger.error("Got %s for unknown stream: %s", cmd.NAME, stream_name)
201 return
202
203 queue.append((cmd, conn))
204
205 # if we're already processing this stream, there's nothing more to do:
206 # the new entry on the queue will get picked up in due course
207 if stream_name in self._processing_streams:
208 return
209
210 # fire off a background process to start processing the queue.
211 run_as_background_process(
212 "process-replication-data", self._unsafe_process_queue, stream_name
213 )
214
215 async def _unsafe_process_queue(self, stream_name: str):
216 """Processes the command queue for the given stream, until it is empty
217
218 Does not check if there is already a thread processing the queue, hence "unsafe"
219 """
220 assert stream_name not in self._processing_streams
221
222 self._processing_streams.add(stream_name)
223 try:
224 queue = self._command_queues_by_stream.get(stream_name)
225 while queue:
226 cmd, conn = queue.popleft()
227 try:
228 await self._process_command(cmd, conn, stream_name)
229 except Exception:
230 logger.exception("Failed to handle command %s", cmd)
231 finally:
232 self._processing_streams.discard(stream_name)
233
234 async def _process_command(
235 self,
236 cmd: Union[PositionCommand, RdataCommand],
237 conn: AbstractConnection,
238 stream_name: str,
239 ) -> None:
240 if isinstance(cmd, PositionCommand):
241 await self._process_position(stream_name, conn, cmd)
242 elif isinstance(cmd, RdataCommand):
243 await self._process_rdata(stream_name, conn, cmd)
244 else:
245 # This shouldn't be possible
246 raise Exception("Unrecognised command %s in stream queue" % (cmd.NAME,))
144247
145248 def start_replication(self, hs):
146249 """Helper method to start a replication connection to the remote server
198301 """
199302 return self._streams_to_replicate
200303
201 async def on_REPLICATE(self, conn: AbstractConnection, cmd: ReplicateCommand):
304 def on_REPLICATE(self, conn: AbstractConnection, cmd: ReplicateCommand):
202305 self.send_positions_to_connection(conn)
203306
204307 def send_positions_to_connection(self, conn: AbstractConnection):
217320 )
218321 )
219322
220 async def on_USER_SYNC(self, conn: AbstractConnection, cmd: UserSyncCommand):
323 def on_USER_SYNC(
324 self, conn: AbstractConnection, cmd: UserSyncCommand
325 ) -> Optional[Awaitable[None]]:
221326 user_sync_counter.inc()
222327
223328 if self._is_master:
224 await self._presence_handler.update_external_syncs_row(
329 return self._presence_handler.update_external_syncs_row(
225330 cmd.instance_id, cmd.user_id, cmd.is_syncing, cmd.last_sync_ms
226331 )
227
228 async def on_CLEAR_USER_SYNC(
332 else:
333 return None
334
335 def on_CLEAR_USER_SYNC(
229336 self, conn: AbstractConnection, cmd: ClearUserSyncsCommand
230 ):
337 ) -> Optional[Awaitable[None]]:
231338 if self._is_master:
232 await self._presence_handler.update_external_syncs_clear(cmd.instance_id)
233
234 async def on_FEDERATION_ACK(
235 self, conn: AbstractConnection, cmd: FederationAckCommand
236 ):
339 return self._presence_handler.update_external_syncs_clear(cmd.instance_id)
340 else:
341 return None
342
343 def on_FEDERATION_ACK(self, conn: AbstractConnection, cmd: FederationAckCommand):
237344 federation_ack_counter.inc()
238345
239346 if self._federation_sender:
240 self._federation_sender.federation_ack(cmd.token)
241
242 async def on_REMOVE_PUSHER(
347 self._federation_sender.federation_ack(cmd.instance_name, cmd.token)
348
349 def on_REMOVE_PUSHER(
243350 self, conn: AbstractConnection, cmd: RemovePusherCommand
244 ):
351 ) -> Optional[Awaitable[None]]:
245352 remove_pusher_counter.inc()
246353
247354 if self._is_master:
248 await self._store.delete_pusher_by_app_id_pushkey_user_id(
249 app_id=cmd.app_id, pushkey=cmd.push_key, user_id=cmd.user_id
250 )
251
252 self._notifier.on_new_replication_data()
253
254 async def on_USER_IP(self, conn: AbstractConnection, cmd: UserIpCommand):
355 return self._handle_remove_pusher(cmd)
356 else:
357 return None
358
359 async def _handle_remove_pusher(self, cmd: RemovePusherCommand):
360 await self._store.delete_pusher_by_app_id_pushkey_user_id(
361 app_id=cmd.app_id, pushkey=cmd.push_key, user_id=cmd.user_id
362 )
363
364 self._notifier.on_new_replication_data()
365
366 def on_USER_IP(
367 self, conn: AbstractConnection, cmd: UserIpCommand
368 ) -> Optional[Awaitable[None]]:
255369 user_ip_cache_counter.inc()
256370
257371 if self._is_master:
258 await self._store.insert_client_ip(
259 cmd.user_id,
260 cmd.access_token,
261 cmd.ip,
262 cmd.user_agent,
263 cmd.device_id,
264 cmd.last_seen,
265 )
266
267 if self._server_notices_sender:
268 await self._server_notices_sender.on_user_ip(cmd.user_id)
269
270 async def on_RDATA(self, conn: AbstractConnection, cmd: RdataCommand):
372 return self._handle_user_ip(cmd)
373 else:
374 return None
375
376 async def _handle_user_ip(self, cmd: UserIpCommand):
377 await self._store.insert_client_ip(
378 cmd.user_id,
379 cmd.access_token,
380 cmd.ip,
381 cmd.user_agent,
382 cmd.device_id,
383 cmd.last_seen,
384 )
385
386 assert self._server_notices_sender is not None
387 await self._server_notices_sender.on_user_ip(cmd.user_id)
388
389 def on_RDATA(self, conn: AbstractConnection, cmd: RdataCommand):
271390 if cmd.instance_name == self._instance_name:
272391 # Ignore RDATA that are just our own echoes
273392 return
275394 stream_name = cmd.stream_name
276395 inbound_rdata_count.labels(stream_name).inc()
277396
278 try:
279 row = STREAMS_MAP[stream_name].parse_row(cmd.row)
280 except Exception:
281 logger.exception("Failed to parse RDATA: %r %r", stream_name, cmd.row)
282 raise
283
284 # We linearize here for two reasons:
397 # We put the received command into a queue here for two reasons:
285398 # 1. so we don't try and concurrently handle multiple rows for the
286399 # same stream, and
287400 # 2. so we don't race with getting a POSITION command and fetching
288401 # missing RDATA.
289 with await self._position_linearizer.queue(cmd.stream_name):
290 # make sure that we've processed a POSITION for this stream *on this
291 # connection*. (A POSITION on another connection is no good, as there
292 # is no guarantee that we have seen all the intermediate updates.)
293 sbc = self._streams_by_connection.get(conn)
294 if not sbc or stream_name not in sbc:
295 # Let's drop the row for now, on the assumption we'll receive a
296 # `POSITION` soon and we'll catch up correctly then.
297 logger.debug(
298 "Discarding RDATA for unconnected stream %s -> %s",
299 stream_name,
300 cmd.token,
301 )
302 return
303
304 if cmd.token is None:
305 # I.e. this is part of a batch of updates for this stream (in
306 # which case batch until we get an update for the stream with a non
307 # None token).
308 self._pending_batches.setdefault(stream_name, []).append(row)
309 else:
310 # Check if this is the last of a batch of updates
311 rows = self._pending_batches.pop(stream_name, [])
312 rows.append(row)
313
314 stream = self._streams.get(stream_name)
315 if not stream:
316 logger.error("Got RDATA for unknown stream: %s", stream_name)
317 return
318
319 # Find where we previously streamed up to.
320 current_token = stream.current_token(cmd.instance_name)
321
322 # Discard this data if this token is earlier than the current
323 # position. Note that streams can be reset (in which case you
324 # expect an earlier token), but that must be preceded by a
325 # POSITION command.
326 if cmd.token <= current_token:
327 logger.debug(
328 "Discarding RDATA from stream %s at position %s before previous position %s",
329 stream_name,
330 cmd.token,
331 current_token,
332 )
333 else:
334 await self.on_rdata(stream_name, cmd.instance_name, cmd.token, rows)
402
403 self._add_command_to_stream_queue(conn, cmd)
404
405 async def _process_rdata(
406 self, stream_name: str, conn: AbstractConnection, cmd: RdataCommand
407 ) -> None:
408 """Process an RDATA command
409
410 Called after the command has been popped off the queue of inbound commands
411 """
412 try:
413 row = STREAMS_MAP[stream_name].parse_row(cmd.row)
414 except Exception as e:
415 raise Exception(
416 "Failed to parse RDATA: %r %r" % (stream_name, cmd.row)
417 ) from e
418
419 # make sure that we've processed a POSITION for this stream *on this
420 # connection*. (A POSITION on another connection is no good, as there
421 # is no guarantee that we have seen all the intermediate updates.)
422 sbc = self._streams_by_connection.get(conn)
423 if not sbc or stream_name not in sbc:
424 # Let's drop the row for now, on the assumption we'll receive a
425 # `POSITION` soon and we'll catch up correctly then.
426 logger.debug(
427 "Discarding RDATA for unconnected stream %s -> %s",
428 stream_name,
429 cmd.token,
430 )
431 return
432
433 if cmd.token is None:
434 # I.e. this is part of a batch of updates for this stream (in
435 # which case batch until we get an update for the stream with a non
436 # None token).
437 self._pending_batches.setdefault(stream_name, []).append(row)
438 return
439
440 # Check if this is the last of a batch of updates
441 rows = self._pending_batches.pop(stream_name, [])
442 rows.append(row)
443
444 stream = self._streams[stream_name]
445
446 # Find where we previously streamed up to.
447 current_token = stream.current_token(cmd.instance_name)
448
449 # Discard this data if this token is earlier than the current
450 # position. Note that streams can be reset (in which case you
451 # expect an earlier token), but that must be preceded by a
452 # POSITION command.
453 if cmd.token <= current_token:
454 logger.debug(
455 "Discarding RDATA from stream %s at position %s before previous position %s",
456 stream_name,
457 cmd.token,
458 current_token,
459 )
460 else:
461 await self.on_rdata(stream_name, cmd.instance_name, cmd.token, rows)
335462
336463 async def on_rdata(
337464 self, stream_name: str, instance_name: str, token: int, rows: list
350477 stream_name, instance_name, token, rows
351478 )
352479
353 async def on_POSITION(self, conn: AbstractConnection, cmd: PositionCommand):
480 def on_POSITION(self, conn: AbstractConnection, cmd: PositionCommand):
354481 if cmd.instance_name == self._instance_name:
355482 # Ignore POSITION that are just our own echoes
356483 return
357484
358485 logger.info("Handling '%s %s'", cmd.NAME, cmd.to_line())
359486
360 stream_name = cmd.stream_name
361 stream = self._streams.get(stream_name)
362 if not stream:
363 logger.error("Got POSITION for unknown stream: %s", stream_name)
364 return
365
366 # We protect catching up with a linearizer in case the replication
367 # connection reconnects under us.
368 with await self._position_linearizer.queue(stream_name):
369 # We're about to go and catch up with the stream, so remove from set
370 # of connected streams.
371 for streams in self._streams_by_connection.values():
372 streams.discard(stream_name)
373
374 # We clear the pending batches for the stream as the fetching of the
375 # missing updates below will fetch all rows in the batch.
376 self._pending_batches.pop(stream_name, [])
377
378 # Find where we previously streamed up to.
379 current_token = stream.current_token(cmd.instance_name)
380
381 # If the position token matches our current token then we're up to
382 # date and there's nothing to do. Otherwise, fetch all updates
383 # between then and now.
384 missing_updates = cmd.token != current_token
385 while missing_updates:
386 logger.info(
387 "Fetching replication rows for '%s' between %i and %i",
487 self._add_command_to_stream_queue(conn, cmd)
488
489 async def _process_position(
490 self, stream_name: str, conn: AbstractConnection, cmd: PositionCommand
491 ) -> None:
492 """Process a POSITION command
493
494 Called after the command has been popped off the queue of inbound commands
495 """
496 stream = self._streams[stream_name]
497
498 # We're about to go and catch up with the stream, so remove from set
499 # of connected streams.
500 for streams in self._streams_by_connection.values():
501 streams.discard(stream_name)
502
503 # We clear the pending batches for the stream as the fetching of the
504 # missing updates below will fetch all rows in the batch.
505 self._pending_batches.pop(stream_name, [])
506
507 # Find where we previously streamed up to.
508 current_token = stream.current_token(cmd.instance_name)
509
510 # If the position token matches our current token then we're up to
511 # date and there's nothing to do. Otherwise, fetch all updates
512 # between then and now.
513 missing_updates = cmd.token != current_token
514 while missing_updates:
515 logger.info(
516 "Fetching replication rows for '%s' between %i and %i",
517 stream_name,
518 current_token,
519 cmd.token,
520 )
521 (updates, current_token, missing_updates) = await stream.get_updates_since(
522 cmd.instance_name, current_token, cmd.token
523 )
524
525 # TODO: add some tests for this
526
527 # Some streams return multiple rows with the same stream IDs,
528 # which need to be processed in batches.
529
530 for token, rows in _batch_updates(updates):
531 await self.on_rdata(
388532 stream_name,
389 current_token,
390 cmd.token,
533 cmd.instance_name,
534 token,
535 [stream.parse_row(row) for row in rows],
391536 )
392 (
393 updates,
394 current_token,
395 missing_updates,
396 ) = await stream.get_updates_since(
397 cmd.instance_name, current_token, cmd.token
398 )
399
400 # TODO: add some tests for this
401
402 # Some streams return multiple rows with the same stream IDs,
403 # which need to be processed in batches.
404
405 for token, rows in _batch_updates(updates):
406 await self.on_rdata(
407 stream_name,
408 cmd.instance_name,
409 token,
410 [stream.parse_row(row) for row in rows],
411 )
412
413 logger.info("Caught up with stream '%s' to %i", stream_name, cmd.token)
414
415 # We've now caught up to position sent to us, notify handler.
416 await self._replication_data_handler.on_position(
417 cmd.stream_name, cmd.instance_name, cmd.token
418 )
419
420 self._streams_by_connection.setdefault(conn, set()).add(stream_name)
421
422 async def on_REMOTE_SERVER_UP(
423 self, conn: AbstractConnection, cmd: RemoteServerUpCommand
424 ):
537
538 logger.info("Caught up with stream '%s' to %i", stream_name, cmd.token)
539
540 # We've now caught up to position sent to us, notify handler.
541 await self._replication_data_handler.on_position(
542 cmd.stream_name, cmd.instance_name, cmd.token
543 )
544
545 self._streams_by_connection.setdefault(conn, set()).add(stream_name)
546
547 def on_REMOTE_SERVER_UP(self, conn: AbstractConnection, cmd: RemoteServerUpCommand):
425548 """Called when we get a new REMOTE_SERVER_UP command."""
426549 self._replication_data_handler.on_remote_server_up(cmd.data)
427550
526649 """Ack data for the federation stream. This allows the master to drop
527650 data stored purely in memory.
528651 """
529 self.send_command(FederationAckCommand(token))
652 self.send_command(FederationAckCommand(self._instance_name, token))
530653
531654 def send_user_sync(
532655 self, instance_id: str, user_id: str, is_syncing: bool, last_sync_ms: int
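Editor's note: the handler changes above replace the `Linearizer` with an explicit per-stream queue. Incoming `RDATA`/`POSITION` commands are appended to a deque keyed by stream name, and a single background task per stream drains that queue in order. A simplified asyncio sketch of the same queueing discipline; Synapse itself uses Twisted and `run_as_background_process`, so this is only an illustration of the pattern.

```python
import asyncio
from collections import deque
from typing import Deque, Dict, Set, Tuple

# For each stream, a queue of commands awaiting processing.
_queues: Dict[str, Deque[Tuple[str, int]]] = {"events": deque(), "typing": deque()}
# Streams whose queue is currently being drained.
_processing: Set[str] = set()


async def _drain(stream_name: str) -> None:
    """Process the queue for one stream until it is empty."""
    try:
        queue = _queues[stream_name]
        while queue:
            cmd, token = queue.popleft()
            # Commands for the same stream are handled strictly in order.
            print("handling", cmd, "for", stream_name, "at", token)
            await asyncio.sleep(0)
    finally:
        _processing.discard(stream_name)


def on_command(stream_name: str, cmd: str, token: int) -> None:
    """Queue a received command and start a drainer if none is running."""
    queue = _queues.get(stream_name)
    if queue is None:
        print("unknown stream:", stream_name)
        return
    queue.append((cmd, token))
    # If a drainer is already running for this stream, it will pick up the
    # new entry in due course; otherwise fire one off.
    if stream_name in _processing:
        return
    _processing.add(stream_name)
    asyncio.ensure_future(_drain(stream_name))


async def main() -> None:
    on_command("events", "RDATA", 10)
    on_command("events", "RDATA", 11)
    on_command("typing", "POSITION", 5)
    await asyncio.sleep(0.1)


if __name__ == "__main__":
    asyncio.run(main())
```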
4949 import fcntl
5050 import logging
5151 import struct
52 from inspect import isawaitable
5253 from typing import TYPE_CHECKING, List
5354
5455 from prometheus_client import Counter
5657 from twisted.protocols.basic import LineOnlyReceiver
5758 from twisted.python.failure import Failure
5859
60 from synapse.logging.context import PreserveLoggingContext
5961 from synapse.metrics import LaterGauge
60 from synapse.metrics.background_process_metrics import run_as_background_process
62 from synapse.metrics.background_process_metrics import (
63 BackgroundProcessLoggingContext,
64 run_as_background_process,
65 )
6166 from synapse.replication.tcp.commands import (
6267 VALID_CLIENT_COMMANDS,
6368 VALID_SERVER_COMMANDS,
123128
124129 On receiving a new command it calls `on_<COMMAND_NAME>` with the parsed
125130 command before delegating to `ReplicationCommandHandler.on_<COMMAND_NAME>`.
131 `ReplicationCommandHandler.on_<COMMAND_NAME>` can optionally return a coroutine;
132 if so, that will get run as a background process.
126133
127134 It also sends `PING` periodically, and correctly times out remote connections
128135 (if they send a `PING` command)
158165
159166 # The LoopingCall for sending pings.
160167 self._send_ping_loop = None
168
169 # a logcontext which we use for processing incoming commands. We declare it as a
170 # background process so that the CPU stats get reported to prometheus.
171 ctx_name = "replication-conn-%s" % self.conn_id
172 self._logging_context = BackgroundProcessLoggingContext(ctx_name)
173 self._logging_context.request = ctx_name
161174
162175 def connectionMade(self):
163176 logger.info("[%s] Connection established", self.id())
209222 def lineReceived(self, line: bytes):
210223 """Called when we've received a line
211224 """
225 with PreserveLoggingContext(self._logging_context):
226 self._parse_and_dispatch_line(line)
227
228 def _parse_and_dispatch_line(self, line: bytes):
212229 if line.strip() == "":
213230 # Ignore blank lines
214231 return
231248
232249 tcp_inbound_commands_counter.labels(cmd.NAME, self.name).inc()
233250
234 # Now lets try and call on_<CMD_NAME> function
235 run_as_background_process(
236 "replication-" + cmd.get_logcontext_id(), self.handle_command, cmd
237 )
238
239 async def handle_command(self, cmd: Command):
251 self.handle_command(cmd)
252
253 def handle_command(self, cmd: Command) -> None:
240254 """Handle a command we have received over the replication stream.
241255
242256 First calls `self.on_<COMMAND>` if it exists, then calls
243 `self.command_handler.on_<COMMAND>` if it exists. This allows for
244 protocol level handling of commands (e.g. PINGs), before delegating to
245 the handler.
257 `self.command_handler.on_<COMMAND>` if it exists (which can optionally
258 return an Awaitable).
259
260 This allows for protocol level handling of commands (e.g. PINGs), before
261 delegating to the handler.
246262
247263 Args:
248264 cmd: received command
253269 # specific handling.
254270 cmd_func = getattr(self, "on_%s" % (cmd.NAME,), None)
255271 if cmd_func:
256 await cmd_func(cmd)
272 cmd_func(cmd)
257273 handled = True
258274
259275 # Then call out to the handler.
260276 cmd_func = getattr(self.command_handler, "on_%s" % (cmd.NAME,), None)
261277 if cmd_func:
262 await cmd_func(self, cmd)
278 res = cmd_func(self, cmd)
279
280 # the handler might be a coroutine: fire it off as a background process
281 # if so.
282
283 if isawaitable(res):
284 run_as_background_process(
285 "replication-" + cmd.get_logcontext_id(), lambda: res
286 )
287
263288 handled = True
264289
265290 if not handled:
335360 for cmd in pending:
336361 self.send_command(cmd)
337362
338 async def on_PING(self, line):
363 def on_PING(self, line):
339364 self.received_ping = True
340365
341 async def on_ERROR(self, cmd):
366 def on_ERROR(self, cmd):
342367 logger.error("[%s] Remote reported error: %r", self.id(), cmd.data)
343368
344369 def pauseProducing(self):
395420
396421 if self.transport:
397422 self.transport.unregisterProducer()
423
424 # mark the logging context as finished
425 self._logging_context.__exit__(None, None, None)
398426
399427 def __str__(self):
400428 addr = None
430458 self.send_command(ServerCommand(self.server_name))
431459 super().connectionMade()
432460
433 async def on_NAME(self, cmd):
461 def on_NAME(self, cmd):
434462 logger.info("[%s] Renamed to %r", self.id(), cmd.data)
435463 self.name = cmd.data
436464
459487 # Once we've connected subscribe to the necessary streams
460488 self.replicate()
461489
462 async def on_SERVER(self, cmd):
490 def on_SERVER(self, cmd):
463491 if cmd.data != self.server_name:
464492 logger.error("[%s] Connected to wrong remote: %r", self.id(), cmd.data)
465493 self.send_error("Wrong remote")
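Editor's note: `handle_command` is now synchronous. It calls the protocol-level `on_<COMMAND>` directly, then the shared handler's `on_<COMMAND>`, and only if the latter returns an awaitable does it spin that off as a background process. A minimal sketch of that dispatch pattern, using plain asyncio in place of `run_as_background_process` (which is what Synapse actually uses):

```python
import asyncio
from inspect import isawaitable


class Handler:
    def on_PING(self, conn, cmd):
        # Purely synchronous handler: nothing to await.
        print("ping from", conn)

    def on_USER_IP(self, conn, cmd):
        # Returns a coroutine only when there is async work to do.
        return self._store_user_ip(cmd)

    async def _store_user_ip(self, cmd):
        await asyncio.sleep(0)
        print("stored", cmd)


def handle_command(handler, conn, name, cmd):
    """Dispatch to handler.on_<NAME>; fire off a background task if it is async."""
    cmd_func = getattr(handler, "on_%s" % (name,), None)
    if cmd_func is None:
        print("unhandled command:", name)
        return
    res = cmd_func(conn, cmd)
    if isawaitable(res):
        # The handler returned a coroutine: run it without blocking the caller.
        asyncio.ensure_future(res)


async def main():
    h = Handler()
    handle_command(h, "conn-1", "PING", "PING")
    handle_command(h, "conn-1", "USER_IP", {"user_id": "@a:example.com"})
    await asyncio.sleep(0.1)


if __name__ == "__main__":
    asyncio.run(main())
```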
1313 # limitations under the License.
1414
1515 import logging
16 from inspect import isawaitable
1617 from typing import TYPE_CHECKING
1718
1819 import txredisapi
1920
20 from synapse.logging.context import make_deferred_yieldable
21 from synapse.metrics.background_process_metrics import run_as_background_process
21 from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
22 from synapse.metrics.background_process_metrics import (
23 BackgroundProcessLoggingContext,
24 run_as_background_process,
25 )
2226 from synapse.replication.tcp.commands import (
2327 Command,
2428 ReplicateCommand,
6468 handler = None # type: ReplicationCommandHandler
6569 stream_name = None # type: str
6670 outbound_redis_connection = None # type: txredisapi.RedisProtocol
71
72 def __init__(self, *args, **kwargs):
73 super().__init__(*args, **kwargs)
74
75 # a logcontext which we use for processing incoming commands. We declare it as a
76 # background process so that the CPU stats get reported to prometheus.
77 self._logging_context = BackgroundProcessLoggingContext(
78 "replication_command_handler"
79 )
6780
6881 def connectionMade(self):
6982 logger.info("Connected to redis")
91104 def messageReceived(self, pattern: str, channel: str, message: str):
92105 """Received a message from redis.
93106 """
94
107 with PreserveLoggingContext(self._logging_context):
108 self._parse_and_dispatch_message(message)
109
110 def _parse_and_dispatch_message(self, message: str):
95111 if message.strip() == "":
96112 # Ignore blank lines
97113 return
108124 # remote instances.
109125 tcp_inbound_commands_counter.labels(cmd.NAME, "redis").inc()
110126
111 # Now lets try and call on_<CMD_NAME> function
112 run_as_background_process(
113 "replication-" + cmd.get_logcontext_id(), self.handle_command, cmd
114 )
115
116 async def handle_command(self, cmd: Command):
127 self.handle_command(cmd)
128
129 def handle_command(self, cmd: Command) -> None:
117130 """Handle a command we have received over the replication stream.
118131
119 By default delegates to on_<COMMAND>, which should return an awaitable.
132 Delegates to `self.handler.on_<COMMAND>` (which can optionally return an
133 Awaitable).
120134
121135 Args:
122136 cmd: received command
123137 """
124 handled = False
125
126 # First call any command handlers on this instance. These are for redis
127 # specific handling.
128 cmd_func = getattr(self, "on_%s" % (cmd.NAME,), None)
129 if cmd_func:
130 await cmd_func(cmd)
131 handled = True
132
133 # Then call out to the handler.
138
134139 cmd_func = getattr(self.handler, "on_%s" % (cmd.NAME,), None)
135 if cmd_func:
136 await cmd_func(self, cmd)
137 handled = True
138
139 if not handled:
140 if not cmd_func:
140141 logger.warning("Unhandled command: %r", cmd)
142 return
143
144 res = cmd_func(self, cmd)
145
146 # the handler might be a coroutine: fire it off as a background process
147 # if so.
148
149 if isawaitable(res):
150 run_as_background_process(
151 "replication-" + cmd.get_logcontext_id(), lambda: res
152 )
141153
142154 def connectionLost(self, reason):
143155 logger.info("Lost connection to redis")
144156 super().connectionLost(reason)
145157 self.handler.lost_connection(self)
158
159 # mark the logging context as finished
160 self._logging_context.__exit__(None, None, None)
146161
147162 def send_command(self, cmd: Command):
148163 """Send a command if connection has been established.
293293 def __init__(self, hs):
294294 typing_handler = hs.get_typing_handler()
295295
296 if hs.config.worker_app is None:
297 # on the master, query the typing handler
296 writer_instance = hs.config.worker.writers.typing
297 if writer_instance == hs.get_instance_name():
298 # On the writer, query the typing handler
298299 update_function = typing_handler.get_all_typing_updates
299300 else:
300 # Query master process
301 # Query the typing writer process
301302 update_function = make_http_update_function(hs, self.NAME)
302303
303304 super().__init__(
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515 import heapq
16 from collections import Iterable
16 from collections.abc import Iterable
1717 from typing import List, Tuple, Type
1818
1919 import attr
0 .header {
1 border-bottom: 4px solid #e4f7ed ! important;
2 }
3
4 .notif_link a, .footer a {
5 color: #76CFA6 ! important;
6 }
2121 <img src="http://riot.im/img/external/riot-logo-email.png" width="83" height="83" alt="[Riot]"/>
2222 {% elif app_name == "Vector" %}
2323 <img src="http://matrix.org/img/vector-logo-email.png" width="64" height="83" alt="[Vector]"/>
24 {% elif app_name == "Element" %}
25 <img src="https://static.element.io/images/email-logo.png" width="83" height="83" alt="[Element]"/>
2426 {% else %}
2527 <img src="http://matrix.org/img/matrix-120x51.png" width="120" height="51" alt="[matrix]"/>
2628 {% endif %}
2121 <img src="http://riot.im/img/external/riot-logo-email.png" width="83" height="83" alt="[Riot]"/>
2222 {% elif app_name == "Vector" %}
2323 <img src="http://matrix.org/img/vector-logo-email.png" width="64" height="83" alt="[Vector]"/>
24 {% elif app_name == "Element" %}
25 <img src="https://static.element.io/images/email-logo.png" width="83" height="83" alt="[Element]"/>
2426 {% else %}
2527 <img src="http://matrix.org/img/matrix-120x51.png" width="120" height="51" alt="[matrix]"/>
2628 {% endif %}
3434 from synapse.rest.admin.media import ListMediaInRoom, register_servlets_for_media_repo
3535 from synapse.rest.admin.purge_room_servlet import PurgeRoomServlet
3636 from synapse.rest.admin.rooms import (
37 DeleteRoomRestServlet,
3738 JoinRoomAliasServlet,
3839 ListRoomRestServlet,
40 RoomMembersRestServlet,
3941 RoomRestServlet,
4042 ShutdownRoomRestServlet,
4143 )
199201 register_servlets_for_client_rest_resource(hs, http_server)
200202 ListRoomRestServlet(hs).register(http_server)
201203 RoomRestServlet(hs).register(http_server)
204 RoomMembersRestServlet(hs).register(http_server)
205 DeleteRoomRestServlet(hs).register(http_server)
202206 JoinRoomAliasServlet(hs).register(http_server)
203207 PurgeRoomServlet(hs).register(http_server)
204208 SendServerNoticeServlet(hs).register(http_server)
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15 from http import HTTPStatus
1516 from typing import List, Optional
1617
17 from synapse.api.constants import EventTypes, JoinRules, Membership, RoomCreationPreset
18 from synapse.api.constants import EventTypes, JoinRules
1819 from synapse.api.errors import Codes, NotFoundError, SynapseError
1920 from synapse.http.servlet import (
2021 RestServlet,
3132 )
3233 from synapse.storage.data_stores.main.room import RoomSortOrder
3334 from synapse.types import RoomAlias, RoomID, UserID, create_requester
34 from synapse.util.async_helpers import maybe_awaitable
3535
3636 logger = logging.getLogger(__name__)
3737
4545
4646 PATTERNS = historical_admin_path_patterns("/shutdown_room/(?P<room_id>[^/]+)")
4747
48 DEFAULT_MESSAGE = (
49 "Sharing illegal content on this server is not permitted and rooms in"
50 " violation will be blocked."
51 )
52
53 def __init__(self, hs):
54 self.hs = hs
55 self.store = hs.get_datastore()
56 self.state = hs.get_state_handler()
57 self._room_creation_handler = hs.get_room_creation_handler()
58 self.event_creation_handler = hs.get_event_creation_handler()
59 self.room_member_handler = hs.get_room_member_handler()
60 self.auth = hs.get_auth()
61 self._replication = hs.get_replication_data_handler()
48 def __init__(self, hs):
49 self.hs = hs
50 self.auth = hs.get_auth()
51 self.room_shutdown_handler = hs.get_room_shutdown_handler()
6252
6353 async def on_POST(self, request, room_id):
6454 requester = await self.auth.get_user_by_req(request)
6656
6757 content = parse_json_object_from_request(request)
6858 assert_params_in_dict(content, ["new_room_user_id"])
69 new_room_user_id = content["new_room_user_id"]
70
71 room_creator_requester = create_requester(new_room_user_id)
72
73 message = content.get("message", self.DEFAULT_MESSAGE)
74 room_name = content.get("room_name", "Content Violation Notification")
75
76 info, stream_id = await self._room_creation_handler.create_room(
77 room_creator_requester,
78 config={
79 "preset": RoomCreationPreset.PUBLIC_CHAT,
80 "name": room_name,
81 "power_level_content_override": {"users_default": -10},
82 },
83 ratelimit=False,
59
60 ret = await self.room_shutdown_handler.shutdown_room(
61 room_id=room_id,
62 new_room_user_id=content["new_room_user_id"],
63 new_room_name=content.get("room_name"),
64 message=content.get("message"),
65 requester_user_id=requester.user.to_string(),
66 block=True,
8467 )
85 new_room_id = info["room_id"]
86
87 requester_user_id = requester.user.to_string()
88
89 logger.info(
90 "Shutting down room %r, joining to new room: %r", room_id, new_room_id
68
69 return (200, ret)
70
71
72 class DeleteRoomRestServlet(RestServlet):
73 """Delete a room from the server. It combines and improves on the
74 shutdown and purge room APIs.
75 Shuts down a room by removing all local users from the room.
76 Blocking all future invites and joins to the room is optional.
77 If desired, any local aliases will be repointed to a new room
78 created by `new_room_user_id`, and kicked users will be automatically
79 joined to the new room.
80 It will remove all traces of the room from the database.
81 """
82
83 PATTERNS = admin_patterns("/rooms/(?P<room_id>[^/]+)/delete$")
84
85 def __init__(self, hs):
86 self.hs = hs
87 self.auth = hs.get_auth()
88 self.room_shutdown_handler = hs.get_room_shutdown_handler()
89 self.pagination_handler = hs.get_pagination_handler()
90
91 async def on_POST(self, request, room_id):
92 requester = await self.auth.get_user_by_req(request)
93 await assert_user_is_admin(self.auth, requester.user)
94
95 content = parse_json_object_from_request(request)
96
97 block = content.get("block", False)
98 if not isinstance(block, bool):
99 raise SynapseError(
100 HTTPStatus.BAD_REQUEST,
101 "Param 'block' must be a boolean, if given",
102 Codes.BAD_JSON,
103 )
104
105 ret = await self.room_shutdown_handler.shutdown_room(
106 room_id=room_id,
107 new_room_user_id=content.get("new_room_user_id"),
108 new_room_name=content.get("room_name"),
109 message=content.get("message"),
110 requester_user_id=requester.user.to_string(),
111 block=block,
91112 )
92113
93 # This will work even if the room is already blocked, but that is
94 # desirable in case the first attempt at blocking the room failed below.
95 await self.store.block_room(room_id, requester_user_id)
96
97 # We now wait for the create room to come back in via replication so
98 # that we can assume that all the joins/invites have propagated before
99 # we try and auto join below.
100 #
101 # TODO: Currently the events stream is written to from master
102 await self._replication.wait_for_stream_position(
103 self.hs.config.worker.writers.events, "events", stream_id
104 )
105
106 users = await self.state.get_current_users_in_room(room_id)
107 kicked_users = []
108 failed_to_kick_users = []
109 for user_id in users:
110 if not self.hs.is_mine_id(user_id):
111 continue
112
113 logger.info("Kicking %r from %r...", user_id, room_id)
114
115 try:
116 target_requester = create_requester(user_id)
117 _, stream_id = await self.room_member_handler.update_membership(
118 requester=target_requester,
119 target=target_requester.user,
120 room_id=room_id,
121 action=Membership.LEAVE,
122 content={},
123 ratelimit=False,
124 require_consent=False,
125 )
126
127 # Wait for leave to come in over replication before trying to forget.
128 await self._replication.wait_for_stream_position(
129 self.hs.config.worker.writers.events, "events", stream_id
130 )
131
132 await self.room_member_handler.forget(target_requester.user, room_id)
133
134 await self.room_member_handler.update_membership(
135 requester=target_requester,
136 target=target_requester.user,
137 room_id=new_room_id,
138 action=Membership.JOIN,
139 content={},
140 ratelimit=False,
141 require_consent=False,
142 )
143
144 kicked_users.append(user_id)
145 except Exception:
146 logger.exception(
147 "Failed to leave old room and join new room for %r", user_id
148 )
149 failed_to_kick_users.append(user_id)
150
151 await self.event_creation_handler.create_and_send_nonmember_event(
152 room_creator_requester,
153 {
154 "type": "m.room.message",
155 "content": {"body": message, "msgtype": "m.text"},
156 "room_id": new_room_id,
157 "sender": new_room_user_id,
158 },
159 ratelimit=False,
160 )
161
162 aliases_for_room = await maybe_awaitable(
163 self.store.get_aliases_for_room(room_id)
164 )
165
166 await self.store.update_aliases_for_room(
167 room_id, new_room_id, requester_user_id
168 )
169
170 return (
171 200,
172 {
173 "kicked_users": kicked_users,
174 "failed_to_kick_users": failed_to_kick_users,
175 "local_aliases": aliases_for_room,
176 "new_room_id": new_room_id,
177 },
178 )
114 # Purge room
115 await self.pagination_handler.purge_room(room_id)
116
117 return (200, ret)
179118
180119
181120 class ListRoomRestServlet(RestServlet):
291230 return 200, ret
292231
293232
233 class RoomMembersRestServlet(RestServlet):
234 """
235 Get the list of members of a room.
236 """
237
238 PATTERNS = admin_patterns("/rooms/(?P<room_id>[^/]+)/members")
239
240 def __init__(self, hs):
241 self.hs = hs
242 self.auth = hs.get_auth()
243 self.store = hs.get_datastore()
244
245 async def on_GET(self, request, room_id):
246 await assert_requester_is_admin(self.auth, request)
247
248 ret = await self.store.get_room(room_id)
249 if not ret:
250 raise NotFoundError("Room not found")
251
252 members = await self.store.get_users_in_room(room_id)
253 ret = {"members": members, "total": len(members)}
254
255 return 200, ret
256
257
294258 class JoinRoomAliasServlet(RestServlet):
295259
296260 PATTERNS = admin_patterns("/join/(?P<room_identifier>[^/]*)")
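Editor's note: together with the registration hunk earlier, these servlets expose `GET /_synapse/admin/v1/rooms/<room_id>/members` and `POST /_synapse/admin/v1/rooms/<room_id>/delete`, the latter accepting the optional `block`, `new_room_user_id`, `room_name` and `message` fields shown above. A hedged usage sketch with `requests`; the homeserver URL, room ID and admin access token are placeholders, and the Synapse admin API docs remain the authoritative reference.

```python
import requests

BASE = "https://synapse.example.com"          # placeholder homeserver URL
ROOM_ID = "!abcdef:example.com"               # placeholder room ID
HEADERS = {"Authorization": "Bearer <admin_access_token>"}  # placeholder token

# List the members of a room.
resp = requests.get(
    f"{BASE}/_synapse/admin/v1/rooms/{ROOM_ID}/members", headers=HEADERS
)
print(resp.json())  # {"members": [...], "total": N}

# Shut down, block and purge the room, moving local users to a new room.
resp = requests.post(
    f"{BASE}/_synapse/admin/v1/rooms/{ROOM_ID}/delete",
    headers=HEADERS,
    json={
        "block": True,
        "new_room_user_id": "@admin:example.com",
        "room_name": "Content Violation Notification",
        "message": "This room has been shut down.",
    },
)
print(resp.json())
```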
238238 await self.deactivate_account_handler.deactivate_account(
239239 target_user.to_string(), False
240240 )
241 elif not deactivate and user["deactivated"]:
242 if "password" not in body:
243 raise SynapseError(
244 400, "Must provide a password to re-activate an account."
245 )
246
247 await self.deactivate_account_handler.activate_account(
248 target_user.to_string()
249 )
241250
242251 user = await self.admin_handler.get_user(target_user)
243252 return 200, user
253262 admin = body.get("admin", None)
254263 user_type = body.get("user_type", None)
255264 displayname = body.get("displayname", None)
256 threepids = body.get("threepids", None)
257265
258266 if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
259267 raise SynapseError(400, "Invalid user type")
8888 def __init__(self, hs):
8989 super(LoginRestServlet, self).__init__()
9090 self.hs = hs
91
92 # JWT configuration variables.
9193 self.jwt_enabled = hs.config.jwt_enabled
9294 self.jwt_secret = hs.config.jwt_secret
9395 self.jwt_algorithm = hs.config.jwt_algorithm
96 self.jwt_issuer = hs.config.jwt_issuer
97 self.jwt_audiences = hs.config.jwt_audiences
98
99 # SSO configuration.
94100 self.saml2_enabled = hs.config.saml2_enabled
95101 self.cas_enabled = hs.config.cas_enabled
96102 self.oidc_enabled = hs.config.oidc_enabled
103
97104 self.auth_handler = self.hs.get_auth_handler()
98105 self.registration_handler = hs.get_registration_handler()
99106 self.handlers = hs.get_handlers()
363370 token = login_submission.get("token", None)
364371 if token is None:
365372 raise LoginError(
366 401, "Token field for JWT is missing", errcode=Codes.UNAUTHORIZED
373 403, "Token field for JWT is missing", errcode=Codes.FORBIDDEN
367374 )
368375
369376 import jwt
370 from jwt.exceptions import InvalidTokenError
371377
372378 try:
373379 payload = jwt.decode(
374 token, self.jwt_secret, algorithms=[self.jwt_algorithm]
380 token,
381 self.jwt_secret,
382 algorithms=[self.jwt_algorithm],
383 issuer=self.jwt_issuer,
384 audience=self.jwt_audiences,
375385 )
376 except jwt.ExpiredSignatureError:
377 raise LoginError(401, "JWT expired", errcode=Codes.UNAUTHORIZED)
378 except InvalidTokenError:
379 raise LoginError(401, "Invalid JWT", errcode=Codes.UNAUTHORIZED)
386 except jwt.PyJWTError as e:
387 # A JWT error occurred, return some info back to the client.
388 raise LoginError(
389 403, "JWT validation failed: %s" % (str(e),), errcode=Codes.FORBIDDEN,
390 )
380391
381392 user = payload.get("sub", None)
382393 if user is None:
383 raise LoginError(401, "Invalid JWT", errcode=Codes.UNAUTHORIZED)
394 raise LoginError(403, "Invalid JWT", errcode=Codes.FORBIDDEN)
384395
385396 user_id = UserID(user, self.hs.hostname).to_string()
386397 result = await self._complete_login(
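Editor's note: JWT logins can now validate the `iss` and `aud` claims: `jwt.decode` is passed `issuer=` and `audience=`, and any `PyJWTError` is converted into a 403 `LoginError`. A standalone PyJWT sketch of what that validation accepts and rejects; the secret, issuer and audience values are made up.

```python
import jwt
from jwt.exceptions import PyJWTError

SECRET = "shared-secret"               # placeholder JWT secret
ISSUER = "https://issuer.example.com"  # placeholder expected 'iss'
AUDIENCE = "synapse"                   # placeholder expected 'aud'

# A token with matching claims validates...
good = jwt.encode(
    {"sub": "alice", "iss": ISSUER, "aud": AUDIENCE}, SECRET, algorithm="HS256"
)
payload = jwt.decode(
    good, SECRET, algorithms=["HS256"], issuer=ISSUER, audience=AUDIENCE
)
print("accepted subject:", payload["sub"])

# ...while a wrong audience raises a PyJWTError, which the login servlet
# turns into a 403 "JWT validation failed" response.
bad = jwt.encode(
    {"sub": "alice", "iss": ISSUER, "aud": "other"}, SECRET, algorithm="HS256"
)
try:
    jwt.decode(bad, SECRET, algorithms=["HS256"], issuer=ISSUER, audience=AUDIENCE)
except PyJWTError as e:
    print("rejected:", e)
```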
1414 # limitations under the License.
1515
1616 """ This module contains REST servlets to do with rooms: /rooms/<paths> """
17
1718 import logging
1819 import re
1920 from typing import List, Optional
514515 requester = await self.auth.get_user_by_req(request, allow_guest=True)
515516 pagination_config = PaginationConfig.from_request(request, default_limit=10)
516517 as_client_event = b"raw" not in request.args
517 filter_bytes = parse_string(request, b"filter", encoding=None)
518 if filter_bytes:
519 filter_json = urlparse.unquote(filter_bytes.decode("UTF-8"))
518 filter_str = parse_string(request, b"filter", encoding="utf-8")
519 if filter_str:
520 filter_json = urlparse.unquote(filter_str)
520521 event_filter = Filter(json.loads(filter_json)) # type: Optional[Filter]
521522 if (
522523 event_filter
626627 limit = parse_integer(request, "limit", default=10)
627628
628629 # picking the API shape for symmetry with /messages
629 filter_bytes = parse_string(request, "filter")
630 if filter_bytes:
631 filter_json = urlparse.unquote(filter_bytes)
630 filter_str = parse_string(request, b"filter", encoding="utf-8")
631 if filter_str:
632 filter_json = urlparse.unquote(filter_str)
632633 event_filter = Filter(json.loads(filter_json)) # type: Optional[Filter]
633634 else:
634635 event_filter = None
815816 self.typing_handler = hs.get_typing_handler()
816817 self.auth = hs.get_auth()
817818
819 # If we're not on the typing writer instance we should scream if we get
820 # requests.
821 self._is_typing_writer = (
822 hs.config.worker.writers.typing == hs.get_instance_name()
823 )
824
818825 async def on_PUT(self, request, room_id, user_id):
819826 requester = await self.auth.get_user_by_req(request)
827
828 if not self._is_typing_writer:
829 raise Exception("Got /typing request on instance that is not typing writer")
820830
821831 room_id = urlparse.unquote(room_id)
822832 target_user = UserID.from_string(urlparse.unquote(user_id))
1616 """
1717 import logging
1818 import re
19
20 from twisted.internet import defer
19 from typing import Iterable, Pattern
2120
2221 from synapse.api.errors import InteractiveAuthIncompleteError
2322 from synapse.api.urls import CLIENT_API_PREFIX
23 from synapse.types import JsonDict
2424
2525 logger = logging.getLogger(__name__)
2626
2727
28 def client_patterns(path_regex, releases=(0,), unstable=True, v1=False):
28 def client_patterns(
29 path_regex: str,
30 releases: Iterable[int] = (0,),
31 unstable: bool = True,
32 v1: bool = False,
33 ) -> Iterable[Pattern]:
2934 """Creates a regex compiled client path with the correct client path
3035 prefix.
3136
3237 Args:
33 path_regex (str): The regex string to match. This should NOT have a ^
38 path_regex: The regex string to match. This should NOT have a ^
3439 as this will be prefixed.
40 releases: An iterable of releases to include this endpoint under.
41 unstable: If true, include this endpoint under the "unstable" prefix.
42 v1: If true, include this endpoint under the "api/v1" prefix.
3543 Returns:
36 SRE_Pattern
44 An iterable of patterns.
3745 """
3846 patterns = []
3947
5058 return patterns
5159
5260
53 def set_timeline_upper_limit(filter_json, filter_timeline_limit):
61 def set_timeline_upper_limit(filter_json: JsonDict, filter_timeline_limit: int) -> None:
62 """
63 Enforces a maximum limit on a timeline query.
64
65 Args:
66 filter_json: The timeline query to modify.
67 filter_timeline_limit: The maximum limit to allow, passing -1 will
68 disable enforcing a maximum limit.
69 """
5470 if filter_timeline_limit < 0:
5571 return # no upper limits
5672 timeline = filter_json.get("room", {}).get("timeline", {})
6379 def interactive_auth_handler(orig):
6480 """Wraps an on_POST method to handle InteractiveAuthIncompleteErrors
6581
66 Takes a on_POST method which returns a deferred (errcode, body) response
82 Takes an on_POST method which returns an Awaitable (errcode, body) response
6783 and adds exception handling to turn a InteractiveAuthIncompleteError into
6884 a 401 response.
6985
7086 Normal usage is:
7187
7288 @interactive_auth_handler
73 @defer.inlineCallbacks
74 def on_POST(self, request):
89 async def on_POST(self, request):
7590 # ...
76 yield self.auth_handler.check_auth
77 """
91 await self.auth_handler.check_auth
92 """
7893
79 def wrapped(*args, **kwargs):
80 res = defer.ensureDeferred(orig(*args, **kwargs))
81 res.addErrback(_catch_incomplete_interactive_auth)
82 return res
94 async def wrapped(*args, **kwargs):
95 try:
96 return await orig(*args, **kwargs)
97 except InteractiveAuthIncompleteError as e:
98 return 401, e.result
8399
84100 return wrapped
85
86
87 def _catch_incomplete_interactive_auth(f):
88 """helper for interactive_auth_handler
89
90 Catches InteractiveAuthIncompleteErrors and turns them into 401 responses
91
92 Args:
93 f (failure.Failure):
94 """
95 f.trap(InteractiveAuthIncompleteError)
96 return 401, f.value.result
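The decorator is now a plain async wrapper: the separate errback helper is gone and the 401 translation happens inline. A self-contained sketch of the same shape (the error class here is a stand-in for `synapse.api.errors.InteractiveAuthIncompleteError`):

```python
import asyncio

class InteractiveAuthIncompleteError(Exception):
    """Stand-in carrying the partial-auth result, as the real error does."""
    def __init__(self, result):
        super().__init__("auth incomplete")
        self.result = result

def interactive_auth_handler(orig):
    async def wrapped(*args, **kwargs):
        try:
            return await orig(*args, **kwargs)
        except InteractiveAuthIncompleteError as e:
            return 401, e.result
    return wrapped

@interactive_auth_handler
async def on_POST(request):
    # Pretend the user still has auth stages to complete.
    raise InteractiveAuthIncompleteError({"flows": [{"stages": ["m.login.password"]}]})

print(asyncio.run(on_POST(None)))  # (401, {'flows': [{'stages': ['m.login.password']}]})
```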
177177 full_state=full_state,
178178 )
179179
180 # the client may have disconnected by now; don't bother to serialize the
181 # response if so.
182 if request._disconnected:
183 logger.info("Client has disconnected; not serializing response.")
184 return 200, {}
185
180186 time_now = self.clock.time_msec()
181187 response_content = await self.encode_response(
182188 time_now, sync_result, requester.access_token_id, filter_collection
183189 )
184190
191 logger.debug("Event formatting complete")
185192 return 200, response_content
186193
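The early return above skips response serialisation when the client has already disconnected. The essence of that guard, sketched with a fake request object (`_disconnected` is the attribute the diff checks on the Twisted request; the response value is invented):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def respond_with_sync(request, build_response):
    """Skip the expensive serialisation step if the client has gone away."""
    if getattr(request, "_disconnected", False):
        logger.info("Client has disconnected; not serializing response.")
        return 200, {}
    return 200, await build_response()

class FakeRequest:
    _disconnected = True

async def build_response():
    return {"next_batch": "s1"}

print(asyncio.run(respond_with_sync(FakeRequest(), build_response)))  # (200, {})
```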
187194 async def encode_response(self, time_now, sync_result, access_token_id, filter):
195 logger.debug("Formatting events in sync response")
188196 if filter.event_format == "client":
189197 event_formatter = format_event_for_client_v2_without_room_id
190198 elif filter.event_format == "federation":
212220 event_formatter,
213221 )
214222
223 logger.debug("building sync response dict")
215224 return {
216225 "account_data": {"events": sync_result.account_data},
217226 "to_device": {"events": sync_result.to_device},
201201
202202 if miss:
203203 cache_misses.setdefault(server_name, set()).add(key_id)
204 # Cast to bytes since postgresql returns a memoryview.
204205 json_results.add(bytes(most_recent_result["key_json"]))
205206 else:
206207 for ts_added, result in results:
208 # Cast to bytes since postgresql returns a memoryview.
207209 json_results.add(bytes(result["key_json"]))
208210
209211 if cache_misses and query_remote_on_cache_miss:
212214 else:
213215 signed_keys = []
214216 for key_json in json_results:
215 key_json = json.loads(key_json)
217 key_json = json.loads(key_json.decode("utf-8"))
216218 for signing_key in self.config.key_server_signing_keys:
217219 key_json = sign_json(key_json, self.config.server_name, signing_key)
218220
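The new comments note that PostgreSQL (via psycopg2) hands `bytea` columns back as `memoryview` objects, so the values are cast to `bytes` and decoded before `json.loads`. A tiny demonstration of why both steps matter:

```python
import json

raw = memoryview(b'{"server_name": "example.org"}')  # what the DB driver returns
as_bytes = bytes(raw)                                 # the cast added above
parsed = json.loads(as_bytes.decode("utf-8"))         # decode; py3.5 json can't take bytes
print(parsed["server_name"])                          # -> example.org
```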
1717 import os
1818 import urllib
1919
20 from twisted.internet import defer
2120 from twisted.protocols.basic import FileSender
2221
2322 from synapse.api.errors import Codes, SynapseError, cs_error
7675 )
7776
7877
79 @defer.inlineCallbacks
80 def respond_with_file(request, media_type, file_path, file_size=None, upload_name=None):
78 async def respond_with_file(
79 request, media_type, file_path, file_size=None, upload_name=None
80 ):
8181 logger.debug("Responding with %r", file_path)
8282
8383 if os.path.isfile(file_path):
8888 add_file_headers(request, media_type, file_size, upload_name)
8989
9090 with open(file_path, "rb") as f:
91 yield make_deferred_yieldable(FileSender().beginFileTransfer(f, request))
91 await make_deferred_yieldable(FileSender().beginFileTransfer(f, request))
9292
9393 finish_request(request)
9494 else:
197197 return True
198198
199199
200 @defer.inlineCallbacks
201 def respond_with_responder(request, responder, media_type, file_size, upload_name=None):
200 async def respond_with_responder(
201 request, responder, media_type, file_size, upload_name=None
202 ):
202203 """Responds to the request with given responder. If responder is None then
203204 returns 404.
204205
217218 add_file_headers(request, media_type, file_size, upload_name)
218219 try:
219220 with responder:
220 yield responder.write_to_consumer(request)
221 await responder.write_to_consumer(request)
221222 except Exception as e:
222223 # The majority of the time this will be due to the client having gone
223224 # away. Unfortunately, Twisted simply throws a generic exception at us
1313 # limitations under the License.
1414
1515 import contextlib
16 import inspect
1617 import logging
1718 import os
1819 import shutil
19
20 from twisted.internet import defer
20 from typing import Optional
21
2122 from twisted.protocols.basic import FileSender
2223
2324 from synapse.logging.context import defer_to_thread, make_deferred_yieldable
2425 from synapse.util.file_consumer import BackgroundFileConsumer
2526
26 from ._base import Responder
27 from ._base import FileInfo, Responder
2728
2829 logger = logging.getLogger(__name__)
2930
4546 self.filepaths = filepaths
4647 self.storage_providers = storage_providers
4748
48 @defer.inlineCallbacks
49 def store_file(self, source, file_info):
49 async def store_file(self, source, file_info: FileInfo) -> str:
5050 """Write `source` to the on disk media store, and also any other
5151 configured storage providers
5252
5353 Args:
5454 source: A file like object that should be written
55 file_info (FileInfo): Info about the file to store
56
57 Returns:
58 Deferred[str]: the file path written to in the primary media store
55 file_info: Info about the file to store
56
57 Returns:
58 the file path written to in the primary media store
5959 """
6060
6161 with self.store_into_file(file_info) as (f, fname, finish_cb):
6262 # Write to the main repository
63 yield defer_to_thread(
63 await defer_to_thread(
6464 self.hs.get_reactor(), _write_file_synchronously, source, f
6565 )
66 yield finish_cb()
66 await finish_cb()
6767
6868 return fname
6969
7474
7575 Actually yields a 3-tuple (file, fname, finish_cb), where file is a file
7676 like object that can be written to, fname is the absolute path of file
77 on disk, and finish_cb is a function that returns a Deferred.
77 on disk, and finish_cb is a function that returns an awaitable.
7878
7979 fname can be used to read the contents from after upload, e.g. to
8080 generate thumbnails.
9090
9191 with media_storage.store_into_file(info) as (f, fname, finish_cb):
9292 # .. write into f ...
93 yield finish_cb()
93 await finish_cb()
9494 """
9595
9696 path = self._file_info_to_path(file_info)
102102
103103 finished_called = [False]
104104
105 @defer.inlineCallbacks
106 def finish():
105 async def finish():
107106 for provider in self.storage_providers:
108 yield provider.store_file(path, file_info)
107 # store_file is supposed to return an Awaitable, but guard
108 # against improper implementations.
109 result = provider.store_file(path, file_info)
110 if inspect.isawaitable(result):
111 await result
109112
110113 finished_called[0] = True
111114
122125 if not finished_called:
123126 raise Exception("Finished callback not called")
124127
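`finish()` now calls each storage provider and only awaits the result when it is actually awaitable, so third-party providers that are still synchronous keep working. A runnable sketch of that guard (the provider classes are invented for the example):

```python
import asyncio
import inspect

class LegacyProvider:
    def store_file(self, path, file_info):
        print("legacy provider stored", path)   # synchronous, returns None

class AsyncProvider:
    async def store_file(self, path, file_info):
        print("async provider stored", path)

async def finish(path, file_info, providers):
    for provider in providers:
        result = provider.store_file(path, file_info)
        # Only await when the provider actually returned an awaitable.
        if inspect.isawaitable(result):
            await result

asyncio.run(finish("local_content/ab/cd/ef", None, [LegacyProvider(), AsyncProvider()]))
```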
125 @defer.inlineCallbacks
126 def fetch_media(self, file_info):
128 async def fetch_media(self, file_info: FileInfo) -> Optional[Responder]:
127129 """Attempts to fetch media described by file_info from the local cache
128130 and configured storage providers.
129131
130132 Args:
131 file_info (FileInfo)
132
133 Returns:
134 Deferred[Responder|None]: Returns a Responder if the file was found,
135 otherwise None.
133 file_info
134
135 Returns:
136 Returns a Responder if the file was found, otherwise None.
136137 """
137138
138139 path = self._file_info_to_path(file_info)
141142 return FileResponder(open(local_path, "rb"))
142143
143144 for provider in self.storage_providers:
144 res = yield provider.fetch(path, file_info)
145 res = provider.fetch(path, file_info)
146 # Fetch is supposed to return an Awaitable, but guard against
147 # improper implementations.
148 if inspect.isawaitable(res):
149 res = await res
145150 if res:
146151 logger.debug("Streaming %s from %s", path, provider)
147152 return res
148153
149154 return None
150155
151 @defer.inlineCallbacks
152 def ensure_media_is_in_local_cache(self, file_info):
156 async def ensure_media_is_in_local_cache(self, file_info: FileInfo) -> str:
153157 """Ensures that the given file is in the local cache. Attempts to
154158 download it from storage providers if it isn't.
155159
156160 Args:
157 file_info (FileInfo)
158
159 Returns:
160 Deferred[str]: Full path to local file
161 file_info
162
163 Returns:
164 Full path to local file
161165 """
162166 path = self._file_info_to_path(file_info)
163167 local_path = os.path.join(self.local_media_directory, path)
169173 os.makedirs(dirname)
170174
171175 for provider in self.storage_providers:
172 res = yield provider.fetch(path, file_info)
176 res = provider.fetch(path, file_info)
177 # Fetch is supposed to return an Awaitable, but guard against
178 # improper implementations.
179 if inspect.isawaitable(res):
180 res = await res
173181 if res:
174182 with res:
175183 consumer = BackgroundFileConsumer(
176184 open(local_path, "wb"), self.hs.get_reactor()
177185 )
178 yield res.write_to_consumer(consumer)
179 yield consumer.wait()
186 await res.write_to_consumer(consumer)
187 await consumer.wait()
180188 return local_path
181189
182190 raise Exception("file could not be found")
2525 from typing import Dict, Optional
2626 from urllib import parse as urlparse
2727
28 import attr
2829 from canonicaljson import json
2930
3031 from twisted.internet import defer
5455
5556 OG_TAG_NAME_MAXLEN = 50
5657 OG_TAG_VALUE_MAXLEN = 1000
58
59 ONE_HOUR = 60 * 60 * 1000
60
61 # A map of globs to API endpoints.
62 _oembed_globs = {
63 # Twitter.
64 "https://publish.twitter.com/oembed": [
65 "https://twitter.com/*/status/*",
66 "https://*.twitter.com/*/status/*",
67 "https://twitter.com/*/moments/*",
68 "https://*.twitter.com/*/moments/*",
69 # Include the HTTP versions too.
70 "http://twitter.com/*/status/*",
71 "http://*.twitter.com/*/status/*",
72 "http://twitter.com/*/moments/*",
73 "http://*.twitter.com/*/moments/*",
74 ],
75 }
76 # Convert the globs to regular expressions.
77 _oembed_patterns = {}
78 for endpoint, globs in _oembed_globs.items():
79 for glob in globs:
80 # Convert the glob into a sane regular expression to match against. The
81 # rules followed will be slightly different for the domain portion vs.
82 # the rest.
83 #
84 # 1. The scheme must be one of HTTP / HTTPS (and have no globs).
85 # 2. The domain can have globs, but we limit it to characters that can
86 # reasonably be a domain part.
87 # TODO: This does not attempt to handle Unicode domain names.
88 # 3. Other parts allow a glob to be any one, or more, characters.
89 results = urlparse.urlparse(glob)
90
91 # Ensure the scheme does not have wildcards (and is a sane scheme).
92 if results.scheme not in {"http", "https"}:
93 raise ValueError("Insecure oEmbed glob scheme: %s" % (results.scheme,))
94
95 pattern = urlparse.urlunparse(
96 [
97 results.scheme,
98 re.escape(results.netloc).replace("\\*", "[a-zA-Z0-9_-]+"),
99 ]
100 + [re.escape(part).replace("\\*", ".+") for part in results[2:]]
101 )
102 _oembed_patterns[re.compile(pattern)] = endpoint
103
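The loop above turns each oEmbed glob into a compiled regular expression: host wildcards become a restricted character class, other wildcards become `.+`, and non-HTTP(S) schemes are rejected. The same conversion, pulled out into a standalone function so it can be tried directly (the function name is ours):

```python
import re
from urllib import parse as urlparse

def glob_to_oembed_pattern(glob):
    results = urlparse.urlparse(glob)
    if results.scheme not in {"http", "https"}:
        raise ValueError("Insecure oEmbed glob scheme: %s" % (results.scheme,))
    pattern = urlparse.urlunparse(
        [
            results.scheme,
            re.escape(results.netloc).replace("\\*", "[a-zA-Z0-9_-]+"),
        ]
        + [re.escape(part).replace("\\*", ".+") for part in results[2:]]
    )
    return re.compile(pattern)

p = glob_to_oembed_pattern("https://*.twitter.com/*/status/*")
print(bool(p.fullmatch("https://mobile.twitter.com/someone/status/12345")))  # True
print(bool(p.fullmatch("https://example.com/not-a-tweet")))                  # False
```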
104
105 @attr.s
106 class OEmbedResult:
107 # Either HTML content or URL must be provided.
108 html = attr.ib(type=Optional[str])
109 url = attr.ib(type=Optional[str])
110 title = attr.ib(type=Optional[str])
111 # Number of seconds to cache the content.
112 cache_age = attr.ib(type=int)
113
114
115 class OEmbedError(Exception):
116 """An error occurred processing the oEmbed object."""
57117
58118
59119 class PreviewUrlResource(DirectServeJsonResource):
98158 cache_name="url_previews",
99159 clock=self.clock,
100160 # don't spider URLs more often than once an hour
101 expiry_ms=60 * 60 * 1000,
161 expiry_ms=ONE_HOUR,
102162 )
103163
104164 if self._worker_run_media_background_jobs:
309369
310370 return jsonog.encode("utf8")
311371
372 def _get_oembed_url(self, url: str) -> Optional[str]:
373 """
374 Check whether the URL should be downloaded as oEmbed content instead.
375
376 Params:
377 url: The URL to check.
378
379 Returns:
380 A URL to use instead or None if the original URL should be used.
381 """
382 for url_pattern, endpoint in _oembed_patterns.items():
383 if url_pattern.fullmatch(url):
384 return endpoint
385
386 # No match.
387 return None
388
389 async def _get_oembed_content(self, endpoint: str, url: str) -> OEmbedResult:
390 """
391 Request content from an oEmbed endpoint.
392
393 Params:
394 endpoint: The oEmbed API endpoint.
395 url: The URL to pass to the API.
396
397 Returns:
398 An object representing the metadata returned.
399
400 Raises:
401 OEmbedError if fetching or parsing of the oEmbed information fails.
402 """
403 try:
404 logger.debug("Trying to get oEmbed content for url '%s'", url)
405 result = await self.client.get_json(
406 endpoint,
407 # TODO Specify max height / width.
408 # Note that only the JSON format is supported.
409 args={"url": url},
410 )
411
412 # Ensure there's a version of 1.0.
413 if result.get("version") != "1.0":
414 raise OEmbedError("Invalid version: %s" % (result.get("version"),))
415
416 oembed_type = result.get("type")
417
418 # Ensure the cache age is None or an int.
419 cache_age = result.get("cache_age")
420 if cache_age:
421 cache_age = int(cache_age)
422
423 oembed_result = OEmbedResult(None, None, result.get("title"), cache_age)
424
425 # HTML content.
426 if oembed_type == "rich":
427 oembed_result.html = result.get("html")
428 return oembed_result
429
430 if oembed_type == "photo":
431 oembed_result.url = result.get("url")
432 return oembed_result
433
434 # TODO Handle link and video types.
435
436 if "thumbnail_url" in result:
437 oembed_result.url = result.get("thumbnail_url")
438 return oembed_result
439
440 raise OEmbedError("Incompatible oEmbed information.")
441
442 except OEmbedError as e:
443 # Trap OEmbedErrors first so we can directly re-raise them.
444 logger.warning("Error parsing oEmbed metadata from %s: %r", url, e)
445 raise
446
447 except Exception as e:
448 # Trap any exception and let the code follow as usual.
449 # FIXME: pass through 404s and other error messages nicely
450 logger.warning("Error downloading oEmbed metadata from %s: %r", url, e)
451 raise OEmbedError() from e
452
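`_get_oembed_content` validates the oEmbed `version`, coerces `cache_age` to an int, and then picks the useful field depending on `type`. A simplified, synchronous sketch of that response handling, fed with an abridged example payload (the field values are illustrative):

```python
class OEmbedError(Exception):
    """Stand-in for the OEmbedError defined above."""

def parse_oembed_response(result):
    """Return (html, url, title, cache_age) following the same branches."""
    if result.get("version") != "1.0":
        raise OEmbedError("Invalid version: %s" % (result.get("version"),))

    cache_age = result.get("cache_age")
    if cache_age:
        cache_age = int(cache_age)

    title = result.get("title")
    oembed_type = result.get("type")
    if oembed_type == "rich":
        return result.get("html"), None, title, cache_age
    if oembed_type == "photo":
        return None, result.get("url"), title, cache_age
    if "thumbnail_url" in result:
        return None, result.get("thumbnail_url"), title, cache_age
    raise OEmbedError("Incompatible oEmbed information.")

print(parse_oembed_response({
    "version": "1.0",
    "type": "rich",
    "title": "An example tweet",
    "html": "<blockquote>...</blockquote>",
    "cache_age": "3600",
}))
```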
312453 async def _download_url(self, url, user):
313454 # TODO: we should probably honour robots.txt... except in practice
314455 # we're most likely being explicitly triggered by a human rather than a
318459
319460 file_info = FileInfo(server_name=None, file_id=file_id, url_cache=True)
320461
321 with self.media_storage.store_into_file(file_info) as (f, fname, finish):
462 # If this URL can be accessed via oEmbed, use that instead.
463 url_to_download = url
464 oembed_url = self._get_oembed_url(url)
465 if oembed_url:
466 # The result might be a new URL to download, or it might be HTML content.
322467 try:
323 logger.debug("Trying to get preview for url '%s'", url)
324 length, headers, uri, code = await self.client.get_file(
325 url,
326 output_stream=f,
327 max_size=self.max_spider_size,
328 headers={"Accept-Language": self.url_preview_accept_language},
329 )
330 except SynapseError:
331 # Pass SynapseErrors through directly, so that the servlet
332 # handler will return a SynapseError to the client instead of
333 # blank data or a 500.
334 raise
335 except DNSLookupError:
336 # DNS lookup returned no results
337 # Note: This will also be the case if one of the resolved IP
338 # addresses is blacklisted
339 raise SynapseError(
340 502,
341 "DNS resolution failure during URL preview generation",
342 Codes.UNKNOWN,
343 )
344 except Exception as e:
345 # FIXME: pass through 404s and other error messages nicely
346 logger.warning("Error downloading %s: %r", url, e)
347
348 raise SynapseError(
349 500,
350 "Failed to download content: %s"
351 % (traceback.format_exception_only(sys.exc_info()[0], e),),
352 Codes.UNKNOWN,
353 )
354 await finish()
468 oembed_result = await self._get_oembed_content(oembed_url, url)
469 if oembed_result.url:
470 url_to_download = oembed_result.url
471 elif oembed_result.html:
472 url_to_download = None
473 except OEmbedError:
474 # If an error occurs, try doing a normal preview.
475 pass
476
477 if url_to_download:
478 with self.media_storage.store_into_file(file_info) as (f, fname, finish):
479 try:
480 logger.debug("Trying to get preview for url '%s'", url_to_download)
481 length, headers, uri, code = await self.client.get_file(
482 url_to_download,
483 output_stream=f,
484 max_size=self.max_spider_size,
485 headers={"Accept-Language": self.url_preview_accept_language},
486 )
487 except SynapseError:
488 # Pass SynapseErrors through directly, so that the servlet
489 # handler will return a SynapseError to the client instead of
490 # blank data or a 500.
491 raise
492 except DNSLookupError:
493 # DNS lookup returned no results
494 # Note: This will also be the case if one of the resolved IP
495 # addresses is blacklisted
496 raise SynapseError(
497 502,
498 "DNS resolution failure during URL preview generation",
499 Codes.UNKNOWN,
500 )
501 except Exception as e:
502 # FIXME: pass through 404s and other error messages nicely
503 logger.warning("Error downloading %s: %r", url_to_download, e)
504
505 raise SynapseError(
506 500,
507 "Failed to download content: %s"
508 % (traceback.format_exception_only(sys.exc_info()[0], e),),
509 Codes.UNKNOWN,
510 )
511 await finish()
512
513 if b"Content-Type" in headers:
514 media_type = headers[b"Content-Type"][0].decode("ascii")
515 else:
516 media_type = "application/octet-stream"
517
518 download_name = get_filename_from_headers(headers)
519
520 # FIXME: we should calculate a proper expiration based on the
521 # Cache-Control and Expire headers. But for now, assume 1 hour.
522 expires = ONE_HOUR
523 etag = headers["ETag"][0] if "ETag" in headers else None
524 else:
525 html_bytes = oembed_result.html.encode("utf-8") # type: ignore
526 with self.media_storage.store_into_file(file_info) as (f, fname, finish):
527 f.write(html_bytes)
528 await finish()
529
530 media_type = "text/html"
531 download_name = oembed_result.title
532 length = len(html_bytes)
533 # If a specific cache age was not given, assume 1 hour.
534 expires = oembed_result.cache_age or ONE_HOUR
535 uri = oembed_url
536 code = 200
537 etag = None
355538
356539 try:
357 if b"Content-Type" in headers:
358 media_type = headers[b"Content-Type"][0].decode("ascii")
359 else:
360 media_type = "application/octet-stream"
361540 time_now_ms = self.clock.time_msec()
362
363 download_name = get_filename_from_headers(headers)
364541
365542 await self.store.store_local_media(
366543 media_id=file_id,
367544 media_type=media_type,
368 time_now_ms=self.clock.time_msec(),
545 time_now_ms=time_now_ms,
369546 upload_name=download_name,
370547 media_length=length,
371548 user_id=user,
388565 "filename": fname,
389566 "uri": uri,
390567 "response_code": code,
391 # FIXME: we should calculate a proper expiration based on the
392 # Cache-Control and Expire headers. But for now, assume 1 hour.
393 "expires": 60 * 60 * 1000,
394 "etag": headers["ETag"][0] if "ETag" in headers else None,
568 "expires": expires,
569 "etag": etag,
395570 }
396571
397572 def _start_expire_url_cache_data(self):
448623 # These may be cached for a bit on the client (i.e., they
449624 # may have a room open with a preview url thing open).
450625 # So we wait a couple of days before deleting, just in case.
451 expire_before = now - 2 * 24 * 60 * 60 * 1000
626 expire_before = now - 2 * 24 * ONE_HOUR
452627 media_ids = await self.store.get_url_cache_media_before(expire_before)
453628
454629 removed_media = []
4343 from synapse.federation.federation_server import (
4444 FederationHandlerRegistry,
4545 FederationServer,
46 ReplicationFederationHandlerRegistry,
4746 )
4847 from synapse.federation.send_queue import FederationRemoteSendQueue
4948 from synapse.federation.sender import FederationSender
7271 from synapse.handlers.read_marker import ReadMarkerHandler
7372 from synapse.handlers.receipts import ReceiptsHandler
7473 from synapse.handlers.register import RegistrationHandler
75 from synapse.handlers.room import RoomContextHandler, RoomCreationHandler
74 from synapse.handlers.room import (
75 RoomContextHandler,
76 RoomCreationHandler,
77 RoomShutdownHandler,
78 )
7679 from synapse.handlers.room_list import RoomListHandler
7780 from synapse.handlers.room_member import RoomMemberMasterHandler
7881 from synapse.handlers.room_member_worker import RoomMemberWorkerHandler
7982 from synapse.handlers.set_password import SetPasswordHandler
8083 from synapse.handlers.stats import StatsHandler
8184 from synapse.handlers.sync import SyncHandler
82 from synapse.handlers.typing import TypingHandler
85 from synapse.handlers.typing import FollowerTypingHandler, TypingWriterHandler
8386 from synapse.handlers.user_directory import UserDirectoryHandler
8487 from synapse.http.client import InsecureInterceptableContextFactory, SimpleHttpClient
8588 from synapse.http.matrixfederationclient import MatrixFederationHttpClient
101104 WorkerServerNoticesSender,
102105 )
103106 from synapse.state import StateHandler, StateResolutionHandler
104 from synapse.storage import DataStores, Storage
107 from synapse.storage import DataStore, DataStores, Storage
105108 from synapse.streams.events import EventSources
106109 from synapse.util import Clock
107110 from synapse.util.distributor import Distributor
143146 "handlers",
144147 "auth",
145148 "room_creation_handler",
149 "room_shutdown_handler",
146150 "state_handler",
147151 "state_resolution_handler",
148152 "presence_handler",
306310 def get_clock(self):
307311 return self.clock
308312
309 def get_datastore(self):
313 def get_datastore(self) -> DataStore:
310314 return self.datastores.main
311315
312316 def get_datastores(self):
356360 def build_room_creation_handler(self):
357361 return RoomCreationHandler(self)
358362
363 def build_room_shutdown_handler(self):
364 return RoomShutdownHandler(self)
365
359366 def build_sendmail(self):
360367 return sendmail
361368
369376 return PresenceHandler(self)
370377
371378 def build_typing_handler(self):
372 return TypingHandler(self)
379 if self.config.worker.writers.typing == self.get_instance_name():
380 return TypingWriterHandler(self)
381 else:
382 return FollowerTypingHandler(self)
373383
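`build_typing_handler` now mirrors the check added to the `/typing` servlet earlier in this diff: the configured typing writer gets `TypingWriterHandler`, every other instance gets the read-only `FollowerTypingHandler`. A compact sketch of that selection (class bodies reduced to stand-ins):

```python
class FollowerTypingHandler:
    """Stand-in: read-only typing handler used on non-writer instances."""

class TypingWriterHandler(FollowerTypingHandler):
    """Stand-in: the single instance allowed to write typing notifications."""

def build_typing_handler(configured_writer, instance_name):
    if configured_writer == instance_name:
        return TypingWriterHandler()
    return FollowerTypingHandler()

print(type(build_typing_handler("master", "master")).__name__)    # TypingWriterHandler
print(type(build_typing_handler("master", "worker-1")).__name__)  # FollowerTypingHandler
```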
374384 def build_sync_handler(self):
375385 return SyncHandler(self)
525535 return RoomMemberMasterHandler(self)
526536
527537 def build_federation_registry(self):
528 if self.config.worker_app:
529 return ReplicationFederationHandlerRegistry(self)
530 else:
531 return FederationHandlerRegistry()
538 return FederationHandlerRegistry(self)
532539
533540 def build_server_notices_manager(self):
534541 if self.config.worker_app:
1919 import synapse.handlers.room_member
2020 import synapse.handlers.set_password
2121 import synapse.http.client
22 import synapse.http.matrixfederationclient
2223 import synapse.notifier
2324 import synapse.push.pusherpool
2425 import synapse.replication.tcp.client
2930 import synapse.state
3031 import synapse.storage
3132 from synapse.events.builder import EventBuilderFactory
33 from synapse.handlers.typing import FollowerTypingHandler
3234 from synapse.replication.tcp.streams import Stream
3335
3436 class HomeServer(object):
6971 def get_room_creation_handler(self) -> synapse.handlers.room.RoomCreationHandler:
7072 pass
7173 def get_room_member_handler(self) -> synapse.handlers.room_member.RoomMemberHandler:
74 pass
75 def get_room_shutdown_handler(self) -> synapse.handlers.room.RoomShutdownHandler:
7276 pass
7377 def get_event_creation_handler(
7478 self,
140144 pass
141145 def get_replication_streams(self) -> Dict[str, Stream]:
142146 pass
147 def get_http_client(
148 self,
149 ) -> synapse.http.matrixfederationclient.MatrixFederationHttpClient:
150 pass
151 def should_send_federation(self) -> bool:
152 pass
153 def get_typing_handler(self) -> FollowerTypingHandler:
154 pass
1515
1616 import logging
1717 from collections import namedtuple
18 from typing import Dict, Iterable, List, Optional, Set
18 from typing import Awaitable, Dict, Iterable, List, Optional, Set
1919
2020 import attr
2121 from frozendict import frozendict
2222 from prometheus_client import Histogram
23
24 from twisted.internet import defer
2523
2624 from synapse.api.constants import EventTypes
2725 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, StateResolutionVersions
3028 from synapse.logging.utils import log_function
3129 from synapse.state import v1, v2
3230 from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour
31 from synapse.storage.roommember import ProfileInfo
3332 from synapse.types import StateMap
3433 from synapse.util import Clock
3534 from synapse.util.async_helpers import Linearizer
107106 self.hs = hs
108107 self._state_resolution_handler = hs.get_state_resolution_handler()
109108
110 @defer.inlineCallbacks
111 def get_current_state(
109 async def get_current_state(
112110 self, room_id, event_type=None, state_key="", latest_event_ids=None
113111 ):
114112 """ Retrieves the current state for the room. This is done by
125123 map from (type, state_key) to event
126124 """
127125 if not latest_event_ids:
128 latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
126 latest_event_ids = await self.store.get_latest_event_ids_in_room(room_id)
129127
130128 logger.debug("calling resolve_state_groups from get_current_state")
131 ret = yield self.resolve_state_groups_for_events(room_id, latest_event_ids)
129 ret = await self.resolve_state_groups_for_events(room_id, latest_event_ids)
132130 state = ret.state
133131
134132 if event_type:
135133 event_id = state.get((event_type, state_key))
136134 event = None
137135 if event_id:
138 event = yield self.store.get_event(event_id, allow_none=True)
136 event = await self.store.get_event(event_id, allow_none=True)
139137 return event
140138
141 state_map = yield self.store.get_events(
139 state_map = await self.store.get_events(
142140 list(state.values()), get_prev_content=False
143141 )
144142 state = {
147145
148146 return state
149147
150 @defer.inlineCallbacks
151 def get_current_state_ids(self, room_id, latest_event_ids=None):
148 async def get_current_state_ids(self, room_id, latest_event_ids=None):
152149 """Get the current state, or the state at a set of events, for a room
153150
154151 Args:
163160 (event_type, state_key) -> event_id
164161 """
165162 if not latest_event_ids:
166 latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
163 latest_event_ids = await self.store.get_latest_event_ids_in_room(room_id)
167164
168165 logger.debug("calling resolve_state_groups from get_current_state_ids")
169 ret = yield self.resolve_state_groups_for_events(room_id, latest_event_ids)
166 ret = await self.resolve_state_groups_for_events(room_id, latest_event_ids)
170167 state = ret.state
171168
172169 return state
173170
174 @defer.inlineCallbacks
175 def get_current_users_in_room(self, room_id, latest_event_ids=None):
171 async def get_current_users_in_room(
172 self, room_id: str, latest_event_ids: Optional[List[str]] = None
173 ) -> Dict[str, ProfileInfo]:
176174 """
177175 Get the users who are currently in a room.
178176
179177 Args:
180 room_id (str): The ID of the room.
181 latest_event_ids (List[str]|None): Precomputed list of latest
182 event IDs. Will be computed if None.
183 Returns:
184 Deferred[Dict[str,ProfileInfo]]: Dictionary of user IDs to their
185 profileinfo.
178 room_id: The ID of the room.
179 latest_event_ids: Precomputed list of latest event IDs. Will be computed if None.
180 Returns:
181 Dictionary of user IDs to their profileinfo.
186182 """
187183 if not latest_event_ids:
188 latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
184 latest_event_ids = await self.store.get_latest_event_ids_in_room(room_id)
189185 logger.debug("calling resolve_state_groups from get_current_users_in_room")
190 entry = yield self.resolve_state_groups_for_events(room_id, latest_event_ids)
191 joined_users = yield self.store.get_joined_users_from_state(room_id, entry)
186 entry = await self.resolve_state_groups_for_events(room_id, latest_event_ids)
187 joined_users = await self.store.get_joined_users_from_state(room_id, entry)
192188 return joined_users
193189
194 @defer.inlineCallbacks
195 def get_current_hosts_in_room(self, room_id):
196 event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
197 return (yield self.get_hosts_in_room_at_events(room_id, event_ids))
198
199 @defer.inlineCallbacks
200 def get_hosts_in_room_at_events(self, room_id, event_ids):
190 async def get_current_hosts_in_room(self, room_id):
191 event_ids = await self.store.get_latest_event_ids_in_room(room_id)
192 return await self.get_hosts_in_room_at_events(room_id, event_ids)
193
194 async def get_hosts_in_room_at_events(self, room_id, event_ids):
201195 """Get the hosts that were in a room at the given event ids
202196
203197 Args:
207201 Returns:
208202 Deferred[list[str]]: the hosts in the room at the given events
209203 """
210 entry = yield self.resolve_state_groups_for_events(room_id, event_ids)
211 joined_hosts = yield self.store.get_joined_hosts(room_id, entry)
204 entry = await self.resolve_state_groups_for_events(room_id, event_ids)
205 joined_hosts = await self.store.get_joined_hosts(room_id, entry)
212206 return joined_hosts
213207
214 @defer.inlineCallbacks
215 def compute_event_context(
208 async def compute_event_context(
216209 self, event: EventBase, old_state: Optional[Iterable[EventBase]] = None
217210 ):
218211 """Build an EventContext structure for the event.
277270 # otherwise, we'll need to resolve the state across the prev_events.
278271 logger.debug("calling resolve_state_groups from compute_event_context")
279272
280 entry = yield self.resolve_state_groups_for_events(
273 entry = await self.resolve_state_groups_for_events(
281274 event.room_id, event.prev_event_ids()
282275 )
283276
294287 #
295288
296289 if not state_group_before_event:
297 state_group_before_event = yield self.state_store.store_state_group(
290 state_group_before_event = await self.state_store.store_state_group(
298291 event.event_id,
299292 event.room_id,
300293 prev_group=state_group_before_event_prev_group,
334327 state_ids_after_event[key] = event.event_id
335328 delta_ids = {key: event.event_id}
336329
337 state_group_after_event = yield self.state_store.store_state_group(
330 state_group_after_event = await self.state_store.store_state_group(
338331 event.event_id,
339332 event.room_id,
340333 prev_group=state_group_before_event,
352345 )
353346
354347 @measure_func()
355 @defer.inlineCallbacks
356 def resolve_state_groups_for_events(self, room_id, event_ids):
348 async def resolve_state_groups_for_events(self, room_id, event_ids):
357349 """ Given a list of event_ids this method fetches the state at each
358350 event, resolves conflicts between them and returns them.
359351
372364 # map from state group id to the state in that state group (where
373365 # 'state' is a map from state key to event id)
374366 # dict[int, dict[(str, str), str]]
375 state_groups_ids = yield self.state_store.get_state_groups_ids(
367 state_groups_ids = await self.state_store.get_state_groups_ids(
376368 room_id, event_ids
377369 )
378370
381373 elif len(state_groups_ids) == 1:
382374 name, state_list = list(state_groups_ids.items()).pop()
383375
384 prev_group, delta_ids = yield self.state_store.get_state_group_delta(name)
376 prev_group, delta_ids = await self.state_store.get_state_group_delta(name)
385377
386378 return _StateCacheEntry(
387379 state=state_list,
390382 delta_ids=delta_ids,
391383 )
392384
393 room_version = yield self.store.get_room_version_id(room_id)
394
395 result = yield self._state_resolution_handler.resolve_state_groups(
385 room_version = await self.store.get_room_version_id(room_id)
386
387 result = await self._state_resolution_handler.resolve_state_groups(
396388 room_id,
397389 room_version,
398390 state_groups_ids,
401393 )
402394 return result
403395
404 @defer.inlineCallbacks
405 def resolve_events(self, room_version, state_sets, event):
396 async def resolve_events(self, room_version, state_sets, event):
406397 logger.info(
407398 "Resolving state for %s with %d groups", event.room_id, len(state_sets)
408399 )
413404 state_map = {ev.event_id: ev for st in state_sets for ev in st}
414405
415406 with Measure(self.clock, "state._resolve_events"):
416 new_state = yield resolve_events_with_store(
407 new_state = await resolve_events_with_store(
417408 self.clock,
418409 event.room_id,
419410 room_version,
450441 reset_expiry_on_get=True,
451442 )
452443
453 @defer.inlineCallbacks
454444 @log_function
455 def resolve_state_groups(
445 async def resolve_state_groups(
456446 self, room_id, room_version, state_groups_ids, event_map, state_res_store
457447 ):
458448 """Resolves conflicts between a set of state groups
478468 state_res_store (StateResolutionStore)
479469
480470 Returns:
481 Deferred[_StateCacheEntry]: resolved state
471 _StateCacheEntry: resolved state
482472 """
483473 logger.debug("resolve_state_groups state_groups %s", state_groups_ids.keys())
484474
485475 group_names = frozenset(state_groups_ids.keys())
486476
487 with (yield self.resolve_linearizer.queue(group_names)):
477 with (await self.resolve_linearizer.queue(group_names)):
488478 if self._state_cache is not None:
489479 cache = self._state_cache.get(group_names, None)
490480 if cache:
516506 if conflicted_state:
517507 logger.info("Resolving conflicted state for %r", room_id)
518508 with Measure(self.clock, "state._resolve_events"):
519 new_state = yield resolve_events_with_store(
509 new_state = await resolve_events_with_store(
520510 self.clock,
521511 room_id,
522512 room_version,
597587 state_sets: List[StateMap[str]],
598588 event_map: Optional[Dict[str, EventBase]],
599589 state_res_store: "StateResolutionStore",
600 ):
590 ) -> Awaitable[StateMap[str]]:
601591 """
602592 Args:
603593 room_id: the room we are working in
618608 state_res_store: a place to fetch events from
619609
620610 Returns:
621 Deferred[dict[(str, str), str]]:
622 a map from (type, state_key) to event_id.
611 a map from (type, state_key) to event_id.
623612 """
624613 v = KNOWN_ROOM_VERSIONS[room_version]
625614 if v.state_res == StateResolutionVersions.V1:
1414
1515 import hashlib
1616 import logging
17 from typing import Callable, Dict, List, Optional
18
19 from twisted.internet import defer
17 from typing import Awaitable, Callable, Dict, List, Optional
2018
2119 from synapse import event_auth
2220 from synapse.api.constants import EventTypes
3129 POWER_KEY = (EventTypes.PowerLevels, "")
3230
3331
34 @defer.inlineCallbacks
35 def resolve_events_with_store(
32 async def resolve_events_with_store(
3633 room_id: str,
3734 state_sets: List[StateMap[str]],
3835 event_map: Optional[Dict[str, EventBase]],
39 state_map_factory: Callable,
36 state_map_factory: Callable[[List[str]], Awaitable],
4037 ):
4138 """
4239 Args:
5552
5653 state_map_factory: will be called
5754 with a list of event_ids that are needed, and should return with
58 a Deferred of dict of event_id to event.
55 an Awaitable that resolves to a dict of event_id to event.
5956
6057 Returns:
6158 Deferred[dict[(str, str), str]]:
7976
8077 # dict[str, FrozenEvent]: a map from state event id to event. Only includes
8178 # the state events which are in conflict (and those in event_map)
82 state_map = yield state_map_factory(needed_events)
79 state_map = await state_map_factory(needed_events)
8380 if event_map is not None:
8481 state_map.update(event_map)
8582
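The `state_map_factory` parameter is now annotated as `Callable[[List[str]], Awaitable]` and is awaited directly. A sketch of what a conforming factory looks like, using an invented in-memory event store:

```python
import asyncio
from typing import Awaitable, Callable, Dict, List

_FAKE_EVENTS = {"$a": {"type": "m.room.member"}, "$b": {"type": "m.room.name"}}

def make_state_map_factory(store) -> Callable[[List[str]], Awaitable[Dict[str, dict]]]:
    """Given event IDs, return an awaitable resolving to {event_id: event}."""
    async def factory(event_ids: List[str]) -> Dict[str, dict]:
        return {eid: store[eid] for eid in event_ids if eid in store}
    return factory

factory = make_state_map_factory(_FAKE_EVENTS)
print(asyncio.run(factory(["$a", "$b"])))  # both fake events come back keyed by ID
```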
109106 "Asking for %d/%d auth events", len(new_needed_events), new_needed_event_count
110107 )
111108
112 state_map_new = yield state_map_factory(new_needed_events)
109 state_map_new = await state_map_factory(new_needed_events)
113110 for event in state_map_new.values():
114111 if event.room_id != room_id:
115112 raise Exception(
1717 import logging
1818 from typing import Dict, List, Optional
1919
20 from twisted.internet import defer
21
2220 import synapse.state
2321 from synapse import event_auth
2422 from synapse.api.constants import EventTypes
3129 logger = logging.getLogger(__name__)
3230
3331
34 # We want to yield to the reactor occasionally during state res when dealing
32 # We want to hand control back to the reactor occasionally during state res when dealing
3533 # with large data sets, so that we don't exhaust the reactor. This is done by
36 # yielding to reactor during loops every N iterations.
37 _YIELD_AFTER_ITERATIONS = 100
38
39
40 @defer.inlineCallbacks
41 def resolve_events_with_store(
34 # awaiting a zero-second sleep during loops every N iterations.
35 _AWAIT_AFTER_ITERATIONS = 100
36
37
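All of the loops below use the renamed `_AWAIT_AFTER_ITERATIONS` constant together with `await clock.sleep(0)` to return control to the reactor periodically. The pattern in isolation, using `asyncio.sleep` in place of Synapse's clock:

```python
import asyncio

_AWAIT_AFTER_ITERATIONS = 100  # same constant as above

async def process_large_set(items):
    for idx, _item in enumerate(items, start=1):
        # ... per-item work would go here ...
        if idx % _AWAIT_AFTER_ITERATIONS == 0:
            # Zero-second sleep: lets other reactor/event-loop work run.
            await asyncio.sleep(0)

asyncio.run(process_large_set(range(1000)))
```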
38 async def resolve_events_with_store(
4239 clock: Clock,
4340 room_id: str,
4441 room_version: str,
8683
8784 # Also fetch all auth events that appear in only some of the state sets'
8885 # auth chains.
89 auth_diff = yield _get_auth_chain_difference(state_sets, event_map, state_res_store)
86 auth_diff = await _get_auth_chain_difference(state_sets, event_map, state_res_store)
9087
9188 full_conflicted_set = set(
9289 itertools.chain(
9491 )
9592 )
9693
97 events = yield state_res_store.get_events(
94 events = await state_res_store.get_events(
9895 [eid for eid in full_conflicted_set if eid not in event_map],
9996 allow_rejected=True,
10097 )
117114 eid for eid in full_conflicted_set if _is_power_event(event_map[eid])
118115 )
119116
120 sorted_power_events = yield _reverse_topological_power_sort(
117 sorted_power_events = await _reverse_topological_power_sort(
121118 clock, room_id, power_events, event_map, state_res_store, full_conflicted_set
122119 )
123120
124121 logger.debug("sorted %d power events", len(sorted_power_events))
125122
126123 # Now sequentially auth each one
127 resolved_state = yield _iterative_auth_checks(
124 resolved_state = await _iterative_auth_checks(
128125 clock,
129126 room_id,
130127 room_version,
147144 logger.debug("sorting %d remaining events", len(leftover_events))
148145
149146 pl = resolved_state.get((EventTypes.PowerLevels, ""), None)
150 leftover_events = yield _mainline_sort(
147 leftover_events = await _mainline_sort(
151148 clock, room_id, leftover_events, pl, event_map, state_res_store
152149 )
153150
154151 logger.debug("resolving remaining events")
155152
156 resolved_state = yield _iterative_auth_checks(
153 resolved_state = await _iterative_auth_checks(
157154 clock,
158155 room_id,
159156 room_version,
173170 return resolved_state
174171
175172
176 @defer.inlineCallbacks
177 def _get_power_level_for_sender(room_id, event_id, event_map, state_res_store):
173 async def _get_power_level_for_sender(room_id, event_id, event_map, state_res_store):
178174 """Return the power level of the sender of the given event according to
179175 their auth events.
180176
187183 Returns:
188184 Deferred[int]
189185 """
190 event = yield _get_event(room_id, event_id, event_map, state_res_store)
186 event = await _get_event(room_id, event_id, event_map, state_res_store)
191187
192188 pl = None
193189 for aid in event.auth_event_ids():
194 aev = yield _get_event(
190 aev = await _get_event(
195191 room_id, aid, event_map, state_res_store, allow_none=True
196192 )
197193 if aev and (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
201197 if pl is None:
202198 # Couldn't find power level. Check if they're the creator of the room
203199 for aid in event.auth_event_ids():
204 aev = yield _get_event(
200 aev = await _get_event(
205201 room_id, aid, event_map, state_res_store, allow_none=True
206202 )
207203 if aev and (aev.type, aev.state_key) == (EventTypes.Create, ""):
220216 return int(level)
221217
222218
223 @defer.inlineCallbacks
224 def _get_auth_chain_difference(state_sets, event_map, state_res_store):
219 async def _get_auth_chain_difference(state_sets, event_map, state_res_store):
225220 """Compare the auth chains of each state set and return the set of events
226221 that only appear in some but not all of the auth chains.
227222
234229 Deferred[set[str]]: Set of event IDs
235230 """
236231
237 difference = yield state_res_store.get_auth_chain_difference(
232 difference = await state_res_store.get_auth_chain_difference(
238233 [set(state_set.values()) for state_set in state_sets]
239234 )
240235
291286 return False
292287
293288
294 @defer.inlineCallbacks
295 def _add_event_and_auth_chain_to_graph(
289 async def _add_event_and_auth_chain_to_graph(
296290 graph, room_id, event_id, event_map, state_res_store, auth_diff
297291 ):
298292 """Helper function for _reverse_topological_power_sort that add the event
313307 eid = state.pop()
314308 graph.setdefault(eid, set())
315309
316 event = yield _get_event(room_id, eid, event_map, state_res_store)
310 event = await _get_event(room_id, eid, event_map, state_res_store)
317311 for aid in event.auth_event_ids():
318312 if aid in auth_diff:
319313 if aid not in graph:
322316 graph.setdefault(eid, set()).add(aid)
323317
324318
325 @defer.inlineCallbacks
326 def _reverse_topological_power_sort(
319 async def _reverse_topological_power_sort(
327320 clock, room_id, event_ids, event_map, state_res_store, auth_diff
328321 ):
329322 """Returns a list of the event_ids sorted by reverse topological ordering,
343336
344337 graph = {}
345338 for idx, event_id in enumerate(event_ids, start=1):
346 yield _add_event_and_auth_chain_to_graph(
339 await _add_event_and_auth_chain_to_graph(
347340 graph, room_id, event_id, event_map, state_res_store, auth_diff
348341 )
349342
350 # We yield occasionally when we're working with large data sets to
343 # We await occasionally when we're working with large data sets to
351344 # ensure that we don't block the reactor loop for too long.
352 if idx % _YIELD_AFTER_ITERATIONS == 0:
353 yield clock.sleep(0)
345 if idx % _AWAIT_AFTER_ITERATIONS == 0:
346 await clock.sleep(0)
354347
355348 event_to_pl = {}
356349 for idx, event_id in enumerate(graph, start=1):
357 pl = yield _get_power_level_for_sender(
350 pl = await _get_power_level_for_sender(
358351 room_id, event_id, event_map, state_res_store
359352 )
360353 event_to_pl[event_id] = pl
361354
362 # We yield occasionally when we're working with large data sets to
355 # We await occasionally when we're working with large data sets to
363356 # ensure that we don't block the reactor loop for too long.
364 if idx % _YIELD_AFTER_ITERATIONS == 0:
365 yield clock.sleep(0)
357 if idx % _AWAIT_AFTER_ITERATIONS == 0:
358 await clock.sleep(0)
366359
367360 def _get_power_order(event_id):
368361 ev = event_map[event_id]
377370 return sorted_events
378371
379372
380 @defer.inlineCallbacks
381 def _iterative_auth_checks(
373 async def _iterative_auth_checks(
382374 clock, room_id, room_version, event_ids, base_state, event_map, state_res_store
383375 ):
384376 """Sequentially apply auth checks to each event in given list, updating the
404396
405397 auth_events = {}
406398 for aid in event.auth_event_ids():
407 ev = yield _get_event(
399 ev = await _get_event(
408400 room_id, aid, event_map, state_res_store, allow_none=True
409401 )
410402
419411 for key in event_auth.auth_types_for_event(event):
420412 if key in resolved_state:
421413 ev_id = resolved_state[key]
422 ev = yield _get_event(room_id, ev_id, event_map, state_res_store)
414 ev = await _get_event(room_id, ev_id, event_map, state_res_store)
423415
424416 if ev.rejected_reason is None:
425417 auth_events[key] = event_map[ev_id]
437429 except AuthError:
438430 pass
439431
440 # We yield occasionally when we're working with large data sets to
432 # We await occasionally when we're working with large data sets to
441433 # ensure that we don't block the reactor loop for too long.
442 if idx % _YIELD_AFTER_ITERATIONS == 0:
443 yield clock.sleep(0)
434 if idx % _AWAIT_AFTER_ITERATIONS == 0:
435 await clock.sleep(0)
444436
445437 return resolved_state
446438
447439
448 @defer.inlineCallbacks
449 def _mainline_sort(
440 async def _mainline_sort(
450441 clock, room_id, event_ids, resolved_power_event_id, event_map, state_res_store
451442 ):
452443 """Returns a sorted list of event_ids sorted by mainline ordering based on
473464 idx = 0
474465 while pl:
475466 mainline.append(pl)
476 pl_ev = yield _get_event(room_id, pl, event_map, state_res_store)
467 pl_ev = await _get_event(room_id, pl, event_map, state_res_store)
477468 auth_events = pl_ev.auth_event_ids()
478469 pl = None
479470 for aid in auth_events:
480 ev = yield _get_event(
471 ev = await _get_event(
481472 room_id, aid, event_map, state_res_store, allow_none=True
482473 )
483474 if ev and (ev.type, ev.state_key) == (EventTypes.PowerLevels, ""):
484475 pl = aid
485476 break
486477
487 # We yield occasionally when we're working with large data sets to
478 # We await occasionally when we're working with large data sets to
488479 # ensure that we don't block the reactor loop for too long.
489 if idx != 0 and idx % _YIELD_AFTER_ITERATIONS == 0:
490 yield clock.sleep(0)
480 if idx != 0 and idx % _AWAIT_AFTER_ITERATIONS == 0:
481 await clock.sleep(0)
491482
492483 idx += 1
493484
497488
498489 order_map = {}
499490 for idx, ev_id in enumerate(event_ids, start=1):
500 depth = yield _get_mainline_depth_for_event(
491 depth = await _get_mainline_depth_for_event(
501492 event_map[ev_id], mainline_map, event_map, state_res_store
502493 )
503494 order_map[ev_id] = (depth, event_map[ev_id].origin_server_ts, ev_id)
504495
505 # We yield occasionally when we're working with large data sets to
496 # We await occasionally when we're working with large data sets to
506497 # ensure that we don't block the reactor loop for too long.
507 if idx % _YIELD_AFTER_ITERATIONS == 0:
508 yield clock.sleep(0)
498 if idx % _AWAIT_AFTER_ITERATIONS == 0:
499 await clock.sleep(0)
509500
510501 event_ids.sort(key=lambda ev_id: order_map[ev_id])
511502
512503 return event_ids
513504
514505
515 @defer.inlineCallbacks
516 def _get_mainline_depth_for_event(event, mainline_map, event_map, state_res_store):
506 async def _get_mainline_depth_for_event(
507 event, mainline_map, event_map, state_res_store
508 ):
517509 """Get the mainline depths for the given event based on the mainline map
518510
519511 Args:
540532 event = None
541533
542534 for aid in auth_events:
543 aev = yield _get_event(
535 aev = await _get_event(
544536 room_id, aid, event_map, state_res_store, allow_none=True
545537 )
546538 if aev and (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
551543 return 0
552544
553545
554 @defer.inlineCallbacks
555 def _get_event(room_id, event_id, event_map, state_res_store, allow_none=False):
546 async def _get_event(room_id, event_id, event_map, state_res_store, allow_none=False):
556547 """Helper function to look up event in event_map, falling back to looking
557548 it up in the store
558549
568559 Deferred[Optional[FrozenEvent]]
569560 """
570561 if event_id not in event_map:
571 events = yield state_res_store.get_events([event_id], allow_rejected=True)
562 events = await state_res_store.get_events([event_id], allow_rejected=True)
572563 event_map.update(events)
573564 event = event_map.get(event_id)
574565
9999 if isinstance(db_content, memoryview):
100100 db_content = db_content.tobytes()
101101
102 # Decode it to a Unicode string before feeding it to json.loads, so we
103 # consistenty get a Unicode-containing object out.
102 # Decode it to a Unicode string before feeding it to json.loads, since
103 # Python 3.5 does not support deserializing bytes.
104104 if isinstance(db_content, (bytes, bytearray)):
105105 db_content = db_content.decode("utf8")
106106
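Many hunks below switch `json.loads(...)` on database values to `db_to_json(...)`. Based on the normalisation shown in the hunk above (memoryview to bytes to str before parsing), a self-contained sketch of what that helper does; the real implementation lives in `synapse.storage._base`:

```python
import json

def db_to_json_sketch(db_content):
    if isinstance(db_content, memoryview):
        db_content = db_content.tobytes()
    # Decode before json.loads, since Python 3.5 cannot deserialize bytes.
    if isinstance(db_content, (bytes, bytearray)):
        db_content = db_content.decode("utf8")
    return json.loads(db_content)

print(db_to_json_sketch(memoryview(b'{"limit": 10}')))  # {'limit': 10}
print(db_to_json_sketch('{"limit": 10}'))               # {'limit': 10}
```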
248248 retcol="progress_json",
249249 )
250250
251 progress = json.loads(progress_json)
251 # Avoid a circular import.
252 from synapse.storage._base import db_to_json
253
254 progress = db_to_json(progress_json)
252255
253256 time_start = self._clock.time_msec()
254257 items_updated = await update_handler(progress, batch_size)
127127 db_conn, "presence_stream", "stream_id"
128128 )
129129 self._device_inbox_id_gen = StreamIdGenerator(
130 db_conn, "device_max_stream_id", "stream_id"
130 db_conn, "device_inbox", "stream_id"
131131 )
132132 self._public_room_id_gen = StreamIdGenerator(
133133 db_conn, "public_room_list_stream", "stream_id"
2121
2222 from twisted.internet import defer
2323
24 from synapse.storage._base import SQLBaseStore
24 from synapse.storage._base import SQLBaseStore, db_to_json
2525 from synapse.storage.database import Database
2626 from synapse.storage.util.id_generators import StreamIdGenerator
2727 from synapse.util.caches.descriptors import cached, cachedInlineCallbacks
7676 )
7777
7878 global_account_data = {
79 row["account_data_type"]: json.loads(row["content"]) for row in rows
79 row["account_data_type"]: db_to_json(row["content"]) for row in rows
8080 }
8181
8282 rows = self.db.simple_select_list_txn(
8989 by_room = {}
9090 for row in rows:
9191 room_data = by_room.setdefault(row["room_id"], {})
92 room_data[row["account_data_type"]] = json.loads(row["content"])
92 room_data[row["account_data_type"]] = db_to_json(row["content"])
9393
9494 return global_account_data, by_room
9595
112112 )
113113
114114 if result:
115 return json.loads(result)
115 return db_to_json(result)
116116 else:
117117 return None
118118
136136 )
137137
138138 return {
139 row["account_data_type"]: json.loads(row["content"]) for row in rows
139 row["account_data_type"]: db_to_json(row["content"]) for row in rows
140140 }
141141
142142 return self.db.runInteraction(
169169 allow_none=True,
170170 )
171171
172 return json.loads(content_json) if content_json else None
172 return db_to_json(content_json) if content_json else None
173173
174174 return self.db.runInteraction(
175175 "get_account_data_for_room_and_type", get_account_data_for_room_and_type_txn
254254
255255 txn.execute(sql, (user_id, stream_id))
256256
257 global_account_data = {row[0]: json.loads(row[1]) for row in txn}
257 global_account_data = {row[0]: db_to_json(row[1]) for row in txn}
258258
259259 sql = (
260260 "SELECT room_id, account_data_type, content FROM room_account_data"
266266 account_data_by_room = {}
267267 for row in txn:
268268 room_account_data = account_data_by_room.setdefault(row[0], {})
269 room_account_data[row[1]] = json.loads(row[2])
269 room_account_data[row[1]] = db_to_json(row[2])
270270
271271 return global_account_data, account_data_by_room
272272
2121
2222 from synapse.appservice import AppServiceTransaction
2323 from synapse.config.appservice import load_appservices
24 from synapse.storage._base import SQLBaseStore
24 from synapse.storage._base import SQLBaseStore, db_to_json
2525 from synapse.storage.data_stores.main.events_worker import EventsWorkerStore
2626 from synapse.storage.database import Database
2727
302302 if not entry:
303303 return None
304304
305 event_ids = json.loads(entry["event_ids"])
305 event_ids = db_to_json(entry["event_ids"])
306306
307307 events = yield self.get_events_as_list(event_ids)
308308
2020 from twisted.internet import defer
2121
2222 from synapse.logging.opentracing import log_kv, set_tag, trace
23 from synapse.storage._base import SQLBaseStore, make_in_list_sql_clause
23 from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
2424 from synapse.storage.database import Database
2525 from synapse.util.caches.expiringcache import ExpiringCache
2626
6464 messages = []
6565 for row in txn:
6666 stream_pos = row[0]
67 messages.append(json.loads(row[1]))
67 messages.append(db_to_json(row[1]))
6868 if len(messages) < limit:
6969 stream_pos = current_stream_id
7070 return messages, stream_pos
172172 messages = []
173173 for row in txn:
174174 stream_pos = row[0]
175 messages.append(json.loads(row[1]))
175 messages.append(db_to_json(row[1]))
176176 if len(messages) < limit:
177177 log_kv({"message": "Set stream position to current position"})
178178 stream_pos = current_stream_id
423423 def _add_messages_to_local_device_inbox_txn(
424424 self, txn, stream_id, messages_by_user_then_device
425425 ):
426 sql = "UPDATE device_max_stream_id" " SET stream_id = ?" " WHERE stream_id < ?"
427 txn.execute(sql, (stream_id, stream_id))
428
429426 local_by_user_then_device = {}
430427 for user_id, messages_by_device in messages_by_user_then_device.items():
431428 messages_json_for_user = {}
576576 rows = yield self.db.execute(
577577 "get_users_whose_signatures_changed", None, sql, user_id, from_key
578578 )
579 return {user for row in rows for user in json.loads(row[0])}
579 return {user for row in rows for user in db_to_json(row[0])}
580580 else:
581581 return set()
582582
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515
16 import json
16 from canonicaljson import json
1717
1818 from twisted.internet import defer
1919
2020 from synapse.api.errors import StoreError
2121 from synapse.logging.opentracing import log_kv, trace
22 from synapse.storage._base import SQLBaseStore
22 from synapse.storage._base import SQLBaseStore, db_to_json
2323
2424
2525 class EndToEndRoomKeyStore(SQLBaseStore):
147147 "forwarded_count": row["forwarded_count"],
148148 # is_verified must be returned to the client as a boolean
149149 "is_verified": bool(row["is_verified"]),
150 "session_data": json.loads(row["session_data"]),
150 "session_data": db_to_json(row["session_data"]),
151151 }
152152
153153 return sessions
221221 "first_message_index": row[2],
222222 "forwarded_count": row[3],
223223 "is_verified": row[4],
224 "session_data": json.loads(row[5]),
224 "session_data": db_to_json(row[5]),
225225 }
226226
227227 return ret
318318 keyvalues={"user_id": user_id, "version": this_version, "deleted": 0},
319319 retcols=("version", "algorithm", "auth_data", "etag"),
320320 )
321 result["auth_data"] = json.loads(result["auth_data"])
321 result["auth_data"] = db_to_json(result["auth_data"])
322322 result["version"] = str(result["version"])
323323 if result["etag"] is None:
324324 result["etag"] = 0
365365 for row in rows:
366366 user_id = row["user_id"]
367367 key_type = row["keytype"]
368 key = json.loads(row["keydata"])
368 key = db_to_json(row["keydata"])
369369 user_info = result.setdefault(user_id, {})
370370 user_info[key_type] = key
371371
2020 from twisted.internet import defer
2121
2222 from synapse.metrics.background_process_metrics import run_as_background_process
23 from synapse.storage._base import LoggingTransaction, SQLBaseStore
23 from synapse.storage._base import LoggingTransaction, SQLBaseStore, db_to_json
2424 from synapse.storage.database import Database
2525 from synapse.util.caches.descriptors import cachedInlineCallbacks
2626
5757 """Custom deserializer for actions. This allows us to "compress" common actions
5858 """
5959 if actions:
60 return json.loads(actions)
60 return db_to_json(actions)
6161
6262 if is_highlight:
6363 return DEFAULT_HIGHLIGHT_ACTION
1616 import itertools
1717 import logging
1818 from collections import OrderedDict, namedtuple
19 from functools import wraps
2019 from typing import TYPE_CHECKING, Dict, Iterable, List, Tuple
2120
2221 import attr
23 from canonicaljson import json
2422 from prometheus_client import Counter
2523
2624 from twisted.internet import defer
3230 from synapse.events import EventBase # noqa: F401
3331 from synapse.events.snapshot import EventContext # noqa: F401
3432 from synapse.logging.utils import log_function
35 from synapse.storage._base import make_in_list_sql_clause
33 from synapse.storage._base import db_to_json, make_in_list_sql_clause
3634 from synapse.storage.data_stores.main.search import SearchEntry
3735 from synapse.storage.database import Database, LoggingTransaction
3836 from synapse.storage.util.id_generators import StreamIdGenerator
6866 _EventCacheEntry = namedtuple("_EventCacheEntry", ("event", "redacted_event"))
6967
7068
71 def _retry_on_integrity_error(func):
72 """Wraps a database function so that it gets retried on IntegrityError,
73 with `delete_existing=True` passed in.
74
75 Args:
76 func: function that returns a Deferred and accepts a `delete_existing` arg
77 """
78
79 @wraps(func)
80 @defer.inlineCallbacks
81 def f(self, *args, **kwargs):
82 try:
83 res = yield func(self, *args, delete_existing=False, **kwargs)
84 except self.database_engine.module.IntegrityError:
85 logger.exception("IntegrityError, retrying.")
86 res = yield func(self, *args, delete_existing=True, **kwargs)
87 return res
88
89 return f
90
91
9269 @attr.s(slots=True)
9370 class DeltaState:
9471 """Deltas to use to update the `current_state_events` table.
133110 hs.config.worker.writers.events == hs.get_instance_name()
134111 ), "Can only instantiate EventsStore on master"
135112
136 @_retry_on_integrity_error
137113 @defer.inlineCallbacks
138114 def _persist_events_and_state_updates(
139115 self,
142118 state_delta_for_room: Dict[str, DeltaState],
143119 new_forward_extremeties: Dict[str, List[str]],
144120 backfilled: bool = False,
145 delete_existing: bool = False,
146121 ):
147122 """Persist a set of events alongside updates to the current state and
148123 forward extremities tables.
156131 new_forward_extremities: Map from room_id to list of event IDs
157132 that are the new forward extremities of the room.
158133 backfilled
159 delete_existing
160134
161135 Returns:
162136 Deferred: resolves when the events have been persisted
196170 self._persist_events_txn,
197171 events_and_contexts=events_and_contexts,
198172 backfilled=backfilled,
199 delete_existing=delete_existing,
200173 state_delta_for_room=state_delta_for_room,
201174 new_forward_extremeties=new_forward_extremeties,
202175 )
261234 )
262235
263236 txn.execute(sql + clause, args)
264 results.extend(r[0] for r in txn if not json.loads(r[1]).get("soft_failed"))
237 results.extend(r[0] for r in txn if not db_to_json(r[1]).get("soft_failed"))
265238
266239 for chunk in batch_iter(event_ids, 100):
267240 yield self.db.runInteraction(
322295 if prev_event_id in existing_prevs:
323296 continue
324297
325 soft_failed = json.loads(metadata).get("soft_failed")
298 soft_failed = db_to_json(metadata).get("soft_failed")
326299 if soft_failed or rejected:
327300 to_recursively_check.append(prev_event_id)
328301 existing_prevs.add(prev_event_id)
340313 txn: LoggingTransaction,
341314 events_and_contexts: List[Tuple[EventBase, EventContext]],
342315 backfilled: bool,
343 delete_existing: bool = False,
344316 state_delta_for_room: Dict[str, DeltaState] = {},
345317 new_forward_extremeties: Dict[str, List[str]] = {},
346318 ):
391363
392364 # From this point onwards the events are only events that we haven't
393365 # seen before.
394
395 if delete_existing:
396 # For paranoia reasons, we go and delete all the existing entries
397 # for these events so we can reinsert them.
398 # This gets around any problems with some tables already having
399 # entries.
400 self._delete_existing_rows_txn(txn, events_and_contexts=events_and_contexts)
401366
402367 self._store_event_txn(txn, events_and_contexts=events_and_contexts)
403368
616581 txn.execute(sql, (room_id, EventTypes.Create, ""))
617582 row = txn.fetchone()
618583 if row:
619 event_json = json.loads(row[0])
584 event_json = db_to_json(row[0])
620585 content = event_json.get("content", {})
621586 creator = content.get("creator")
622587 room_version_id = content.get("room_version", RoomVersions.V1.identifier)
795760 self._update_backward_extremeties(txn, [event])
796761
797762 return [ec for ec in events_and_contexts if ec[0] not in to_remove]
798
799 @classmethod
800 def _delete_existing_rows_txn(cls, txn, events_and_contexts):
801 if not events_and_contexts:
802 # nothing to do here
803 return
804
805 logger.info("Deleting existing")
806
807 for table in (
808 "events",
809 "event_auth",
810 "event_json",
811 "event_edges",
812 "event_forward_extremities",
813 "event_reference_hashes",
814 "event_search",
815 "event_to_state_groups",
816 "state_events",
817 "rejections",
818 "redactions",
819 "room_memberships",
820 ):
821 txn.executemany(
822 "DELETE FROM %s WHERE event_id = ?" % (table,),
823 [(ev.event_id,) for ev, _ in events_and_contexts],
824 )
825
826 for table in ("event_push_actions",):
827 txn.executemany(
828 "DELETE FROM %s WHERE room_id = ? AND event_id = ?" % (table,),
829 [(ev.room_id, ev.event_id) for ev, _ in events_and_contexts],
830 )
831763
832764 def _store_event_txn(self, txn, events_and_contexts):
833765 """Insert new events into the event and event_json tables
1414
1515 import logging
1616
17 from canonicaljson import json
18
1917 from twisted.internet import defer
2018
2119 from synapse.api.constants import EventContentFields
22 from synapse.storage._base import SQLBaseStore, make_in_list_sql_clause
20 from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
2321 from synapse.storage.database import Database
2422
2523 logger = logging.getLogger(__name__)
124122 for row in rows:
125123 try:
126124 event_id = row[1]
127 event_json = json.loads(row[2])
125 event_json = db_to_json(row[2])
128126 sender = event_json["sender"]
129127 content = event_json["content"]
130128
207205
208206 for row in ev_rows:
209207 event_id = row["event_id"]
210 event_json = json.loads(row["json"])
208 event_json = db_to_json(row["json"])
211209 try:
212210 origin_server_ts = event_json["origin_server_ts"]
213211 except (KeyError, AttributeError):
316314
317315 soft_failed = False
318316 if metadata:
319 soft_failed = json.loads(metadata).get("soft_failed")
317 soft_failed = db_to_json(metadata).get("soft_failed")
320318
321319 if soft_failed or rejected:
322320 soft_failed_events_to_lookup.add(event_id)
357355
358356 graph[event_id] = {prev_event_id}
359357
360 soft_failed = json.loads(metadata).get("soft_failed")
358 soft_failed = db_to_json(metadata).get("soft_failed")
361359 if soft_failed or rejected:
362360 soft_failed_events_to_lookup.add(event_id)
363361 else:
542540 last_row_event_id = ""
543541 for (event_id, event_json_raw) in results:
544542 try:
545 event_json = json.loads(event_json_raw)
543 event_json = db_to_json(event_json_raw)
546544
547545 self.db.simple_insert_many_txn(
548546 txn=txn,
2020 from collections import namedtuple
2121 from typing import List, Optional, Tuple
2222
23 from canonicaljson import json
2423 from constantly import NamedConstant, Names
2524
2625 from twisted.internet import defer
3938 from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
4039 from synapse.replication.tcp.streams import BackfillStream
4140 from synapse.replication.tcp.streams.events import EventsStream
42 from synapse.storage._base import SQLBaseStore, make_in_list_sql_clause
41 from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
4342 from synapse.storage.database import Database
4443 from synapse.storage.util.id_generators import StreamIdGenerator
4544 from synapse.types import get_domain_from_id
610609 if not allow_rejected and rejected_reason:
611610 continue
612611
613 d = json.loads(row["json"])
614 internal_metadata = json.loads(row["internal_metadata"])
612 d = db_to_json(row["json"])
613 internal_metadata = db_to_json(row["internal_metadata"])
615614
616615 format_version = row["format_version"]
617616 if format_version is None:
639638 else:
640639 room_version = KNOWN_ROOM_VERSIONS.get(room_version_id)
641640 if not room_version:
642 logger.error(
641 logger.warning(
643642 "Event %s in room %s has unknown room version %s",
644643 event_id,
645644 d["room_id"],
2020 from twisted.internet import defer
2121
2222 from synapse.api.errors import SynapseError
23 from synapse.storage._base import SQLBaseStore
23 from synapse.storage._base import SQLBaseStore, db_to_json
2424
2525 # The category ID for the "default" category. We don't store as null in the
2626 # database to avoid the fun of null != null
196196 categories = {
197197 row[0]: {
198198 "is_public": row[1],
199 "profile": json.loads(row[2]),
199 "profile": db_to_json(row[2]),
200200 "order": row[3],
201201 }
202202 for row in txn
220220 return {
221221 row["category_id"]: {
222222 "is_public": row["is_public"],
223 "profile": json.loads(row["profile"]),
223 "profile": db_to_json(row["profile"]),
224224 }
225225 for row in rows
226226 }
234234 desc="get_group_category",
235235 )
236236
237 category["profile"] = json.loads(category["profile"])
237 category["profile"] = db_to_json(category["profile"])
238238
239239 return category
240240
250250 return {
251251 row["role_id"]: {
252252 "is_public": row["is_public"],
253 "profile": json.loads(row["profile"]),
253 "profile": db_to_json(row["profile"]),
254254 }
255255 for row in rows
256256 }
264264 desc="get_group_role",
265265 )
266266
267 role["profile"] = json.loads(role["profile"])
267 role["profile"] = db_to_json(role["profile"])
268268
269269 return role
270270
332332 roles = {
333333 row[0]: {
334334 "is_public": row[1],
335 "profile": json.loads(row[2]),
335 "profile": db_to_json(row[2]),
336336 "order": row[3],
337337 }
338338 for row in txn
461461
462462 now = int(self._clock.time_msec())
463463 if row and now < row["valid_until_ms"]:
464 return json.loads(row["attestation_json"])
464 return db_to_json(row["attestation_json"])
465465
466466 return None
467467
488488 "group_id": row[0],
489489 "type": row[1],
490490 "membership": row[2],
491 "content": json.loads(row[3]),
491 "content": db_to_json(row[3]),
492492 }
493493 for row in txn
494494 ]
518518 "group_id": group_id,
519519 "membership": membership,
520520 "type": gtype,
521 "content": json.loads(content_json),
521 "content": db_to_json(content_json),
522522 }
523523 for group_id, membership, gtype, content_json in txn
524524 ]
566566 """
567567 txn.execute(sql, (last_id, current_id, limit))
568568 updates = [
569 (stream_id, (group_id, user_id, gtype, json.loads(content_json)))
569 (stream_id, (group_id, user_id, gtype, db_to_json(content_json)))
570570 for stream_id, group_id, user_id, gtype, content_json in txn
571571 ]
572572
2323
2424 from synapse.push.baserules import list_with_base_rules
2525 from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
26 from synapse.storage._base import SQLBaseStore
26 from synapse.storage._base import SQLBaseStore, db_to_json
2727 from synapse.storage.data_stores.main.appservice import ApplicationServiceWorkerStore
2828 from synapse.storage.data_stores.main.events_worker import EventsWorkerStore
2929 from synapse.storage.data_stores.main.pusher import PusherWorkerStore
4242 ruleslist = []
4343 for rawrule in rawrules:
4444 rule = dict(rawrule)
45 rule["conditions"] = json.loads(rawrule["conditions"])
46 rule["actions"] = json.loads(rawrule["actions"])
45 rule["conditions"] = db_to_json(rawrule["conditions"])
46 rule["actions"] = db_to_json(rawrule["actions"])
4747 rule["default"] = False
4848 ruleslist.append(rule)
4949
258258 # To do this we set the state_group to a new object as object() != object()
259259 state_group = object()
260260
261 current_state_ids = yield context.get_current_state_ids()
261 current_state_ids = yield defer.ensureDeferred(context.get_current_state_ids())
262262 result = yield self._bulk_get_push_rules_for_room(
263263 event.room_id, state_group, current_state_ids, event=event
264264 )
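
Several hunks in this diff (this one, the matching change in the room-membership store further down, and many of the test updates near the end) wrap calls such as `context.get_current_state_ids()` in `defer.ensureDeferred(...)`. The reason is that those methods now return coroutines, while `@defer.inlineCallbacks`-style generators can only usefully yield Deferreds. A minimal, standalone illustration of the pattern, using plain Twisted and a stand-in async function:

```python
from twisted.internet import defer


async def get_current_state_ids():
    # Stand-in for a Synapse method that has been converted to async/await.
    return {"m.room.member": "$event:id"}


@defer.inlineCallbacks
def caller():
    # Yielding the coroutine directly would hand the bare coroutine object
    # back unawaited; wrapping it in a Deferred makes inlineCallbacks wait
    # for the result as before.
    state_ids = yield defer.ensureDeferred(get_current_state_ids())
    return state_ids


caller().addCallback(print)  # {'m.room.member': '$event:id'}
```
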
1616 import logging
1717 from typing import Iterable, Iterator, List, Tuple
1818
19 from canonicaljson import encode_canonical_json, json
19 from canonicaljson import encode_canonical_json
2020
2121 from twisted.internet import defer
2222
23 from synapse.storage._base import SQLBaseStore
23 from synapse.storage._base import SQLBaseStore, db_to_json
2424 from synapse.util.caches.descriptors import cachedInlineCallbacks, cachedList
2525
2626 logger = logging.getLogger(__name__)
3535 for r in rows:
3636 dataJson = r["data"]
3737 try:
38 r["data"] = json.loads(dataJson)
38 r["data"] = db_to_json(dataJson)
3939 except Exception as e:
4040 logger.warning(
4141 "Invalid JSON in data for pusher %d: %s, %s",
2121
2222 from twisted.internet import defer
2323
24 from synapse.storage._base import SQLBaseStore, make_in_list_sql_clause
24 from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
2525 from synapse.storage.database import Database
2626 from synapse.storage.util.id_generators import StreamIdGenerator
2727 from synapse.util.async_helpers import ObservableDeferred
202202 for row in rows:
203203 content.setdefault(row["event_id"], {}).setdefault(row["receipt_type"], {})[
204204 row["user_id"]
205 ] = json.loads(row["data"])
205 ] = db_to_json(row["data"])
206206
207207 return [{"type": "m.receipt", "room_id": room_id, "content": content}]
208208
259259 event_entry = room_event["content"].setdefault(row["event_id"], {})
260260 receipt_type = event_entry.setdefault(row["receipt_type"], {})
261261
262 receipt_type[row["user_id"]] = json.loads(row["data"])
262 receipt_type[row["user_id"]] = db_to_json(row["data"])
263263
264264 results = {
265265 room_id: [results[room_id]] if room_id in results else []
328328 """
329329 txn.execute(sql, (last_id, current_id, limit))
330330
331 updates = [(r[0], r[1:5] + (json.loads(r[5]),)) for r in txn]
331 updates = [(r[0], r[1:5] + (db_to_json(r[5]),)) for r in txn]
332332
333333 limited = False
334334 upper_bound = current_id
2626 from synapse.metrics.background_process_metrics import run_as_background_process
2727 from synapse.storage._base import SQLBaseStore
2828 from synapse.storage.database import Database
29 from synapse.storage.types import Cursor
30 from synapse.storage.util.sequence import build_sequence_generator
2931 from synapse.types import UserID
3032 from synapse.util.caches.descriptors import cached, cachedInlineCallbacks
3133
4042
4143 self.config = hs.config
4244 self.clock = hs.get_clock()
45
46 self._user_id_seq = build_sequence_generator(
47 database.engine, find_max_generated_user_id_localpart, "user_id_seq",
48 )
4349
4450 @cached()
4551 def get_user_by_id(self, user_id):
480486 ret = yield self.db.runInteraction("count_real_users", _count_users)
481487 return ret
482488
483 @defer.inlineCallbacks
484 def find_next_generated_user_id_localpart(self):
485 """
486 Gets the localpart of the next generated user ID.
487
488 Generated user IDs are integers, so we find the largest integer user ID
489 already taken and return that plus one.
490 """
491
492 def _find_next_generated_user_id(txn):
493 # We bound between '@0' and '@a' to avoid pulling the entire table
494 # out.
495 txn.execute("SELECT name FROM users WHERE '@0' <= name AND name < '@a'")
496
497 regex = re.compile(r"^@(\d+):")
498
499 max_found = 0
500
501 for (user_id,) in txn:
502 match = regex.search(user_id)
503 if match:
504 max_found = max(int(match.group(1)), max_found)
505
506 return max_found + 1
507
508 return (
509 (
510 yield self.db.runInteraction(
511 "find_next_generated_user_id", _find_next_generated_user_id
512 )
513 )
514 )
489 async def generate_user_id(self) -> str:
490 """Generate a suitable localpart for a guest user
491
492 Returns: a (hopefully) free localpart
493 """
494 next_id = await self.db.runInteraction(
495 "generate_user_id", self._user_id_seq.get_next_id_txn
496 )
497
498 return str(next_id)
515499
516500 async def get_user_id_by_threepid(self, medium: str, address: str) -> Optional[str]:
517501 """Returns user id from threepid
15721556 keyvalues={"user_id": user_id},
15731557 values={"expiration_ts_ms": expiration_ts, "email_sent": False},
15741558 )
1559
1560
1561 def find_max_generated_user_id_localpart(cur: Cursor) -> int:
1562 """
1563 Gets the localpart of the max current generated user ID.
1564
1565 Generated user IDs are integers, so we find the largest integer user ID
1566 already taken and return that.
1567 """
1568
1569 # We bound between '@0' and '@a' to avoid pulling the entire table
1570 # out.
1571 cur.execute("SELECT name FROM users WHERE '@0' <= name AND name < '@a'")
1572
1573 regex = re.compile(r"^@(\d+):")
1574
1575 max_found = 0
1576
1577 for (user_id,) in cur:
1578 match = regex.search(user_id)
1579 if match:
1580 max_found = max(int(match.group(1)), max_found)
1581 return max_found
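
For a concrete sense of what `find_max_generated_user_id_localpart` scans for, here is a small self-contained demonstration against an in-memory SQLite table. The table is reduced to a single `name` column and the function is re-declared locally so the snippet runs on its own; it mirrors the code added above rather than importing it.

```python
import re
import sqlite3


def find_max_generated_user_id_localpart(cur) -> int:
    # Bound between '@0' and '@a' so that only numeric localparts are scanned.
    cur.execute("SELECT name FROM users WHERE '@0' <= name AND name < '@a'")
    regex = re.compile(r"^@(\d+):")
    max_found = 0
    for (user_id,) in cur:
        match = regex.search(user_id)
        if match:
            max_found = max(int(match.group(1)), max_found)
    return max_found


conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT)")
cur.executemany(
    "INSERT INTO users VALUES (?)",
    [("@1:example.org",), ("@42:example.org",), ("@alice:example.org",)],
)
print(find_max_generated_user_id_localpart(cur))  # -> 42; the sequence starts at 43
```
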
2727 from synapse.api.constants import EventTypes
2828 from synapse.api.errors import StoreError
2929 from synapse.api.room_versions import RoomVersion, RoomVersions
30 from synapse.storage._base import SQLBaseStore
30 from synapse.storage._base import SQLBaseStore, db_to_json
3131 from synapse.storage.data_stores.main.search import SearchStore
3232 from synapse.storage.database import Database, LoggingTransaction
3333 from synapse.types import ThirdPartyInstanceID
117117 WHERE room_id = ?
118118 """
119119 txn.execute(sql, [room_id])
120 res = self.db.cursor_to_dict(txn)[0]
120 # If the query returns no rows, return None instead of raising an error
121 try:
122 res = self.db.cursor_to_dict(txn)[0]
123 except IndexError:
124 return None
125
121126 res["federatable"] = bool(res["federatable"])
122127 res["public"] = bool(res["public"])
123128 return res
664669 next_token = None
665670 for stream_ordering, content_json in txn:
666671 next_token = stream_ordering
667 event_json = json.loads(content_json)
672 event_json = db_to_json(content_json)
668673 content = event_json["content"]
669674 content_url = content.get("url")
670675 thumbnail_url = content.get("info", {}).get("thumbnail_url")
909914 if not row["json"]:
910915 retention_policy = {}
911916 else:
912 ev = json.loads(row["json"])
913 retention_policy = json.dumps(ev["content"])
917 ev = db_to_json(row["json"])
918 retention_policy = ev["content"]
914919
915920 self.db.simple_insert_txn(
916921 txn=txn,
965970
966971 updates = []
967972 for room_id, event_json in txn:
968 event_dict = json.loads(event_json)
973 event_dict = db_to_json(event_json)
969974 room_version_id = event_dict.get("content", {}).get(
970975 "room_version", RoomVersions.V1.identifier
971976 )
1616 import logging
1717 from typing import Iterable, List, Set
1818
19 from canonicaljson import json
20
2119 from twisted.internet import defer
2220
2321 from synapse.api.constants import EventTypes, Membership
2624 from synapse.storage._base import (
2725 LoggingTransaction,
2826 SQLBaseStore,
27 db_to_json,
2928 make_in_list_sql_clause,
3029 )
3130 from synapse.storage.data_stores.main.events_worker import EventsWorkerStore
497496 # To do this we set the state_group to a new object as object() != object()
498497 state_group = object()
499498
500 current_state_ids = yield context.get_current_state_ids()
499 current_state_ids = yield defer.ensureDeferred(context.get_current_state_ids())
501500 result = yield self._get_joined_users_from_context(
502501 event.room_id, state_group, current_state_ids, event=event, context=context
503502 )
937936 event_id = row["event_id"]
938937 room_id = row["room_id"]
939938 try:
940 event_json = json.loads(row["json"])
939 event_json = db_to_json(row["json"])
941940 content = event_json["content"]
942941 except Exception:
943942 continue
0 /* Copyright 2020 The Matrix.org Foundation C.I.C
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 -- We need to store the stream positions by instance in a sharded config world.
16 --
17 -- We default to master as we want the column to be NOT NULL and we correctly
18 -- reset the instance name to match the config each time we start up.
19 ALTER TABLE federation_stream_position ADD COLUMN instance_name TEXT NOT NULL DEFAULT 'master';
20
21 CREATE UNIQUE INDEX federation_stream_position_instance ON federation_stream_position(type, instance_name);
0 # Copyright 2020 The Matrix.org Foundation C.I.C.
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 """
15 Adds a postgres SEQUENCE for generating guest user IDs.
16 """
17
18 from synapse.storage.data_stores.main.registration import (
19 find_max_generated_user_id_localpart,
20 )
21 from synapse.storage.engines import PostgresEngine
22
23
24 def run_create(cur, database_engine, *args, **kwargs):
25 if not isinstance(database_engine, PostgresEngine):
26 return
27
28 next_id = find_max_generated_user_id_localpart(cur) + 1
29 cur.execute("CREATE SEQUENCE user_id_seq START WITH %s", (next_id,))
30
31
32 def run_upgrade(*args, **kwargs):
33 pass
1616 import re
1717 from collections import namedtuple
1818
19 from canonicaljson import json
20
2119 from twisted.internet import defer
2220
2321 from synapse.api.errors import SynapseError
24 from synapse.storage._base import SQLBaseStore, make_in_list_sql_clause
22 from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
2523 from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour
2624 from synapse.storage.database import Database
2725 from synapse.storage.engines import PostgresEngine, Sqlite3Engine
156154 stream_ordering = row["stream_ordering"]
157155 origin_server_ts = row["origin_server_ts"]
158156 try:
159 event_json = json.loads(row["json"])
157 event_json = db_to_json(row["json"])
160158 content = event_json["content"]
161159 except Exception:
162160 continue
352352 last_room_id = progress.get("last_room_id", "")
353353
354354 def _background_remove_left_rooms_txn(txn):
355 # get a batch of room ids to consider
355356 sql = """
356357 SELECT DISTINCT room_id FROM current_state_events
357358 WHERE room_id > ? ORDER BY room_id LIMIT ?
362363 if not room_ids:
363364 return True, set()
364365
366 ###########################################################################
367 #
368 # exclude rooms where we have active members
369
365370 sql = """
366371 SELECT room_id
367 FROM current_state_events
372 FROM local_current_membership
368373 WHERE
369374 room_id > ? AND room_id <= ?
370 AND type = 'm.room.member'
371375 AND membership = 'join'
372 AND state_key LIKE ?
373376 GROUP BY room_id
374377 """
375378
376 txn.execute(sql, (last_room_id, room_ids[-1], "%:" + self.server_name))
377
379 txn.execute(sql, (last_room_id, room_ids[-1]))
378380 joined_room_ids = {row[0] for row in txn}
379
380 left_rooms = set(room_ids) - joined_room_ids
381
382 logger.info("Deleting current state left rooms: %r", left_rooms)
381 to_delete = set(room_ids) - joined_room_ids
382
383 ###########################################################################
384 #
385 # exclude rooms which we are in the process of constructing; these otherwise
386 # qualify as "rooms with no local users", and would have their
387 # forward extremities cleaned up.
388
389 # the following query will return a list of rooms which have forward
390 # extremities that are *not* also the create event in the room - ie
391 # those that are not being created currently.
392
393 sql = """
394 SELECT DISTINCT efe.room_id
395 FROM event_forward_extremities efe
396 LEFT JOIN current_state_events cse ON
397 cse.event_id = efe.event_id
398 AND cse.type = 'm.room.create'
399 AND cse.state_key = ''
400 WHERE
401 cse.event_id IS NULL
402 AND efe.room_id > ? AND efe.room_id <= ?
403 """
404
405 txn.execute(sql, (last_room_id, room_ids[-1]))
406
407 # build a set of those rooms within `to_delete` that do not appear in
408 # the above, leaving us with the rooms in `to_delete` that *are* being
409 # created.
410 creating_rooms = to_delete.difference(row[0] for row in txn)
411 logger.info("skipping rooms which are being created: %s", creating_rooms)
412
413 # now remove the rooms being created from the list of those to delete.
414 #
415 # (we could have just taken the intersection of `to_delete` with the result
416 # of the sql query, but it's useful to be able to log `creating_rooms`; and
417 # having done so, it's quicker to remove the (few) creating rooms from
418 # `to_delete` than it is to form the intersection with the (larger) list of
419 # not-creating-rooms)
420
421 to_delete -= creating_rooms
422
423 ###########################################################################
424 #
425 # now clear the state for the rooms
426
427 logger.info("Deleting current state left rooms: %r", to_delete)
383428
384429 # First we get all users that we still think were joined to the
385430 # room. This is so that we can mark those device lists as
390435 txn,
391436 table="current_state_events",
392437 column="room_id",
393 iterable=left_rooms,
438 iterable=to_delete,
394439 keyvalues={"type": EventTypes.Member, "membership": Membership.JOIN},
395440 retcols=("state_key",),
396441 )
402447 txn,
403448 table="current_state_events",
404449 column="room_id",
405 iterable=left_rooms,
450 iterable=to_delete,
406451 keyvalues={},
407452 )
408453
410455 txn,
411456 table="event_forward_extremities",
412457 column="room_id",
413 iterable=left_rooms,
458 iterable=to_delete,
414459 keyvalues={},
415460 )
416461
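
The comments above describe excluding rooms that are still mid-creation from the purge, so their forward extremities are not cleaned away. As a quick worked example of the set arithmetic involved (room IDs invented for illustration):

```python
# Hypothetical batch of candidate rooms that have no joined local members:
to_delete = {"!a:hs", "!b:hs", "!c:hs"}

# Rooms whose forward extremities are NOT just the create event, as returned
# by the SQL above (i.e. rooms that already have history and are safe to purge):
rooms_with_history = {"!a:hs", "!c:hs"}

# Everything else in the batch is still being created and must be skipped.
creating_rooms = to_delete.difference(rooms_with_history)
to_delete -= creating_rooms

print(sorted(to_delete))  # ['!a:hs', '!c:hs']
```
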
4444 from synapse.logging.context import make_deferred_yieldable, run_in_background
4545 from synapse.storage._base import SQLBaseStore
4646 from synapse.storage.data_stores.main.events_worker import EventsWorkerStore
47 from synapse.storage.database import Database
47 from synapse.storage.database import Database, make_in_list_sql_clause
4848 from synapse.storage.engines import PostgresEngine
4949 from synapse.types import RoomStreamToken
5050 from synapse.util.caches.stream_change_cache import StreamChangeCache
251251
252252 def __init__(self, database: Database, db_conn, hs):
253253 super(StreamWorkerStore, self).__init__(database, db_conn, hs)
254
255 self._instance_name = hs.get_instance_name()
256 self._send_federation = hs.should_send_federation()
257 self._federation_shard_config = hs.config.worker.federation_shard_config
258
259 # If we're a process that sends federation we may need to reset the
260 # `federation_stream_position` table to match the current sharding
261 # config. We don't do this now as otherwise two processes could conflict
262 # during startup which would cause one to die.
263 self._need_to_reset_federation_stream_positions = self._send_federation
254264
255265 events_max = self.get_room_max_stream_ordering()
256266 event_cache_prefill, min_event_val = self.db.get_cache_dict(
792802
793803 return upper_bound, events
794804
795 def get_federation_out_pos(self, typ):
796 return self.db.simple_select_one_onecol(
805 async def get_federation_out_pos(self, typ: str) -> int:
806 if self._need_to_reset_federation_stream_positions:
807 await self.db.runInteraction(
808 "_reset_federation_positions_txn", self._reset_federation_positions_txn
809 )
810 self._need_to_reset_federation_stream_positions = False
811
812 return await self.db.simple_select_one_onecol(
797813 table="federation_stream_position",
798814 retcol="stream_id",
799 keyvalues={"type": typ},
815 keyvalues={"type": typ, "instance_name": self._instance_name},
800816 desc="get_federation_out_pos",
801817 )
802818
803 def update_federation_out_pos(self, typ, stream_id):
804 return self.db.simple_update_one(
819 async def update_federation_out_pos(self, typ, stream_id):
820 if self._need_to_reset_federation_stream_positions:
821 await self.db.runInteraction(
822 "_reset_federation_positions_txn", self._reset_federation_positions_txn
823 )
824 self._need_to_reset_federation_stream_positions = False
825
826 return await self.db.simple_update_one(
805827 table="federation_stream_position",
806 keyvalues={"type": typ},
828 keyvalues={"type": typ, "instance_name": self._instance_name},
807829 updatevalues={"stream_id": stream_id},
808830 desc="update_federation_out_pos",
809831 )
832
833 def _reset_federation_positions_txn(self, txn):
834 """Fiddles with the `federation_stream_position` table to make it match
835 the configured federation sender instances during start up.
836 """
837
838 # The federation sender instances may have changed, so we need to
839 # massage the `federation_stream_position` table to have a row per type
840 # per instance sending federation. If there is a mismatch we update the
841 # table with the correct rows using the *minimum* stream ID seen. This
842 # may result in resending of events/EDUs to remote servers, but that is
843 # preferable to dropping them.
844
845 if not self._send_federation:
846 return
847
848 # Pull out the configured instances. If we don't have a shard config then
849 # we assume that we're the only instance sending.
850 configured_instances = self._federation_shard_config.instances
851 if not configured_instances:
852 configured_instances = [self._instance_name]
853 elif self._instance_name not in configured_instances:
854 return
855
856 instances_in_table = self.db.simple_select_onecol_txn(
857 txn,
858 table="federation_stream_position",
859 keyvalues={},
860 retcol="instance_name",
861 )
862
863 if set(instances_in_table) == set(configured_instances):
864 # Nothing to do
865 return
866
867 sql = """
868 SELECT type, MIN(stream_id) FROM federation_stream_position
869 GROUP BY type
870 """
871 txn.execute(sql)
872 min_positions = dict(txn) # Map from type -> min position
873
874 # Ensure we do actually have some values here
875 assert set(min_positions) == {"federation", "events"}
876
877 sql = """
878 DELETE FROM federation_stream_position
879 WHERE NOT (%s)
880 """
881 clause, args = make_in_list_sql_clause(
882 txn.database_engine, "instance_name", configured_instances
883 )
884 txn.execute(sql % (clause,), args)
885
886 for typ, stream_id in min_positions.items():
887 self.db.simple_upsert_txn(
888 txn,
889 table="federation_stream_position",
890 keyvalues={"type": typ, "instance_name": self._instance_name},
891 values={"stream_id": stream_id},
892 )
810893
811894 def has_room_changed_since(self, room_id, stream_id):
812895 return self._events_stream_cache.has_entity_changed(room_id, stream_id)
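
When the set of configured federation sender instances changes, `_reset_federation_positions_txn` collapses the per-instance rows down to the minimum `stream_id` per type, preferring duplicate delivery over dropped events or EDUs. A toy illustration of that collapse (instance names and stream IDs invented for the example):

```python
# Hypothetical federation_stream_position rows before a resharding:
rows = [
    ("federation", "master", 120),
    ("federation", "sender1", 95),
    ("events", "master", 300),
    ("events", "sender1", 280),
]

# Keep the minimum stream_id per type, as the reset transaction does, so that
# nothing is skipped even if some traffic ends up being re-sent.
min_positions = {}
for typ, _instance, stream_id in rows:
    min_positions[typ] = min(min_positions.get(typ, stream_id), stream_id)

print(min_positions)  # {'federation': 95, 'events': 280}
```
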
2020
2121 from twisted.internet import defer
2222
23 from synapse.storage._base import db_to_json
2324 from synapse.storage.data_stores.main.account_data import AccountDataWorkerStore
2425 from synapse.util.caches.descriptors import cached
2526
4849 tags_by_room = {}
4950 for row in rows:
5051 room_tags = tags_by_room.setdefault(row["room_id"], {})
51 room_tags[row["tag"]] = json.loads(row["content"])
52 room_tags[row["tag"]] = db_to_json(row["content"])
5253 return tags_by_room
5354
5455 return deferred
179180 retcols=("tag", "content"),
180181 desc="get_tags_for_room",
181182 ).addCallback(
182 lambda rows: {row["tag"]: json.loads(row["content"]) for row in rows}
183 lambda rows: {row["tag"]: db_to_json(row["content"]) for row in rows}
183184 )
184185
185186
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 import json
1514 from typing import Any, Dict, Optional, Union
1615
1716 import attr
17 from canonicaljson import json
1818
1919 from synapse.api.errors import StoreError
20 from synapse.storage._base import SQLBaseStore
20 from synapse.storage._base import SQLBaseStore, db_to_json
2121 from synapse.types import JsonDict
2222 from synapse.util import stringutils as stringutils
2323
117117 desc="get_ui_auth_session",
118118 )
119119
120 result["clientdict"] = json.loads(result["clientdict"])
120 result["clientdict"] = db_to_json(result["clientdict"])
121121
122122 return UIAuthSessionData(session_id, **result)
123123
167167 retcols=("stage_type", "result"),
168168 desc="get_completed_ui_auth_stages",
169169 ):
170 results[row["stage_type"]] = json.loads(row["result"])
170 results[row["stage_type"]] = db_to_json(row["result"])
171171
172172 return results
173173
223223 )
224224
225225 # Update it and add it back to the database.
226 serverdict = json.loads(result["serverdict"])
226 serverdict = db_to_json(result["serverdict"])
227227 serverdict[key] = value
228228
229229 self.db.simple_update_one_txn(
253253 desc="get_ui_auth_session_data",
254254 )
255255
256 serverdict = json.loads(result["serverdict"])
256 serverdict = db_to_json(result["serverdict"])
257257
258258 return serverdict.get(key, default)
259259
197197 room_id
198198 )
199199
200 users_with_profile = yield state.get_current_users_in_room(room_id)
200 users_with_profile = yield defer.ensureDeferred(
201 state.get_current_users_in_room(room_id)
202 )
201203 user_ids = set(users_with_profile)
202204
203205 # Update each user in the user directory.
6969
7070
7171 class UserErasureStore(UserErasureWorkerStore):
72 def mark_user_erased(self, user_id):
72 def mark_user_erased(self, user_id: str) -> None:
7373 """Indicate that user_id wishes their message history to be erased.
7474
7575 Args:
76 user_id (str): full user_id to be erased
76 user_id: full user_id to be erased
7777 """
7878
7979 def f(txn):
8888 self._invalidate_cache_and_stream(txn, self.is_user_erased, (user_id,))
8989
9090 return self.db.runInteraction("mark_user_erased", f)
91
92 def mark_user_not_erased(self, user_id: str) -> None:
93 """Indicate that user_id is no longer erased.
94
95 Args:
96 user_id: full user_id to be un-erased
97 """
98
99 def f(txn):
100 # first check if they are already in the list
101 txn.execute("SELECT 1 FROM erased_users WHERE user_id = ?", (user_id,))
102 if not txn.fetchone():
103 return
104
105 # They are there, delete them.
106 self.simple_delete_one_txn(
107 txn, "erased_users", keyvalues={"user_id": user_id}
108 )
109
110 self._invalidate_cache_and_stream(txn, self.is_user_erased, (user_id,))
111
112 return self.db.runInteraction("mark_user_not_erased", f)
2323 from synapse.storage.data_stores.state.bg_updates import StateBackgroundUpdateStore
2424 from synapse.storage.database import Database
2525 from synapse.storage.state import StateFilter
26 from synapse.storage.types import Cursor
27 from synapse.storage.util.sequence import build_sequence_generator
2628 from synapse.types import StateMap
2729 from synapse.util.caches.descriptors import cached
2830 from synapse.util.caches.dictionary_cache import DictionaryCache
9193 "*stateGroupMembersCache*", 500000,
9294 )
9395
96 def get_max_state_group_txn(txn: Cursor):
97 txn.execute("SELECT COALESCE(max(id), 0) FROM state_groups")
98 return txn.fetchone()[0]
99
100 self._state_group_seq_gen = build_sequence_generator(
101 self.database_engine, get_max_state_group_txn, "state_group_id_seq"
102 )
103
94104 @cached(max_entries=10000, iterable=True)
95105 def get_state_group_delta(self, state_group):
96106 """Given a state group try to return a previous group and a delta between
385395 # AFAIK, this can never happen
386396 raise Exception("current_state_ids cannot be None")
387397
388 state_group = self.database_engine.get_next_state_group_id(txn)
398 state_group = self._state_group_seq_gen.get_next_id_txn(txn)
389399
390400 self.db.simple_insert_txn(
391401 txn,
9090 def lock_table(self, txn, table: str) -> None:
9191 ...
9292
93 @abc.abstractmethod
94 def get_next_state_group_id(self, txn) -> int:
95 """Returns an int that can be used as a new state_group ID
96 """
97 ...
98
9993 @property
10094 @abc.abstractmethod
10195 def server_version(self) -> str:
153153 def lock_table(self, txn, table):
154154 txn.execute("LOCK TABLE %s in EXCLUSIVE MODE" % (table,))
155155
156 def get_next_state_group_id(self, txn):
157 """Returns an int that can be used as a new state_group ID
158 """
159 txn.execute("SELECT nextval('state_group_id_seq')")
160 return txn.fetchone()[0]
161
162156 @property
163157 def server_version(self):
164158 """Returns a string giving the server version. For example: '8.1.5'
9595 def lock_table(self, txn, table):
9696 return
9797
98 def get_next_state_group_id(self, txn):
99 """Returns an int that can be used as a new state_group ID
100 """
101 # We do application locking here since if we're using sqlite then
102 # we are a single process synapse.
103 with self._current_state_group_id_lock:
104 if self._current_state_group_id is None:
105 txn.execute("SELECT COALESCE(max(id), 0) FROM state_groups")
106 self._current_state_group_id = txn.fetchone()[0]
107
108 self._current_state_group_id += 1
109 return self._current_state_group_id
110
11198 @property
11299 def server_version(self):
113100 """Gets a string giving the server version. For example: '3.22.0'
2828 from synapse.events.snapshot import EventContext
2929 from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
3030 from synapse.metrics.background_process_metrics import run_as_background_process
31 from synapse.state import StateResolutionStore
3231 from synapse.storage.data_stores import DataStores
3332 from synapse.storage.data_stores.main.events import DeltaState
3433 from synapse.types import StateMap
647646 room_version = await self.main_store.get_room_version_id(room_id)
648647
649648 logger.debug("calling resolve_state_groups from preserve_events")
649
650 # Avoid a circular import.
651 from synapse.state import StateResolutionStore
652
650653 res = await self._state_resolution_handler.resolve_state_groups(
651654 room_id,
652655 room_version,
2020 from typing_extensions import Deque
2121
2222 from synapse.storage.database import Database, LoggingTransaction
23 from synapse.storage.util.sequence import PostgresSequenceGenerator
2324
2425
2526 class IdGenerator(object):
246247 ):
247248 self._db = db
248249 self._instance_name = instance_name
249 self._sequence_name = sequence_name
250250
251251 # We lock as some functions may be called from DB threads.
252252 self._lock = threading.Lock()
258258 # Set of local IDs that we're still processing. The current position
259259 # should be less than the minimum of this set (if not empty).
260260 self._unfinished_ids = set() # type: Set[int]
261
262 self._sequence_gen = PostgresSequenceGenerator(sequence_name)
261263
262264 def _load_current_ids(
263265 self, db_conn, table: str, instance_column: str, id_column: str
282284 return current_positions
283285
284286 def _load_next_id_txn(self, txn):
285 txn.execute("SELECT nextval(?)", (self._sequence_name,))
286 (next_id,) = txn.fetchone()
287 return next_id
287 return self._sequence_gen.get_next_id_txn(txn)
288288
289289 async def get_next(self):
290290 """
0 # -*- coding: utf-8 -*-
1 # Copyright 2020 The Matrix.org Foundation C.I.C.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import abc
15 import threading
16 from typing import Callable, Optional
17
18 from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine
19 from synapse.storage.types import Cursor
20
21
22 class SequenceGenerator(metaclass=abc.ABCMeta):
23 """A class which generates a unique sequence of integers"""
24
25 @abc.abstractmethod
26 def get_next_id_txn(self, txn: Cursor) -> int:
27 """Gets the next ID in the sequence"""
28 ...
29
30
31 class PostgresSequenceGenerator(SequenceGenerator):
32 """An implementation of SequenceGenerator which uses a postgres sequence"""
33
34 def __init__(self, sequence_name: str):
35 self._sequence_name = sequence_name
36
37 def get_next_id_txn(self, txn: Cursor) -> int:
38 txn.execute("SELECT nextval(?)", (self._sequence_name,))
39 return txn.fetchone()[0]
40
41
42 GetFirstCallbackType = Callable[[Cursor], int]
43
44
45 class LocalSequenceGenerator(SequenceGenerator):
46 """An implementation of SequenceGenerator which uses local locking
47
48 This only works reliably if there are no other worker processes generating IDs at
49 the same time.
50 """
51
52 def __init__(self, get_first_callback: GetFirstCallbackType):
53 """
54 Args:
55 get_first_callback: a callback which is called on the first call to
56 get_next_id_txn; should return the curreent maximum id
57 """
58 # the callback. this is cleared after it is called, so that it can be GCed.
59 self._callback = get_first_callback # type: Optional[GetFirstCallbackType]
60
61 # The current max value, or None if we haven't looked in the DB yet.
62 self._current_max_id = None # type: Optional[int]
63 self._lock = threading.Lock()
64
65 def get_next_id_txn(self, txn: Cursor) -> int:
66 # We do application locking here since if we're using sqlite then
67 # we are a single process synapse.
68 with self._lock:
69 if self._current_max_id is None:
70 assert self._callback is not None
71 self._current_max_id = self._callback(txn)
72 self._callback = None
73
74 self._current_max_id += 1
75 return self._current_max_id
76
77
78 def build_sequence_generator(
79 database_engine: BaseDatabaseEngine,
80 get_first_callback: GetFirstCallbackType,
81 sequence_name: str,
82 ) -> SequenceGenerator:
83 """Get the best impl of SequenceGenerator available
84
85 This uses PostgresSequenceGenerator on postgres, and a locally-locked impl on
86 sqlite.
87
88 Args:
89 database_engine: the database engine we are connected to
90 get_first_callback: a callback which gets the next sequence ID. Used if
91 we're on sqlite.
92 sequence_name: the name of a postgres sequence to use.
93 """
94 if isinstance(database_engine, PostgresEngine):
95 return PostgresSequenceGenerator(sequence_name)
96 else:
97 return LocalSequenceGenerator(get_first_callback)
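
To see the SQLite code path of the new module in action, here is a self-contained demo. `LocalSequenceGenerator` is re-declared from the diff above (rather than imported) so the snippet runs on its own, and the table and callback mirror the `state_groups` usage shown earlier.

```python
import sqlite3
import threading
from typing import Callable, Optional


class LocalSequenceGenerator:
    """Re-declaration of the locally-locked generator added above (demo only)."""

    def __init__(self, get_first_callback: Callable[[sqlite3.Cursor], int]):
        self._callback = get_first_callback  # type: Optional[Callable]
        self._current_max_id = None  # type: Optional[int]
        self._lock = threading.Lock()

    def get_next_id_txn(self, txn: sqlite3.Cursor) -> int:
        # Application-level locking is enough here because a SQLite-backed
        # Synapse is a single process.
        with self._lock:
            if self._current_max_id is None:
                assert self._callback is not None
                self._current_max_id = self._callback(txn)
                self._callback = None
            self._current_max_id += 1
            return self._current_max_id


conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE state_groups (id INTEGER)")
cur.execute("INSERT INTO state_groups VALUES (7)")


def get_max_state_group_txn(txn):
    txn.execute("SELECT COALESCE(max(id), 0) FROM state_groups")
    return txn.fetchone()[0]


seq = LocalSequenceGenerator(get_max_state_group_txn)
print(seq.get_next_id_txn(cur))  # 8
print(seq.get_next_id_txn(cur))  # 9
```

On Postgres, `build_sequence_generator` instead returns a `PostgresSequenceGenerator`, which defers the same job to a native `nextval()` call so that multiple worker processes can allocate IDs safely.
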
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
14 import inspect
1515 import logging
1616
1717 from twisted.internet import defer
18 from twisted.internet.defer import Deferred, fail, succeed
19 from twisted.python import failure
1820
1921 from synapse.logging.context import make_deferred_yieldable, run_in_background
2022 from synapse.metrics.background_process_metrics import run_as_background_process
7880 run_as_background_process(name, self.signals[name].fire, *args, **kwargs)
7981
8082
83 def maybeAwaitableDeferred(f, *args, **kw):
84 """
85 Invoke a function that may or may not return a Deferred or an Awaitable.
86
87 This is a modified version of twisted.internet.defer.maybeDeferred.
88 """
89 try:
90 result = f(*args, **kw)
91 except Exception:
92 return fail(failure.Failure(captureVars=Deferred.debug))
93
94 if isinstance(result, Deferred):
95 return result
96 # Handle the additional case of an awaitable being returned.
97 elif inspect.isawaitable(result):
98 return defer.ensureDeferred(result)
99 elif isinstance(result, failure.Failure):
100 return fail(result)
101 else:
102 return succeed(result)
103
104
81105 class Signal(object):
82106 """A Signal is a dispatch point that stores a list of callables as
83107 observers of it.
121145 ),
122146 )
123147
124 return defer.maybeDeferred(observer, *args, **kwargs).addErrback(eb)
148 return maybeAwaitableDeferred(observer, *args, **kwargs).addErrback(eb)
125149
126150 deferreds = [run_in_background(do, o) for o in self.observers]
127151
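
A short standalone demonstration of how `maybeAwaitableDeferred` lets the `Signal` dispatcher treat Deferred-returning and coroutine observers uniformly. The helper is re-declared from the diff above so the snippet runs on its own (with Twisted installed):

```python
import inspect

from twisted.internet import defer
from twisted.internet.defer import Deferred, fail, succeed
from twisted.python import failure


def maybeAwaitableDeferred(f, *args, **kw):
    # Same shape as the helper added above: handle plain values, Deferreds,
    # Failures and awaitables returned by an observer.
    try:
        result = f(*args, **kw)
    except Exception:
        return fail(failure.Failure(captureVars=Deferred.debug))

    if isinstance(result, Deferred):
        return result
    elif inspect.isawaitable(result):
        return defer.ensureDeferred(result)
    elif isinstance(result, failure.Failure):
        return fail(result)
    else:
        return succeed(result)


def sync_observer(x):
    return x + 1


async def async_observer(x):
    return x * 2


maybeAwaitableDeferred(sync_observer, 1).addCallback(print)   # 2
maybeAwaitableDeferred(async_observer, 2).addCallback(print)  # 4
```
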
1616 import random
1717 import re
1818 import string
19 from collections import Iterable
19 from collections.abc import Iterable
2020
2121 from synapse.api.errors import Codes, SynapseError
2222
4040 serialize/deserialize.
4141 """
4242
43 event, context = create_event(
44 self.hs, room_id=self.room_id, type="m.test", sender=self.user_id,
43 event, context = self.get_success(
44 create_event(
45 self.hs, room_id=self.room_id, type="m.test", sender=self.user_id,
46 )
4547 )
4648
4749 self._check_serialize_deserialize(event, context)
5052 """Test that an EventContext for a state event (with not previous entry)
5153 is the same after serialize/deserialize.
5254 """
53 event, context = create_event(
54 self.hs,
55 room_id=self.room_id,
56 type="m.test",
57 sender=self.user_id,
58 state_key="",
55 event, context = self.get_success(
56 create_event(
57 self.hs,
58 room_id=self.room_id,
59 type="m.test",
60 sender=self.user_id,
61 state_key="",
62 )
5963 )
6064
6165 self._check_serialize_deserialize(event, context)
6468 """Test that an EventContext for a state event (which replaces a
6569 previous entry) is the same after serialize/deserialize.
6670 """
67 event, context = create_event(
68 self.hs,
69 room_id=self.room_id,
70 type="m.room.member",
71 sender=self.user_id,
72 state_key=self.user_id,
73 content={"membership": "leave"},
71 event, context = self.get_success(
72 create_event(
73 self.hs,
74 room_id=self.room_id,
75 type="m.room.member",
76 sender=self.user_id,
77 state_key=self.user_id,
78 content={"membership": "leave"},
79 )
7480 )
7581
7682 self._check_serialize_deserialize(event, context)
2525 from synapse.rest.client.v1 import login
2626 from synapse.types import JsonDict, ReadReceipt
2727
28 from tests.test_utils import make_awaitable
2829 from tests.unittest import HomeserverTestCase, override_config
2930
3031
3132 class FederationSenderReceiptsTestCases(HomeserverTestCase):
3233 def make_homeserver(self, reactor, clock):
34 mock_state_handler = Mock(spec=["get_current_hosts_in_room"])
35 # Ensure a new Awaitable is created for each call.
36 mock_state_handler.get_current_hosts_in_room.side_effect = lambda room_Id: make_awaitable(
37 ["test", "host2"]
38 )
3339 return self.setup_test_homeserver(
34 state_handler=Mock(spec=["get_current_hosts_in_room"]),
40 state_handler=mock_state_handler,
3541 federation_transport_client=Mock(spec=["send_transaction"]),
3642 )
3743
3844 @override_config({"send_federation": True})
3945 def test_send_receipts(self):
40 mock_state_handler = self.hs.get_state_handler()
41 mock_state_handler.get_current_hosts_in_room.return_value = ["test", "host2"]
42
4346 mock_send_transaction = (
4447 self.hs.get_federation_transport_client().send_transaction
4548 )
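
`make_awaitable` is imported from `tests.test_utils`, which is not part of this diff. One simple way such a helper can be written (a sketch of the idea, not necessarily the shipped implementation) is shown below; the mock above uses `side_effect` so that every call produces a fresh awaitable, since a coroutine can only be awaited once.

```python
from typing import Awaitable, TypeVar

T = TypeVar("T")


def make_awaitable(result: T) -> Awaitable[T]:
    # Sketch only: wrap `result` in a coroutine that resolves immediately.
    async def _resolve() -> T:
        return result

    return _resolve()
```
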
8083 def test_send_receipts_with_backoff(self):
8184 """Send two receipts in quick succession; the second should be flushed, but
8285 only after 20ms"""
83 mock_state_handler = self.hs.get_state_handler()
84 mock_state_handler.get_current_hosts_in_room.return_value = ["test", "host2"]
85
8686 mock_send_transaction = (
8787 self.hs.get_federation_transport_client().send_transaction
8888 )
163163
164164 def make_homeserver(self, reactor, clock):
165165 return self.setup_test_homeserver(
166 state_handler=Mock(spec=["get_current_hosts_in_room"]),
167166 federation_transport_client=Mock(spec=["send_transaction"]),
168167 )
169168
173172 return c
174173
175174 def prepare(self, reactor, clock, hs):
176 # stub out get_current_hosts_in_room
177 mock_state_handler = hs.get_state_handler()
178 mock_state_handler.get_current_hosts_in_room.return_value = ["test", "host2"]
179
180175 # stub out get_users_who_share_room_with_user so that it claims that
181176 # `@user2:host2` is in the room
182177 def get_users_who_share_room_with_user(user_id):
141141 self.get_success(self.handler.delete_device(user1, "abc"))
142142
143143 # check the device was deleted
144 res = self.handler.get_device(user1, "abc")
145 self.pump()
146 self.assertIsInstance(
147 self.failureResultOf(res).value, synapse.api.errors.NotFoundError
144 self.get_failure(
145 self.handler.get_device(user1, "abc"), synapse.api.errors.NotFoundError
148146 )
149147
150148 # we'd like to check the access token was invalidated, but that's a
179177
180178 def test_update_unknown_device(self):
181179 update = {"display_name": "new_display"}
182 res = self.handler.update_device("user_id", "unknown_device_id", update)
183 self.pump()
184 self.assertIsInstance(
185 self.failureResultOf(res).value, synapse.api.errors.NotFoundError
180 self.get_failure(
181 self.handler.update_device("user_id", "unknown_device_id", update),
182 synapse.api.errors.NotFoundError,
186183 )
187184
188185 def _record_users(self):
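
The device-handler tests above switch from manually pumping the reactor and inspecting the failure to the `get_failure` convenience on the test case. That helper is not part of this diff; a rough sketch of the kind of wrapper it stands for (the `pump` and `failureResultOf` methods are assumed from Synapse's test base class and Twisted's test infrastructure) might look like:

```python
from twisted.internet import defer


class GetFailureMixin:
    """Hypothetical mixin sketching the convenience used in the tests above."""

    def get_failure(self, d, exc):
        # Accept either a Deferred or a coroutine, drive the fake reactor,
        # then assert that the result is a Failure of the expected type.
        if not isinstance(d, defer.Deferred):
            d = defer.ensureDeferred(d)
        self.pump()
        return self.failureResultOf(d, exc)
```
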
4545 """If the user has no devices, we expect an empty list.
4646 """
4747 local_user = "@boris:" + self.hs.hostname
48 res = yield self.handler.query_local_devices({local_user: None})
48 res = yield defer.ensureDeferred(
49 self.handler.query_local_devices({local_user: None})
50 )
4951 self.assertDictEqual(res, {local_user: {}})
5052
5153 @defer.inlineCallbacks
5961 "alg2:k3": {"key": "key3"},
6062 }
6163
62 res = yield self.handler.upload_keys_for_user(
63 local_user, device_id, {"one_time_keys": keys}
64 res = yield defer.ensureDeferred(
65 self.handler.upload_keys_for_user(
66 local_user, device_id, {"one_time_keys": keys}
67 )
6468 )
6569 self.assertDictEqual(res, {"one_time_key_counts": {"alg1": 1, "alg2": 2}})
6670
6771 # we should be able to change the signature without a problem
6872 keys["alg2:k2"]["signatures"]["k1"] = "sig2"
69 res = yield self.handler.upload_keys_for_user(
70 local_user, device_id, {"one_time_keys": keys}
73 res = yield defer.ensureDeferred(
74 self.handler.upload_keys_for_user(
75 local_user, device_id, {"one_time_keys": keys}
76 )
7177 )
7278 self.assertDictEqual(res, {"one_time_key_counts": {"alg1": 1, "alg2": 2}})
7379
8389 "alg2:k3": {"key": "key3"},
8490 }
8591
86 res = yield self.handler.upload_keys_for_user(
87 local_user, device_id, {"one_time_keys": keys}
92 res = yield defer.ensureDeferred(
93 self.handler.upload_keys_for_user(
94 local_user, device_id, {"one_time_keys": keys}
95 )
8896 )
8997 self.assertDictEqual(res, {"one_time_key_counts": {"alg1": 1, "alg2": 2}})
9098
9199 try:
92 yield self.handler.upload_keys_for_user(
93 local_user, device_id, {"one_time_keys": {"alg1:k1": "key2"}}
100 yield defer.ensureDeferred(
101 self.handler.upload_keys_for_user(
102 local_user, device_id, {"one_time_keys": {"alg1:k1": "key2"}}
103 )
94104 )
95105 self.fail("No error when changing string key")
96106 except errors.SynapseError:
97107 pass
98108
99109 try:
100 yield self.handler.upload_keys_for_user(
101 local_user, device_id, {"one_time_keys": {"alg2:k3": "key2"}}
110 yield defer.ensureDeferred(
111 self.handler.upload_keys_for_user(
112 local_user, device_id, {"one_time_keys": {"alg2:k3": "key2"}}
113 )
102114 )
103115 self.fail("No error when replacing dict key with string")
104116 except errors.SynapseError:
105117 pass
106118
107119 try:
108 yield self.handler.upload_keys_for_user(
109 local_user, device_id, {"one_time_keys": {"alg1:k1": {"key": "key"}}}
120 yield defer.ensureDeferred(
121 self.handler.upload_keys_for_user(
122 local_user,
123 device_id,
124 {"one_time_keys": {"alg1:k1": {"key": "key"}}},
125 )
110126 )
111127 self.fail("No error when replacing string key with dict")
112128 except errors.SynapseError:
113129 pass
114130
115131 try:
116 yield self.handler.upload_keys_for_user(
117 local_user,
118 device_id,
119 {
120 "one_time_keys": {
121 "alg2:k2": {"key": "key3", "signatures": {"k1": "sig1"}}
122 }
123 },
132 yield defer.ensureDeferred(
133 self.handler.upload_keys_for_user(
134 local_user,
135 device_id,
136 {
137 "one_time_keys": {
138 "alg2:k2": {"key": "key3", "signatures": {"k1": "sig1"}}
139 }
140 },
141 )
124142 )
125143 self.fail("No error when replacing dict key")
126144 except errors.SynapseError:
132150 device_id = "xyz"
133151 keys = {"alg1:k1": "key1"}
134152
135 res = yield self.handler.upload_keys_for_user(
136 local_user, device_id, {"one_time_keys": keys}
153 res = yield defer.ensureDeferred(
154 self.handler.upload_keys_for_user(
155 local_user, device_id, {"one_time_keys": keys}
156 )
137157 )
138158 self.assertDictEqual(res, {"one_time_key_counts": {"alg1": 1}})
139159
140 res2 = yield self.handler.claim_one_time_keys(
141 {"one_time_keys": {local_user: {device_id: "alg1"}}}, timeout=None
160 res2 = yield defer.ensureDeferred(
161 self.handler.claim_one_time_keys(
162 {"one_time_keys": {local_user: {device_id: "alg1"}}}, timeout=None
163 )
142164 )
143165 self.assertEqual(
144166 res2,
162184 },
163185 }
164186 }
165 yield self.handler.upload_signing_keys_for_user(local_user, keys1)
187 yield defer.ensureDeferred(
188 self.handler.upload_signing_keys_for_user(local_user, keys1)
189 )
166190
167191 keys2 = {
168192 "master_key": {
174198 },
175199 }
176200 }
177 yield self.handler.upload_signing_keys_for_user(local_user, keys2)
178
179 devices = yield self.handler.query_devices(
180 {"device_keys": {local_user: []}}, 0, local_user
201 yield defer.ensureDeferred(
202 self.handler.upload_signing_keys_for_user(local_user, keys2)
203 )
204
205 devices = yield defer.ensureDeferred(
206 self.handler.query_devices({"device_keys": {local_user: []}}, 0, local_user)
181207 )
182208 self.assertDictEqual(devices["master_keys"], {local_user: keys2["master_key"]})
183209
214240 "nqOvzeuGWT/sRx3h7+MHoInYj3Uk2LD/unI9kDYcHwk",
215241 "2lonYOM6xYKdEsO+6KrC766xBcHnYnim1x/4LFGF8B0",
216242 )
217 yield self.handler.upload_signing_keys_for_user(local_user, keys1)
243 yield defer.ensureDeferred(
244 self.handler.upload_signing_keys_for_user(local_user, keys1)
245 )
218246
219247 # upload two device keys, which will be signed later by the self-signing key
220248 device_key_1 = {
244272 "signatures": {local_user: {"ed25519:def": "base64+signature"}},
245273 }
246274
247 yield self.handler.upload_keys_for_user(
248 local_user, "abc", {"device_keys": device_key_1}
249 )
250 yield self.handler.upload_keys_for_user(
251 local_user, "def", {"device_keys": device_key_2}
275 yield defer.ensureDeferred(
276 self.handler.upload_keys_for_user(
277 local_user, "abc", {"device_keys": device_key_1}
278 )
279 )
280 yield defer.ensureDeferred(
281 self.handler.upload_keys_for_user(
282 local_user, "def", {"device_keys": device_key_2}
283 )
252284 )
253285
254286 # sign the first device key and upload it
255287 del device_key_1["signatures"]
256288 sign.sign_json(device_key_1, local_user, signing_key)
257 yield self.handler.upload_signatures_for_device_keys(
258 local_user, {local_user: {"abc": device_key_1}}
289 yield defer.ensureDeferred(
290 self.handler.upload_signatures_for_device_keys(
291 local_user, {local_user: {"abc": device_key_1}}
292 )
259293 )
260294
261295 # sign the second device key and upload both device keys. The server
263297 # signature for it
264298 del device_key_2["signatures"]
265299 sign.sign_json(device_key_2, local_user, signing_key)
266 yield self.handler.upload_signatures_for_device_keys(
267 local_user, {local_user: {"abc": device_key_1, "def": device_key_2}}
300 yield defer.ensureDeferred(
301 self.handler.upload_signatures_for_device_keys(
302 local_user, {local_user: {"abc": device_key_1, "def": device_key_2}}
303 )
268304 )
269305
270306 device_key_1["signatures"][local_user]["ed25519:abc"] = "base64+signature"
271307 device_key_2["signatures"][local_user]["ed25519:def"] = "base64+signature"
272 devices = yield self.handler.query_devices(
273 {"device_keys": {local_user: []}}, 0, local_user
308 devices = yield defer.ensureDeferred(
309 self.handler.query_devices({"device_keys": {local_user: []}}, 0, local_user)
274310 )
275311 del devices["device_keys"][local_user]["abc"]["unsigned"]
276312 del devices["device_keys"][local_user]["def"]["unsigned"]
291327 },
292328 }
293329 }
294 yield self.handler.upload_signing_keys_for_user(local_user, keys1)
330 yield defer.ensureDeferred(
331 self.handler.upload_signing_keys_for_user(local_user, keys1)
332 )
295333
296334 res = None
297335 try:
298 yield self.hs.get_device_handler().check_device_registered(
299 user_id=local_user,
300 device_id="nqOvzeuGWT/sRx3h7+MHoInYj3Uk2LD/unI9kDYcHwk",
301 initial_device_display_name="new display name",
336 yield defer.ensureDeferred(
337 self.hs.get_device_handler().check_device_registered(
338 user_id=local_user,
339 device_id="nqOvzeuGWT/sRx3h7+MHoInYj3Uk2LD/unI9kDYcHwk",
340 initial_device_display_name="new display name",
341 )
302342 )
303343 except errors.SynapseError as e:
304344 res = e.code
305345 self.assertEqual(res, 400)
306346
307 res = yield self.handler.query_local_devices({local_user: None})
347 res = yield defer.ensureDeferred(
348 self.handler.query_local_devices({local_user: None})
349 )
308350 self.assertDictEqual(res, {local_user: {}})
309351
310352 @defer.inlineCallbacks
330372 "ed25519", "xyz", "OMkooTr76ega06xNvXIGPbgvvxAOzmQncN8VObS7aBA"
331373 )
332374
333 yield self.handler.upload_keys_for_user(
334 local_user, device_id, {"device_keys": device_key}
375 yield defer.ensureDeferred(
376 self.handler.upload_keys_for_user(
377 local_user, device_id, {"device_keys": device_key}
378 )
335379 )
336380
337381 # private key: 2lonYOM6xYKdEsO+6KrC766xBcHnYnim1x/4LFGF8B0
371415 "user_signing_key": usersigning_key,
372416 "self_signing_key": selfsigning_key,
373417 }
374 yield self.handler.upload_signing_keys_for_user(local_user, cross_signing_keys)
418 yield defer.ensureDeferred(
419 self.handler.upload_signing_keys_for_user(local_user, cross_signing_keys)
420 )
375421
376422 # set up another user with a master key. This user will be signed by
377423 # the first user
383429 "usage": ["master"],
384430 "keys": {"ed25519:" + other_master_pubkey: other_master_pubkey},
385431 }
386 yield self.handler.upload_signing_keys_for_user(
387 other_user, {"master_key": other_master_key}
432 yield defer.ensureDeferred(
433 self.handler.upload_signing_keys_for_user(
434 other_user, {"master_key": other_master_key}
435 )
388436 )
389437
390438 # test various signature failures (see below)
391 ret = yield self.handler.upload_signatures_for_device_keys(
392 local_user,
393 {
394 local_user: {
395 # fails because the signature is invalid
396 # should fail with INVALID_SIGNATURE
397 device_id: {
398 "user_id": local_user,
399 "device_id": device_id,
400 "algorithms": [
401 "m.olm.curve25519-aes-sha2",
402 RoomEncryptionAlgorithms.MEGOLM_V1_AES_SHA2,
403 ],
404 "keys": {
405 "curve25519:xyz": "curve25519+key",
406 # private key: OMkooTr76ega06xNvXIGPbgvvxAOzmQncN8VObS7aBA
407 "ed25519:xyz": device_pubkey,
439 ret = yield defer.ensureDeferred(
440 self.handler.upload_signatures_for_device_keys(
441 local_user,
442 {
443 local_user: {
444 # fails because the signature is invalid
445 # should fail with INVALID_SIGNATURE
446 device_id: {
447 "user_id": local_user,
448 "device_id": device_id,
449 "algorithms": [
450 "m.olm.curve25519-aes-sha2",
451 RoomEncryptionAlgorithms.MEGOLM_V1_AES_SHA2,
452 ],
453 "keys": {
454 "curve25519:xyz": "curve25519+key",
455 # private key: OMkooTr76ega06xNvXIGPbgvvxAOzmQncN8VObS7aBA
456 "ed25519:xyz": device_pubkey,
457 },
458 "signatures": {
459 local_user: {
460 "ed25519:" + selfsigning_pubkey: "something"
461 }
462 },
408463 },
409 "signatures": {
410 local_user: {"ed25519:" + selfsigning_pubkey: "something"}
464 # fails because device is unknown
465 # should fail with NOT_FOUND
466 "unknown": {
467 "user_id": local_user,
468 "device_id": "unknown",
469 "signatures": {
470 local_user: {
471 "ed25519:" + selfsigning_pubkey: "something"
472 }
473 },
474 },
475 # fails because the signature is invalid
476 # should fail with INVALID_SIGNATURE
477 master_pubkey: {
478 "user_id": local_user,
479 "usage": ["master"],
480 "keys": {"ed25519:" + master_pubkey: master_pubkey},
481 "signatures": {
482 local_user: {"ed25519:" + device_pubkey: "something"}
483 },
411484 },
412485 },
413 # fails because device is unknown
414 # should fail with NOT_FOUND
415 "unknown": {
416 "user_id": local_user,
417 "device_id": "unknown",
418 "signatures": {
419 local_user: {"ed25519:" + selfsigning_pubkey: "something"}
486 other_user: {
487 # fails because the device is not the user's master-signing key
488 # should fail with NOT_FOUND
489 "unknown": {
490 "user_id": other_user,
491 "device_id": "unknown",
492 "signatures": {
493 local_user: {
494 "ed25519:" + usersigning_pubkey: "something"
495 }
496 },
497 },
498 other_master_pubkey: {
499 # fails because the key doesn't match what the server has
500 # should fail with UNKNOWN
501 "user_id": other_user,
502 "usage": ["master"],
503 "keys": {
504 "ed25519:" + other_master_pubkey: other_master_pubkey
505 },
506 "something": "random",
507 "signatures": {
508 local_user: {
509 "ed25519:" + usersigning_pubkey: "something"
510 }
511 },
420512 },
421513 },
422 # fails because the signature is invalid
423 # should fail with INVALID_SIGNATURE
424 master_pubkey: {
425 "user_id": local_user,
426 "usage": ["master"],
427 "keys": {"ed25519:" + master_pubkey: master_pubkey},
428 "signatures": {
429 local_user: {"ed25519:" + device_pubkey: "something"}
430 },
431 },
432 },
433 other_user: {
434 # fails because the device is not the user's master-signing key
435 # should fail with NOT_FOUND
436 "unknown": {
437 "user_id": other_user,
438 "device_id": "unknown",
439 "signatures": {
440 local_user: {"ed25519:" + usersigning_pubkey: "something"}
441 },
442 },
443 other_master_pubkey: {
444 # fails because the key doesn't match what the server has
445 # should fail with UNKNOWN
446 "user_id": other_user,
447 "usage": ["master"],
448 "keys": {"ed25519:" + other_master_pubkey: other_master_pubkey},
449 "something": "random",
450 "signatures": {
451 local_user: {"ed25519:" + usersigning_pubkey: "something"}
452 },
453 },
454 },
455 },
514 },
515 )
456516 )
457517
458518 user_failures = ret["failures"][local_user]
477537 sign.sign_json(device_key, local_user, selfsigning_signing_key)
478538 sign.sign_json(master_key, local_user, device_signing_key)
479539 sign.sign_json(other_master_key, local_user, usersigning_signing_key)
480 ret = yield self.handler.upload_signatures_for_device_keys(
481 local_user,
482 {
483 local_user: {device_id: device_key, master_pubkey: master_key},
484 other_user: {other_master_pubkey: other_master_key},
485 },
540 ret = yield defer.ensureDeferred(
541 self.handler.upload_signatures_for_device_keys(
542 local_user,
543 {
544 local_user: {device_id: device_key, master_pubkey: master_key},
545 other_user: {other_master_pubkey: other_master_key},
546 },
547 )
486548 )
487549
488550 self.assertEqual(ret["failures"], {})
489551
490552 # fetch the signed keys/devices and make sure that the signatures are there
491 ret = yield self.handler.query_devices(
492 {"device_keys": {local_user: [], other_user: []}}, 0, local_user
553 ret = yield defer.ensureDeferred(
554 self.handler.query_devices(
555 {"device_keys": {local_user: [], other_user: []}}, 0, local_user
556 )
493557 )
494558
495559 self.assertEqual(
6565 """
6666 res = None
6767 try:
68 yield self.handler.get_version_info(self.local_user)
68 yield defer.ensureDeferred(self.handler.get_version_info(self.local_user))
6969 except errors.SynapseError as e:
7070 res = e.code
7171 self.assertEqual(res, 404)
7777 """
7878 res = None
7979 try:
80 yield self.handler.get_version_info(self.local_user, "bogus_version")
80 yield defer.ensureDeferred(
81 self.handler.get_version_info(self.local_user, "bogus_version")
82 )
8183 except errors.SynapseError as e:
8284 res = e.code
8385 self.assertEqual(res, 404)
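(The 404 checks above all follow one idiom: call the handler, catch the expected `SynapseError`, and record only its HTTP status code for the assertion. Sketched standalone below with a hypothetical stub in place of the real `get_version_info` handler method:)

```python
from twisted.internet import defer
from twisted.trial import unittest

from synapse.api import errors


async def get_version_info(user_id, version=None):
    # hypothetical stub: the real handler looks the backup version up in the database
    raise errors.SynapseError(404, "Unknown backup version")


class ErrorCodeIdiomTestCase(unittest.TestCase):
    @defer.inlineCallbacks
    def test_missing_version_returns_404(self):
        res = None
        try:
            yield defer.ensureDeferred(
                get_version_info("@alice:test", "bogus_version")
            )
        except errors.SynapseError as e:
            res = e.code
        # if no SynapseError was raised, res stays None and this assertion fails
        self.assertEqual(res, 404)
```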
8688 def test_create_version(self):
8789 """Check that we can create and then retrieve versions.
8890 """
89 res = yield self.handler.create_version(
90 self.local_user,
91 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
91 res = yield defer.ensureDeferred(
92 self.handler.create_version(
93 self.local_user,
94 {
95 "algorithm": "m.megolm_backup.v1",
96 "auth_data": "first_version_auth_data",
97 },
98 )
9299 )
93100 self.assertEqual(res, "1")
94101
95102 # check we can retrieve it as the current version
96 res = yield self.handler.get_version_info(self.local_user)
103 res = yield defer.ensureDeferred(self.handler.get_version_info(self.local_user))
97104 version_etag = res["etag"]
98105 self.assertIsInstance(version_etag, str)
99106 del res["etag"]
108115 )
109116
110117 # check we can retrieve it as a specific version
111 res = yield self.handler.get_version_info(self.local_user, "1")
118 res = yield defer.ensureDeferred(
119 self.handler.get_version_info(self.local_user, "1")
120 )
112121 self.assertEqual(res["etag"], version_etag)
113122 del res["etag"]
114123 self.assertDictEqual(
122131 )
123132
124133 # upload a new one...
125 res = yield self.handler.create_version(
126 self.local_user,
127 {
128 "algorithm": "m.megolm_backup.v1",
129 "auth_data": "second_version_auth_data",
130 },
134 res = yield defer.ensureDeferred(
135 self.handler.create_version(
136 self.local_user,
137 {
138 "algorithm": "m.megolm_backup.v1",
139 "auth_data": "second_version_auth_data",
140 },
141 )
131142 )
132143 self.assertEqual(res, "2")
133144
134145 # check we can retrieve it as the current version
135 res = yield self.handler.get_version_info(self.local_user)
146 res = yield defer.ensureDeferred(self.handler.get_version_info(self.local_user))
136147 del res["etag"]
137148 self.assertDictEqual(
138149 res,
148159 def test_update_version(self):
149160 """Check that we can update versions.
150161 """
151 version = yield self.handler.create_version(
152 self.local_user,
153 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
154 )
155 self.assertEqual(version, "1")
156
157 res = yield self.handler.update_version(
158 self.local_user,
159 version,
160 {
161 "algorithm": "m.megolm_backup.v1",
162 "auth_data": "revised_first_version_auth_data",
163 "version": version,
164 },
162 version = yield defer.ensureDeferred(
163 self.handler.create_version(
164 self.local_user,
165 {
166 "algorithm": "m.megolm_backup.v1",
167 "auth_data": "first_version_auth_data",
168 },
169 )
170 )
171 self.assertEqual(version, "1")
172
173 res = yield defer.ensureDeferred(
174 self.handler.update_version(
175 self.local_user,
176 version,
177 {
178 "algorithm": "m.megolm_backup.v1",
179 "auth_data": "revised_first_version_auth_data",
180 "version": version,
181 },
182 )
165183 )
166184 self.assertDictEqual(res, {})
167185
168186 # check we can retrieve it as the current version
169 res = yield self.handler.get_version_info(self.local_user)
187 res = yield defer.ensureDeferred(self.handler.get_version_info(self.local_user))
170188 del res["etag"]
171189 self.assertDictEqual(
172190 res,
184202 """
185203 res = None
186204 try:
187 yield self.handler.update_version(
188 self.local_user,
189 "1",
190 {
191 "algorithm": "m.megolm_backup.v1",
192 "auth_data": "revised_first_version_auth_data",
193 "version": "1",
194 },
205 yield defer.ensureDeferred(
206 self.handler.update_version(
207 self.local_user,
208 "1",
209 {
210 "algorithm": "m.megolm_backup.v1",
211 "auth_data": "revised_first_version_auth_data",
212 "version": "1",
213 },
214 )
195215 )
196216 except errors.SynapseError as e:
197217 res = e.code
201221 def test_update_omitted_version(self):
202222 """Check that the update succeeds if the version is missing from the body
203223 """
204 version = yield self.handler.create_version(
205 self.local_user,
206 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
207 )
208 self.assertEqual(version, "1")
209
210 yield self.handler.update_version(
211 self.local_user,
212 version,
213 {
214 "algorithm": "m.megolm_backup.v1",
215 "auth_data": "revised_first_version_auth_data",
216 },
224 version = yield defer.ensureDeferred(
225 self.handler.create_version(
226 self.local_user,
227 {
228 "algorithm": "m.megolm_backup.v1",
229 "auth_data": "first_version_auth_data",
230 },
231 )
232 )
233 self.assertEqual(version, "1")
234
235 yield defer.ensureDeferred(
236 self.handler.update_version(
237 self.local_user,
238 version,
239 {
240 "algorithm": "m.megolm_backup.v1",
241 "auth_data": "revised_first_version_auth_data",
242 },
243 )
217244 )
218245
219246 # check we can retrieve it as the current version
220 res = yield self.handler.get_version_info(self.local_user)
247 res = yield defer.ensureDeferred(self.handler.get_version_info(self.local_user))
221248 del res["etag"] # etag is opaque, so don't test its contents
222249 self.assertDictEqual(
223250 res,
233260 def test_update_bad_version(self):
234261 """Check that we get a 400 if the version in the body doesn't match
235262 """
236 version = yield self.handler.create_version(
237 self.local_user,
238 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
239 )
240 self.assertEqual(version, "1")
241
242 res = None
243 try:
244 yield self.handler.update_version(
245 self.local_user,
246 version,
247 {
248 "algorithm": "m.megolm_backup.v1",
249 "auth_data": "revised_first_version_auth_data",
250 "version": "incorrect",
251 },
263 version = yield defer.ensureDeferred(
264 self.handler.create_version(
265 self.local_user,
266 {
267 "algorithm": "m.megolm_backup.v1",
268 "auth_data": "first_version_auth_data",
269 },
270 )
271 )
272 self.assertEqual(version, "1")
273
274 res = None
275 try:
276 yield defer.ensureDeferred(
277 self.handler.update_version(
278 self.local_user,
279 version,
280 {
281 "algorithm": "m.megolm_backup.v1",
282 "auth_data": "revised_first_version_auth_data",
283 "version": "incorrect",
284 },
285 )
252286 )
253287 except errors.SynapseError as e:
254288 res = e.code
260294 """
261295 res = None
262296 try:
263 yield self.handler.delete_version(self.local_user, "1")
297 yield defer.ensureDeferred(
298 self.handler.delete_version(self.local_user, "1")
299 )
264300 except errors.SynapseError as e:
265301 res = e.code
266302 self.assertEqual(res, 404)
271307 """
272308 res = None
273309 try:
274 yield self.handler.delete_version(self.local_user)
310 yield defer.ensureDeferred(self.handler.delete_version(self.local_user))
275311 except errors.SynapseError as e:
276312 res = e.code
277313 self.assertEqual(res, 404)
280316 def test_delete_version(self):
281317 """Check that we can create and then delete versions.
282318 """
283 res = yield self.handler.create_version(
284 self.local_user,
285 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
319 res = yield defer.ensureDeferred(
320 self.handler.create_version(
321 self.local_user,
322 {
323 "algorithm": "m.megolm_backup.v1",
324 "auth_data": "first_version_auth_data",
325 },
326 )
286327 )
287328 self.assertEqual(res, "1")
288329
289330 # check we can delete it
290 yield self.handler.delete_version(self.local_user, "1")
331 yield defer.ensureDeferred(self.handler.delete_version(self.local_user, "1"))
291332
292333 # check that it's gone
293334 res = None
294335 try:
295 yield self.handler.get_version_info(self.local_user, "1")
336 yield defer.ensureDeferred(
337 self.handler.get_version_info(self.local_user, "1")
338 )
296339 except errors.SynapseError as e:
297340 res = e.code
298341 self.assertEqual(res, 404)
303346 """
304347 res = None
305348 try:
306 yield self.handler.get_room_keys(self.local_user, "bogus_version")
349 yield defer.ensureDeferred(
350 self.handler.get_room_keys(self.local_user, "bogus_version")
351 )
307352 except errors.SynapseError as e:
308353 res = e.code
309354 self.assertEqual(res, 404)
312357 def test_get_missing_room_keys(self):
313358 """Check we get an empty response from an empty backup
314359 """
315 version = yield self.handler.create_version(
316 self.local_user,
317 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
318 )
319 self.assertEqual(version, "1")
320
321 res = yield self.handler.get_room_keys(self.local_user, version)
360 version = yield defer.ensureDeferred(
361 self.handler.create_version(
362 self.local_user,
363 {
364 "algorithm": "m.megolm_backup.v1",
365 "auth_data": "first_version_auth_data",
366 },
367 )
368 )
369 self.assertEqual(version, "1")
370
371 res = yield defer.ensureDeferred(
372 self.handler.get_room_keys(self.local_user, version)
373 )
322374 self.assertDictEqual(res, {"rooms": {}})
323375
324376 # TODO: test the locking semantics when uploading room_keys,
330382 """
331383 res = None
332384 try:
333 yield self.handler.upload_room_keys(
334 self.local_user, "no_version", room_keys
385 yield defer.ensureDeferred(
386 self.handler.upload_room_keys(self.local_user, "no_version", room_keys)
335387 )
336388 except errors.SynapseError as e:
337389 res = e.code
342394 """Check that we get a 404 on uploading keys when an nonexistent version
343395 is specified
344396 """
345 version = yield self.handler.create_version(
346 self.local_user,
347 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
348 )
349 self.assertEqual(version, "1")
350
351 res = None
352 try:
353 yield self.handler.upload_room_keys(
354 self.local_user, "bogus_version", room_keys
397 version = yield defer.ensureDeferred(
398 self.handler.create_version(
399 self.local_user,
400 {
401 "algorithm": "m.megolm_backup.v1",
402 "auth_data": "first_version_auth_data",
403 },
404 )
405 )
406 self.assertEqual(version, "1")
407
408 res = None
409 try:
410 yield defer.ensureDeferred(
411 self.handler.upload_room_keys(
412 self.local_user, "bogus_version", room_keys
413 )
355414 )
356415 except errors.SynapseError as e:
357416 res = e.code
361420 def test_upload_room_keys_wrong_version(self):
362421 """Check that we get a 403 on uploading keys for an old version
363422 """
364 version = yield self.handler.create_version(
365 self.local_user,
366 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
367 )
368 self.assertEqual(version, "1")
369
370 version = yield self.handler.create_version(
371 self.local_user,
372 {
373 "algorithm": "m.megolm_backup.v1",
374 "auth_data": "second_version_auth_data",
375 },
423 version = yield defer.ensureDeferred(
424 self.handler.create_version(
425 self.local_user,
426 {
427 "algorithm": "m.megolm_backup.v1",
428 "auth_data": "first_version_auth_data",
429 },
430 )
431 )
432 self.assertEqual(version, "1")
433
434 version = yield defer.ensureDeferred(
435 self.handler.create_version(
436 self.local_user,
437 {
438 "algorithm": "m.megolm_backup.v1",
439 "auth_data": "second_version_auth_data",
440 },
441 )
376442 )
377443 self.assertEqual(version, "2")
378444
379445 res = None
380446 try:
381 yield self.handler.upload_room_keys(self.local_user, "1", room_keys)
447 yield defer.ensureDeferred(
448 self.handler.upload_room_keys(self.local_user, "1", room_keys)
449 )
382450 except errors.SynapseError as e:
383451 res = e.code
384452 self.assertEqual(res, 403)
387455 def test_upload_room_keys_insert(self):
388456 """Check that we can insert and retrieve keys for a session
389457 """
390 version = yield self.handler.create_version(
391 self.local_user,
392 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
393 )
394 self.assertEqual(version, "1")
395
396 yield self.handler.upload_room_keys(self.local_user, version, room_keys)
397
398 res = yield self.handler.get_room_keys(self.local_user, version)
458 version = yield defer.ensureDeferred(
459 self.handler.create_version(
460 self.local_user,
461 {
462 "algorithm": "m.megolm_backup.v1",
463 "auth_data": "first_version_auth_data",
464 },
465 )
466 )
467 self.assertEqual(version, "1")
468
469 yield defer.ensureDeferred(
470 self.handler.upload_room_keys(self.local_user, version, room_keys)
471 )
472
473 res = yield defer.ensureDeferred(
474 self.handler.get_room_keys(self.local_user, version)
475 )
399476 self.assertDictEqual(res, room_keys)
400477
401478 # check getting room_keys for a given room
402 res = yield self.handler.get_room_keys(
403 self.local_user, version, room_id="!abc:matrix.org"
479 res = yield defer.ensureDeferred(
480 self.handler.get_room_keys(
481 self.local_user, version, room_id="!abc:matrix.org"
482 )
404483 )
405484 self.assertDictEqual(res, room_keys)
406485
407486 # check getting room_keys for a given session_id
408 res = yield self.handler.get_room_keys(
409 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
487 res = yield defer.ensureDeferred(
488 self.handler.get_room_keys(
489 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
490 )
410491 )
411492 self.assertDictEqual(res, room_keys)
412493
414495 def test_upload_room_keys_merge(self):
415496 """Check that we can upload a new room_key for an existing session and
416497 have it correctly merged"""
417 version = yield self.handler.create_version(
418 self.local_user,
419 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
420 )
421 self.assertEqual(version, "1")
422
423 yield self.handler.upload_room_keys(self.local_user, version, room_keys)
498 version = yield defer.ensureDeferred(
499 self.handler.create_version(
500 self.local_user,
501 {
502 "algorithm": "m.megolm_backup.v1",
503 "auth_data": "first_version_auth_data",
504 },
505 )
506 )
507 self.assertEqual(version, "1")
508
509 yield defer.ensureDeferred(
510 self.handler.upload_room_keys(self.local_user, version, room_keys)
511 )
424512
425513 # get the etag to compare to future versions
426 res = yield self.handler.get_version_info(self.local_user)
514 res = yield defer.ensureDeferred(self.handler.get_version_info(self.local_user))
427515 backup_etag = res["etag"]
428516 self.assertEqual(res["count"], 1)
429517
433521 # test that increasing the message_index doesn't replace the existing session
434522 new_room_key["first_message_index"] = 2
435523 new_room_key["session_data"] = "new"
436 yield self.handler.upload_room_keys(self.local_user, version, new_room_keys)
437
438 res = yield self.handler.get_room_keys(self.local_user, version)
524 yield defer.ensureDeferred(
525 self.handler.upload_room_keys(self.local_user, version, new_room_keys)
526 )
527
528 res = yield defer.ensureDeferred(
529 self.handler.get_room_keys(self.local_user, version)
530 )
439531 self.assertEqual(
440532 res["rooms"]["!abc:matrix.org"]["sessions"]["c0ff33"]["session_data"],
441533 "SSBBTSBBIEZJU0gK",
442534 )
443535
444536 # the etag should be the same since the session did not change
445 res = yield self.handler.get_version_info(self.local_user)
537 res = yield defer.ensureDeferred(self.handler.get_version_info(self.local_user))
446538 self.assertEqual(res["etag"], backup_etag)
447539
448540 # test that marking the session as verified however /does/ replace it
449541 new_room_key["is_verified"] = True
450 yield self.handler.upload_room_keys(self.local_user, version, new_room_keys)
451
452 res = yield self.handler.get_room_keys(self.local_user, version)
542 yield defer.ensureDeferred(
543 self.handler.upload_room_keys(self.local_user, version, new_room_keys)
544 )
545
546 res = yield defer.ensureDeferred(
547 self.handler.get_room_keys(self.local_user, version)
548 )
453549 self.assertEqual(
454550 res["rooms"]["!abc:matrix.org"]["sessions"]["c0ff33"]["session_data"], "new"
455551 )
456552
457553 # the etag should NOT be equal now, since the key changed
458 res = yield self.handler.get_version_info(self.local_user)
554 res = yield defer.ensureDeferred(self.handler.get_version_info(self.local_user))
459555 self.assertNotEqual(res["etag"], backup_etag)
460556 backup_etag = res["etag"]
461557
463559 # with a lower forwarding count
464560 new_room_key["forwarded_count"] = 2
465561 new_room_key["session_data"] = "other"
466 yield self.handler.upload_room_keys(self.local_user, version, new_room_keys)
467
468 res = yield self.handler.get_room_keys(self.local_user, version)
562 yield defer.ensureDeferred(
563 self.handler.upload_room_keys(self.local_user, version, new_room_keys)
564 )
565
566 res = yield defer.ensureDeferred(
567 self.handler.get_room_keys(self.local_user, version)
568 )
469569 self.assertEqual(
470570 res["rooms"]["!abc:matrix.org"]["sessions"]["c0ff33"]["session_data"], "new"
471571 )
472572
473573 # the etag should be the same since the session did not change
474 res = yield self.handler.get_version_info(self.local_user)
574 res = yield defer.ensureDeferred(self.handler.get_version_info(self.local_user))
475575 self.assertEqual(res["etag"], backup_etag)
476576
477577 # TODO: check edge cases as well as the common variations here
480580 def test_delete_room_keys(self):
481581 """Check that we can insert and delete keys for a session
482582 """
483 version = yield self.handler.create_version(
484 self.local_user,
485 {"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
583 version = yield defer.ensureDeferred(
584 self.handler.create_version(
585 self.local_user,
586 {
587 "algorithm": "m.megolm_backup.v1",
588 "auth_data": "first_version_auth_data",
589 },
590 )
486591 )
487592 self.assertEqual(version, "1")
488593
489594 # check for bulk-delete
490 yield self.handler.upload_room_keys(self.local_user, version, room_keys)
491 yield self.handler.delete_room_keys(self.local_user, version)
492 res = yield self.handler.get_room_keys(
493 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
595 yield defer.ensureDeferred(
596 self.handler.upload_room_keys(self.local_user, version, room_keys)
597 )
598 yield defer.ensureDeferred(
599 self.handler.delete_room_keys(self.local_user, version)
600 )
601 res = yield defer.ensureDeferred(
602 self.handler.get_room_keys(
603 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
604 )
494605 )
495606 self.assertDictEqual(res, {"rooms": {}})
496607
497608 # check for bulk-delete per room
498 yield self.handler.upload_room_keys(self.local_user, version, room_keys)
499 yield self.handler.delete_room_keys(
500 self.local_user, version, room_id="!abc:matrix.org"
501 )
502 res = yield self.handler.get_room_keys(
503 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
609 yield defer.ensureDeferred(
610 self.handler.upload_room_keys(self.local_user, version, room_keys)
611 )
612 yield defer.ensureDeferred(
613 self.handler.delete_room_keys(
614 self.local_user, version, room_id="!abc:matrix.org"
615 )
616 )
617 res = yield defer.ensureDeferred(
618 self.handler.get_room_keys(
619 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
620 )
504621 )
505622 self.assertDictEqual(res, {"rooms": {}})
506623
507624 # check for bulk-delete per session
508 yield self.handler.upload_room_keys(self.local_user, version, room_keys)
509 yield self.handler.delete_room_keys(
510 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
511 )
512 res = yield self.handler.get_room_keys(
513 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
625 yield defer.ensureDeferred(
626 self.handler.upload_room_keys(self.local_user, version, room_keys)
627 )
628 yield defer.ensureDeferred(
629 self.handler.delete_room_keys(
630 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
631 )
632 )
633 res = yield defer.ensureDeferred(
634 self.handler.get_room_keys(
635 self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
636 )
514637 )
515638 self.assertDictEqual(res, {"rooms": {}})
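(All of the backup tests above operate on a shared `room_keys` fixture whose nesting mirrors the room keys API: rooms keyed by room ID, sessions keyed by session ID, and per-session metadata alongside the opaque `session_data`. A reconstruction from the assertions in these hunks; the exact field values are illustrative:)

```python
# Shape of the room_keys fixture used by the backup tests above; values are
# illustrative, but the assertions in the hunks pin down the nesting and the
# session fields that the merge logic inspects.
room_keys = {
    "rooms": {
        "!abc:matrix.org": {
            "sessions": {
                "c0ff33": {
                    "first_message_index": 1,
                    "forwarded_count": 1,
                    "is_verified": False,
                    "session_data": "SSBBTSBBIEZJU0gK",
                }
            }
        }
    }
}
```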
7171 def test_get_my_name(self):
7272 yield self.store.set_profile_displayname(self.frank.localpart, "Frank")
7373
74 displayname = yield self.handler.get_displayname(self.frank)
74 displayname = yield defer.ensureDeferred(
75 self.handler.get_displayname(self.frank)
76 )
7577
7678 self.assertEquals("Frank", displayname)
7779
139141 {"displayname": "Alice"}
140142 )
141143
142 displayname = yield self.handler.get_displayname(self.alice)
144 displayname = yield defer.ensureDeferred(
145 self.handler.get_displayname(self.alice)
146 )
143147
144148 self.assertEquals(displayname, "Alice")
145149 self.mock_federation.make_query.assert_called_with(
154158 yield self.store.create_profile("caroline")
155159 yield self.store.set_profile_displayname("caroline", "Caroline")
156160
157 response = yield self.query_handlers["profile"](
158 {"user_id": "@caroline:test", "field": "displayname"}
161 response = yield defer.ensureDeferred(
162 self.query_handlers["profile"](
163 {"user_id": "@caroline:test", "field": "displayname"}
164 )
159165 )
160166
161167 self.assertEquals({"displayname": "Caroline"}, response)
165171 yield self.store.set_profile_avatar_url(
166172 self.frank.localpart, "http://my.server/me.png"
167173 )
168
169 avatar_url = yield self.handler.get_avatar_url(self.frank)
174 avatar_url = yield defer.ensureDeferred(self.handler.get_avatar_url(self.frank))
170175
171176 self.assertEquals("http://my.server/me.png", avatar_url)
172177
137137
138138 self.datastore.get_joined_hosts_for_room = get_joined_hosts_for_room
139139
140 def get_current_users_in_room(room_id):
140 def get_users_in_room(room_id):
141141 return defer.succeed({str(u) for u in self.room_members})
142142
143 hs.get_state_handler().get_current_users_in_room = get_current_users_in_room
143 self.datastore.get_users_in_room = get_users_in_room
144144
145145 self.datastore.get_user_directory_stream_pos.return_value = (
146146 # we deliberately return a non-None stream pos to avoid doing an initial_spam
6666 return test_server_connection_factory
6767
6868
69 # Once Async Mocks or lambdas are supported this can go away.
70 def generate_resolve_service(result):
71 async def resolve_service(_):
72 return result
73
74 return resolve_service
75
76
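(`generate_resolve_service` replaces the bare lambdas that previously stood in for `resolve_service`: now that the resolver is async, a mock's `side_effect` has to hand back something awaitable rather than a plain list. A small sketch of how the helper is wired up, using a hypothetical standalone mock rather than the full agent test fixture:)

```python
from unittest.mock import Mock

from twisted.internet import defer
from twisted.trial import unittest


def generate_resolve_service(result):
    # same shape as the helper above: wrap a fixed result in a coroutine
    async def resolve_service(_):
        return result

    return resolve_service


class ResolveServiceMockTestCase(unittest.TestCase):
    @defer.inlineCallbacks
    def test_mocked_resolver_is_awaitable(self):
        mock_resolver = Mock()
        mock_resolver.resolve_service.side_effect = generate_resolve_service([])

        # callers treat resolve_service as async, so the mock must return a
        # coroutine; a lambda returning [] would break when awaited.
        servers = yield defer.ensureDeferred(
            mock_resolver.resolve_service("_matrix._tcp.example.com")
        )
        self.assertEqual(servers, [])
```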
6977 class MatrixFederationAgentTests(unittest.TestCase):
7078 def setUp(self):
7179 self.reactor = ThreadedMemoryReactorClock()
372380 """
373381 Test the behaviour when the certificate on the server doesn't match the hostname
374382 """
375 self.mock_resolver.resolve_service.side_effect = lambda _: []
383 self.mock_resolver.resolve_service.side_effect = generate_resolve_service([])
376384 self.reactor.lookups["testserv1"] = "1.2.3.4"
377385
378386 test_d = self._make_get_request(b"matrix://testserv1/foo/bar")
455463 Test the behaviour when the server name has no port, no SRV, and no well-known
456464 """
457465
458 self.mock_resolver.resolve_service.side_effect = lambda _: []
466 self.mock_resolver.resolve_service.side_effect = generate_resolve_service([])
459467 self.reactor.lookups["testserv"] = "1.2.3.4"
460468
461469 test_d = self._make_get_request(b"matrix://testserv/foo/bar")
509517 """Test the behaviour when the .well-known delegates elsewhere
510518 """
511519
512 self.mock_resolver.resolve_service.side_effect = lambda _: []
520 self.mock_resolver.resolve_service.side_effect = generate_resolve_service([])
513521 self.reactor.lookups["testserv"] = "1.2.3.4"
514522 self.reactor.lookups["target-server"] = "1::f"
515523
571579 """Test the behaviour when the server name has no port and no SRV record, but
572580 the .well-known has a 300 redirect
573581 """
574 self.mock_resolver.resolve_service.side_effect = lambda _: []
582 self.mock_resolver.resolve_service.side_effect = generate_resolve_service([])
575583 self.reactor.lookups["testserv"] = "1.2.3.4"
576584 self.reactor.lookups["target-server"] = "1::f"
577585
660668 Test the behaviour when the server name has an *invalid* well-known (and no SRV)
661669 """
662670
663 self.mock_resolver.resolve_service.side_effect = lambda _: []
671 self.mock_resolver.resolve_service.side_effect = generate_resolve_service([])
664672 self.reactor.lookups["testserv"] = "1.2.3.4"
665673
666674 test_d = self._make_get_request(b"matrix://testserv/foo/bar")
716724 # the config left to the default, which will not trust it (since the
717725 # presented cert is signed by a test CA)
718726
719 self.mock_resolver.resolve_service.side_effect = lambda _: []
727 self.mock_resolver.resolve_service.side_effect = generate_resolve_service([])
720728 self.reactor.lookups["testserv"] = "1.2.3.4"
721729
722730 config = default_config("test", parse=True)
763771 """
764772 Test the behaviour when there is a single SRV record
765773 """
766 self.mock_resolver.resolve_service.side_effect = lambda _: [
767 Server(host=b"srvtarget", port=8443)
768 ]
774 self.mock_resolver.resolve_service.side_effect = generate_resolve_service(
775 [Server(host=b"srvtarget", port=8443)]
776 )
769777 self.reactor.lookups["srvtarget"] = "1.2.3.4"
770778
771779 test_d = self._make_get_request(b"matrix://testserv/foo/bar")
818826 self.assertEqual(host, "1.2.3.4")
819827 self.assertEqual(port, 443)
820828
821 self.mock_resolver.resolve_service.side_effect = lambda _: [
822 Server(host=b"srvtarget", port=8443)
823 ]
829 self.mock_resolver.resolve_service.side_effect = generate_resolve_service(
830 [Server(host=b"srvtarget", port=8443)]
831 )
824832
825833 self._handle_well_known_connection(
826834 client_factory,
860868 def test_idna_servername(self):
861869 """test the behaviour when the server name has idna chars in"""
862870
863 self.mock_resolver.resolve_service.side_effect = lambda _: []
871 self.mock_resolver.resolve_service.side_effect = generate_resolve_service([])
864872
865873 # the resolver is always called with the IDNA hostname as a native string.
866874 self.reactor.lookups["xn--bcher-kva.com"] = "1.2.3.4"
921929 def test_idna_srv_target(self):
922930 """test the behaviour when the target of a SRV record has idna chars"""
923931
924 self.mock_resolver.resolve_service.side_effect = lambda _: [
925 Server(host=b"xn--trget-3qa.com", port=8443) # târget.com
926 ]
932 self.mock_resolver.resolve_service.side_effect = generate_resolve_service(
933 [Server(host=b"xn--trget-3qa.com", port=8443)] # târget.com
934 )
927935 self.reactor.lookups["xn--trget-3qa.com"] = "1.2.3.4"
928936
929937 test_d = self._make_get_request(b"matrix://xn--bcher-kva.com/foo/bar")
10861094 def test_srv_fallbacks(self):
10871095 """Test that other SRV results are tried if the first one fails.
10881096 """
1089
1090 self.mock_resolver.resolve_service.side_effect = lambda _: [
1091 Server(host=b"target.com", port=8443),
1092 Server(host=b"target.com", port=8444),
1093 ]
1097 self.mock_resolver.resolve_service.side_effect = generate_resolve_service(
1098 [
1099 Server(host=b"target.com", port=8443),
1100 Server(host=b"target.com", port=8444),
1101 ]
1102 )
10941103 self.reactor.lookups["target.com"] = "1.2.3.4"
10951104
10961105 test_d = self._make_get_request(b"matrix://testserv/foo/bar")
2121 from twisted.names import dns, error
2222
2323 from synapse.http.federation.srv_resolver import SrvResolver
24 from synapse.logging.context import SENTINEL_CONTEXT, LoggingContext, current_context
24 from synapse.logging.context import LoggingContext, current_context
2525
2626 from tests import unittest
2727 from tests.utils import MockClock
4949
5050 with LoggingContext("one") as ctx:
5151 resolve_d = resolver.resolve_service(service_name)
52
53 self.assertNoResult(resolve_d)
54
55 # should have reset to the sentinel context
56 self.assertIs(current_context(), SENTINEL_CONTEXT)
57
58 result = yield resolve_d
52 result = yield defer.ensureDeferred(resolve_d)
5953
6054 # should have restored our context
6155 self.assertIs(current_context(), ctx)
9084 cache = {service_name: [entry]}
9185 resolver = SrvResolver(dns_client=dns_client_mock, cache=cache)
9286
93 servers = yield resolver.resolve_service(service_name)
87 servers = yield defer.ensureDeferred(resolver.resolve_service(service_name))
9488
9589 dns_client_mock.lookupService.assert_called_once_with(service_name)
9690
116110 dns_client=dns_client_mock, cache=cache, get_time=clock.time
117111 )
118112
119 servers = yield resolver.resolve_service(service_name)
113 servers = yield defer.ensureDeferred(resolver.resolve_service(service_name))
120114
121115 self.assertFalse(dns_client_mock.lookupService.called)
122116
135129 resolver = SrvResolver(dns_client=dns_client_mock, cache=cache)
136130
137131 with self.assertRaises(error.DNSServerError):
138 yield resolver.resolve_service(service_name)
132 yield defer.ensureDeferred(resolver.resolve_service(service_name))
139133
140134 @defer.inlineCallbacks
141135 def test_name_error(self):
148142 cache = {}
149143 resolver = SrvResolver(dns_client=dns_client_mock, cache=cache)
150144
151 servers = yield resolver.resolve_service(service_name)
145 servers = yield defer.ensureDeferred(resolver.resolve_service(service_name))
152146
153147 self.assertEquals(len(servers), 0)
154148 self.assertEquals(len(cache), 0)
165159 cache = {}
166160 resolver = SrvResolver(dns_client=dns_client_mock, cache=cache)
167161
168 resolve_d = resolver.resolve_service(service_name)
169 self.assertNoResult(resolve_d)
162 # Old versions of Twisted don't have an ensureDeferred in failureResultOf.
163 resolve_d = defer.ensureDeferred(resolver.resolve_service(service_name))
170164
171165 # returning a single "." should make the lookup fail with a ConnectError
172166 lookup_deferred.callback(
191185 cache = {}
192186 resolver = SrvResolver(dns_client=dns_client_mock, cache=cache)
193187
194 resolve_d = resolver.resolve_service(service_name)
195 self.assertNoResult(resolve_d)
188 # Old versions of Twisted don't have an ensureDeferred in successResultOf.
189 resolve_d = defer.ensureDeferred(resolver.resolve_service(service_name))
196190
197191 lookup_deferred.callback(
198192 (
1313 # limitations under the License.
1414
1515 import logging
16 from typing import Any, List, Optional, Tuple
16 from typing import Any, Callable, List, Optional, Tuple
1717
1818 import attr
1919
2525 GenericWorkerReplicationHandler,
2626 GenericWorkerServer,
2727 )
28 from synapse.http.server import JsonResource
2829 from synapse.http.site import SynapseRequest
29 from synapse.replication.http import streams
30 from synapse.replication.http import ReplicationRestResource, streams
3031 from synapse.replication.tcp.handler import ReplicationCommandHandler
3132 from synapse.replication.tcp.protocol import ClientReplicationStreamProtocol
3233 from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
3435 from synapse.util import Clock
3536
3637 from tests import unittest
37 from tests.server import FakeTransport
38 from tests.server import FakeTransport, render
3839
3940 logger = logging.getLogger(__name__)
4041
179180 self.assertEqual(request.method, b"GET")
180181
181182
183 class BaseMultiWorkerStreamTestCase(unittest.HomeserverTestCase):
184 """Base class for tests running multiple workers.
185
186 Automatically handle HTTP replication requests from workers to master,
187 unlike `BaseStreamTestCase`.
188 """
189
190 servlets = [] # type: List[Callable[[HomeServer, JsonResource], None]]
191
192 def setUp(self):
193 super().setUp()
194
195 # build a replication server
196 self.server_factory = ReplicationStreamProtocolFactory(self.hs)
197 self.streamer = self.hs.get_replication_streamer()
198
199 store = self.hs.get_datastore()
200 self.database = store.db
201
202 self.reactor.lookups["testserv"] = "1.2.3.4"
203
204 self._worker_hs_to_resource = {}
205
206 # When we see a connection attempt to the master replication listener we
207 # automatically set up the connection. This is so that tests don't
208 # manually have to go and explicitly set it up each time (plus sometimes
209 # it is impossible to write the handling explicitly in the tests).
210 self.reactor.add_tcp_client_callback(
211 "1.2.3.4", 8765, self._handle_http_replication_attempt
212 )
213
214 def create_test_json_resource(self):
215 """Overrides `HomeserverTestCase.create_test_json_resource`.
216 """
217 # We override this so that it automatically registers all the HTTP
218 # replication servlets, without having to explicitly do that in all
219 # subclasses.
220
221 resource = ReplicationRestResource(self.hs)
222
223 for servlet in self.servlets:
224 servlet(self.hs, resource)
225
226 return resource
227
228 def make_worker_hs(
229 self, worker_app: str, extra_config: dict = {}, **kwargs
230 ) -> HomeServer:
231 """Make a new worker HS instance, correctly connecting replcation
232 stream to the master HS.
233
234 Args:
235 worker_app: Type of worker, e.g. `synapse.app.federation_sender`.
236 extra_config: Any extra config to use for this instance.
237 **kwargs: Options that get passed to `self.setup_test_homeserver`,
238 useful to e.g. pass some mocks for things like `http_client`
239
240 Returns:
241 The new worker HomeServer instance.
242 """
243
244 config = self._get_worker_hs_config()
245 config["worker_app"] = worker_app
246 config.update(extra_config)
247
248 worker_hs = self.setup_test_homeserver(
249 homeserverToUse=GenericWorkerServer,
250 config=config,
251 reactor=self.reactor,
252 **kwargs
253 )
254
255 store = worker_hs.get_datastore()
256 store.db._db_pool = self.database._db_pool
257
258 repl_handler = ReplicationCommandHandler(worker_hs)
259 client = ClientReplicationStreamProtocol(
260 worker_hs, "client", "test", self.clock, repl_handler,
261 )
262 server = self.server_factory.buildProtocol(None)
263
264 client_transport = FakeTransport(server, self.reactor)
265 client.makeConnection(client_transport)
266
267 server_transport = FakeTransport(client, self.reactor)
268 server.makeConnection(server_transport)
269
270 # Set up a resource for the worker
271 resource = ReplicationRestResource(self.hs)
272
273 for servlet in self.servlets:
274 servlet(worker_hs, resource)
275
276 self._worker_hs_to_resource[worker_hs] = resource
277
278 return worker_hs
279
280 def _get_worker_hs_config(self) -> dict:
281 config = self.default_config()
282 config["worker_replication_host"] = "testserv"
283 config["worker_replication_http_port"] = "8765"
284 return config
285
286 def render_on_worker(self, worker_hs: HomeServer, request: SynapseRequest):
287 render(request, self._worker_hs_to_resource[worker_hs], self.reactor)
288
289 def replicate(self):
290 """Tell the master side of replication that something has happened, and then
291 wait for the replication to occur.
292 """
293 self.streamer.on_notifier_poke()
294 self.pump()
295
296 def _handle_http_replication_attempt(self):
297 """Handles a connection attempt to the master replication HTTP
298 listener.
299 """
300
301 # We should have at least one outbound connection attempt, where the
302 # last is one to the HTTP replication IP/port.
303 clients = self.reactor.tcpClients
304 self.assertGreaterEqual(len(clients), 1)
305 (host, port, client_factory, _timeout, _bindAddress) = clients.pop()
306 self.assertEqual(host, "1.2.3.4")
307 self.assertEqual(port, 8765)
308
309 # Set up client side protocol
310 client_protocol = client_factory.buildProtocol(None)
311
312 request_factory = OneShotRequestFactory()
313
314 # Set up the server side protocol
315 channel = _PushHTTPChannel(self.reactor)
316 channel.requestFactory = request_factory
317 channel.site = self.site
318
319 # Connect client to server and vice versa.
320 client_to_server_transport = FakeTransport(
321 channel, self.reactor, client_protocol
322 )
323 client_protocol.makeConnection(client_to_server_transport)
324
325 server_to_client_transport = FakeTransport(
326 client_protocol, self.reactor, channel
327 )
328 channel.makeConnection(server_to_client_transport)
329
330 # Note: at this point we've wired everything up, but we need to return
331 # before the data starts flowing over the connections as this is called
332 # inside `connecTCP` before the connection has been passed back to the
333 # code that requested the TCP connection.
334
335
182336 class TestReplicationDataHandler(GenericWorkerReplicationHandler):
183337 """Drop-in for ReplicationDataHandler which just collects RDATA rows"""
184338
239393 if self._pull_to_push_producer:
240394 # We need to manually stop the _PullToPushProducer.
241395 self._pull_to_push_producer.stop()
396
397 def checkPersistence(self, request, version):
398 """Check whether the connection can be re-used
399 """
400 # We hijack this to always say no for ease of wiring stuff up in
401 # `handle_http_replication_attempt`.
402 request.responseHeaders.setRawHeaders(b"connection", [b"close"])
403 return False
242404
243405
244406 class _PullToPushProducer:
118118 OTHER_USER = "@other_user:localhost"
119119
120120 # have the user join
121 inject_member_event(self.hs, self.room_id, OTHER_USER, Membership.JOIN)
121 self.get_success(
122 inject_member_event(self.hs, self.room_id, OTHER_USER, Membership.JOIN)
123 )
122124
123125 # Update existing power levels with mod at PL50
124126 pls = self.helper.get_state(
156158 # roll back all the state by de-modding the user
157159 prev_events = fork_point
158160 pls["users"][OTHER_USER] = 0
159 pl_event = inject_event(
160 self.hs,
161 prev_event_ids=prev_events,
162 type=EventTypes.PowerLevels,
163 state_key="",
164 sender=self.user_id,
165 room_id=self.room_id,
166 content=pls,
161 pl_event = self.get_success(
162 inject_event(
163 self.hs,
164 prev_event_ids=prev_events,
165 type=EventTypes.PowerLevels,
166 state_key="",
167 sender=self.user_id,
168 room_id=self.room_id,
169 content=pls,
170 )
167171 )
168172
169173 # one more bit of state that doesn't get rolled back
267271
268272 # have the users join
269273 for u in user_ids:
270 inject_member_event(self.hs, self.room_id, u, Membership.JOIN)
274 self.get_success(
275 inject_member_event(self.hs, self.room_id, u, Membership.JOIN)
276 )
271277
272278 # Update existing power levels with mod at PL50
273279 pls = self.helper.get_state(
305311 pl_events = []
306312 for u in user_ids:
307313 pls["users"][u] = 0
308 e = inject_event(
309 self.hs,
310 prev_event_ids=prev_events,
311 type=EventTypes.PowerLevels,
312 state_key="",
313 sender=self.user_id,
314 room_id=self.room_id,
315 content=pls,
314 e = self.get_success(
315 inject_event(
316 self.hs,
317 prev_event_ids=prev_events,
318 type=EventTypes.PowerLevels,
319 state_key="",
320 sender=self.user_id,
321 room_id=self.room_id,
322 content=pls,
323 )
316324 )
317325 prev_events = [e.event_id]
318326 pl_events.append(e)
433441 body = "event %i" % (self.event_count,)
434442 self.event_count += 1
435443
436 return inject_event(
437 self.hs,
438 room_id=self.room_id,
439 sender=sender,
440 type="test_event",
441 content={"body": body},
442 **kwargs
444 return self.get_success(
445 inject_event(
446 self.hs,
447 room_id=self.room_id,
448 sender=sender,
449 type="test_event",
450 content={"body": body},
451 **kwargs
452 )
443453 )
444454
445455 def _inject_state_event(
458468 if body is None:
459469 body = "state event %s" % (state_key,)
460470
461 return inject_event(
462 self.hs,
463 room_id=self.room_id,
464 sender=sender,
465 type="test_state_event",
466 state_key=state_key,
467 content={"body": body},
468 )
471 return self.get_success(
472 inject_event(
473 self.hs,
474 room_id=self.room_id,
475 sender=sender,
476 type="test_state_event",
477 state_key=state_key,
478 content={"body": body},
479 )
480 )
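(Here the same async migration shows up in reactor-driven `HomeserverTestCase` tests: the event-injection helpers now return awaitables, so their results are extracted with `self.get_success(...)` rather than used directly. A simplified, self-contained stand-in for that helper; the real one also pumps the test reactor so pending work completes:)

```python
from twisted.internet import defer
from twisted.trial import unittest


async def inject_example_event(events, body):
    # hypothetical stand-in for inject_event(); the real helper persists events
    # via the homeserver's datastore
    events.append(body)
    return "$event_%d" % len(events)


class GetSuccessStyleTestCase(unittest.TestCase):
    def get_success(self, awaitable):
        # simplified: resolve the awaitable and return its result synchronously
        return self.successResultOf(defer.ensureDeferred(awaitable))

    def test_injection(self):
        events = []
        event_id = self.get_success(inject_example_event(events, "hello"))
        self.assertEqual(event_id, "$event_1")
        self.assertEqual(events, ["hello"])
```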
0 # -*- coding: utf-8 -*-
1 # Copyright 2020 The Matrix.org Foundation C.I.C.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import logging
15
16 from synapse.api.constants import LoginType
17 from synapse.http.site import SynapseRequest
18 from synapse.rest.client.v2_alpha import register
19
20 from tests.replication._base import BaseMultiWorkerStreamTestCase
21 from tests.rest.client.v2_alpha.test_auth import DummyRecaptchaChecker
22 from tests.server import FakeChannel
23
24 logger = logging.getLogger(__name__)
25
26
27 class ClientReaderTestCase(BaseMultiWorkerStreamTestCase):
28 """Base class for tests of the replication streams"""
29
30 servlets = [register.register_servlets]
31
32 def prepare(self, reactor, clock, hs):
33 self.recaptcha_checker = DummyRecaptchaChecker(hs)
34 auth_handler = hs.get_auth_handler()
35 auth_handler.checkers[LoginType.RECAPTCHA] = self.recaptcha_checker
36
37 def _get_worker_hs_config(self) -> dict:
38 config = self.default_config()
39 config["worker_app"] = "synapse.app.client_reader"
40 config["worker_replication_host"] = "testserv"
41 config["worker_replication_http_port"] = "8765"
42 return config
43
44 def test_register_single_worker(self):
45 """Test that registration works when using a single client reader worker.
46 """
47 worker_hs = self.make_worker_hs("synapse.app.client_reader")
48
49 request_1, channel_1 = self.make_request(
50 "POST",
51 "register",
52 {"username": "user", "type": "m.login.password", "password": "bar"},
53 ) # type: SynapseRequest, FakeChannel
54 self.render_on_worker(worker_hs, request_1)
55 self.assertEqual(request_1.code, 401)
56
57 # Grab the session
58 session = channel_1.json_body["session"]
59
60 # also complete the dummy auth
61 request_2, channel_2 = self.make_request(
62 "POST", "register", {"auth": {"session": session, "type": "m.login.dummy"}}
63 ) # type: SynapseRequest, FakeChannel
64 self.render_on_worker(worker_hs, request_2)
65 self.assertEqual(request_2.code, 200)
66
67 # We're given a registered user.
68 self.assertEqual(channel_2.json_body["user_id"], "@user:test")
69
70 def test_register_multi_worker(self):
71 """Test that registration works when using multiple client reader workers.
72 """
73 worker_hs_1 = self.make_worker_hs("synapse.app.client_reader")
74 worker_hs_2 = self.make_worker_hs("synapse.app.client_reader")
75
76 request_1, channel_1 = self.make_request(
77 "POST",
78 "register",
79 {"username": "user", "type": "m.login.password", "password": "bar"},
80 ) # type: SynapseRequest, FakeChannel
81 self.render_on_worker(worker_hs_1, request_1)
82 self.assertEqual(request_1.code, 401)
83
84 # Grab the session
85 session = channel_1.json_body["session"]
86
87 # also complete the dummy auth
88 request_2, channel_2 = self.make_request(
89 "POST", "register", {"auth": {"session": session, "type": "m.login.dummy"}}
90 ) # type: SynapseRequest, FakeChannel
91 self.render_on_worker(worker_hs_2, request_2)
92 self.assertEqual(request_2.code, 200)
93
94 # We're given a registered user.
95 self.assertEqual(channel_2.json_body["user_id"], "@user:test")
3131
3232 def make_homeserver(self, reactor, clock):
3333 hs = self.setup_test_homeserver(homeserverToUse=GenericWorkerServer)
34
3435 return hs
3536
3637 def test_federation_ack_sent(self):
0 # -*- coding: utf-8 -*-
1 # Copyright 2020 The Matrix.org Foundation C.I.C.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import logging
15
16 from mock import Mock
17
18 from twisted.internet import defer
19
20 from synapse.api.constants import EventTypes, Membership
21 from synapse.events.builder import EventBuilderFactory
22 from synapse.rest.admin import register_servlets_for_client_rest_resource
23 from synapse.rest.client.v1 import login, room
24 from synapse.types import UserID
25
26 from tests.replication._base import BaseMultiWorkerStreamTestCase
27
28 logger = logging.getLogger(__name__)
29
30
31 class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
32 servlets = [
33 login.register_servlets,
34 register_servlets_for_client_rest_resource,
35 room.register_servlets,
36 ]
37
38 def default_config(self):
39 conf = super().default_config()
40 conf["send_federation"] = False
41 return conf
42
43 def test_send_event_single_sender(self):
44 """Test that using a single federation sender worker correctly sends a
45 new event.
46 """
47 mock_client = Mock(spec=["put_json"])
48 mock_client.put_json.side_effect = lambda *_, **__: defer.succeed({})
49
50 self.make_worker_hs(
51 "synapse.app.federation_sender",
52 {"send_federation": True},
53 http_client=mock_client,
54 )
55
56 user = self.register_user("user", "pass")
57 token = self.login("user", "pass")
58
59 room = self.create_room_with_remote_server(user, token)
60
61 mock_client.put_json.reset_mock()
62
63 self.create_and_send_event(room, UserID.from_string(user))
64 self.replicate()
65
66 # Assert that the event was sent out over federation.
67 mock_client.put_json.assert_called()
68 self.assertEqual(mock_client.put_json.call_args[0][0], "other_server")
69 self.assertTrue(mock_client.put_json.call_args[1]["data"].get("pdus"))
70
71 def test_send_event_sharded(self):
72 """Test that using two federation sender workers correctly sends
73 new events.
74 """
75 mock_client1 = Mock(spec=["put_json"])
76 mock_client1.put_json.side_effect = lambda *_, **__: defer.succeed({})
77 self.make_worker_hs(
78 "synapse.app.federation_sender",
79 {
80 "send_federation": True,
81 "worker_name": "sender1",
82 "federation_sender_instances": ["sender1", "sender2"],
83 },
84 http_client=mock_client1,
85 )
86
87 mock_client2 = Mock(spec=["put_json"])
88 mock_client2.put_json.side_effect = lambda *_, **__: defer.succeed({})
89 self.make_worker_hs(
90 "synapse.app.federation_sender",
91 {
92 "send_federation": True,
93 "worker_name": "sender2",
94 "federation_sender_instances": ["sender1", "sender2"],
95 },
96 http_client=mock_client2,
97 )
98
99 user = self.register_user("user2", "pass")
100 token = self.login("user2", "pass")
101
102 sent_on_1 = False
103 sent_on_2 = False
104 for i in range(20):
105 server_name = "other_server_%d" % (i,)
106 room = self.create_room_with_remote_server(user, token, server_name)
107 mock_client1.reset_mock() # type: ignore[attr-defined]
108 mock_client2.reset_mock() # type: ignore[attr-defined]
109
110 self.create_and_send_event(room, UserID.from_string(user))
111 self.replicate()
112
113 if mock_client1.put_json.called:
114 sent_on_1 = True
115 mock_client2.put_json.assert_not_called()
116 self.assertEqual(mock_client1.put_json.call_args[0][0], server_name)
117 self.assertTrue(mock_client1.put_json.call_args[1]["data"].get("pdus"))
118 elif mock_client2.put_json.called:
119 sent_on_2 = True
120 mock_client1.put_json.assert_not_called()
121 self.assertEqual(mock_client2.put_json.call_args[0][0], server_name)
122 self.assertTrue(mock_client2.put_json.call_args[1]["data"].get("pdus"))
123 else:
124 raise AssertionError(
125 "Expected send transaction from one or the other sender"
126 )
127
128 if sent_on_1 and sent_on_2:
129 break
130
131 self.assertTrue(sent_on_1)
132 self.assertTrue(sent_on_2)
133
134 def test_send_typing_sharded(self):
135 """Test that using two federation sender workers correctly sends
136 new typing EDUs.
137 """
138 mock_client1 = Mock(spec=["put_json"])
139 mock_client1.put_json.side_effect = lambda *_, **__: defer.succeed({})
140 self.make_worker_hs(
141 "synapse.app.federation_sender",
142 {
143 "send_federation": True,
144 "worker_name": "sender1",
145 "federation_sender_instances": ["sender1", "sender2"],
146 },
147 http_client=mock_client1,
148 )
149
150 mock_client2 = Mock(spec=["put_json"])
151 mock_client2.put_json.side_effect = lambda *_, **__: defer.succeed({})
152 self.make_worker_hs(
153 "synapse.app.federation_sender",
154 {
155 "send_federation": True,
156 "worker_name": "sender2",
157 "federation_sender_instances": ["sender1", "sender2"],
158 },
159 http_client=mock_client2,
160 )
161
162 user = self.register_user("user3", "pass")
163 token = self.login("user3", "pass")
164
165 typing_handler = self.hs.get_typing_handler()
166
167 sent_on_1 = False
168 sent_on_2 = False
169 for i in range(20):
170 server_name = "other_server_%d" % (i,)
171 room = self.create_room_with_remote_server(user, token, server_name)
172 mock_client1.reset_mock() # type: ignore[attr-defined]
173 mock_client2.reset_mock() # type: ignore[attr-defined]
174
175 self.get_success(
176 typing_handler.started_typing(
177 target_user=UserID.from_string(user),
178 auth_user=UserID.from_string(user),
179 room_id=room,
180 timeout=20000,
181 )
182 )
183
184 self.replicate()
185
186 if mock_client1.put_json.called:
187 sent_on_1 = True
188 mock_client2.put_json.assert_not_called()
189 self.assertEqual(mock_client1.put_json.call_args[0][0], server_name)
190 self.assertTrue(mock_client1.put_json.call_args[1]["data"].get("edus"))
191 elif mock_client2.put_json.called:
192 sent_on_2 = True
193 mock_client1.put_json.assert_not_called()
194 self.assertEqual(mock_client2.put_json.call_args[0][0], server_name)
195 self.assertTrue(mock_client2.put_json.call_args[1]["data"].get("edus"))
196 else:
197 raise AssertionError(
198 "Expected send transaction from one or the other sender"
199 )
200
201 if sent_on_1 and sent_on_2:
202 break
203
204 self.assertTrue(sent_on_1)
205 self.assertTrue(sent_on_2)
206
207 def create_room_with_remote_server(self, user, token, remote_server="other_server"):
208 room = self.helper.create_room_as(user, tok=token)
209 store = self.hs.get_datastore()
210 federation = self.hs.get_handlers().federation_handler
211
212 prev_event_ids = self.get_success(store.get_latest_event_ids_in_room(room))
213 room_version = self.get_success(store.get_room_version(room))
214
215 factory = EventBuilderFactory(self.hs)
216 factory.hostname = remote_server
217
218 user_id = UserID("user", remote_server).to_string()
219
220 event_dict = {
221 "type": EventTypes.Member,
222 "state_key": user_id,
223 "content": {"membership": Membership.JOIN},
224 "sender": user_id,
225 "room_id": room,
226 }
227
228 builder = factory.for_room_version(room_version, event_dict)
229 join_event = self.get_success(builder.build(prev_event_ids))
230
231 self.get_success(federation.on_send_join_request(remote_server, join_event))
232 self.replicate()
233
234 return room
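(The sharded-sender tests above drive two worker homeservers whose only difference is the worker name; both list the full set of sender instances so the outbound destinations are split between them. The per-worker settings, expressed as the `extra_config` dicts handed to `make_worker_hs`, sketched with the same values used in the tests:)

```python
# extra_config passed to make_worker_hs() for each federation sender worker in
# the tests above; every instance lists the full set of sender names so the
# workers agree on how outbound traffic is divided between them.
sender_instances = ["sender1", "sender2"]

sender1_config = {
    "send_federation": True,
    "worker_name": "sender1",
    "federation_sender_instances": sender_instances,
}

sender2_config = {
    "send_federation": True,
    "worker_name": "sender2",
    "federation_sender_instances": sender_instances,
}
```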
0 # -*- coding: utf-8 -*-
1 # Copyright 2020 The Matrix.org Foundation C.I.C.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import logging
15
16 from mock import Mock
17
18 from twisted.internet import defer
19
20 from synapse.rest import admin
21 from synapse.rest.client.v1 import login, room
22
23 from tests.replication._base import BaseMultiWorkerStreamTestCase
24
25 logger = logging.getLogger(__name__)
26
27
28 class PusherShardTestCase(BaseMultiWorkerStreamTestCase):
29 """Checks pusher sharding works
30 """
31
32 servlets = [
33 admin.register_servlets_for_client_rest_resource,
34 room.register_servlets,
35 login.register_servlets,
36 ]
37
38 def prepare(self, reactor, clock, hs):
39 # Register a user who sends a message that we'll get notified about
40 self.other_user_id = self.register_user("otheruser", "pass")
41 self.other_access_token = self.login("otheruser", "pass")
42
43 def default_config(self):
44 conf = super().default_config()
45 conf["start_pushers"] = False
46 return conf
47
48 def _create_pusher_and_send_msg(self, localpart):
49 # Create a user that will get push notifications
50 user_id = self.register_user(localpart, "pass")
51 access_token = self.login(localpart, "pass")
52
53 # Register a pusher
54 user_dict = self.get_success(
55 self.hs.get_datastore().get_user_by_access_token(access_token)
56 )
57 token_id = user_dict["token_id"]
58
59 self.get_success(
60 self.hs.get_pusherpool().add_pusher(
61 user_id=user_id,
62 access_token=token_id,
63 kind="http",
64 app_id="m.http",
65 app_display_name="HTTP Push Notifications",
66 device_display_name="pushy push",
67 pushkey="a@example.com",
68 lang=None,
69 data={"url": "https://push.example.com/push"},
70 )
71 )
72
73 self.pump()
74
75 # Create a room
76 room = self.helper.create_room_as(user_id, tok=access_token)
77
78 # The other user joins
79 self.helper.join(
80 room=room, user=self.other_user_id, tok=self.other_access_token
81 )
82
83 # The other user sends some messages
84 response = self.helper.send(room, body="Hi!", tok=self.other_access_token)
85 event_id = response["event_id"]
86
87 return event_id
88
89 def test_send_push_single_worker(self):
90 """Test that registration works when using a pusher worker.
91 """
92 http_client_mock = Mock(spec_set=["post_json_get_json"])
93 http_client_mock.post_json_get_json.side_effect = lambda *_, **__: defer.succeed(
94 {}
95 )
96
97 self.make_worker_hs(
98 "synapse.app.pusher",
99 {"start_pushers": True},
100 proxied_http_client=http_client_mock,
101 )
102
103 event_id = self._create_pusher_and_send_msg("user")
104
105 # Advance time a bit, so the pusher will register something has happened
106 self.pump()
107
108 http_client_mock.post_json_get_json.assert_called_once()
109 self.assertEqual(
110 http_client_mock.post_json_get_json.call_args[0][0],
111 "https://push.example.com/push",
112 )
113 self.assertEqual(
114 event_id,
115 http_client_mock.post_json_get_json.call_args[0][1]["notification"][
116 "event_id"
117 ],
118 )
119
120 def test_send_push_multiple_workers(self):
121 """Test that registration works when using sharded pusher workers.
122 """
123 http_client_mock1 = Mock(spec_set=["post_json_get_json"])
124 http_client_mock1.post_json_get_json.side_effect = lambda *_, **__: defer.succeed(
125 {}
126 )
127
128 self.make_worker_hs(
129 "synapse.app.pusher",
130 {
131 "start_pushers": True,
132 "worker_name": "pusher1",
133 "pusher_instances": ["pusher1", "pusher2"],
134 },
135 proxied_http_client=http_client_mock1,
136 )
137
138 http_client_mock2 = Mock(spec_set=["post_json_get_json"])
139 http_client_mock2.post_json_get_json.side_effect = lambda *_, **__: defer.succeed(
140 {}
141 )
142
143 self.make_worker_hs(
144 "synapse.app.pusher",
145 {
146 "start_pushers": True,
147 "worker_name": "pusher2",
148 "pusher_instances": ["pusher1", "pusher2"],
149 },
150 proxied_http_client=http_client_mock2,
151 )
152
153 # We choose a user name that we know should go to pusher1.
154 event_id = self._create_pusher_and_send_msg("user2")
155
156 # Advance time a bit, so the pusher will register something has happened
157 self.pump()
158
159 http_client_mock1.post_json_get_json.assert_called_once()
160 http_client_mock2.post_json_get_json.assert_not_called()
161 self.assertEqual(
162 http_client_mock1.post_json_get_json.call_args[0][0],
163 "https://push.example.com/push",
164 )
165 self.assertEqual(
166 event_id,
167 http_client_mock1.post_json_get_json.call_args[0][1]["notification"][
168 "event_id"
169 ],
170 )
171
172 http_client_mock1.post_json_get_json.reset_mock()
173 http_client_mock2.post_json_get_json.reset_mock()
174
175 # Now we choose a user name that we know should go to pusher2.
176 event_id = self._create_pusher_and_send_msg("user4")
177
178 # Advance time a bit, so the pusher will register something has happened
179 self.pump()
180
181 http_client_mock1.post_json_get_json.assert_not_called()
182 http_client_mock2.post_json_get_json.assert_called_once()
183 self.assertEqual(
184 http_client_mock2.post_json_get_json.call_args[0][0],
185 "https://push.example.com/push",
186 )
187 self.assertEqual(
188 event_id,
189 http_client_mock2.post_json_get_json.call_args[0][1]["notification"][
190 "event_id"
191 ],
192 )
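
For reference, the assertions in the pusher tests above pin down the shape of the outgoing HTTP push call: the first positional argument to the mocked `post_json_get_json` is the `url` configured in the pusher's `data`, and the JSON body carries a `notification` object containing the ID of the event being notified about. The sketch below only restates that asserted shape; real push payloads contain additional fields not shown here.

def example_push_call(event_id: str):
    # The (url, body) pair that post_json_get_json receives, as far as the
    # tests above check.
    url = "https://push.example.com/push"
    body = {"notification": {"event_id": event_id}}
    return url, body


url, body = example_push_call("$someevent:test")
assert body["notification"]["event_id"] == "$someevent:test"
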
0 # -*- coding: utf-8 -*-
1 # Copyright 2020 Dirk Klimpel
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import urllib.parse
17 from typing import List, Optional
18
19 from mock import Mock
20
21 import synapse.rest.admin
22 from synapse.api.errors import Codes
23 from synapse.rest.client.v1 import directory, events, login, room
24
25 from tests import unittest
26
27 """Tests admin REST events for /rooms paths."""
28
29
30 class ShutdownRoomTestCase(unittest.HomeserverTestCase):
31 servlets = [
32 synapse.rest.admin.register_servlets_for_client_rest_resource,
33 login.register_servlets,
34 events.register_servlets,
35 room.register_servlets,
36 room.register_deprecated_servlets,
37 ]
38
39 def prepare(self, reactor, clock, hs):
40 self.event_creation_handler = hs.get_event_creation_handler()
41 hs.config.user_consent_version = "1"
42
43 consent_uri_builder = Mock()
44 consent_uri_builder.build_user_consent_uri.return_value = "http://example.com"
45 self.event_creation_handler._consent_uri_builder = consent_uri_builder
46
47 self.store = hs.get_datastore()
48
49 self.admin_user = self.register_user("admin", "pass", admin=True)
50 self.admin_user_tok = self.login("admin", "pass")
51
52 self.other_user = self.register_user("user", "pass")
53 self.other_user_token = self.login("user", "pass")
54
55 # Mark the admin user as having consented
56 self.get_success(self.store.user_set_consent_version(self.admin_user, "1"))
57
58 def test_shutdown_room_consent(self):
59 """Test that we can shutdown rooms with local users who have not
60 yet accepted the privacy policy. This used to fail when we tried to
61 force part the user from the old room.
62 """
63 self.event_creation_handler._block_events_without_consent_error = None
64
65 room_id = self.helper.create_room_as(self.other_user, tok=self.other_user_token)
66
67 # Assert one user in room
68 users_in_room = self.get_success(self.store.get_users_in_room(room_id))
69 self.assertEqual([self.other_user], users_in_room)
70
71 # Enable require consent to send events
72 self.event_creation_handler._block_events_without_consent_error = "Error"
73
74 # Assert that the user is getting consent error
75 self.helper.send(
76 room_id, body="foo", tok=self.other_user_token, expect_code=403
77 )
78
79 # Test that the admin can still send shutdown
80 url = "admin/shutdown_room/" + room_id
81 request, channel = self.make_request(
82 "POST",
83 url.encode("ascii"),
84 json.dumps({"new_room_user_id": self.admin_user}),
85 access_token=self.admin_user_tok,
86 )
87 self.render(request)
88
89 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
90
91 # Assert there is now no longer anyone in the room
92 users_in_room = self.get_success(self.store.get_users_in_room(room_id))
93 self.assertEqual([], users_in_room)
94
95 def test_shutdown_room_block_peek(self):
96 """Test that a world_readable room can no longer be peeked into after
97 it has been shut down.
98 """
99
100 self.event_creation_handler._block_events_without_consent_error = None
101
102 room_id = self.helper.create_room_as(self.other_user, tok=self.other_user_token)
103
104 # Enable world readable
105 url = "rooms/%s/state/m.room.history_visibility" % (room_id,)
106 request, channel = self.make_request(
107 "PUT",
108 url.encode("ascii"),
109 json.dumps({"history_visibility": "world_readable"}),
110 access_token=self.other_user_token,
111 )
112 self.render(request)
113 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
114
115 # Test that the admin can still send shutdown
116 url = "admin/shutdown_room/" + room_id
117 request, channel = self.make_request(
118 "POST",
119 url.encode("ascii"),
120 json.dumps({"new_room_user_id": self.admin_user}),
121 access_token=self.admin_user_tok,
122 )
123 self.render(request)
124
125 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
126
127 # Assert we can no longer peek into the room
128 self._assert_peek(room_id, expect_code=403)
129
130 def _assert_peek(self, room_id, expect_code):
131 """Assert that the admin user can (or cannot) peek into the room.
132 """
133
134 url = "rooms/%s/initialSync" % (room_id,)
135 request, channel = self.make_request(
136 "GET", url.encode("ascii"), access_token=self.admin_user_tok
137 )
138 self.render(request)
139 self.assertEqual(
140 expect_code, int(channel.result["code"]), msg=channel.result["body"]
141 )
142
143 url = "events?timeout=0&room_id=" + room_id
144 request, channel = self.make_request(
145 "GET", url.encode("ascii"), access_token=self.admin_user_tok
146 )
147 self.render(request)
148 self.assertEqual(
149 expect_code, int(channel.result["code"]), msg=channel.result["body"]
150 )
151
152
153 class PurgeRoomTestCase(unittest.HomeserverTestCase):
154 """Test /purge_room admin API.
155 """
156
157 servlets = [
158 synapse.rest.admin.register_servlets,
159 login.register_servlets,
160 room.register_servlets,
161 ]
162
163 def prepare(self, reactor, clock, hs):
164 self.store = hs.get_datastore()
165
166 self.admin_user = self.register_user("admin", "pass", admin=True)
167 self.admin_user_tok = self.login("admin", "pass")
168
169 def test_purge_room(self):
170 room_id = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
171
172 # All users have to have left the room.
173 self.helper.leave(room_id, user=self.admin_user, tok=self.admin_user_tok)
174
175 url = "/_synapse/admin/v1/purge_room"
176 request, channel = self.make_request(
177 "POST",
178 url.encode("ascii"),
179 {"room_id": room_id},
180 access_token=self.admin_user_tok,
181 )
182 self.render(request)
183
184 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
185
186 # Test that the following tables have been purged of all rows related to the room.
187 for table in (
188 "current_state_events",
189 "event_backward_extremities",
190 "event_forward_extremities",
191 "event_json",
192 "event_push_actions",
193 "event_search",
194 "events",
195 "group_rooms",
196 "public_room_list_stream",
197 "receipts_graph",
198 "receipts_linearized",
199 "room_aliases",
200 "room_depth",
201 "room_memberships",
202 "room_stats_state",
203 "room_stats_current",
204 "room_stats_historical",
205 "room_stats_earliest_token",
206 "rooms",
207 "stream_ordering_to_exterm",
208 "users_in_public_rooms",
209 "users_who_share_private_rooms",
210 "appservice_room_list",
211 "e2e_room_keys",
212 "event_push_summary",
213 "pusher_throttle",
214 "group_summary_rooms",
215 "room_account_data",
216 "room_tags",
217 # "state_groups", # Current impl leaves orphaned state groups around.
218 "state_groups_state",
219 ):
220 count = self.get_success(
221 self.store.db.simple_select_one_onecol(
222 table=table,
223 keyvalues={"room_id": room_id},
224 retcol="COUNT(*)",
225 desc="test_purge_room",
226 )
227 )
228
229 self.assertEqual(count, 0, msg="Rows not purged in {}".format(table))
230
231
232 class RoomTestCase(unittest.HomeserverTestCase):
233 """Test /room admin API.
234 """
235
236 servlets = [
237 synapse.rest.admin.register_servlets,
238 login.register_servlets,
239 room.register_servlets,
240 directory.register_servlets,
241 ]
242
243 def prepare(self, reactor, clock, hs):
244 self.store = hs.get_datastore()
245
246 # Create user
247 self.admin_user = self.register_user("admin", "pass", admin=True)
248 self.admin_user_tok = self.login("admin", "pass")
249
250 def test_list_rooms(self):
251 """Test that we can list rooms"""
252 # Create 3 test rooms
253 total_rooms = 3
254 room_ids = []
255 for x in range(total_rooms):
256 room_id = self.helper.create_room_as(
257 self.admin_user, tok=self.admin_user_tok
258 )
259 room_ids.append(room_id)
260
261 # Request the list of rooms
262 url = "/_synapse/admin/v1/rooms"
263 request, channel = self.make_request(
264 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
265 )
266 self.render(request)
267
268 # Check request completed successfully
269 self.assertEqual(200, int(channel.code), msg=channel.json_body)
270
271 # Check that response json body contains a "rooms" key
272 self.assertTrue(
273 "rooms" in channel.json_body,
274 msg="Response body does not " "contain a 'rooms' key",
275 )
276
277 # Check that 3 rooms were returned
278 self.assertEqual(3, len(channel.json_body["rooms"]), msg=channel.json_body)
279
280 # Check their room_ids match
281 returned_room_ids = [room["room_id"] for room in channel.json_body["rooms"]]
282 self.assertEqual(room_ids, returned_room_ids)
283
284 # Check that all fields are available
285 for r in channel.json_body["rooms"]:
286 self.assertIn("name", r)
287 self.assertIn("canonical_alias", r)
288 self.assertIn("joined_members", r)
289 self.assertIn("joined_local_members", r)
290 self.assertIn("version", r)
291 self.assertIn("creator", r)
292 self.assertIn("encryption", r)
293 self.assertIn("federatable", r)
294 self.assertIn("public", r)
295 self.assertIn("join_rules", r)
296 self.assertIn("guest_access", r)
297 self.assertIn("history_visibility", r)
298 self.assertIn("state_events", r)
299
300 # Check that the correct number of total rooms was returned
301 self.assertEqual(channel.json_body["total_rooms"], total_rooms)
302
303 # Check that the offset is correct
304 # Should be 0 as we aren't paginating
305 self.assertEqual(channel.json_body["offset"], 0)
306
307 # Check that the prev_batch parameter is not present
308 self.assertNotIn("prev_batch", channel.json_body)
309
310 # We shouldn't receive a next token here as there's no further rooms to show
311 self.assertNotIn("next_batch", channel.json_body)
312
313 def test_list_rooms_pagination(self):
314 """Test that we can get a full list of rooms through pagination"""
315 # Create 5 test rooms
316 total_rooms = 5
317 room_ids = []
318 for x in range(total_rooms):
319 room_id = self.helper.create_room_as(
320 self.admin_user, tok=self.admin_user_tok
321 )
322 room_ids.append(room_id)
323
324 # Set the name of the rooms so we get a consistent returned ordering
325 for idx, room_id in enumerate(room_ids):
326 self.helper.send_state(
327 room_id, "m.room.name", {"name": str(idx)}, tok=self.admin_user_tok,
328 )
329
330 # Request the list of rooms
331 returned_room_ids = []
332 start = 0
333 limit = 2
334
335 run_count = 0
336 should_repeat = True
337 while should_repeat:
338 run_count += 1
339
340 url = "/_synapse/admin/v1/rooms?from=%d&limit=%d&order_by=%s" % (
341 start,
342 limit,
343 "name",
344 )
345 request, channel = self.make_request(
346 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
347 )
348 self.render(request)
349 self.assertEqual(
350 200, int(channel.result["code"]), msg=channel.result["body"]
351 )
352
353 self.assertTrue("rooms" in channel.json_body)
354 for r in channel.json_body["rooms"]:
355 returned_room_ids.append(r["room_id"])
356
357 # Check that the correct number of total rooms was returned
358 self.assertEqual(channel.json_body["total_rooms"], total_rooms)
359
360 # Check that the offset is correct
361 # We're only getting 2 rooms per page, so the offset should be 2 * (run_count - 1)
362 self.assertEqual(channel.json_body["offset"], 2 * (run_count - 1))
363
364 if run_count > 1:
365 # Check the value of prev_batch is correct
366 self.assertEqual(channel.json_body["prev_batch"], 2 * (run_count - 2))
367
368 if "next_batch" not in channel.json_body:
369 # We have reached the end of the list
370 should_repeat = False
371 else:
372 # Make another query with an updated start value
373 start = channel.json_body["next_batch"]
374
375 # We should've queried the endpoint 3 times
376 self.assertEqual(
377 run_count,
378 3,
379 msg="Should've queried 3 times for 5 rooms with limit 2 per query",
380 )
381
382 # Check that we received all of the room ids
383 self.assertEqual(room_ids, returned_room_ids)
384
385 url = "/_synapse/admin/v1/rooms?from=%d&limit=%d" % (start, limit)
386 request, channel = self.make_request(
387 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
388 )
389 self.render(request)
390 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
391
392 def test_correct_room_attributes(self):
393 """Test the correct attributes for a room are returned"""
394 # Create a test room
395 room_id = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
396
397 test_alias = "#test:test"
398 test_room_name = "something"
399
400 # Have another user join the room
401 user_2 = self.register_user("user4", "pass")
402 user_tok_2 = self.login("user4", "pass")
403 self.helper.join(room_id, user_2, tok=user_tok_2)
404
405 # Create a new alias to this room
406 url = "/_matrix/client/r0/directory/room/%s" % (urllib.parse.quote(test_alias),)
407 request, channel = self.make_request(
408 "PUT",
409 url.encode("ascii"),
410 {"room_id": room_id},
411 access_token=self.admin_user_tok,
412 )
413 self.render(request)
414 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
415
416 # Set this new alias as the canonical alias for this room
417 self.helper.send_state(
418 room_id,
419 "m.room.aliases",
420 {"aliases": [test_alias]},
421 tok=self.admin_user_tok,
422 state_key="test",
423 )
424 self.helper.send_state(
425 room_id,
426 "m.room.canonical_alias",
427 {"alias": test_alias},
428 tok=self.admin_user_tok,
429 )
430
431 # Set a name for the room
432 self.helper.send_state(
433 room_id, "m.room.name", {"name": test_room_name}, tok=self.admin_user_tok,
434 )
435
436 # Request the list of rooms
437 url = "/_synapse/admin/v1/rooms"
438 request, channel = self.make_request(
439 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
440 )
441 self.render(request)
442 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
443
444 # Check that rooms were returned
445 self.assertTrue("rooms" in channel.json_body)
446 rooms = channel.json_body["rooms"]
447
448 # Check that only one room was returned
449 self.assertEqual(len(rooms), 1)
450
451 # And that the value of the total_rooms key was correct
452 self.assertEqual(channel.json_body["total_rooms"], 1)
453
454 # Check that the offset is correct
455 # We're not paginating, so should be 0
456 self.assertEqual(channel.json_body["offset"], 0)
457
458 # Check that there is no `prev_batch`
459 self.assertNotIn("prev_batch", channel.json_body)
460
461 # Check that there is no `next_batch`
462 self.assertNotIn("next_batch", channel.json_body)
463
464 # Check that all provided attributes are set
465 r = rooms[0]
466 self.assertEqual(room_id, r["room_id"])
467 self.assertEqual(test_room_name, r["name"])
468 self.assertEqual(test_alias, r["canonical_alias"])
469
470 def test_room_list_sort_order(self):
471 """Test room list sort ordering. alphabetical name versus number of members,
472 reversing the order, etc.
473 """
474
475 def _set_canonical_alias(room_id: str, test_alias: str, admin_user_tok: str):
476 # Create a new alias to this room
477 url = "/_matrix/client/r0/directory/room/%s" % (
478 urllib.parse.quote(test_alias),
479 )
480 request, channel = self.make_request(
481 "PUT",
482 url.encode("ascii"),
483 {"room_id": room_id},
484 access_token=admin_user_tok,
485 )
486 self.render(request)
487 self.assertEqual(
488 200, int(channel.result["code"]), msg=channel.result["body"]
489 )
490
491 # Set this new alias as the canonical alias for this room
492 self.helper.send_state(
493 room_id,
494 "m.room.aliases",
495 {"aliases": [test_alias]},
496 tok=admin_user_tok,
497 state_key="test",
498 )
499 self.helper.send_state(
500 room_id,
501 "m.room.canonical_alias",
502 {"alias": test_alias},
503 tok=admin_user_tok,
504 )
505
506 def _order_test(
507 order_type: str, expected_room_list: List[str], reverse: bool = False,
508 ):
509 """Request the list of rooms in a certain order. Assert that order is what
510 we expect
511
512 Args:
513 order_type: The type of ordering to give the server
514 expected_room_list: The list of room_ids in the order we expect to get
515 back from the server
516 """
517 # Request the list of rooms in the given order
518 url = "/_synapse/admin/v1/rooms?order_by=%s" % (order_type,)
519 if reverse:
520 url += "&dir=b"
521 request, channel = self.make_request(
522 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
523 )
524 self.render(request)
525 self.assertEqual(200, channel.code, msg=channel.json_body)
526
527 # Check that rooms were returned
528 self.assertTrue("rooms" in channel.json_body)
529 rooms = channel.json_body["rooms"]
530
531 # Check for the correct total_rooms value
532 self.assertEqual(channel.json_body["total_rooms"], 3)
533
534 # Check that the offset is correct
535 # We're not paginating, so should be 0
536 self.assertEqual(channel.json_body["offset"], 0)
537
538 # Check that there is no `prev_batch`
539 self.assertNotIn("prev_batch", channel.json_body)
540
541 # Check that there is no `next_batch`
542 self.assertNotIn("next_batch", channel.json_body)
543
544 # Check that rooms were returned in the expected order
545 returned_order = [r["room_id"] for r in rooms]
546 self.assertListEqual(expected_room_list, returned_order) # order is checked
547
548 # Create 3 test rooms
549 room_id_1 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
550 room_id_2 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
551 room_id_3 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
552
553 # Set room names in alphabetical order. room 1 -> A, 2 -> B, 3 -> C
554 self.helper.send_state(
555 room_id_1, "m.room.name", {"name": "A"}, tok=self.admin_user_tok,
556 )
557 self.helper.send_state(
558 room_id_2, "m.room.name", {"name": "B"}, tok=self.admin_user_tok,
559 )
560 self.helper.send_state(
561 room_id_3, "m.room.name", {"name": "C"}, tok=self.admin_user_tok,
562 )
563
564 # Set the rooms' canonical aliases
565 _set_canonical_alias(room_id_1, "#A_alias:test", self.admin_user_tok)
566 _set_canonical_alias(room_id_2, "#B_alias:test", self.admin_user_tok)
567 _set_canonical_alias(room_id_3, "#C_alias:test", self.admin_user_tok)
568
569 # Give each room a different member count: room 1 -> 1 member, 2 -> 2, 3 -> 3
570 user_1 = self.register_user("bob1", "pass")
571 user_1_tok = self.login("bob1", "pass")
572 self.helper.join(room_id_2, user_1, tok=user_1_tok)
573
574 user_2 = self.register_user("bob2", "pass")
575 user_2_tok = self.login("bob2", "pass")
576 self.helper.join(room_id_3, user_2, tok=user_2_tok)
577
578 user_3 = self.register_user("bob3", "pass")
579 user_3_tok = self.login("bob3", "pass")
580 self.helper.join(room_id_3, user_3, tok=user_3_tok)
581
582 # Test different sort orders, with forward and reverse directions
583 _order_test("name", [room_id_1, room_id_2, room_id_3])
584 _order_test("name", [room_id_3, room_id_2, room_id_1], reverse=True)
585
586 _order_test("canonical_alias", [room_id_1, room_id_2, room_id_3])
587 _order_test("canonical_alias", [room_id_3, room_id_2, room_id_1], reverse=True)
588
589 _order_test("joined_members", [room_id_3, room_id_2, room_id_1])
590 _order_test("joined_members", [room_id_1, room_id_2, room_id_3], reverse=True)
591
592 _order_test("joined_local_members", [room_id_3, room_id_2, room_id_1])
593 _order_test(
594 "joined_local_members", [room_id_1, room_id_2, room_id_3], reverse=True
595 )
596
597 _order_test("version", [room_id_1, room_id_2, room_id_3])
598 _order_test("version", [room_id_1, room_id_2, room_id_3], reverse=True)
599
600 _order_test("creator", [room_id_1, room_id_2, room_id_3])
601 _order_test("creator", [room_id_1, room_id_2, room_id_3], reverse=True)
602
603 _order_test("encryption", [room_id_1, room_id_2, room_id_3])
604 _order_test("encryption", [room_id_1, room_id_2, room_id_3], reverse=True)
605
606 _order_test("federatable", [room_id_1, room_id_2, room_id_3])
607 _order_test("federatable", [room_id_1, room_id_2, room_id_3], reverse=True)
608
609 _order_test("public", [room_id_1, room_id_2, room_id_3])
610 # SQLite and PostgreSQL sort this differently, so the reversed check is skipped
611 # _order_test("public", [room_id_3, room_id_2, room_id_1], reverse=True)
612
613 _order_test("join_rules", [room_id_1, room_id_2, room_id_3])
614 _order_test("join_rules", [room_id_1, room_id_2, room_id_3], reverse=True)
615
616 _order_test("guest_access", [room_id_1, room_id_2, room_id_3])
617 _order_test("guest_access", [room_id_1, room_id_2, room_id_3], reverse=True)
618
619 _order_test("history_visibility", [room_id_1, room_id_2, room_id_3])
620 _order_test(
621 "history_visibility", [room_id_1, room_id_2, room_id_3], reverse=True
622 )
623
624 _order_test("state_events", [room_id_3, room_id_2, room_id_1])
625 _order_test("state_events", [room_id_1, room_id_2, room_id_3], reverse=True)
626
627 def test_search_term(self):
628 """Test that searching for a room works correctly"""
629 # Create two test rooms
630 room_id_1 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
631 room_id_2 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
632
633 room_name_1 = "something"
634 room_name_2 = "else"
635
636 # Set the name for each room
637 self.helper.send_state(
638 room_id_1, "m.room.name", {"name": room_name_1}, tok=self.admin_user_tok,
639 )
640 self.helper.send_state(
641 room_id_2, "m.room.name", {"name": room_name_2}, tok=self.admin_user_tok,
642 )
643
644 def _search_test(
645 expected_room_id: Optional[str],
646 search_term: str,
647 expected_http_code: int = 200,
648 ):
649 """Search for a room and check that the returned room's id is a match
650
651 Args:
652 expected_room_id: The room_id expected to be returned by the API. Set
653 to None to expect zero results for the search
654 search_term: The term to search for room names with
655 expected_http_code: The expected http code for the request
656 """
657 url = "/_synapse/admin/v1/rooms?search_term=%s" % (search_term,)
658 request, channel = self.make_request(
659 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
660 )
661 self.render(request)
662 self.assertEqual(expected_http_code, channel.code, msg=channel.json_body)
663
664 if expected_http_code != 200:
665 return
666
667 # Check that rooms were returned
668 self.assertTrue("rooms" in channel.json_body)
669 rooms = channel.json_body["rooms"]
670
671 # Check that the expected number of rooms were returned
672 expected_room_count = 1 if expected_room_id else 0
673 self.assertEqual(len(rooms), expected_room_count)
674 self.assertEqual(channel.json_body["total_rooms"], expected_room_count)
675
676 # Check that the offset is correct
677 # We're not paginating, so should be 0
678 self.assertEqual(channel.json_body["offset"], 0)
679
680 # Check that there is no `prev_batch`
681 self.assertNotIn("prev_batch", channel.json_body)
682
683 # Check that there is no `next_batch`
684 self.assertNotIn("next_batch", channel.json_body)
685
686 if expected_room_id:
687 # Check that the first returned room id is correct
688 r = rooms[0]
689 self.assertEqual(expected_room_id, r["room_id"])
690
691 # Perform search tests
692 _search_test(room_id_1, "something")
693 _search_test(room_id_1, "thing")
694
695 _search_test(room_id_2, "else")
696 _search_test(room_id_2, "se")
697
698 _search_test(None, "foo")
699 _search_test(None, "bar")
700 _search_test(None, "", expected_http_code=400)
701
702 def test_single_room(self):
703 """Test that a single room can be requested correctly"""
704 # Create two test rooms
705 room_id_1 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
706 room_id_2 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
707
708 room_name_1 = "something"
709 room_name_2 = "else"
710
711 # Set the name for each room
712 self.helper.send_state(
713 room_id_1, "m.room.name", {"name": room_name_1}, tok=self.admin_user_tok,
714 )
715 self.helper.send_state(
716 room_id_2, "m.room.name", {"name": room_name_2}, tok=self.admin_user_tok,
717 )
718
719 url = "/_synapse/admin/v1/rooms/%s" % (room_id_1,)
720 request, channel = self.make_request(
721 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
722 )
723 self.render(request)
724 self.assertEqual(200, channel.code, msg=channel.json_body)
725
726 self.assertIn("room_id", channel.json_body)
727 self.assertIn("name", channel.json_body)
728 self.assertIn("canonical_alias", channel.json_body)
729 self.assertIn("joined_members", channel.json_body)
730 self.assertIn("joined_local_members", channel.json_body)
731 self.assertIn("version", channel.json_body)
732 self.assertIn("creator", channel.json_body)
733 self.assertIn("encryption", channel.json_body)
734 self.assertIn("federatable", channel.json_body)
735 self.assertIn("public", channel.json_body)
736 self.assertIn("join_rules", channel.json_body)
737 self.assertIn("guest_access", channel.json_body)
738 self.assertIn("history_visibility", channel.json_body)
739 self.assertIn("state_events", channel.json_body)
740
741 self.assertEqual(room_id_1, channel.json_body["room_id"])
742
743
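
The RoomTestCase tests above establish the shape of the room list admin API: `GET /_synapse/admin/v1/rooms` accepts optional `from`, `limit`, `order_by`, `dir` and `search_term` query parameters, and responds with `rooms`, `total_rooms` and `offset`, plus a `next_batch` token while more results remain. The following is a minimal, illustrative sketch of walking the paginated list from an external script; the base URL, bearer-token header and helper name are assumptions, not part of this test file.

import requests


def list_all_rooms(base_url: str, admin_token: str, limit: int = 100,
                   order_by: str = "name") -> list:
    """Collect every room by following next_batch tokens, as the pagination test does."""
    rooms = []
    start = 0
    while True:
        resp = requests.get(
            "%s/_synapse/admin/v1/rooms" % base_url,
            params={"from": start, "limit": limit, "order_by": order_by},
            headers={"Authorization": "Bearer %s" % admin_token},
        )
        resp.raise_for_status()
        body = resp.json()
        rooms.extend(body["rooms"])
        if "next_batch" not in body:
            return rooms
        start = body["next_batch"]
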
744 class JoinAliasRoomTestCase(unittest.HomeserverTestCase):
745
746 servlets = [
747 synapse.rest.admin.register_servlets,
748 room.register_servlets,
749 login.register_servlets,
750 ]
751
752 def prepare(self, reactor, clock, homeserver):
753 self.admin_user = self.register_user("admin", "pass", admin=True)
754 self.admin_user_tok = self.login("admin", "pass")
755
756 self.creator = self.register_user("creator", "test")
757 self.creator_tok = self.login("creator", "test")
758
759 self.second_user_id = self.register_user("second", "test")
760 self.second_tok = self.login("second", "test")
761
762 self.public_room_id = self.helper.create_room_as(
763 self.creator, tok=self.creator_tok, is_public=True
764 )
765 self.url = "/_synapse/admin/v1/join/{}".format(self.public_room_id)
766
767 def test_requester_is_no_admin(self):
768 """
769 If the user is not a server admin, an error 403 is returned.
770 """
771 body = json.dumps({"user_id": self.second_user_id})
772
773 request, channel = self.make_request(
774 "POST",
775 self.url,
776 content=body.encode(encoding="utf_8"),
777 access_token=self.second_tok,
778 )
779 self.render(request)
780
781 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
782 self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
783
784 def test_invalid_parameter(self):
785 """
786 If a parameter is missing, an error is returned.
787 """
788 body = json.dumps({"unknown_parameter": "@unknown:test"})
789
790 request, channel = self.make_request(
791 "POST",
792 self.url,
793 content=body.encode(encoding="utf_8"),
794 access_token=self.admin_user_tok,
795 )
796 self.render(request)
797
798 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
799 self.assertEqual(Codes.MISSING_PARAM, channel.json_body["errcode"])
800
801 def test_local_user_does_not_exist(self):
802 """
803 Tests that a lookup for a user that does not exist returns a 404
804 """
805 body = json.dumps({"user_id": "@unknown:test"})
806
807 request, channel = self.make_request(
808 "POST",
809 self.url,
810 content=body.encode(encoding="utf_8"),
811 access_token=self.admin_user_tok,
812 )
813 self.render(request)
814
815 self.assertEqual(404, int(channel.result["code"]), msg=channel.result["body"])
816 self.assertEqual(Codes.NOT_FOUND, channel.json_body["errcode"])
817
818 def test_remote_user(self):
819 """
820 Check that only local users can be joined via this endpoint.
821 """
822 body = json.dumps({"user_id": "@not:exist.bla"})
823
824 request, channel = self.make_request(
825 "POST",
826 self.url,
827 content=body.encode(encoding="utf_8"),
828 access_token=self.admin_user_tok,
829 )
830 self.render(request)
831
832 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
833 self.assertEqual(
834 "This endpoint can only be used with local users",
835 channel.json_body["error"],
836 )
837
838 def test_room_does_not_exist(self):
839 """
840 Check that unknown rooms/servers return a 404 error.
841 """
842 body = json.dumps({"user_id": self.second_user_id})
843 url = "/_synapse/admin/v1/join/!unknown:test"
844
845 request, channel = self.make_request(
846 "POST",
847 url,
848 content=body.encode(encoding="utf_8"),
849 access_token=self.admin_user_tok,
850 )
851 self.render(request)
852
853 self.assertEqual(404, int(channel.result["code"]), msg=channel.result["body"])
854 self.assertEqual("No known servers", channel.json_body["error"])
855
856 def test_room_is_not_valid(self):
857 """
858 Check that invalid room names return a 400 error.
859 """
860 body = json.dumps({"user_id": self.second_user_id})
861 url = "/_synapse/admin/v1/join/invalidroom"
862
863 request, channel = self.make_request(
864 "POST",
865 url,
866 content=body.encode(encoding="utf_8"),
867 access_token=self.admin_user_tok,
868 )
869 self.render(request)
870
871 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
872 self.assertEqual(
873 "invalidroom was not legal room ID or room alias",
874 channel.json_body["error"],
875 )
876
877 def test_join_public_room(self):
878 """
879 Test joining a local user to a public room with "JoinRules.PUBLIC"
880 """
881 body = json.dumps({"user_id": self.second_user_id})
882
883 request, channel = self.make_request(
884 "POST",
885 self.url,
886 content=body.encode(encoding="utf_8"),
887 access_token=self.admin_user_tok,
888 )
889 self.render(request)
890
891 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
892 self.assertEqual(self.public_room_id, channel.json_body["room_id"])
893
894 # Validate that the user is a member of the room
895
896 request, channel = self.make_request(
897 "GET", "/_matrix/client/r0/joined_rooms", access_token=self.second_tok,
898 )
899 self.render(request)
900 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
901 self.assertEqual(self.public_room_id, channel.json_body["joined_rooms"][0])
902
903 def test_join_private_room_if_not_member(self):
904 """
905 Test joining a local user to a private room with "JoinRules.INVITE"
906 when server admin is not member of this room.
907 """
908 private_room_id = self.helper.create_room_as(
909 self.creator, tok=self.creator_tok, is_public=False
910 )
911 url = "/_synapse/admin/v1/join/{}".format(private_room_id)
912 body = json.dumps({"user_id": self.second_user_id})
913
914 request, channel = self.make_request(
915 "POST",
916 url,
917 content=body.encode(encoding="utf_8"),
918 access_token=self.admin_user_tok,
919 )
920 self.render(request)
921
922 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
923 self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
924
925 def test_join_private_room_if_member(self):
926 """
927 Test joining a local user to a private room with "JoinRules.INVITE",
928 when server admin is member of this room.
929 """
930 private_room_id = self.helper.create_room_as(
931 self.creator, tok=self.creator_tok, is_public=False
932 )
933 self.helper.invite(
934 room=private_room_id,
935 src=self.creator,
936 targ=self.admin_user,
937 tok=self.creator_tok,
938 )
939 self.helper.join(
940 room=private_room_id, user=self.admin_user, tok=self.admin_user_tok
941 )
942
943 # Validate that the server admin is a member of the room
944
945 request, channel = self.make_request(
946 "GET", "/_matrix/client/r0/joined_rooms", access_token=self.admin_user_tok,
947 )
948 self.render(request)
949 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
950 self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])
951
952 # Join user to room.
953
954 url = "/_synapse/admin/v1/join/{}".format(private_room_id)
955 body = json.dumps({"user_id": self.second_user_id})
956
957 request, channel = self.make_request(
958 "POST",
959 url,
960 content=body.encode(encoding="utf_8"),
961 access_token=self.admin_user_tok,
962 )
963 self.render(request)
964 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
965 self.assertEqual(private_room_id, channel.json_body["room_id"])
966
967 # Validate that the user is a member of the room
968
969 request, channel = self.make_request(
970 "GET", "/_matrix/client/r0/joined_rooms", access_token=self.second_tok,
971 )
972 self.render(request)
973 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
974 self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])
975
976 def test_join_private_room_if_owner(self):
977 """
978 Test joining a local user to a private room with "JoinRules.INVITE",
979 when server admin is owner of this room.
980 """
981 private_room_id = self.helper.create_room_as(
982 self.admin_user, tok=self.admin_user_tok, is_public=False
983 )
984 url = "/_synapse/admin/v1/join/{}".format(private_room_id)
985 body = json.dumps({"user_id": self.second_user_id})
986
987 request, channel = self.make_request(
988 "POST",
989 url,
990 content=body.encode(encoding="utf_8"),
991 access_token=self.admin_user_tok,
992 )
993 self.render(request)
994
995 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
996 self.assertEqual(private_room_id, channel.json_body["room_id"])
997
998 # Validate that the user is a member of the room
999
1000 request, channel = self.make_request(
1001 "GET", "/_matrix/client/r0/joined_rooms", access_token=self.second_tok,
1002 )
1003 self.render(request)
1004 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1005 self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])
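
JoinAliasRoomTestCase above exercises `POST /_synapse/admin/v1/join/<room_id_or_alias>` with a `user_id` body field, which joins a local user to the room and returns the resulting `room_id`. A minimal sketch of calling the endpoint outside the test harness follows; the base URL, bearer-token header and function name are placeholders, not anything defined in these tests.

import requests
from urllib.parse import quote


def admin_join(base_url: str, admin_token: str, room_id_or_alias: str, user_id: str) -> str:
    """Join a local user to a room via the admin API and return the room ID."""
    resp = requests.post(
        "%s/_synapse/admin/v1/join/%s" % (base_url, quote(room_id_or_alias)),
        json={"user_id": user_id},
        headers={"Authorization": "Bearer %s" % admin_token},
    )
    resp.raise_for_status()
    return resp.json()["room_id"]
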
0 # -*- coding: utf-8 -*-
1 # Copyright 2020 Dirk Klimpel
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import urllib.parse
17 from typing import List, Optional
18
19 from mock import Mock
20
21 import synapse.rest.admin
22 from synapse.api.errors import Codes
23 from synapse.rest.client.v1 import directory, events, login, room
24
25 from tests import unittest
26
27 """Tests admin REST events for /rooms paths."""
28
29
30 class ShutdownRoomTestCase(unittest.HomeserverTestCase):
31 servlets = [
32 synapse.rest.admin.register_servlets_for_client_rest_resource,
33 login.register_servlets,
34 events.register_servlets,
35 room.register_servlets,
36 room.register_deprecated_servlets,
37 ]
38
39 def prepare(self, reactor, clock, hs):
40 self.event_creation_handler = hs.get_event_creation_handler()
41 hs.config.user_consent_version = "1"
42
43 consent_uri_builder = Mock()
44 consent_uri_builder.build_user_consent_uri.return_value = "http://example.com"
45 self.event_creation_handler._consent_uri_builder = consent_uri_builder
46
47 self.store = hs.get_datastore()
48
49 self.admin_user = self.register_user("admin", "pass", admin=True)
50 self.admin_user_tok = self.login("admin", "pass")
51
52 self.other_user = self.register_user("user", "pass")
53 self.other_user_token = self.login("user", "pass")
54
55 # Mark the admin user as having consented
56 self.get_success(self.store.user_set_consent_version(self.admin_user, "1"))
57
58 def test_shutdown_room_consent(self):
59 """Test that we can shutdown rooms with local users who have not
60 yet accepted the privacy policy. This used to fail when we tried to
61 force part the user from the old room.
62 """
63 self.event_creation_handler._block_events_without_consent_error = None
64
65 room_id = self.helper.create_room_as(self.other_user, tok=self.other_user_token)
66
67 # Assert one user in room
68 users_in_room = self.get_success(self.store.get_users_in_room(room_id))
69 self.assertEqual([self.other_user], users_in_room)
70
71 # Enable require consent to send events
72 self.event_creation_handler._block_events_without_consent_error = "Error"
73
74 # Assert that the user is getting consent error
75 self.helper.send(
76 room_id, body="foo", tok=self.other_user_token, expect_code=403
77 )
78
79 # Test that the admin can still send shutdown
80 url = "admin/shutdown_room/" + room_id
81 request, channel = self.make_request(
82 "POST",
83 url.encode("ascii"),
84 json.dumps({"new_room_user_id": self.admin_user}),
85 access_token=self.admin_user_tok,
86 )
87 self.render(request)
88
89 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
90
91 # Assert there is now no longer anyone in the room
92 users_in_room = self.get_success(self.store.get_users_in_room(room_id))
93 self.assertEqual([], users_in_room)
94
95 def test_shutdown_room_block_peek(self):
96 """Test that a world_readable room can no longer be peeked into after
97 it has been shut down.
98 """
99
100 self.event_creation_handler._block_events_without_consent_error = None
101
102 room_id = self.helper.create_room_as(self.other_user, tok=self.other_user_token)
103
104 # Enable world readable
105 url = "rooms/%s/state/m.room.history_visibility" % (room_id,)
106 request, channel = self.make_request(
107 "PUT",
108 url.encode("ascii"),
109 json.dumps({"history_visibility": "world_readable"}),
110 access_token=self.other_user_token,
111 )
112 self.render(request)
113 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
114
115 # Test that the admin can still send shutdown
116 url = "admin/shutdown_room/" + room_id
117 request, channel = self.make_request(
118 "POST",
119 url.encode("ascii"),
120 json.dumps({"new_room_user_id": self.admin_user}),
121 access_token=self.admin_user_tok,
122 )
123 self.render(request)
124
125 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
126
127 # Assert we can no longer peek into the room
128 self._assert_peek(room_id, expect_code=403)
129
130 def _assert_peek(self, room_id, expect_code):
131 """Assert that the admin user can (or cannot) peek into the room.
132 """
133
134 url = "rooms/%s/initialSync" % (room_id,)
135 request, channel = self.make_request(
136 "GET", url.encode("ascii"), access_token=self.admin_user_tok
137 )
138 self.render(request)
139 self.assertEqual(
140 expect_code, int(channel.result["code"]), msg=channel.result["body"]
141 )
142
143 url = "events?timeout=0&room_id=" + room_id
144 request, channel = self.make_request(
145 "GET", url.encode("ascii"), access_token=self.admin_user_tok
146 )
147 self.render(request)
148 self.assertEqual(
149 expect_code, int(channel.result["code"]), msg=channel.result["body"]
150 )
151
152
153 class DeleteRoomTestCase(unittest.HomeserverTestCase):
154 servlets = [
155 synapse.rest.admin.register_servlets,
156 login.register_servlets,
157 events.register_servlets,
158 room.register_servlets,
159 room.register_deprecated_servlets,
160 ]
161
162 def prepare(self, reactor, clock, hs):
163 self.event_creation_handler = hs.get_event_creation_handler()
164 hs.config.user_consent_version = "1"
165
166 consent_uri_builder = Mock()
167 consent_uri_builder.build_user_consent_uri.return_value = "http://example.com"
168 self.event_creation_handler._consent_uri_builder = consent_uri_builder
169
170 self.store = hs.get_datastore()
171
172 self.admin_user = self.register_user("admin", "pass", admin=True)
173 self.admin_user_tok = self.login("admin", "pass")
174
175 self.other_user = self.register_user("user", "pass")
176 self.other_user_tok = self.login("user", "pass")
177
178 # Mark the admin user as having consented
179 self.get_success(self.store.user_set_consent_version(self.admin_user, "1"))
180
181 self.room_id = self.helper.create_room_as(
182 self.other_user, tok=self.other_user_tok
183 )
184 self.url = "/_synapse/admin/v1/rooms/%s/delete" % self.room_id
185
186 def test_requester_is_no_admin(self):
187 """
188 If the user is not a server admin, an error 403 is returned.
189 """
190
191 request, channel = self.make_request(
192 "POST", self.url, json.dumps({}), access_token=self.other_user_tok,
193 )
194 self.render(request)
195
196 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
197 self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
198
199 def test_room_does_not_exist(self):
200 """
201 Check that unknown rooms/servers return a 404 error.
202 """
203 url = "/_synapse/admin/v1/rooms/!unknown:test/delete"
204
205 request, channel = self.make_request(
206 "POST", url, json.dumps({}), access_token=self.admin_user_tok,
207 )
208 self.render(request)
209
210 self.assertEqual(404, int(channel.result["code"]), msg=channel.result["body"])
211 self.assertEqual(Codes.NOT_FOUND, channel.json_body["errcode"])
212
213 def test_room_is_not_valid(self):
214 """
215 Check that invalid room names return a 400 error.
216 """
217 url = "/_synapse/admin/v1/rooms/invalidroom/delete"
218
219 request, channel = self.make_request(
220 "POST", url, json.dumps({}), access_token=self.admin_user_tok,
221 )
222 self.render(request)
223
224 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
225 self.assertEqual(
226 "invalidroom is not a legal room ID", channel.json_body["error"],
227 )
228
229 def test_new_room_user_does_not_exist(self):
230 """
231 Tests that the user ID must be from the local server but does not have to exist.
232 """
233 body = json.dumps({"new_room_user_id": "@unknown:test"})
234
235 request, channel = self.make_request(
236 "POST",
237 self.url,
238 content=body.encode(encoding="utf_8"),
239 access_token=self.admin_user_tok,
240 )
241 self.render(request)
242
243 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
244 self.assertIn("new_room_id", channel.json_body)
245 self.assertIn("kicked_users", channel.json_body)
246 self.assertIn("failed_to_kick_users", channel.json_body)
247 self.assertIn("local_aliases", channel.json_body)
248
249 def test_new_room_user_is_not_local(self):
250 """
251 Check that only a local user can be used to create the new room when moving members.
252 """
253 body = json.dumps({"new_room_user_id": "@not:exist.bla"})
254
255 request, channel = self.make_request(
256 "POST",
257 self.url,
258 content=body.encode(encoding="utf_8"),
259 access_token=self.admin_user_tok,
260 )
261 self.render(request)
262
263 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
264 self.assertEqual(
265 "User must be our own: @not:exist.bla", channel.json_body["error"],
266 )
267
268 def test_block_is_not_bool(self):
269 """
270 If the parameter `block` is not a boolean, an error is returned.
271 """
272 body = json.dumps({"block": "NotBool"})
273
274 request, channel = self.make_request(
275 "POST",
276 self.url,
277 content=body.encode(encoding="utf_8"),
278 access_token=self.admin_user_tok,
279 )
280 self.render(request)
281
282 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
283 self.assertEqual(Codes.BAD_JSON, channel.json_body["errcode"])
284
285 def test_purge_room_and_block(self):
286 """Test to purge a room and block it.
287 Members will not be moved to a new room and will not receive a message.
288 """
289 # Test that room is not purged
290 with self.assertRaises(AssertionError):
291 self._is_purged(self.room_id)
292
293 # Test that room is not blocked
294 self._is_blocked(self.room_id, expect=False)
295
296 # Assert one user in room
297 self._is_member(room_id=self.room_id, user_id=self.other_user)
298
299 body = json.dumps({"block": True})
300
301 request, channel = self.make_request(
302 "POST",
303 self.url.encode("ascii"),
304 content=body.encode(encoding="utf_8"),
305 access_token=self.admin_user_tok,
306 )
307 self.render(request)
308
309 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
310 self.assertEqual(None, channel.json_body["new_room_id"])
311 self.assertEqual(self.other_user, channel.json_body["kicked_users"][0])
312 self.assertIn("failed_to_kick_users", channel.json_body)
313 self.assertIn("local_aliases", channel.json_body)
314
315 self._is_purged(self.room_id)
316 self._is_blocked(self.room_id, expect=True)
317 self._has_no_members(self.room_id)
318
319 def test_purge_room_and_not_block(self):
320 """Test to purge a room and do not block it.
321 Members will not be moved to a new room and will not receive a message.
322 """
323 # Test that room is not purged
324 with self.assertRaises(AssertionError):
325 self._is_purged(self.room_id)
326
327 # Test that room is not blocked
328 self._is_blocked(self.room_id, expect=False)
329
330 # Assert one user in room
331 self._is_member(room_id=self.room_id, user_id=self.other_user)
332
333 body = json.dumps({"block": False})
334
335 request, channel = self.make_request(
336 "POST",
337 self.url.encode("ascii"),
338 content=body.encode(encoding="utf_8"),
339 access_token=self.admin_user_tok,
340 )
341 self.render(request)
342
343 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
344 self.assertEqual(None, channel.json_body["new_room_id"])
345 self.assertEqual(self.other_user, channel.json_body["kicked_users"][0])
346 self.assertIn("failed_to_kick_users", channel.json_body)
347 self.assertIn("local_aliases", channel.json_body)
348
349 self._is_purged(self.room_id)
350 self._is_blocked(self.room_id, expect=False)
351 self._has_no_members(self.room_id)
352
353 def test_shutdown_room_consent(self):
354 """Test that we can shutdown rooms with local users who have not
355 yet accepted the privacy policy. This used to fail when we tried to
356 force part the user from the old room.
357 Members will be moved to a new room and will receive a message.
358 """
359 self.event_creation_handler._block_events_without_consent_error = None
360
361 # Assert one user in room
362 users_in_room = self.get_success(self.store.get_users_in_room(self.room_id))
363 self.assertEqual([self.other_user], users_in_room)
364
365 # Enable require consent to send events
366 self.event_creation_handler._block_events_without_consent_error = "Error"
367
368 # Assert that the user is getting consent error
369 self.helper.send(
370 self.room_id, body="foo", tok=self.other_user_tok, expect_code=403
371 )
372
373 # Test that room is not purged
374 with self.assertRaises(AssertionError):
375 self._is_purged(self.room_id)
376
377 # Assert one user in room
378 self._is_member(room_id=self.room_id, user_id=self.other_user)
379
380 # Test that the admin can still send shutdown
381 url = "/_synapse/admin/v1/rooms/%s/delete" % self.room_id
382 request, channel = self.make_request(
383 "POST",
384 url.encode("ascii"),
385 json.dumps({"new_room_user_id": self.admin_user}),
386 access_token=self.admin_user_tok,
387 )
388 self.render(request)
389
390 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
391 self.assertEqual(self.other_user, channel.json_body["kicked_users"][0])
392 self.assertIn("new_room_id", channel.json_body)
393 self.assertIn("failed_to_kick_users", channel.json_body)
394 self.assertIn("local_aliases", channel.json_body)
395
396 # Test that member has moved to new room
397 self._is_member(
398 room_id=channel.json_body["new_room_id"], user_id=self.other_user
399 )
400
401 self._is_purged(self.room_id)
402 self._has_no_members(self.room_id)
403
404 def test_shutdown_room_block_peek(self):
405 """Test that a world_readable room can no longer be peeked into after
406 it has been shut down.
407 Members will be moved to a new room and will receive a message.
408 """
409 self.event_creation_handler._block_events_without_consent_error = None
410
411 # Enable world readable
412 url = "rooms/%s/state/m.room.history_visibility" % (self.room_id,)
413 request, channel = self.make_request(
414 "PUT",
415 url.encode("ascii"),
416 json.dumps({"history_visibility": "world_readable"}),
417 access_token=self.other_user_tok,
418 )
419 self.render(request)
420 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
421
422 # Test that room is not purged
423 with self.assertRaises(AssertionError):
424 self._is_purged(self.room_id)
425
426 # Assert one user in room
427 self._is_member(room_id=self.room_id, user_id=self.other_user)
428
429 # Test that the admin can still send shutdown
430 url = "/_synapse/admin/v1/rooms/%s/delete" % self.room_id
431 request, channel = self.make_request(
432 "POST",
433 url.encode("ascii"),
434 json.dumps({"new_room_user_id": self.admin_user}),
435 access_token=self.admin_user_tok,
436 )
437 self.render(request)
438
439 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
440 self.assertEqual(self.other_user, channel.json_body["kicked_users"][0])
441 self.assertIn("new_room_id", channel.json_body)
442 self.assertIn("failed_to_kick_users", channel.json_body)
443 self.assertIn("local_aliases", channel.json_body)
444
445 # Test that member has moved to new room
446 self._is_member(
447 room_id=channel.json_body["new_room_id"], user_id=self.other_user
448 )
449
450 self._is_purged(self.room_id)
451 self._has_no_members(self.room_id)
452
453 # Assert we can no longer peek into the room
454 self._assert_peek(self.room_id, expect_code=403)
455
456 def _is_blocked(self, room_id, expect=True):
457 """Assert that the room is blocked or not
458 """
459 d = self.store.is_room_blocked(room_id)
460 if expect:
461 self.assertTrue(self.get_success(d))
462 else:
463 self.assertIsNone(self.get_success(d))
464
465 def _has_no_members(self, room_id):
466 """Assert there is now no longer anyone in the room
467 """
468 users_in_room = self.get_success(self.store.get_users_in_room(room_id))
469 self.assertEqual([], users_in_room)
470
471 def _is_member(self, room_id, user_id):
472 """Test that user is member of the room
473 """
474 users_in_room = self.get_success(self.store.get_users_in_room(room_id))
475 self.assertIn(user_id, users_in_room)
476
477 def _is_purged(self, room_id):
478 """Test that the following tables have been purged of all rows related to the room.
479 """
480 for table in (
481 "current_state_events",
482 "event_backward_extremities",
483 "event_forward_extremities",
484 "event_json",
485 "event_push_actions",
486 "event_search",
487 "events",
488 "group_rooms",
489 "public_room_list_stream",
490 "receipts_graph",
491 "receipts_linearized",
492 "room_aliases",
493 "room_depth",
494 "room_memberships",
495 "room_stats_state",
496 "room_stats_current",
497 "room_stats_historical",
498 "room_stats_earliest_token",
499 "rooms",
500 "stream_ordering_to_exterm",
501 "users_in_public_rooms",
502 "users_who_share_private_rooms",
503 "appservice_room_list",
504 "e2e_room_keys",
505 "event_push_summary",
506 "pusher_throttle",
507 "group_summary_rooms",
508 "local_invites",
509 "room_account_data",
510 "room_tags",
511 # "state_groups", # Current impl leaves orphaned state groups around.
512 "state_groups_state",
513 ):
514 count = self.get_success(
515 self.store.db.simple_select_one_onecol(
516 table=table,
517 keyvalues={"room_id": room_id},
518 retcol="COUNT(*)",
519 desc="test_purge_room",
520 )
521 )
522
523 self.assertEqual(count, 0, msg="Rows not purged in {}".format(table))
524
525 def _assert_peek(self, room_id, expect_code):
526 """Assert that the admin user can (or cannot) peek into the room.
527 """
528
529 url = "rooms/%s/initialSync" % (room_id,)
530 request, channel = self.make_request(
531 "GET", url.encode("ascii"), access_token=self.admin_user_tok
532 )
533 self.render(request)
534 self.assertEqual(
535 expect_code, int(channel.result["code"]), msg=channel.result["body"]
536 )
537
538 url = "events?timeout=0&room_id=" + room_id
539 request, channel = self.make_request(
540 "GET", url.encode("ascii"), access_token=self.admin_user_tok
541 )
542 self.render(request)
543 self.assertEqual(
544 expect_code, int(channel.result["code"]), msg=channel.result["body"]
545 )
546
547
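The test class above exercises the new delete-room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). For orientation only, a minimal sketch of driving the same endpoint outside the test harness with `requests`; the homeserver URL, admin token and room ID are hypothetical placeholders:

    import requests

    BASE = "http://localhost:8008"                        # hypothetical homeserver
    HEADERS = {"Authorization": "Bearer <admin_token>"}   # hypothetical admin access token

    resp = requests.post(
        BASE + "/_synapse/admin/v1/rooms/!someroom:example.com/delete",
        headers=HEADERS,
        json={"new_room_user_id": "@admin:example.com"},  # members are moved to a new room
    )
    resp.raise_for_status()
    body = resp.json()
    # The tests above assert on exactly these response keys.
    print(body["new_room_id"], body["kicked_users"], body["failed_to_kick_users"], body["local_aliases"])
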
548 class PurgeRoomTestCase(unittest.HomeserverTestCase):
549 """Test /purge_room admin API.
550 """
551
552 servlets = [
553 synapse.rest.admin.register_servlets,
554 login.register_servlets,
555 room.register_servlets,
556 ]
557
558 def prepare(self, reactor, clock, hs):
559 self.store = hs.get_datastore()
560
561 self.admin_user = self.register_user("admin", "pass", admin=True)
562 self.admin_user_tok = self.login("admin", "pass")
563
564 def test_purge_room(self):
565 room_id = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
566
567 # All users have to have left the room.
568 self.helper.leave(room_id, user=self.admin_user, tok=self.admin_user_tok)
569
570 url = "/_synapse/admin/v1/purge_room"
571 request, channel = self.make_request(
572 "POST",
573 url.encode("ascii"),
574 {"room_id": room_id},
575 access_token=self.admin_user_tok,
576 )
577 self.render(request)
578
579 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
580
581 # Test that the following tables have been purged of all rows related to the room.
582 for table in (
583 "current_state_events",
584 "event_backward_extremities",
585 "event_forward_extremities",
586 "event_json",
587 "event_push_actions",
588 "event_search",
589 "events",
590 "group_rooms",
591 "public_room_list_stream",
592 "receipts_graph",
593 "receipts_linearized",
594 "room_aliases",
595 "room_depth",
596 "room_memberships",
597 "room_stats_state",
598 "room_stats_current",
599 "room_stats_historical",
600 "room_stats_earliest_token",
601 "rooms",
602 "stream_ordering_to_exterm",
603 "users_in_public_rooms",
604 "users_who_share_private_rooms",
605 "appservice_room_list",
606 "e2e_room_keys",
607 "event_push_summary",
608 "pusher_throttle",
609 "group_summary_rooms",
610 "room_account_data",
611 "room_tags",
612 # "state_groups", # Current impl leaves orphaned state groups around.
613 "state_groups_state",
614 ):
615 count = self.get_success(
616 self.store.db.simple_select_one_onecol(
617 table=table,
618 keyvalues={"room_id": room_id},
619 retcol="COUNT(*)",
620 desc="test_purge_room",
621 )
622 )
623
624 self.assertEqual(count, 0, msg="Rows not purged in {}".format(table))
625
626
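The older `/purge_room` endpoint tested above requires that every local user has already left the room. A comparable hedged sketch of a direct call, again with placeholder URL and token:

    import requests

    BASE = "http://localhost:8008"                        # hypothetical homeserver
    HEADERS = {"Authorization": "Bearer <admin_token>"}   # hypothetical admin access token

    # All local users must already have left the room, as the test above arranges.
    resp = requests.post(
        BASE + "/_synapse/admin/v1/purge_room",
        headers=HEADERS,
        json={"room_id": "!someroom:example.com"},
    )
    resp.raise_for_status()
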
627 class RoomTestCase(unittest.HomeserverTestCase):
628 """Test /room admin API.
629 """
630
631 servlets = [
632 synapse.rest.admin.register_servlets,
633 login.register_servlets,
634 room.register_servlets,
635 directory.register_servlets,
636 ]
637
638 def prepare(self, reactor, clock, hs):
639 self.store = hs.get_datastore()
640
641 # Create user
642 self.admin_user = self.register_user("admin", "pass", admin=True)
643 self.admin_user_tok = self.login("admin", "pass")
644
645 def test_list_rooms(self):
646 """Test that we can list rooms"""
647 # Create 3 test rooms
648 total_rooms = 3
649 room_ids = []
650 for x in range(total_rooms):
651 room_id = self.helper.create_room_as(
652 self.admin_user, tok=self.admin_user_tok
653 )
654 room_ids.append(room_id)
655
656 # Request the list of rooms
657 url = "/_synapse/admin/v1/rooms"
658 request, channel = self.make_request(
659 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
660 )
661 self.render(request)
662
663 # Check request completed successfully
664 self.assertEqual(200, int(channel.code), msg=channel.json_body)
665
666 # Check that response json body contains a "rooms" key
667 self.assertTrue(
668 "rooms" in channel.json_body,
669 msg="Response body does not " "contain a 'rooms' key",
670 )
671
672 # Check that 3 rooms were returned
673 self.assertEqual(3, len(channel.json_body["rooms"]), msg=channel.json_body)
674
675 # Check their room_ids match
676 returned_room_ids = [room["room_id"] for room in channel.json_body["rooms"]]
677 self.assertEqual(room_ids, returned_room_ids)
678
679 # Check that all fields are available
680 for r in channel.json_body["rooms"]:
681 self.assertIn("name", r)
682 self.assertIn("canonical_alias", r)
683 self.assertIn("joined_members", r)
684 self.assertIn("joined_local_members", r)
685 self.assertIn("version", r)
686 self.assertIn("creator", r)
687 self.assertIn("encryption", r)
688 self.assertIn("federatable", r)
689 self.assertIn("public", r)
690 self.assertIn("join_rules", r)
691 self.assertIn("guest_access", r)
692 self.assertIn("history_visibility", r)
693 self.assertIn("state_events", r)
694
695 # Check that the correct number of total rooms was returned
696 self.assertEqual(channel.json_body["total_rooms"], total_rooms)
697
698 # Check that the offset is correct
699 # Should be 0 as we aren't paginating
700 self.assertEqual(channel.json_body["offset"], 0)
701
702 # Check that the prev_batch parameter is not present
703 self.assertNotIn("prev_batch", channel.json_body)
704
705 # We shouldn't receive a next token here as there are no further rooms to show
706 self.assertNotIn("next_batch", channel.json_body)
707
708 def test_list_rooms_pagination(self):
709 """Test that we can get a full list of rooms through pagination"""
710 # Create 5 test rooms
711 total_rooms = 5
712 room_ids = []
713 for x in range(total_rooms):
714 room_id = self.helper.create_room_as(
715 self.admin_user, tok=self.admin_user_tok
716 )
717 room_ids.append(room_id)
718
719 # Set the names of the rooms so we get a consistent returned ordering
720 for idx, room_id in enumerate(room_ids):
721 self.helper.send_state(
722 room_id, "m.room.name", {"name": str(idx)}, tok=self.admin_user_tok,
723 )
724
725 # Request the list of rooms
726 returned_room_ids = []
727 start = 0
728 limit = 2
729
730 run_count = 0
731 should_repeat = True
732 while should_repeat:
733 run_count += 1
734
735 url = "/_synapse/admin/v1/rooms?from=%d&limit=%d&order_by=%s" % (
736 start,
737 limit,
738 "name",
739 )
740 request, channel = self.make_request(
741 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
742 )
743 self.render(request)
744 self.assertEqual(
745 200, int(channel.result["code"]), msg=channel.result["body"]
746 )
747
748 self.assertTrue("rooms" in channel.json_body)
749 for r in channel.json_body["rooms"]:
750 returned_room_ids.append(r["room_id"])
751
752 # Check that the correct number of total rooms was returned
753 self.assertEqual(channel.json_body["total_rooms"], total_rooms)
754
755 # Check that the offset is correct
756 # We get 2 rooms per page, so the offset should be 2 * (run_count - 1)
757 self.assertEqual(channel.json_body["offset"], 2 * (run_count - 1))
758
759 if run_count > 1:
760 # Check the value of prev_batch is correct
761 self.assertEqual(channel.json_body["prev_batch"], 2 * (run_count - 2))
762
763 if "next_batch" not in channel.json_body:
764 # We have reached the end of the list
765 should_repeat = False
766 else:
767 # Make another query with an updated start value
768 start = channel.json_body["next_batch"]
769
770 # We should've queried the endpoint 3 times
771 self.assertEqual(
772 run_count,
773 3,
774 msg="Should've queried 3 times for 5 rooms with limit 2 per query",
775 )
776
777 # Check that we received all of the room ids
778 self.assertEqual(room_ids, returned_room_ids)
779
780 url = "/_synapse/admin/v1/rooms?from=%d&limit=%d" % (start, limit)
781 request, channel = self.make_request(
782 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
783 )
784 self.render(request)
785 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
786
787 def test_correct_room_attributes(self):
788 """Test the correct attributes for a room are returned"""
789 # Create a test room
790 room_id = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
791
792 test_alias = "#test:test"
793 test_room_name = "something"
794
795 # Have another user join the room
796 user_2 = self.register_user("user4", "pass")
797 user_tok_2 = self.login("user4", "pass")
798 self.helper.join(room_id, user_2, tok=user_tok_2)
799
800 # Create a new alias to this room
801 url = "/_matrix/client/r0/directory/room/%s" % (urllib.parse.quote(test_alias),)
802 request, channel = self.make_request(
803 "PUT",
804 url.encode("ascii"),
805 {"room_id": room_id},
806 access_token=self.admin_user_tok,
807 )
808 self.render(request)
809 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
810
811 # Set this new alias as the canonical alias for this room
812 self.helper.send_state(
813 room_id,
814 "m.room.aliases",
815 {"aliases": [test_alias]},
816 tok=self.admin_user_tok,
817 state_key="test",
818 )
819 self.helper.send_state(
820 room_id,
821 "m.room.canonical_alias",
822 {"alias": test_alias},
823 tok=self.admin_user_tok,
824 )
825
826 # Set a name for the room
827 self.helper.send_state(
828 room_id, "m.room.name", {"name": test_room_name}, tok=self.admin_user_tok,
829 )
830
831 # Request the list of rooms
832 url = "/_synapse/admin/v1/rooms"
833 request, channel = self.make_request(
834 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
835 )
836 self.render(request)
837 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
838
839 # Check that rooms were returned
840 self.assertTrue("rooms" in channel.json_body)
841 rooms = channel.json_body["rooms"]
842
843 # Check that only one room was returned
844 self.assertEqual(len(rooms), 1)
845
846 # And that the value of the total_rooms key was correct
847 self.assertEqual(channel.json_body["total_rooms"], 1)
848
849 # Check that the offset is correct
850 # We're not paginating, so should be 0
851 self.assertEqual(channel.json_body["offset"], 0)
852
853 # Check that there is no `prev_batch`
854 self.assertNotIn("prev_batch", channel.json_body)
855
856 # Check that there is no `next_batch`
857 self.assertNotIn("next_batch", channel.json_body)
858
859 # Check that all provided attributes are set
860 r = rooms[0]
861 self.assertEqual(room_id, r["room_id"])
862 self.assertEqual(test_room_name, r["name"])
863 self.assertEqual(test_alias, r["canonical_alias"])
864
865 def test_room_list_sort_order(self):
866 """Test room list sort ordering. alphabetical name versus number of members,
867 reversing the order, etc.
868 """
869
870 def _set_canonical_alias(room_id: str, test_alias: str, admin_user_tok: str):
871 # Create a new alias to this room
872 url = "/_matrix/client/r0/directory/room/%s" % (
873 urllib.parse.quote(test_alias),
874 )
875 request, channel = self.make_request(
876 "PUT",
877 url.encode("ascii"),
878 {"room_id": room_id},
879 access_token=admin_user_tok,
880 )
881 self.render(request)
882 self.assertEqual(
883 200, int(channel.result["code"]), msg=channel.result["body"]
884 )
885
886 # Set this new alias as the canonical alias for this room
887 self.helper.send_state(
888 room_id,
889 "m.room.aliases",
890 {"aliases": [test_alias]},
891 tok=admin_user_tok,
892 state_key="test",
893 )
894 self.helper.send_state(
895 room_id,
896 "m.room.canonical_alias",
897 {"alias": test_alias},
898 tok=admin_user_tok,
899 )
900
901 def _order_test(
902 order_type: str, expected_room_list: List[str], reverse: bool = False,
903 ):
904 """Request the list of rooms in a certain order. Assert that order is what
905 we expect
906
907 Args:
908 order_type: The type of ordering to give the server
909 expected_room_list: The list of room_ids in the order we expect to get
910 back from the server
911 """
912 # Request the list of rooms in the given order
913 url = "/_synapse/admin/v1/rooms?order_by=%s" % (order_type,)
914 if reverse:
915 url += "&dir=b"
916 request, channel = self.make_request(
917 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
918 )
919 self.render(request)
920 self.assertEqual(200, channel.code, msg=channel.json_body)
921
922 # Check that rooms were returned
923 self.assertTrue("rooms" in channel.json_body)
924 rooms = channel.json_body["rooms"]
925
926 # Check for the correct total_rooms value
927 self.assertEqual(channel.json_body["total_rooms"], 3)
928
929 # Check that the offset is correct
930 # We're not paginating, so should be 0
931 self.assertEqual(channel.json_body["offset"], 0)
932
933 # Check that there is no `prev_batch`
934 self.assertNotIn("prev_batch", channel.json_body)
935
936 # Check that there is no `next_batch`
937 self.assertNotIn("next_batch", channel.json_body)
938
939 # Check that rooms were returned in the expected order
940 returned_order = [r["room_id"] for r in rooms]
941 self.assertListEqual(expected_room_list, returned_order) # order is checked
942
943 # Create 3 test rooms
944 room_id_1 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
945 room_id_2 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
946 room_id_3 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
947
948 # Set room names in alphabetical order. room 1 -> A, 2 -> B, 3 -> C
949 self.helper.send_state(
950 room_id_1, "m.room.name", {"name": "A"}, tok=self.admin_user_tok,
951 )
952 self.helper.send_state(
953 room_id_2, "m.room.name", {"name": "B"}, tok=self.admin_user_tok,
954 )
955 self.helper.send_state(
956 room_id_3, "m.room.name", {"name": "C"}, tok=self.admin_user_tok,
957 )
958
959 # Set canonical aliases for the rooms
960 _set_canonical_alias(room_id_1, "#A_alias:test", self.admin_user_tok)
961 _set_canonical_alias(room_id_2, "#B_alias:test", self.admin_user_tok)
962 _set_canonical_alias(room_id_3, "#C_alias:test", self.admin_user_tok)
963
964 # Set room member counts in the reverse order: room 1 -> 1 member, 2 -> 2, 3 -> 3
965 user_1 = self.register_user("bob1", "pass")
966 user_1_tok = self.login("bob1", "pass")
967 self.helper.join(room_id_2, user_1, tok=user_1_tok)
968
969 user_2 = self.register_user("bob2", "pass")
970 user_2_tok = self.login("bob2", "pass")
971 self.helper.join(room_id_3, user_2, tok=user_2_tok)
972
973 user_3 = self.register_user("bob3", "pass")
974 user_3_tok = self.login("bob3", "pass")
975 self.helper.join(room_id_3, user_3, tok=user_3_tok)
976
977 # Test different sort orders, with forward and reverse directions
978 _order_test("name", [room_id_1, room_id_2, room_id_3])
979 _order_test("name", [room_id_3, room_id_2, room_id_1], reverse=True)
980
981 _order_test("canonical_alias", [room_id_1, room_id_2, room_id_3])
982 _order_test("canonical_alias", [room_id_3, room_id_2, room_id_1], reverse=True)
983
984 _order_test("joined_members", [room_id_3, room_id_2, room_id_1])
985 _order_test("joined_members", [room_id_1, room_id_2, room_id_3], reverse=True)
986
987 _order_test("joined_local_members", [room_id_3, room_id_2, room_id_1])
988 _order_test(
989 "joined_local_members", [room_id_1, room_id_2, room_id_3], reverse=True
990 )
991
992 _order_test("version", [room_id_1, room_id_2, room_id_3])
993 _order_test("version", [room_id_1, room_id_2, room_id_3], reverse=True)
994
995 _order_test("creator", [room_id_1, room_id_2, room_id_3])
996 _order_test("creator", [room_id_1, room_id_2, room_id_3], reverse=True)
997
998 _order_test("encryption", [room_id_1, room_id_2, room_id_3])
999 _order_test("encryption", [room_id_1, room_id_2, room_id_3], reverse=True)
1000
1001 _order_test("federatable", [room_id_1, room_id_2, room_id_3])
1002 _order_test("federatable", [room_id_1, room_id_2, room_id_3], reverse=True)
1003
1004 _order_test("public", [room_id_1, room_id_2, room_id_3])
1005 # Sort order differs between SQLite and PostgreSQL
1006 # _order_test("public", [room_id_3, room_id_2, room_id_1], reverse=True)
1007
1008 _order_test("join_rules", [room_id_1, room_id_2, room_id_3])
1009 _order_test("join_rules", [room_id_1, room_id_2, room_id_3], reverse=True)
1010
1011 _order_test("guest_access", [room_id_1, room_id_2, room_id_3])
1012 _order_test("guest_access", [room_id_1, room_id_2, room_id_3], reverse=True)
1013
1014 _order_test("history_visibility", [room_id_1, room_id_2, room_id_3])
1015 _order_test(
1016 "history_visibility", [room_id_1, room_id_2, room_id_3], reverse=True
1017 )
1018
1019 _order_test("state_events", [room_id_3, room_id_2, room_id_1])
1020 _order_test("state_events", [room_id_1, room_id_2, room_id_3], reverse=True)
1021
1022 def test_search_term(self):
1023 """Test that searching for a room works correctly"""
1024 # Create two test rooms
1025 room_id_1 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
1026 room_id_2 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
1027
1028 room_name_1 = "something"
1029 room_name_2 = "else"
1030
1031 # Set the name for each room
1032 self.helper.send_state(
1033 room_id_1, "m.room.name", {"name": room_name_1}, tok=self.admin_user_tok,
1034 )
1035 self.helper.send_state(
1036 room_id_2, "m.room.name", {"name": room_name_2}, tok=self.admin_user_tok,
1037 )
1038
1039 def _search_test(
1040 expected_room_id: Optional[str],
1041 search_term: str,
1042 expected_http_code: int = 200,
1043 ):
1044 """Search for a room and check that the returned room's id is a match
1045
1046 Args:
1047 expected_room_id: The room_id expected to be returned by the API. Set
1048 to None to expect zero results for the search
1049 search_term: The term to search for room names with
1050 expected_http_code: The expected http code for the request
1051 """
1052 url = "/_synapse/admin/v1/rooms?search_term=%s" % (search_term,)
1053 request, channel = self.make_request(
1054 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
1055 )
1056 self.render(request)
1057 self.assertEqual(expected_http_code, channel.code, msg=channel.json_body)
1058
1059 if expected_http_code != 200:
1060 return
1061
1062 # Check that rooms were returned
1063 self.assertTrue("rooms" in channel.json_body)
1064 rooms = channel.json_body["rooms"]
1065
1066 # Check that the expected number of rooms were returned
1067 expected_room_count = 1 if expected_room_id else 0
1068 self.assertEqual(len(rooms), expected_room_count)
1069 self.assertEqual(channel.json_body["total_rooms"], expected_room_count)
1070
1071 # Check that the offset is correct
1072 # We're not paginating, so should be 0
1073 self.assertEqual(channel.json_body["offset"], 0)
1074
1075 # Check that there is no `prev_batch`
1076 self.assertNotIn("prev_batch", channel.json_body)
1077
1078 # Check that there is no `next_batch`
1079 self.assertNotIn("next_batch", channel.json_body)
1080
1081 if expected_room_id:
1082 # Check that the first returned room id is correct
1083 r = rooms[0]
1084 self.assertEqual(expected_room_id, r["room_id"])
1085
1086 # Perform search tests
1087 _search_test(room_id_1, "something")
1088 _search_test(room_id_1, "thing")
1089
1090 _search_test(room_id_2, "else")
1091 _search_test(room_id_2, "se")
1092
1093 _search_test(None, "foo")
1094 _search_test(None, "bar")
1095 _search_test(None, "", expected_http_code=400)
1096
1097 def test_single_room(self):
1098 """Test that a single room can be requested correctly"""
1099 # Create two test rooms
1100 room_id_1 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
1101 room_id_2 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
1102
1103 room_name_1 = "something"
1104 room_name_2 = "else"
1105
1106 # Set the name for each room
1107 self.helper.send_state(
1108 room_id_1, "m.room.name", {"name": room_name_1}, tok=self.admin_user_tok,
1109 )
1110 self.helper.send_state(
1111 room_id_2, "m.room.name", {"name": room_name_2}, tok=self.admin_user_tok,
1112 )
1113
1114 url = "/_synapse/admin/v1/rooms/%s" % (room_id_1,)
1115 request, channel = self.make_request(
1116 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
1117 )
1118 self.render(request)
1119 self.assertEqual(200, channel.code, msg=channel.json_body)
1120
1121 self.assertIn("room_id", channel.json_body)
1122 self.assertIn("name", channel.json_body)
1123 self.assertIn("canonical_alias", channel.json_body)
1124 self.assertIn("joined_members", channel.json_body)
1125 self.assertIn("joined_local_members", channel.json_body)
1126 self.assertIn("version", channel.json_body)
1127 self.assertIn("creator", channel.json_body)
1128 self.assertIn("encryption", channel.json_body)
1129 self.assertIn("federatable", channel.json_body)
1130 self.assertIn("public", channel.json_body)
1131 self.assertIn("join_rules", channel.json_body)
1132 self.assertIn("guest_access", channel.json_body)
1133 self.assertIn("history_visibility", channel.json_body)
1134 self.assertIn("state_events", channel.json_body)
1135
1136 self.assertEqual(room_id_1, channel.json_body["room_id"])
1137
1138 def test_room_members(self):
1139 """Test that room members can be requested correctly"""
1140 # Create two test rooms
1141 room_id_1 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
1142 room_id_2 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok)
1143
1144 # Have another user join the room
1145 user_1 = self.register_user("foo", "pass")
1146 user_tok_1 = self.login("foo", "pass")
1147 self.helper.join(room_id_1, user_1, tok=user_tok_1)
1148
1149 # Have another user join the room
1150 user_2 = self.register_user("bar", "pass")
1151 user_tok_2 = self.login("bar", "pass")
1152 self.helper.join(room_id_1, user_2, tok=user_tok_2)
1153 self.helper.join(room_id_2, user_2, tok=user_tok_2)
1154
1155 # Have another user join the room
1156 user_3 = self.register_user("foobar", "pass")
1157 user_tok_3 = self.login("foobar", "pass")
1158 self.helper.join(room_id_2, user_3, tok=user_tok_3)
1159
1160 url = "/_synapse/admin/v1/rooms/%s/members" % (room_id_1,)
1161 request, channel = self.make_request(
1162 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
1163 )
1164 self.render(request)
1165 self.assertEqual(200, channel.code, msg=channel.json_body)
1166
1167 self.assertCountEqual(
1168 ["@admin:test", "@foo:test", "@bar:test"], channel.json_body["members"]
1169 )
1170 self.assertEqual(channel.json_body["total"], 3)
1171
1172 url = "/_synapse/admin/v1/rooms/%s/members" % (room_id_2,)
1173 request, channel = self.make_request(
1174 "GET", url.encode("ascii"), access_token=self.admin_user_tok,
1175 )
1176 self.render(request)
1177 self.assertEqual(200, channel.code, msg=channel.json_body)
1178
1179 self.assertCountEqual(
1180 ["@admin:test", "@bar:test", "@foobar:test"], channel.json_body["members"]
1181 )
1182 self.assertEqual(channel.json_body["total"], 3)
1183
1184
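RoomTestCase above walks the room-list admin API through pagination, sorting and search. A compact sketch of the same pagination loop from a plain client, with placeholder URL and token; the query parameters (`from`, `limit`, `order_by`, `dir`, `search_term`) and the `next_batch`/`total_rooms` fields are the ones the tests assert on:

    import requests

    BASE = "http://localhost:8008"                        # hypothetical homeserver
    HEADERS = {"Authorization": "Bearer <admin_token>"}   # hypothetical admin access token

    rooms = []
    params = {"limit": 2, "order_by": "name"}             # add "dir": "b" to reverse, "search_term" to filter
    while True:
        resp = requests.get(BASE + "/_synapse/admin/v1/rooms", headers=HEADERS, params=params)
        resp.raise_for_status()
        body = resp.json()
        rooms.extend(body["rooms"])
        if "next_batch" not in body:                      # no further pages
            break
        params["from"] = body["next_batch"]

    assert len(rooms) == body["total_rooms"]
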
1185 class JoinAliasRoomTestCase(unittest.HomeserverTestCase):
1186
1187 servlets = [
1188 synapse.rest.admin.register_servlets,
1189 room.register_servlets,
1190 login.register_servlets,
1191 ]
1192
1193 def prepare(self, reactor, clock, homeserver):
1194 self.admin_user = self.register_user("admin", "pass", admin=True)
1195 self.admin_user_tok = self.login("admin", "pass")
1196
1197 self.creator = self.register_user("creator", "test")
1198 self.creator_tok = self.login("creator", "test")
1199
1200 self.second_user_id = self.register_user("second", "test")
1201 self.second_tok = self.login("second", "test")
1202
1203 self.public_room_id = self.helper.create_room_as(
1204 self.creator, tok=self.creator_tok, is_public=True
1205 )
1206 self.url = "/_synapse/admin/v1/join/{}".format(self.public_room_id)
1207
1208 def test_requester_is_no_admin(self):
1209 """
1210 If the user is not a server admin, an error 403 is returned.
1211 """
1212 body = json.dumps({"user_id": self.second_user_id})
1213
1214 request, channel = self.make_request(
1215 "POST",
1216 self.url,
1217 content=body.encode(encoding="utf_8"),
1218 access_token=self.second_tok,
1219 )
1220 self.render(request)
1221
1222 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
1223 self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
1224
1225 def test_invalid_parameter(self):
1226 """
1227 If a required parameter is missing, an error 400 is returned.
1228 """
1229 body = json.dumps({"unknown_parameter": "@unknown:test"})
1230
1231 request, channel = self.make_request(
1232 "POST",
1233 self.url,
1234 content=body.encode(encoding="utf_8"),
1235 access_token=self.admin_user_tok,
1236 )
1237 self.render(request)
1238
1239 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
1240 self.assertEqual(Codes.MISSING_PARAM, channel.json_body["errcode"])
1241
1242 def test_local_user_does_not_exist(self):
1243 """
1244 Tests that a lookup for a user that does not exist returns a 404
1245 """
1246 body = json.dumps({"user_id": "@unknown:test"})
1247
1248 request, channel = self.make_request(
1249 "POST",
1250 self.url,
1251 content=body.encode(encoding="utf_8"),
1252 access_token=self.admin_user_tok,
1253 )
1254 self.render(request)
1255
1256 self.assertEqual(404, int(channel.result["code"]), msg=channel.result["body"])
1257 self.assertEqual(Codes.NOT_FOUND, channel.json_body["errcode"])
1258
1259 def test_remote_user(self):
1260 """
1261 Check that only local users can be joined to rooms.
1262 """
1263 body = json.dumps({"user_id": "@not:exist.bla"})
1264
1265 request, channel = self.make_request(
1266 "POST",
1267 self.url,
1268 content=body.encode(encoding="utf_8"),
1269 access_token=self.admin_user_tok,
1270 )
1271 self.render(request)
1272
1273 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
1274 self.assertEqual(
1275 "This endpoint can only be used with local users",
1276 channel.json_body["error"],
1277 )
1278
1279 def test_room_does_not_exist(self):
1280 """
1281 Check that unknown rooms/servers return error 404.
1282 """
1283 body = json.dumps({"user_id": self.second_user_id})
1284 url = "/_synapse/admin/v1/join/!unknown:test"
1285
1286 request, channel = self.make_request(
1287 "POST",
1288 url,
1289 content=body.encode(encoding="utf_8"),
1290 access_token=self.admin_user_tok,
1291 )
1292 self.render(request)
1293
1294 self.assertEqual(404, int(channel.result["code"]), msg=channel.result["body"])
1295 self.assertEqual("No known servers", channel.json_body["error"])
1296
1297 def test_room_is_not_valid(self):
1298 """
1299 Check that invalid room names return an error 400.
1300 """
1301 body = json.dumps({"user_id": self.second_user_id})
1302 url = "/_synapse/admin/v1/join/invalidroom"
1303
1304 request, channel = self.make_request(
1305 "POST",
1306 url,
1307 content=body.encode(encoding="utf_8"),
1308 access_token=self.admin_user_tok,
1309 )
1310 self.render(request)
1311
1312 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
1313 self.assertEqual(
1314 "invalidroom was not legal room ID or room alias",
1315 channel.json_body["error"],
1316 )
1317
1318 def test_join_public_room(self):
1319 """
1320 Test joining a local user to a public room with "JoinRules.PUBLIC"
1321 """
1322 body = json.dumps({"user_id": self.second_user_id})
1323
1324 request, channel = self.make_request(
1325 "POST",
1326 self.url,
1327 content=body.encode(encoding="utf_8"),
1328 access_token=self.admin_user_tok,
1329 )
1330 self.render(request)
1331
1332 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1333 self.assertEqual(self.public_room_id, channel.json_body["room_id"])
1334
1335 # Validate that the user is a member of the room
1336
1337 request, channel = self.make_request(
1338 "GET", "/_matrix/client/r0/joined_rooms", access_token=self.second_tok,
1339 )
1340 self.render(request)
1341 self.assertEquals(200, int(channel.result["code"]), msg=channel.result["body"])
1342 self.assertEqual(self.public_room_id, channel.json_body["joined_rooms"][0])
1343
1344 def test_join_private_room_if_not_member(self):
1345 """
1346 Test joining a local user to a private room with "JoinRules.INVITE"
1347 when server admin is not member of this room.
1348 """
1349 private_room_id = self.helper.create_room_as(
1350 self.creator, tok=self.creator_tok, is_public=False
1351 )
1352 url = "/_synapse/admin/v1/join/{}".format(private_room_id)
1353 body = json.dumps({"user_id": self.second_user_id})
1354
1355 request, channel = self.make_request(
1356 "POST",
1357 url,
1358 content=body.encode(encoding="utf_8"),
1359 access_token=self.admin_user_tok,
1360 )
1361 self.render(request)
1362
1363 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
1364 self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
1365
1366 def test_join_private_room_if_member(self):
1367 """
1368 Test joining a local user to a private room with "JoinRules.INVITE",
1369 when server admin is member of this room.
1370 """
1371 private_room_id = self.helper.create_room_as(
1372 self.creator, tok=self.creator_tok, is_public=False
1373 )
1374 self.helper.invite(
1375 room=private_room_id,
1376 src=self.creator,
1377 targ=self.admin_user,
1378 tok=self.creator_tok,
1379 )
1380 self.helper.join(
1381 room=private_room_id, user=self.admin_user, tok=self.admin_user_tok
1382 )
1383
1384 # Validate that the server admin is a member of the room
1385
1386 request, channel = self.make_request(
1387 "GET", "/_matrix/client/r0/joined_rooms", access_token=self.admin_user_tok,
1388 )
1389 self.render(request)
1390 self.assertEquals(200, int(channel.result["code"]), msg=channel.result["body"])
1391 self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])
1392
1393 # Join user to room.
1394
1395 url = "/_synapse/admin/v1/join/{}".format(private_room_id)
1396 body = json.dumps({"user_id": self.second_user_id})
1397
1398 request, channel = self.make_request(
1399 "POST",
1400 url,
1401 content=body.encode(encoding="utf_8"),
1402 access_token=self.admin_user_tok,
1403 )
1404 self.render(request)
1405 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1406 self.assertEqual(private_room_id, channel.json_body["room_id"])
1407
1408 # Validate that the user is a member of the room
1409
1410 request, channel = self.make_request(
1411 "GET", "/_matrix/client/r0/joined_rooms", access_token=self.second_tok,
1412 )
1413 self.render(request)
1414 self.assertEquals(200, int(channel.result["code"]), msg=channel.result["body"])
1415 self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])
1416
1417 def test_join_private_room_if_owner(self):
1418 """
1419 Test joining a local user to a private room with "JoinRules.INVITE",
1420 when server admin is owner of this room.
1421 """
1422 private_room_id = self.helper.create_room_as(
1423 self.admin_user, tok=self.admin_user_tok, is_public=False
1424 )
1425 url = "/_synapse/admin/v1/join/{}".format(private_room_id)
1426 body = json.dumps({"user_id": self.second_user_id})
1427
1428 request, channel = self.make_request(
1429 "POST",
1430 url,
1431 content=body.encode(encoding="utf_8"),
1432 access_token=self.admin_user_tok,
1433 )
1434 self.render(request)
1435
1436 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1437 self.assertEqual(private_room_id, channel.json_body["room_id"])
1438
1439 # Validate that the user is a member of the room
1440
1441 request, channel = self.make_request(
1442 "GET", "/_matrix/client/r0/joined_rooms", access_token=self.second_tok,
1443 )
1444 self.render(request)
1445 self.assertEquals(200, int(channel.result["code"]), msg=channel.result["body"])
1446 self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])
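JoinAliasRoomTestCase above covers the new `POST /_synapse/admin/v1/join/<room_id_or_alias>` endpoint: only local users can be joined, the admin must be able to invite into private rooms, and unknown rooms give a 404. A minimal sketch with placeholder values:

    import requests

    BASE = "http://localhost:8008"                        # hypothetical homeserver
    HEADERS = {"Authorization": "Bearer <admin_token>"}   # hypothetical admin access token

    resp = requests.post(
        BASE + "/_synapse/admin/v1/join/!someroom:example.com",  # room ID or alias
        headers=HEADERS,
        json={"user_id": "@second:example.com"},          # must be a local user
    )
    resp.raise_for_status()
    print(resp.json()["room_id"])
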
856856 self.assertEqual("@user:test", channel.json_body["name"])
857857 self.assertEqual(True, channel.json_body["deactivated"])
858858
859 def test_reactivate_user(self):
860 """
861 Test reactivating another user.
862 """
863
864 # Deactivate the user.
865 request, channel = self.make_request(
866 "PUT",
867 self.url_other_user,
868 access_token=self.admin_user_tok,
869 content=json.dumps({"deactivated": True}).encode(encoding="utf_8"),
870 )
871 self.render(request)
872 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
873
874 # Attempt to reactivate the user (without a password).
875 request, channel = self.make_request(
876 "PUT",
877 self.url_other_user,
878 access_token=self.admin_user_tok,
879 content=json.dumps({"deactivated": False}).encode(encoding="utf_8"),
880 )
881 self.render(request)
882 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
883
884 # Reactivate the user.
885 request, channel = self.make_request(
886 "PUT",
887 self.url_other_user,
888 access_token=self.admin_user_tok,
889 content=json.dumps({"deactivated": False, "password": "foo"}).encode(
890 encoding="utf_8"
891 ),
892 )
893 self.render(request)
894 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
895
896 # Get user
897 request, channel = self.make_request(
898 "GET", self.url_other_user, access_token=self.admin_user_tok,
899 )
900 self.render(request)
901
902 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
903 self.assertEqual("@user:test", channel.json_body["name"])
904 self.assertEqual(False, channel.json_body["deactivated"])
905
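The reactivation test above drives the user admin API via `self.url_other_user`, which is not shown in this hunk; the sketch below assumes it is the v2 user endpoint (`/_synapse/admin/v2/users/<user_id>`) and uses placeholder values. As the test asserts, reactivating without supplying a new `password` is rejected with a 400:

    import requests

    BASE = "http://localhost:8008"                        # hypothetical homeserver
    HEADERS = {"Authorization": "Bearer <admin_token>"}   # hypothetical admin access token

    # Assumed endpoint path; the test only refers to it as self.url_other_user.
    resp = requests.put(
        BASE + "/_synapse/admin/v2/users/@user:example.com",
        headers=HEADERS,
        json={"deactivated": False, "password": "new-password"},
    )
    resp.raise_for_status()
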
859906 def test_set_user_as_admin(self):
860907 """
861908 Test setting the admin flag on a user.
397397 </cas:serviceResponse>
398398 """
399399 % cas_user_id
400 )
400 ).encode("utf-8")
401401
402402 mocked_http_client = Mock(spec=["get_raw"])
403403 mocked_http_client.get_raw.side_effect = get_raw
513513 ]
514514
515515 jwt_secret = "secret"
516 jwt_algorithm = "HS256"
516517
517518 def make_homeserver(self, reactor, clock):
518519 self.hs = self.setup_test_homeserver()
519520 self.hs.config.jwt_enabled = True
520521 self.hs.config.jwt_secret = self.jwt_secret
521 self.hs.config.jwt_algorithm = "HS256"
522 self.hs.config.jwt_algorithm = self.jwt_algorithm
522523 return self.hs
523524
524525 def jwt_encode(self, token, secret=jwt_secret):
525 return jwt.encode(token, secret, "HS256").decode("ascii")
526 return jwt.encode(token, secret, self.jwt_algorithm).decode("ascii")
526527
527528 def jwt_login(self, *args):
528529 params = json.dumps(
545546
546547 def test_login_jwt_invalid_signature(self):
547548 channel = self.jwt_login({"sub": "frog"}, "notsecret")
548 self.assertEqual(channel.result["code"], b"401", channel.result)
549 self.assertEqual(channel.json_body["errcode"], "M_UNAUTHORIZED")
550 self.assertEqual(channel.json_body["error"], "Invalid JWT")
549 self.assertEqual(channel.result["code"], b"403", channel.result)
550 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
551 self.assertEqual(
552 channel.json_body["error"],
553 "JWT validation failed: Signature verification failed",
554 )
551555
552556 def test_login_jwt_expired(self):
553557 channel = self.jwt_login({"sub": "frog", "exp": 864000})
554 self.assertEqual(channel.result["code"], b"401", channel.result)
555 self.assertEqual(channel.json_body["errcode"], "M_UNAUTHORIZED")
556 self.assertEqual(channel.json_body["error"], "JWT expired")
558 self.assertEqual(channel.result["code"], b"403", channel.result)
559 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
560 self.assertEqual(
561 channel.json_body["error"], "JWT validation failed: Signature has expired"
562 )
557563
558564 def test_login_jwt_not_before(self):
559565 now = int(time.time())
560566 channel = self.jwt_login({"sub": "frog", "nbf": now + 3600})
561 self.assertEqual(channel.result["code"], b"401", channel.result)
562 self.assertEqual(channel.json_body["errcode"], "M_UNAUTHORIZED")
563 self.assertEqual(channel.json_body["error"], "Invalid JWT")
567 self.assertEqual(channel.result["code"], b"403", channel.result)
568 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
569 self.assertEqual(
570 channel.json_body["error"],
571 "JWT validation failed: The token is not yet valid (nbf)",
572 )
564573
565574 def test_login_no_sub(self):
566575 channel = self.jwt_login({"username": "root"})
567 self.assertEqual(channel.result["code"], b"401", channel.result)
568 self.assertEqual(channel.json_body["errcode"], "M_UNAUTHORIZED")
576 self.assertEqual(channel.result["code"], b"403", channel.result)
577 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
569578 self.assertEqual(channel.json_body["error"], "Invalid JWT")
579
580 @override_config(
581 {
582 "jwt_config": {
583 "jwt_enabled": True,
584 "secret": jwt_secret,
585 "algorithm": jwt_algorithm,
586 "issuer": "test-issuer",
587 }
588 }
589 )
590 def test_login_iss(self):
591 """Test validating the issuer claim."""
592 # A valid issuer.
593 channel = self.jwt_login({"sub": "kermit", "iss": "test-issuer"})
594 self.assertEqual(channel.result["code"], b"200", channel.result)
595 self.assertEqual(channel.json_body["user_id"], "@kermit:test")
596
597 # An invalid issuer.
598 channel = self.jwt_login({"sub": "kermit", "iss": "invalid"})
599 self.assertEqual(channel.result["code"], b"403", channel.result)
600 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
601 self.assertEqual(
602 channel.json_body["error"], "JWT validation failed: Invalid issuer"
603 )
604
605 # Not providing an issuer.
606 channel = self.jwt_login({"sub": "kermit"})
607 self.assertEqual(channel.result["code"], b"403", channel.result)
608 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
609 self.assertEqual(
610 channel.json_body["error"],
611 'JWT validation failed: Token is missing the "iss" claim',
612 )
613
614 def test_login_iss_no_config(self):
615 """Test providing an issuer claim without requiring it in the configuration."""
616 channel = self.jwt_login({"sub": "kermit", "iss": "invalid"})
617 self.assertEqual(channel.result["code"], b"200", channel.result)
618 self.assertEqual(channel.json_body["user_id"], "@kermit:test")
619
620 @override_config(
621 {
622 "jwt_config": {
623 "jwt_enabled": True,
624 "secret": jwt_secret,
625 "algorithm": jwt_algorithm,
626 "audiences": ["test-audience"],
627 }
628 }
629 )
630 def test_login_aud(self):
631 """Test validating the audience claim."""
632 # A valid audience.
633 channel = self.jwt_login({"sub": "kermit", "aud": "test-audience"})
634 self.assertEqual(channel.result["code"], b"200", channel.result)
635 self.assertEqual(channel.json_body["user_id"], "@kermit:test")
636
637 # An invalid audience.
638 channel = self.jwt_login({"sub": "kermit", "aud": "invalid"})
639 self.assertEqual(channel.result["code"], b"403", channel.result)
640 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
641 self.assertEqual(
642 channel.json_body["error"], "JWT validation failed: Invalid audience"
643 )
644
645 # Not providing an audience.
646 channel = self.jwt_login({"sub": "kermit"})
647 self.assertEqual(channel.result["code"], b"403", channel.result)
648 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
649 self.assertEqual(
650 channel.json_body["error"],
651 'JWT validation failed: Token is missing the "aud" claim',
652 )
653
654 def test_login_aud_no_config(self):
655 """Test providing an audience without requiring it in the configuration."""
656 channel = self.jwt_login({"sub": "kermit", "aud": "invalid"})
657 self.assertEqual(channel.result["code"], b"403", channel.result)
658 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
659 self.assertEqual(
660 channel.json_body["error"], "JWT validation failed: Invalid audience"
661 )
570662
571663 def test_login_no_token(self):
572664 params = json.dumps({"type": "org.matrix.login.jwt"})
573665 request, channel = self.make_request(b"POST", LOGIN_URL, params)
574666 self.render(request)
575 self.assertEqual(channel.result["code"], b"401", channel.result)
576 self.assertEqual(channel.json_body["errcode"], "M_UNAUTHORIZED")
667 self.assertEqual(channel.result["code"], b"403", channel.result)
668 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
577669 self.assertEqual(channel.json_body["error"], "Token field for JWT is missing")
578670
579671
655747
656748 def test_login_jwt_invalid_signature(self):
657749 channel = self.jwt_login({"sub": "frog"}, self.bad_privatekey)
658 self.assertEqual(channel.result["code"], b"401", channel.result)
659 self.assertEqual(channel.json_body["errcode"], "M_UNAUTHORIZED")
660 self.assertEqual(channel.json_body["error"], "Invalid JWT")
750 self.assertEqual(channel.result["code"], b"403", channel.result)
751 self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
752 self.assertEqual(
753 channel.json_body["error"],
754 "JWT validation failed: Signature verification failed",
755 )
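The JWT login changes above replace the old 401/M_UNAUTHORIZED responses with 403/M_FORBIDDEN and more specific "JWT validation failed: ..." messages, and add optional `iss`/`aud` claim validation. A minimal client-side sketch (PyJWT plus `requests`, with a placeholder homeserver and secret) of logging in with `org.matrix.login.jwt`, sending the same claims these tests use:

    import time

    import jwt          # PyJWT
    import requests

    BASE = "http://localhost:8008"      # hypothetical homeserver with jwt_config enabled
    SECRET = "secret"                   # must match the configured JWT secret

    token = jwt.encode(
        {"sub": "kermit", "iss": "test-issuer", "exp": int(time.time()) + 300},
        SECRET,
        algorithm="HS256",
    )
    if isinstance(token, bytes):        # PyJWT < 2 returns bytes, >= 2 returns str
        token = token.decode("ascii")

    resp = requests.post(
        BASE + "/_matrix/client/r0/login",
        json={"type": "org.matrix.login.jwt", "token": token},
    )
    # A rejected token now yields 403 with errcode M_FORBIDDEN and an error of the
    # form "JWT validation failed: ...", rather than the previous 401.
    print(resp.status_code, resp.json())
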
2525 from parameterized import parameterized_class
2626 from PIL import Image as Image
2727
28 from twisted.internet import defer
2829 from twisted.internet.defer import Deferred
2930
3031 from synapse.logging.context import make_deferred_yieldable
7677
7778 # This uses a real blocking threadpool so we have to wait for it to be
7879 # actually done :/
79 x = self.media_storage.ensure_media_is_in_local_cache(file_info)
80 x = defer.ensureDeferred(
81 self.media_storage.ensure_media_is_in_local_cache(file_info)
82 )
8083
8184 # Hotloop until the threadpool does its job...
8285 self.wait_on_thread(x)
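Several hunks in this diff, including the one above and the state-resolution and storage test changes further down, wrap calls in `defer.ensureDeferred(...)`: the wrapped Synapse methods are now native coroutines, and the Twisted test helpers (`successResultOf`, `wait_on_thread`, `yield` inside `inlineCallbacks`) need a `Deferred`. A tiny self-contained illustration of the pattern:

    from twisted.internet import defer
    from twisted.trial import unittest


    async def fetch_value():
        # Stand-in for a Synapse method that has become an async coroutine.
        return 42


    class EnsureDeferredExample(unittest.TestCase):
        def test_wraps_coroutine(self):
            # A bare coroutine is not a Deferred, so the usual Twisted test helpers
            # cannot consume it directly; ensureDeferred adapts it.
            d = defer.ensureDeferred(fetch_value())
            self.assertEqual(self.successResultOf(d), 42)
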
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
14 import json
1515 import os
16 import re
17
18 from mock import patch
1619
1720 import attr
1821
130133 self.reactor.nameResolver = Resolver()
131134
132135 def test_cache_returns_correct_type(self):
133 self.lookups["matrix.org"] = [(IPv4Address, "8.8.8.8")]
136 self.lookups["matrix.org"] = [(IPv4Address, "10.1.2.3")]
134137
135138 request, channel = self.make_request(
136139 "GET", "url_preview?url=http://matrix.org", shorthand=False
186189 )
187190
188191 def test_non_ascii_preview_httpequiv(self):
189 self.lookups["matrix.org"] = [(IPv4Address, "8.8.8.8")]
192 self.lookups["matrix.org"] = [(IPv4Address, "10.1.2.3")]
190193
191194 end_content = (
192195 b"<html><head>"
220223 self.assertEqual(channel.json_body["og:title"], "\u0434\u043a\u0430")
221224
222225 def test_non_ascii_preview_content_type(self):
223 self.lookups["matrix.org"] = [(IPv4Address, "8.8.8.8")]
226 self.lookups["matrix.org"] = [(IPv4Address, "10.1.2.3")]
224227
225228 end_content = (
226229 b"<html><head>"
253256 self.assertEqual(channel.json_body["og:title"], "\u0434\u043a\u0430")
254257
255258 def test_overlong_title(self):
256 self.lookups["matrix.org"] = [(IPv4Address, "8.8.8.8")]
259 self.lookups["matrix.org"] = [(IPv4Address, "10.1.2.3")]
257260
258261 end_content = (
259262 b"<html><head>"
291294 """
292295 IP addresses can be previewed directly.
293296 """
294 self.lookups["example.com"] = [(IPv4Address, "8.8.8.8")]
297 self.lookups["example.com"] = [(IPv4Address, "10.1.2.3")]
295298
296299 request, channel = self.make_request(
297300 "GET", "url_preview?url=http://example.com", shorthand=False
438441 # Hardcode the URL resolving to the IP we want.
439442 self.lookups["example.com"] = [
440443 (IPv4Address, "1.1.1.2"),
441 (IPv4Address, "8.8.8.8"),
444 (IPv4Address, "10.1.2.3"),
442445 ]
443446
444447 request, channel = self.make_request(
517520 """
518521 Accept-Language header is sent to the remote server
519522 """
520 self.lookups["example.com"] = [(IPv4Address, "8.8.8.8")]
523 self.lookups["example.com"] = [(IPv4Address, "10.1.2.3")]
521524
522525 # Build and make a request to the server
523526 request, channel = self.make_request(
561564 ),
562565 server.data,
563566 )
567
568 def test_oembed_photo(self):
569 """Test an oEmbed endpoint which returns a 'photo' type which redirects the preview to a new URL."""
570 # Route the HTTP version to an HTTP endpoint so that the tests work.
571 with patch.dict(
572 "synapse.rest.media.v1.preview_url_resource._oembed_patterns",
573 {
574 re.compile(
575 r"http://twitter\.com/.+/status/.+"
576 ): "http://publish.twitter.com/oembed",
577 },
578 clear=True,
579 ):
580
581 self.lookups["publish.twitter.com"] = [(IPv4Address, "10.1.2.3")]
582 self.lookups["cdn.twitter.com"] = [(IPv4Address, "10.1.2.3")]
583
584 result = {
585 "version": "1.0",
586 "type": "photo",
587 "url": "http://cdn.twitter.com/matrixdotorg",
588 }
589 oembed_content = json.dumps(result).encode("utf-8")
590
591 end_content = (
592 b"<html><head>"
593 b"<title>Some Title</title>"
594 b'<meta property="og:description" content="hi" />'
595 b"</head></html>"
596 )
597
598 request, channel = self.make_request(
599 "GET",
600 "url_preview?url=http://twitter.com/matrixdotorg/status/12345",
601 shorthand=False,
602 )
603 request.render(self.preview_url)
604 self.pump()
605
606 client = self.reactor.tcpClients[0][2].buildProtocol(None)
607 server = AccumulatingProtocol()
608 server.makeConnection(FakeTransport(client, self.reactor))
609 client.makeConnection(FakeTransport(server, self.reactor))
610 client.dataReceived(
611 (
612 b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n"
613 b'Content-Type: application/json; charset="utf8"\r\n\r\n'
614 )
615 % (len(oembed_content),)
616 + oembed_content
617 )
618
619 self.pump()
620
621 client = self.reactor.tcpClients[1][2].buildProtocol(None)
622 server = AccumulatingProtocol()
623 server.makeConnection(FakeTransport(client, self.reactor))
624 client.makeConnection(FakeTransport(server, self.reactor))
625 client.dataReceived(
626 (
627 b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n"
628 b'Content-Type: text/html; charset="utf8"\r\n\r\n'
629 )
630 % (len(end_content),)
631 + end_content
632 )
633
634 self.pump()
635
636 self.assertEqual(channel.code, 200)
637 self.assertEqual(
638 channel.json_body, {"og:title": "Some Title", "og:description": "hi"}
639 )
640
641 def test_oembed_rich(self):
642 """Test an oEmbed endpoint which returns HTML content via the 'rich' type."""
643 # Route the HTTP version to an HTTP endpoint so that the tests work.
644 with patch.dict(
645 "synapse.rest.media.v1.preview_url_resource._oembed_patterns",
646 {
647 re.compile(
648 r"http://twitter\.com/.+/status/.+"
649 ): "http://publish.twitter.com/oembed",
650 },
651 clear=True,
652 ):
653
654 self.lookups["publish.twitter.com"] = [(IPv4Address, "10.1.2.3")]
655
656 result = {
657 "version": "1.0",
658 "type": "rich",
659 "html": "<div>Content Preview</div>",
660 }
661 end_content = json.dumps(result).encode("utf-8")
662
663 request, channel = self.make_request(
664 "GET",
665 "url_preview?url=http://twitter.com/matrixdotorg/status/12345",
666 shorthand=False,
667 )
668 request.render(self.preview_url)
669 self.pump()
670
671 client = self.reactor.tcpClients[0][2].buildProtocol(None)
672 server = AccumulatingProtocol()
673 server.makeConnection(FakeTransport(client, self.reactor))
674 client.makeConnection(FakeTransport(server, self.reactor))
675 client.dataReceived(
676 (
677 b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n"
678 b'Content-Type: application/json; charset="utf8"\r\n\r\n'
679 )
680 % (len(end_content),)
681 + end_content
682 )
683
684 self.pump()
685 self.assertEqual(channel.code, 200)
686 self.assertEqual(
687 channel.json_body,
688 {"og:title": None, "og:description": "Content Preview"},
689 )
236236 def __init__(self):
237237 self.threadpool = ThreadPool(self)
238238
239 self._tcp_callbacks = {}
239240 self._udp = []
240241 lookups = self.lookups = {}
241242
266267
267268 def getThreadPool(self):
268269 return self.threadpool
270
271 def add_tcp_client_callback(self, host, port, callback):
272 """Add a callback that will be invoked when we receive a connection
273 attempt to the given IP/port using `connectTCP`.
274
275 Note that the callback gets run before we return the connection to the
276 client, which means callbacks cannot block while waiting for writes.
277 """
278 self._tcp_callbacks[(host, port)] = callback
279
280 def connectTCP(self, host, port, factory, timeout=30, bindAddress=None):
281 """Fake L{IReactorTCP.connectTCP}.
282 """
283
284 conn = super().connectTCP(
285 host, port, factory, timeout=timeout, bindAddress=None
286 )
287
288 callback = self._tcp_callbacks.get((host, port))
289 if callback:
290 callback()
291
292 return conn
269293
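The fake reactor in `tests/server.py` gains `add_tcp_client_callback`, so a test can be notified when code under test calls `connectTCP` for a given host/port; as the comment above notes, the callback runs before the connection is handed back, so it must not block. A rough usage sketch, runnable only inside the Synapse test tree; the port and callback here are illustrative:

    from twisted.internet.protocol import ClientFactory

    from tests.server import ThreadedMemoryReactorClock

    seen = []

    reactor = ThreadedMemoryReactorClock()
    reactor.add_tcp_client_callback("127.0.0.1", 8765, lambda: seen.append(("127.0.0.1", 8765)))

    # The callback fires synchronously as part of connectTCP.
    reactor.connectTCP("127.0.0.1", 8765, ClientFactory())
    assert seen == [("127.0.0.1", 8765)]
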
270294
271295 class ThreadPool:
485509 try:
486510 self.other.dataReceived(to_write)
487511 except Exception as e:
488 logger.warning("Exception writing to protocol: %s", e)
512 logger.exception("Exception writing to protocol: %s", e)
489513 return
490514
491515 self.buffer = self.buffer[len(to_write) :]
1313 # limitations under the License.
1414
1515 import itertools
16 from typing import List
1617
1718 import attr
1819
431432 state_res_store=TestStateResolutionStore(event_map),
432433 )
433434
434 state_before = self.successResultOf(state_d)
435 state_before = self.successResultOf(defer.ensureDeferred(state_d))
435436
436437 state_after = dict(state_before)
437438 if fake_event.state_key is not None:
580581 state_res_store=TestStateResolutionStore(self.event_map),
581582 )
582583
583 state = self.successResultOf(state_d)
584 state = self.successResultOf(defer.ensureDeferred(state_d))
584585
585586 self.assert_dict(self.expected_combined_state, state)
586587
607608 Deferred[dict[str, FrozenEvent]]: Dict from event_id to event.
608609 """
609610
610 return {eid: self.event_map[eid] for eid in event_ids if eid in self.event_map}
611
612 def _get_auth_chain(self, event_ids):
611 return defer.succeed(
612 {eid: self.event_map[eid] for eid in event_ids if eid in self.event_map}
613 )
614
615 def _get_auth_chain(self, event_ids: List[str]) -> List[str]:
613616 """Gets the full auth chain for a set of events (including rejected
614617 events).
615618
621624 presence of rejected events
622625
623626 Args:
624 event_ids (list): The event IDs of the events to fetch the auth
627 event_ids: The event IDs of the events to fetch the auth
625628 chain for. Must be state events.
626629 Returns:
627 Deferred[list[str]]: List of event IDs of the auth chain.
630 List of event IDs of the auth chain.
628631 """
629632
630633 # Simple DFS for auth chain
647650 chains = [frozenset(self._get_auth_chain(a)) for a in auth_sets]
648651
649652 common = set(chains[0]).intersection(*chains[1:])
650 return set(chains[0]).union(*chains[1:]) - common
653 return defer.succeed(set(chains[0]).union(*chains[1:]) - common)
5555 )
5656
5757 @defer.inlineCallbacks
58 def test_get_room_unknown_room(self):
59 self.assertIsNone((yield self.store.get_room("!unknown:test")),)
60
61 @defer.inlineCallbacks
5862 def test_get_room_with_stats(self):
5963 self.assertDictContainsSubset(
6064 {
6468 },
6569 (yield self.store.get_room_with_stats(self.room.to_string())),
6670 )
71
72 @defer.inlineCallbacks
73 def test_get_room_with_stats_unknown_room(self):
74 self.assertIsNone((yield self.store.get_room_with_stats("!unknown:test")),)
6775
6876
6977 class RoomEventsStoreTestCase(unittest.TestCase):
100108 etype=EventTypes.Name, name=name, content={"name": name}, depth=1
101109 )
102110
103 state = yield self.store.get_current_state(room_id=self.room.to_string())
111 state = yield defer.ensureDeferred(
112 self.store.get_current_state(room_id=self.room.to_string())
113 )
104114
105115 self.assertEquals(1, len(state))
106116 self.assertObjectHasAttributes(
116126 etype=EventTypes.Topic, topic=topic, content={"topic": topic}, depth=1
117127 )
118128
119 state = yield self.store.get_current_state(room_id=self.room.to_string())
129 state = yield defer.ensureDeferred(
130 self.store.get_current_state(room_id=self.room.to_string())
131 )
120132
121133 self.assertEquals(1, len(state))
122134 self.assertObjectHasAttributes(
117117
118118 def test_get_joined_users_from_context(self):
119119 room = self.helper.create_room_as(self.u_alice, tok=self.t_alice)
120 bob_event = event_injection.inject_member_event(
121 self.hs, room, self.u_bob, Membership.JOIN
120 bob_event = self.get_success(
121 event_injection.inject_member_event(
122 self.hs, room, self.u_bob, Membership.JOIN
123 )
122124 )
123125
124126 # first, create a regular event
125 event, context = event_injection.create_event(
126 self.hs,
127 room_id=room,
128 sender=self.u_alice,
129 prev_event_ids=[bob_event.event_id],
130 type="m.test.1",
131 content={},
127 event, context = self.get_success(
128 event_injection.create_event(
129 self.hs,
130 room_id=room,
131 sender=self.u_alice,
132 prev_event_ids=[bob_event.event_id],
133 type="m.test.1",
134 content={},
135 )
132136 )
133137
134138 users = self.get_success(
139143 # Regression test for #7376: create a state event whose key matches bob's
140144 # user_id, but which is *not* a membership event, and persist that; then check
141145 # that `get_joined_users_from_context` returns the correct users for the next event.
142 non_member_event = event_injection.inject_event(
143 self.hs,
144 room_id=room,
145 sender=self.u_bob,
146 prev_event_ids=[bob_event.event_id],
147 type="m.test.2",
148 state_key=self.u_bob,
149 content={},
150 )
151 event, context = event_injection.create_event(
152 self.hs,
153 room_id=room,
154 sender=self.u_alice,
155 prev_event_ids=[non_member_event.event_id],
156 type="m.test.3",
157 content={},
146 non_member_event = self.get_success(
147 event_injection.inject_event(
148 self.hs,
149 room_id=room,
150 sender=self.u_bob,
151 prev_event_ids=[bob_event.event_id],
152 type="m.test.2",
153 state_key=self.u_bob,
154 content={},
155 )
156 )
157 event, context = self.get_success(
158 event_injection.create_event(
159 self.hs,
160 room_id=room,
161 sender=self.u_alice,
162 prev_event_ids=[non_member_event.event_id],
163 type="m.test.3",
164 content={},
165 )
158166 )
159167 users = self.get_success(
160168 self.store.get_joined_users_from_context(event, context)
6363 },
6464 )
6565
66 event, context = yield self.event_creation_handler.create_new_client_event(
67 builder
66 event, context = yield defer.ensureDeferred(
67 self.event_creation_handler.create_new_client_event(builder)
6868 )
6969
7070 yield self.storage.persistence.persist_event(event, context)
172172 # Register a mock on the store so that the incoming update doesn't fail because
173173 # we don't share a room with the user.
174174 store = self.homeserver.get_datastore()
175 store.get_rooms_for_user = Mock(return_value=["!someroom:test"])
175 store.get_rooms_for_user = Mock(return_value=succeed(["!someroom:test"]))
176176
177177 # Manually inject a fake device list update. We need this update to include at
178178 # least one prev_id so that the user's device list will need to be retried.
217217 # Register mock device list retrieval on the federation client.
218218 federation_client = self.homeserver.get_federation_client()
219219 federation_client.query_user_devices = Mock(
220 return_value={
221 "user_id": remote_user_id,
222 "stream_id": 1,
223 "devices": [],
224 "master_key": {
220 return_value=succeed(
221 {
225222 "user_id": remote_user_id,
226 "usage": ["master"],
227 "keys": {"ed25519:" + remote_master_key: remote_master_key},
228 },
229 "self_signing_key": {
230 "user_id": remote_user_id,
231 "usage": ["self_signing"],
232 "keys": {
233 "ed25519:" + remote_self_signing_key: remote_self_signing_key
223 "stream_id": 1,
224 "devices": [],
225 "master_key": {
226 "user_id": remote_user_id,
227 "usage": ["master"],
228 "keys": {"ed25519:" + remote_master_key: remote_master_key},
234229 },
235 },
236 }
230 "self_signing_key": {
231 "user_id": remote_user_id,
232 "usage": ["self_signing"],
233 "keys": {
234 "ed25519:"
235 + remote_self_signing_key: remote_self_signing_key
236 },
237 },
238 }
239 )
237240 )
238241
239242 # Resync the device list.
1111 # See the License for the specific language governing permissions and
1212 # limitations under the License.
1313
14 import logging
1514 import re
16 from io import StringIO
1715
1816 from twisted.internet.defer import Deferred
19 from twisted.python.failure import Failure
20 from twisted.test.proto_helpers import AccumulatingProtocol
2117 from twisted.web.resource import Resource
22 from twisted.web.server import NOT_DONE_YET
2318
2419 from synapse.api.errors import Codes, RedirectException, SynapseError
2520 from synapse.config.server import parse_listener_def
2621 from synapse.http.server import DirectServeHtmlResource, JsonResource, OptionsResource
27 from synapse.http.site import SynapseSite, logger
22 from synapse.http.site import SynapseSite
2823 from synapse.logging.context import make_deferred_yieldable
2924 from synapse.util import Clock
3025
3126 from tests import unittest
3227 from tests.server import (
33 FakeTransport,
3428 ThreadedMemoryReactorClock,
3529 make_request,
3630 render,
198192 return channel
199193
200194 def test_unknown_options_request(self):
201 """An OPTIONS requests to an unknown URL still returns 200 OK."""
195 """An OPTIONS requests to an unknown URL still returns 204 No Content."""
202196 channel = self._make_request(b"OPTIONS", b"/foo/")
203 self.assertEqual(channel.result["code"], b"200")
204 self.assertEqual(channel.result["body"], b"{}")
197 self.assertEqual(channel.result["code"], b"204")
198 self.assertNotIn("body", channel.result)
205199
206200 # Ensure the correct CORS headers have been added
207201 self.assertTrue(
218212 )
219213
220214 def test_known_options_request(self):
221 """An OPTIONS requests to an known URL still returns 200 OK."""
215 """An OPTIONS requests to an known URL still returns 204 No Content."""
222216 channel = self._make_request(b"OPTIONS", b"/res/")
223 self.assertEqual(channel.result["code"], b"200")
224 self.assertEqual(channel.result["body"], b"{}")
217 self.assertEqual(channel.result["code"], b"204")
218 self.assertNotIn("body", channel.result)
225219
226220 # Ensure the correct CORS headers have been added
227221 self.assertTrue(
317311 self.assertEqual(location_headers, [b"/no/over/there"])
318312 cookies_headers = [v for k, v in headers if k == b"Set-Cookie"]
319313 self.assertEqual(cookies_headers, [b"session=yespls"])
320
321
322 class SiteTestCase(unittest.HomeserverTestCase):
323 def test_lose_connection(self):
324 """
325 We log the URI correctly redacted when we lose the connection.
326 """
327
328 class HangingResource(Resource):
329 """
330 A Resource that strategically hangs, as if it were processing an
331 answer.
332 """
333
334 def render(self, request):
335 return NOT_DONE_YET
336
337 # Set up a logging handler that we can inspect afterwards
338 output = StringIO()
339 handler = logging.StreamHandler(output)
340 logger.addHandler(handler)
341 old_level = logger.level
342 logger.setLevel(10)
343 self.addCleanup(logger.setLevel, old_level)
344 self.addCleanup(logger.removeHandler, handler)
345
346 # Make a resource and a Site, the resource will hang and allow us to
347 # time out the request while it's 'processing'
348 base_resource = Resource()
349 base_resource.putChild(b"", HangingResource())
350 site = SynapseSite(
351 "test", "site_tag", self.hs.config.listeners[0], base_resource, "1.0"
352 )
353
354 server = site.buildProtocol(None)
355 client = AccumulatingProtocol()
356 client.makeConnection(FakeTransport(server, self.reactor))
357 server.makeConnection(FakeTransport(client, self.reactor))
358
359 # Send a request with an access token that will get redacted
360 server.dataReceived(b"GET /?access_token=bar HTTP/1.0\r\n\r\n")
361 self.pump()
362
363 # Lose the connection
364 e = Failure(Exception("Failed123"))
365 server.connectionLost(e)
366 handler.flush()
367
368 # Our access token is redacted and the failure reason is logged.
369 self.assertIn("/?access_token=<redacted>", output.getvalue())
370 self.assertIn("Failed123", output.getvalue())
9696
9797 self._group_to_state[state_group] = dict(current_state_ids)
9898
99 return state_group
99 return defer.succeed(state_group)
100100
101101 def get_events(self, event_ids, **kwargs):
102 return {
103 e_id: self._event_id_to_event[e_id]
104 for e_id in event_ids
105 if e_id in self._event_id_to_event
106 }
102 return defer.succeed(
103 {
104 e_id: self._event_id_to_event[e_id]
105 for e_id in event_ids
106 if e_id in self._event_id_to_event
107 }
108 )
107109
108110 def get_state_group_delta(self, name):
109 return None, None
111 return defer.succeed((None, None))
110112
111113 def register_events(self, events):
112114 for e in events:
119121 self._event_to_state_group[event_id] = state_group
120122
121123 def get_room_version_id(self, room_id):
122 return RoomVersions.V1.identifier
124 return defer.succeed(RoomVersions.V1.identifier)
123125
124126
125127 class DictObj(dict):
201203 context_store = {} # type: dict[str, EventContext]
202204
203205 for event in graph.walk():
204 context = yield self.state.compute_event_context(event)
206 context = yield defer.ensureDeferred(
207 self.state.compute_event_context(event)
208 )
205209 self.store.register_event_context(event, context)
206210 context_store[event.event_id] = context
207211
243247 context_store = {}
244248
245249 for event in graph.walk():
246 context = yield self.state.compute_event_context(event)
250 context = yield defer.ensureDeferred(
251 self.state.compute_event_context(event)
252 )
247253 self.store.register_event_context(event, context)
248254 context_store[event.event_id] = context
249255
299305 context_store = {}
300306
301307 for event in graph.walk():
302 context = yield self.state.compute_event_context(event)
308 context = yield defer.ensureDeferred(
309 self.state.compute_event_context(event)
310 )
303311 self.store.register_event_context(event, context)
304312 context_store[event.event_id] = context
305313
372380 context_store = {}
373381
374382 for event in graph.walk():
375 context = yield self.state.compute_event_context(event)
383 context = yield defer.ensureDeferred(
384 self.state.compute_event_context(event)
385 )
376386 self.store.register_event_context(event, context)
377387 context_store[event.event_id] = context
378388
410420 create_event(type="test2", state_key=""),
411421 ]
412422
413 context = yield self.state.compute_event_context(event, old_state=old_state)
423 context = yield defer.ensureDeferred(
424 self.state.compute_event_context(event, old_state=old_state)
425 )
414426
415427 prev_state_ids = yield context.get_prev_state_ids()
416428 self.assertCountEqual((e.event_id for e in old_state), prev_state_ids.values())
417429
418 current_state_ids = yield context.get_current_state_ids()
430 current_state_ids = yield defer.ensureDeferred(context.get_current_state_ids())
419431 self.assertCountEqual(
420432 (e.event_id for e in old_state), current_state_ids.values()
421433 )
433445 create_event(type="test2", state_key=""),
434446 ]
435447
436 context = yield self.state.compute_event_context(event, old_state=old_state)
448 context = yield defer.ensureDeferred(
449 self.state.compute_event_context(event, old_state=old_state)
450 )
437451
438452 prev_state_ids = yield context.get_prev_state_ids()
439453 self.assertCountEqual((e.event_id for e in old_state), prev_state_ids.values())
440454
441 current_state_ids = yield context.get_current_state_ids()
455 current_state_ids = yield defer.ensureDeferred(context.get_current_state_ids())
442456 self.assertCountEqual(
443457 (e.event_id for e in old_state + [event]), current_state_ids.values()
444458 )
461475 create_event(type="test2", state_key=""),
462476 ]
463477
464 group_name = self.store.store_state_group(
478 group_name = yield self.store.store_state_group(
465479 prev_event_id,
466480 event.room_id,
467481 None,
470484 )
471485 self.store.register_event_id_state_group(prev_event_id, group_name)
472486
473 context = yield self.state.compute_event_context(event)
474
475 current_state_ids = yield context.get_current_state_ids()
487 context = yield defer.ensureDeferred(self.state.compute_event_context(event))
488
489 current_state_ids = yield defer.ensureDeferred(context.get_current_state_ids())
476490
477491 self.assertEqual(
478492 {e.event_id for e in old_state}, set(current_state_ids.values())
493507 create_event(type="test2", state_key=""),
494508 ]
495509
496 group_name = self.store.store_state_group(
510 group_name = yield self.store.store_state_group(
497511 prev_event_id,
498512 event.room_id,
499513 None,
502516 )
503517 self.store.register_event_id_state_group(prev_event_id, group_name)
504518
505 context = yield self.state.compute_event_context(event)
519 context = yield defer.ensureDeferred(self.state.compute_event_context(event))
506520
507521 prev_state_ids = yield context.get_prev_state_ids()
508522
543557 event, prev_event_id1, old_state_1, prev_event_id2, old_state_2
544558 )
545559
546 current_state_ids = yield context.get_current_state_ids()
560 current_state_ids = yield defer.ensureDeferred(context.get_current_state_ids())
547561
548562 self.assertEqual(len(current_state_ids), 6)
549563
585599 event, prev_event_id1, old_state_1, prev_event_id2, old_state_2
586600 )
587601
588 current_state_ids = yield context.get_current_state_ids()
602 current_state_ids = yield defer.ensureDeferred(context.get_current_state_ids())
589603
590604 self.assertEqual(len(current_state_ids), 6)
591605
640654 event, prev_event_id1, old_state_1, prev_event_id2, old_state_2
641655 )
642656
643 current_state_ids = yield context.get_current_state_ids()
657 current_state_ids = yield defer.ensureDeferred(context.get_current_state_ids())
644658
645659 self.assertEqual(old_state_2[3].event_id, current_state_ids[("test1", "1")])
646660
668682 event, prev_event_id1, old_state_1, prev_event_id2, old_state_2
669683 )
670684
671 current_state_ids = yield context.get_current_state_ids()
685 current_state_ids = yield defer.ensureDeferred(context.get_current_state_ids())
672686
673687 self.assertEqual(old_state_1[3].event_id, current_state_ids[("test1", "1")])
674688
689 @defer.inlineCallbacks
675690 def _get_context(
676691 self, event, prev_event_id_1, old_state_1, prev_event_id_2, old_state_2
677692 ):
678 sg1 = self.store.store_state_group(
693 sg1 = yield self.store.store_state_group(
679694 prev_event_id_1,
680695 event.room_id,
681696 None,
684699 )
685700 self.store.register_event_id_state_group(prev_event_id_1, sg1)
686701
687 sg2 = self.store.store_state_group(
702 sg2 = yield self.store.store_state_group(
688703 prev_event_id_2,
689704 event.room_id,
690705 None,
693708 )
694709 self.store.register_event_id_state_group(prev_event_id_2, sg2)
695710
696 return self.state.compute_event_context(event)
711 result = yield defer.ensureDeferred(self.state.compute_event_context(event))
712 return result
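
Every hunk in this file makes the same adjustment: `compute_event_context`, `get_current_state_ids`, and `store_state_group` now hand back coroutines or Deferreds, so the `@defer.inlineCallbacks` tests wrap each call in `defer.ensureDeferred` (or `yield` it) instead of using the value directly. A small, self-contained sketch of that bridge, with a stand-in coroutine rather than the real state handler:

```python
from twisted.internet import defer


async def compute_event_context_stub(event_id):
    # Stand-in for an async method such as compute_event_context(); the real
    # signature and return type differ, this only shows the bridging pattern.
    return {"event_id": event_id}


@defer.inlineCallbacks
def _generator_style_test():
    # `yield` only cooperates with Deferreds here, so the coroutine is first
    # wrapped with ensureDeferred before the generator-based test yields it.
    context = yield defer.ensureDeferred(compute_event_context_stub("$abc"))
    assert context["event_id"] == "$abc"
    return context


results = []
_generator_style_test().addCallback(results.append)
assert results and results[0]["event_id"] == "$abc"
```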
1616 """
1717 Utilities for running the unit tests
1818 """
19 from typing import Awaitable, TypeVar
19 from typing import Any, Awaitable, TypeVar
2020
2121 TV = TypeVar("TV")
2222
3535
3636 # if next didn't raise, the awaitable hasn't completed.
3737 raise Exception("awaitable has not yet completed")
38
39
40 async def make_awaitable(result: Any):
41 """Create an awaitable that just returns a result."""
42 return result
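
The new `make_awaitable` helper is a convenience for the mocking pattern used earlier in this diff: instead of wrapping a value in `defer.succeed`, a test can hand a `Mock` a coroutine that resolves to the desired result. A usage sketch follows; the mocked method name echoes the federation-client hunk above, and because the returned coroutine can only be awaited once, this suits single-call mocks:

```python
from unittest.mock import Mock

from twisted.internet.defer import ensureDeferred


async def make_awaitable(result):
    """Create an awaitable that just returns a result (mirrors the helper above)."""
    return result


async def _exercise():
    client = Mock()
    # Any awaited method can be stubbed like this.
    client.query_user_devices = Mock(
        return_value=make_awaitable({"user_id": "@alice:remote", "devices": []})
    )
    response = await client.query_user_devices("@alice:remote")
    return response["user_id"]


results = []
ensureDeferred(_exercise()).addCallback(results.append)
assert results == ["@alice:remote"]
```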
2121 from synapse.events.snapshot import EventContext
2222 from synapse.types import Collection
2323
24 from tests.test_utils import get_awaitable_result
25
2624 """
2725 Utility functions for poking events into the storage of the server under test.
2826 """
2927
3028
31 def inject_member_event(
29 async def inject_member_event(
3230 hs: synapse.server.HomeServer,
3331 room_id: str,
3432 sender: str,
4543 if extra_content:
4644 content.update(extra_content)
4745
48 return inject_event(
46 return await inject_event(
4947 hs,
5048 room_id=room_id,
5149 type=EventTypes.Member,
5654 )
5755
5856
59 def inject_event(
57 async def inject_event(
6058 hs: synapse.server.HomeServer,
6159 room_version: Optional[str] = None,
6260 prev_event_ids: Optional[Collection[str]] = None,
7169 prev_event_ids: prev_events for the event. If not specified, will be looked up
7270 kwargs: fields for the event to be created
7371 """
74 test_reactor = hs.get_reactor()
72 event, context = await create_event(hs, room_version, prev_event_ids, **kwargs)
7573
76 event, context = create_event(hs, room_version, prev_event_ids, **kwargs)
77
78 d = hs.get_storage().persistence.persist_event(event, context)
79 test_reactor.advance(0)
80 get_awaitable_result(d)
74 await hs.get_storage().persistence.persist_event(event, context)
8175
8276 return event
8377
8478
85 def create_event(
79 async def create_event(
8680 hs: synapse.server.HomeServer,
8781 room_version: Optional[str] = None,
8882 prev_event_ids: Optional[Collection[str]] = None,
8983 **kwargs
9084 ) -> Tuple[EventBase, EventContext]:
91 test_reactor = hs.get_reactor()
92
9385 if room_version is None:
94 d = hs.get_datastore().get_room_version_id(kwargs["room_id"])
95 test_reactor.advance(0)
96 room_version = get_awaitable_result(d)
86 room_version = await hs.get_datastore().get_room_version_id(kwargs["room_id"])
9787
9888 builder = hs.get_event_builder_factory().for_room_version(
9989 KNOWN_ROOM_VERSIONS[room_version], kwargs
10090 )
101 d = hs.get_event_creation_handler().create_new_client_event(
91 event, context = await hs.get_event_creation_handler().create_new_client_event(
10292 builder, prev_event_ids=prev_event_ids
10393 )
104 test_reactor.advance(0)
105 event, context = get_awaitable_result(d)
10694
10795 return event, context
5252 #
5353
5454 # before we do that, we persist some other events to act as state.
55 self.inject_visibility("@admin:hs", "joined")
55 yield self.inject_visibility("@admin:hs", "joined")
5656 for i in range(0, 10):
5757 yield self.inject_room_member("@resident%i:hs" % i)
5858
136136 },
137137 )
138138
139 event, context = yield self.event_creation_handler.create_new_client_event(
140 builder
139 event, context = yield defer.ensureDeferred(
140 self.event_creation_handler.create_new_client_event(builder)
141141 )
142142 yield self.storage.persistence.persist_event(event, context)
143143 return event
157157 },
158158 )
159159
160 event, context = yield self.event_creation_handler.create_new_client_event(
161 builder
160 event, context = yield defer.ensureDeferred(
161 self.event_creation_handler.create_new_client_event(builder)
162162 )
163163
164164 yield self.storage.persistence.persist_event(event, context)
178178 },
179179 )
180180
181 event, context = yield self.event_creation_handler.create_new_client_event(
182 builder
181 event, context = yield defer.ensureDeferred(
182 self.event_creation_handler.create_new_client_event(builder)
183183 )
184184
185185 yield self.storage.persistence.persist_event(event, context)
602602 user: MXID of the user to inject the membership for.
603603 membership: The membership type.
604604 """
605 event_injection.inject_member_event(self.hs, room, user, membership)
605 self.get_success(
606 event_injection.inject_member_event(self.hs, room, user, membership)
607 )
606608
607609
608610 class FederatingHomeserverTestCase(HomeserverTestCase):
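
Because `inject_member_event` is now a coroutine, synchronous test helpers like the one above drive it with `self.get_success(...)` rather than calling it and discarding the result. A rough, self-contained sketch of that calling shape, with toy stand-ins for both the injection helper and `get_success` (the stub names are not Synapse APIs):

```python
from twisted.internet.defer import ensureDeferred


async def inject_member_event_stub(room_id, user_id, membership):
    # Toy stand-in for tests.test_utils.event_injection.inject_member_event,
    # which is now async (see the event_injection.py hunk earlier).
    return "$membership-event-for-%s" % (user_id,)


def get_success_stub(awaitable):
    # Rough approximation of HomeserverTestCase.get_success: run the awaitable
    # to completion and return its result (the real helper also pumps the test
    # reactor, which this toy does not need).
    results = []
    ensureDeferred(awaitable).addCallback(results.append)
    assert results, "awaitable did not complete"
    return results[0]


event_id = get_success_stub(
    inject_member_event_stub("!room:test", "@alice:test", "join")
)
assert event_id == "$membership-event-for-@alice:test"
```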
670670 },
671671 )
672672
673 event, context = yield event_creation_handler.create_new_client_event(builder)
673 event, context = yield defer.ensureDeferred(
674 event_creation_handler.create_new_client_event(builder)
675 )
674676
675677 yield persistence_store.persist_event(event, context)
125125 black==19.10b0
126126 commands =
127127 python -m black --check --diff .
128 /bin/sh -c "flake8 synapse tests scripts scripts-dev synctl {env:PEP8SUFFIX:}"
128 /bin/sh -c "flake8 synapse tests scripts scripts-dev contrib synctl {env:PEP8SUFFIX:}"
129129 {toxinidir}/scripts-dev/config-lint.sh
130130
131131 [testenv:check_isort]
184184 synapse/handlers/cas_handler.py \
185185 synapse/handlers/directory.py \
186186 synapse/handlers/federation.py \
187 synapse/handlers/identity.py \
187188 synapse/handlers/oidc_handler.py \
188189 synapse/handlers/presence.py \
189190 synapse/handlers/room_member.py \