matrix-synapse / a164b24
New upstream version 1.29.0 (Andrej Shadura, 3 years ago)
127 changed files with 2475 additions and 832 deletions.
55 *.egg
66 *.egg-info
77 *.lock
8 *.pyc
8 *.py[cod]
99 *.snap
1010 *.tac
1111 _trial_temp/
1212 _trial_temp*/
1313 /out
1414 .DS_Store
15 __pycache__/
1516
1617 # stuff that is likely to exist when you run a server locally
1718 /*.db
0 Synapse 1.29.0 (2021-03-08)
1 ===========================
2
3 Note that synapse now expects an `X-Forwarded-Proto` header when used with a reverse proxy. Please see [UPGRADE.rst](UPGRADE.rst#upgrading-to-v1290) for more details on this change.
4
5
6 No significant changes.
7
8
9 Synapse 1.29.0rc1 (2021-03-04)
10 ==============================
11
12 Features
13 --------
14
15 - Add rate limiters to cross-user key sharing requests. ([\#8957](https://github.com/matrix-org/synapse/issues/8957))
16 - Add `order_by` to the admin API `GET /_synapse/admin/v1/users/<user_id>/media`. Contributed by @dklimpel. ([\#8978](https://github.com/matrix-org/synapse/issues/8978))
17 - Add some configuration settings to make users' profile data more private. ([\#9203](https://github.com/matrix-org/synapse/issues/9203))
18 - The `no_proxy` and `NO_PROXY` environment variables are now respected in proxied HTTP clients with the lowercase form taking precedence if both are present. Additionally, the lowercase `https_proxy` environment variable is now respected in proxied HTTP clients on top of existing support for the uppercase `HTTPS_PROXY` form and takes precedence if both are present. Contributed by Timothy Leung. ([\#9372](https://github.com/matrix-org/synapse/issues/9372))
19 - Add a configuration option, `user_directory.prefer_local_users`, which when enabled will make it more likely for users on the same server as you to appear above other users. ([\#9383](https://github.com/matrix-org/synapse/issues/9383), [\#9385](https://github.com/matrix-org/synapse/issues/9385))
20 - Add support for regenerating thumbnails if they have been deleted but the original image is still stored. ([\#9438](https://github.com/matrix-org/synapse/issues/9438))
21 - Add support for `X-Forwarded-Proto` header when using a reverse proxy. ([\#9472](https://github.com/matrix-org/synapse/issues/9472), [\#9501](https://github.com/matrix-org/synapse/issues/9501), [\#9512](https://github.com/matrix-org/synapse/issues/9512), [\#9539](https://github.com/matrix-org/synapse/issues/9539))
22
23
24 Bugfixes
25 --------
26
27 - Fix a bug where users' pushers were not all deleted when they deactivated their account. ([\#9285](https://github.com/matrix-org/synapse/issues/9285), [\#9516](https://github.com/matrix-org/synapse/issues/9516))
28 - Fix a bug where a lot of unnecessary presence updates were sent when joining a room. ([\#9402](https://github.com/matrix-org/synapse/issues/9402))
29 - Fix a bug that caused multiple calls to the experimental `shared_rooms` endpoint to return stale results. ([\#9416](https://github.com/matrix-org/synapse/issues/9416))
30 - Fix a bug in single sign-on which could cause a "No session cookie found" error. ([\#9436](https://github.com/matrix-org/synapse/issues/9436))
31 - Fix a bug introduced in v1.27.0 where allowing a user to choose their own username when logging in via single sign-on did not work unless an `idp_icon` was defined. ([\#9440](https://github.com/matrix-org/synapse/issues/9440))
32 - Fix a bug introduced in v1.26.0 where some sequences were not properly configured when running `synapse_port_db`. ([\#9449](https://github.com/matrix-org/synapse/issues/9449))
33 - Fix deleting pushers when using sharded pushers. ([\#9465](https://github.com/matrix-org/synapse/issues/9465), [\#9466](https://github.com/matrix-org/synapse/issues/9466), [\#9479](https://github.com/matrix-org/synapse/issues/9479), [\#9536](https://github.com/matrix-org/synapse/issues/9536))
34 - Fix missing startup checks for the consistency of certain PostgreSQL sequences. ([\#9470](https://github.com/matrix-org/synapse/issues/9470))
35 - Fix a long-standing bug where the media repository could leak file descriptors while previewing media. ([\#9497](https://github.com/matrix-org/synapse/issues/9497))
36 - Properly purge the event chain cover index when purging history. ([\#9498](https://github.com/matrix-org/synapse/issues/9498))
37 - Fix missing chain cover index due to a schema delta not being applied correctly. Only affected servers that ran development versions. ([\#9503](https://github.com/matrix-org/synapse/issues/9503))
38 - Fix a bug introduced in v1.25.0 where `/_synapse/admin/join/` would fail when given a room alias. ([\#9506](https://github.com/matrix-org/synapse/issues/9506))
39 - Prevent presence background jobs from running when presence is disabled. ([\#9530](https://github.com/matrix-org/synapse/issues/9530))
40 - Fix rare edge case that caused a background update to fail if the server had rejected an event that had duplicate auth events. ([\#9537](https://github.com/matrix-org/synapse/issues/9537))
41
42
43 Improved Documentation
44 ----------------------
45
46 - Update the example systemd config to propagate reloads to individual units. ([\#9463](https://github.com/matrix-org/synapse/issues/9463))
47
48
49 Internal Changes
50 ----------------
51
52 - Add documentation and type hints to `parse_duration`. ([\#9432](https://github.com/matrix-org/synapse/issues/9432))
53 - Remove vestiges of `uploads_path` configuration setting. ([\#9462](https://github.com/matrix-org/synapse/issues/9462))
54 - Add a comment about systemd-python. ([\#9464](https://github.com/matrix-org/synapse/issues/9464))
55 - Test that we require validated email for email pushers. ([\#9496](https://github.com/matrix-org/synapse/issues/9496))
56 - Allow python to generate bytecode for synapse. ([\#9502](https://github.com/matrix-org/synapse/issues/9502))
57 - Fix incorrect type hints. ([\#9515](https://github.com/matrix-org/synapse/issues/9515), [\#9518](https://github.com/matrix-org/synapse/issues/9518))
58 - Add type hints to device and event report admin API. ([\#9519](https://github.com/matrix-org/synapse/issues/9519))
59 - Add type hints to user admin API. ([\#9521](https://github.com/matrix-org/synapse/issues/9521))
60 - Bump the versions of mypy and mypy-zope used for static type checking. ([\#9529](https://github.com/matrix-org/synapse/issues/9529))
61
62
063 Synapse 1.28.0 (2021-02-25)
164 ===========================
265
8383 # replace `1.3.0` and `stretch` accordingly:
8484 wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
8585 dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
86
87 Upgrading to v1.29.0
88 ====================
89
90 Requirement for X-Forwarded-Proto header
91 ----------------------------------------
92
93 When using Synapse with a reverse proxy (in particular, when using the
94 `x_forwarded` option on an HTTP listener), Synapse now expects to receive an
95 `X-Forwarded-Proto` header on incoming HTTP requests. If it is not set, Synapse
96 will log a warning on each received request.
97
98 To avoid the warning, administrators using a reverse proxy should ensure that
99 the reverse proxy sets the `X-Forwarded-Proto` header to `https` or `http` to
100 indicate the protocol used by the client. See the `reverse proxy documentation
101 <docs/reverse_proxy.md>`_, where the example configurations have been updated to
102 show how to set this header.
103
104 (Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it
105 sets `X-Forwarded-Proto` by default.)
86106
87107 Upgrading to v1.27.0
88108 ====================
5757 cp -r tests "$tmpdir"
5858
5959 PYTHONPATH="$tmpdir" \
60 "${TARGET_PYTHON}" -B -m twisted.trial --reporter=text -j2 tests
60 "${TARGET_PYTHON}" -m twisted.trial --reporter=text -j2 tests
6161
6262 # build the config file
63 "${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_config" \
63 "${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_config" \
6464 --config-dir="/etc/matrix-synapse" \
6565 --data-dir="/var/lib/matrix-synapse" |
6666 perl -pe '
8686 ' > "${PACKAGE_BUILD_DIR}/etc/matrix-synapse/homeserver.yaml"
8787
8888 # build the log config file
89 "${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_log_config" \
89 "${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_log_config" \
9090 --output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml"
9191
9292 # add a dependency on the right version of python to substvars.
0 matrix-synapse-py3 (1.29.0) stable; urgency=medium
1
2 [ Jonathan de Jong ]
3 * Remove the python -B flag (don't generate bytecode) in scripts and documentation.
4
5 [ Synapse Packaging team ]
6 * New synapse release 1.29.0.
7
8 -- Synapse Packaging team <packages@matrix.org> Mon, 08 Mar 2021 13:51:50 +0000
9
010 matrix-synapse-py3 (1.28.0) stable; urgency=medium
111
212 * New synapse release 1.28.0.
4343 .
4444 .nf
4545
46 $ python \-B \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name>
46 $ python \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name>
4747 .
4848 .fi
4949 .
4040
4141 A configuration file may be generated as follows:
4242
43 $ python -B -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name>
43 $ python -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name>
4444
4545 ## ENVIRONMENT
4646
1010 By default, the image expects a single volume, located at ``/data``, that will hold:
1111
1212 * configuration files;
13 * temporary files during uploads;
1413 * uploaded media and thumbnails;
1514 * the SQLite database if you do not configure postgres;
1615 * the appservices configuration.
8888 ## Files ##
8989
9090 media_store_path: "/data/media"
91 uploads_path: "/data/uploads"
9291 max_upload_size: "{{ SYNAPSE_MAX_UPLOAD_SIZE or "50M" }}"
9392 max_image_pixels: "32M"
9493 dynamic_thumbnails: false
378378 - ``total`` - Number of rooms.
379379
380380
381 List media of an user
382 ================================
381 List media of a user
382 ====================
383383 Gets a list of all local media that a specific ``user_id`` has created.
384 The response is ordered by creation date descending and media ID descending.
385 The newest media is on top.
384 By default, the response is ordered by descending creation date and ascending media ID.
385 The newest media is on top. You can change the order with parameters
386 ``order_by`` and ``dir``.
386387
387388 The API is::
388389
439440 denoting the offset in the returned results. This should be treated as an opaque value and
440441 not explicitly set to anything other than the return value of ``next_token`` from a previous call.
441442 Defaults to ``0``.
443 - ``order_by`` - The method by which to sort the returned list of media.
444 If the ordered field has duplicates, the second order is always by ascending ``media_id``,
445 which guarantees a stable ordering. Valid values are:
446
447 - ``media_id`` - Media are ordered alphabetically by ``media_id``.
448 - ``upload_name`` - Media are ordered alphabetically by the name the media was uploaded with.
449 - ``created_ts`` - Media are ordered by the timestamp (in ms) at which the content was uploaded.
450 Smallest to largest. This is the default.
451 - ``last_access_ts`` - Media are ordered by the timestamp (in ms) at which the content was last accessed.
452 Smallest to largest.
453 - ``media_length`` - Media are ordered by the length of the media in bytes.
454 Smallest to largest.
455 - ``media_type`` - Media are ordered alphabetically by MIME type.
456 - ``quarantined_by`` - Media are ordered alphabetically by the user ID that
457 initiated the quarantine request for this media.
458 - ``safe_from_quarantine`` - Media are ordered by whether the media is marked as safe
459 from quarantining.
460
461 - ``dir`` - Direction of media order. Either ``f`` for forwards or ``b`` for backwards.
462 Setting this value to ``b`` will reverse the above sort order. Defaults to ``f``.
463
464 If neither ``order_by`` nor ``dir`` is set, the default order is newest media on top
465 (corresponds to ``order_by`` = ``created_ts`` and ``dir`` = ``b``).
466
467 Caution: the database only has indexes on the columns ``media_id``,
468 ``user_id`` and ``created_ts``, so using a different sort order
469 (``upload_name``, ``last_access_ts``, ``media_length``, ``media_type``,
470 ``quarantined_by`` or ``safe_from_quarantine``) can cause a large load on the
471 database, especially for large environments.
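
As a concrete illustration of ``order_by`` and ``dir``, the following Python sketch requests a user's ten largest media items via the endpoint added in this release. The server URL, user ID and token are placeholders, and the ``media`` key is assumed from the response shape described below.

```python
import requests

# Hypothetical values: substitute your homeserver, admin token and user ID.
BASE = "https://matrix.example.com"
TOKEN = "<admin_access_token>"
USER_ID = "@alice:example.com"

resp = requests.get(
    f"{BASE}/_synapse/admin/v1/users/{USER_ID}/media",
    headers={"Authorization": f"Bearer {TOKEN}"},
    # media_length sorts smallest to largest; dir=b reverses it.
    params={"order_by": "media_length", "dir": "b", "limit": 10},
)
resp.raise_for_status()
for item in resp.json()["media"]:
    print(item["media_id"], item.get("upload_name"))
```

Per the caution above, ``media_length`` is not indexed, so avoid running such queries routinely on large deployments.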
442472
443473 **Response**
444474
88 (443) to Matrix clients without needing to run Synapse with root
99 privileges.
1010
11 You should configure your reverse proxy to forward requests to `/_matrix` or
12 `/_synapse/client` to Synapse, and have it set the `X-Forwarded-For` and
13 `X-Forwarded-Proto` request headers.
14
15 You should remember that Matrix clients and other Matrix servers do not
16 necessarily need to connect to your server via the same server name or
17 port. Indeed, clients will use port 443 by default, whereas servers default to
18 port 8448. Where these are different, we refer to the 'client port' and the
19 'federation port'. See [the Matrix
20 specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
21 for more details of the algorithm used for federation connections, and
22 [delegate.md](<delegate.md>) for instructions on setting up delegation.
23
1124 **NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
1225 the requested URI in any way (for example, by decoding `%xx` escapes).
1326 Beware that Apache *will* canonicalise URIs unless you specify
1427 `nocanon`.
15
16 When setting up a reverse proxy, remember that Matrix clients and other
17 Matrix servers do not necessarily need to connect to your server via the
18 same server name or port. Indeed, clients will use port 443 by default,
19 whereas servers default to port 8448. Where these are different, we
20 refer to the 'client port' and the 'federation port'. See [the Matrix
21 specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
22 for more details of the algorithm used for federation connections, and
23 [delegate.md](<delegate.md>) for instructions on setting up delegation.
24
25 Endpoints that are part of the standardised Matrix specification are
26 located under `/_matrix`, whereas endpoints specific to Synapse are
27 located under `/_synapse/client`.
2828
2929 Let's assume that we expect clients to connect to our server at
3030 `https://matrix.example.com`, and other servers to connect at
5151 location ~* ^(\/_matrix|\/_synapse\/client) {
5252 proxy_pass http://localhost:8008;
5353 proxy_set_header X-Forwarded-For $remote_addr;
54 proxy_set_header X-Forwarded-Proto $scheme;
55 proxy_set_header Host $host;
56
5457 # Nginx by default only allows file uploads up to 1M in size
5558 # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
5659 client_max_body_size 50M;
101104 SSLEngine on
102105 ServerName matrix.example.com;
103106
107 RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
104108 AllowEncodedSlashes NoDecode
105109 ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
106110 ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
112116 SSLEngine on
113117 ServerName example.com;
114118
119 RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
115120 AllowEncodedSlashes NoDecode
116121 ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
117122 ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
133138 ```
134139 frontend https
135140 bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
141 http-request set-header X-Forwarded-Proto https if { ssl_fc }
142 http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
143 http-request set-header X-Forwarded-For %[src]
136144
137145 # Matrix client traffic
138146 acl matrix-host hdr(host) -i matrix.example.com
143151
144152 frontend matrix-federation
145153 bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
154 http-request set-header X-Forwarded-Proto https if { ssl_fc }
155 http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
156 http-request set-header X-Forwarded-For %[src]
157
146158 default_backend matrix
147159
148160 backend matrix
9999 # requesting server. Defaults to 'false'.
100100 #
101101 #limit_profile_requests_to_users_who_share_rooms: true
102
103 # Uncomment to prevent a user's profile data from being retrieved and
104 # displayed in a room until they have joined it. By default, a user's
105 # profile data is included in an invite event, regardless of the values
106 # of the above two settings, and whether or not the users share a server.
107 # Defaults to 'true'.
108 #
109 #include_profile_data_on_invite: false
102110
103111 # If set to 'true', removes the need for authentication to access the server's
104112 # public rooms directory through the client API, meaning that anyone can
697705 #federation_metrics_domains:
698706 # - matrix.org
699707 # - example.com
708
709 # Uncomment to disable profile lookup over federation. By default, the
710 # Federation API allows other homeservers to obtain profile data of any user
711 # on this homeserver. Defaults to 'true'.
712 #
713 #allow_profile_lookup_over_federation: false
700714
701715
702716 ## Caching ##
25292543
25302544 # User Directory configuration
25312545 #
2532 # 'enabled' defines whether users can search the user directory. If
2533 # false then empty responses are returned to all queries. Defaults to
2534 # true.
2535 #
2536 # 'search_all_users' defines whether to search all users visible to your HS
2537 # when searching the user directory, rather than limiting to users visible
2538 # in public rooms. Defaults to false. If you set it True, you'll have to
2539 # rebuild the user_directory search indexes, see
2540 # https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
2541 #
2542 #user_directory:
2543 # enabled: true
2544 # search_all_users: false
2546 user_directory:
2547 # Defines whether users can search the user directory. If false then
2548 # empty responses are returned to all queries. Defaults to true.
2549 #
2550 # Uncomment to disable the user directory.
2551 #
2552 #enabled: false
2553
2554 # Defines whether to search all users visible to your HS when searching
2555 # the user directory, rather than limiting to users visible in public
2556 # rooms. Defaults to false.
2557 #
2558 # If you set it true, you'll have to rebuild the user_directory search
2559 # indexes, see:
2560 # https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
2561 #
2562 # Uncomment to return search results containing all known users, even if that
2563 # user does not share a room with the requester.
2564 #
2565 #search_all_users: true
2566
2567 # Defines whether to prefer local users in search query results.
2568 # If True, local users are more likely to appear above remote users
2569 # when searching the user directory. Defaults to false.
2570 #
2571 # Uncomment to prefer local over remote users in user directory search
2572 # results.
2573 #
2574 #prefer_local_users: true
25452575
25462576
25472577 # User Consent configuration
2424 * `check_username_for_spam`
2525 * `check_registration_for_spam`
2626
27 The details of the each of these methods (as well as their inputs and outputs)
27 The details of each of these methods (as well as their inputs and outputs)
2828 are documented in the `synapse.events.spamcheck.SpamChecker` class.
2929
3030 The `ModuleApi` class provides a way for the custom spam checker class to
33
44 # This service should be restarted when the synapse target is restarted.
55 PartOf=matrix-synapse.target
6 ReloadPropagatedFrom=matrix-synapse.target
67
78 # if this is started at the same time as the main, let the main process start
89 # first, to initialise the database schema.
22
33 # This service should be restarted when the synapse target is restarted.
44 PartOf=matrix-synapse.target
5 ReloadPropagatedFrom=matrix-synapse.target
56
67 [Service]
78 Type=notify
219219
220220 Acknowledge receipt of some federation data
221221
222 #### REMOVE_PUSHER (C)
223
224 Inform the server a pusher should be removed
225
226222 ### REMOTE_SERVER_UP (S, C)
227223
228224 Inform other processes that a remote server may have come back online.
2121 import sys
2222 import time
2323 import traceback
24 from typing import Dict, Optional, Set
24 from typing import Dict, Iterable, Optional, Set
2525
2626 import yaml
2727
4646 from synapse.storage.databases.main.media_repository import (
4747 MediaRepositoryBackgroundUpdateStore,
4848 )
49 from synapse.storage.databases.main.pusher import PusherWorkerStore
4950 from synapse.storage.databases.main.registration import (
5051 RegistrationBackgroundUpdateStore,
5152 find_max_generated_user_id_localpart,
176177 UserDirectoryBackgroundUpdateStore,
177178 EndToEndKeyBackgroundStore,
178179 StatsStore,
180 PusherWorkerStore,
179181 ):
180182 def execute(self, f, *args, **kwargs):
181183 return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)
628630 await self._setup_state_group_id_seq()
629631 await self._setup_user_id_seq()
630632 await self._setup_events_stream_seqs()
631 await self._setup_device_inbox_seq()
633 await self._setup_sequence(
634 "device_inbox_sequence", ("device_inbox", "device_federation_outbox")
635 )
636 await self._setup_sequence(
637 "account_data_sequence", ("room_account_data", "room_tags_revisions", "account_data"))
638 await self._setup_sequence("receipts_sequence", ("receipts_linearized", ))
639 await self._setup_auth_chain_sequence()
632640
633641 # Step 3. Get tables.
634642 self.progress.set_state("Fetching tables")
853861
854862 return done, remaining + done
855863
856 async def _setup_state_group_id_seq(self):
864 async def _setup_state_group_id_seq(self) -> None:
857865 curr_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
858866 table="state_groups", keyvalues={}, retcol="MAX(id)", allow_none=True
859867 )
867875
868876 await self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
869877
870 async def _setup_user_id_seq(self):
878 async def _setup_user_id_seq(self) -> None:
871879 curr_id = await self.sqlite_store.db_pool.runInteraction(
872880 "setup_user_id_seq", find_max_generated_user_id_localpart
873881 )
876884 next_id = curr_id + 1
877885 txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
878886
879 return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
880
881 async def _setup_events_stream_seqs(self):
887 await self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
888
889 async def _setup_events_stream_seqs(self) -> None:
882890 """Set the event stream sequences to the correct values.
883891 """
884892
907915 (curr_backward_id + 1,),
908916 )
909917
910 return await self.postgres_store.db_pool.runInteraction(
918 await self.postgres_store.db_pool.runInteraction(
911919 "_setup_events_stream_seqs", _setup_events_stream_seqs_set_pos,
912920 )
913921
914 async def _setup_device_inbox_seq(self):
915 """Set the device inbox sequence to the correct value.
922 async def _setup_sequence(self, sequence_name: str, stream_id_tables: Iterable[str]) -> None:
923 """Set a sequence to the correct value.
916924 """
917 curr_local_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
918 table="device_inbox",
919 keyvalues={},
920 retcol="COALESCE(MAX(stream_id), 1)",
921 allow_none=True,
922 )
923
924 curr_federation_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
925 table="device_federation_outbox",
926 keyvalues={},
927 retcol="COALESCE(MAX(stream_id), 1)",
928 allow_none=True,
929 )
930
931 next_id = max(curr_local_id, curr_federation_id) + 1
925 current_stream_ids = []
926 for stream_id_table in stream_id_tables:
927 max_stream_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
928 table=stream_id_table,
929 keyvalues={},
930 retcol="COALESCE(MAX(stream_id), 1)",
931 allow_none=True,
932 )
933 current_stream_ids.append(max_stream_id)
934
935 next_id = max(current_stream_ids) + 1
936
937 def r(txn):
938 sql = "ALTER SEQUENCE %s RESTART WITH" % (sequence_name, )
939 txn.execute(sql + " %s", (next_id, ))
940
941 await self.postgres_store.db_pool.runInteraction("_setup_%s" % (sequence_name,), r)
942
943 async def _setup_auth_chain_sequence(self) -> None:
944 curr_chain_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
945 table="event_auth_chains", keyvalues={}, retcol="MAX(chain_id)", allow_none=True
946 )
932947
933948 def r(txn):
934949 txn.execute(
935 "ALTER SEQUENCE device_inbox_sequence RESTART WITH %s", (next_id,)
936 )
937
938 return self.postgres_store.db_pool.runInteraction("_setup_device_inbox_seq", r)
950 "ALTER SEQUENCE event_auth_chain_id RESTART WITH %s",
951 (curr_chain_id,),
952 )
953
954 await self.postgres_store.db_pool.runInteraction(
955 "_setup_event_auth_chain_id", r,
956 )
957
939958
940959
941960 ##############################################
101101 "flake8",
102102 ]
103103
104 CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.790", "mypy-zope==0.2.8"]
104 CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.11"]
105105
106106 # Dependencies which are exclusively required by unit test code. This is
107107 # NOT a list of all modules that are necessary to run the unit tests.
4747 except ImportError:
4848 pass
4949
50 __version__ = "1.28.0"
50 __version__ = "1.29.0"
5151
5252 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
5353 # We import here so that we don't have to install a bunch of deps when
9797
9898 Retention = "m.room.retention"
9999
100 Dummy = "org.matrix.dummy_event"
101
102
103 class EduTypes:
100104 Presence = "m.presence"
101
102 Dummy = "org.matrix.dummy_event"
105 RoomKeyRequest = "m.room_key_request"
103106
104107
105108 class RejectedReason:
1313 # limitations under the License.
1414
1515 from collections import OrderedDict
16 from typing import Any, Optional, Tuple
16 from typing import Hashable, Optional, Tuple
1717
1818 from synapse.api.errors import LimitExceededError
1919 from synapse.types import Requester
4141 # * How many times an action has occurred since a point in time
4242 # * The point in time
4343 # * The rate_hz of this particular entry. This can vary per request
44 self.actions = OrderedDict() # type: OrderedDict[Any, Tuple[float, int, float]]
44 self.actions = (
45 OrderedDict()
46 ) # type: OrderedDict[Hashable, Tuple[float, int, float]]
4547
4648 def can_requester_do_action(
4749 self,
8183
8284 def can_do_action(
8385 self,
84 key: Any,
86 key: Hashable,
8587 rate_hz: Optional[float] = None,
8688 burst_count: Optional[int] = None,
8789 update: bool = True,
174176
175177 def ratelimit(
176178 self,
177 key: Any,
179 key: Hashable,
178180 rate_hz: Optional[float] = None,
179181 burst_count: Optional[int] = None,
180182 update: bool = True,
1515 import sys
1616
1717 from synapse import python_dependencies # noqa: E402
18
19 sys.dont_write_bytecode = True
2018
2119 logger = logging.getLogger(__name__)
2220
209209 config.update_user_directory = False
210210 config.run_background_tasks = False
211211 config.start_pushers = False
212 config.pusher_shard_config.instances = []
212213 config.send_federation = False
214 config.federation_shard_config.instances = []
213215
214216 synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
215217
2222
2323 from twisted.internet import address
2424 from twisted.web.resource import IResource
25 from twisted.web.server import Request
2526
2627 import synapse
2728 import synapse.events
189190 self.http_client = hs.get_simple_http_client()
190191 self.main_uri = hs.config.worker_main_http_uri
191192
192 async def on_POST(self, request, device_id):
193 async def on_POST(self, request: Request, device_id: Optional[str]):
193194 requester = await self.auth.get_user_by_req(request, allow_guest=True)
194195 user_id = requester.user.to_string()
195196 body = parse_json_object_from_request(request)
222223 header: request.requestHeaders.getRawHeaders(header, [])
223224 for header in (b"Authorization", b"User-Agent")
224225 }
225 # Add the previous hop the the X-Forwarded-For header.
226 # Add the previous hop to the X-Forwarded-For header.
226227 x_forwarded_for = request.requestHeaders.getRawHeaders(
227228 b"X-Forwarded-For", []
228229 )
230 # we use request.client here, since we want the previous hop, not the
231 # original client (as returned by request.getClientAddress()).
229232 if isinstance(request.client, (address.IPv4Address, address.IPv6Address)):
230233 previous_host = request.client.host.encode("ascii")
231234 # If the header exists, add to the comma-separated list of the first
237240 else:
238241 x_forwarded_for = [previous_host]
239242 headers[b"X-Forwarded-For"] = x_forwarded_for
243
244 # Replicate the original X-Forwarded-Proto header. Note that
245 # XForwardedForRequest overrides isSecure() to give us the original protocol
246 # used by the client, as opposed to the protocol used by our upstream proxy
247 # - which is what we want here.
248 headers[b"X-Forwarded-Proto"] = [
249 b"https" if request.isSecure() else b"http"
250 ]
240251
241252 try:
242253 result = await self.http_client.post_json_get_json(
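
The comment above relies on `XForwardedForRequest` overriding `isSecure()` to report the client's original protocol rather than the proxy's. A minimal sketch of that idea (not synapse's actual class) could look like this:

```python
from twisted.web.http import Request

class XForwardedProtoRequest(Request):
    """Sketch: report the client's original protocol, not the local socket's.

    Assumes the reverse proxy sets X-Forwarded-Proto, as Synapse 1.29.0
    now expects.
    """

    def isSecure(self) -> bool:
        protos = self.requestHeaders.getRawHeaders(b"x-forwarded-proto", [])
        if protos:
            # Trust the forwarded protocol over the transport we saw.
            return protos[0].lower() == b"https"
        return super().isSecure()
```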
643654 logger.warning("Unsupported listener type: %s", listener.type)
644655
645656 self.get_tcp_replication().start_replication(self)
646
647 async def remove_pusher(self, app_id, push_key, user_id):
648 self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
649657
650658 @cache_in_self
651659 def get_replication_data_handler(self):
921929 # For other worker types we force this to off.
922930 config.appservice.notify_appservices = False
923931
924 if config.worker_app == "synapse.app.pusher":
925 if config.server.start_pushers:
926 sys.stderr.write(
927 "\nThe pushers must be disabled in the main synapse process"
928 "\nbefore they can be run in a separate worker."
929 "\nPlease add ``start_pushers: false`` to the main config"
930 "\n"
931 )
932 sys.exit(1)
933
934 # Force the pushers to start since they will be disabled in the main config
935 config.server.start_pushers = True
936 else:
937 # For other worker types we force this to off.
938 config.server.start_pushers = False
939
940932 if config.worker_app == "synapse.app.user_dir":
941933 if config.server.update_user_directory:
942934 sys.stderr.write(
953945 # For other worker types we force this to off.
954946 config.server.update_user_directory = False
955947
956 if config.worker_app == "synapse.app.federation_sender":
957 if config.worker.send_federation:
958 sys.stderr.write(
959 "\nThe send_federation must be disabled in the main synapse process"
960 "\nbefore they can be run in a separate worker."
961 "\nPlease add ``send_federation: false`` to the main config"
962 "\n"
963 )
964 sys.exit(1)
965
966 # Force the pushers to start since they will be disabled in the main config
967 config.worker.send_federation = True
968 else:
969 # For other worker types we force this to off.
970 config.worker.send_federation = False
971
972948 synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
973949
974950 hs = GenericWorkerServer(
2020 from collections import OrderedDict
2121 from hashlib import sha256
2222 from textwrap import dedent
23 from typing import Any, Iterable, List, MutableMapping, Optional
23 from typing import Any, Iterable, List, MutableMapping, Optional, Union
2424
2525 import attr
2626 import jinja2
146146 return int(value) * size
147147
148148 @staticmethod
149 def parse_duration(value):
149 def parse_duration(value: Union[str, int]) -> int:
150 """Convert a duration as a string or integer to a number of milliseconds.
151
152 If an integer is provided it is treated as milliseconds and is unchanged.
153
154 String durations can have a suffix of 's', 'm', 'h', 'd', 'w', or 'y'.
155 No suffix is treated as milliseconds.
156
157 Args:
158 value: The duration to parse.
159
160 Returns:
161 The number of milliseconds in the duration.
162 """
150163 if isinstance(value, int):
151164 return value
152165 second = 1000
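
Since the hunk above cuts off before the unit table, here is a standalone sketch of the behaviour the new docstring describes, assuming the conventional magnitudes (w = 7 days, y = 365 days); it is not the synapse code itself:

```python
def parse_duration(value):
    # Integers are already milliseconds.
    if isinstance(value, int):
        return value
    second = 1000
    units = {
        "s": second,
        "m": 60 * second,
        "h": 60 * 60 * second,
        "d": 24 * 60 * 60 * second,
        "w": 7 * 24 * 60 * 60 * second,
        "y": 365 * 24 * 60 * 60 * second,
    }
    if value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    # No suffix: the string is treated as milliseconds.
    return int(value)

assert parse_duration(1500) == 1500
assert parse_duration("2s") == 2000
assert parse_duration("1h") == 3_600_000
```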
830843
831844 def should_handle(self, instance_name: str, key: str) -> bool:
832845 """Whether this instance is responsible for handling the given key."""
833 # If multiple instances are not defined we always return true
834 if not self.instances or len(self.instances) == 1:
835 return True
836
837 return self.get_instance(key) == instance_name
838
839 def get_instance(self, key: str) -> str:
846 # If no instances are defined we assume some other worker is handling
847 # this.
848 if not self.instances:
849 return False
850
851 return self._get_instance(key) == instance_name
852
853 def _get_instance(self, key: str) -> str:
840854 """Get the instance responsible for handling the given key.
841855
842 Note: For things like federation sending the config for which instance
843 is sending is known only to the sender instance if there is only one.
844 Therefore `should_handle` should be used where possible.
856 Note: For federation sending and pushers the config for which instance
857 is sending is known only to the sender instance, so we don't expose this
858 method by default.
845859 """
846860
847861 if not self.instances:
848 return "master"
862 raise Exception("Unknown worker")
849863
850864 if len(self.instances) == 1:
851865 return self.instances[0]
862876 return self.instances[remainder]
863877
864878
879 @attr.s
880 class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
881 """A version of `ShardedWorkerHandlingConfig` that is used for config
882 options where all instances know which instances are responsible for the
883 sharded work.
884 """
885
886 def __attrs_post_init__(self):
887 # We require that `self.instances` is non-empty.
888 if not self.instances:
889 raise Exception("Got empty list of instances for shard config")
890
891 def get_instance(self, key: str) -> str:
892 """Get the instance responsible for handling the given key."""
893 return self._get_instance(key)
894
895
865896 __all__ = ["Config", "RootConfig", "ShardedWorkerHandlingConfig"]
148148 instances: List[str]
149149 def __init__(self, instances: List[str]) -> None: ...
150150 def should_handle(self, instance_name: str, key: str) -> bool: ...
151
152 class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
151153 def get_instance(self, key: str) -> str: ...
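
For context, `_get_instance` maps a key to `self.instances[remainder]`, as seen in the hunk above. A self-contained sketch of a stable hash-based key-to-instance mapping of that shape (illustrative, not necessarily byte-for-byte what synapse does):

```python
from hashlib import sha256
from typing import List

def pick_instance(instances: List[str], key: str) -> str:
    """Deterministically map a key to one of the configured instances."""
    if len(instances) == 1:
        return instances[0]
    digest = sha256(key.encode("utf8")).digest()
    remainder = int.from_bytes(digest, byteorder="little") % len(instances)
    return instances[remainder]

# The same key always lands on the same worker:
workers = ["sender1", "sender2", "sender3"]
assert pick_instance(workers, "matrix.org") == pick_instance(workers, "matrix.org")
```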
4040 )
4141 self.federation_metrics_domains = set(federation_metrics_domains)
4242
43 self.allow_profile_lookup_over_federation = config.get(
44 "allow_profile_lookup_over_federation", True
45 )
46
4347 def generate_config_section(self, config_dir_path, server_name, **kwargs):
4448 return """\
4549 ## Federation ##
6569 #federation_metrics_domains:
6670 # - matrix.org
6771 # - example.com
72
73 # Uncomment to disable profile lookup over federation. By default, the
74 # Federation API allows other homeservers to obtain profile data of any user
75 # on this homeserver. Defaults to 'true'.
76 #
77 #allow_profile_lookup_over_federation: false
6878 """
6979
7080
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515
16 from ._base import Config, ShardedWorkerHandlingConfig
16 from ._base import Config
1717
1818
1919 class PushConfig(Config):
2525 self.push_group_unread_count_by_room = push_config.get(
2626 "group_unread_count_by_room", True
2727 )
28
29 pusher_instances = config.get("pusher_instances") or []
30 self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances)
3128
3229 # There was a 'redact_content' setting, but it was mistakenly read from
3330 # the 'email' section. Check for the flag in the 'push' section, and log,
9999 self.rc_joins_remote = RateLimitConfig(
100100 config.get("rc_joins", {}).get("remote", {}),
101101 defaults={"per_second": 0.01, "burst_count": 3},
102 )
103
104 # Ratelimit cross-user key requests:
105 # * For local requests this is keyed by the sending device.
106 # * For requests received over federation this is keyed by the origin.
107 #
108 # Note that this isn't exposed in the configuration as it is obscure.
109 self.rc_key_requests = RateLimitConfig(
110 config.get("rc_key_requests", {}),
111 defaults={"per_second": 20, "burst_count": 100},
102112 )
103113
104114 self.rc_3pid_validation = RateLimitConfig(
205205
206206 def generate_config_section(self, data_dir_path, **kwargs):
207207 media_store = os.path.join(data_dir_path, "media_store")
208 uploads_path = os.path.join(data_dir_path, "uploads")
209208
210209 formatted_thumbnail_sizes = "".join(
211210 THUMBNAIL_SIZE_YAML % s for s in DEFAULT_THUMBNAIL_SIZES
262262 False,
263263 )
264264
265 # Whether to retrieve and display profile data for a user when they
266 # are invited to a room
267 self.include_profile_data_on_invite = config.get(
268 "include_profile_data_on_invite", True
269 )
270
265271 if "restrict_public_rooms_to_local_users" in config and (
266272 "allow_public_rooms_without_auth" in config
267273 or "allow_public_rooms_over_federation" in config
390396 if self.public_baseurl is not None:
391397 if self.public_baseurl[-1] != "/":
392398 self.public_baseurl += "/"
393 self.start_pushers = config.get("start_pushers", True)
394399
395400 # (undocumented) option for torturing the worker-mode replication a bit,
396401 # for testing. The value defines the number of milliseconds to pause before
847852 #
848853 #limit_profile_requests_to_users_who_share_rooms: true
849854
855 # Uncomment to prevent a user's profile data from being retrieved and
856 # displayed in a room until they have joined it. By default, a user's
857 # profile data is included in an invite event, regardless of the values
858 # of the above two settings, and whether or not the users share a server.
859 # Defaults to 'true'.
860 #
861 #include_profile_data_on_invite: false
862
850863 # If set to 'true', removes the need for authentication to access the server's
851864 # public rooms directory through the client API, meaning that anyone can
852865 # query the room directory. Defaults to 'false'.
2323 section = "userdirectory"
2424
2525 def read_config(self, config, **kwargs):
26 self.user_directory_search_enabled = True
27 self.user_directory_search_all_users = False
28 user_directory_config = config.get("user_directory", None)
29 if user_directory_config:
30 self.user_directory_search_enabled = user_directory_config.get(
31 "enabled", True
32 )
33 self.user_directory_search_all_users = user_directory_config.get(
34 "search_all_users", False
35 )
26 user_directory_config = config.get("user_directory") or {}
27 self.user_directory_search_enabled = user_directory_config.get("enabled", True)
28 self.user_directory_search_all_users = user_directory_config.get(
29 "search_all_users", False
30 )
31 self.user_directory_search_prefer_local_users = user_directory_config.get(
32 "prefer_local_users", False
33 )
3634
3735 def generate_config_section(self, config_dir_path, server_name, **kwargs):
3836 return """
3937 # User Directory configuration
4038 #
41 # 'enabled' defines whether users can search the user directory. If
42 # false then empty responses are returned to all queries. Defaults to
43 # true.
44 #
45 # 'search_all_users' defines whether to search all users visible to your HS
46 # when searching the user directory, rather than limiting to users visible
47 # in public rooms. Defaults to false. If you set it True, you'll have to
48 # rebuild the user_directory search indexes, see
49 # https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
50 #
51 #user_directory:
52 # enabled: true
53 # search_all_users: false
39 user_directory:
40 # Defines whether users can search the user directory. If false then
41 # empty responses are returned to all queries. Defaults to true.
42 #
43 # Uncomment to disable the user directory.
44 #
45 #enabled: false
46
47 # Defines whether to search all users visible to your HS when searching
48 # the user directory, rather than limiting to users visible in public
49 # rooms. Defaults to false.
50 #
51 # If you set it true, you'll have to rebuild the user_directory search
52 # indexes, see:
53 # https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
54 #
55 # Uncomment to return search results containing all known users, even if that
56 # user does not share a room with the requester.
57 #
58 #search_all_users: true
59
60 # Defines whether to prefer local users in search query results.
61 # If True, local users are more likely to appear above remote users
62 # when searching the user directory. Defaults to false.
63 #
64 # Uncomment to prefer local over remote users in user directory search
65 # results.
66 #
67 #prefer_local_users: true
5468 """
1616
1717 import attr
1818
19 from ._base import Config, ConfigError, ShardedWorkerHandlingConfig
19 from ._base import (
20 Config,
21 ConfigError,
22 RoutableShardedWorkerHandlingConfig,
23 ShardedWorkerHandlingConfig,
24 )
2025 from .server import ListenerConfig, parse_listener_def
26
27 _FEDERATION_SENDER_WITH_SEND_FEDERATION_ENABLED_ERROR = """
28 The send_federation config option must be disabled in the main
29 synapse process before they can be run in a separate worker.
30
31 Please add ``send_federation: false`` to the main config
32 """
33
34 _PUSHER_WITH_START_PUSHERS_ENABLED_ERROR = """
35 The start_pushers config option must be disabled in the main
36 synapse process before they can be run in a separate worker.
37
38 Please add ``start_pushers: false`` to the main config
39 """
2140
2241
2342 def _instance_to_list_converter(obj: Union[str, List[str]]) -> List[str]:
102121 self.worker_replication_secret = config.get("worker_replication_secret", None)
103122
104123 self.worker_name = config.get("worker_name", self.worker_app)
124 self.instance_name = self.worker_name or "master"
105125
106126 self.worker_main_http_uri = config.get("worker_main_http_uri", None)
107127
117137 )
118138 )
119139
120 # Whether to send federation traffic out in this process. This only
121 # applies to some federation traffic, and so shouldn't be used to
122 # "disable" federation
123 self.send_federation = config.get("send_federation", True)
124
125 federation_sender_instances = config.get("federation_sender_instances") or []
140 # Handle federation sender configuration.
141 #
142 # There are two ways of configuring which instances handle federation
143 # sending:
144 # 1. The old way where "send_federation" is set to false and running a
145 # `synapse.app.federation_sender` worker app.
146 # 2. Specifying the workers sending federation in
147 # `federation_sender_instances`.
148 #
149
150 send_federation = config.get("send_federation", True)
151
152 federation_sender_instances = config.get("federation_sender_instances")
153 if federation_sender_instances is None:
154 # Default to an empty list, which means "another, unknown, worker is
155 # responsible for it".
156 federation_sender_instances = []
157
158 # If no federation sender instances are set we check if
159 # `send_federation` is set, which means use master
160 if send_federation:
161 federation_sender_instances = ["master"]
162
163 if self.worker_app == "synapse.app.federation_sender":
164 if send_federation:
165 # If we're running federation senders, and not using
166 # `federation_sender_instances`, then we should have
167 # explicitly set `send_federation` to false.
168 raise ConfigError(
169 _FEDERATION_SENDER_WITH_SEND_FEDERATION_ENABLED_ERROR
170 )
171
172 federation_sender_instances = [self.worker_name]
173
174 self.send_federation = self.instance_name in federation_sender_instances
126175 self.federation_shard_config = ShardedWorkerHandlingConfig(
127176 federation_sender_instances
128177 )
163212 "Must only specify one instance to handle `receipts` messages."
164213 )
165214
166 self.events_shard_config = ShardedWorkerHandlingConfig(self.writers.events)
215 if len(self.writers.events) == 0:
216 raise ConfigError("Must specify at least one instance to handle `events`.")
217
218 self.events_shard_config = RoutableShardedWorkerHandlingConfig(
219 self.writers.events
220 )
221
222 # Handle sharded push
223 start_pushers = config.get("start_pushers", True)
224 pusher_instances = config.get("pusher_instances")
225 if pusher_instances is None:
226 # Default to an empty list, which means "another, unknown, worker is
227 # responsible for it".
228 pusher_instances = []
229
230 # If no pusher instances are set we check if `start_pushers` is
231 # set, which means use master
232 if start_pushers:
233 pusher_instances = ["master"]
234
235 if self.worker_app == "synapse.app.pusher":
236 if start_pushers:
237 # If we're running pushers, and not using
238 # `pusher_instances`, then we should have explicitly set
239 # `start_pushers` to false.
240 raise ConfigError(_PUSHER_WITH_START_PUSHERS_ENABLED_ERROR)
241
242 pusher_instances = [self.instance_name]
243
244 self.start_pushers = self.instance_name in pusher_instances
245 self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances)
167246
168247 # Whether this worker should run background tasks or not.
169248 #
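
The federation-sender and pusher blocks above follow the same pattern. A condensed, hypothetical sketch of the resolution rule (per the comments above, the legacy boolean is only consulted when the new-style instance list is unset):

```python
from typing import List, Optional, Tuple

def resolve_shard_instances(
    configured: Optional[List[str]],  # e.g. federation_sender_instances
    legacy_flag: bool,                # e.g. send_federation (defaults to True)
    worker_app: Optional[str],
    dedicated_app: str,               # e.g. "synapse.app.federation_sender"
    instance_name: str,
) -> Tuple[bool, List[str]]:
    if configured is None:
        # Unset means "another, unknown worker is responsible", unless the
        # legacy flag says the main process ("master") should do the work.
        configured = ["master"] if legacy_flag else []
    if worker_app == dedicated_app:
        if legacy_flag:
            raise ValueError(
                "disable the legacy option in the main config before "
                "running a dedicated worker"
            )
        configured = [instance_name]
    # This process does the sending iff it appears in the resolved list.
    return instance_name in configured, configured

# Main process with the defaults (legacy flag on, no explicit list): it sends.
sends, who = resolve_shard_instances(None, True, None, "synapse.app.pusher", "master")
assert sends and who == ["master"]
```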
3333 from twisted.internet.abstract import isIPAddress
3434 from twisted.python import failure
3535
36 from synapse.api.constants import EventTypes, Membership
36 from synapse.api.constants import EduTypes, EventTypes, Membership
3737 from synapse.api.errors import (
3838 AuthError,
3939 Codes,
4343 SynapseError,
4444 UnsupportedRoomVersionError,
4545 )
46 from synapse.api.ratelimiting import Ratelimiter
4647 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
4748 from synapse.events import EventBase
4849 from synapse.federation.federation_base import FederationBase, event_from_pdu_json
868869 # EDU received.
869870 self._edu_type_to_instance = {} # type: Dict[str, List[str]]
870871
872 # A rate limiter for incoming room key requests per origin.
873 self._room_key_request_rate_limiter = Ratelimiter(
874 clock=self.clock,
875 rate_hz=self.config.rc_key_requests.per_second,
876 burst_count=self.config.rc_key_requests.burst_count,
877 )
878
871879 def register_edu_handler(
872880 self, edu_type: str, handler: Callable[[str, JsonDict], Awaitable[None]]
873881 ):
916924 self._edu_type_to_instance[edu_type] = instance_names
917925
918926 async def on_edu(self, edu_type: str, origin: str, content: dict):
919 if not self.config.use_presence and edu_type == "m.presence":
927 if not self.config.use_presence and edu_type == EduTypes.Presence:
928 return
929
930 # If the incoming room key requests from a particular origin are over
931 # the limit, drop them.
932 if (
933 edu_type == EduTypes.RoomKeyRequest
934 and not self._room_key_request_rate_limiter.can_do_action(origin)
935 ):
920936 return
921937
922938 # Check if we have a handler on this instance
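
For reference, a simplified, self-contained sketch of the leaky-bucket limiting that `Ratelimiter` provides; this is not synapse's implementation, and the numbers mirror the `rc_key_requests` defaults (20 per second, burst of 100) shown earlier:

```python
import time

class SimpleRatelimiter:
    """Allow at most `burst_count` pending actions per key, drained at `rate_hz`/s."""

    def __init__(self, rate_hz: float, burst_count: int) -> None:
        self.rate_hz = rate_hz
        self.burst_count = burst_count
        self._actions = {}  # key -> (last_update_time, pending_action_count)

    def can_do_action(self, key) -> bool:
        now = time.monotonic()
        last, count = self._actions.get(key, (now, 0.0))
        # Drain the bucket for the time elapsed since the last action.
        count = max(0.0, count - (now - last) * self.rate_hz)
        if count >= self.burst_count:
            self._actions[key] = (now, count)
            return False
        self._actions[key] = (now, count + 1.0)
        return True

# Keys need only be hashable: a federation origin here, or a
# (user_id, device_id) tuple as in the device message handler.
limiter = SimpleRatelimiter(rate_hz=20, burst_count=100)
assert limiter.can_do_action("remote.example.com")
```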
473473 self._processing_pending_presence = False
474474
475475 def send_presence_to_destinations(
476 self, states: List[UserPresenceState], destinations: List[str]
476 self, states: Iterable[UserPresenceState], destinations: Iterable[str]
477477 ) -> None:
478478 """Send the given presence states to the given destinations.
479479 destinations (list[str])
483483
484484 # This is when we receive a server-server Query
485485 async def on_GET(self, origin, content, query, query_type):
486 return await self.handler.on_query_request(
487 query_type,
488 {k.decode("utf8"): v[0].decode("utf-8") for k, v in query.items()},
489 )
486 args = {k.decode("utf8"): v[0].decode("utf-8") for k, v in query.items()}
487 args["origin"] = origin
488 return await self.handler.on_query_request(query_type, args)
490489
491490
492491 class FederationMakeJoinServlet(BaseFederationServlet):
3535 import bcrypt
3636 import pymacaroons
3737
38 from twisted.web.http import Request
38 from twisted.web.server import Request
3939
4040 from synapse.api.constants import LoginType
4141 from synapse.api.errors import (
480480 sid = authdict["session"]
481481
482482 # Convert the URI and method to strings.
483 uri = request.uri.decode("utf-8")
483 uri = request.uri.decode("utf-8") # type: ignore
484484 method = request.method.decode("utf-8")
485485
486486 # If there's no session ID, create a new session.
119119
120120 await self.store.user_set_password_hash(user_id, None)
121121
122 # Most of the pushers will have been deleted when we logged out the
123 # associated devices above, but we still need to delete pushers not
124 # associated with devices, e.g. email pushers.
125 await self.store.delete_all_pushers_for_user(user_id)
126
122127 # Add the user to a table of users pending deactivation (ie.
123128 # removal from all the rooms they're a member of)
124129 await self.store.add_user_pending_deactivation(user_id)
1515 import logging
1616 from typing import TYPE_CHECKING, Any, Dict
1717
18 from synapse.api.constants import EduTypes
1819 from synapse.api.errors import SynapseError
20 from synapse.api.ratelimiting import Ratelimiter
1921 from synapse.logging.context import run_in_background
2022 from synapse.logging.opentracing import (
2123 get_active_span_text_map,
2426 start_active_span,
2527 )
2628 from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
27 from synapse.types import JsonDict, UserID, get_domain_from_id
29 from synapse.types import JsonDict, Requester, UserID, get_domain_from_id
2830 from synapse.util import json_encoder
2931 from synapse.util.stringutils import random_string
3032
7779 ReplicationUserDevicesResyncRestServlet.make_client(hs)
7880 )
7981
82 self._ratelimiter = Ratelimiter(
83 clock=hs.get_clock(),
84 rate_hz=hs.config.rc_key_requests.per_second,
85 burst_count=hs.config.rc_key_requests.burst_count,
86 )
87
8088 async def on_direct_to_device_edu(self, origin: str, content: JsonDict) -> None:
8189 local_messages = {}
8290 sender_user_id = content["sender"]
167175
168176 async def send_device_message(
169177 self,
170 sender_user_id: str,
178 requester: Requester,
171179 message_type: str,
172180 messages: Dict[str, Dict[str, JsonDict]],
173181 ) -> None:
182 sender_user_id = requester.user.to_string()
183
174184 set_tag("number_of_messages", len(messages))
175185 set_tag("sender", sender_user_id)
176186 local_messages = {}
177187 remote_messages = {} # type: Dict[str, Dict[str, Dict[str, JsonDict]]]
178188 for user_id, by_device in messages.items():
189 # Ratelimit local cross-user key requests by the sending device.
190 if (
191 message_type == EduTypes.RoomKeyRequest
192 and user_id != sender_user_id
193 and self._ratelimiter.can_do_action(
194 (sender_user_id, requester.device_id)
195 )
196 ):
197 continue
198
179199 # we use UserID.from_string to catch invalid user ids
180200 if self.is_mine(UserID.from_string(user_id)):
181201 messages_by_device = {
1616 import random
1717 from typing import TYPE_CHECKING, Iterable, List, Optional
1818
19 from synapse.api.constants import EventTypes, Membership
19 from synapse.api.constants import EduTypes, EventTypes, Membership
2020 from synapse.api.errors import AuthError, SynapseError
2121 from synapse.events import EventBase
2222 from synapse.handlers.presence import format_user_presence_state
112112 states = await presence_handler.get_states(users)
113113 to_add.extend(
114114 {
115 "type": EventTypes.Presence,
115 "type": EduTypes.Presence,
116116 "content": format_user_presence_state(state, time_now),
117117 }
118118 for state in states
1717
1818 from twisted.internet import defer
1919
20 from synapse.api.constants import EventTypes, Membership
20 from synapse.api.constants import EduTypes, EventTypes, Membership
2121 from synapse.api.errors import SynapseError
2222 from synapse.events.validator import EventValidator
2323 from synapse.handlers.presence import format_user_presence_state
411411
412412 return [
413413 {
414 "type": EventTypes.Presence,
414 "type": EduTypes.Presence,
415415 "content": format_user_presence_state(s, time_now),
416416 }
417417 for s in states
386386
387387 self.room_invite_state_types = self.hs.config.room_invite_state_types
388388
389 self.membership_types_to_include_profile_data_in = (
390 {Membership.JOIN, Membership.INVITE}
391 if self.hs.config.include_profile_data_on_invite
392 else {Membership.JOIN}
393 )
394
389395 self.send_event = ReplicationSendEventRestServlet.make_client(hs)
390396
391397 # This is only used to get at ratelimit function, and maybe_kick_guest_users
499505 membership = builder.content.get("membership", None)
500506 target = UserID.from_string(builder.state_key)
501507
502 if membership in {Membership.JOIN, Membership.INVITE}:
508 if membership in self.membership_types_to_include_profile_data_in:
503509 # If event doesn't include a display name, add one.
504510 profile = self.profile_handler
505511 content = builder.content
273273
274274 self.external_sync_linearizer = Linearizer(name="external_sync_linearizer")
275275
276 # Start a LoopingCall in 30s that fires every 5s.
277 # The initial delay is to allow disconnected clients a chance to
278 # reconnect before we treat them as offline.
279 def run_timeout_handler():
280 return run_as_background_process(
281 "handle_presence_timeouts", self._handle_timeouts
282 )
283
284 self.clock.call_later(30, self.clock.looping_call, run_timeout_handler, 5000)
285
286 def run_persister():
287 return run_as_background_process(
288 "persist_presence_changes", self._persist_unpersisted_changes
289 )
290
291 self.clock.call_later(60, self.clock.looping_call, run_persister, 60 * 1000)
276 if self._presence_enabled:
277 # Start a LoopingCall in 30s that fires every 5s.
278 # The initial delay is to allow disconnected clients a chance to
279 # reconnect before we treat them as offline.
280 def run_timeout_handler():
281 return run_as_background_process(
282 "handle_presence_timeouts", self._handle_timeouts
283 )
284
285 self.clock.call_later(
286 30, self.clock.looping_call, run_timeout_handler, 5000
287 )
288
289 def run_persister():
290 return run_as_background_process(
291 "persist_presence_changes", self._persist_unpersisted_changes
292 )
293
294 self.clock.call_later(60, self.clock.looping_call, run_persister, 60 * 1000)
292295
293296 LaterGauge(
294297 "synapse_handlers_presence_wheel_timer_size",
298301 )
299302
300303 # Used to handle sending of presence to newly joined users/servers
301 if hs.config.use_presence:
304 if self._presence_enabled:
302305 self.notifier.add_replication_callback(self.notify_new_event)
303306
304307 # Presence is best effort and quickly heals itself, so lets just always
848851 """Process current state deltas to find new joins that need to be
849852 handled.
850853 """
854 # A map of destination to a set of user state that they should receive
855 presence_destinations = {} # type: Dict[str, Set[UserPresenceState]]
856
851857 for delta in deltas:
852858 typ = delta["type"]
853859 state_key = delta["state_key"]
857863
858864 logger.debug("Handling: %r %r, %s", typ, state_key, event_id)
859865
866 # Drop any event that isn't a membership join
860867 if typ != EventTypes.Member:
861868 continue
862869
879886 # Ignore changes to join events.
880887 continue
881888
882 await self._on_user_joined_room(room_id, state_key)
883
884 async def _on_user_joined_room(self, room_id: str, user_id: str) -> None:
889 # Retrieve any user presence state updates that need to be sent as a result,
890 # and the destinations that need to receive it
891 destinations, user_presence_states = await self._on_user_joined_room(
892 room_id, state_key
893 )
894
895 # Insert the destinations and respective updates into our destinations dict
896 for destination in destinations:
897 presence_destinations.setdefault(destination, set()).update(
898 user_presence_states
899 )
900
901 # Send out user presence updates for each destination
902 for destination, user_state_set in presence_destinations.items():
903 self.federation.send_presence_to_destinations(
904 destinations=[destination], states=user_state_set
905 )
906
907 async def _on_user_joined_room(
908 self, room_id: str, user_id: str
909 ) -> Tuple[List[str], List[UserPresenceState]]:
885910 """Called when we detect a user joining the room via the current state
886 delta stream.
887 """
888
911 delta stream. Returns the destinations that need to be updated and the
912 presence updates to send to them.
913
914 Args:
915 room_id: The ID of the room that the user has joined.
916 user_id: The ID of the user that has joined the room.
917
918 Returns:
919 A tuple of destinations and presence updates to send to them.
920 """
889921 if self.is_mine_id(user_id):
890922 # If this is a local user then we need to send their presence
891923 # out to hosts in the room (who don't already have it)
893925 # TODO: We should be able to filter the hosts down to those that
894926 # haven't previously seen the user
895927
928 remote_hosts = await self.state.get_current_hosts_in_room(room_id)
929
930 # Filter out ourselves.
931 filtered_remote_hosts = [
932 host for host in remote_hosts if host != self.server_name
933 ]
934
896935 state = await self.current_state_for_user(user_id)
897 hosts = await self.state.get_current_hosts_in_room(room_id)
898
899 # Filter out ourselves.
900 hosts = {host for host in hosts if host != self.server_name}
901
902 self.federation.send_presence_to_destinations(
903 states=[state], destinations=hosts
904 )
936 return filtered_remote_hosts, [state]
905937 else:
906938 # A remote user has joined the room, so we need to:
907939 # 1. Check if this is a new server in the room
913945
914946 # TODO: Check that this is actually a new server joining the
915947 # room.
948
949 remote_host = get_domain_from_id(user_id)
916950
917951 users = await self.state.get_current_users_in_room(room_id)
918952 user_ids = list(filter(self.is_mine_id, users))
933967 or state.status_msg is not None
934968 ]
935969
936 if states:
937 self.federation.send_presence_to_destinations(
938 states=states, destinations=[get_domain_from_id(user_id)]
939 )
970 return [remote_host], states
940971
941972
942973 def should_notify(old_state, new_state):
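The hunk above batches presence updates per destination before sending, rather than issuing one federation call per join. A minimal sketch of the batching pattern, with plain strings standing in for Synapse's UserPresenceState objects (`joins` is a hypothetical list of (destinations, states) pairs):

    from typing import Dict, List, Set, Tuple

    def batch_presence_updates(
        joins: List[Tuple[List[str], List[str]]],
    ) -> Dict[str, Set[str]]:
        # Union the presence states owed to each destination so that every
        # remote host receives a single batched update.
        presence_destinations = {}  # type: Dict[str, Set[str]]
        for destinations, states in joins:
            for destination in destinations:
                presence_destinations.setdefault(destination, set()).update(states)
        return presence_destinations

    # batch_presence_updates([(["a.example", "b.example"], ["@u1:s"]),
    #                         (["a.example"], ["@u2:s"])])
    # -> {"a.example": {"@u1:s", "@u2:s"}, "b.example": {"@u1:s"}}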
309309 await self._update_join_states(requester, target_user)
310310
311311 async def on_profile_query(self, args: JsonDict) -> JsonDict:
312 """Handles federation profile query requests."""
313
314 if not self.hs.config.allow_profile_lookup_over_federation:
315 raise SynapseError(
316 403,
317 "Profile lookup over federation is disabled on this homeserver",
318 Codes.FORBIDDEN,
319 )
320
312321 user = UserID.from_string(args["user_id"])
313322 if not self.hs.is_mine(user):
314323 raise SynapseError(400, "User is not hosted on this homeserver")
3030 import attr
3131 from typing_extensions import NoReturn, Protocol
3232
33 from twisted.web.http import Request
3433 from twisted.web.iweb import IRequest
34 from twisted.web.server import Request
3535
3636 from synapse.api.constants import LoginType
3737 from synapse.api.errors import Codes, NotFoundError, RedirectException, SynapseError
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515 import re
16 from typing import Union
1617
17 from twisted.internet import task
18 from twisted.internet import address, task
1819 from twisted.web.client import FileBodyProducer
1920 from twisted.web.iweb import IRequest
2021
5253 pass
5354
5455
56 def get_request_uri(request: IRequest) -> bytes:
57 """Return the full URI that was requested by the client"""
58 return b"%s://%s%s" % (
59 b"https" if request.isSecure() else b"http",
60 _get_requested_host(request),
61 # despite its name, "request.uri" is only the path and query-string.
62 request.uri,
63 )
64
65
66 def _get_requested_host(request: IRequest) -> bytes:
67 hostname = request.getHeader(b"host")
68 if hostname:
69 return hostname
70
71 # no Host header, use the address/port that the request arrived on
72 host = request.getHost() # type: Union[address.IPv4Address, address.IPv6Address]
73
74 hostname = host.host.encode("ascii")
75
76 if request.isSecure() and host.port == 443:
77 # default port for https
78 return hostname
79
80 if not request.isSecure() and host.port == 80:
81 # default port for http
82 return hostname
83
84 return b"%s:%i" % (
85 hostname,
86 host.port,
87 )
88
89
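The helpers above prefer the Host header and fall back to the listener's address, omitting the port only when it matches the scheme's default. A self-contained sketch of just the port logic (illustrative, not the code above):

    def format_host(hostname: bytes, port: int, secure: bool) -> bytes:
        # Omit the port only for scheme-default ports (443 for https, 80 for http).
        default_port = 443 if secure else 80
        if port == default_port:
            return hostname
        return b"%s:%i" % (hostname, port)

    # format_host(b"synapse.local", 8008, False) -> b"synapse.local:8008"
    # format_host(b"synapse.local", 443, True)   -> b"synapse.local"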
5590 def get_request_user_agent(request: IRequest, default: str = "") -> str:
5691 """Return the last User-Agent header, or the given default."""
5792 # There could be raw utf-8 bytes in the User-Agent header.
288288 treq_args: Dict[str, Any] = {},
289289 ip_whitelist: Optional[IPSet] = None,
290290 ip_blacklist: Optional[IPSet] = None,
291 http_proxy: Optional[bytes] = None,
292 https_proxy: Optional[bytes] = None,
291 use_proxy: bool = False,
293292 ):
294293 """
295294 Args:
299298 we may not request.
300299 ip_whitelist: The whitelisted IP addresses, that we can
301300 request if it were otherwise caught in a blacklist.
302 http_proxy: proxy server to use for http connections. host[:port]
303 https_proxy: proxy server to use for https connections. host[:port]
301 use_proxy: Whether proxy settings should be discovered and used
302 from conventional environment variables.
304303 """
305304 self.hs = hs
306305
344343 connectTimeout=15,
345344 contextFactory=self.hs.get_http_client_context_factory(),
346345 pool=pool,
347 http_proxy=http_proxy,
348 https_proxy=https_proxy,
346 use_proxy=use_proxy,
349347 )
350348
351349 if self._ip_blacklist:
749747 """The maximum allowed size of the HTTP body was exceeded."""
750748
751749
750 class _DiscardBodyWithMaxSizeProtocol(protocol.Protocol):
751 """A protocol which immediately errors upon receiving data."""
752
753 def __init__(self, deferred: defer.Deferred):
754 self.deferred = deferred
755
756 def _maybe_fail(self):
757 """
758 Report a max-size-exceeded error and disconnect the first time this is called.
759 """
760 if not self.deferred.called:
761 self.deferred.errback(BodyExceededMaxSize())
762 # Close the connection (forcefully) since all the data will get
763 # discarded anyway.
764 self.transport.abortConnection()
765
766 def dataReceived(self, data: bytes) -> None:
767 self._maybe_fail()
768
769 def connectionLost(self, reason: Failure) -> None:
770 self._maybe_fail()
771
772
752773 class _ReadBodyWithMaxSizeProtocol(protocol.Protocol):
774 """A protocol which reads body to a stream, erroring if the body exceeds a maximum size."""
775
753776 def __init__(
754777 self, stream: BinaryIO, deferred: defer.Deferred, max_size: Optional[int]
755778 ):
806829 Returns:
807830 A Deferred which resolves to the length of the read body.
808831 """
832 d = defer.Deferred()
833
809834 # If the Content-Length header gives a size larger than the maximum allowed
810835 # size, do not bother downloading the body.
811836 if max_size is not None and response.length != UNKNOWN_LENGTH:
812837 if response.length > max_size:
813 return defer.fail(BodyExceededMaxSize())
814
815 d = defer.Deferred()
838 response.deliverBody(_DiscardBodyWithMaxSizeProtocol(d))
839 return d
840
816841 response.deliverBody(_ReadBodyWithMaxSizeProtocol(stream, d, max_size))
817842 return d
818843
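With this change, a response whose Content-Length already exceeds the cap still has its body "delivered", but to a protocol that errbacks immediately and aborts the connection, rather than returning a pre-failed Deferred without consuming the response. A standalone sketch of the discard pattern (simplified from the class above; requires Twisted):

    from twisted.internet import defer, protocol

    class BodyTooLarge(Exception):
        """Raised when a response body exceeds the configured maximum."""

    class DiscardBody(protocol.Protocol):
        def __init__(self, deferred: defer.Deferred):
            self.deferred = deferred

        def dataReceived(self, data: bytes) -> None:
            # Fail once, then forcibly drop the connection, since all further
            # data would be discarded anyway.
            if not self.deferred.called:
                self.deferred.errback(BodyTooLarge())
                self.transport.abortConnection()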
1313 # limitations under the License.
1414 import logging
1515 import urllib.parse
16 from typing import List, Optional
16 from typing import Any, Generator, List, Optional
1717
1818 from netaddr import AddrFormatError, IPAddress, IPSet
1919 from zope.interface import implementer
115115 uri: bytes,
116116 headers: Optional[Headers] = None,
117117 bodyProducer: Optional[IBodyProducer] = None,
118 ) -> defer.Deferred:
118 ) -> Generator[defer.Deferred, Any, defer.Deferred]:
119119 """
120120 Args:
121121 method: HTTP method: GET/POST/etc
176176 # We need to make sure the host header is set to the netloc of the
177177 # server and that a user-agent is provided.
178178 if headers is None:
179 headers = Headers()
179 request_headers = Headers()
180180 else:
181 headers = headers.copy()
182
183 if not headers.hasHeader(b"host"):
184 headers.addRawHeader(b"host", parsed_uri.netloc)
185 if not headers.hasHeader(b"user-agent"):
186 headers.addRawHeader(b"user-agent", self.user_agent)
181 request_headers = headers.copy()
182
183 if not request_headers.hasHeader(b"host"):
184 request_headers.addRawHeader(b"host", parsed_uri.netloc)
185 if not request_headers.hasHeader(b"user-agent"):
186 request_headers.addRawHeader(b"user-agent", self.user_agent)
187187
188188 res = yield make_deferred_yieldable(
189 self._agent.request(method, uri, headers, bodyProducer)
189 self._agent.request(method, uri, request_headers, bodyProducer)
190190 )
191191
192192 return res
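The new Generator[defer.Deferred, Any, defer.Deferred] annotation reflects what @defer.inlineCallbacks (presumably applied to this method; the decorator is outside this hunk) actually decorates: a generator that yields Deferreds and whose return value becomes the eventual result. A minimal example of the same annotation pattern:

    from typing import Any, Generator

    from twisted.internet import defer

    @defer.inlineCallbacks
    def add_one(d: defer.Deferred) -> Generator[defer.Deferred, Any, int]:
        value = yield d  # suspend until the Deferred fires
        return value + 1

    # add_one(defer.succeed(41)) is a Deferred that fires with 42.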
10481048 RequestSendFailed: if the Content-Type header is missing or isn't JSON
10491049
10501050 """
1051 c_type = headers.getRawHeaders(b"Content-Type")
1052 if c_type is None:
1051 content_type_headers = headers.getRawHeaders(b"Content-Type")
1052 if content_type_headers is None:
10531053 raise RequestSendFailed(
10541054 RuntimeError("No Content-Type header received from remote server"),
10551055 can_retry=False,
10561056 )
10571057
1058 c_type = c_type[0].decode("ascii") # only the first header
1058 c_type = content_type_headers[0].decode("ascii") # only the first header
10591059 val, options = cgi.parse_header(c_type)
10601060 if val != "application/json":
10611061 raise RequestSendFailed(
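The renamed content_type_headers value feeds cgi.parse_header (visible above), which splits the media type from its parameters; that is what lets the check tolerate a charset suffix. A quick stdlib demonstration (note the cgi module is deprecated in newer Pythons):

    import cgi

    val, options = cgi.parse_header("application/json; charset=utf-8")
    assert val == "application/json"
    assert options == {"charset": "utf-8"}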
1313 # limitations under the License.
1414 import logging
1515 import re
16 from urllib.request import getproxies_environment, proxy_bypass_environment
1617
1718 from zope.interface import implementer
1819
5758
5859 pool (HTTPConnectionPool|None): connection pool to be used. If None, a
5960 non-persistent pool instance will be created.
61
62 use_proxy (bool): Whether proxy settings should be discovered and used
63 from conventional environment variables.
6064 """
6165
6266 def __init__(
6771 connectTimeout=None,
6872 bindAddress=None,
6973 pool=None,
70 http_proxy=None,
71 https_proxy=None,
74 use_proxy=False,
7275 ):
7376 _AgentBase.__init__(self, reactor, pool)
7477
8386 if bindAddress is not None:
8487 self._endpoint_kwargs["bindAddress"] = bindAddress
8588
89 http_proxy = None
90 https_proxy = None
91 no_proxy = None
92 if use_proxy:
93 proxies = getproxies_environment()
94 http_proxy = proxies["http"].encode() if "http" in proxies else None
95 https_proxy = proxies["https"].encode() if "https" in proxies else None
96 no_proxy = proxies["no"] if "no" in proxies else None
97
8698 self.http_proxy_endpoint = _http_proxy_endpoint(
8799 http_proxy, self.proxy_reactor, **self._endpoint_kwargs
88100 )
90102 self.https_proxy_endpoint = _http_proxy_endpoint(
91103 https_proxy, self.proxy_reactor, **self._endpoint_kwargs
92104 )
105
106 self.no_proxy = no_proxy
93107
94108 self._policy_for_https = contextFactory
95109 self._reactor = reactor
138152 pool_key = (parsed_uri.scheme, parsed_uri.host, parsed_uri.port)
139153 request_path = parsed_uri.originForm
140154
141 if parsed_uri.scheme == b"http" and self.http_proxy_endpoint:
155 should_skip_proxy = False
156 if self.no_proxy is not None:
157 should_skip_proxy = proxy_bypass_environment(
158 parsed_uri.host.decode(),
159 proxies={"no": self.no_proxy},
160 )
161
162 if (
163 parsed_uri.scheme == b"http"
164 and self.http_proxy_endpoint
165 and not should_skip_proxy
166 ):
142167 # Cache *all* connections under the same key, since we are only
143168 # connecting to a single destination, the proxy:
144169 pool_key = ("http-proxy", self.http_proxy_endpoint)
145170 endpoint = self.http_proxy_endpoint
146171 request_path = uri
147 elif parsed_uri.scheme == b"https" and self.https_proxy_endpoint:
172 elif (
173 parsed_uri.scheme == b"https"
174 and self.https_proxy_endpoint
175 and not should_skip_proxy
176 ):
148177 endpoint = HTTPConnectProxyEndpoint(
149178 self.proxy_reactor,
150179 self.https_proxy_endpoint,
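The use_proxy flag delegates proxy discovery to the stdlib helpers imported above, which is what gives the lowercase variables precedence and implements no_proxy matching. A rough illustration (the proxy URL and hostnames are made up):

    import os
    from urllib.request import getproxies_environment, proxy_bypass_environment

    os.environ["https_proxy"] = "http://proxy.example.com:8888"
    os.environ["no_proxy"] = "localhost,.internal.example.com"

    proxies = getproxies_environment()
    # {"https": "http://proxy.example.com:8888",
    #  "no": "localhost,.internal.example.com"}

    assert proxy_bypass_environment("svc.internal.example.com", proxies=proxies)
    assert not proxy_bypass_environment("matrix.org", proxies=proxies)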
2020 import types
2121 import urllib
2222 from http import HTTPStatus
23 from inspect import isawaitable
2324 from io import BytesIO
2425 from typing import (
2526 Any,
2930 Iterable,
3031 Iterator,
3132 List,
33 Optional,
3234 Pattern,
3335 Tuple,
3436 Union,
7880 """Sends a JSON error response to clients."""
7981
8082 if f.check(SynapseError):
81 error_code = f.value.code
82 error_dict = f.value.error_dict()
83
84 logger.info("%s SynapseError: %s - %s", request, error_code, f.value.msg)
83 # mypy doesn't understand that f.check asserts the type.
84 exc = f.value # type: SynapseError # type: ignore
85 error_code = exc.code
86 error_dict = exc.error_dict()
87
88 logger.info("%s SynapseError: %s - %s", request, error_code, exc.msg)
8589 else:
8690 error_code = 500
8791 error_dict = {"error": "Internal server error", "errcode": Codes.UNKNOWN}
9094 "Failed handle request via %r: %r",
9195 request.request_metrics.name,
9296 request,
93 exc_info=(f.type, f.value, f.getTracebackObject()),
97 exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
9498 )
9599
96100 # Only respond with an error response if we haven't already started writing,
127131 `{msg}` placeholders), or a jinja2 template
128132 """
129133 if f.check(CodeMessageException):
130 cme = f.value
134 # mypy doesn't understand that f.check asserts the type.
135 cme = f.value # type: CodeMessageException # type: ignore
131136 code = cme.code
132137 msg = cme.msg
133138
141146 logger.error(
142147 "Failed handle request %r",
143148 request,
144 exc_info=(f.type, f.value, f.getTracebackObject()),
149 exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
145150 )
146151 else:
147152 code = HTTPStatus.INTERNAL_SERVER_ERROR
150155 logger.error(
151156 "Failed handle request %r",
152157 request,
153 exc_info=(f.type, f.value, f.getTracebackObject()),
158 exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
154159 )
155160
156161 if isinstance(error_template, str):
277282 raw_callback_return = method_handler(request)
278283
279284 # Is it synchronous? We'll allow this for now.
280 if isinstance(raw_callback_return, (defer.Deferred, types.CoroutineType)):
285 if isawaitable(raw_callback_return):
281286 callback_return = await raw_callback_return
282287 else:
283288 callback_return = raw_callback_return # type: ignore
398403 A tuple of the callback to use, the name of the servlet, and the
399404 key word arguments to pass to the callback
400405 """
406 # At this point the path must be bytes.
407 request_path_bytes = request.path # type: bytes # type: ignore
408 request_path = request_path_bytes.decode("ascii")
401409 # Treat HEAD requests as GET requests.
402 request_path = request.path.decode("ascii")
403410 request_method = request.method
404411 if request_method == b"HEAD":
405412 request_method = b"GET"
550557 request: Request,
551558 iterator: Iterator[bytes],
552559 ):
553 self._request = request
560 self._request = request # type: Optional[Request]
554561 self._iterator = iterator
555562 self._paused = False
556563
562569 """
563570 Send a list of bytes as a chunk of a response.
564571 """
565 if not data:
572 if not data or not self._request:
566573 return
567574 self._request.write(b"".join(data))
568575
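The switch from isinstance(..., (Deferred, CoroutineType)) to inspect.isawaitable earlier in this file works because coroutines and modern Twisted Deferreds both implement __await__, so one check covers both. A two-assertion demonstration:

    from inspect import isawaitable

    from twisted.internet import defer

    async def handler():
        return 1

    coro = handler()
    assert isawaitable(coro)
    assert isawaitable(defer.Deferred())
    coro.close()  # avoid the "coroutine was never awaited" warning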
1313 import contextlib
1414 import logging
1515 import time
16 from typing import Optional, Union
17
16 from typing import Optional, Type, Union
17
18 import attr
19 from zope.interface import implementer
20
21 from twisted.internet.interfaces import IAddress
1822 from twisted.python.failure import Failure
1923 from twisted.web.server import Request, Site
2024
5256
5357 def __init__(self, channel, *args, **kw):
5458 Request.__init__(self, channel, *args, **kw)
55 self.site = channel.site
59 self.site = channel.site # type: SynapseSite
5660 self._channel = channel # this is used by the tests
5761 self.start_time = 0.0
5862
9195 def get_request_id(self):
9296 return "%s-%i" % (self.get_method(), self.request_seq)
9397
94 def get_redacted_uri(self):
95 uri = self.uri
98 def get_redacted_uri(self) -> str:
99 """Gets the redacted URI associated with the request (or placeholder if the URI
100 has not yet been received).
101
102 Note: This is necessary as the placeholder value in twisted is str
103 rather than bytes, so we need to sanitise `self.uri`.
104
105 Returns:
106 The redacted URI as a string.
107 """
108 uri = self.uri # type: Union[bytes, str]
96109 if isinstance(uri, bytes):
97 uri = self.uri.decode("ascii", errors="replace")
110 uri = uri.decode("ascii", errors="replace")
98111 return redact_uri(uri)
99112
100 def get_method(self):
101 """Gets the method associated with the request (or placeholder if not
102 method has yet been received).
113 def get_method(self) -> str:
114 """Gets the method associated with the request (or placeholder if method
115 has not yet been received).
103116
104117 Note: This is necessary as the placeholder value in twisted is str
105118 rather than bytes, so we need to sanitise `self.method`.
106119
107120 Returns:
108 str
109 """
110 method = self.method
121 The request method as a string.
122 """
123 method = self.method # type: Union[bytes, str]
111124 if isinstance(method, bytes):
112 method = self.method.decode("ascii")
125 return self.method.decode("ascii")
113126 return method
114127
115128 def render(self, resrc):
332345
333346
334347 class XForwardedForRequest(SynapseRequest):
335 def __init__(self, *args, **kw):
336 SynapseRequest.__init__(self, *args, **kw)
337
348 """Request object which honours proxy headers
349
350 Extends SynapseRequest to replace getClientIP, getClientAddress, and isSecure with
351 information from request headers.
338352 """
339 Add a layer on top of another request that only uses the value of an
340 X-Forwarded-For header as the result of C{getClientIP}.
341 """
342
343 def getClientIP(self):
344 """
345 @return: The client address (the first address) in the value of the
346 I{X-Forwarded-For header}. If the header is not present, return
347 C{b"-"}.
348 """
349 return (
350 self.requestHeaders.getRawHeaders(b"x-forwarded-for", [b"-"])[0]
351 .split(b",")[0]
352 .strip()
353 .decode("ascii")
353
354 # the client IP and ssl flag, as extracted from the headers.
355 _forwarded_for = None # type: Optional[_XForwardedForAddress]
356 _forwarded_https = False # type: bool
357
358 def requestReceived(self, command, path, version):
359 # this method is called by the Channel once the full request has been
360 # received, to dispatch the request to a resource.
361 # We can use it to set the IP address and protocol according to the
362 # headers.
363 self._process_forwarded_headers()
364 return super().requestReceived(command, path, version)
365
366 def _process_forwarded_headers(self):
367 headers = self.requestHeaders.getRawHeaders(b"x-forwarded-for")
368 if not headers:
369 return
370
371 # for now, we just use the first x-forwarded-for header. Really, we ought
372 # to start from the client IP address, and check whether it is trusted; if it
373 # is, work backwards through the headers until we find an untrusted address.
374 # see https://github.com/matrix-org/synapse/issues/9471
375 self._forwarded_for = _XForwardedForAddress(
376 headers[0].split(b",")[0].strip().decode("ascii")
354377 )
378
379 # if we got an x-forwarded-for header, also look for an x-forwarded-proto header
380 header = self.getHeader(b"x-forwarded-proto")
381 if header is not None:
382 self._forwarded_https = header.lower() == b"https"
383 else:
384 # this is done largely for backwards-compatibility so that people who
385 # haven't set an x-forwarded-proto header don't get a redirect loop.
386 logger.warning(
387 "forwarded request lacks an x-forwarded-proto header: assuming https"
388 )
389 self._forwarded_https = True
390
391 def isSecure(self):
392 if self._forwarded_https:
393 return True
394 return super().isSecure()
395
396 def getClientIP(self) -> str:
397 """
398 Return the IP address of the client who submitted this request.
399
400 This method is deprecated. Use getClientAddress() instead.
401 """
402 if self._forwarded_for is not None:
403 return self._forwarded_for.host
404 return super().getClientIP()
405
406 def getClientAddress(self) -> IAddress:
407 """
408 Return the address of the client who submitted this request.
409 """
410 if self._forwarded_for is not None:
411 return self._forwarded_for
412 return super().getClientAddress()
413
414
415 @implementer(IAddress)
416 @attr.s(frozen=True, slots=True)
417 class _XForwardedForAddress:
418 host = attr.ib(type=str)
355419
356420
357421 class SynapseSite(Site):
376440
377441 assert config.http_options is not None
378442 proxied = config.http_options.x_forwarded
379 self.requestFactory = XForwardedForRequest if proxied else SynapseRequest
443 self.requestFactory = (
444 XForwardedForRequest if proxied else SynapseRequest
445 ) # type: Type[Request]
380446 self.access_logger = logging.getLogger(logger_name)
381447 self.server_version_string = server_version_string.encode("ascii")
382448
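A condensed, standalone sketch of the header handling in XForwardedForRequest above (simplified: the real class hooks requestReceived and wraps the address in _XForwardedForAddress): the first hop of the first X-Forwarded-For header becomes the client IP, and the request is assumed to be HTTPS unless X-Forwarded-Proto says otherwise.

    from typing import List, Optional, Tuple

    def parse_forwarded_headers(
        forwarded_for: List[bytes], forwarded_proto: Optional[bytes]
    ) -> Tuple[Optional[str], bool]:
        if not forwarded_for:
            # No proxy headers: fall back to transport-level information.
            return None, False
        # Only the left-most hop of the first header is used for now; see
        # https://github.com/matrix-org/synapse/issues/9471 for the rest.
        client_ip = forwarded_for[0].split(b",")[0].strip().decode("ascii")
        # Assume https when the proto header is absent, matching the
        # backwards-compatibility behaviour above.
        https = forwarded_proto.lower() == b"https" if forwarded_proto else True
        return client_ip, https

    # parse_forwarded_headers([b"10.1.2.3, 192.168.0.1"], b"https")
    # -> ("10.1.2.3", True)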
3131 TCP4ClientEndpoint,
3232 TCP6ClientEndpoint,
3333 )
34 from twisted.internet.interfaces import IPushProducer, ITransport
34 from twisted.internet.interfaces import IPushProducer, IStreamClientEndpoint, ITransport
3535 from twisted.internet.protocol import Factory, Protocol
3636 from twisted.python.failure import Failure
3737
120120 try:
121121 ip = ip_address(self.host)
122122 if isinstance(ip, IPv4Address):
123 endpoint = TCP4ClientEndpoint(_reactor, self.host, self.port)
123 endpoint = TCP4ClientEndpoint(
124 _reactor, self.host, self.port
125 ) # type: IStreamClientEndpoint
124126 elif isinstance(ip, IPv6Address):
125127 endpoint = TCP6ClientEndpoint(_reactor, self.host, self.port)
126128 else:
526526 REGISTRY.register(ReactorLastSeenMetric())
527527
528528
529 def runUntilCurrentTimer(func):
529 def runUntilCurrentTimer(reactor, func):
530530 @functools.wraps(func)
531531 def f(*args, **kwargs):
532532 now = reactor.seconds()
589589
590590 try:
591591 # Ensure the reactor has all the attributes we expect
592 reactor.runUntilCurrent
593 reactor._newTimedCalls
594 reactor.threadCallQueue
592 reactor.seconds # type: ignore
593 reactor.runUntilCurrent # type: ignore
594 reactor._newTimedCalls # type: ignore
595 reactor.threadCallQueue # type: ignore
595596
596597 # runUntilCurrent is called when we have pending calls. It is called once
597598 # per iteration after fd polling.
598 reactor.runUntilCurrent = runUntilCurrentTimer(reactor.runUntilCurrent)
599 reactor.runUntilCurrent = runUntilCurrentTimer(reactor, reactor.runUntilCurrent) # type: ignore
599600
600601 # We manually run the GC each reactor tick so that we can get some metrics
601602 # about time spent doing GC,
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515 import logging
16 from typing import TYPE_CHECKING, Iterable, Optional, Tuple
16 from typing import TYPE_CHECKING, Any, Generator, Iterable, Optional, Tuple
1717
1818 from twisted.internet import defer
1919
306306 @defer.inlineCallbacks
307307 def get_state_events_in_room(
308308 self, room_id: str, types: Iterable[Tuple[str, Optional[str]]]
309 ) -> defer.Deferred:
309 ) -> Generator[defer.Deferred, Any, defer.Deferred]:
310310 """Gets current state events for the given room.
311311
312312 (This is exposed for compatibility with the old SpamCheckerApi. We should
1414 # limitations under the License.
1515 import logging
1616 import urllib.parse
17 from typing import TYPE_CHECKING, Any, Dict, Iterable, Union
17 from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Union
1818
1919 from prometheus_client import Counter
2020
2121 from twisted.internet.error import AlreadyCalled, AlreadyCancelled
22 from twisted.internet.interfaces import IDelayedCall
2223
2324 from synapse.api.constants import EventTypes
2425 from synapse.events import EventBase
7071 self.data = pusher_config.data
7172 self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
7273 self.failing_since = pusher_config.failing_since
73 self.timed_call = None
74 self.timed_call = None # type: Optional[IDelayedCall]
7475 self._is_processing = False
7576 self._group_unread_count_by_room = hs.config.push_group_unread_count_by_room
77 self._pusherpool = hs.get_pusherpool()
7678
7779 self.data = pusher_config.data
7880 if self.data is None:
298300 )
299301 else:
300302 logger.info("Pushkey %s was rejected: removing", pk)
301 await self.hs.remove_pusher(self.app_id, pk, self.user_id)
303 await self._pusherpool.remove_pusher(self.app_id, pk, self.user_id)
302304 return True
303305
304306 async def _build_notification_dict(
1818
1919 from prometheus_client import Gauge
2020
21 from synapse.api.errors import Codes, SynapseError
2122 from synapse.metrics.background_process_metrics import (
2223 run_as_background_process,
2324 wrap_as_background_process,
2425 )
2526 from synapse.push import Pusher, PusherConfig, PusherConfigException
2627 from synapse.push.pusher import PusherFactory
28 from synapse.replication.http.push import ReplicationRemovePusherRestServlet
2729 from synapse.types import JsonDict, RoomStreamToken
2830 from synapse.util.async_helpers import concurrently_execute
2931
5759 def __init__(self, hs: "HomeServer"):
5860 self.hs = hs
5961 self.pusher_factory = PusherFactory(hs)
60 self._should_start_pushers = hs.config.start_pushers
6162 self.store = self.hs.get_datastore()
6263 self.clock = self.hs.get_clock()
6364
6667 # We shard the handling of push notifications by user ID.
6768 self._pusher_shard_config = hs.config.push.pusher_shard_config
6869 self._instance_name = hs.get_instance_name()
70 self._should_start_pushers = (
71 self._instance_name in self._pusher_shard_config.instances
72 )
73
74 # We can only delete pushers on master.
75 self._remove_pusher_client = None
76 if hs.config.worker.worker_app:
77 self._remove_pusher_client = ReplicationRemovePusherRestServlet.make_client(
78 hs
79 )
6980
7081 # Record the last stream ID that we were poked about so we can get
7182 # changes since then. We set this to the current max stream ID on
101112 Returns:
102113 The newly created pusher.
103114 """
115
116 if kind == "email":
117 email_owner = await self.store.get_user_id_by_threepid("email", pushkey)
118 if email_owner != user_id:
119 raise SynapseError(400, "Email not found", Codes.THREEPID_NOT_FOUND)
104120
105121 time_now_msec = self.clock.time_msec()
106122
174190 user_id: user to remove pushers for
175191 access_tokens: access token *ids* to remove pushers for
176192 """
177 if not self._pusher_shard_config.should_handle(self._instance_name, user_id):
178 return
179
180193 tokens = set(access_tokens)
181194 for p in await self.store.get_pushers_by_user_id(user_id):
182195 if p.access_token in tokens:
379392
380393 synapse_pushers.labels(type(pusher).__name__, pusher.app_id).dec()
381394
382 await self.store.delete_pusher_by_app_id_pushkey_user_id(
383 app_id, pushkey, user_id
384 )
395 # We can only delete pushers on master.
396 if self._remove_pusher_client:
397 await self._remove_pusher_client(
398 app_id=app_id, pushkey=pushkey, user_id=user_id
399 )
400 else:
401 await self.store.delete_pusher_by_app_id_pushkey_user_id(
402 app_id, pushkey, user_id
403 )
105105 "pysaml2>=4.5.0;python_version>='3.6'",
106106 ],
107107 "oidc": ["authlib>=0.14.0"],
108 # systemd-python is necessary for logging to the systemd journal via
109 # `systemd.journal.JournalHandler`, as is documented in
110 # `contrib/systemd/log_config.yaml`.
108111 "systemd": ["systemd-python>=231"],
109112 "url_preview": ["lxml>=3.5.0"],
110113 "sentry": ["sentry-sdk>=0.7.2"],
2020 login,
2121 membership,
2222 presence,
23 push,
2324 register,
2425 send_event,
2526 streams,
4142 membership.register_servlets(hs, self)
4243 streams.register_servlets(hs, self)
4344 account_data.register_servlets(hs, self)
45 push.register_servlets(hs, self)
4446
4547 # The following can't currently be instantiated on workers.
4648 if hs.config.worker.worker_app is None:
212212 content = parse_json_object_from_request(request)
213213
214214 args = content["args"]
215
216 logger.info("Got %r query", query_type)
215 args["origin"] = content["origin"]
216
217 logger.info("Got %r query from %s", query_type, args["origin"])
217218
218219 result = await self.registry.on_query(query_type, args)
219220
1414 import logging
1515 from typing import TYPE_CHECKING, List, Optional, Tuple
1616
17 from twisted.web.http import Request
17 from twisted.web.server import Request
1818
1919 from synapse.http.servlet import parse_json_object_from_request
20 from synapse.http.site import SynapseRequest
2021 from synapse.replication.http._base import ReplicationEndpoint
2122 from synapse.types import JsonDict, Requester, UserID
2223 from synapse.util.distributor import user_left_room
7778 }
7879
7980 async def _handle_request( # type: ignore
80 self, request: Request, room_id: str, user_id: str
81 self, request: SynapseRequest, room_id: str, user_id: str
8182 ) -> Tuple[int, JsonDict]:
8283 content = parse_json_object_from_request(request)
8384
8586 event_content = content["content"]
8687
8788 requester = Requester.deserialize(self.store, content["requester"])
88
8989 request.requester = requester
9090
9191 logger.info("remote_join: %s into room: %s", user_id, room_id)
146146 }
147147
148148 async def _handle_request( # type: ignore
149 self, request: Request, invite_event_id: str
149 self, request: SynapseRequest, invite_event_id: str
150150 ) -> Tuple[int, JsonDict]:
151151 content = parse_json_object_from_request(request)
152152
154154 event_content = content["content"]
155155
156156 requester = Requester.deserialize(self.store, content["requester"])
157
158157 request.requester = requester
159158
160159 # hopefully we're now on the master, so this won't recurse!
0 # -*- coding: utf-8 -*-
1 # Copyright 2021 The Matrix.org Foundation C.I.C.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16 from typing import TYPE_CHECKING
17
18 from synapse.http.servlet import parse_json_object_from_request
19 from synapse.replication.http._base import ReplicationEndpoint
20
21 if TYPE_CHECKING:
22 from synapse.server import HomeServer
23
24 logger = logging.getLogger(__name__)
25
26
27 class ReplicationRemovePusherRestServlet(ReplicationEndpoint):
28 """Deletes the given pusher.
29
30 Request format:
31
32 POST /_synapse/replication/remove_pusher/:user_id
33
34 {
35 "app_id": "<some_id>",
36 "pushkey": "<some_key>"
37 }
38
39 """
40
41 NAME = "add_user_account_data"
42 PATH_ARGS = ("user_id",)
43 CACHE = False
44
45 def __init__(self, hs: "HomeServer"):
46 super().__init__(hs)
47
48 self.pusher_pool = hs.get_pusherpool()
49
50 @staticmethod
51 async def _serialize_payload(app_id, pushkey, user_id):
52 payload = {
53 "app_id": app_id,
54 "pushkey": pushkey,
55 }
56
57 return payload
58
59 async def _handle_request(self, request, user_id):
60 content = parse_json_object_from_request(request)
61
62 app_id = content["app_id"]
63 pushkey = content["pushkey"]
64
65 await self.pusher_pool.remove_pusher(app_id, pushkey, user_id)
66
67 return 200, {}
68
69
70 def register_servlets(hs, http_server):
71 ReplicationRemovePusherRestServlet(hs).register(http_server)
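On a worker, ReplicationRemovePusherRestServlet.make_client(hs) (see the pusher pool hunk above) produces the callable that issues this request to the main process. Purely for illustration, the equivalent raw request against a hypothetical replication listener on localhost:9093, assuming no worker_replication_secret is configured:

    import json
    import urllib.request

    # The user ID is percent-encoded into the path, per PATH_ARGS = ("user_id",).
    req = urllib.request.Request(
        "http://localhost:9093/_synapse/replication/remove_pusher/%40alice%3Aexample.com",
        data=json.dumps({"app_id": "m.http", "pushkey": "<some_key>"}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # expected to return 200 with an empty JSON object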
107107
108108 # Map from stream to list of deferreds waiting for the stream to
109109 # arrive at a particular position. The lists are sorted by stream position.
110 self._streams_to_waiters = (
111 {}
112 ) # type: Dict[str, List[Tuple[int, Deferred[None]]]]
110 self._streams_to_waiters = {} # type: Dict[str, List[Tuple[int, Deferred]]]
113111
114112 async def on_rdata(
115113 self, stream_name: str, instance_name: str, token: int, rows: list
324324 return "%s %s" % (self.instance_name, self.token)
325325
326326
327 class RemovePusherCommand(Command):
328 """Sent by the client to request the master remove the given pusher.
329
330 Format::
331
332 REMOVE_PUSHER <app_id> <push_key> <user_id>
333 """
334
335 NAME = "REMOVE_PUSHER"
336
337 def __init__(self, app_id, push_key, user_id):
338 self.user_id = user_id
339 self.app_id = app_id
340 self.push_key = push_key
341
342 @classmethod
343 def from_line(cls, line):
344 app_id, push_key, user_id = line.split(" ", 2)
345
346 return cls(app_id, push_key, user_id)
347
348 def to_line(self):
349 return " ".join((self.app_id, self.push_key, self.user_id))
350
351
352327 class UserIpCommand(Command):
353328 """Sent periodically when a worker sees activity from a client.
354329
415390 ReplicateCommand,
416391 UserSyncCommand,
417392 FederationAckCommand,
418 RemovePusherCommand,
419393 UserIpCommand,
420394 RemoteServerUpCommand,
421395 ClearUserSyncsCommand,
442416 UserSyncCommand.NAME,
443417 ClearUserSyncsCommand.NAME,
444418 FederationAckCommand.NAME,
445 RemovePusherCommand.NAME,
446419 UserIpCommand.NAME,
447420 ErrorCommand.NAME,
448421 RemoteServerUpCommand.NAME,
4343 PositionCommand,
4444 RdataCommand,
4545 RemoteServerUpCommand,
46 RemovePusherCommand,
4746 ReplicateCommand,
4847 UserIpCommand,
4948 UserSyncCommand,
372371 if self._federation_sender:
373372 self._federation_sender.federation_ack(cmd.instance_name, cmd.token)
374373
375 def on_REMOVE_PUSHER(
376 self, conn: AbstractConnection, cmd: RemovePusherCommand
377 ) -> Optional[Awaitable[None]]:
378 remove_pusher_counter.inc()
379
380 if self._is_master:
381 return self._handle_remove_pusher(cmd)
382 else:
383 return None
384
385 async def _handle_remove_pusher(self, cmd: RemovePusherCommand):
386 await self._store.delete_pusher_by_app_id_pushkey_user_id(
387 app_id=cmd.app_id, pushkey=cmd.push_key, user_id=cmd.user_id
388 )
389
390 self._notifier.on_new_replication_data()
391
392374 def on_USER_IP(
393375 self, conn: AbstractConnection, cmd: UserIpCommand
394376 ) -> Optional[Awaitable[None]]:
683665 UserSyncCommand(instance_id, user_id, is_syncing, last_sync_ms)
684666 )
685667
686 def send_remove_pusher(self, app_id: str, push_key: str, user_id: str):
687 """Poke the master to remove a pusher for a user"""
688 cmd = RemovePusherCommand(app_id, push_key, user_id)
689 self.send_command(cmd)
690
691668 def send_user_ip(
692669 self,
693670 user_id: str,
501501 """Global or per room account data was changed"""
502502
503503 AccountDataStreamRow = namedtuple(
504 "AccountDataStream",
504 "AccountDataStreamRow",
505505 ("user_id", "room_id", "data_type"), # str # Optional[str] # str
506506 )
507507
144144 <input type="submit" value="Continue" class="primary-button">
145145 {% if user_attributes.avatar_url or user_attributes.display_name or user_attributes.emails %}
146146 <section class="idp-pick-details">
147 <h2><img src="{{ idp.idp_icon | mxc_to_http(24, 24) }}"/>Information from {{ idp.idp_name }}</h2>
147 <h2>{% if idp.idp_icon %}<img src="{{ idp.idp_icon | mxc_to_http(24, 24) }}"/>{% endif %}Information from {{ idp.idp_name }}</h2>
148148 {% if user_attributes.avatar_url %}
149149 <label class="idp-detail idp-avatar" for="idp-avatar">
150150 <div class="check-row">
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15 from typing import TYPE_CHECKING, Tuple
1516
1617 from synapse.api.errors import NotFoundError, SynapseError
1718 from synapse.http.servlet import (
1920 assert_params_in_dict,
2021 parse_json_object_from_request,
2122 )
23 from synapse.http.site import SynapseRequest
2224 from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
23 from synapse.types import UserID
25 from synapse.types import JsonDict, UserID
26
27 if TYPE_CHECKING:
28 from synapse.server import HomeServer
2429
2530 logger = logging.getLogger(__name__)
2631
3439 "/users/(?P<user_id>[^/]*)/devices/(?P<device_id>[^/]*)$", "v2"
3540 )
3641
37 def __init__(self, hs):
42 def __init__(self, hs: "HomeServer"):
3843 super().__init__()
3944 self.hs = hs
4045 self.auth = hs.get_auth()
4146 self.device_handler = hs.get_device_handler()
4247 self.store = hs.get_datastore()
4348
44 async def on_GET(self, request, user_id, device_id):
49 async def on_GET(
50 self, request: SynapseRequest, user_id, device_id: str
51 ) -> Tuple[int, JsonDict]:
4552 await assert_requester_is_admin(self.auth, request)
4653
4754 target_user = UserID.from_string(user_id)
5764 )
5865 return 200, device
5966
60 async def on_DELETE(self, request, user_id, device_id):
67 async def on_DELETE(
68 self, request: SynapseRequest, user_id: str, device_id: str
69 ) -> Tuple[int, JsonDict]:
6170 await assert_requester_is_admin(self.auth, request)
6271
6372 target_user = UserID.from_string(user_id)
7180 await self.device_handler.delete_device(target_user.to_string(), device_id)
7281 return 200, {}
7382
74 async def on_PUT(self, request, user_id, device_id):
83 async def on_PUT(
84 self, request: SynapseRequest, user_id: str, device_id: str
85 ) -> Tuple[int, JsonDict]:
7586 await assert_requester_is_admin(self.auth, request)
7687
7788 target_user = UserID.from_string(user_id)
96107
97108 PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/devices$", "v2")
98109
99 def __init__(self, hs):
110 def __init__(self, hs: "HomeServer"):
100111 """
101112 Args:
102113 hs (synapse.server.HomeServer): server
106117 self.device_handler = hs.get_device_handler()
107118 self.store = hs.get_datastore()
108119
109 async def on_GET(self, request, user_id):
120 async def on_GET(
121 self, request: SynapseRequest, user_id: str
122 ) -> Tuple[int, JsonDict]:
110123 await assert_requester_is_admin(self.auth, request)
111124
112125 target_user = UserID.from_string(user_id)
129142
130143 PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/delete_devices$", "v2")
131144
132 def __init__(self, hs):
145 def __init__(self, hs: "HomeServer"):
133146 self.hs = hs
134147 self.auth = hs.get_auth()
135148 self.device_handler = hs.get_device_handler()
136149 self.store = hs.get_datastore()
137150
138 async def on_POST(self, request, user_id):
151 async def on_POST(
152 self, request: SynapseRequest, user_id: str
153 ) -> Tuple[int, JsonDict]:
139154 await assert_requester_is_admin(self.auth, request)
140155
141156 target_user = UserID.from_string(user_id)
1313 # limitations under the License.
1414
1515 import logging
16 from typing import TYPE_CHECKING, Tuple
1617
1718 from synapse.api.errors import Codes, NotFoundError, SynapseError
1819 from synapse.http.servlet import RestServlet, parse_integer, parse_string
20 from synapse.http.site import SynapseRequest
1921 from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
22 from synapse.types import JsonDict
23
24 if TYPE_CHECKING:
25 from synapse.server import HomeServer
2026
2127 logger = logging.getLogger(__name__)
2228
4450
4551 PATTERNS = admin_patterns("/event_reports$")
4652
47 def __init__(self, hs):
53 def __init__(self, hs: "HomeServer"):
4854 self.hs = hs
4955 self.auth = hs.get_auth()
5056 self.store = hs.get_datastore()
5157
52 async def on_GET(self, request):
58 async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
5359 await assert_requester_is_admin(self.auth, request)
5460
5561 start = parse_integer(request, "from", default=0)
105111
106112 PATTERNS = admin_patterns("/event_reports/(?P<report_id>[^/]*)$")
107113
108 def __init__(self, hs):
114 def __init__(self, hs: "HomeServer"):
109115 self.hs = hs
110116 self.auth = hs.get_auth()
111117 self.store = hs.get_datastore()
112118
113 async def on_GET(self, request, report_id):
119 async def on_GET(
120 self, request: SynapseRequest, report_id: str
121 ) -> Tuple[int, JsonDict]:
114122 await assert_requester_is_admin(self.auth, request)
115123
116124 message = (
117125 "The report_id parameter must be a string representing a positive integer."
118126 )
119127 try:
120 report_id = int(report_id)
128 resolved_report_id = int(report_id)
121129 except ValueError:
122130 raise SynapseError(400, message, errcode=Codes.INVALID_PARAM)
123131
124 if report_id < 0:
132 if resolved_report_id < 0:
125133 raise SynapseError(400, message, errcode=Codes.INVALID_PARAM)
126134
127 ret = await self.store.get_event_report(report_id)
135 ret = await self.store.get_event_report(resolved_report_id)
128136 if not ret:
129137 raise NotFoundError("Event report not found")
130138
1616 import logging
1717 from typing import TYPE_CHECKING, Tuple
1818
19 from twisted.web.http import Request
19 from twisted.web.server import Request
2020
2121 from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
2222 from synapse.http.servlet import RestServlet, parse_boolean, parse_integer
4141
4242
4343 logger = logging.getLogger(__name__)
44
45
46 class ResolveRoomIdMixin:
47 def __init__(self, hs: "HomeServer"):
48 self.room_member_handler = hs.get_room_member_handler()
49
50 async def resolve_room_id(
51 self, room_identifier: str, remote_room_hosts: Optional[List[str]] = None
52 ) -> Tuple[str, Optional[List[str]]]:
53 """
54 Resolve a room identifier to a room ID, if necessary.
55
56 This also performs checks to ensure the room ID is of the proper form.
57
58 Args:
59 room_identifier: The room ID or alias.
60 remote_room_hosts: The potential remote room hosts to use.
61
62 Returns:
63 The resolved room ID and the remote room hosts to use, if any.
64
65 Raises:
66 SynapseError if the room ID is of the wrong form.
67 """
68 if RoomID.is_valid(room_identifier):
69 resolved_room_id = room_identifier
70 elif RoomAlias.is_valid(room_identifier):
71 room_alias = RoomAlias.from_string(room_identifier)
72 (
73 room_id,
74 remote_room_hosts,
75 ) = await self.room_member_handler.lookup_room_alias(room_alias)
76 resolved_room_id = room_id.to_string()
77 else:
78 raise SynapseError(
79 400, "%s was not legal room ID or room alias" % (room_identifier,)
80 )
81 if not resolved_room_id:
82 raise SynapseError(
83 400, "Unknown room ID or room alias %s" % room_identifier
84 )
85 return resolved_room_id, remote_room_hosts
4486
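RoomID.is_valid and RoomAlias.is_valid boil down to a sigil check: room IDs start with "!", aliases with "#". A toy version of the mixin's dispatch (deliberately simplified; a static dict stands in for the federation alias lookup):

    from typing import Dict

    # Hypothetical directory of alias -> room ID mappings, standing in for
    # room_member_handler.lookup_room_alias().
    ALIAS_DIRECTORY = {"#room:example.com": "!abc123:example.com"}  # type: Dict[str, str]

    def resolve_room_identifier(identifier: str) -> str:
        if identifier.startswith("!"):
            return identifier  # already a room ID
        if identifier.startswith("#"):
            room_id = ALIAS_DIRECTORY.get(identifier)
            if room_id is None:
                raise ValueError("Unknown room ID or room alias %s" % identifier)
            return room_id
        raise ValueError("%s was not legal room ID or room alias" % (identifier,))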
4587
4688 class ShutdownRoomRestServlet(RestServlet):
333375 return 200, ret
334376
335377
336 class JoinRoomAliasServlet(RestServlet):
378 class JoinRoomAliasServlet(ResolveRoomIdMixin, RestServlet):
337379
338380 PATTERNS = admin_patterns("/join/(?P<room_identifier>[^/]*)")
339381
340382 def __init__(self, hs: "HomeServer"):
341 self.hs = hs
342 self.auth = hs.get_auth()
343 self.room_member_handler = hs.get_room_member_handler()
383 super().__init__(hs)
384 self.hs = hs
385 self.auth = hs.get_auth()
344386 self.admin_handler = hs.get_admin_handler()
345387 self.state_handler = hs.get_state_handler()
346388
361403 if not await self.admin_handler.get_user(target_user):
362404 raise NotFoundError("User not found")
363405
364 if RoomID.is_valid(room_identifier):
365 room_id = room_identifier
366 try:
367 remote_room_hosts = [
368 x.decode("ascii") for x in request.args[b"server_name"]
369 ] # type: Optional[List[str]]
370 except Exception:
371 remote_room_hosts = None
372 elif RoomAlias.is_valid(room_identifier):
373 handler = self.room_member_handler
374 room_alias = RoomAlias.from_string(room_identifier)
375 room_id, remote_room_hosts = await handler.lookup_room_alias(room_alias)
376 else:
377 raise SynapseError(
378 400, "%s was not legal room ID or room alias" % (room_identifier,)
379 )
406 # Get the room ID from the identifier.
407 try:
408 remote_room_hosts = [
409 x.decode("ascii") for x in request.args[b"server_name"]
410 ] # type: Optional[List[str]]
411 except Exception:
412 remote_room_hosts = None
413 room_id, remote_room_hosts = await self.resolve_room_id(
414 room_identifier, remote_room_hosts
415 )
380416
381417 fake_requester = create_requester(
382418 target_user, authenticated_entity=requester.authenticated_entity
411447 return 200, {"room_id": room_id}
412448
413449
414 class MakeRoomAdminRestServlet(RestServlet):
450 class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
415451 """Allows a server admin to get power in a room if a local user has power in
416452 a room. Will also invite the user if they're not in the room and it's a
417453 private room. Can specify another user (rather than the admin user) to be
426462 PATTERNS = admin_patterns("/rooms/(?P<room_identifier>[^/]*)/make_room_admin")
427463
428464 def __init__(self, hs: "HomeServer"):
429 self.hs = hs
430 self.auth = hs.get_auth()
431 self.room_member_handler = hs.get_room_member_handler()
465 super().__init__(hs)
466 self.hs = hs
467 self.auth = hs.get_auth()
432468 self.event_creation_handler = hs.get_event_creation_handler()
433469 self.state_handler = hs.get_state_handler()
434470 self.is_mine_id = hs.is_mine_id
435471
436 async def on_POST(self, request, room_identifier):
472 async def on_POST(
473 self, request: SynapseRequest, room_identifier: str
474 ) -> Tuple[int, JsonDict]:
437475 requester = await self.auth.get_user_by_req(request)
438476 await assert_user_is_admin(self.auth, requester.user)
439477 content = parse_json_object_from_request(request, allow_empty_body=True)
440478
441 # Resolve to a room ID, if necessary.
442 if RoomID.is_valid(room_identifier):
443 room_id = room_identifier
444 elif RoomAlias.is_valid(room_identifier):
445 room_alias = RoomAlias.from_string(room_identifier)
446 room_id, _ = await self.room_member_handler.lookup_room_alias(room_alias)
447 room_id = room_id.to_string()
448 else:
449 raise SynapseError(
450 400, "%s was not legal room ID or room alias" % (room_identifier,)
451 )
479 room_id, _ = await self.resolve_room_id(room_identifier)
452480
453481 # Which user to grant room admin rights to.
454482 user_to_add = content.get("user_id", requester.user.to_string())
555583 return 200, {}
556584
557585
558 class ForwardExtremitiesRestServlet(RestServlet):
586 class ForwardExtremitiesRestServlet(ResolveRoomIdMixin, RestServlet):
559587 """Allows a server admin to get or clear forward extremities.
560588
561589 Clearing does not require restarting the server.
570598 PATTERNS = admin_patterns("/rooms/(?P<room_identifier>[^/]*)/forward_extremities")
571599
572600 def __init__(self, hs: "HomeServer"):
573 self.hs = hs
574 self.auth = hs.get_auth()
575 self.room_member_handler = hs.get_room_member_handler()
601 super().__init__(hs)
602 self.hs = hs
603 self.auth = hs.get_auth()
576604 self.store = hs.get_datastore()
577605
578 async def resolve_room_id(self, room_identifier: str) -> str:
579 """Resolve to a room ID, if necessary."""
580 if RoomID.is_valid(room_identifier):
581 resolved_room_id = room_identifier
582 elif RoomAlias.is_valid(room_identifier):
583 room_alias = RoomAlias.from_string(room_identifier)
584 room_id, _ = await self.room_member_handler.lookup_room_alias(room_alias)
585 resolved_room_id = room_id.to_string()
586 else:
587 raise SynapseError(
588 400, "%s was not legal room ID or room alias" % (room_identifier,)
589 )
590 if not resolved_room_id:
591 raise SynapseError(
592 400, "Unknown room ID or room alias %s" % room_identifier
593 )
594 return resolved_room_id
595
596 async def on_DELETE(self, request, room_identifier):
597 requester = await self.auth.get_user_by_req(request)
598 await assert_user_is_admin(self.auth, requester.user)
599
600 room_id = await self.resolve_room_id(room_identifier)
606 async def on_DELETE(
607 self, request: SynapseRequest, room_identifier: str
608 ) -> Tuple[int, JsonDict]:
609 requester = await self.auth.get_user_by_req(request)
610 await assert_user_is_admin(self.auth, requester.user)
611
612 room_id, _ = await self.resolve_room_id(room_identifier)
601613
602614 deleted_count = await self.store.delete_forward_extremities_for_room(room_id)
603615 return 200, {"deleted": deleted_count}
604616
605 async def on_GET(self, request, room_identifier):
606 requester = await self.auth.get_user_by_req(request)
607 await assert_user_is_admin(self.auth, requester.user)
608
609 room_id = await self.resolve_room_id(room_identifier)
617 async def on_GET(
618 self, request: SynapseRequest, room_identifier: str
619 ) -> Tuple[int, JsonDict]:
620 requester = await self.auth.get_user_by_req(request)
621 await assert_user_is_admin(self.auth, requester.user)
622
623 room_id, _ = await self.resolve_room_id(room_identifier)
610624
611625 extremities = await self.store.get_forward_extremities_for_room(room_id)
612626 return 200, {"count": len(extremities), "results": extremities}
622636
623637 PATTERNS = admin_patterns("/rooms/(?P<room_id>[^/]*)/context/(?P<event_id>[^/]*)$")
624638
625 def __init__(self, hs):
639 def __init__(self, hs: "HomeServer"):
626640 super().__init__()
627641 self.clock = hs.get_clock()
628642 self.room_context_handler = hs.get_room_context_handler()
629643 self._event_serializer = hs.get_event_client_serializer()
630644 self.auth = hs.get_auth()
631645
632 async def on_GET(self, request, room_id, event_id):
646 async def on_GET(
647 self, request: SynapseRequest, room_id: str, event_id: str
648 ) -> Tuple[int, JsonDict]:
633649 requester = await self.auth.get_user_by_req(request, allow_guest=False)
634650 await assert_user_is_admin(self.auth, requester.user)
635651
1515 import hmac
1616 import logging
1717 from http import HTTPStatus
18 from typing import TYPE_CHECKING, Tuple
18 from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
1919
2020 from synapse.api.constants import UserTypes
2121 from synapse.api.errors import Codes, NotFoundError, SynapseError
3434 assert_user_is_admin,
3535 )
3636 from synapse.rest.client.v2_alpha._base import client_patterns
37 from synapse.storage.databases.main.media_repository import MediaSortOrder
3738 from synapse.types import JsonDict, UserID
3839
3940 if TYPE_CHECKING:
4546 class UsersRestServlet(RestServlet):
4647 PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)$")
4748
48 def __init__(self, hs):
49 def __init__(self, hs: "HomeServer"):
4950 self.hs = hs
5051 self.store = hs.get_datastore()
5152 self.auth = hs.get_auth()
5253 self.admin_handler = hs.get_admin_handler()
5354
54 async def on_GET(self, request, user_id):
55 async def on_GET(
56 self, request: SynapseRequest, user_id: str
57 ) -> Tuple[int, List[JsonDict]]:
5558 target_user = UserID.from_string(user_id)
5659 await assert_requester_is_admin(self.auth, request)
5760
151154 otherwise an error.
152155 """
153156
154 def __init__(self, hs):
157 def __init__(self, hs: "HomeServer"):
155158 self.hs = hs
156159 self.auth = hs.get_auth()
157160 self.admin_handler = hs.get_admin_handler()
163166 self.registration_handler = hs.get_registration_handler()
164167 self.pusher_pool = hs.get_pusherpool()
165168
166 async def on_GET(self, request, user_id):
169 async def on_GET(
170 self, request: SynapseRequest, user_id: str
171 ) -> Tuple[int, JsonDict]:
167172 await assert_requester_is_admin(self.auth, request)
168173
169174 target_user = UserID.from_string(user_id)
177182
178183 return 200, ret
179184
180 async def on_PUT(self, request, user_id):
185 async def on_PUT(
186 self, request: SynapseRequest, user_id: str
187 ) -> Tuple[int, JsonDict]:
181188 requester = await self.auth.get_user_by_req(request)
182189 await assert_user_is_admin(self.auth, requester.user)
183190
271278 )
272279
273280 user = await self.admin_handler.get_user(target_user)
281 assert user is not None
282
274283 return 200, user
275284
276285 else: # create user
328337 target_user, requester, body["avatar_url"], True
329338 )
330339
331 ret = await self.admin_handler.get_user(target_user)
332
333 return 201, ret
340 user = await self.admin_handler.get_user(target_user)
341 assert user is not None
342
343 return 201, user
334344
335345
336346 class UserRegisterServlet(RestServlet):
344354 PATTERNS = admin_patterns("/register")
345355 NONCE_TIMEOUT = 60
346356
347 def __init__(self, hs):
357 def __init__(self, hs: "HomeServer"):
348358 self.auth_handler = hs.get_auth_handler()
349359 self.reactor = hs.get_reactor()
350 self.nonces = {}
360 self.nonces = {} # type: Dict[str, int]
351361 self.hs = hs
352362
353363 def _clear_old_nonces(self):
360370 if now - v > self.NONCE_TIMEOUT:
361371 del self.nonces[k]
362372
363 def on_GET(self, request):
373 def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
364374 """
365375 Generate a new nonce.
366376 """
370380 self.nonces[nonce] = int(self.reactor.seconds())
371381 return 200, {"nonce": nonce}
372382
373 async def on_POST(self, request):
383 async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
374384 self._clear_old_nonces()
375385
376386 if not self.hs.config.registration_shared_secret:
476486 client_patterns("/admin" + path_regex, v1=True)
477487 )
478488
479 def __init__(self, hs):
489 def __init__(self, hs: "HomeServer"):
480490 self.hs = hs
481491 self.auth = hs.get_auth()
482492 self.admin_handler = hs.get_admin_handler()
483493
484 async def on_GET(self, request, user_id):
494 async def on_GET(
495 self, request: SynapseRequest, user_id: str
496 ) -> Tuple[int, JsonDict]:
485497 target_user = UserID.from_string(user_id)
486498 requester = await self.auth.get_user_by_req(request)
487499 auth_user = requester.user
506518 self.is_mine = hs.is_mine
507519 self.store = hs.get_datastore()
508520
509 async def on_POST(self, request: str, target_user_id: str) -> Tuple[int, JsonDict]:
521 async def on_POST(
522 self, request: SynapseRequest, target_user_id: str
523 ) -> Tuple[int, JsonDict]:
510524 requester = await self.auth.get_user_by_req(request)
511525 await assert_user_is_admin(self.auth, requester.user)
512526
548562 self.account_activity_handler = hs.get_account_validity_handler()
549563 self.auth = hs.get_auth()
550564
551 async def on_POST(self, request):
565 async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
552566 await assert_requester_is_admin(self.auth, request)
553567
554568 body = parse_json_object_from_request(request)
582596
583597 PATTERNS = admin_patterns("/reset_password/(?P<target_user_id>[^/]*)")
584598
585 def __init__(self, hs):
599 def __init__(self, hs: "HomeServer"):
586600 self.store = hs.get_datastore()
587601 self.hs = hs
588602 self.auth = hs.get_auth()
589603 self.auth_handler = hs.get_auth_handler()
590604 self._set_password_handler = hs.get_set_password_handler()
591605
592 async def on_POST(self, request, target_user_id):
606 async def on_POST(
607 self, request: SynapseRequest, target_user_id: str
608 ) -> Tuple[int, JsonDict]:
593609 """Post request to allow an administrator reset password for a user.
594610 This needs user to have administrator access in Synapse.
595611 """
624640
625641 PATTERNS = admin_patterns("/search_users/(?P<target_user_id>[^/]*)")
626642
627 def __init__(self, hs):
628 self.hs = hs
629 self.store = hs.get_datastore()
630 self.auth = hs.get_auth()
631
632 async def on_GET(self, request, target_user_id):
643 def __init__(self, hs: "HomeServer"):
644 self.hs = hs
645 self.store = hs.get_datastore()
646 self.auth = hs.get_auth()
647
648 async def on_GET(
649 self, request: SynapseRequest, target_user_id: str
650 ) -> Tuple[int, Optional[List[JsonDict]]]:
633651 """Get request to search user table for specific users according to
634652 search term.
635653 This needs user to have a administrator access in Synapse.
680698
681699 PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/admin$")
682700
683 def __init__(self, hs):
684 self.hs = hs
685 self.store = hs.get_datastore()
686 self.auth = hs.get_auth()
687
688 async def on_GET(self, request, user_id):
701 def __init__(self, hs: "HomeServer"):
702 self.hs = hs
703 self.store = hs.get_datastore()
704 self.auth = hs.get_auth()
705
706 async def on_GET(
707 self, request: SynapseRequest, user_id: str
708 ) -> Tuple[int, JsonDict]:
689709 await assert_requester_is_admin(self.auth, request)
690710
691711 target_user = UserID.from_string(user_id)
697717
698718 return 200, {"admin": is_admin}
699719
700 async def on_PUT(self, request, user_id):
720 async def on_PUT(
721 self, request: SynapseRequest, user_id: str
722 ) -> Tuple[int, JsonDict]:
701723 requester = await self.auth.get_user_by_req(request)
702724 await assert_user_is_admin(self.auth, requester.user)
703725 auth_user = requester.user
728750
729751 PATTERNS = admin_patterns("/users/(?P<user_id>[^/]+)/joined_rooms$")
730752
731 def __init__(self, hs):
753 def __init__(self, hs: "HomeServer"):
732754 self.is_mine = hs.is_mine
733755 self.auth = hs.get_auth()
734756 self.store = hs.get_datastore()
735757
736 async def on_GET(self, request, user_id):
758 async def on_GET(
759 self, request: SynapseRequest, user_id: str
760 ) -> Tuple[int, JsonDict]:
737761 await assert_requester_is_admin(self.auth, request)
738762
739763 room_ids = await self.store.get_rooms_for_user(user_id)
756780
757781 PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/pushers$")
758782
759 def __init__(self, hs):
783 def __init__(self, hs: "HomeServer"):
760784 self.is_mine = hs.is_mine
761785 self.store = hs.get_datastore()
762786 self.auth = hs.get_auth()
797821
798822 PATTERNS = admin_patterns("/users/(?P<user_id>[^/]+)/media$")
799823
800 def __init__(self, hs):
824 def __init__(self, hs: "HomeServer"):
801825 self.is_mine = hs.is_mine
802826 self.auth = hs.get_auth()
803827 self.store = hs.get_datastore()
831855 errcode=Codes.INVALID_PARAM,
832856 )
833857
858 # If neither `order_by` nor `dir` is set, set the default order
859 # so that the newest media is on top, for backward compatibility.
860 if b"order_by" not in request.args and b"dir" not in request.args:
861 order_by = MediaSortOrder.CREATED_TS.value
862 direction = "b"
863 else:
864 order_by = parse_string(
865 request,
866 "order_by",
867 default=MediaSortOrder.CREATED_TS.value,
868 allowed_values=(
869 MediaSortOrder.MEDIA_ID.value,
870 MediaSortOrder.UPLOAD_NAME.value,
871 MediaSortOrder.CREATED_TS.value,
872 MediaSortOrder.LAST_ACCESS_TS.value,
873 MediaSortOrder.MEDIA_LENGTH.value,
874 MediaSortOrder.MEDIA_TYPE.value,
875 MediaSortOrder.QUARANTINED_BY.value,
876 MediaSortOrder.SAFE_FROM_QUARANTINE.value,
877 ),
878 )
879 direction = parse_string(
880 request, "dir", default="f", allowed_values=("f", "b")
881 )
882
834883 media, total = await self.store.get_local_media_by_user_paginate(
835 start, limit, user_id
884 start, limit, user_id, order_by, direction
836885 )
837886
838887 ret = {"media": media, "total": total}
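The hunk above threads the new `order_by`/`dir` query parameters through to the store. As a hedged sketch of how a client might exercise the new ordering (the homeserver URL, user ID and token are placeholders, and `requests` stands in for whatever HTTP client you use):

```python
# Illustrative call against GET /_synapse/admin/v1/users/<user_id>/media.
import requests

resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v1/users/@alice:example.com/media",
    params={"order_by": "media_length", "dir": "b"},  # largest media first
    headers={"Authorization": "Bearer <admin_access_token>"},
)
body = resp.json()
media, total = body["media"], body["total"]
```

Omitting both parameters keeps the old behaviour (newest media first), per the backward-compatibility branch above.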
864913 self.auth = hs.get_auth()
865914 self.auth_handler = hs.get_auth_handler()
866915
867 async def on_POST(self, request, user_id):
916 async def on_POST(
917 self, request: SynapseRequest, user_id: str
918 ) -> Tuple[int, JsonDict]:
868919 requester = await self.auth.get_user_by_req(request)
869920 await assert_user_is_admin(self.auth, requester.user)
870921 auth_user = requester.user
916967 self.store = hs.get_datastore()
917968 self.auth = hs.get_auth()
918969
919 async def on_POST(self, request, user_id):
970 async def on_POST(
971 self, request: SynapseRequest, user_id: str
972 ) -> Tuple[int, JsonDict]:
920973 await assert_requester_is_admin(self.auth, request)
921974
922975 if not self.hs.is_mine_id(user_id):
1919 from synapse.api.ratelimiting import Ratelimiter
2020 from synapse.appservice import ApplicationService
2121 from synapse.handlers.sso import SsoIdentityProvider
22 from synapse.http import get_request_uri
2223 from synapse.http.server import HttpServer, finish_request
2324 from synapse.http.servlet import (
2425 RestServlet,
353354 hs.get_oidc_handler()
354355 self._sso_handler = hs.get_sso_handler()
355356 self._msc2858_enabled = hs.config.experimental.msc2858_enabled
357 self._public_baseurl = hs.config.public_baseurl
356358
357359 def register(self, http_server: HttpServer) -> None:
358360 super().register(http_server)
372374 async def on_GET(
373375 self, request: SynapseRequest, idp_id: Optional[str] = None
374376 ) -> None:
377 if not self._public_baseurl:
378 raise SynapseError(400, "SSO requires a valid public_baseurl")
379
380 # if this isn't the expected hostname, redirect to the right one, so that we
381 # get our cookies back.
382 requested_uri = get_request_uri(request)
383 baseurl_bytes = self._public_baseurl.encode("utf-8")
384 if not requested_uri.startswith(baseurl_bytes):
385 # swap out the incorrect base URL for the right one.
386 #
387 # The idea here is to redirect from
388 # https://foo.bar/whatever/_matrix/...
389 # to
390 # https://public.baseurl/_matrix/...
391 #
392 i = requested_uri.index(b"/_matrix")
393 new_uri = baseurl_bytes[:-1] + requested_uri[i:]
394 logger.info(
395 "Requested URI %s is not canonical: redirecting to %s",
396 requested_uri.decode("utf-8", errors="replace"),
397 new_uri.decode("utf-8", errors="replace"),
398 )
399 request.redirect(new_uri)
400 finish_request(request)
401 return
402
375403 client_redirect_url = parse_string(
376404 request, "redirectUrl", required=True, encoding=None
377405 )
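The redirect above swaps an unexpected host for the configured `public_baseurl`, preserving everything from `/_matrix` onwards so the SSO cookies come back to the canonical host. A standalone sketch of the byte-level splice (assuming, as the code does, that `public_baseurl` carries a trailing slash):

```python
def canonicalise(requested_uri: bytes, public_baseurl: bytes) -> bytes:
    """Rewrite requested_uri onto public_baseurl, keeping the /_matrix path."""
    if requested_uri.startswith(public_baseurl):
        return requested_uri
    i = requested_uri.index(b"/_matrix")
    # public_baseurl ends in "/" and requested_uri[i:] starts with "/",
    # so drop one of them to avoid a double slash in the result.
    return public_baseurl[:-1] + requested_uri[i:]

assert canonicalise(
    b"https://foo.bar/whatever/_matrix/client", b"https://public.baseurl/"
) == b"https://public.baseurl/_matrix/client"
```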
1717 from functools import wraps
1818 from typing import TYPE_CHECKING, Optional, Tuple
1919
20 from twisted.web.http import Request
20 from twisted.web.server import Request
2121
2222 from synapse.api.constants import (
2323 MAX_GROUP_CATEGORYID_LENGTH,
5555 content = parse_json_object_from_request(request)
5656 assert_params_in_dict(content, ("messages",))
5757
58 sender_user_id = requester.user.to_string()
59
6058 await self.device_message_handler.send_device_message(
61 sender_user_id, message_type, content["messages"]
59 requester, message_type, content["messages"]
6260 )
6361
6462 response = (200, {}) # type: Tuple[int, dict]
2020
2121 from twisted.internet.interfaces import IConsumer
2222 from twisted.protocols.basic import FileSender
23 from twisted.web.http import Request
23 from twisted.web.server import Request
2424
2525 from synapse.api.errors import Codes, SynapseError, cs_error
2626 from synapse.http.server import finish_request, respond_with_json
4848
4949 def parse_media_id(request: Request) -> Tuple[str, str, Optional[str]]:
5050 try:
51 # The type on postpath seems incorrect in Twisted 21.2.0.
52 postpath = request.postpath # type: List[bytes] # type: ignore
53 assert postpath
54
5155 # This allows users to append e.g. /test.png to the URL. Useful for
5256 # clients that parse the URL to see content type.
53 server_name, media_id = request.postpath[:2]
54
55 if isinstance(server_name, bytes):
56 server_name = server_name.decode("utf-8")
57 media_id = media_id.decode("utf8")
57 server_name_bytes, media_id_bytes = postpath[:2]
58 server_name = server_name_bytes.decode("utf-8")
59 media_id = media_id_bytes.decode("utf8")
5860
5961 file_name = None
60 if len(request.postpath) > 2:
62 if len(postpath) > 2:
6163 try:
62 file_name = urllib.parse.unquote(request.postpath[-1].decode("utf-8"))
64 file_name = urllib.parse.unquote(postpath[-1].decode("utf-8"))
6365 except UnicodeDecodeError:
6466 pass
6567 return server_name, media_id, file_name
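For reference, a minimal sketch of what the reworked `parse_media_id` does with a path such as `/matrix.org/abcdef/test.png` (the values here are made up; `postpath` is the list of URL segments Twisted hands to the resource):

```python
import urllib.parse

postpath = [b"matrix.org", b"abcdef", b"test.png"]  # hypothetical request.postpath

server_name = postpath[0].decode("utf-8")
media_id = postpath[1].decode("utf8")
file_name = None
if len(postpath) > 2:
    file_name = urllib.parse.unquote(postpath[-1].decode("utf-8"))

assert (server_name, media_id, file_name) == ("matrix.org", "abcdef", "test.png")
```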
1616
1717 from typing import TYPE_CHECKING
1818
19 from twisted.web.http import Request
19 from twisted.web.server import Request
2020
2121 from synapse.http.server import DirectServeJsonResource, respond_with_json
2222
1515 import logging
1616 from typing import TYPE_CHECKING
1717
18 from twisted.web.http import Request
18 from twisted.web.server import Request
1919
2020 from synapse.http.server import DirectServeJsonResource, set_cors_headers
2121 from synapse.http.servlet import parse_boolean
2121
2222 import twisted.internet.error
2323 import twisted.web.http
24 from twisted.web.http import Request
2524 from twisted.web.resource import Resource
25 from twisted.web.server import Request
2626
2727 from synapse.api.errors import (
2828 FederationDeniedError,
508508 t_height: int,
509509 t_method: str,
510510 t_type: str,
511 url_cache: str,
511 url_cache: Optional[str],
512512 ) -> Optional[str]:
513513 input_path = await self.media_storage.ensure_media_is_in_local_cache(
514514 FileInfo(None, media_id, url_cache=url_cache)
243243 await consumer.wait()
244244 return local_path
245245
246 raise Exception("file could not be found")
246 raise NotFoundError()
247247
248248 def _file_info_to_path(self, file_info: FileInfo) -> str:
249249 """Converts file_info into a relative path.
2828 import attr
2929
3030 from twisted.internet.error import DNSLookupError
31 from twisted.web.http import Request
31 from twisted.web.server import Request
3232
3333 from synapse.api.errors import Codes, SynapseError
3434 from synapse.http.client import SimpleHttpClient
148148 treq_args={"browser_like_redirects": True},
149149 ip_whitelist=hs.config.url_preview_ip_range_whitelist,
150150 ip_blacklist=hs.config.url_preview_ip_range_blacklist,
151 http_proxy=os.getenvb(b"http_proxy"),
152 https_proxy=os.getenvb(b"HTTPS_PROXY"),
151 use_proxy=True,
153152 )
154153 self.media_repo = media_repo
155154 self.primary_base_path = media_repo.primary_base_path
1717 import logging
1818 from typing import TYPE_CHECKING, Any, Dict, List, Optional
1919
20 from twisted.web.http import Request
20 from twisted.web.server import Request
2121
2222 from synapse.api.errors import SynapseError
2323 from synapse.http.server import DirectServeJsonResource, set_cors_headers
112112 method,
113113 m_type,
114114 thumbnail_infos,
115 media_id,
115116 media_id,
116117 url_cache=media_info["url_cache"],
117118 server_name=None,
268269 method,
269270 m_type,
270271 thumbnail_infos,
272 media_id,
271273 media_info["filesystem_id"],
272274 url_cache=None,
273275 server_name=server_name,
281283 desired_method: str,
282284 desired_type: str,
283285 thumbnail_infos: List[Dict[str, Any]],
286 media_id: str,
284287 file_id: str,
285288 url_cache: Optional[str] = None,
286289 server_name: Optional[str] = None,
316319 return
317320
318321 responder = await self.media_storage.fetch_media(file_info)
322 if responder:
323 await respond_with_responder(
324 request,
325 responder,
326 file_info.thumbnail_type,
327 file_info.thumbnail_length,
328 )
329 return
330
331 # If we can't find the thumbnail we regenerate it. This can happen
332 # if e.g. we've deleted the thumbnails but still have the original
333 # image somewhere.
334 #
335 # Since we have an entry for the thumbnail in the DB we a) know we
336 # have successfully generated the thumbnail in the past (so we
337 # don't need to worry about repeatedly failing to generate
338 # thumbnails), and b) have already calculated that appropriate
339 # width/height/method so we can just call the "generate exact"
340 # methods.
341
342 # First let's check that we do actually have the original image
343 # still. This will throw a 404 if we don't.
344 # TODO: We should refetch the thumbnails for remote media.
345 await self.media_storage.ensure_media_is_in_local_cache(
346 FileInfo(server_name, file_id, url_cache=url_cache)
347 )
348
349 if server_name:
350 await self.media_repo.generate_remote_exact_thumbnail(
351 server_name,
352 file_id=file_id,
353 media_id=media_id,
354 t_width=file_info.thumbnail_width,
355 t_height=file_info.thumbnail_height,
356 t_method=file_info.thumbnail_method,
357 t_type=file_info.thumbnail_type,
358 )
359 else:
360 await self.media_repo.generate_local_exact_thumbnail(
361 media_id=media_id,
362 t_width=file_info.thumbnail_width,
363 t_height=file_info.thumbnail_height,
364 t_method=file_info.thumbnail_method,
365 t_type=file_info.thumbnail_type,
366 url_cache=url_cache,
367 )
368
369 responder = await self.media_storage.fetch_media(file_info)
319370 await respond_with_responder(
320 request, responder, file_info.thumbnail_type, file_info.thumbnail_length
371 request,
372 responder,
373 file_info.thumbnail_type,
374 file_info.thumbnail_length,
321375 )
322376 else:
323377 logger.info("Failed to find any generated thumbnails")
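The regeneration branch above reduces to a fetch / regenerate / re-fetch pattern. A condensed sketch, with the collaborators passed in explicitly so it stands alone (the remote-media branch and error handling are elided):

```python
async def respond_or_regenerate(
    media_storage, media_repo, respond_with_responder,
    request, file_info, media_id, url_cache,
):
    responder = await media_storage.fetch_media(file_info)
    if responder is None:
        # The DB row for this thumbnail proves it was generated successfully
        # before, so retrying the exact same width/height/method is safe.
        await media_repo.generate_local_exact_thumbnail(
            media_id=media_id,
            t_width=file_info.thumbnail_width,
            t_height=file_info.thumbnail_height,
            t_method=file_info.thumbnail_method,
            t_type=file_info.thumbnail_type,
            url_cache=url_cache,
        )
        responder = await media_storage.fetch_media(file_info)
    await respond_with_responder(
        request, responder, file_info.thumbnail_type, file_info.thumbnail_length
    )
```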
1414 # limitations under the License.
1515
1616 import logging
17 from typing import TYPE_CHECKING
17 from typing import IO, TYPE_CHECKING
1818
19 from twisted.web.http import Request
19 from twisted.web.server import Request
2020
2121 from synapse.api.errors import Codes, SynapseError
2222 from synapse.http.server import DirectServeJsonResource, respond_with_json
7878 headers = request.requestHeaders
7979
8080 if headers.hasHeader(b"Content-Type"):
81 media_type = headers.getRawHeaders(b"Content-Type")[0].decode("ascii")
81 content_type_headers = headers.getRawHeaders(b"Content-Type")
82 assert content_type_headers # for mypy
83 media_type = content_type_headers[0].decode("ascii")
8284 else:
8385 raise SynapseError(msg="Upload request missing 'Content-Type'", code=400)
8486
8789 # TODO(markjh): parse content-dispostion
8890
8991 try:
92 content = request.content # type: IO # type: ignore
9093 content_uri = await self.media_repo.create_content(
91 media_type, upload_name, request.content, content_length, requester.user
94 media_type, upload_name, content, content_length, requester.user
9295 )
9396 except SpamMediaException:
9497 # For uploading of media we want to respond with a 400, instead of
1414 import logging
1515 from typing import TYPE_CHECKING
1616
17 from twisted.web.http import Request
17 from twisted.web.server import Request
1818
1919 from synapse.api.errors import SynapseError
2020 from synapse.handlers.sso import get_username_mapping_session_cookie_from_request
1414 import logging
1515 from typing import TYPE_CHECKING, Tuple
1616
17 from twisted.web.http import Request
17 from twisted.web.server import Request
1818
1919 from synapse.api.errors import ThreepidValidationError
2020 from synapse.config.emailconfig import ThreepidBehaviour
1515 import logging
1616 from typing import TYPE_CHECKING, List
1717
18 from twisted.web.http import Request
1918 from twisted.web.resource import Resource
19 from twisted.web.server import Request
2020
2121 from synapse.api.errors import SynapseError
2222 from synapse.handlers.sso import get_username_mapping_session_cookie_from_request
1515 import logging
1616 from typing import TYPE_CHECKING
1717
18 from twisted.web.http import Request
18 from twisted.web.server import Request
1919
2020 from synapse.api.errors import SynapseError
2121 from synapse.handlers.sso import get_username_mapping_session_cookie_from_request
2323 import abc
2424 import functools
2525 import logging
26 import os
2726 from typing import (
2827 TYPE_CHECKING,
2928 Any,
3837
3938 import twisted.internet.base
4039 import twisted.internet.tcp
40 from twisted.internet import defer
4141 from twisted.mail.smtp import sendmail
4242 from twisted.web.iweb import IPolicyForHTTPS
4343
247247 self.start_time = None # type: Optional[int]
248248
249249 self._instance_id = random_string(5)
250 self._instance_name = config.worker_name or "master"
250 self._instance_name = config.worker.instance_name
251251
252252 self.version_string = version_string
253253
369369 """
370370 An HTTP client that uses configured HTTP(S) proxies.
371371 """
372 return SimpleHttpClient(
373 self,
374 http_proxy=os.getenvb(b"http_proxy"),
375 https_proxy=os.getenvb(b"HTTPS_PROXY"),
376 )
372 return SimpleHttpClient(self, use_proxy=True)
377373
378374 @cache_in_self
379375 def get_proxied_blacklisted_http_client(self) -> SimpleHttpClient:
385381 self,
386382 ip_whitelist=self.config.ip_range_whitelist,
387383 ip_blacklist=self.config.ip_range_blacklist,
388 http_proxy=os.getenvb(b"http_proxy"),
389 https_proxy=os.getenvb(b"HTTPS_PROXY"),
384 use_proxy=True,
390385 )
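Both hunks above collapse per-call-site environment reads into a single `use_proxy=True` flag, so proxy discovery now lives inside the HTTP client rather than at every constructor call. A rough before/after sketch (the proxy value is a placeholder):

```python
import os

# Before: each call site read the proxy settings itself.
http_proxy = os.getenvb(b"http_proxy")    # e.g. b"proxy.example:8888"
https_proxy = os.getenvb(b"HTTPS_PROXY")

# After: call sites just opt in, and the client resolves the proxy
# environment (including any no-proxy exclusions) internally:
#     SimpleHttpClient(hs, use_proxy=True)
```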
391386
392387 @cache_in_self
408403 return RoomShutdownHandler(self)
409404
410405 @cache_in_self
411 def get_sendmail(self) -> sendmail:
406 def get_sendmail(self) -> Callable[..., defer.Deferred]:
412407 return sendmail
413408
414409 @cache_in_self
757752 reconnect=True,
758753 )
759754
760 async def remove_pusher(self, app_id: str, push_key: str, user_id: str):
761 return await self.get_pusherpool().remove_pusher(app_id, push_key, user_id)
762
763755 def should_send_federation(self) -> bool:
764756 "Should this server be sending federation traffic directly?"
765 return self.config.send_federation and (
766 not self.config.worker_app
767 or self.config.worker_app == "synapse.app.federation_sender"
768 )
757 return self.config.send_federation
4848 from synapse.storage.background_updates import BackgroundUpdater
4949 from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine
5050 from synapse.storage.types import Connection, Cursor
51 from synapse.storage.util.sequence import build_sequence_generator
5251 from synapse.types import Collection
5352
5453 # python 3 does not have a maximum int value
380379 _TXN_ID = 0
381380
382381 def __init__(
383 self, hs, database_config: DatabaseConnectionConfig, engine: BaseDatabaseEngine
382 self,
383 hs,
384 database_config: DatabaseConnectionConfig,
385 engine: BaseDatabaseEngine,
384386 ):
385387 self.hs = hs
386388 self._clock = hs.get_clock()
418420 "upsert_safety_check",
419421 self._check_safe_to_upsert,
420422 )
421
422 # We define this sequence here so that it can be referenced from both
423 # the DataStore and PersistEventStore.
424 def get_chain_id_txn(txn):
425 txn.execute("SELECT COALESCE(max(chain_id), 0) FROM event_auth_chains")
426 return txn.fetchone()[0]
427
428 self.event_chain_id_gen = build_sequence_generator(
429 engine, get_chain_id_txn, "event_auth_chain_id"
430 )
431423
432424 def is_running(self) -> bool:
433425 """Is the database pool currently running"""
7878 # If we're on a process that can persist events also
7979 # instantiate a `PersistEventsStore`
8080 if hs.get_instance_name() in hs.config.worker.writers.events:
81 persist_events = PersistEventsStore(hs, database, main)
81 persist_events = PersistEventsStore(hs, database, main, db_conn)
8282
8383 if "state" in database_config.databases:
8484 logger.info(
1515 # limitations under the License.
1616
1717 import logging
18 from typing import Any, Dict, List, Optional, Tuple
18 from typing import List, Optional, Tuple
1919
2020 from synapse.api.constants import PresenceState
2121 from synapse.config.homeserver import HomeServerConfig
2626 MultiWriterIdGenerator,
2727 StreamIdGenerator,
2828 )
29 from synapse.types import get_domain_from_id
29 from synapse.types import JsonDict, get_domain_from_id
3030 from synapse.util.caches.stream_change_cache import StreamChangeCache
3131
3232 from .account_data import AccountDataStore
263263
264264 return [UserPresenceState(**row) for row in rows]
265265
266 async def get_users(self) -> List[Dict[str, Any]]:
266 async def get_users(self) -> List[JsonDict]:
267267 """Function to retrieve a list of users in users table.
268268
269269 Returns:
291291 name: Optional[str] = None,
292292 guests: bool = True,
293293 deactivated: bool = False,
294 ) -> Tuple[List[Dict[str, Any]], int]:
294 ) -> Tuple[List[JsonDict], int]:
295295 """Function to retrieve a paginated list of users from
296296 the users list. This will return a JSON list of users and the
297297 total number of users matching the filter criteria.
352352 "get_users_paginate_txn", get_users_paginate_txn
353353 )
354354
355 async def search_users(self, term: str) -> Optional[List[Dict[str, Any]]]:
355 async def search_users(self, term: str) -> Optional[List[JsonDict]]:
356356 """Function to search users list for one or more users with
357357 the matched term.
358358
4141 from synapse.storage._base import db_to_json, make_in_list_sql_clause
4242 from synapse.storage.database import DatabasePool, LoggingTransaction
4343 from synapse.storage.databases.main.search import SearchEntry
44 from synapse.storage.types import Connection
4445 from synapse.storage.util.id_generators import MultiWriterIdGenerator
46 from synapse.storage.util.sequence import SequenceGenerator
4547 from synapse.types import StateMap, get_domain_from_id
4648 from synapse.util import json_encoder
4749 from synapse.util.iterutils import batch_iter, sorted_topologically
8991 """
9092
9193 def __init__(
92 self, hs: "HomeServer", db: DatabasePool, main_data_store: "DataStore"
94 self,
95 hs: "HomeServer",
96 db: DatabasePool,
97 main_data_store: "DataStore",
98 db_conn: Connection,
9399 ):
94100 self.hs = hs
95101 self.db_pool = db
473479 self._add_chain_cover_index(
474480 txn,
475481 self.db_pool,
482 self.store.event_chain_id_gen,
476483 event_to_room_id,
477484 event_to_types,
478485 event_to_auth_chain,
483490 cls,
484491 txn,
485492 db_pool: DatabasePool,
493 event_chain_id_gen: SequenceGenerator,
486494 event_to_room_id: Dict[str, str],
487495 event_to_types: Dict[str, Tuple[str, str]],
488496 event_to_auth_chain: Dict[str, List[str]],
629637 new_chain_tuples = cls._allocate_chain_ids(
630638 txn,
631639 db_pool,
640 event_chain_id_gen,
632641 event_to_room_id,
633642 event_to_types,
634643 event_to_auth_chain,
767776 def _allocate_chain_ids(
768777 txn,
769778 db_pool: DatabasePool,
779 event_chain_id_gen: SequenceGenerator,
770780 event_to_room_id: Dict[str, str],
771781 event_to_types: Dict[str, Tuple[str, str]],
772782 event_to_auth_chain: Dict[str, List[str]],
879889 chain_to_max_seq_no[new_chain_tuple[0]] = new_chain_tuple[1]
880890
881891 # Generate new chain IDs for all unallocated chain IDs.
882 newly_allocated_chain_ids = db_pool.event_chain_id_gen.get_next_mult_txn(
892 newly_allocated_chain_ids = event_chain_id_gen.get_next_mult_txn(
883893 txn, len(unallocated_chain_ids)
884894 )
885895
695695 )
696696
697697 if not has_event_auth:
698 for auth_id in event.auth_event_ids():
698 # Old, dodgy, events may have duplicate auth events, which we
699 # need to deduplicate as we have a unique constraint.
700 for auth_id in set(event.auth_event_ids()):
699701 auth_events.append(
700702 {
701703 "room_id": event.room_id,
916918 PersistEventsStore._add_chain_cover_index(
917919 txn,
918920 self.db_pool,
921 self.event_chain_id_gen,
919922 event_to_room_id,
920923 event_to_types,
921924 event_to_auth_chain,
4444 from synapse.storage.database import DatabasePool
4545 from synapse.storage.engines import PostgresEngine
4646 from synapse.storage.util.id_generators import MultiWriterIdGenerator, StreamIdGenerator
47 from synapse.storage.util.sequence import build_sequence_generator
4748 from synapse.types import Collection, JsonDict, get_domain_from_id
4849 from synapse.util.caches.descriptors import cached
4950 from synapse.util.caches.lrucache import LruCache
155156 self._event_fetch_list = []
156157 self._event_fetch_ongoing = 0
157158
159 # We define this sequence here so that it can be referenced from both
160 # the DataStore and PersistEventStore.
161 def get_chain_id_txn(txn):
162 txn.execute("SELECT COALESCE(max(chain_id), 0) FROM event_auth_chains")
163 return txn.fetchone()[0]
164
165 self.event_chain_id_gen = build_sequence_generator(
166 db_conn,
167 database.engine,
168 get_chain_id_txn,
169 "event_auth_chain_id",
170 table="event_auth_chains",
171 id_column="chain_id",
172 )
173
158174 def process_replication_rows(self, stream_name, instance_name, token, rows):
159175 if stream_name == EventsStream.NAME:
160176 self._stream_id_gen.advance(instance_name, token)
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15 from enum import Enum
1516 from typing import Any, Dict, Iterable, List, Optional, Tuple
1617
1718 from synapse.storage._base import SQLBaseStore
2021 BG_UPDATE_REMOVE_MEDIA_REPO_INDEX_WITHOUT_METHOD = (
2122 "media_repository_drop_index_wo_method"
2223 )
24
25
26 class MediaSortOrder(Enum):
27 """
28 Enum to define the sorting method used when returning media with
29 get_local_media_by_user_paginate
30 """
31
32 MEDIA_ID = "media_id"
33 UPLOAD_NAME = "upload_name"
34 CREATED_TS = "created_ts"
35 LAST_ACCESS_TS = "last_access_ts"
36 MEDIA_LENGTH = "media_length"
37 MEDIA_TYPE = "media_type"
38 QUARANTINED_BY = "quarantined_by"
39 SAFE_FROM_QUARANTINE = "safe_from_quarantine"
2340
2441
2542 class MediaRepositoryBackgroundUpdateStore(SQLBaseStore):
117134 )
118135
119136 async def get_local_media_by_user_paginate(
120 self, start: int, limit: int, user_id: str
137 self,
138 start: int,
139 limit: int,
140 user_id: str,
141 order_by: str = MediaSortOrder.CREATED_TS.value,
142 direction: str = "f",
121143 ) -> Tuple[List[Dict[str, Any]], int]:
122144 """Get a paginated list of metadata for a local piece of media
123145 which an user_id has uploaded
126148 start: offset in the list
127149 limit: maximum amount of media_ids to retrieve
128150 user_id: fully-qualified user id
151 order_by: the sort order of the returned list
152 direction: sort ascending or descending
129153 Returns:
130154 A paginated list of all metadata of user's media,
131155 plus the total count of all the user's media
132156 """
133157
134158 def get_local_media_by_user_paginate_txn(txn):
159
160 # Set ordering
161 order_by_column = MediaSortOrder(order_by).value
162
163 if direction == "b":
164 order = "DESC"
165 else:
166 order = "ASC"
135167
136168 args = [user_id]
137169 sql = """
154186 "safe_from_quarantine"
155187 FROM local_media_repository
156188 WHERE user_id = ?
157 ORDER BY created_ts DESC, media_id DESC
189 ORDER BY {order_by_column} {order}, media_id ASC
158190 LIMIT ? OFFSET ?
159 """
191 """.format(
192 order_by_column=order_by_column,
193 order=order,
194 )
160195
161196 args += [limit, start]
162197 txn.execute(sql, args)
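Note the injection guard in the txn above: `MediaSortOrder(order_by)` round-trips the caller's string through the enum before it is interpolated into the `ORDER BY` clause, so only known column names can reach the SQL. A toy demonstration with a trimmed-down enum:

```python
from enum import Enum

class MediaSortOrder(Enum):
    MEDIA_ID = "media_id"
    CREATED_TS = "created_ts"

assert MediaSortOrder("created_ts").value == "created_ts"  # safe to interpolate
try:
    MediaSortOrder("1; DROP TABLE local_media_repository")
except ValueError:
    pass  # unknown values never reach the SQL string
```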
343378 thumbnail_method,
344379 thumbnail_length,
345380 ):
346 await self.db_pool.simple_insert(
347 "local_media_repository_thumbnails",
348 {
381 await self.db_pool.simple_upsert(
382 table="local_media_repository_thumbnails",
383 keyvalues={
349384 "media_id": media_id,
350385 "thumbnail_width": thumbnail_width,
351386 "thumbnail_height": thumbnail_height,
352387 "thumbnail_method": thumbnail_method,
353388 "thumbnail_type": thumbnail_type,
354 "thumbnail_length": thumbnail_length,
355389 },
390 values={"thumbnail_length": thumbnail_length},
356391 desc="store_local_thumbnail",
357392 )
358393
497532 thumbnail_method,
498533 thumbnail_length,
499534 ):
500 await self.db_pool.simple_insert(
501 "remote_media_cache_thumbnails",
502 {
535 await self.db_pool.simple_upsert(
536 table="remote_media_cache_thumbnails",
537 keyvalues={
503538 "media_origin": origin,
504539 "media_id": media_id,
505540 "thumbnail_width": thumbnail_width,
506541 "thumbnail_height": thumbnail_height,
507542 "thumbnail_method": thumbnail_method,
508543 "thumbnail_type": thumbnail_type,
509 "thumbnail_length": thumbnail_length,
510 "filesystem_id": filesystem_id,
511544 },
545 values={"thumbnail_length": thumbnail_length},
546 insertion_values={"filesystem_id": filesystem_id},
512547 desc="store_remote_media_thumbnail",
513548 )
514549
2727 async def purge_history(
2828 self, room_id: str, token: str, delete_local_events: bool
2929 ) -> Set[int]:
30 """Deletes room history before a certain point
30 """Deletes room history before a certain point.
31
32 Note that only a single purge can occur at once; this is guaranteed via
33 a higher level (in the PaginationHandler).
3134
3235 Args:
3336 room_id:
5154 delete_local_events,
5255 )
5356
54 def _purge_history_txn(self, txn, room_id, token, delete_local_events):
57 def _purge_history_txn(
58 self, txn, room_id: str, token: RoomStreamToken, delete_local_events: bool
59 ) -> Set[int]:
5560 # Tables that should be pruned:
5661 # event_auth
5762 # event_backward_extremities
102107 if max_depth < token.topological:
103108 # We need to ensure we don't delete all the events from the database
104109 # otherwise we wouldn't be able to send any events (due to not
105 # having any backwards extremeties)
110 # having any backwards extremities)
106111 raise SynapseError(
107112 400, "topological_ordering is greater than forward extremities"
108113 )
153158
154159 logger.info("[purge] Finding new backward extremities")
155160
156 # We calculate the new entries for the backward extremeties by finding
161 # We calculate the new entries for the backward extremities by finding
157162 # events to be purged that are pointed to by events we're not going to
158163 # purge.
159164 txn.execute(
295300 "purge_room", self._purge_room_txn, room_id
296301 )
297302
298 def _purge_room_txn(self, txn, room_id):
303 def _purge_room_txn(self, txn, room_id: str) -> List[int]:
299304 # First we fetch all the state groups that should be deleted, before
300305 # we delete that information.
301306 txn.execute(
308313 )
309314
310315 state_groups = [row[0] for row in txn]
316
317 # Get all the auth chains that are referenced by events that are to be
318 # deleted.
319 txn.execute(
320 """
321 SELECT chain_id, sequence_number FROM events
322 LEFT JOIN event_auth_chains USING (event_id)
323 WHERE room_id = ?
324 """,
325 (room_id,),
326 )
327 referenced_chain_id_tuples = list(txn)
328
329 logger.info("[purge] removing events from event_auth_chain_links")
330 txn.executemany(
331 """
332 DELETE FROM event_auth_chain_links WHERE
333 (origin_chain_id = ? AND origin_sequence_number = ?) OR
334 (target_chain_id = ? AND target_sequence_number = ?)
335 """,
336 (
337 (chain_id, seq_num, chain_id, seq_num)
338 for (chain_id, seq_num) in referenced_chain_id_tuples
339 ),
340 )
311341
312342 # Now we delete tables which lack an index on room_id but have one on event_id
313343 for table in (
318348 "event_reference_hashes",
319349 "event_relations",
320350 "event_to_state_groups",
351 "event_auth_chains",
352 "event_auth_chain_to_calculate",
321353 "redactions",
322354 "rejections",
323355 "state_events",
3636 super().__init__(database, db_conn, hs)
3737 self._pushers_id_gen = StreamIdGenerator(
3838 db_conn, "pushers", "id", extra_tables=[("deleted_pushers", "stream_id")]
39 )
40
41 self.db_pool.updates.register_background_update_handler(
42 "remove_deactivated_pushers",
43 self._remove_deactivated_pushers,
44 )
45
46 self.db_pool.updates.register_background_update_handler(
47 "remove_stale_pushers",
48 self._remove_stale_pushers,
3949 )
4050
4151 def _decode_pushers_rows(self, rows: Iterable[dict]) -> Iterator[PusherConfig]:
282292 desc="set_throttle_params",
283293 lock=False,
284294 )
295
296 async def _remove_deactivated_pushers(self, progress: dict, batch_size: int) -> int:
297 """A background update that deletes all pushers for deactivated users.
298
299 Note that we don't proactively tell the pusherpool that we've deleted
300 these (just because it's a bit of a faff to do from here), but they will
301 get cleaned up at the next restart.
302 """
303
304 last_user = progress.get("last_user", "")
305
306 def _delete_pushers(txn) -> int:
307
308 sql = """
309 SELECT name FROM users
310 WHERE deactivated = ? and name > ?
311 ORDER BY name ASC
312 LIMIT ?
313 """
314
315 txn.execute(sql, (1, last_user, batch_size))
316 users = [row[0] for row in txn]
317
318 self.db_pool.simple_delete_many_txn(
319 txn,
320 table="pushers",
321 column="user_name",
322 iterable=users,
323 keyvalues={},
324 )
325
326 if users:
327 self.db_pool.updates._background_update_progress_txn(
328 txn, "remove_deactivated_pushers", {"last_user": users[-1]}
329 )
330
331 return len(users)
332
333 number_deleted = await self.db_pool.runInteraction(
334 "_remove_deactivated_pushers", _delete_pushers
335 )
336
337 if number_deleted < batch_size:
338 await self.db_pool.updates._end_background_update(
339 "remove_deactivated_pushers"
340 )
341
342 return number_deleted
343
344 async def _remove_stale_pushers(self, progress: dict, batch_size: int) -> int:
345 """A background update that deletes all pushers for logged out devices.
346
347 Note that we don't proactively tell the pusherpool that we've deleted
348 these (just because it's a bit of a faff to do from here), but they will
349 get cleaned up at the next restart.
350 """
351
352 last_pusher = progress.get("last_pusher", 0)
353
354 def _delete_pushers(txn) -> int:
355
356 sql = """
357 SELECT p.id, access_token FROM pushers AS p
358 LEFT JOIN access_tokens AS a ON (p.access_token = a.id)
359 WHERE p.id > ?
360 ORDER BY p.id ASC
361 LIMIT ?
362 """
363
364 txn.execute(sql, (last_pusher, batch_size))
365 pushers = [(row[0], row[1]) for row in txn]
366
367 self.db_pool.simple_delete_many_txn(
368 txn,
369 table="pushers",
370 column="id",
371 iterable=(pusher_id for pusher_id, token in pushers if token is None),
372 keyvalues={},
373 )
374
375 if pushers:
376 self.db_pool.updates._background_update_progress_txn(
377 txn, "remove_stale_pushers", {"last_pusher": pushers[-1][0]}
378 )
379
380 return len(pushers)
381
382 number_deleted = await self.db_pool.runInteraction(
383 "_remove_stale_pushers", _delete_pushers
384 )
385
386 if number_deleted < batch_size:
387 await self.db_pool.updates._end_background_update("remove_stale_pushers")
388
389 return number_deleted
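Both updates above follow Synapse's usual background-update shape: process one batch keyed on the last-seen row, checkpoint progress inside the transaction, and end the update once a batch comes up short. Schematically (the update name, table and column are illustrative):

```python
async def _example_update(self, progress: dict, batch_size: int) -> int:
    last_seen = progress.get("last_seen", "")

    def _process(txn) -> int:
        txn.execute(
            "SELECT name FROM users WHERE name > ? ORDER BY name ASC LIMIT ?",
            (last_seen, batch_size),
        )
        rows = [row[0] for row in txn]
        if rows:
            # Checkpoint so a restart resumes from here rather than the start.
            self.db_pool.updates._background_update_progress_txn(
                txn, "example_update", {"last_seen": rows[-1]}
            )
        return len(rows)

    n = await self.db_pool.runInteraction("_example_update", _process)
    if n < batch_size:  # a short batch means we have reached the end
        await self.db_pool.updates._end_background_update("example_update")
    return n
```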
285390
286391
287392 class PusherStore(PusherWorkerStore):
372477 await self.db_pool.runInteraction(
373478 "delete_pusher", delete_pusher_txn, stream_id
374479 )
480
481 async def delete_all_pushers_for_user(self, user_id: str) -> None:
482 """Delete all pushers associated with an account."""
483
484 # We want to generate a row in `deleted_pushers` for each pusher we're
485 # deleting, so we fetch the list now so we can generate the appropriate
486 # number of stream IDs.
487 #
488 # Note: technically there could be a race here between adding/deleting
489 # pushers, but a) the worst case is we don't stop a pusher until the
490 # next restart and b) this is only called when we're deactivating an
491 # account.
492 pushers = list(await self.get_pushers_by_user_id(user_id))
493
494 def delete_pushers_txn(txn, stream_ids):
495 self._invalidate_cache_and_stream( # type: ignore
496 txn, self.get_if_user_has_pusher, (user_id,)
497 )
498
499 self.db_pool.simple_delete_txn(
500 txn,
501 table="pushers",
502 keyvalues={"user_name": user_id},
503 )
504
505 self.db_pool.simple_insert_many_txn(
506 txn,
507 table="deleted_pushers",
508 values=[
509 {
510 "stream_id": stream_id,
511 "app_id": pusher.app_id,
512 "pushkey": pusher.pushkey,
513 "user_id": user_id,
514 }
515 for stream_id, pusher in zip(stream_ids, pushers)
516 ],
517 )
518
519 async with self._pushers_id_gen.get_next_mult(len(pushers)) as stream_ids:
520 await self.db_pool.runInteraction(
521 "delete_all_pushers_for_user", delete_pushers_txn, stream_ids
522 )
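The detail worth noting above is the ID allocation: stream IDs are reserved in bulk and paired off with the pushers being deleted, so every `deleted_pushers` row gets its own replication-stream position. In miniature (placeholder data):

```python
pushers = [("app1", "key1"), ("app2", "key2")]  # placeholder (app_id, pushkey)
stream_ids = [101, 102]                         # as if from get_next_mult(2)

rows = [
    {"stream_id": sid, "app_id": app_id, "pushkey": pushkey}
    for sid, (app_id, pushkey) in zip(stream_ids, pushers)
]
assert rows[0] == {"stream_id": 101, "app_id": "app1", "pushkey": "key1"}
```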
2222 from synapse.api.constants import UserTypes
2323 from synapse.api.errors import Codes, StoreError, SynapseError, ThreepidValidationError
2424 from synapse.metrics.background_process_metrics import wrap_as_background_process
25 from synapse.storage.database import DatabasePool
25 from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
2626 from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
2727 from synapse.storage.databases.main.stats import StatsStore
2828 from synapse.storage.types import Connection, Cursor
6969
7070
7171 class RegistrationWorkerStore(CacheInvalidationWorkerStore):
72 def __init__(self, database: DatabasePool, db_conn: Connection, hs: "HomeServer"):
72 def __init__(
73 self,
74 database: DatabasePool,
75 db_conn: LoggingDatabaseConnection,
76 hs: "HomeServer",
77 ):
7378 super().__init__(database, db_conn, hs)
7479
7580 self.config = hs.config
7883 # call `find_max_generated_user_id_localpart` each time, which is
7984 # expensive if there are many entries.
8085 self._user_id_seq = build_sequence_generator(
86 db_conn,
8187 database.engine,
8288 find_max_generated_user_id_localpart,
8389 "user_id_seq",
90 table=None,
91 id_column=None,
8492 )
8593
8694 self._account_validity = hs.config.account_validity
10351043
10361044
10371045 class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
1038 def __init__(self, database: DatabasePool, db_conn: Connection, hs: "HomeServer"):
1046 def __init__(
1047 self,
1048 database: DatabasePool,
1049 db_conn: LoggingDatabaseConnection,
1050 hs: "HomeServer",
1051 ):
10391052 super().__init__(database, db_conn, hs)
10401053
10411054 self._clock = hs.get_clock()
+0
-17
synapse/storage/databases/main/schema/delta/58/28rejected_events_metadata.sql
0 /* Copyright 2020 The Matrix.org Foundation C.I.C
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
16 (5828, 'rejected_events_metadata', '{}');
0 /* Copyright 2021 The Matrix.org Foundation C.I.C
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15
16 -- We may not have deleted all pushers for deactivated accounts, so we set up a
17 -- background job to delete them.
18 INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
19 (5908, 'remove_deactivated_pushers', '{}');
0 /* Copyright 2021 The Matrix.org Foundation C.I.C
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15
16 -- Delete all pushers associated with deleted devices. This is to clear up after
17 -- a bug where they weren't correctly deleted when using workers.
18 INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
19 (5908, 'remove_stale_pushers', '{}');
0 /* Copyright 2020 The Matrix.org Foundation C.I.C
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 -- This originally was in 58/, but landed after 59/ was created, and so some
16 -- servers running develop didn't run this delta. Running it again should be
17 -- safe.
18 --
19 -- We first delete any in progress `rejected_events_metadata` background update,
20 -- to ensure that we don't conflict when trying to insert the new one. (We could
21 -- alternatively do an ON CONFLICT DO NOTHING, but that syntax isn't supported
22 -- by older SQLite versions. Plus, this should be a rare case).
23 DELETE FROM background_updates WHERE update_name = 'rejected_events_metadata';
24 INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
25 (5828, 'rejected_events_metadata', '{}');
496496 async def add_users_in_public_rooms(
497497 self, room_id: str, user_ids: Iterable[str]
498498 ) -> None:
499 """Insert entries into the users_who_share_private_rooms table. The first
500 user should be a local user.
499 """Insert entries into the users_in_public_rooms table.
501500
502501 Args:
503502 room_id
555554 def __init__(self, database: DatabasePool, db_conn, hs):
556555 super().__init__(database, db_conn, hs)
557556
557 self._prefer_local_users_in_search = (
558 hs.config.user_directory_search_prefer_local_users
559 )
560 self._server_name = hs.config.server_name
561
558562 async def remove_from_user_dir(self, user_id: str) -> None:
559563 def _remove_from_user_dir_txn(txn):
560564 self.db_pool.simple_delete_txn(
664668 users.update(rows)
665669 return list(users)
666670
667 @cached()
668671 async def get_shared_rooms_for_users(
669672 self, user_id: str, other_user_id: str
670673 ) -> Set[str]:
753756 )
754757 """
755758
759 # We allow manipulating the ranking algorithm by injecting statements
760 # based on config options.
761 additional_ordering_statements = []
762 ordering_arguments = ()
763
756764 if isinstance(self.database_engine, PostgresEngine):
757765 full_query, exact_query, prefix_query = _parse_query_postgres(search_term)
766
767 # If enabled, this config option will rank local users higher than those on
768 # remote instances.
769 if self._prefer_local_users_in_search:
770 # This statement checks whether a given user's user ID contains a server name
771 # that matches the local server
772 statement = "* (CASE WHEN user_id LIKE ? THEN 2.0 ELSE 1.0 END)"
773 additional_ordering_statements.append(statement)
774
775 ordering_arguments += ("%:" + self._server_name,)
758776
759777 # We order by rank and then if they have profile info
760778 # The ranking algorithm is hand tweaked for "best" results. Broadly
766784 FROM user_directory_search as t
767785 INNER JOIN user_directory AS d USING (user_id)
768786 WHERE
769 %s
787 %(where_clause)s
770788 AND vector @@ to_tsquery('simple', ?)
771789 ORDER BY
772790 (CASE WHEN d.user_id IS NOT NULL THEN 4.0 ELSE 1.0 END)
786804 8
787805 )
788806 )
807 %(order_case_statements)s
789808 DESC,
790809 display_name IS NULL,
791810 avatar_url IS NULL
792811 LIMIT ?
793 """ % (
794 where_clause,
795 )
796 args = join_args + (full_query, exact_query, prefix_query, limit + 1)
812 """ % {
813 "where_clause": where_clause,
814 "order_case_statements": " ".join(additional_ordering_statements),
815 }
816 args = (
817 join_args
818 + (full_query, exact_query, prefix_query)
819 + ordering_arguments
820 + (limit + 1,)
821 )
797822 elif isinstance(self.database_engine, Sqlite3Engine):
798823 search_query = _parse_query_sqlite(search_term)
824
825 # If enabled, this config option will rank local users higher than those on
826 # remote instances.
827 if self._prefer_local_users_in_search:
828 # This statement checks whether a given user's user ID contains a server name
829 # that matches the local server
830 #
831 # Note that we need to include a comma at the end for valid SQL
832 statement = "user_id LIKE ? DESC,"
833 additional_ordering_statements.append(statement)
834
835 ordering_arguments += ("%:" + self._server_name,)
799836
800837 sql = """
801838 SELECT d.user_id AS user_id, display_name, avatar_url
802839 FROM user_directory_search as t
803840 INNER JOIN user_directory AS d USING (user_id)
804841 WHERE
805 %s
842 %(where_clause)s
806843 AND value MATCH ?
807844 ORDER BY
808845 rank(matchinfo(user_directory_search)) DESC,
846 %(order_statements)s
809847 display_name IS NULL,
810848 avatar_url IS NULL
811849 LIMIT ?
812 """ % (
813 where_clause,
814 )
815 args = join_args + (search_query, limit + 1)
850 """ % {
851 "where_clause": where_clause,
852 "order_statements": " ".join(additional_ordering_statements),
853 }
854 args = join_args + (search_query,) + ordering_arguments + (limit + 1,)
816855 else:
817856 # This should be unreachable.
818857 raise Exception("Unrecognized database engine")
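The effect of the injected ranking clause is easiest to see on the LIKE pattern itself: local user IDs end in `:<server_name>`, so they match `%:<server_name>` and get the boost, while remote IDs do not. A toy check (the server name is a placeholder for `hs.config.server_name`):

```python
server_name = "example.com"  # placeholder for the local server name

def boosted(user_id: str) -> bool:
    # SQL "LIKE '%:example.com'" is "ends with ':example.com'".
    return user_id.endswith(":" + server_name)

assert boosted("@alice:example.com")      # local user: rank multiplied by 2.0
assert not boosted("@bob:remote.org")     # remote user: rank unchanged
```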
9696 return txn.fetchone()[0]
9797
9898 self._state_group_seq_gen = build_sequence_generator(
99 self.database_engine, get_max_state_group_txn, "state_group_id_seq"
100 )
101 self._state_group_seq_gen.check_consistency(
102 db_conn, table="state_groups", id_column="id"
99 db_conn,
100 self.database_engine,
101 get_max_state_group_txn,
102 "state_group_id_seq",
103 table="state_groups",
104 id_column="id",
103105 )
104106
105107 @cached(max_entries=10000, iterable=True)
7272 Returns:
7373 The set of state groups that can be deleted.
7474 """
75 # Graph of state group -> previous group
76 graph = {}
77
7875 # Set of state groups that we have found to be referenced by events
7976 referenced_groups = set()
8077
110107 next_to_search |= prevs
111108 state_groups_seen |= prevs
112109
113 graph.update(edges)
114
115110 to_delete = state_groups_seen - referenced_groups
116111
117112 return to_delete
2424 )
2525
2626 GetRoomsForUserWithStreamOrdering = namedtuple(
27 "_GetRoomsForUserWithStreamOrdering", ("room_id", "event_pos")
27 "GetRoomsForUserWithStreamOrdering", ("room_id", "event_pos")
2828 )
2929
3030
250250
251251
252252 def build_sequence_generator(
253 db_conn: "LoggingDatabaseConnection",
253254 database_engine: BaseDatabaseEngine,
254255 get_first_callback: GetFirstCallbackType,
255256 sequence_name: str,
257 table: Optional[str],
258 id_column: Optional[str],
259 stream_name: Optional[str] = None,
260 positive: bool = True,
256261 ) -> SequenceGenerator:
257262 """Get the best impl of SequenceGenerator available
258263
264269 get_first_callback: a callback which gets the next sequence ID. Used if
265270 we're on sqlite.
266271 sequence_name: the name of a postgres sequence to use.
272 table, id_column, stream_name, positive: If set then `check_consistency`
273 is called on the created sequence. See docstring for
274 `check_consistency` details.
267275 """
268276 if isinstance(database_engine, PostgresEngine):
269 return PostgresSequenceGenerator(sequence_name)
277 seq = PostgresSequenceGenerator(sequence_name) # type: SequenceGenerator
270278 else:
271 return LocalSequenceGenerator(get_first_callback)
279 seq = LocalSequenceGenerator(get_first_callback)
280
281 if table:
282 assert id_column
283 seq.check_consistency(
284 db_conn=db_conn,
285 table=table,
286 id_column=id_column,
287 stream_name=stream_name,
288 positive=positive,
289 )
290
291 return seq
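After this change a single call both builds the generator and, when `table`/`id_column` are supplied, checks it for consistency, which is exactly how the state-group store hunk earlier in this diff now invokes it. A sketch of the call shape (names as in that hunk):

```python
self._state_group_seq_gen = build_sequence_generator(
    db_conn,
    self.database_engine,
    get_max_state_group_txn,
    "state_group_id_seq",
    table="state_groups",  # triggers seq.check_consistency(...) internally
    id_column="id",
)
```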
2929
3030 from synapse.config import find_config_files
3131
32 SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
32 SYNAPSE = [sys.executable, "-m", "synapse.app.homeserver"]
3333
3434 GREEN = "\x1b[1;32m"
3535 YELLOW = "\x1b[1;33m"
116116
117117 args = [
118118 sys.executable,
119 "-B",
120119 "-m",
121120 app,
122121 "-c",
520520 )
521521 self.assertEqual(expected_state.state, PresenceState.ONLINE)
522522 self.federation_sender.send_presence_to_destinations.assert_called_once_with(
523 destinations=["server2"], states=[expected_state]
523 destinations=["server2"], states={expected_state}
524524 )
525525
526526 #
532532
533533 self.federation_sender.send_presence.assert_not_called()
534534 self.federation_sender.send_presence_to_destinations.assert_called_once_with(
535 destinations=["server3"], states=[expected_state]
535 destinations=["server3"], states={expected_state}
536536 )
537537
538538 def test_remote_gets_presence_when_local_user_joins(self):
583583 self.presence_handler.current_state_for_user("@test2:server")
584584 )
585585 self.assertEqual(expected_state.state, PresenceState.ONLINE)
586 self.federation_sender.send_presence_to_destinations.assert_called_once_with(
587 destinations={"server2", "server3"}, states=[expected_state]
586 self.assertEqual(
587 self.federation_sender.send_presence_to_destinations.call_count, 2
588 )
589 self.federation_sender.send_presence_to_destinations.assert_any_call(
590 destinations=["server3"], states={expected_state}
591 )
592 self.federation_sender.send_presence_to_destinations.assert_any_call(
593 destinations=["server2"], states={expected_state}
588594 )
589595
590596 def _add_new_user(self, room_id, user_id):
160160
161161 response = self.get_success(
162162 self.query_handlers["profile"](
163 {"user_id": "@caroline:test", "field": "displayname"}
163 {
164 "user_id": "@caroline:test",
165 "field": "displayname",
166 "origin": "servername.tld",
167 }
164168 )
165169 )
166170
1717
1818 import synapse.rest.admin
1919 from synapse.api.constants import EventTypes, RoomEncryptionAlgorithms, UserTypes
20 from synapse.api.room_versions import RoomVersion, RoomVersions
2021 from synapse.rest.client.v1 import login, room
2122 from synapse.rest.client.v2_alpha import user_directory
2223 from synapse.storage.roommember import ProfileInfo
4546 def prepare(self, reactor, clock, hs):
4647 self.store = hs.get_datastore()
4748 self.handler = hs.get_user_directory_handler()
49 self.event_builder_factory = self.hs.get_event_builder_factory()
50 self.event_creation_handler = self.hs.get_event_creation_handler()
4851
4952 def test_handle_local_profile_change_with_support_user(self):
5053 support_user_id = "@support:test"
546549 s = self.get_success(self.handler.search_users(u1, u4, 10))
547550 self.assertEqual(len(s["results"]), 1)
548551
552 @override_config(
553 {
554 "user_directory": {
555 "enabled": True,
556 "search_all_users": True,
557 "prefer_local_users": True,
558 }
559 }
560 )
561 def test_prefer_local_users(self):
562 """Tests that local users are shown higher in search results when
563 user_directory.prefer_local_users is True.
564 """
565 # Create a room and a few users to test the directory with
566 searching_user = self.register_user("searcher", "password")
567 searching_user_tok = self.login("searcher", "password")
568
569 room_id = self.helper.create_room_as(
570 searching_user,
571 room_version=RoomVersions.V1.identifier,
572 tok=searching_user_tok,
573 )
574
575 # Create a few local users and join them to the room
576 local_user_1 = self.register_user("user_xxxxx", "password")
577 local_user_2 = self.register_user("user_bbbbb", "password")
578 local_user_3 = self.register_user("user_zzzzz", "password")
579
580 self._add_user_to_room(room_id, RoomVersions.V1, local_user_1)
581 self._add_user_to_room(room_id, RoomVersions.V1, local_user_2)
582 self._add_user_to_room(room_id, RoomVersions.V1, local_user_3)
583
584 # Create a few "remote" users and join them to the room
585 remote_user_1 = "@user_aaaaa:remote_server"
586 remote_user_2 = "@user_yyyyy:remote_server"
587 remote_user_3 = "@user_ccccc:remote_server"
588 self._add_user_to_room(room_id, RoomVersions.V1, remote_user_1)
589 self._add_user_to_room(room_id, RoomVersions.V1, remote_user_2)
590 self._add_user_to_room(room_id, RoomVersions.V1, remote_user_3)
591
592 local_users = [local_user_1, local_user_2, local_user_3]
593 remote_users = [remote_user_1, remote_user_2, remote_user_3]
594
595 # Populate the user directory via background update
596 self._add_background_updates()
597 while not self.get_success(
598 self.store.db_pool.updates.has_completed_background_updates()
599 ):
600 self.get_success(
601 self.store.db_pool.updates.do_next_background_update(100), by=0.1
602 )
603
604 # The local searching user searches for the term "user", which other users have
605 # in their user id
606 results = self.get_success(
607 self.handler.search_users(searching_user, "user", 20)
608 )["results"]
609 received_user_id_ordering = [result["user_id"] for result in results]
610
611 # Typically we'd expect Synapse to return users in lexicographical order,
612 # assuming they have similar User IDs/display names, and profile information.
613
614 # Check that the order of returned results with this option enabled is as
615 # we expect, i.e. our local users show up first, despite all users having
616 # user IDs.
617 [self.assertIn(user, local_users) for user in received_user_id_ordering[:3]]
618 [self.assertIn(user, remote_users) for user in received_user_id_ordering[3:]]
619
620 def _add_user_to_room(
621 self,
622 room_id: str,
623 room_version: RoomVersion,
624 user_id: str,
625 ):
626 # Add a user to the room.
627 builder = self.event_builder_factory.for_room_version(
628 room_version,
629 {
630 "type": "m.room.member",
631 "sender": user_id,
632 "state_key": user_id,
633 "room_id": room_id,
634 "content": {"membership": "join"},
635 },
636 )
637
638 event, context = self.get_success(
639 self.event_creation_handler.create_new_client_event(builder)
640 )
641
642 self.get_success(
643 self.hs.get_storage().persistence.persist_event(event, context)
644 )
645
549646
550647 class TestUserDirSearchDisabled(unittest.HomeserverTestCase):
551648 user_id = "@test:test"
2525
2626
2727 class ReadBodyWithMaxSizeTests(TestCase):
28 def setUp(self):
28 def _build_response(self, length=UNKNOWN_LENGTH):
2929 """Start reading the body, returns the response, result and proto"""
30 response = Mock(length=UNKNOWN_LENGTH)
31 self.result = BytesIO()
32 self.deferred = read_body_with_max_size(response, self.result, 6)
30 response = Mock(length=length)
31 result = BytesIO()
32 deferred = read_body_with_max_size(response, result, 6)
3333
3434 # Fish the protocol out of the response.
35 self.protocol = response.deliverBody.call_args[0][0]
36 self.protocol.transport = Mock()
35 protocol = response.deliverBody.call_args[0][0]
36 protocol.transport = Mock()
3737
38 def _cleanup_error(self):
38 return result, deferred, protocol
39
40 def _assert_error(self, deferred, protocol):
41 """Ensure that the expected error is received."""
42 self.assertIsInstance(deferred.result, Failure)
43 self.assertIsInstance(deferred.result.value, BodyExceededMaxSize)
44 protocol.transport.abortConnection.assert_called_once()
45
46 def _cleanup_error(self, deferred):
3947 """Ensure that the error in the Deferred is handled gracefully."""
4048 called = [False]
4149
4250 def errback(f):
4351 called[0] = True
4452
45 self.deferred.addErrback(errback)
53 deferred.addErrback(errback)
4654 self.assertTrue(called[0])
4755
4856 def test_no_error(self):
4957 """A response that is NOT too large."""
58 result, deferred, protocol = self._build_response()
5059
5160 # Start sending data.
52 self.protocol.dataReceived(b"12345")
61 protocol.dataReceived(b"12345")
5362 # Close the connection.
54 self.protocol.connectionLost(Failure(ResponseDone()))
63 protocol.connectionLost(Failure(ResponseDone()))
5564
56 self.assertEqual(self.result.getvalue(), b"12345")
57 self.assertEqual(self.deferred.result, 5)
65 self.assertEqual(result.getvalue(), b"12345")
66 self.assertEqual(deferred.result, 5)
5867
5968 def test_too_large(self):
6069 """A response which is too large raises an exception."""
70 result, deferred, protocol = self._build_response()
6171
6272 # Start sending data.
63 self.protocol.dataReceived(b"1234567890")
64 # Close the connection.
65 self.protocol.connectionLost(Failure(ResponseDone()))
73 protocol.dataReceived(b"1234567890")
6674
67 self.assertEqual(self.result.getvalue(), b"1234567890")
68 self.assertIsInstance(self.deferred.result, Failure)
69 self.assertIsInstance(self.deferred.result.value, BodyExceededMaxSize)
70 self._cleanup_error()
75 self.assertEqual(result.getvalue(), b"1234567890")
76 self._assert_error(deferred, protocol)
77 self._cleanup_error(deferred)
7178
7279 def test_multiple_packets(self):
73 """Data should be accummulated through mutliple packets."""
80 """Data should be accumulated through mutliple packets."""
81 result, deferred, protocol = self._build_response()
7482
7583 # Start sending data.
76 self.protocol.dataReceived(b"12")
77 self.protocol.dataReceived(b"34")
84 protocol.dataReceived(b"12")
85 protocol.dataReceived(b"34")
7886 # Close the connection.
79 self.protocol.connectionLost(Failure(ResponseDone()))
87 protocol.connectionLost(Failure(ResponseDone()))
8088
81 self.assertEqual(self.result.getvalue(), b"1234")
82 self.assertEqual(self.deferred.result, 4)
89 self.assertEqual(result.getvalue(), b"1234")
90 self.assertEqual(deferred.result, 4)
8391
8492 def test_additional_data(self):
8593 """A connection can receive data after being closed."""
94 result, deferred, protocol = self._build_response()
8695
8796 # Start sending data.
88 self.protocol.dataReceived(b"1234567890")
89 self.assertIsInstance(self.deferred.result, Failure)
90 self.assertIsInstance(self.deferred.result.value, BodyExceededMaxSize)
91 self.protocol.transport.abortConnection.assert_called_once()
97 protocol.dataReceived(b"1234567890")
98 self._assert_error(deferred, protocol)
9299
93100 # More data might have come in.
94 self.protocol.dataReceived(b"1234567890")
95 # Close the connection.
96 self.protocol.connectionLost(Failure(ResponseDone()))
101 protocol.dataReceived(b"1234567890")
97102
98 self.assertEqual(self.result.getvalue(), b"1234567890")
99 self.assertIsInstance(self.deferred.result, Failure)
100 self.assertIsInstance(self.deferred.result.value, BodyExceededMaxSize)
101 self._cleanup_error()
103 self.assertEqual(result.getvalue(), b"1234567890")
104 self._assert_error(deferred, protocol)
105 self._cleanup_error(deferred)
106
107 def test_content_length(self):
108 """The body shouldn't be read (at all) if the Content-Length header is too large."""
109 result, deferred, protocol = self._build_response(length=10)
110
111 # Deferred shouldn't be called yet.
112 self.assertFalse(deferred.called)
113
114 # Start sending data.
115 protocol.dataReceived(b"12345")
116 self._assert_error(deferred, protocol)
117 self._cleanup_error(deferred)
118
119 # The data is never consumed.
120 self.assertEqual(result.getvalue(), b"")
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15 import os
16 from unittest.mock import patch
1517
1618 import treq
1719 from netaddr import IPSet
99101
100102 return http_protocol
101103
102 def test_http_request(self):
103 agent = ProxyAgent(self.reactor)
104
105 self.reactor.lookups["test.com"] = "1.2.3.4"
106 d = agent.request(b"GET", b"http://test.com")
104 def _test_request_direct_connection(self, agent, scheme, hostname, path):
105 """Runs a test case for a direct connection not going through a proxy.
106
107 Args:
108 agent (ProxyAgent): the proxy agent being tested
109
110 scheme (bytes): expected to be either "http" or "https"
111
112 hostname (bytes): the hostname to connect to in the test
113
114 path (bytes): the path to connect to in the test
115 """
116 is_https = scheme == b"https"
117
118 self.reactor.lookups[hostname.decode()] = "1.2.3.4"
119 d = agent.request(b"GET", scheme + b"://" + hostname + b"/" + path)
107120
108121 # there should be a pending TCP connection
109122 clients = self.reactor.tcpClients
110123 self.assertEqual(len(clients), 1)
111124 (host, port, client_factory, _timeout, _bindAddress) = clients[0]
112125 self.assertEqual(host, "1.2.3.4")
113 self.assertEqual(port, 80)
114
115 # make a test server, and wire up the client
116 http_server = self._make_connection(
117 client_factory, _get_test_protocol_factory()
118 )
119
120 # the FakeTransport is async, so we need to pump the reactor
121 self.reactor.advance(0)
122
123 # now there should be a pending request
124 self.assertEqual(len(http_server.requests), 1)
125
126 request = http_server.requests[0]
127 self.assertEqual(request.method, b"GET")
128 self.assertEqual(request.path, b"/")
129 self.assertEqual(request.requestHeaders.getRawHeaders(b"host"), [b"test.com"])
130 request.write(b"result")
131 request.finish()
132
133 self.reactor.advance(0)
134
135 resp = self.successResultOf(d)
136 body = self.successResultOf(treq.content(resp))
137 self.assertEqual(body, b"result")
138
139 def test_https_request(self):
140 agent = ProxyAgent(self.reactor, contextFactory=get_test_https_policy())
141
142 self.reactor.lookups["test.com"] = "1.2.3.4"
143 d = agent.request(b"GET", b"https://test.com/abc")
144
145 # there should be a pending TCP connection
146 clients = self.reactor.tcpClients
147 self.assertEqual(len(clients), 1)
148 (host, port, client_factory, _timeout, _bindAddress) = clients[0]
149 self.assertEqual(host, "1.2.3.4")
150 self.assertEqual(port, 443)
126 self.assertEqual(port, 443 if is_https else 80)
151127
152128 # make a test server, and wire up the client
153129 http_server = self._make_connection(
154130 client_factory,
155131 _get_test_protocol_factory(),
156 ssl=True,
157 expected_sni=b"test.com",
132 ssl=is_https,
133 expected_sni=hostname if is_https else None,
158134 )
159135
160136 # the FakeTransport is async, so we need to pump the reactor
165141
166142 request = http_server.requests[0]
167143 self.assertEqual(request.method, b"GET")
168 self.assertEqual(request.path, b"/abc")
169 self.assertEqual(request.requestHeaders.getRawHeaders(b"host"), [b"test.com"])
144 self.assertEqual(request.path, b"/" + path)
145 self.assertEqual(request.requestHeaders.getRawHeaders(b"host"), [hostname])
170146 request.write(b"result")
171147 request.finish()
172148
176152 body = self.successResultOf(treq.content(resp))
177153 self.assertEqual(body, b"result")
178154
155 def test_http_request(self):
156 agent = ProxyAgent(self.reactor)
157 self._test_request_direct_connection(agent, b"http", b"test.com", b"")
158
159 def test_https_request(self):
160 agent = ProxyAgent(self.reactor, contextFactory=get_test_https_policy())
161 self._test_request_direct_connection(agent, b"https", b"test.com", b"abc")
162
163 def test_http_request_use_proxy_empty_environment(self):
164 agent = ProxyAgent(self.reactor, use_proxy=True)
165 self._test_request_direct_connection(agent, b"http", b"test.com", b"")
166
167 @patch.dict(os.environ, {"http_proxy": "proxy.com:8888", "NO_PROXY": "test.com"})
168 def test_http_request_via_uppercase_no_proxy(self):
169 agent = ProxyAgent(self.reactor, use_proxy=True)
170 self._test_request_direct_connection(agent, b"http", b"test.com", b"")
171
172 @patch.dict(
173 os.environ, {"http_proxy": "proxy.com:8888", "no_proxy": "test.com,unused.com"}
174 )
175 def test_http_request_via_no_proxy(self):
176 agent = ProxyAgent(self.reactor, use_proxy=True)
177 self._test_request_direct_connection(agent, b"http", b"test.com", b"")
178
179 @patch.dict(
180 os.environ, {"https_proxy": "proxy.com", "no_proxy": "test.com,unused.com"}
181 )
182 def test_https_request_via_no_proxy(self):
183 agent = ProxyAgent(
184 self.reactor,
185 contextFactory=get_test_https_policy(),
186 use_proxy=True,
187 )
188 self._test_request_direct_connection(agent, b"https", b"test.com", b"abc")
189
190 @patch.dict(os.environ, {"http_proxy": "proxy.com:8888", "no_proxy": "*"})
191 def test_http_request_via_no_proxy_star(self):
192 agent = ProxyAgent(self.reactor, use_proxy=True)
193 self._test_request_direct_connection(agent, b"http", b"test.com", b"")
194
195 @patch.dict(os.environ, {"https_proxy": "proxy.com", "no_proxy": "*"})
196 def test_https_request_via_no_proxy_star(self):
197 agent = ProxyAgent(
198 self.reactor,
199 contextFactory=get_test_https_policy(),
200 use_proxy=True,
201 )
202 self._test_request_direct_connection(agent, b"https", b"test.com", b"abc")
203
204 @patch.dict(os.environ, {"http_proxy": "proxy.com:8888", "no_proxy": "unused.com"})
179205 def test_http_request_via_proxy(self):
180 agent = ProxyAgent(self.reactor, http_proxy=b"proxy.com:8888")
206 agent = ProxyAgent(self.reactor, use_proxy=True)
181207
182208 self.reactor.lookups["proxy.com"] = "1.2.3.5"
183209 d = agent.request(b"GET", b"http://test.com")
213239 body = self.successResultOf(treq.content(resp))
214240 self.assertEqual(body, b"result")
215241
242 @patch.dict(os.environ, {"https_proxy": "proxy.com", "no_proxy": "unused.com"})
216243 def test_https_request_via_proxy(self):
217244 agent = ProxyAgent(
218245 self.reactor,
219246 contextFactory=get_test_https_policy(),
220 https_proxy=b"proxy.com",
247 use_proxy=True,
221248 )
222249
223250 self.reactor.lookups["proxy.com"] = "1.2.3.5"
293320 body = self.successResultOf(treq.content(resp))
294321 self.assertEqual(body, b"result")
295322
323 @patch.dict(os.environ, {"http_proxy": "proxy.com:8888"})
296324 def test_http_request_via_proxy_with_blacklist(self):
297325 # The blacklist includes the configured proxy IP.
298326 agent = ProxyAgent(
300328 self.reactor, ip_whitelist=None, ip_blacklist=IPSet(["1.0.0.0/8"])
301329 ),
302330 self.reactor,
303 http_proxy=b"proxy.com:8888",
331 use_proxy=True,
304332 )
305333
306334 self.reactor.lookups["proxy.com"] = "1.2.3.5"
337365 body = self.successResultOf(treq.content(resp))
338366 self.assertEqual(body, b"result")
339367
340 def test_https_request_via_proxy_with_blacklist(self):
368 @patch.dict(os.environ, {"HTTPS_PROXY": "proxy.com"})
369 def test_https_request_via_uppercase_proxy_with_blacklist(self):
341370 # The blacklist includes the configured proxy IP.
342371 agent = ProxyAgent(
343372 BlacklistingReactorWrapper(
345374 ),
346375 self.reactor,
347376 contextFactory=get_test_https_policy(),
348 https_proxy=b"proxy.com",
377 use_proxy=True,
349378 )
350379
351380 self.reactor.lookups["proxy.com"] = "1.2.3.5"
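
The `@patch.dict` cases above exercise proxying driven purely by environment variables (`use_proxy=True`), in both lowercase and uppercase forms, with `no_proxy` (including `*`) bypassing the proxy. A rough sketch of one way such a lookup can be resolved; the lowercase-wins precedence here is an assumption for illustration, not a copy of Synapse's resolution logic:

    import os
    from typing import Optional

    def getenv_proxy(name: str) -> Optional[str]:
        # Assumption: prefer the lowercase form when both are set.
        return os.environ.get(name.lower()) or os.environ.get(name.upper())

    def should_bypass_proxy(host: str) -> bool:
        """True if no_proxy/NO_PROXY says to connect to `host` directly."""
        no_proxy = getenv_proxy("no_proxy")
        if not no_proxy:
            return False
        entries = [e.strip() for e in no_proxy.split(",") if e.strip()]
        return "*" in entries or host in entries

Under the environment patched in `test_http_request_via_no_proxy`, `should_bypass_proxy("test.com")` returns True while `should_bypass_proxy("other.com")` returns False, so only the latter would be routed via proxy.com:8888.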
2020 from twisted.internet.defer import Deferred
2121
2222 import synapse.rest.admin
23 from synapse.api.errors import Codes, SynapseError
2324 from synapse.rest.client.v1 import login, room
2425
2526 from tests.unittest import HomeserverTestCase
99100 user_tuple = self.get_success(
100101 self.hs.get_datastore().get_user_by_access_token(self.access_token)
101102 )
102 token_id = user_tuple.token_id
103 self.token_id = user_tuple.token_id
104
105 # We need to add email to account before we can create a pusher.
106 self.get_success(
107 hs.get_datastore().user_add_threepid(
108 self.user_id, "email", "a@example.com", 0, 0
109 )
110 )
103111
104112 self.pusher = self.get_success(
105113 self.hs.get_pusherpool().add_pusher(
106114 user_id=self.user_id,
107 access_token=token_id,
115 access_token=self.token_id,
108116 kind="email",
109117 app_id="m.email",
110118 app_display_name="Email Notifications",
115123 )
116124 )
117125
126 def test_need_validated_email(self):
127 """Test that we can only add an email pusher if the user has validated
128 their email.
129 """
130 with self.assertRaises(SynapseError) as cm:
131 self.get_success_or_raise(
132 self.hs.get_pusherpool().add_pusher(
133 user_id=self.user_id,
134 access_token=self.token_id,
135 kind="email",
136 app_id="m.email",
137 app_display_name="Email Notifications",
138 device_display_name="b@example.com",
139 pushkey="b@example.com",
140 lang=None,
141 data={},
142 )
143 )
144
145 self.assertEqual(400, cm.exception.code)
146 self.assertEqual(Codes.THREEPID_NOT_FOUND, cm.exception.errcode)
147
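    # The new test depends on the threepid added in prepare(): a@example.com is
    # registered, so the b@example.com pusher must be rejected with a 400 and
    # Codes.THREEPID_NOT_FOUND. A hedged sketch of that validation step follows;
    # the names are illustrative, not Synapse's internals.
    #
    #     from typing import Iterable, Mapping
    #
    #     class ThreepidNotFound(Exception):
    #         """Stand-in for SynapseError(400, ..., Codes.THREEPID_NOT_FOUND)."""
    #
    #     def check_email_pusher_allowed(
    #         threepids: Iterable[Mapping[str, str]], pushkey: str
    #     ) -> None:
    #         """Reject an email pusher whose pushkey isn't a validated threepid."""
    #         if not any(
    #             t.get("medium") == "email" and t.get("address") == pushkey
    #             for t in threepids
    #         ):
    #             raise ThreepidNotFound(pushkey)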
118148 def test_simple_sends_email(self):
119149 # Create a simple room with two users
120150 room = self.helper.create_room_as(self.user_id, tok=self.access_token)
2323 # enable federation sending on the worker
2424 config = super()._get_worker_hs_config()
2525 # TODO: make it so we don't need both of these
26 config["send_federation"] = True
26 config["send_federation"] = False
2727 config["worker_app"] = "synapse.app.federation_sender"
2828 return config
2929
2626 def default_config(self) -> dict:
2727 config = super().default_config()
2828 config["worker_app"] = "synapse.app.federation_sender"
29 config["send_federation"] = True
29 config["send_federation"] = False
3030 return config
3131
3232 def make_homeserver(self, reactor, clock):
4848
4949 self.make_worker_hs(
5050 "synapse.app.federation_sender",
51 {"send_federation": True},
51 {"send_federation": False},
5252 federation_http_client=mock_client,
5353 )
5454
9494
9595 self.make_worker_hs(
9696 "synapse.app.pusher",
97 {"start_pushers": True},
97 {"start_pushers": False},
9898 proxied_blacklisted_http_client=http_client_mock,
9999 )
100100
1717 import json
1818 import urllib.parse
1919 from binascii import unhexlify
20 from typing import Optional
20 from typing import List, Optional
2121
2222 from mock import Mock
2323
3030 from synapse.types import JsonDict
3131
3232 from tests import unittest
33 from tests.server import FakeSite, make_request
3334 from tests.test_utils import make_awaitable
3435 from tests.unittest import override_config
3536
19531954 ]
19541955
19551956 def prepare(self, reactor, clock, hs):
1957 self.store = hs.get_datastore()
19561958 self.media_repo = hs.get_media_repository_resource()
19571959
19581960 self.admin_user = self.register_user("admin", "pass", admin=True)
20232025
20242026 number_media = 20
20252027 other_user_tok = self.login("user", "pass")
2026 self._create_media(other_user_tok, number_media)
2028 self._create_media_for_user(other_user_tok, number_media)
20272029
20282030 channel = self.make_request(
20292031 "GET",
20442046
20452047 number_media = 20
20462048 other_user_tok = self.login("user", "pass")
2047 self._create_media(other_user_tok, number_media)
2049 self._create_media_for_user(other_user_tok, number_media)
20482050
20492051 channel = self.make_request(
20502052 "GET",
20652067
20662068 number_media = 20
20672069 other_user_tok = self.login("user", "pass")
2068 self._create_media(other_user_tok, number_media)
2070 self._create_media_for_user(other_user_tok, number_media)
20692071
20702072 channel = self.make_request(
20712073 "GET",
20792081 self.assertEqual(len(channel.json_body["media"]), 10)
20802082 self._check_fields(channel.json_body["media"])
20812083
2082 def test_limit_is_negative(self):
2083 """
2084 Testing that a negative limit parameter returns a 400
2085 """
2086
2084 def test_invalid_parameter(self):
2085 """
2086 If parameters are invalid, an error is returned.
2087 """
2088 # unknown order_by
2089 channel = self.make_request(
2090 "GET",
2091 self.url + "?order_by=bar",
2092 access_token=self.admin_user_tok,
2093 )
2094
2095 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
2096 self.assertEqual(Codes.UNKNOWN, channel.json_body["errcode"])
2097
2098 # invalid search order
2099 channel = self.make_request(
2100 "GET",
2101 self.url + "?dir=bar",
2102 access_token=self.admin_user_tok,
2103 )
2104
2105 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
2106 self.assertEqual(Codes.UNKNOWN, channel.json_body["errcode"])
2107
2108 # negative limit
20872109 channel = self.make_request(
20882110 "GET",
20892111 self.url + "?limit=-5",
20932115 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
20942116 self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
20952117
2096 def test_from_is_negative(self):
2097 """
2098 Testing that a negative from parameter returns a 400
2099 """
2100
2118 # negative from
21012119 channel = self.make_request(
21022120 "GET",
21032121 self.url + "?from=-5",
21142132
21152133 number_media = 20
21162134 other_user_tok = self.login("user", "pass")
2117 self._create_media(other_user_tok, number_media)
2135 self._create_media_for_user(other_user_tok, number_media)
21182136
21192137 # `next_token` does not appear
21202138 # Number of results is the number of entries
21922210
21932211 number_media = 5
21942212 other_user_tok = self.login("user", "pass")
2195 self._create_media(other_user_tok, number_media)
2213 self._create_media_for_user(other_user_tok, number_media)
21962214
21972215 channel = self.make_request(
21982216 "GET",
22062224 self.assertNotIn("next_token", channel.json_body)
22072225 self._check_fields(channel.json_body["media"])
22082226
2209 def _create_media(self, user_token, number_media):
2227 def test_order_by(self):
2228 """
2229 Testing ordering of the list with the parameter `order_by`
2230 """
2231
2232 other_user_tok = self.login("user", "pass")
2233
2234 # Resolution: 1×1, MIME type: image/png, Extension: png, Size: 67 B
2235 image_data1 = unhexlify(
2236 b"89504e470d0a1a0a0000000d4948445200000001000000010806"
2237 b"0000001f15c4890000000a49444154789c63000100000500010d"
2238 b"0a2db40000000049454e44ae426082"
2239 )
2240 # Resolution: 1×1, MIME type: image/gif, Extension: gif, Size: 35 B
2241 image_data2 = unhexlify(
2242 b"47494638376101000100800100000000"
2243 b"ffffff2c00000000010001000002024c"
2244 b"01003b"
2245 )
2246 # Resolution: 1×1, MIME type: image/bmp, Extension: bmp, Size: 54 B
2247 image_data3 = unhexlify(
2248 b"424d3a0000000000000036000000280000000100000001000000"
2249 b"0100180000000000040000000000000000000000000000000000"
2250 b"0000"
2251 )
2252
2253 # create media and make sure they do not have the same timestamp
2254 media1 = self._create_media_and_access(other_user_tok, image_data1, "image.png")
2255 self.pump(1.0)
2256 media2 = self._create_media_and_access(other_user_tok, image_data2, "image.gif")
2257 self.pump(1.0)
2258 media3 = self._create_media_and_access(other_user_tok, image_data3, "image.bmp")
2259 self.pump(1.0)
2260
2261 # Mark one media as safe from quarantine.
2262 self.get_success(self.store.mark_local_media_as_safe(media2))
2263 # Quarantine one media
2264 self.get_success(
2265 self.store.quarantine_media_by_id("test", media3, self.admin_user)
2266 )
2267
2268 # order by default ("created_ts")
2269 # default is backwards
2270 self._order_test([media3, media2, media1], None)
2271 self._order_test([media1, media2, media3], None, "f")
2272 self._order_test([media3, media2, media1], None, "b")
2273
2274 # sort by media_id
2275 sorted_media = sorted([media1, media2, media3], reverse=False)
2276 sorted_media_reverse = sorted(sorted_media, reverse=True)
2277
2278 # order by media_id
2279 self._order_test(sorted_media, "media_id")
2280 self._order_test(sorted_media, "media_id", "f")
2281 self._order_test(sorted_media_reverse, "media_id", "b")
2282
2283 # order by upload_name
2284 self._order_test([media3, media2, media1], "upload_name")
2285 self._order_test([media3, media2, media1], "upload_name", "f")
2286 self._order_test([media1, media2, media3], "upload_name", "b")
2287
2288 # order by media_type
2289 # result is ordered by media_id
2290 # because the uploaded media_type is always 'application/json'
2291 self._order_test(sorted_media, "media_type")
2292 self._order_test(sorted_media, "media_type", "f")
2293 self._order_test(sorted_media, "media_type", "b")
2294
2295 # order by media_length
2296 self._order_test([media2, media3, media1], "media_length")
2297 self._order_test([media2, media3, media1], "media_length", "f")
2298 self._order_test([media1, media3, media2], "media_length", "b")
2299
2300 # order by created_ts
2301 self._order_test([media1, media2, media3], "created_ts")
2302 self._order_test([media1, media2, media3], "created_ts", "f")
2303 self._order_test([media3, media2, media1], "created_ts", "b")
2304
2305 # order by last_access_ts
2306 self._order_test([media1, media2, media3], "last_access_ts")
2307 self._order_test([media1, media2, media3], "last_access_ts", "f")
2308 self._order_test([media3, media2, media1], "last_access_ts", "b")
2309
2310 # order by quarantined_by
2311 # one media is in quarantine, others are ordered by media_ids
2312
2313 # Different sort order in SQLite and PostgreSQL
2314 # If a media is not in quarantine, `quarantined_by` is NULL.
2315 # SQLite considers NULL to be smaller than any other value.
2316 # PostgreSQL considers NULL to be larger than any other value.
2317
2318 # self._order_test(sorted([media1, media2]) + [media3], "quarantined_by")
2319 # self._order_test(sorted([media1, media2]) + [media3], "quarantined_by", "f")
2320 # self._order_test([media3] + sorted([media1, media2]), "quarantined_by", "b")
2321
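        # Should these assertions be re-enabled, one portable option (a
        # suggestion, not what Synapse currently does) is to make NULL placement
        # explicit in the SQL, e.g.:
        #   ORDER BY quarantined_by NULLS FIRST     -- PostgreSQL / SQLite >= 3.30
        #   ORDER BY (quarantined_by IS NULL) DESC, quarantined_by  -- older SQLite
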
2322 # order by safe_from_quarantine
2323 # one media is safe from quarantine, others are ordered by media_ids
2324 self._order_test(sorted([media1, media3]) + [media2], "safe_from_quarantine")
2325 self._order_test(
2326 sorted([media1, media3]) + [media2], "safe_from_quarantine", "f"
2327 )
2328 self._order_test(
2329 [media2] + sorted([media1, media3]), "safe_from_quarantine", "b"
2330 )
2331
2332 def _create_media_for_user(self, user_token: str, number_media: int):
22102333 """
22112334 Create a number of media for a specific user
2212 """
2213 upload_resource = self.media_repo.children[b"upload"]
2335 Args:
2336 user_token: Access token of the user
2337 number_media: Number of media to be created for the user
2338 """
22142339 for i in range(number_media):
22152340 # file size is 67 Byte
22162341 image_data = unhexlify(
22192344 b"0a2db40000000049454e44ae426082"
22202345 )
22212346
2222 # Upload some media into the room
2223 self.helper.upload_media(
2224 upload_resource, image_data, tok=user_token, expect_code=200
2225 )
2226
2227 def _check_fields(self, content):
2228 """Checks that all attributes are present in content"""
2347 self._create_media_and_access(user_token, image_data)
2348
2349 def _create_media_and_access(
2350 self,
2351 user_token: str,
2352 image_data: bytes,
2353 filename: str = "image1.png",
2354 ) -> str:
2355 """
2356 Create one media item for a specific user, access it, and return its `media_id`
2357 Args:
2358 user_token: Access token of the user
2359 image_data: binary data of image
2360 filename: The filename of the media to be uploaded
2361 Returns:
2362 The ID of the newly created media.
2363 """
2364 upload_resource = self.media_repo.children[b"upload"]
2365 download_resource = self.media_repo.children[b"download"]
2366
2367 # Upload some media into the room
2368 response = self.helper.upload_media(
2369 upload_resource, image_data, user_token, filename, expect_code=200
2370 )
2371
2372 # Extract media ID from the response
2373 server_and_media_id = response["content_uri"][6:] # Cut off 'mxc://'
2374 media_id = server_and_media_id.split("/")[1]
2375
2376 # Access the media once so that `last_access_ts` is populated
2377 channel = make_request(
2378 self.reactor,
2379 FakeSite(download_resource),
2380 "GET",
2381 server_and_media_id,
2382 shorthand=False,
2383 access_token=user_token,
2384 )
2385
2386 self.assertEqual(
2387 200,
2388 channel.code,
2389 msg=(
2390 "Expected to receive a 200 on accessing media: %s" % server_and_media_id
2391 ),
2392 )
2393
2394 return media_id
2395
2396 def _check_fields(self, content: JsonDict):
2397 """Checks that the expected user attributes are present in content
2398 Args:
2399 content: List that is checked for content
2400 """
22292401 for m in content:
22302402 self.assertIn("media_id", m)
22312403 self.assertIn("media_type", m)
22352407 self.assertIn("last_access_ts", m)
22362408 self.assertIn("quarantined_by", m)
22372409 self.assertIn("safe_from_quarantine", m)
2410
2411 def _order_test(
2412 self,
2413 expected_media_list: List[str],
2414 order_by: Optional[str],
2415 dir: Optional[str] = None,
2416 ):
2417 """Request the list of media in a certain order. Assert that order is what
2418 we expect
2419 Args:
2420 expected_media_list: The list of media_ids in the order we expect to get
2421 back from the server
2422 order_by: The type of ordering to give the server
2423 dir: The direction of ordering to give the server
2424 """
2425
2426 url = self.url + "?"
2427 if order_by is not None:
2428 url += "order_by=%s&" % (order_by,)
2429 if dir is not None and dir in ("b", "f"):
2430 url += "dir=%s" % (dir,)
2431 channel = self.make_request(
2432 "GET",
2433 url.encode("ascii"),
2434 access_token=self.admin_user_tok,
2435 )
2436 self.assertEqual(200, channel.code, msg=channel.json_body)
2437 self.assertEqual(channel.json_body["total"], len(expected_media_list))
2438
2439 returned_order = [row["media_id"] for row in channel.json_body["media"]]
2440 self.assertEqual(expected_media_list, returned_order)
2441 self._check_fields(channel.json_body["media"])
22382442
22392443
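
For orientation, `_order_test` above drives the admin media-list endpoint with `order_by`/`dir` query parameters. A hedged usage sketch against a running homeserver; the hostname and token are placeholders, and the `requests` dependency is an assumption for illustration, not part of the test suite:

    import requests

    # Placeholders: substitute a real homeserver and admin access token.
    BASE = "https://synapse.example.com"
    USER = "@user:example.com"

    resp = requests.get(
        f"{BASE}/_synapse/admin/v1/users/{USER}/media",
        params={"order_by": "last_access_ts", "dir": "b", "limit": 10},
        headers={"Authorization": "Bearer <admin_access_token>"},
    )
    resp.raise_for_status()
    for item in resp.json()["media"]:
        print(item["media_id"], item["media_length"], item["last_access_ts"])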
22402444 class UserTokenRestTestCase(unittest.HomeserverTestCase):
1414
1515 import time
1616 import urllib.parse
17 from typing import Any, Dict, List, Union
17 from typing import Any, Dict, List, Optional, Union
1818 from urllib.parse import urlencode
1919
2020 from mock import Mock
4646 HAS_JWT = False
4747
4848
49 # public_base_url used in some tests
50 BASE_URL = "https://synapse/"
49 # synapse server name: used to populate public_baseurl in some tests
50 SYNAPSE_SERVER_PUBLIC_HOSTNAME = "synapse"
51
52 # public_baseurl for some tests. It uses an http:// scheme because
53 # FakeChannel.isSecure() returns False, so synapse will see the requested uri as
54 # http://..., so using http in the public_baseurl stops Synapse trying to redirect to
55 # https://....
56 BASE_URL = "http://%s/" % (SYNAPSE_SERVER_PUBLIC_HOSTNAME,)
5157
5258 # CAS server used in some tests
5359 CAS_SERVER = "https://fake.test"
479485 def test_multi_sso_redirect(self):
480486 """/login/sso/redirect should redirect to an identity picker"""
481487 # first hit the redirect url, which should redirect to our idp picker
482 channel = self.make_request(
483 "GET",
484 "/_matrix/client/r0/login/sso/redirect?redirectUrl="
485 + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL),
486 )
488 channel = self._make_sso_redirect_request(False, None)
487489 self.assertEqual(channel.code, 302, channel.result)
488490 uri = channel.headers.getRawHeaders("Location")[0]
489491
519521 shorthand=False,
520522 )
521523 self.assertEqual(channel.code, 302, channel.result)
522 cas_uri = channel.headers.getRawHeaders("Location")[0]
524 location_headers = channel.headers.getRawHeaders("Location")
525 assert location_headers
526 cas_uri = location_headers[0]
523527 cas_uri_path, cas_uri_query = cas_uri.split("?", 1)
524528
525529 # it should redirect us to the login page of the cas server
542546 + "&idp=saml",
543547 )
544548 self.assertEqual(channel.code, 302, channel.result)
545 saml_uri = channel.headers.getRawHeaders("Location")[0]
549 location_headers = channel.headers.getRawHeaders("Location")
550 assert location_headers
551 saml_uri = location_headers[0]
546552 saml_uri_path, saml_uri_query = saml_uri.split("?", 1)
547553
548554 # it should redirect us to the login page of the SAML server
564570 + "&idp=oidc",
565571 )
566572 self.assertEqual(channel.code, 302, channel.result)
567 oidc_uri = channel.headers.getRawHeaders("Location")[0]
573 location_headers = channel.headers.getRawHeaders("Location")
574 assert location_headers
575 oidc_uri = location_headers[0]
568576 oidc_uri_path, oidc_uri_query = oidc_uri.split("?", 1)
569577
570578 # it should redirect us to the auth page of the OIDC server
571579 self.assertEqual(oidc_uri_path, TEST_OIDC_AUTH_ENDPOINT)
572580
573581 # ... and should have set a cookie including the redirect url
574 cookies = dict(
575 h.split(";")[0].split("=", maxsplit=1)
576 for h in channel.headers.getRawHeaders("Set-Cookie")
577 )
582 cookie_headers = channel.headers.getRawHeaders("Set-Cookie")
583 assert cookie_headers
584 cookies = {} # type: Dict[str, str]
585 for h in cookie_headers:
586 key, value = h.split(";")[0].split("=", maxsplit=1)
587 cookies[key] = value
578588
579589 oidc_session_cookie = cookies["oidc_session"]
580590 macaroon = pymacaroons.Macaroon.deserialize(oidc_session_cookie)
587597
588598 # that should serve a confirmation page
589599 self.assertEqual(channel.code, 200, channel.result)
590 self.assertTrue(
591 channel.headers.getRawHeaders("Content-Type")[-1].startswith("text/html")
592 )
600 content_type_headers = channel.headers.getRawHeaders("Content-Type")
601 assert content_type_headers
602 self.assertTrue(content_type_headers[-1].startswith("text/html"))
593603 p = TestHtmlParser()
594604 p.feed(channel.text_body)
595605 p.close()
627637
628638 def test_client_idp_redirect_msc2858_disabled(self):
629639 """If the client tries to pick an IdP but MSC2858 is disabled, return a 400"""
630 channel = self.make_request(
631 "GET",
632 "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/oidc?redirectUrl="
633 + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL),
634 )
640 channel = self._make_sso_redirect_request(True, "oidc")
635641 self.assertEqual(channel.code, 400, channel.result)
636642 self.assertEqual(channel.json_body["errcode"], "M_UNRECOGNIZED")
637643
638644 @override_config({"experimental_features": {"msc2858_enabled": True}})
639645 def test_client_idp_redirect_to_unknown(self):
640646 """If the client tries to pick an unknown IdP, return a 404"""
641 channel = self.make_request(
642 "GET",
643 "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/xxx?redirectUrl="
644 + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL),
645 )
647 channel = self._make_sso_redirect_request(True, "xxx")
646648 self.assertEqual(channel.code, 404, channel.result)
647649 self.assertEqual(channel.json_body["errcode"], "M_NOT_FOUND")
648650
649651 @override_config({"experimental_features": {"msc2858_enabled": True}})
650652 def test_client_idp_redirect_to_oidc(self):
651653 """If the client pick a known IdP, redirect to it"""
652 channel = self.make_request(
653 "GET",
654 "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/oidc?redirectUrl="
655 + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL),
656 )
657
654 channel = self._make_sso_redirect_request(True, "oidc")
658655 self.assertEqual(channel.code, 302, channel.result)
659656 oidc_uri = channel.headers.getRawHeaders("Location")[0]
660657 oidc_uri_path, oidc_uri_query = oidc_uri.split("?", 1)
661658
662659 # it should redirect us to the auth page of the OIDC server
663660 self.assertEqual(oidc_uri_path, TEST_OIDC_AUTH_ENDPOINT)
661
662 def _make_sso_redirect_request(
663 self, unstable_endpoint: bool = False, idp_prov: Optional[str] = None
664 ):
665 """Send a request to /_matrix/client/r0/login/sso/redirect
666
667 ... or the unstable equivalent
668
669 ... possibly specifying an IDP provider
670 """
671 endpoint = (
672 "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect"
673 if unstable_endpoint
674 else "/_matrix/client/r0/login/sso/redirect"
675 )
676 if idp_prov is not None:
677 endpoint += "/" + idp_prov
678 endpoint += "?redirectUrl=" + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL)
679
680 return self.make_request(
681 "GET",
682 endpoint,
683 custom_headers=[("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME)],
684 )
664685
665686 @staticmethod
666687 def _get_value_from_macaroon(macaroon: pymacaroons.Macaroon, key: str) -> str:
792813
793814 self.assertEqual(channel.code, 302)
794815 location_headers = channel.headers.getRawHeaders("Location")
816 assert location_headers
795817 self.assertEqual(location_headers[0][: len(redirect_url)], redirect_url)
796818
797819 @override_config({"sso": {"client_whitelist": ["https://legit-site.com/"]}})
12341256
12351257 # that should redirect to the username picker
12361258 self.assertEqual(channel.code, 302, channel.result)
1237 picker_url = channel.headers.getRawHeaders("Location")[0]
1259 location_headers = channel.headers.getRawHeaders("Location")
1260 assert location_headers
1261 picker_url = location_headers[0]
12381262 self.assertEqual(picker_url, "/_synapse/client/pick_username/account_details")
12391263
12401264 # ... with a username_mapping_session cookie
12771301 )
12781302 self.assertEqual(chan.code, 302, chan.result)
12791303 location_headers = chan.headers.getRawHeaders("Location")
1304 assert location_headers
12801305
12811306 # send a request to the completion page, which should 302 to the client redirectUrl
12821307 chan = self.make_request(
12861311 )
12871312 self.assertEqual(chan.code, 302, chan.result)
12881313 location_headers = chan.headers.getRawHeaders("Location")
1314 assert location_headers
12891315
12901316 # ensure that the returned location matches the requested redirect URL
12911317 path, query = location_headers[0].split("?", 1)
541541 if client_redirect_url:
542542 params["redirectUrl"] = client_redirect_url
543543
544 # hit the redirect url (which will issue a cookie and state)
544 # hit the redirect url (which should redirect back to the redirect url: this
545 # is the easiest way of figuring out what the Host header ought to be set
546 # to in order to keep Synapse happy).
545547 channel = make_request(
546548 self.hs.get_reactor(),
547549 self.site,
548550 "GET",
549551 "/_matrix/client/r0/login/sso/redirect?" + urllib.parse.urlencode(params),
552 )
553 assert channel.code == 302
554
555 # hit the redirect url again with the right Host header, which should now issue
556 # a cookie and redirect to the SSO provider.
557 location = channel.headers.getRawHeaders("Location")[0]
558 parts = urllib.parse.urlsplit(location)
559 channel = make_request(
560 self.hs.get_reactor(),
561 self.site,
562 "GET",
563 urllib.parse.urlunsplit(("", "") + parts[2:]),
564 custom_headers=[
565 ("Host", parts[1]),
566 ],
550567 )
551568
552569 assert channel.code == 302
160160
161161 def default_config(self):
162162 config = super().default_config()
163 config["public_baseurl"] = "https://synapse.test"
163
164 # public_baseurl uses an http:// scheme because FakeChannel.isSecure() returns
165 # False, meaning synapse sees the requested uri as http://...; using http in
166 # the public_baseurl therefore stops Synapse trying to redirect to https.
167 config["public_baseurl"] = "http://synapse.test"
164168
165169 if HAS_OIDC:
166170 # we enable OIDC as a way of testing SSO flows
5353 A room should show up in the shared list of rooms between two users
5454 if it is public.
5555 """
56 u1 = self.register_user("user1", "pass")
57 u1_token = self.login(u1, "pass")
58 u2 = self.register_user("user2", "pass")
59 u2_token = self.login(u2, "pass")
60
61 room = self.helper.create_room_as(u1, is_public=True, tok=u1_token)
62 self.helper.invite(room, src=u1, targ=u2, tok=u1_token)
63 self.helper.join(room, user=u2, tok=u2_token)
64
65 channel = self._get_shared_rooms(u1_token, u2)
66 self.assertEquals(200, channel.code, channel.result)
67 self.assertEquals(len(channel.json_body["joined"]), 1)
68 self.assertEquals(channel.json_body["joined"][0], room)
56 self._check_shared_rooms_with(room_one_is_public=True, room_two_is_public=True)
6957
7058 def test_shared_room_list_private(self):
7159 """
7260 A room should show up in the shared list of rooms between two users
7361 if it is private.
7462 """
75 u1 = self.register_user("user1", "pass")
76 u1_token = self.login(u1, "pass")
77 u2 = self.register_user("user2", "pass")
78 u2_token = self.login(u2, "pass")
79
80 room = self.helper.create_room_as(u1, is_public=False, tok=u1_token)
81 self.helper.invite(room, src=u1, targ=u2, tok=u1_token)
82 self.helper.join(room, user=u2, tok=u2_token)
83
84 channel = self._get_shared_rooms(u1_token, u2)
85 self.assertEquals(200, channel.code, channel.result)
86 self.assertEquals(len(channel.json_body["joined"]), 1)
87 self.assertEquals(channel.json_body["joined"][0], room)
63 self._check_shared_rooms_with(
64 room_one_is_public=False, room_two_is_public=False
65 )
8866
8967 def test_shared_room_list_mixed(self):
9068 """
9169 The shared room list between two users should contain both public and private
9270 rooms.
9371 """
72 self._check_shared_rooms_with(room_one_is_public=True, room_two_is_public=False)
73
74 def _check_shared_rooms_with(
75 self, room_one_is_public: bool, room_two_is_public: bool
76 ):
77 """Checks that shared public or private rooms between two users appear in
78 their shared room lists
79 """
9480 u1 = self.register_user("user1", "pass")
9581 u1_token = self.login(u1, "pass")
9682 u2 = self.register_user("user2", "pass")
9783 u2_token = self.login(u2, "pass")
9884
99 room_public = self.helper.create_room_as(u1, is_public=True, tok=u1_token)
100 room_private = self.helper.create_room_as(u2, is_public=False, tok=u2_token)
101 self.helper.invite(room_public, src=u1, targ=u2, tok=u1_token)
102 self.helper.invite(room_private, src=u2, targ=u1, tok=u2_token)
103 self.helper.join(room_public, user=u2, tok=u2_token)
104 self.helper.join(room_private, user=u1, tok=u1_token)
85 # Create a room. user1 invites user2, who joins
86 room_id_one = self.helper.create_room_as(
87 u1, is_public=room_one_is_public, tok=u1_token
88 )
89 self.helper.invite(room_id_one, src=u1, targ=u2, tok=u1_token)
90 self.helper.join(room_id_one, user=u2, tok=u2_token)
10591
92 # Check shared rooms from user1's perspective.
93 # We should see the one room in common
94 channel = self._get_shared_rooms(u1_token, u2)
95 self.assertEquals(200, channel.code, channel.result)
96 self.assertEquals(len(channel.json_body["joined"]), 1)
97 self.assertEquals(channel.json_body["joined"][0], room_id_one)
98
99 # Create another room and invite user2 to it
100 room_id_two = self.helper.create_room_as(
101 u1, is_public=room_two_is_public, tok=u1_token
102 )
103 self.helper.invite(room_id_two, src=u1, targ=u2, tok=u1_token)
104 self.helper.join(room_id_two, user=u2, tok=u2_token)
105
106 # Check shared rooms again. We should now see both rooms.
106107 channel = self._get_shared_rooms(u1_token, u2)
107108 self.assertEquals(200, channel.code, channel.result)
108109 self.assertEquals(len(channel.json_body["joined"]), 2)
109 self.assertTrue(room_public in channel.json_body["joined"])
110 self.assertTrue(room_private in channel.json_body["joined"])
110 for room_id in channel.json_body["joined"]:
111 self.assertIn(room_id, [room_id_one, room_id_two])
111112
112113 def test_shared_room_list_after_leave(self):
113114 """
131132
132133 self.helper.leave(room, user=u1, tok=u1_token)
133134
135 # Check user1's view of shared rooms with user2
136 channel = self._get_shared_rooms(u1_token, u2)
137 self.assertEquals(200, channel.code, channel.result)
138 self.assertEquals(len(channel.json_body["joined"]), 0)
139
140 # Check user2's view of shared rooms with user1
134141 channel = self._get_shared_rooms(u2_token, u1)
135142 self.assertEquals(200, channel.code, channel.result)
136143 self.assertEquals(len(channel.json_body["joined"]), 0)
230230
231231 def prepare(self, reactor, clock, hs):
232232
233 self.media_repo = hs.get_media_repository_resource()
234 self.download_resource = self.media_repo.children[b"download"]
235 self.thumbnail_resource = self.media_repo.children[b"thumbnail"]
233 media_resource = hs.get_media_repository_resource()
234 self.download_resource = media_resource.children[b"download"]
235 self.thumbnail_resource = media_resource.children[b"thumbnail"]
236 self.store = hs.get_datastore()
237 self.media_repo = hs.get_media_repository()
236238
237239 self.media_id = "example.com/12345"
238240
355357 Override the config to generate only cropped thumbnails, but request a scaled one.
356358 """
357359 self._test_thumbnail("scale", None, False)
360
361 def test_thumbnail_repeated_thumbnail(self):
362 """Test that fetching the same thumbnail works, and deleting the on disk
363 thumbnail regenerates it.
364 """
365 self._test_thumbnail(
366 "scale", self.test_image.expected_scaled, self.test_image.expected_found
367 )
368
369 if not self.test_image.expected_found:
370 return
371
372 # Fetching again should work, without re-requesting the image from the
373 # remote.
374 params = "?width=32&height=32&method=scale"
375 channel = make_request(
376 self.reactor,
377 FakeSite(self.thumbnail_resource),
378 "GET",
379 self.media_id + params,
380 shorthand=False,
381 await_result=False,
382 )
383 self.pump()
384
385 self.assertEqual(channel.code, 200)
386 if self.test_image.expected_scaled:
387 self.assertEqual(
388 channel.result["body"],
389 self.test_image.expected_scaled,
390 channel.result["body"],
391 )
392
393 # Deleting the thumbnail on disk then re-requesting it should work as
394 # Synapse should regenerate missing thumbnails.
395 origin, media_id = self.media_id.split("/")
396 info = self.get_success(self.store.get_cached_remote_media(origin, media_id))
397 file_id = info["filesystem_id"]
398
399 thumbnail_dir = self.media_repo.filepaths.remote_media_thumbnail_dir(
400 origin, file_id
401 )
402 shutil.rmtree(thumbnail_dir, ignore_errors=True)
403
404 channel = make_request(
405 self.reactor,
406 FakeSite(self.thumbnail_resource),
407 "GET",
408 self.media_id + params,
409 shorthand=False,
410 await_result=False,
411 )
412 self.pump()
413
414 self.assertEqual(channel.code, 200)
415 if self.test_image.expected_scaled:
416 self.assertEqual(
417 channel.result["body"],
418 self.test_image.expected_scaled,
419 channel.result["body"],
420 )
358421
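
The repeated-thumbnail test above deletes the on-disk thumbnail directory and expects a second request to succeed anyway. A minimal sketch of that serve-or-regenerate shape, assuming a caller-supplied `regenerate` callback rather than the media repository's real code path:

    import os
    from typing import Callable

    def serve_thumbnail(
        thumb_path: str,
        original_path: str,
        regenerate: Callable[[str, str], None],
    ) -> bytes:
        """Return thumbnail bytes, rebuilding the file first if it is missing."""
        if not os.path.exists(thumb_path):
            # `regenerate` is a hypothetical callback that re-scales the
            # original into thumb_path.
            regenerate(original_path, thumb_path)
        with open(thumb_path, "rb") as f:
            return f.read()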
359422 def _test_thumbnail(self, method, expected_body, expected_found):
360423 params = "?width=32&height=32&method=" + method
123123 return address.IPv4Address("TCP", self._ip, 3423)
124124
125125 def getHost(self):
126 return None
126 # this is called by Request.__init__ to configure Request.host.
127 return address.IPv4Address("TCP", "127.0.0.1", 8888)
128
129 def isSecure(self):
130 return False
127131
128132 @property
129133 def transport(self):
113113 "server_name": name,
114114 "send_federation": False,
115115 "media_store_path": "media",
116 "uploads_path": "uploads",
117116 # the test signing key is just an arbitrary ed25519 key to keep the config
118117 # parser happy
119118 "signing_key": "ed25519 a_lPym qvioDNmfExFBRPgdTU+wtFYKq4JfwFRv7sYVgWvmgJg",
188188 [testenv:mypy]
189189 deps =
190190 {[base]deps}
191 # Type hints are broken with Twisted > 20.3.0, see https://github.com/matrix-org/synapse/issues/9513
192 twisted==20.3.0
191193 extras = all,mypy
192194 commands = mypy