New upstream version 1.27.0
Andrej Shadura
9 | 9 | |
10 | 10 | export LANG="C.UTF-8" |
11 | 11 | |
12 | # Prevent virtualenv from auto-updating pip to an incompatible version | |
13 | export VIRTUALENV_NO_DOWNLOAD=1 | |
14 | ||
12 | 15 | exec tox -e py35-old,combine |
0 | Synapse 1.27.0 (2021-02-16) | |
1 | =========================== | |
2 | ||
3 | Note that this release includes a change in Synapse to use Redis as a cache, as well as a pub/sub mechanism, if Redis support is enabled for workers. No action is needed by server administrators, and we do not expect resource usage of the Redis instance to change dramatically. | |
4 | ||
5 | This release also changes the callback URI for OpenID Connect (OIDC) identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes. | |
6 | ||
7 | This release also changes escaping of variables in the HTML templates for SSO or email notifications. If you have customised these templates, please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes. | |
8 | ||
9 | ||
10 | Bugfixes | |
11 | -------- | |
12 | ||
13 | - Fix building Docker images for armv7. ([\#9405](https://github.com/matrix-org/synapse/issues/9405)) | |
14 | ||
15 | ||
16 | Synapse 1.27.0rc2 (2021-02-11) | |
17 | ============================== | |
18 | ||
19 | Features | |
20 | -------- | |
21 | ||
22 | - Further improvements to the user experience of registration via single sign-on. ([\#9297](https://github.com/matrix-org/synapse/issues/9297)) | |
23 | ||
24 | ||
25 | Bugfixes | |
26 | -------- | |
27 | ||
28 | - Fix ratelimiting introduced in v1.27.0rc1 for invites to respect the `ratelimit` flag on application services. ([\#9302](https://github.com/matrix-org/synapse/issues/9302)) | |
29 | - Do not automatically calculate `public_baseurl` since it can be wrong in some situations. Reverts behaviour introduced in v1.26.0. ([\#9313](https://github.com/matrix-org/synapse/issues/9313)) | |
30 | ||
31 | ||
32 | Improved Documentation | |
33 | ---------------------- | |
34 | ||
35 | - Clarify the sample configuration for changes made to the template loading code. ([\#9310](https://github.com/matrix-org/synapse/issues/9310)) | |
36 | ||
37 | ||
38 | Synapse 1.27.0rc1 (2021-02-02) | |
39 | ============================== | |
40 | ||
41 | Features | |
42 | -------- | |
43 | ||
44 | - Add an admin API for getting and deleting forward extremities for a room. ([\#9062](https://github.com/matrix-org/synapse/issues/9062)) | |
45 | - Add an admin API for retrieving the current room state of a room. ([\#9168](https://github.com/matrix-org/synapse/issues/9168)) | |
46 | - Add experimental support for allowing clients to pick an SSO Identity Provider ([MSC2858](https://github.com/matrix-org/matrix-doc/pull/2858)). ([\#9183](https://github.com/matrix-org/synapse/issues/9183), [\#9242](https://github.com/matrix-org/synapse/issues/9242)) | |
47 | - Add an admin API endpoint for shadow-banning users. ([\#9209](https://github.com/matrix-org/synapse/issues/9209)) | |
48 | - Add ratelimits to the 3PID `/requestToken` APIs. ([\#9238](https://github.com/matrix-org/synapse/issues/9238)) | |
49 | - Add support to the OpenID Connect integration for adding the user's email address. ([\#9245](https://github.com/matrix-org/synapse/issues/9245)) | |
50 | - Add ratelimits to invites in rooms and to specific users. ([\#9258](https://github.com/matrix-org/synapse/issues/9258)) | |
51 | - Improve the user experience of setting up an account via single sign-on. ([\#9262](https://github.com/matrix-org/synapse/issues/9262), [\#9272](https://github.com/matrix-org/synapse/issues/9272), [\#9275](https://github.com/matrix-org/synapse/issues/9275), [\#9276](https://github.com/matrix-org/synapse/issues/9276), [\#9277](https://github.com/matrix-org/synapse/issues/9277), [\#9286](https://github.com/matrix-org/synapse/issues/9286), [\#9287](https://github.com/matrix-org/synapse/issues/9287)) | |
52 | - Add phone home stats for encrypted messages. ([\#9283](https://github.com/matrix-org/synapse/issues/9283)) | |
53 | - Update the redirect URI for OIDC authentication. ([\#9288](https://github.com/matrix-org/synapse/issues/9288)) | |
54 | ||
55 | ||
56 | Bugfixes | |
57 | -------- | |
58 | ||
59 | - Fix spurious errors in logs when deleting a non-existent pusher. ([\#9121](https://github.com/matrix-org/synapse/issues/9121)) | |
60 | - Fix a long-standing bug where Synapse would return a 500 error when a thumbnail did not exist (and auto-generation of thumbnails was not enabled). ([\#9163](https://github.com/matrix-org/synapse/issues/9163)) | |
61 | - Fix a long-standing bug where an internal server error was raised when attempting to preview an HTML document in an unknown character encoding. ([\#9164](https://github.com/matrix-org/synapse/issues/9164)) | |
62 | - Fix a long-standing bug where invalid data could cause errors when calculating the presentable room name for push. ([\#9165](https://github.com/matrix-org/synapse/issues/9165)) | |
63 | - Fix bug where we sometimes didn't detect that Redis connections had died, causing workers to not see new data. ([\#9218](https://github.com/matrix-org/synapse/issues/9218)) | |
64 | - Fix a bug where `None` was passed to Synapse modules instead of an empty dictionary if an empty module `config` block was provided in the homeserver config. ([\#9229](https://github.com/matrix-org/synapse/issues/9229)) | |
65 | - Fix a bug in the `make_room_admin` admin API where it failed if the admin with the greatest power level was not in the room. Contributed by Pankaj Yadav. ([\#9235](https://github.com/matrix-org/synapse/issues/9235)) | |
66 | - Prevent password hashes from getting dropped if a client failed threepid validation during a User Interactive Auth stage. Removes a workaround for an ancient bug in Riot Web <v0.7.4. ([\#9265](https://github.com/matrix-org/synapse/issues/9265)) | |
67 | - Fix single-sign-on when the endpoints are routed to synapse workers. ([\#9271](https://github.com/matrix-org/synapse/issues/9271)) | |
68 | ||
69 | ||
70 | Improved Documentation | |
71 | ---------------------- | |
72 | ||
73 | - Add docs for using Gitea as OpenID provider. ([\#9134](https://github.com/matrix-org/synapse/issues/9134)) | |
74 | - Add link to Matrix VoIP tester for turn-howto. ([\#9135](https://github.com/matrix-org/synapse/issues/9135)) | |
75 | - Add notes on integrating with Facebook for SSO login. ([\#9244](https://github.com/matrix-org/synapse/issues/9244)) | |
76 | ||
77 | ||
78 | Deprecations and Removals | |
79 | ------------------------- | |
80 | ||
81 | - The `service_url` parameter in `cas_config` is deprecated in favor of `public_baseurl`. ([\#9199](https://github.com/matrix-org/synapse/issues/9199)) | |
82 | - Add new endpoint `/_synapse/client/saml2` for SAML2 authentication callbacks, and deprecate the old endpoint `/_matrix/saml2`. ([\#9289](https://github.com/matrix-org/synapse/issues/9289)) | |
83 | ||
84 | ||
85 | Internal Changes | |
86 | ---------------- | |
87 | ||
88 | - Add tests to `test_user.UsersListTestCase` for List Users Admin API. ([\#9045](https://github.com/matrix-org/synapse/issues/9045)) | |
89 | - Various improvements to the federation client. ([\#9129](https://github.com/matrix-org/synapse/issues/9129)) | |
90 | - Speed up chain cover calculation when persisting a batch of state events at once. ([\#9176](https://github.com/matrix-org/synapse/issues/9176)) | |
91 | - Add a `long_description_type` to the package metadata. ([\#9180](https://github.com/matrix-org/synapse/issues/9180)) | |
92 | - Speed up batch insertion when using PostgreSQL. ([\#9181](https://github.com/matrix-org/synapse/issues/9181), [\#9188](https://github.com/matrix-org/synapse/issues/9188)) | |
93 | - Emit an error at startup if different Identity Providers are configured with the same `idp_id`. ([\#9184](https://github.com/matrix-org/synapse/issues/9184)) | |
94 | - Improve performance of concurrent use of `StreamIDGenerators`. ([\#9190](https://github.com/matrix-org/synapse/issues/9190)) | |
95 | - Add some missing source directories to the automatic linting script. ([\#9191](https://github.com/matrix-org/synapse/issues/9191)) | |
96 | - Precompute joined hosts and store in Redis. ([\#9198](https://github.com/matrix-org/synapse/issues/9198), [\#9227](https://github.com/matrix-org/synapse/issues/9227)) | |
97 | - Clean-up template loading code. ([\#9200](https://github.com/matrix-org/synapse/issues/9200)) | |
98 | - Fix the Python 3.5 old dependencies build. ([\#9217](https://github.com/matrix-org/synapse/issues/9217)) | |
99 | - Update `isort` to v5.7.0 to bypass a bug where it would disagree with `black` about formatting. ([\#9222](https://github.com/matrix-org/synapse/issues/9222)) | |
100 | - Add type hints to handlers code. ([\#9223](https://github.com/matrix-org/synapse/issues/9223), [\#9232](https://github.com/matrix-org/synapse/issues/9232)) | |
101 | - Fix Debian package building on Ubuntu 16.04 LTS (Xenial). ([\#9254](https://github.com/matrix-org/synapse/issues/9254)) | |
102 | - Minor performance improvement during TLS handshake. ([\#9255](https://github.com/matrix-org/synapse/issues/9255)) | |
103 | - Refactor the generation of summary text for email notifications. ([\#9260](https://github.com/matrix-org/synapse/issues/9260)) | |
104 | - Restore PyPy compatibility by not calling CPython-specific GC methods when under PyPy. ([\#9270](https://github.com/matrix-org/synapse/issues/9270)) | |
105 | ||
106 | ||
0 | 107 | Synapse 1.26.0 (2021-01-27) |
1 | 108 | =========================== |
2 | 109 |
83 | 83 | # replace `1.3.0` and `stretch` accordingly: |
84 | 84 | wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb |
85 | 85 | dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb |
86 | ||
87 | Upgrading to v1.27.0 | |
88 | ==================== | |
89 | ||
90 | Changes to callback URI for OAuth2 / OpenID Connect | |
91 | --------------------------------------------------- | |
92 | ||
93 | This version changes the URI used for callbacks from OAuth2 identity providers. If | |
94 | your server is configured for single sign-on via an OpenID Connect or OAuth2 identity | |
95 | provider, you will need to add ``[synapse public baseurl]/_synapse/client/oidc/callback`` | |
96 | to the list of permitted "redirect URIs" at the identity provider. | |
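As a minimal sketch of assembling the new redirect URI (the homeserver URL below is a placeholder for your own ``public_baseurl`` value), note that the base URL may be configured with or without a trailing slash:

```python
def oidc_callback_uri(public_baseurl: str) -> str:
    """Build the OIDC callback URI to register at the identity provider.

    public_baseurl may be configured with or without a trailing slash,
    so normalise it before appending the fixed callback path.
    """
    return public_baseurl.rstrip("/") + "/_synapse/client/oidc/callback"

print(oidc_callback_uri("https://matrix.example.com/"))
```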
97 | ||
98 | See `docs/openid.md <docs/openid.md>`_ for more information on setting up OpenID | |
99 | Connect. | |
100 | ||
101 | (Note: a similar change is being made for SAML2; in this case the old URI | |
102 | ``[synapse public baseurl]/_matrix/saml2`` is being deprecated, but will continue to | |
103 | work, so no immediate changes are required for existing installations.) | |
104 | ||
105 | Changes to HTML templates | |
106 | ------------------------- | |
107 | ||
108 | The HTML templates for SSO and email notifications now have `Jinja2's autoescape <https://jinja.palletsprojects.com/en/2.11.x/api/#autoescaping>`_ | |
109 | enabled for files ending in ``.html``, ``.htm``, and ``.xml``. If you have customised | |
110 | these templates and see issues when viewing them, you might need to update them. | |
111 | It is expected that most configurations will need no changes. | |
112 | ||
113 | If you have customised the *names* of these templates, it is recommended to verify | |
114 | that they end in ``.html`` to ensure autoescaping is enabled. | |
115 | ||
116 | The above applies to the following templates: | |
117 | ||
118 | * ``add_threepid.html`` | |
119 | * ``add_threepid_failure.html`` | |
120 | * ``add_threepid_success.html`` | |
121 | * ``notice_expiry.html`` | |
123 | * ``notif_mail.html`` (which, by default, includes ``room.html`` and ``notif.html``) | |
124 | * ``password_reset.html`` | |
125 | * ``password_reset_confirmation.html`` | |
126 | * ``password_reset_failure.html`` | |
127 | * ``password_reset_success.html`` | |
128 | * ``registration.html`` | |
129 | * ``registration_failure.html`` | |
130 | * ``registration_success.html`` | |
131 | * ``sso_account_deactivated.html`` | |
132 | * ``sso_auth_bad_user.html`` | |
133 | * ``sso_auth_confirm.html`` | |
134 | * ``sso_auth_success.html`` | |
135 | * ``sso_error.html`` | |
136 | * ``sso_login_idp_picker.html`` | |
137 | * ``sso_redirect_confirm.html`` | |
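To illustrate what autoescaping changes for these templates, here is a stdlib sketch that approximates Jinja2's behaviour (this is not Synapse's actual template code): with autoescaping enabled, markup inside an interpolated variable is escaped rather than rendered.

```python
from html import escape

def render_greeting(display_name: str) -> str:
    # Mimics a template like "<p>Hello, {{ display_name }}!</p>" with
    # autoescape on: the variable is HTML-escaped before interpolation,
    # so user-supplied markup is displayed literally, not interpreted.
    return "<p>Hello, {}!</p>".format(escape(display_name))

print(render_greeting("<b>Alice</b>"))
```

Templates that previously relied on variables being inserted as raw HTML will need an explicit opt-out (in Jinja2, the ``safe`` filter) after this change.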
86 | 138 | |
87 | 139 | Upgrading to v1.26.0 |
88 | 140 | ==================== |
197 | 249 | |
198 | 250 | return {"localpart": localpart} |
199 | 251 | |
200 | Removal of historical Synapse Admin API | |
252 | Removal of historical Synapse Admin API | |
201 | 253 | ----------------------------------------
202 | 254 | |
203 | 255 | Historically, the Synapse Admin API has been accessible under: |
32 | 32 | # Use --builtin-venv to use the better `venv` module from CPython 3.4+ rather |
33 | 33 | # than the 2/3 compatible `virtualenv`. |
34 | 34 | |
35 | # Pin pip to 20.3.4 to fix breakage in 21.0 on py3.5 (xenial) | |
36 | ||
35 | 37 | dh_virtualenv \ |
36 | 38 | --install-suffix "matrix-synapse" \ |
37 | 39 | --builtin-venv \ |
38 | 40 | --python "$SNAKE" \ |
39 | --upgrade-pip \ | |
41 | --upgrade-pip-to="20.3.4" \ | |
40 | 42 | --preinstall="lxml" \ |
41 | 43 | --preinstall="mock" \ |
42 | 44 | --extra-pip-arg="--no-cache-dir" \ |
0 | matrix-synapse-py3 (1.25.0ubuntu1) UNRELEASED; urgency=medium | |
1 | ||
0 | matrix-synapse-py3 (1.27.0) stable; urgency=medium | |
1 | ||
2 | [ Dan Callahan ] | |
3 | * Fix build on Ubuntu 16.04 LTS (Xenial). | |
4 | ||
5 | [ Synapse Packaging team ] | |
6 | * New synapse release 1.27.0. | |
7 | ||
8 | -- Synapse Packaging team <packages@matrix.org> Tue, 16 Feb 2021 13:11:28 +0000 | |
9 | ||
10 | matrix-synapse-py3 (1.26.0) stable; urgency=medium | |
11 | ||
12 | [ Richard van der Hoff ] | |
2 | 13 | * Remove dependency on `python3-distutils`. |
3 | 14 | |
4 | -- Richard van der Hoff <richard@matrix.org> Fri, 15 Jan 2021 12:44:19 +0000 | |
15 | [ Synapse Packaging team ] | |
16 | * New synapse release 1.26.0. | |
17 | ||
18 | -- Synapse Packaging team <packages@matrix.org> Wed, 27 Jan 2021 12:43:35 -0500 | |
5 | 19 | |
6 | 20 | matrix-synapse-py3 (1.25.0) stable; urgency=medium |
7 | 21 |
27 | 27 | libwebp-dev \ |
28 | 28 | libxml++2.6-dev \ |
29 | 29 | libxslt1-dev \ |
30 | rustc \ | |
30 | 31 | zlib1g-dev \ |
31 | 32 | && rm -rf /var/lib/apt/lists/* |
32 | 33 | |
33 | 34 | # Build dependencies that are not available as wheels, to speed up rebuilds |
34 | 35 | RUN pip install --prefix="/install" --no-warn-script-location \ |
36 | cryptography \ | |
35 | 37 | frozendict \ |
36 | 38 | jaeger-client \ |
37 | 39 | opentracing \ |
26 | 26 | wget |
27 | 27 | |
28 | 28 | # fetch and unpack the package |
29 | # TODO: Upgrade to 1.2.2 once xenial is dropped | |
29 | 30 | RUN mkdir /dh-virtualenv |
30 | 31 | RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/ac6e1b1.tar.gz |
31 | 32 | RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz |
8 | 8 | * [Response](#response) |
9 | 9 | * [Undoing room shutdowns](#undoing-room-shutdowns) |
10 | 10 | - [Make Room Admin API](#make-room-admin-api) |
11 | - [Forward Extremities Admin API](#forward-extremities-admin-api) | |
11 | 12 | |
12 | 13 | # List Room API |
13 | 14 | |
366 | 367 | } |
367 | 368 | ``` |
368 | 369 | |
370 | # Room State API | |
371 | ||
372 | The Room State admin API allows server admins to get a list of all state events in a room. | |
373 | ||
374 | The response includes the following fields: | |
375 | ||
376 | * `state` - The current state of the room at the time of request. | |
377 | ||
378 | ## Usage | |
379 | ||
380 | A standard request: | |
381 | ||
382 | ``` | |
383 | GET /_synapse/admin/v1/rooms/<room_id>/state | |
384 | ||
385 | {} | |
386 | ``` | |
387 | ||
388 | Response: | |
389 | ||
390 | ```json | |
391 | { | |
392 | "state": [ | |
393 | {"type": "m.room.create", "state_key": "", "etc": true}, | |
394 | {"type": "m.room.power_levels", "state_key": "", "etc": true}, | |
395 | {"type": "m.room.name", "state_key": "", "etc": true} | |
396 | ] | |
397 | } | |
398 | ``` | |
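A short Python sketch of how an admin script might work with this endpoint (the homeserver URL and room ID are placeholders): room IDs contain characters that must be percent-encoded in the URL path, and the returned `state` array can be summarised by event type.

```python
import json
from urllib.parse import quote

def room_state_url(base_url: str, room_id: str) -> str:
    # Room IDs contain '!' and ':', which must be percent-encoded
    # when used as a URL path segment.
    return "{}/_synapse/admin/v1/rooms/{}/state".format(
        base_url, quote(room_id, safe=""))

def count_state_by_type(body: str) -> dict:
    # Tally state events by type from a /state response body.
    counts = {}
    for event in json.loads(body)["state"]:
        counts[event["type"]] = counts.get(event["type"], 0) + 1
    return counts

sample = json.dumps({"state": [
    {"type": "m.room.create", "state_key": ""},
    {"type": "m.room.power_levels", "state_key": ""},
    {"type": "m.room.name", "state_key": ""},
]})
print(room_state_url("https://matrix.example.com", "!abc:example.com"))
print(count_state_by_type(sample))
```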
399 | ||
369 | 400 | # Delete Room API |
370 | 401 | |
371 | 402 | The Delete Room admin API allows server admins to remove rooms from server |
510 | 541 | "user_id": "@foo:example.com" |
511 | 542 | } |
512 | 543 | ``` |
544 | ||
545 | # Forward Extremities Admin API | |
546 | ||
547 | Enables querying and deleting forward extremities from rooms. When a lot of forward | |
548 | extremities accumulate in a room, performance can become degraded. For details, see | |
549 | [#1760](https://github.com/matrix-org/synapse/issues/1760). | |
550 | ||
551 | ## Check for forward extremities | |
552 | ||
553 | To check the status of forward extremities for a room: | |
554 | ||
555 | ``` | |
556 | GET /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities | |
557 | ``` | |
558 | ||
559 | A response like the following will be returned: | |
560 | ||
561 | ```json | |
562 | { | |
563 | "count": 1, | |
564 | "results": [ | |
565 | { | |
566 | "event_id": "$M5SP266vsnxctfwFgFLNceaCo3ujhRtg_NiiHabcdefgh", | |
567 | "state_group": 439, | |
568 | "depth": 123, | |
569 | "received_ts": 1611263016761 | |
570 | } | |
571 | ] | |
572 | } | |
573 | ``` | |
574 | ||
575 | ## Deleting forward extremities | |
576 | ||
577 | **WARNING**: Please ensure you know what you're doing and have read | |
578 | the related issue [#1760](https://github.com/matrix-org/synapse/issues/1760). | |
579 | Under no circumstances should this API be executed as an automated maintenance task! | |
580 | ||
581 | If a room has accumulated too many forward extremities, the excess can be | |
582 | deleted as follows: | |
583 | ||
584 | ``` | |
585 | DELETE /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities | |
586 | ``` | |
587 | ||
588 | A response like the following will be returned, indicating the number of forward | |
589 | extremities that were deleted: | |
590 | ||
591 | ```json | |
592 | { | |
593 | "deleted": 1 | |
594 | } | |
595 | ``` |
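Given the warning above, any tooling around these endpoints should inspect the count first and refuse to delete in the healthy case. A sketch (the threshold is an illustrative choice, not a Synapse recommendation):

```python
import json

def extremities_need_attention(status_body: str, threshold: int = 10) -> bool:
    """Decide whether a DELETE is even worth considering.

    A count of 1 is the healthy state; only flag rooms where forward
    extremities have accumulated well beyond that.
    """
    return json.loads(status_body)["count"] > threshold

healthy = json.dumps({"count": 1, "results": []})
degraded = json.dumps({"count": 42, "results": []})
print(extremities_need_attention(healthy), extremities_need_attention(degraded))
```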
759 | 759 | - ``total`` - integer - Number of pushers. |
760 | 760 | |
761 | 761 | See also `Client-Server API Spec <https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers>`_ |
762 | ||
763 | Shadow-banning users | |
764 | ==================== | |
765 | ||
766 | Shadow-banning is a useful tool for moderating malicious or egregiously abusive users. | |
767 | A shadow-banned user receives successful responses to their client-server API requests, | |
768 | but the events are not propagated into rooms. This can be an effective tool as it | |
769 | (hopefully) takes longer for the user to realise they are being moderated and to | |
770 | pivot to another account. | |
771 | ||
772 | Shadow-banning a user should be used as a tool of last resort and may lead to confusing | |
773 | or broken behaviour for the client. A shadow-banned user will not receive any | |
774 | notification and it is generally more appropriate to ban or kick abusive users. | |
775 | A shadow-banned user will be unable to contact anyone on the server. | |
776 | ||
777 | The API is:: | |
778 | ||
779 | POST /_synapse/admin/v1/users/<user_id>/shadow_ban | |
780 | ||
781 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
782 | server admin: see `README.rst <README.rst>`_. | |
783 | ||
784 | An empty JSON dict is returned. | |
785 | ||
786 | **Parameters** | |
787 | ||
788 | The following parameters should be set in the URL: | |
789 | ||
790 | - ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must | |
791 | be local. |
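A sketch of building the request URL from a script (the homeserver URL is a placeholder): the MXID contains characters that must be percent-encoded when placed in the URL path.

```python
from urllib.parse import quote

def shadow_ban_url(base_url: str, user_id: str) -> str:
    # The MXID contains '@' and ':', which must be percent-encoded
    # when used as a URL path segment.
    return "{}/_synapse/admin/v1/users/{}/shadow_ban".format(
        base_url, quote(user_id, safe=""))

print(shadow_ban_url("https://matrix.example.com", "@user:server.com"))
```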
43 | 43 | |
44 | 44 | To enable the OpenID integration, you should then add a section to the `oidc_providers` |
45 | 45 | setting in your configuration file (or uncomment one of the existing examples). |
46 | See [sample_config.yaml](./sample_config.yaml) for some sample settings, as well as | |
46 | See [sample_config.yaml](./sample_config.yaml) for some sample settings, as well as | |
47 | 47 | the text below for example configurations for specific providers. |
48 | 48 | |
49 | 49 | ## Sample configs |
51 | 51 | Here are a few configs for providers that should work with Synapse. |
52 | 52 | |
53 | 53 | ### Microsoft Azure Active Directory |
54 | Azure AD can act as an OpenID Connect Provider. Register a new application under | |
54 | Azure AD can act as an OpenID Connect Provider. Register a new application under | |
55 | 55 | *App registrations* in the Azure AD management console. The RedirectURI for your |
56 | application should point to your matrix server: `[synapse public baseurl]/_synapse/oidc/callback` | |
57 | ||
58 | Go to *Certificates & secrets* and register a new client secret. Make note of your | |
56 | application should point to your matrix server: | |
57 | `[synapse public baseurl]/_synapse/client/oidc/callback` | |
58 | ||
59 | Go to *Certificates & secrets* and register a new client secret. Make note of your | |
59 | 60 | Directory (tenant) ID as it will be used in the Azure links. |
60 | 61 | Edit your Synapse config file and change the `oidc_config` section: |
61 | 62 | |
93 | 94 | - id: synapse |
94 | 95 | secret: secret |
95 | 96 | redirectURIs: |
96 | - '[synapse public baseurl]/_synapse/oidc/callback' | |
97 | - '[synapse public baseurl]/_synapse/client/oidc/callback' | |
97 | 98 | name: 'Synapse' |
98 | 99 | ``` |
99 | 100 | |
117 | 118 | ``` |
118 | 119 | ### [Keycloak][keycloak-idp] |
119 | 120 | |
120 | [Keycloak][keycloak-idp] is an open source IdP maintained by Red Hat. | |
121 | [Keycloak][keycloak-idp] is an open source IdP maintained by Red Hat. | |
121 | 122 | |
122 | 123 | Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to install Keycloak and set up a realm. |
123 | 124 | |
139 | 140 | | Enabled | `On` | |
140 | 141 | | Client Protocol | `openid-connect` | |
141 | 142 | | Access Type | `confidential` | |
142 | | Valid Redirect URIs | `[synapse public baseurl]/_synapse/oidc/callback` | | |
143 | | Valid Redirect URIs | `[synapse public baseurl]/_synapse/client/oidc/callback` | | |
143 | 144 | |
144 | 145 | 5. Click `Save` |
145 | 146 | 6. On the Credentials tab, update the fields: |
167 | 168 | ### [Auth0][auth0] |
168 | 169 | |
169 | 170 | 1. Create a regular web application for Synapse |
170 | 2. Set the Allowed Callback URLs to `[synapse public baseurl]/_synapse/oidc/callback` | |
171 | 2. Set the Allowed Callback URLs to `[synapse public baseurl]/_synapse/client/oidc/callback` | |
171 | 172 | 3. Add a rule to add the `preferred_username` claim. |
172 | 173 | <details> |
173 | 174 | <summary>Code sample</summary> |
193 | 194 | |
194 | 195 | ```yaml |
195 | 196 | oidc_providers: |
196 | - idp_id: auth0 | |
197 | - idp_id: auth0 | |
197 | 198 | idp_name: Auth0 |
198 | 199 | issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED |
199 | 200 | client_id: "your-client-id" # TO BE FILLED |
216 | 217 | does not return a `sub` property, an alternative `subject_claim` has to be set. |
217 | 218 | |
218 | 219 | 1. Create a new OAuth application: https://github.com/settings/applications/new. |
219 | 2. Set the callback URL to `[synapse public baseurl]/_synapse/oidc/callback`. | |
220 | 2. Set the callback URL to `[synapse public baseurl]/_synapse/client/oidc/callback`. | |
220 | 221 | |
221 | 222 | Synapse config: |
222 | 223 | |
224 | 225 | oidc_providers: |
225 | 226 | - idp_id: github |
226 | 227 | idp_name: Github |
228 | idp_brand: "org.matrix.github" # optional: styling hint for clients | |
227 | 229 | discover: false |
228 | 230 | issuer: "https://github.com/" |
229 | 231 | client_id: "your-client-id" # TO BE FILLED |
249 | 251 | oidc_providers: |
250 | 252 | - idp_id: google |
251 | 253 | idp_name: Google |
254 | idp_brand: "org.matrix.google" # optional: styling hint for clients | |
252 | 255 | issuer: "https://accounts.google.com/" |
253 | 256 | client_id: "your-client-id" # TO BE FILLED |
254 | 257 | client_secret: "your-client-secret" # TO BE FILLED |
259 | 262 | display_name_template: "{{ user.name }}" |
260 | 263 | ``` |
261 | 264 | 4. Back in the Google console, add this Authorized redirect URI: `[synapse |
262 | public baseurl]/_synapse/oidc/callback`. | |
265 | public baseurl]/_synapse/client/oidc/callback`. | |
263 | 266 | |
264 | 267 | ### Twitch |
265 | 268 | |
266 | 269 | 1. Setup a developer account on [Twitch](https://dev.twitch.tv/) |
267 | 270 | 2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/) |
268 | 3. Add this OAuth Redirect URL: `[synapse public baseurl]/_synapse/oidc/callback` | |
271 | 3. Add this OAuth Redirect URL: `[synapse public baseurl]/_synapse/client/oidc/callback` | |
269 | 272 | |
270 | 273 | Synapse config: |
271 | 274 | |
287 | 290 | |
288 | 291 | 1. Create a [new application](https://gitlab.com/profile/applications). |
289 | 292 | 2. Add the `read_user` and `openid` scopes. |
290 | 3. Add this Callback URL: `[synapse public baseurl]/_synapse/oidc/callback` | |
293 | 3. Add this Callback URL: `[synapse public baseurl]/_synapse/client/oidc/callback` | |
291 | 294 | |
292 | 295 | Synapse config: |
293 | 296 | |
295 | 298 | oidc_providers: |
296 | 299 | - idp_id: gitlab |
297 | 300 | idp_name: Gitlab |
301 | idp_brand: "org.matrix.gitlab" # optional: styling hint for clients | |
298 | 302 | issuer: "https://gitlab.com/" |
299 | 303 | client_id: "your-client-id" # TO BE FILLED |
300 | 304 | client_secret: "your-client-secret" # TO BE FILLED |
306 | 310 | localpart_template: '{{ user.nickname }}' |
307 | 311 | display_name_template: '{{ user.name }}' |
308 | 312 | ``` |
313 | ||
314 | ||
315 | ||
316 | Like Github, Facebook provide a custom OAuth2 API rather than an OIDC-compliant | |
317 | one so requires a little more configuration. | |
318 | ||
319 | 0. You will need a Facebook developer account. You can register for one | |
320 | [here](https://developers.facebook.com/async/registration/). | |
321 | 1. On the [apps](https://developers.facebook.com/apps/) page of the developer | |
322 | console, "Create App", and choose "Build Connected Experiences". | |
323 | 2. Once the app is created, add "Facebook Login" and choose "Web". You don't | |
324 | need to go through the whole form here. | |
325 | 3. In the left-hand menu, open "Products"/"Facebook Login"/"Settings". | |
326 | * Add `[synapse public baseurl]/_synapse/client/oidc/callback` as an OAuth Redirect | |
327 | URL. | |
328 | 4. In the left-hand menu, open "Settings/Basic". Here you can copy the "App ID" | |
329 | and "App Secret" for use below. | |
330 | ||
331 | Synapse config: | |
332 | ||
333 | ```yaml | |
334 | - idp_id: facebook | |
335 | idp_name: Facebook | |
336 | idp_brand: "org.matrix.facebook" # optional: styling hint for clients | |
337 | discover: false | |
338 | issuer: "https://facebook.com" | |
339 | client_id: "your-client-id" # TO BE FILLED | |
340 | client_secret: "your-client-secret" # TO BE FILLED | |
341 | scopes: ["openid", "email"] | |
342 | authorization_endpoint: https://facebook.com/dialog/oauth | |
343 | token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token | |
344 | user_profile_method: "userinfo_endpoint" | |
345 | userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture" | |
346 | user_mapping_provider: | |
347 | config: | |
348 | subject_claim: "id" | |
349 | display_name_template: "{{ user.name }}" | |
350 | ``` | |
351 | ||
352 | Relevant documents: | |
353 | * https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow | |
354 | * Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/ | |
355 | * Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user | |
356 | ||
357 | ### Gitea | |
358 | ||
359 | Gitea is, like Github, not an OpenID provider, but just an OAuth2 provider. | |
360 | ||
361 | The [`/user` API endpoint](https://try.gitea.io/api/swagger#/user/userGetCurrent) | |
362 | can be used to retrieve information on the authenticated user. As the Synapse | |
363 | login mechanism needs an attribute to uniquely identify users, and that endpoint | |
364 | does not return a `sub` property, an alternative `subject_claim` has to be set. | |
365 | ||
366 | 1. Create a new application. | |
367 | 2. Add this Callback URL: `[synapse public baseurl]/_synapse/client/oidc/callback` | |
368 | ||
369 | Synapse config: | |
370 | ||
371 | ```yaml | |
372 | oidc_providers: | |
373 | - idp_id: gitea | |
374 | idp_name: Gitea | |
375 | discover: false | |
376 | issuer: "https://your-gitea.com/" | |
377 | client_id: "your-client-id" # TO BE FILLED | |
378 | client_secret: "your-client-secret" # TO BE FILLED | |
379 | client_auth_method: client_secret_post | |
380 | scopes: [] # Gitea doesn't support Scopes | |
381 | authorization_endpoint: "https://your-gitea.com/login/oauth/authorize" | |
382 | token_endpoint: "https://your-gitea.com/login/oauth/access_token" | |
383 | userinfo_endpoint: "https://your-gitea.com/api/v1/user" | |
384 | user_mapping_provider: | |
385 | config: | |
386 | subject_claim: "id" | |
387 | localpart_template: "{{ user.login }}" | |
388 | display_name_template: "{{ user.full_name }}" | |
389 | ``` |
72 | 72 | # reverse proxy, this should be the URL to reach Synapse via the proxy. |
73 | 73 | # Otherwise, it should be the URL to reach Synapse's client HTTP listener (see |
74 | 74 | # 'listeners' below). |
75 | # | |
76 | # If this is left unset, it defaults to 'https://<server_name>/'. (Note that | |
77 | # that will not work unless you configure Synapse or a reverse-proxy to listen | |
78 | # on port 443.) | |
79 | 75 | # |
80 | 76 | #public_baseurl: https://example.com/ |
81 | 77 | |
823 | 819 | # users are joining rooms the server is already in (this is cheap) vs |
824 | 820 | # "remote" for when users are trying to join rooms not on the server (which |
825 | 821 | # can be more expensive) |
822 | # - one for ratelimiting how often a user or IP can attempt to validate a 3PID. | |
823 | # - two for ratelimiting how often invites can be sent in a room or to a | |
824 | # specific user. | |
826 | 825 | # |
827 | 826 | # The defaults are as shown below. |
828 | 827 | # |
856 | 855 | # remote: |
857 | 856 | # per_second: 0.01 |
858 | 857 | # burst_count: 3 |
859 | ||
858 | # | |
859 | #rc_3pid_validation: | |
860 | # per_second: 0.003 | |
861 | # burst_count: 5 | |
862 | # | |
863 | #rc_invites: | |
864 | # per_room: | |
865 | # per_second: 0.3 | |
866 | # burst_count: 10 | |
867 | # per_user: | |
868 | # per_second: 0.003 | |
869 | # burst_count: 5 | |
860 | 870 | |
861 | 871 | # Ratelimiting settings for incoming federation |
862 | 872 | # |
1154 | 1164 | # send an email to the account's email address with a renewal link. By |
1155 | 1165 | # default, no such emails are sent. |
1156 | 1166 | # |
1157 | # If you enable this setting, you will also need to fill out the 'email' | |
1158 | # configuration section. You should also check that 'public_baseurl' is set | |
1159 | # correctly. | |
1167 | # If you enable this setting, you will also need to fill out the 'email' and | |
1168 | # 'public_baseurl' configuration sections. | |
1160 | 1169 | # |
1161 | 1170 | #renew_at: 1w |
1162 | 1171 | |
1247 | 1256 | # The identity server which we suggest that clients should use when users log |
1248 | 1257 | # in on this server. |
1249 | 1258 | # |
1250 | # (By default, no suggestion is made, so it is left up to the client.) | |
1259 | # (By default, no suggestion is made, so it is left up to the client. | |
1260 | # This setting is ignored unless public_baseurl is also set.) | |
1251 | 1261 | # |
1252 | 1262 | #default_identity_server: https://matrix.org |
1253 | 1263 | |
1271 | 1281 | # Servers handling these requests must answer the `/requestToken` endpoints defined
1272 | 1282 | # by the Matrix Identity Service API specification: |
1273 | 1283 | # https://matrix.org/docs/spec/identity_service/latest |
1284 | # | |
1285 | # If a delegate is specified, the config option public_baseurl must also be filled out. | |
1274 | 1286 | # |
1275 | 1287 | account_threepid_delegates: |
1276 | 1288 | #email: https://example.com # Delegate email sending to example.com |
1551 | 1563 | # enable SAML login. |
1552 | 1564 | # |
1553 | 1565 | # Once SAML support is enabled, a metadata file will be exposed at |
1554 | # https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to | |
1566 | # https://<server>:<port>/_synapse/client/saml2/metadata.xml, which you may be able to | |
1555 | 1567 | # use to configure your SAML IdP with. Alternatively, you can manually configure |
1556 | 1568 | # the IdP to use an ACS location of |
1557 | # https://<server>:<port>/_matrix/saml2/authn_response. | |
1569 | # https://<server>:<port>/_synapse/client/saml2/authn_response. | |
1558 | 1570 | # |
1559 | 1571 | saml2_config: |
1560 | 1572 | # `sp_config` is the configuration for the pysaml2 Service Provider. |
1726 | 1738 | # offer the user a choice of login mechanisms. |
1727 | 1739 | # |
1728 | 1740 | # idp_icon: An optional icon for this identity provider, which is presented |
1729 | # by identity picker pages. If given, must be an MXC URI of the format | |
1730 | # mxc://<server-name>/<media-id>. (An easy way to obtain such an MXC URI | |
1731 | # is to upload an image to an (unencrypted) room and then copy the "url" | |
1732 | # from the source of the event.) | |
1741 | # by clients and Synapse's own IdP picker page. If given, must be an | |
1742 | # MXC URI of the format mxc://<server-name>/<media-id>. (An easy way to | |
1743 | # obtain such an MXC URI is to upload an image to an (unencrypted) room | |
1744 | # and then copy the "url" from the source of the event.) | |
1745 | # | |
1746 | # idp_brand: An optional brand for this identity provider, allowing clients | |
1747 | # to style the login flow according to the identity provider in question. | |
1748 | # See the spec for possible options here. | |
1733 | 1749 | # |
1734 | 1750 | # discover: set to 'false' to disable the use of the OIDC discovery mechanism |
1735 | 1751 | # to discover endpoints. Defaults to true. |
1790 | 1806 | # |
1791 | 1807 | # For the default provider, the following settings are available: |
1792 | 1808 | # |
1793 | # sub: name of the claim containing a unique identifier for the | |
1794 | # user. Defaults to 'sub', which OpenID Connect compliant | |
1795 | # providers should provide. | |
1809 | # subject_claim: name of the claim containing a unique identifier | |
1810 | # for the user. Defaults to 'sub', which OpenID Connect | |
1811 | # compliant providers should provide. | |
1796 | 1812 | # |
1797 | 1813 | # localpart_template: Jinja2 template for the localpart of the MXID. |
1798 | 1814 | # If this is not set, the user will be prompted to choose their |
1799 | # own username. | |
1815 | # own username (see 'sso_auth_account_details.html' in the 'sso' | |
1816 | # section of this file). | |
1800 | 1817 | # |
1801 | 1818 | # display_name_template: Jinja2 template for the display name to set |
1802 | 1819 | # on first login. If unset, no displayname will be set. |
1820 | # | |
1821 | # email_template: Jinja2 template for the email address of the user. | |
1822 | # If unset, no email address will be added to the account. | |
1803 | 1823 | # |
1804 | 1824 | # extra_attributes: a map of Jinja2 templates for extra attributes |
1805 | 1825 | # to send back to the client during login. |
1836 | 1856 | # userinfo_endpoint: "https://accounts.example.com/userinfo" |
1837 | 1857 | # jwks_uri: "https://accounts.example.com/.well-known/jwks.json" |
1838 | 1858 | # skip_verification: true |
1859 | # user_mapping_provider: | |
1860 | # config: | |
1861 | # subject_claim: "id" | |
1862 | # localpart_template: "{ user.login }" | |
1863 | # display_name_template: "{ user.name }" | |
1864 | # email_template: "{ user.email }" | |
1839 | 1865 | |
1840 | 1866 | # For use with Keycloak |
1841 | 1867 | # |
1850 | 1876 | # |
1851 | 1877 | #- idp_id: github |
1852 | 1878 | # idp_name: Github |
1879 | # idp_brand: org.matrix.github | |
1853 | 1880 | # discover: false |
1854 | 1881 | # issuer: "https://github.com/" |
1855 | 1882 | # client_id: "your-client-id" # TO BE FILLED |
1877 | 1904 | # |
1878 | 1905 | #server_url: "https://cas-server.com" |
1879 | 1906 | |
1880 | # The public URL of the homeserver. | |
1881 | # | |
1882 | #service_url: "https://homeserver.domain.com:8448" | |
1883 | ||
1884 | 1907 | # The attribute of the CAS response to use as the display name. |
1885 | 1908 | # |
1886 | 1909 | # If unset, no displayname will be set. |
1912 | 1935 | # phishing attacks from evil.site. To avoid this, include a slash after the |
1913 | 1936 | # hostname: "https://my.client/". |
1914 | 1937 | # |
1915 | # The login fallback page (used by clients that don't natively support the | |
1916 | # required login flows) is automatically whitelisted in addition to any URLs | |
1917 | # in this list. | |
1938 | # If public_baseurl is set, then the login fallback page (used by clients | |
1939 | # that don't natively support the required login flows) is whitelisted in | |
1940 | # addition to any URLs in this list. | |
1918 | 1941 | # |
1919 | 1942 | # By default, this list is empty. |
1920 | 1943 | # |
1935 | 1958 | # |
1936 | 1959 | # When rendering, this template is given the following variables: |
1937 | 1960 | # * redirect_url: the URL that the user will be redirected to after |
1938 | # login. Needs manual escaping (see | |
1939 | # https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping). | |
1961 | # login. | |
1940 | 1962 | # |
1941 | 1963 | # * server_name: the homeserver's name. |
1942 | 1964 | # |
1943 | 1965 | # * providers: a list of available Identity Providers. Each element is |
1944 | 1966 | # an object with the following attributes: |
1967 | # | |
1945 | 1968 | # * idp_id: unique identifier for the IdP |
1946 | 1969 | # * idp_name: user-facing name for the IdP |
1970 | # * idp_icon: if specified in the IdP config, an MXC URI for an icon | |
1971 | # for the IdP | |
1972 | # * idp_brand: if specified in the IdP config, a textual identifier | |
1973 | # for the brand of the IdP | |
1947 | 1974 | # |
1948 | 1975 | # The rendered HTML page should contain a form which submits its results |
1949 | 1976 | # back as a GET request, with the following query parameters: |
1953 | 1980 | # |
1954 | 1981 | # * idp: the 'idp_id' of the chosen IDP. |
1955 | 1982 | # |
1983 | # * HTML page to prompt new users to enter a userid and confirm other | |
1984 | # details: 'sso_auth_account_details.html'. This is only shown if the | |
1985 | # SSO implementation (with any user_mapping_provider) does not return | |
1986 | # a localpart. | |
1987 | # | |
1988 | # When rendering, this template is given the following variables: | |
1989 | # | |
1990 | # * server_name: the homeserver's name. | |
1991 | # | |
1992 | # * idp: details of the SSO Identity Provider that the user logged in | |
1993 | # with: an object with the following attributes: | |
1994 | # | |
1995 | # * idp_id: unique identifier for the IdP | |
1996 | # * idp_name: user-facing name for the IdP | |
1997 | # * idp_icon: if specified in the IdP config, an MXC URI for an icon | |
1998 | # for the IdP | |
1999 | # * idp_brand: if specified in the IdP config, a textual identifier | |
2000 | # for the brand of the IdP | |
2001 | # | |
2002 | # * user_attributes: an object containing details about the user that | |
2003 | # we received from the IdP. May have the following attributes: | |
2004 | # | |
2005 | # * display_name: the user's display_name | |
2006 | # * emails: a list of email addresses | |
2007 | # | |
2008 | # The template should render a form which submits the following fields: | |
2009 | # | |
2010 | # * username: the localpart of the user's chosen user id | |
2011 | # | |
2012 | # * HTML page allowing the user to consent to the server's terms and | |
2013 | # conditions. This is only shown for new users, and only if | |
2014 | # `user_consent.require_at_registration` is set. | |
2015 | # | |
2016 | # When rendering, this template is given the following variables: | |
2017 | # | |
2018 | # * server_name: the homeserver's name. | |
2019 | # | |
2020 | # * user_id: the user's proposed Matrix ID. | 
2021 | # | |
2022 | # * user_profile.display_name: the user's proposed display name, if any. | |
2023 | # | |
2024 | # * consent_version: the version of the terms that the user will be | |
2025 | # shown | |
2026 | # | |
2027 | # * terms_url: a link to the page showing the terms. | |
2028 | # | |
2029 | # The template should render a form which submits the following fields: | |
2030 | # | |
2031 | # * accepted_version: the version of the terms accepted by the user | |
2032 | # (ie, 'consent_version' from the input variables). | |
2033 | # | |
1956 | 2034 | # * HTML page for a confirmation step before redirecting back to the client |
1957 | 2035 | # with the login token: 'sso_redirect_confirm.html'. |
1958 | 2036 | # |
1959 | # When rendering, this template is given three variables: | |
1960 | # * redirect_url: the URL the user is about to be redirected to. Needs | |
1961 | # manual escaping (see | |
1962 | # https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping). | |
2037 | # When rendering, this template is given the following variables: | |
2038 | # | |
2039 | # * redirect_url: the URL the user is about to be redirected to. | |
1963 | 2040 | # |
1964 | 2041 | # * display_url: the same as `redirect_url`, but with the query |
1965 | 2042 | # parameters stripped. The intention is to have a |
1966 | 2043 | # human-readable URL to show to users, not to use it as |
1967 | # the final address to redirect to. Needs manual escaping | |
1968 | # (see https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping). | |
2044 | # the final address to redirect to. | |
1969 | 2045 | # |
1970 | 2046 | # * server_name: the homeserver's name. |
2047 | # | |
2048 | # * new_user: a boolean indicating whether this is the user's first time | |
2049 | # logging in. | |
2050 | # | |
2051 | # * user_id: the user's matrix ID. | |
2052 | # | |
2053 | # * user_profile.avatar_url: an MXC URI for the user's avatar, if any. | |
2054 | # None if the user has not set an avatar. | |
2055 | # | |
2056 | # * user_profile.display_name: the user's display name. None if the user | |
2057 | # has not set a display name. | |
1971 | 2058 | # |
1972 | 2059 | # * HTML page which notifies the user that they are authenticating to confirm |
1973 | 2060 | # an operation on their account during the user interactive authentication |
1974 | 2061 | # process: 'sso_auth_confirm.html'. |
1975 | 2062 | # |
1976 | 2063 | # When rendering, this template is given the following variables: |
1977 | # * redirect_url: the URL the user is about to be redirected to. Needs | |
1978 | # manual escaping (see | |
1979 | # https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping). | |
2064 | # * redirect_url: the URL the user is about to be redirected to. | |
1980 | 2065 | # |
1981 | 2066 | # * description: the operation which the user is being asked to confirm |
2067 | # | |
2068 | # * idp: details of the Identity Provider that we will use to confirm | |
2069 | # the user's identity: an object with the following attributes: | |
2070 | # | |
2071 | # * idp_id: unique identifier for the IdP | |
2072 | # * idp_name: user-facing name for the IdP | |
2073 | # * idp_icon: if specified in the IdP config, an MXC URI for an icon | |
2074 | # for the IdP | |
2075 | # * idp_brand: if specified in the IdP config, a textual identifier | |
2076 | # for the brand of the IdP | |
1982 | 2077 | # |
1983 | 2078 | # * HTML page shown after a successful user interactive authentication session: |
1984 | 2079 | # 'sso_auth_success.html'. |
231 | 231 | |
232 | 232 | (Understanding the output is beyond the scope of this document!) |
233 | 233 | |
234 | * You can test your Matrix homeserver TURN setup with https://test.voip.librepush.net/. | |
235 | Note that this test is not fully reliable yet, so don't be discouraged if | |
236 | the test fails. | |
237 | [Here](https://github.com/matrix-org/voip-tester) is the github repo of the | |
238 | source of the tester, where you can file bug reports. | |
239 | ||
234 | 240 | * There is a WebRTC test tool at |
235 | 241 | https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/. To |
236 | 242 | use it, you will need a username/password for your TURN server. You can |
38 | 38 | which relays replication commands between processes. This can give a significant |
39 | 39 | cpu saving on the main process and will be a prerequisite for upcoming |
40 | 40 | performance improvements. |
41 | ||
42 | If Redis support is enabled Synapse will use it as a shared cache, as well as a | |
43 | pub/sub mechanism. | |
41 | 44 | |
42 | 45 | See the [Architectural diagram](#architectural-diagram) section at the end for |
43 | 46 | a visualisation of what this looks like. |
224 | 227 | ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$ |
225 | 228 | ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$ |
226 | 229 | ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/ |
227 | ^/_synapse/client/password_reset/email/submit_token$ | |
228 | 230 | |
229 | 231 | # Registration/login requests |
230 | 232 | ^/_matrix/client/(api/v1|r0|unstable)/login$ |
255 | 257 | to use SSO (you only need to include the ones for whichever SSO provider you're |
256 | 258 | using): |
257 | 259 | |
260 | # for all SSO providers | |
261 | ^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect | |
262 | ^/_synapse/client/pick_idp$ | |
263 | ^/_synapse/client/pick_username | |
264 | ^/_synapse/client/new_user_consent$ | |
265 | ^/_synapse/client/sso_register$ | |
266 | ||
258 | 267 | # OpenID Connect requests. |
259 | ^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect$ | |
260 | ^/_synapse/oidc/callback$ | |
268 | ^/_synapse/client/oidc/callback$ | |
261 | 269 | |
262 | 270 | # SAML requests. |
263 | ^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect$ | |
264 | ^/_matrix/saml2/authn_response$ | |
271 | ^/_synapse/client/saml2/authn_response$ | |
265 | 272 | |
266 | 273 | # CAS requests. |
267 | ^/_matrix/client/(api/v1|r0|unstable)/login/(cas|sso)/redirect$ | |
268 | 274 | ^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$ |
275 | ||
276 | Ensure that all SSO logins go to a single process (usually the main process). | 
277 | For the issues caused by multiple workers handling the SSO endpoints, see | 
278 | [#7530](https://github.com/matrix-org/synapse/issues/7530). | 
269 | 279 | |
270 | 280 | Note that an HTTP listener with `client` and `federation` resources must be
271 | 281 | configured in the `worker_listeners` option in the worker config. |
272 | ||
273 | Ensure that all SSO logins go to a single process (usually the main process). | |
274 | For multiple workers not handling the SSO endpoints properly, see | |
275 | [#7530](https://github.com/matrix-org/synapse/issues/7530). | |
276 | 282 | |
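The rule above ("all SSO logins go to a single process") is usually expressed as reverse-proxy routing rules, but the decision can be sketched in code with the endpoint regexes from this section (the worker name `sso-worker` is hypothetical):

```python
import re

# SSO endpoint regexes from the worker documentation above.
SSO_PATTERNS = [
    r"^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect",
    r"^/_synapse/client/pick_idp$",
    r"^/_synapse/client/pick_username",
    r"^/_synapse/client/new_user_consent$",
    r"^/_synapse/client/sso_register$",
    r"^/_synapse/client/oidc/callback$",
    r"^/_synapse/client/saml2/authn_response$",
    r"^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$",
]

def route(path: str) -> str:
    """Send every SSO endpoint to a single process, per the note above."""
    if any(re.match(p, path) for p in SSO_PATTERNS):
        return "sso-worker"
    return "main"
```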
277 | 283 | #### Load balancing |
278 | 284 |
22 | 22 | synapse/events/validator.py, |
23 | 23 | synapse/events/spamcheck.py, |
24 | 24 | synapse/federation, |
25 | synapse/handlers/_base.py, | |
26 | synapse/handlers/account_data.py, | |
27 | synapse/handlers/account_validity.py, | |
28 | synapse/handlers/admin.py, | |
29 | synapse/handlers/appservice.py, | |
30 | synapse/handlers/auth.py, | |
31 | synapse/handlers/cas_handler.py, | |
32 | synapse/handlers/deactivate_account.py, | |
33 | synapse/handlers/device.py, | |
34 | synapse/handlers/devicemessage.py, | |
35 | synapse/handlers/directory.py, | |
36 | synapse/handlers/events.py, | |
37 | synapse/handlers/federation.py, | |
38 | synapse/handlers/identity.py, | |
39 | synapse/handlers/initial_sync.py, | |
40 | synapse/handlers/message.py, | |
41 | synapse/handlers/oidc_handler.py, | |
42 | synapse/handlers/pagination.py, | |
43 | synapse/handlers/password_policy.py, | |
44 | synapse/handlers/presence.py, | |
45 | synapse/handlers/profile.py, | |
46 | synapse/handlers/read_marker.py, | |
47 | synapse/handlers/receipts.py, | |
48 | synapse/handlers/register.py, | |
49 | synapse/handlers/room.py, | |
50 | synapse/handlers/room_list.py, | |
51 | synapse/handlers/room_member.py, | |
52 | synapse/handlers/room_member_worker.py, | |
53 | synapse/handlers/saml_handler.py, | |
54 | synapse/handlers/sso.py, | |
55 | synapse/handlers/sync.py, | |
56 | synapse/handlers/user_directory.py, | |
57 | synapse/handlers/ui_auth, | |
25 | synapse/handlers, | |
58 | 26 | synapse/http/client.py, |
59 | 27 | synapse/http/federation/matrix_federation_agent.py, |
60 | 28 | synapse/http/federation/well_known_resolver.py, |
193 | 161 | |
194 | 162 | [mypy-hiredis] |
195 | 163 | ignore_missing_imports = True |
164 | ||
165 | [mypy-josepy.*] | |
166 | ignore_missing_imports = True | |
167 | ||
168 | [mypy-txacme.*] | |
169 | ignore_missing_imports = True |
79 | 79 | # then lint everything! |
80 | 80 | if [[ -z ${files+x} ]]; then |
81 | 81 | # Lint all source code files and directories |
82 | files=("synapse" "tests" "scripts-dev" "scripts" "contrib" "synctl" "setup.py" "synmark") | |
82 | # Note: this list aims to mirror the one in tox.ini | 
83 | files=("synapse" "docker" "tests" "scripts-dev" "scripts" "contrib" "synctl" "setup.py" "synmark" "stubs" ".buildkite") | |
83 | 84 | fi |
84 | 85 | fi |
85 | 86 |
95 | 95 | # |
96 | 96 | # We pin black so that our tests don't start failing on new releases. |
97 | 97 | CONDITIONAL_REQUIREMENTS["lint"] = [ |
98 | "isort==5.0.3", | |
98 | "isort==5.7.0", | |
99 | 99 | "black==19.10b0", |
100 | 100 | "flake8-comprehensions", |
101 | 101 | "flake8", |
120 | 120 | include_package_data=True, |
121 | 121 | zip_safe=False, |
122 | 122 | long_description=long_description, |
123 | long_description_content_type="text/x-rst", | |
123 | 124 | python_requires="~=3.5", |
124 | 125 | classifiers=[ |
125 | 126 | "Development Status :: 5 - Production/Stable", |
14 | 14 | |
15 | 15 | """Contains *incomplete* type hints for txredisapi. |
16 | 16 | """ |
17 | ||
18 | from typing import List, Optional, Type, Union | |
17 | from typing import Any, List, Optional, Type, Union | |
19 | 18 | |
20 | 19 | class RedisProtocol: |
21 | 20 | def publish(self, channel: str, message: bytes): ... |
21 | async def ping(self) -> None: ... | |
22 | async def set( | |
23 | self, | |
24 | key: str, | |
25 | value: Any, | |
26 | expire: Optional[int] = None, | |
27 | pexpire: Optional[int] = None, | |
28 | only_if_not_exists: bool = False, | |
29 | only_if_exists: bool = False, | |
30 | ) -> None: ... | |
31 | async def get(self, key: str) -> Any: ... | |
22 | 32 | |
23 | class SubscriberProtocol: | |
33 | class SubscriberProtocol(RedisProtocol): | |
24 | 34 | def __init__(self, *args, **kwargs): ... |
25 | 35 | password: Optional[str] |
26 | 36 | def subscribe(self, channels: Union[str, List[str]]): ... |
39 | 49 | convertNumbers: bool = ..., |
40 | 50 | ) -> RedisProtocol: ... |
41 | 51 | |
42 | class SubscriberFactory: | |
43 | def buildProtocol(self, addr): ... | |
44 | ||
45 | 52 | class ConnectionHandler: ... |
46 | 53 | |
47 | 54 | class RedisFactory: |
48 | 55 | continueTrying: bool |
49 | 56 | handler: RedisProtocol |
57 | pool: List[RedisProtocol] | |
58 | replyTimeout: Optional[int] | |
50 | 59 | def __init__( |
51 | 60 | self, |
52 | 61 | uuid: str, |
59 | 68 | replyTimeout: Optional[int] = None, |
60 | 69 | convertNumbers: Optional[int] = True, |
61 | 70 | ): ... |
71 | def buildProtocol(self, addr) -> RedisProtocol: ... | |
72 | ||
73 | class SubscriberFactory(RedisFactory): | |
74 | def __init__(self): ... |
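The stub above only declares call shapes for txredisapi; a hypothetical in-memory stand-in shows how the newly stubbed `ping`/`set`/`get` methods are meant to be called (no expiry timer is implemented, and this class is for illustration only):

```python
import asyncio
from typing import Any, Optional

class FakeRedisProtocol:
    """In-memory stand-in matching the stubbed txredisapi methods."""

    def __init__(self) -> None:
        self._data = {}

    async def ping(self) -> None:
        return None

    async def set(
        self,
        key: str,
        value: Any,
        expire: Optional[int] = None,
        pexpire: Optional[int] = None,
        only_if_not_exists: bool = False,
        only_if_exists: bool = False,
    ) -> None:
        # Honour the conditional-set flags; expiry args are accepted but ignored.
        if only_if_not_exists and key in self._data:
            return
        if only_if_exists and key not in self._data:
            return
        self._data[key] = value

    async def get(self, key: str) -> Any:
        return self._data.get(key)

async def demo() -> Any:
    r = FakeRedisProtocol()
    await r.ping()
    await r.set("k", b"v", only_if_not_exists=True)
    await r.set("k", b"w", only_if_not_exists=True)  # no-op: key already exists
    return await r.get("k")
```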
47 | 47 | except ImportError: |
48 | 48 | pass |
49 | 49 | |
50 | __version__ = "1.26.0" | |
50 | __version__ = "1.27.0" | |
51 | 51 | |
52 | 52 | if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): |
53 | 53 | # We import here so that we don't have to install a bunch of deps when |
41 | 41 | """ |
42 | 42 | if hs_config.form_secret is None: |
43 | 43 | raise ConfigError("form_secret not set in config") |
44 | if hs_config.public_baseurl is None: | |
45 | raise ConfigError("public_baseurl not set in config") | |
44 | 46 | |
45 | 47 | self._hmac_secret = hs_config.form_secret.encode("utf-8") |
46 | 48 | self._public_baseurl = hs_config.public_baseurl |
15 | 15 | import gc |
16 | 16 | import logging |
17 | 17 | import os |
18 | import platform | |
18 | 19 | import signal |
19 | 20 | import socket |
20 | 21 | import sys |
338 | 339 | # rest of time. Doing so means less work each GC (hopefully). |
339 | 340 | # |
340 | 341 | # This only works on CPython 3.7+
341 | if sys.version_info >= (3, 7): | |
342 | if platform.python_implementation() == "CPython" and sys.version_info >= (3, 7): | |
342 | 343 | gc.collect() |
343 | 344 | gc.freeze() |
344 | 345 |
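The new guard avoids calling `gc.freeze()` on non-CPython interpreters, where the function may not exist. A minimal sketch of the same pattern (`freeze_startup_objects` is an illustrative name, not Synapse's):

```python
import gc
import platform
import sys

def freeze_startup_objects() -> bool:
    """Move long-lived startup objects into the permanent generation so
    later collections skip them. gc.freeze() only exists on CPython 3.7+."""
    if platform.python_implementation() == "CPython" and sys.version_info >= (3, 7):
        gc.collect()  # collect garbage first, so only live objects are frozen
        gc.freeze()
        return True
    return False
```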
21 | 21 | from typing_extensions import ContextManager |
22 | 22 | |
23 | 23 | from twisted.internet import address |
24 | from twisted.web.resource import IResource | |
24 | 25 | |
25 | 26 | import synapse |
26 | 27 | import synapse.events |
89 | 90 | ToDeviceStream, |
90 | 91 | ) |
91 | 92 | from synapse.rest.admin import register_servlets_for_media_repo |
92 | from synapse.rest.client.v1 import events, room | |
93 | from synapse.rest.client.v1 import events, login, room | |
93 | 94 | from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet |
94 | from synapse.rest.client.v1.login import LoginRestServlet | |
95 | 95 | from synapse.rest.client.v1.profile import ( |
96 | 96 | ProfileAvatarURLRestServlet, |
97 | 97 | ProfileDisplaynameRestServlet, |
126 | 126 | from synapse.rest.client.versions import VersionsRestServlet |
127 | 127 | from synapse.rest.health import HealthResource |
128 | 128 | from synapse.rest.key.v2 import KeyApiV2Resource |
129 | from synapse.rest.synapse.client import build_synapse_client_resource_tree | |
129 | 130 | from synapse.server import HomeServer, cache_in_self |
130 | 131 | from synapse.storage.databases.main.censor_events import CensorEventsStore |
131 | 132 | from synapse.storage.databases.main.client_ips import ClientIpWorkerStore |
506 | 507 | site_tag = port |
507 | 508 | |
508 | 509 | # We always include a health resource. |
509 | resources = {"/health": HealthResource()} | |
510 | resources = {"/health": HealthResource()} # type: Dict[str, IResource] | |
510 | 511 | |
511 | 512 | for res in listener_config.http_options.resources: |
512 | 513 | for name in res.names: |
516 | 517 | resource = JsonResource(self, canonical_json=False) |
517 | 518 | |
518 | 519 | RegisterRestServlet(self).register(resource) |
519 | LoginRestServlet(self).register(resource) | |
520 | login.register_servlets(self, resource) | |
520 | 521 | ThreepidRestServlet(self).register(resource) |
521 | 522 | DevicesRestServlet(self).register(resource) |
522 | 523 | KeyQueryServlet(self).register(resource) |
556 | 557 | groups.register_servlets(self, resource) |
557 | 558 | |
558 | 559 | resources.update({CLIENT_API_PREFIX: resource}) |
560 | ||
561 | resources.update(build_synapse_client_resource_tree(self)) | |
559 | 562 | elif name == "federation": |
560 | 563 | resources.update({FEDERATION_PREFIX: TransportLayerServer(self)}) |
561 | 564 | elif name == "media": |
59 | 59 | from synapse.rest.admin import AdminRestResource |
60 | 60 | from synapse.rest.health import HealthResource |
61 | 61 | from synapse.rest.key.v2 import KeyApiV2Resource |
62 | from synapse.rest.synapse.client.pick_idp import PickIdpResource | |
63 | from synapse.rest.synapse.client.pick_username import pick_username_resource | |
62 | from synapse.rest.synapse.client import build_synapse_client_resource_tree | |
64 | 63 | from synapse.rest.well_known import WellKnownResource |
65 | 64 | from synapse.server import HomeServer |
66 | 65 | from synapse.storage import DataStore |
189 | 188 | "/_matrix/client/versions": client_resource, |
190 | 189 | "/.well-known/matrix/client": WellKnownResource(self), |
191 | 190 | "/_synapse/admin": AdminRestResource(self), |
192 | "/_synapse/client/pick_username": pick_username_resource(self), | |
193 | "/_synapse/client/pick_idp": PickIdpResource(self), | |
191 | **build_synapse_client_resource_tree(self), | |
194 | 192 | } |
195 | 193 | ) |
196 | ||
197 | if self.get_config().oidc_enabled: | |
198 | from synapse.rest.oidc import OIDCResource | |
199 | ||
200 | resources["/_synapse/oidc"] = OIDCResource(self) | |
201 | ||
202 | if self.get_config().saml2_enabled: | |
203 | from synapse.rest.saml2 import SAML2Resource | |
204 | ||
205 | resources["/_matrix/saml2"] = SAML2Resource(self) | |
206 | 194 | |
207 | 195 | if self.get_config().threepid_behaviour_email == ThreepidBehaviour.LOCAL: |
208 | 196 | from synapse.rest.synapse.client.password_reset import ( |
92 | 92 | |
93 | 93 | stats["daily_active_users"] = await hs.get_datastore().count_daily_users() |
94 | 94 | stats["monthly_active_users"] = await hs.get_datastore().count_monthly_users() |
95 | daily_active_e2ee_rooms = await hs.get_datastore().count_daily_active_e2ee_rooms() | |
96 | stats["daily_active_e2ee_rooms"] = daily_active_e2ee_rooms | |
97 | stats["daily_e2ee_messages"] = await hs.get_datastore().count_daily_e2ee_messages() | |
98 | daily_sent_e2ee_messages = await hs.get_datastore().count_daily_sent_e2ee_messages() | |
99 | stats["daily_sent_e2ee_messages"] = daily_sent_e2ee_messages | |
95 | 100 | stats["daily_active_rooms"] = await hs.get_datastore().count_daily_active_rooms() |
96 | 101 | stats["daily_messages"] = await hs.get_datastore().count_daily_messages() |
102 | daily_sent_messages = await hs.get_datastore().count_daily_sent_messages() | |
103 | stats["daily_sent_messages"] = daily_sent_messages | |
97 | 104 | |
98 | 105 | r30_results = await hs.get_datastore().count_r30_users() |
99 | 106 | for name, count in r30_results.items(): |
100 | 107 | stats["r30_users_" + name] = count |
101 | 108 | |
102 | daily_sent_messages = await hs.get_datastore().count_daily_sent_messages() | |
103 | stats["daily_sent_messages"] = daily_sent_messages | |
104 | 109 | stats["cache_factor"] = hs.config.caches.global_factor |
105 | 110 | stats["event_cache_size"] = hs.config.caches.event_cache_size |
106 | 111 |
17 | 17 | import argparse |
18 | 18 | import errno |
19 | 19 | import os |
20 | import time | |
21 | import urllib.parse | |
22 | 20 | from collections import OrderedDict |
23 | 21 | from hashlib import sha256 |
24 | 22 | from textwrap import dedent |
25 | from typing import Any, Callable, Iterable, List, MutableMapping, Optional | |
23 | from typing import Any, Iterable, List, MutableMapping, Optional | |
26 | 24 | |
27 | 25 | import attr |
28 | 26 | import jinja2 |
29 | 27 | import pkg_resources |
30 | 28 | import yaml |
29 | ||
30 | from synapse.util.templates import _create_mxc_to_http_filter, _format_ts_filter | |
31 | 31 | |
32 | 32 | |
33 | 33 | class ConfigError(Exception): |
202 | 202 | with open(file_path) as file_stream: |
203 | 203 | return file_stream.read() |
204 | 204 | |
205 | def read_template(self, filename: str) -> jinja2.Template: | |
206 | """Load a template file from disk. | |
207 | ||
208 | This function will attempt to load the given template from the default Synapse | |
209 | template directory. | |
210 | ||
211 | Files read are treated as Jinja templates. The template is not rendered yet | 
212 | and has autoescape enabled. | |
213 | ||
214 | Args: | |
215 | filename: A template filename to read. | |
216 | ||
217 | Raises: | |
218 | ConfigError: if the file's path is incorrect or otherwise cannot be read. | |
219 | ||
220 | Returns: | |
221 | A jinja2 template. | |
222 | """ | |
223 | return self.read_templates([filename])[0] | |
224 | ||
205 | 225 | def read_templates( |
206 | self, | |
207 | filenames: List[str], | |
208 | custom_template_directory: Optional[str] = None, | |
209 | autoescape: bool = False, | |
226 | self, filenames: List[str], custom_template_directory: Optional[str] = None, | |
210 | 227 | ) -> List[jinja2.Template]: |
211 | 228 | """Load a list of template files from disk using the given variables. |
212 | 229 | |
214 | 231 | template directory. If `custom_template_directory` is supplied, that directory |
215 | 232 | is tried first. |
216 | 233 | |
217 | Files read are treated as Jinja templates. These templates are not rendered yet. | |
234 | Files read are treated as Jinja templates. The templates are not rendered yet | |
235 | and have autoescape enabled. | |
218 | 236 | |
219 | 237 | Args: |
220 | 238 | filenames: A list of template filenames to read. |
222 | 240 | custom_template_directory: A directory to try to look for the templates |
223 | 241 | before using the default Synapse template directory instead. |
224 | 242 | |
225 | autoescape: Whether to autoescape variables before inserting them into the | |
226 | template. | |
227 | ||
228 | 243 | Raises: |
229 | 244 | ConfigError: if the file's path is incorrect or otherwise cannot be read. |
230 | 245 | |
231 | 246 | Returns: |
232 | 247 | A list of jinja2 templates. |
233 | 248 | """ |
234 | templates = [] | |
235 | 249 | search_directories = [self.default_template_dir] |
236 | 250 | |
237 | 251 | # The loader will first look in the custom template directory (if specified) for the |
247 | 261 | # Search the custom template directory as well |
248 | 262 | search_directories.insert(0, custom_template_directory) |
249 | 263 | |
264 | # TODO: switch to synapse.util.templates.build_jinja_env | |
250 | 265 | loader = jinja2.FileSystemLoader(search_directories) |
251 | env = jinja2.Environment(loader=loader, autoescape=autoescape) | |
266 | env = jinja2.Environment(loader=loader, autoescape=jinja2.select_autoescape(),) | |
252 | 267 | |
253 | 268 | # Update the environment with our custom filters |
254 | 269 | env.filters.update( |
258 | 273 | } |
259 | 274 | ) |
260 | 275 | |
261 | for filename in filenames: | |
262 | # Load the template | |
263 | template = env.get_template(filename) | |
264 | templates.append(template) | |
265 | ||
266 | return templates | |
267 | ||
268 | ||
269 | def _format_ts_filter(value: int, format: str): | |
270 | return time.strftime(format, time.localtime(value / 1000)) | |
271 | ||
272 | ||
273 | def _create_mxc_to_http_filter(public_baseurl: str) -> Callable: | |
274 | """Create and return a jinja2 filter that converts MXC urls to HTTP | |
275 | ||
276 | Args: | |
277 | public_baseurl: The public, accessible base URL of the homeserver | |
278 | """ | |
279 | ||
280 | def mxc_to_http_filter(value, width, height, resize_method="crop"): | |
281 | if value[0:6] != "mxc://": | |
282 | return "" | |
283 | ||
284 | server_and_media_id = value[6:] | |
285 | fragment = None | |
286 | if "#" in server_and_media_id: | |
287 | server_and_media_id, fragment = server_and_media_id.split("#", 1) | |
288 | fragment = "#" + fragment | |
289 | ||
290 | params = {"width": width, "height": height, "method": resize_method} | |
291 | return "%s_matrix/media/v1/thumbnail/%s?%s%s" % ( | |
292 | public_baseurl, | |
293 | server_and_media_id, | |
294 | urllib.parse.urlencode(params), | |
295 | fragment or "", | |
296 | ) | |
297 | ||
298 | return mxc_to_http_filter | |
276 | # Load the templates | |
277 | return [env.get_template(filename) for filename in filenames] | |
299 | 278 | |
300 | 279 | |
301 | 280 | class RootConfig: |
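The switch to `jinja2.select_autoescape()` above enables autoescaping based on the template's file extension (`.html`, `.htm`, `.xml` are escaped; others are not). A small sketch with an in-memory loader, assuming jinja2 is installed (the template names here are hypothetical):

```python
import jinja2  # third-party; assumed installed

env = jinja2.Environment(
    loader=jinja2.DictLoader({
        "notice.html": "{{ msg }}",
        "notice.txt": "{{ msg }}",
    }),
    # Escaping is decided per template name, as in read_templates above.
    autoescape=jinja2.select_autoescape(),
)

html = env.get_template("notice.html").render(msg="<b>hi</b>")  # escaped
text = env.get_template("notice.txt").render(msg="<b>hi</b>")   # not escaped
```

This is why customised HTML templates may need review after upgrading: variables that were previously inserted verbatim are now HTML-escaped.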
8 | 8 | consent_config, |
9 | 9 | database, |
10 | 10 | emailconfig, |
11 | experimental, | |
11 | 12 | groups, |
12 | 13 | jwt_config, |
13 | 14 | key, |
17 | 18 | password_auth_providers, |
18 | 19 | push, |
19 | 20 | ratelimiting, |
21 | redis, | |
20 | 22 | registration, |
21 | 23 | repository, |
22 | 24 | room_directory, |
47 | 49 | |
48 | 50 | class RootConfig: |
49 | 51 | server: server.ServerConfig |
52 | experimental: experimental.ExperimentalConfig | |
50 | 53 | tls: tls.TlsConfig |
51 | 54 | database: database.DatabaseConfig |
52 | 55 | logging: logger.LoggingConfig |
53 | ratelimit: ratelimiting.RatelimitConfig | |
56 | ratelimiting: ratelimiting.RatelimitConfig | |
54 | 57 | media: repository.ContentRepositoryConfig |
55 | 58 | captcha: captcha.CaptchaConfig |
56 | 59 | voip: voip.VoipConfig |
78 | 81 | roomdirectory: room_directory.RoomDirectoryConfig |
79 | 82 | thirdpartyrules: third_party_event_rules.ThirdPartyRulesConfig |
80 | 83 | tracer: tracer.TracerConfig |
84 | redis: redis.RedisConfig | |
81 | 85 | |
82 | 86 | config_classes: List = ... |
83 | 87 | def __init__(self) -> None: ... |
27 | 27 | "recaptcha_siteverify_api", |
28 | 28 | "https://www.recaptcha.net/recaptcha/api/siteverify", |
29 | 29 | ) |
30 | self.recaptcha_template = self.read_templates( | |
31 | ["recaptcha.html"], autoescape=True | |
32 | )[0] | |
30 | self.recaptcha_template = self.read_template("recaptcha.html") | |
33 | 31 | |
34 | 32 | def generate_config_section(self, **kwargs): |
35 | 33 | return """\ |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | from ._base import Config | |
15 | from ._base import Config, ConfigError | |
16 | 16 | |
17 | 17 | |
18 | 18 | class CasConfig(Config): |
29 | 29 | |
30 | 30 | if self.cas_enabled: |
31 | 31 | self.cas_server_url = cas_config["server_url"] |
32 | self.cas_service_url = cas_config["service_url"] | |
32 | ||
33 | # The public baseurl is required because it is used by the redirect | |
34 | # template. | |
35 | public_baseurl = self.public_baseurl | |
36 | if not public_baseurl: | |
37 | raise ConfigError("cas_config requires a public_baseurl to be set") | |
38 | ||
39 | # TODO Update this to a _synapse URL. | |
40 | self.cas_service_url = public_baseurl + "_matrix/client/r0/login/cas/ticket" | |
33 | 41 | self.cas_displayname_attribute = cas_config.get("displayname_attribute") |
34 | 42 | self.cas_required_attributes = cas_config.get("required_attributes") or {} |
35 | 43 | else: |
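The CAS hunk above replaces the user-supplied `service_url` with one derived from `public_baseurl`, which therefore becomes mandatory when CAS is enabled. A minimal sketch of that derivation, with `ValueError` standing in for Synapse's `ConfigError`:

```python
def cas_service_url(public_baseurl):
    # Mirrors the change above: service_url is no longer configurable; it is
    # computed from public_baseurl, so public_baseurl must be set.
    if not public_baseurl:
        raise ValueError("cas_config requires a public_baseurl to be set")
    return public_baseurl + "_matrix/client/r0/login/cas/ticket"

print(cas_service_url("https://example.com/"))
# https://example.com/_matrix/client/r0/login/cas/ticket
```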
52 | 60 | # |
53 | 61 | #server_url: "https://cas-server.com" |
54 | 62 | |
55 | # The public URL of the homeserver. | |
56 | # | |
57 | #service_url: "https://homeserver.domain.com:8448" | |
58 | ||
59 | 63 | # The attribute of the CAS response to use as the display name. |
60 | 64 | # |
61 | 65 | # If unset, no displayname will be set. |
88 | 88 | |
89 | 89 | def read_config(self, config, **kwargs): |
90 | 90 | consent_config = config.get("user_consent") |
91 | self.terms_template = self.read_templates(["terms.html"], autoescape=True)[0] | |
91 | self.terms_template = self.read_template("terms.html") | |
92 | 92 | |
93 | 93 | if consent_config is None: |
94 | 94 | return |
164 | 164 | missing = [] |
165 | 165 | if not self.email_notif_from: |
166 | 166 | missing.append("email.notif_from") |
167 | ||
168 | # public_baseurl is required to build password reset and validation links that | |
169 | # will be emailed to users | |
170 | if config.get("public_baseurl") is None: | |
171 | missing.append("public_baseurl") | |
167 | 172 | |
168 | 173 | if missing: |
169 | 174 | raise ConfigError( |
263 | 268 | if not self.email_notif_from: |
264 | 269 | missing.append("email.notif_from") |
265 | 270 | |
271 | if config.get("public_baseurl") is None: | |
272 | missing.append("public_baseurl") | |
273 | ||
266 | 274 | if missing: |
267 | 275 | raise ConfigError( |
268 | 276 | "email.enable_notifs is True but required keys are missing: %s" |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from synapse.config._base import Config | |
16 | from synapse.types import JsonDict | |
17 | ||
18 | ||
19 | class ExperimentalConfig(Config): | |
20 | """Config section for enabling experimental features""" | |
21 | ||
22 | section = "experimental" | |
23 | ||
24 | def read_config(self, config: JsonDict, **kwargs): | |
25 | experimental = config.get("experimental_features") or {} | |
26 | ||
27 | # MSC2858 (multiple SSO identity providers) | |
28 | self.msc2858_enabled = experimental.get("msc2858_enabled", False) # type: bool |
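The new `ExperimentalConfig` file above reads a single flag. A sketch of the same lookup applied to a parsed `homeserver.yaml` fragment (the dict literal stands in for the YAML loader):

```python
# The `or {}` guard means an explicit `experimental_features:` with no body
# behaves the same as omitting the section entirely.
config = {"experimental_features": {"msc2858_enabled": True}}
experimental = config.get("experimental_features") or {}
msc2858_enabled = experimental.get("msc2858_enabled", False)  # type: bool
print(msc2858_enabled)  # True
```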
23 | 23 | from .consent_config import ConsentConfig |
24 | 24 | from .database import DatabaseConfig |
25 | 25 | from .emailconfig import EmailConfig |
26 | from .experimental import ExperimentalConfig | |
26 | 27 | from .federation import FederationConfig |
27 | 28 | from .groups import GroupsConfig |
28 | 29 | from .jwt_config import JWTConfig |
56 | 57 | |
57 | 58 | config_classes = [ |
58 | 59 | ServerConfig, |
60 | ExperimentalConfig, | |
59 | 61 | TlsConfig, |
60 | 62 | FederationConfig, |
61 | 63 | CacheConfig, |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | import string | |
16 | from collections import Counter | |
17 | 17 | from typing import Iterable, Optional, Tuple, Type |
18 | 18 | |
19 | 19 | import attr |
42 | 42 | except DependencyException as e: |
43 | 43 | raise ConfigError(e.message) from e |
44 | 44 | |
45 | # check we don't have any duplicate idp_ids now. (The SSO handler will also | |
46 | # check for duplicates when the REST listeners get registered, but that happens | |
47 | # after synapse has forked so doesn't give nice errors.) | |
48 | c = Counter([i.idp_id for i in self.oidc_providers]) | |
49 | for idp_id, count in c.items(): | |
50 | if count > 1: | |
51 | raise ConfigError( | |
52 | "Multiple OIDC providers have the idp_id %r." % idp_id | |
53 | ) | |
54 | ||
45 | 55 | public_baseurl = self.public_baseurl |
46 | self.oidc_callback_url = public_baseurl + "_synapse/oidc/callback" | |
56 | if public_baseurl is None: | |
57 | raise ConfigError("oidc_config requires a public_baseurl to be set") | |
58 | self.oidc_callback_url = public_baseurl + "_synapse/client/oidc/callback" | |
47 | 59 | |
48 | 60 | @property |
49 | 61 | def oidc_enabled(self) -> bool: |
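The OIDC hunk above adds two start-up checks: duplicate `idp_id` detection via `Counter`, and a hard requirement on `public_baseurl` now that the callback moved to `_synapse/client/oidc/callback`. A combined sketch (the `check_oidc` helper is hypothetical; `ValueError` stands in for `ConfigError`):

```python
from collections import Counter

def check_oidc(idp_ids, public_baseurl):
    # Fail fast on duplicate idp_ids, as the comment in the diff notes: doing
    # this before the REST listeners register gives a readable error pre-fork.
    for idp_id, count in Counter(idp_ids).items():
        if count > 1:
            raise ValueError("Multiple OIDC providers have the idp_id %r." % idp_id)
    if public_baseurl is None:
        raise ValueError("oidc_config requires a public_baseurl to be set")
    return public_baseurl + "_synapse/client/oidc/callback"

print(check_oidc(["oidc-github", "oidc-keycloak"], "https://example.com/"))
# https://example.com/_synapse/client/oidc/callback
```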
67 | 79 | # offer the user a choice of login mechanisms. |
68 | 80 | # |
69 | 81 | # idp_icon: An optional icon for this identity provider, which is presented |
70 | # by identity picker pages. If given, must be an MXC URI of the format | |
71 | # mxc://<server-name>/<media-id>. (An easy way to obtain such an MXC URI | |
72 | # is to upload an image to an (unencrypted) room and then copy the "url" | |
73 | # from the source of the event.) | |
82 | # by clients and Synapse's own IdP picker page. If given, must be an | |
83 | # MXC URI of the format mxc://<server-name>/<media-id>. (An easy way to | |
84 | # obtain such an MXC URI is to upload an image to an (unencrypted) room | |
85 | # and then copy the "url" from the source of the event.) | |
86 | # | |
87 | # idp_brand: An optional brand for this identity provider, allowing clients | |
88 | # to style the login flow according to the identity provider in question. | |
89 | # See the spec for possible options here. | |
74 | 90 | # |
75 | 91 | # discover: set to 'false' to disable the use of the OIDC discovery mechanism |
76 | 92 | # to discover endpoints. Defaults to true. |
131 | 147 | # |
132 | 148 | # For the default provider, the following settings are available: |
133 | 149 | # |
134 | # sub: name of the claim containing a unique identifier for the | |
135 | # user. Defaults to 'sub', which OpenID Connect compliant | |
136 | # providers should provide. | |
150 | # subject_claim: name of the claim containing a unique identifier | |
151 | # for the user. Defaults to 'sub', which OpenID Connect | |
152 | # compliant providers should provide. | |
137 | 153 | # |
138 | 154 | # localpart_template: Jinja2 template for the localpart of the MXID. |
139 | 155 | # If this is not set, the user will be prompted to choose their |
140 | # own username. | |
156 | # own username (see 'sso_auth_account_details.html' in the 'sso' | |
157 | # section of this file). | |
141 | 158 | # |
142 | 159 | # display_name_template: Jinja2 template for the display name to set |
143 | 160 | # on first login. If unset, no displayname will be set. |
161 | # | |
162 | # email_template: Jinja2 template for the email address of the user. | |
163 | # If unset, no email address will be added to the account. | |
144 | 164 | # |
145 | 165 | # extra_attributes: a map of Jinja2 templates for extra attributes |
146 | 166 | # to send back to the client during login. |
177 | 197 | # userinfo_endpoint: "https://accounts.example.com/userinfo" |
178 | 198 | # jwks_uri: "https://accounts.example.com/.well-known/jwks.json" |
179 | 199 | # skip_verification: true |
200 | # user_mapping_provider: | |
201 | # config: | |
202 | # subject_claim: "id" | |
203 | # localpart_template: "{{ user.login }}" | |
204 | # display_name_template: "{{ user.name }}" | |
205 | # email_template: "{{ user.email }}" | |
180 | 206 | |
181 | 207 | # For use with Keycloak |
182 | 208 | # |
191 | 217 | # |
192 | 218 | #- idp_id: github |
193 | 219 | # idp_name: Github |
220 | # idp_brand: org.matrix.github | |
194 | 221 | # discover: false |
195 | 222 | # issuer: "https://github.com/" |
196 | 223 | # client_id: "your-client-id" # TO BE FILLED |
214 | 241 | "type": "object", |
215 | 242 | "required": ["issuer", "client_id", "client_secret"], |
216 | 243 | "properties": { |
217 | # TODO: fix the maxLength here depending on what MSC2528 decides | |
218 | # remember that we prefix the ID given here with `oidc-` | |
219 | "idp_id": {"type": "string", "minLength": 1, "maxLength": 128}, | |
244 | "idp_id": { | |
245 | "type": "string", | |
246 | "minLength": 1, | |
247 | # MSC2858 allows a maxlen of 255, but we prefix with "oidc-" | |
248 | "maxLength": 250, | |
249 | "pattern": "^[A-Za-z0-9._~-]+$", | |
250 | }, | |
220 | 251 | "idp_name": {"type": "string"}, |
221 | 252 | "idp_icon": {"type": "string"}, |
253 | "idp_brand": { | |
254 | "type": "string", | |
255 | # MSC2758-style namespaced identifier | |
256 | "minLength": 1, | |
257 | "maxLength": 255, | |
258 | "pattern": "^[a-z][a-z0-9_.-]*$", | |
259 | }, | |
222 | 260 | "discover": {"type": "boolean"}, |
223 | 261 | "issuer": {"type": "string"}, |
224 | 262 | "client_id": {"type": "string"}, |
337 | 375 | config_path + ("user_mapping_provider", "module"), |
338 | 376 | ) |
339 | 377 | |
340 | # MSC2858 will apply certain limits in what can be used as an IdP id, so let's | |
341 | # enforce those limits now. | |
342 | # TODO: factor out this stuff to a generic function | |
343 | 378 | idp_id = oidc_config.get("idp_id", "oidc") |
344 | ||
345 | # TODO: update this validity check based on what MSC2858 decides. | |
346 | valid_idp_chars = set(string.ascii_lowercase + string.digits + "-._") | |
347 | ||
348 | if any(c not in valid_idp_chars for c in idp_id): | |
349 | raise ConfigError( | |
350 | 'idp_id may only contain a-z, 0-9, "-", ".", "_"', | |
351 | config_path + ("idp_id",), | |
352 | ) | |
353 | ||
354 | if idp_id[0] not in string.ascii_lowercase: | |
355 | raise ConfigError( | |
356 | "idp_id must start with a-z", config_path + ("idp_id",), | |
357 | ) | |
358 | 379 | |
359 | 380 | # prefix the given IDP with a prefix specific to the SSO mechanism, to avoid |
360 | 381 | # clashes with other mechs (such as SAML, CAS). |
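The hunk above moves `idp_id` validation out of imperative checks and into the JSON schema (`minLength`/`maxLength`/`pattern`). The new constraints can be sketched as a plain predicate (`idp_id_valid` is a hypothetical helper, not Synapse API):

```python
import re

# The schema pattern from the diff: MSC2858 allows up to 255 characters, but
# Synapse reserves room for its "oidc-" prefix, hence the 250 cap.
IDP_ID_RE = re.compile(r"^[A-Za-z0-9._~-]+$")

def idp_id_valid(idp_id):
    return 1 <= len(idp_id) <= 250 and IDP_ID_RE.fullmatch(idp_id) is not None

print(idp_id_valid("github"))   # True
print(idp_id_valid("bad id!"))  # False
```

Note the schema is laxer than the removed code in one respect: it now admits uppercase letters and `~`, while the old check required a lowercase first character.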
381 | 402 | idp_id=idp_id, |
382 | 403 | idp_name=oidc_config.get("idp_name", "OIDC"), |
383 | 404 | idp_icon=idp_icon, |
405 | idp_brand=oidc_config.get("idp_brand"), | |
384 | 406 | discover=oidc_config.get("discover", True), |
385 | 407 | issuer=oidc_config["issuer"], |
386 | 408 | client_id=oidc_config["client_id"], |
411 | 433 | # Optional MXC URI for icon for this IdP. |
412 | 434 | idp_icon = attr.ib(type=Optional[str]) |
413 | 435 | |
436 | # Optional brand identifier for this IdP. | |
437 | idp_brand = attr.ib(type=Optional[str]) | |
438 | ||
414 | 439 | # whether the OIDC discovery mechanism is used to discover endpoints |
415 | 440 | discover = attr.ib(type=bool) |
416 | 441 |
23 | 23 | defaults={"per_second": 0.17, "burst_count": 3.0}, |
24 | 24 | ): |
25 | 25 | self.per_second = config.get("per_second", defaults["per_second"]) |
26 | self.burst_count = config.get("burst_count", defaults["burst_count"]) | |
26 | self.burst_count = int(config.get("burst_count", defaults["burst_count"])) | |
27 | 27 | |
28 | 28 | |
29 | 29 | class FederationRateLimitConfig: |
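The one-line change above coerces `burst_count` to `int`. A sketch of the read path showing why (the `read_ratelimit` function is illustrative, not Synapse API):

```python
def read_ratelimit(config, defaults):
    # Mirrors RateLimitConfig above; burst_count is coerced so a YAML value
    # like 5.0 no longer leaks a float into the rate limiter.
    per_second = config.get("per_second", defaults["per_second"])
    burst_count = int(config.get("burst_count", defaults["burst_count"]))
    return per_second, burst_count

print(read_ratelimit({}, {"per_second": 0.003, "burst_count": 5.0}))
# (0.003, 5)
```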
99 | 99 | self.rc_joins_remote = RateLimitConfig( |
100 | 100 | config.get("rc_joins", {}).get("remote", {}), |
101 | 101 | defaults={"per_second": 0.01, "burst_count": 3}, |
102 | ) | |
103 | ||
104 | self.rc_3pid_validation = RateLimitConfig( | |
105 | config.get("rc_3pid_validation") or {}, | |
106 | defaults={"per_second": 0.003, "burst_count": 5}, | |
107 | ) | |
108 | ||
109 | self.rc_invites_per_room = RateLimitConfig( | |
110 | config.get("rc_invites", {}).get("per_room", {}), | |
111 | defaults={"per_second": 0.3, "burst_count": 10}, | |
112 | ) | |
113 | self.rc_invites_per_user = RateLimitConfig( | |
114 | config.get("rc_invites", {}).get("per_user", {}), | |
115 | defaults={"per_second": 0.003, "burst_count": 5}, | |
102 | 116 | ) |
103 | 117 | |
104 | 118 | def generate_config_section(self, **kwargs): |
130 | 144 | # users are joining rooms the server is already in (this is cheap) vs |
131 | 145 | # "remote" for when users are trying to join rooms not on the server (which |
132 | 146 | # can be more expensive) |
147 | # - one for ratelimiting how often a user or IP can attempt to validate a 3PID. | |
148 | # - two for ratelimiting how often invites can be sent in a room or to a | |
149 | # specific user. | |
133 | 150 | # |
134 | 151 | # The defaults are as shown below. |
135 | 152 | # |
163 | 180 | # remote: |
164 | 181 | # per_second: 0.01 |
165 | 182 | # burst_count: 3 |
166 | ||
183 | # | |
184 | #rc_3pid_validation: | |
185 | # per_second: 0.003 | |
186 | # burst_count: 5 | |
187 | # | |
188 | #rc_invites: | |
189 | # per_room: | |
190 | # per_second: 0.3 | |
191 | # burst_count: 10 | |
192 | # per_user: | |
193 | # per_second: 0.003 | |
194 | # burst_count: 5 | |
167 | 195 | |
168 | 196 | # Ratelimiting settings for incoming federation |
169 | 197 | # |
48 | 48 | |
49 | 49 | self.startup_job_max_delta = self.period * 10.0 / 100.0 |
50 | 50 | |
51 | if self.renew_by_email_enabled: | |
52 | if "public_baseurl" not in synapse_config: | |
53 | raise ConfigError("Can't send renewal emails without 'public_baseurl'") | |
54 | ||
51 | 55 | template_dir = config.get("template_dir") |
52 | 56 | |
53 | 57 | if not template_dir: |
104 | 108 | account_threepid_delegates = config.get("account_threepid_delegates") or {} |
105 | 109 | self.account_threepid_delegate_email = account_threepid_delegates.get("email") |
106 | 110 | self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn") |
111 | if self.account_threepid_delegate_msisdn and not self.public_baseurl: | |
112 | raise ConfigError( | |
113 | "The configuration option `public_baseurl` is required if " | |
114 | "`account_threepid_delegate.msisdn` is set, such that " | |
115 | "clients know where to submit validation tokens to. Please " | |
116 | "configure `public_baseurl`." | |
117 | ) | |
107 | 118 | |
108 | 119 | self.default_identity_server = config.get("default_identity_server") |
109 | 120 | self.allow_guest_access = config.get("allow_guest_access", False) |
175 | 186 | self.session_lifetime = session_lifetime |
176 | 187 | |
177 | 188 | # The success template used during fallback auth. |
178 | self.fallback_success_template = self.read_templates( | |
179 | ["auth_success.html"], autoescape=True | |
180 | )[0] | |
189 | self.fallback_success_template = self.read_template("auth_success.html") | |
181 | 190 | |
182 | 191 | def generate_config_section(self, generate_secrets=False, **kwargs): |
183 | 192 | if generate_secrets: |
228 | 237 | # send an email to the account's email address with a renewal link. By |
229 | 238 | # default, no such emails are sent. |
230 | 239 | # |
231 | # If you enable this setting, you will also need to fill out the 'email' | |
232 | # configuration section. You should also check that 'public_baseurl' is set | |
233 | # correctly. | |
240 | # If you enable this setting, you will also need to fill out the 'email' and | |
241 | # 'public_baseurl' configuration sections. | |
234 | 242 | # |
235 | 243 | #renew_at: 1w |
236 | 244 | |
321 | 329 | # The identity server which we suggest that clients should use when users log |
322 | 330 | # in on this server. |
323 | 331 | # |
324 | # (By default, no suggestion is made, so it is left up to the client.) | |
332 | # (By default, no suggestion is made, so it is left up to the client. | |
333 | # This setting is ignored unless public_baseurl is also set.) | |
325 | 334 | # |
326 | 335 | #default_identity_server: https://matrix.org |
327 | 336 | |
345 | 354 | # Servers handling these requests must answer the `/requestToken` endpoints defined
345 | 354 | # Servers handling these requests must answer the `/requestToken` endpoints defined
346 | 355 | # by the Matrix Identity Service API specification: |
347 | 356 | # https://matrix.org/docs/spec/identity_service/latest |
357 | # | |
358 | # If a delegate is specified, the config option public_baseurl must also be filled out. | |
348 | 359 | # |
349 | 360 | account_threepid_delegates: |
350 | 361 | #email: https://example.com # Delegate email sending to example.com |
188 | 188 | import saml2 |
189 | 189 | |
190 | 190 | public_baseurl = self.public_baseurl |
191 | if public_baseurl is None: | |
192 | raise ConfigError("saml2_config requires a public_baseurl to be set") | |
191 | 193 | |
192 | 194 | if self.saml2_grandfathered_mxid_source_attribute: |
193 | 195 | optional_attributes.add(self.saml2_grandfathered_mxid_source_attribute) |
194 | 196 | optional_attributes -= required_attributes |
195 | 197 | |
196 | metadata_url = public_baseurl + "_matrix/saml2/metadata.xml" | |
197 | response_url = public_baseurl + "_matrix/saml2/authn_response" | |
198 | metadata_url = public_baseurl + "_synapse/client/saml2/metadata.xml" | |
199 | response_url = public_baseurl + "_synapse/client/saml2/authn_response" | |
198 | 200 | return { |
199 | 201 | "entityid": metadata_url, |
200 | 202 | "service": { |
232 | 234 | # enable SAML login. |
233 | 235 | # |
234 | 236 | # Once SAML support is enabled, a metadata file will be exposed at |
235 | # https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to | |
237 | # https://<server>:<port>/_synapse/client/saml2/metadata.xml, which you may be able to | |
236 | 238 | # use to configure your SAML IdP with. Alternatively, you can manually configure |
237 | 239 | # the IdP to use an ACS location of |
238 | # https://<server>:<port>/_matrix/saml2/authn_response. | |
240 | # https://<server>:<port>/_synapse/client/saml2/authn_response. | |
239 | 241 | # |
240 | 242 | saml2_config: |
241 | 243 | # `sp_config` is the configuration for the pysaml2 Service Provider. |
160 | 160 | self.print_pidfile = config.get("print_pidfile") |
161 | 161 | self.user_agent_suffix = config.get("user_agent_suffix") |
162 | 162 | self.use_frozen_dicts = config.get("use_frozen_dicts", False) |
163 | self.public_baseurl = config.get("public_baseurl") or "https://%s/" % ( | |
164 | self.server_name, | |
165 | ) | |
166 | if self.public_baseurl[-1] != "/": | |
167 | self.public_baseurl += "/" | |
163 | self.public_baseurl = config.get("public_baseurl") | |
168 | 164 | |
169 | 165 | # Whether to enable user presence. |
170 | 166 | self.use_presence = config.get("use_presence", True) |
320 | 316 | # Always blacklist 0.0.0.0, :: |
321 | 317 | self.federation_ip_range_blacklist.update(["0.0.0.0", "::"]) |
322 | 318 | |
319 | if self.public_baseurl is not None: | |
320 | if self.public_baseurl[-1] != "/": | |
321 | self.public_baseurl += "/" | |
323 | 322 | self.start_pushers = config.get("start_pushers", True) |
324 | 323 | |
325 | 324 | # (undocumented) option for torturing the worker-mode replication a bit, |
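The server-config hunks above stop defaulting `public_baseurl` to `https://<server_name>/` (reverting the v1.26.0 behaviour noted in the changelog), but still normalise a configured value to end with a slash. A sketch of the resulting logic:

```python
def normalise_public_baseurl(public_baseurl):
    # public_baseurl is no longer guessed from server_name; when it is set,
    # it is still normalised to end with exactly one trailing slash.
    if public_baseurl is not None and public_baseurl[-1] != "/":
        public_baseurl += "/"
    return public_baseurl

print(normalise_public_baseurl("https://example.com"))  # https://example.com/
print(normalise_public_baseurl(None))                   # None
```

Leaving the value as `None` is what lets the CAS, OIDC, SAML, and email config sections above raise their own explicit "requires a public_baseurl" errors instead of silently using a wrong guess.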
747 | 746 | # Otherwise, it should be the URL to reach Synapse's client HTTP listener (see |
748 | 747 | # 'listeners' below). |
749 | 748 | # |
750 | # If this is left unset, it defaults to 'https://<server_name>/'. (Note that | |
751 | # that will not work unless you configure Synapse or a reverse-proxy to listen | |
752 | # on port 443.) | |
753 | # | |
754 | 749 | #public_baseurl: https://example.com/ |
755 | 750 | |
756 | 751 | # Set the soft limit on the number of file descriptors synapse can use |
26 | 26 | sso_config = config.get("sso") or {} # type: Dict[str, Any] |
27 | 27 | |
28 | 28 | # The sso-specific template_dir |
29 | template_dir = sso_config.get("template_dir") | |
29 | self.sso_template_dir = sso_config.get("template_dir") | |
30 | 30 | |
31 | 31 | # Read templates from disk |
32 | 32 | ( |
47 | 47 | "sso_auth_success.html", |
48 | 48 | "sso_auth_bad_user.html", |
49 | 49 | ], |
50 | template_dir, | |
50 | self.sso_template_dir, | |
51 | 51 | ) |
52 | 52 | |
53 | 53 | # These templates have no placeholders, so render them here |
63 | 63 | # gracefully to the client). This would make it pointless to ask the user for |
64 | 64 | # confirmation, since the URL the confirmation page would be showing wouldn't be |
65 | 65 | # the client's. |
66 | login_fallback_url = self.public_baseurl + "_matrix/static/client/login" | |
67 | self.sso_client_whitelist.append(login_fallback_url) | |
66 | # public_baseurl is an optional setting, so we only add the fallback's URL to the | |
67 | # list if it's provided (because we can't figure out what that URL is otherwise). | |
68 | if self.public_baseurl: | |
69 | login_fallback_url = self.public_baseurl + "_matrix/static/client/login" | |
70 | self.sso_client_whitelist.append(login_fallback_url) | |
68 | 71 | |
69 | 72 | def generate_config_section(self, **kwargs): |
70 | 73 | return """\ |
82 | 85 | # phishing attacks from evil.site. To avoid this, include a slash after the |
83 | 86 | # hostname: "https://my.client/". |
84 | 87 | # |
85 | # The login fallback page (used by clients that don't natively support the | |
86 | # required login flows) is automatically whitelisted in addition to any URLs | |
87 | # in this list. | |
88 | # If public_baseurl is set, then the login fallback page (used by clients | |
89 | # that don't natively support the required login flows) is whitelisted in | |
90 | # addition to any URLs in this list. | |
88 | 91 | # |
89 | 92 | # By default, this list is empty. |
90 | 93 | # |
105 | 108 | # |
106 | 109 | # When rendering, this template is given the following variables: |
107 | 110 | # * redirect_url: the URL that the user will be redirected to after |
108 | # login. Needs manual escaping (see | |
109 | # https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping). | |
111 | # login. | |
110 | 112 | # |
111 | 113 | # * server_name: the homeserver's name. |
112 | 114 | # |
113 | 115 | # * providers: a list of available Identity Providers. Each element is |
114 | 116 | # an object with the following attributes: |
117 | # | |
115 | 118 | # * idp_id: unique identifier for the IdP |
116 | 119 | # * idp_name: user-facing name for the IdP |
120 | # * idp_icon: if specified in the IdP config, an MXC URI for an icon | |
121 | # for the IdP | |
122 | # * idp_brand: if specified in the IdP config, a textual identifier | |
123 | # for the brand of the IdP | |
117 | 124 | # |
118 | 125 | # The rendered HTML page should contain a form which submits its results |
119 | 126 | # back as a GET request, with the following query parameters: |
123 | 130 | # |
124 | 131 | # * idp: the 'idp_id' of the chosen IDP. |
125 | 132 | # |
133 | # * HTML page to prompt new users to enter a userid and confirm other | |
134 | # details: 'sso_auth_account_details.html'. This is only shown if the | |
135 | # SSO implementation (with any user_mapping_provider) does not return | |
136 | # a localpart. | |
137 | # | |
138 | # When rendering, this template is given the following variables: | |
139 | # | |
140 | # * server_name: the homeserver's name. | |
141 | # | |
142 | # * idp: details of the SSO Identity Provider that the user logged in | |
143 | # with: an object with the following attributes: | |
144 | # | |
145 | # * idp_id: unique identifier for the IdP | |
146 | # * idp_name: user-facing name for the IdP | |
147 | # * idp_icon: if specified in the IdP config, an MXC URI for an icon | |
148 | # for the IdP | |
149 | # * idp_brand: if specified in the IdP config, a textual identifier | |
150 | # for the brand of the IdP | |
151 | # | |
152 | # * user_attributes: an object containing details about the user that | |
153 | # we received from the IdP. May have the following attributes: | |
154 | # | |
155 | # * display_name: the user's display_name | |
156 | # * emails: a list of email addresses | |
157 | # | |
158 | # The template should render a form which submits the following fields: | |
159 | # | |
160 | # * username: the localpart of the user's chosen user id | |
161 | # | |
162 | # * HTML page allowing the user to consent to the server's terms and | |
163 | # conditions. This is only shown for new users, and only if | |
164 | # `user_consent.require_at_registration` is set. | |
165 | # | |
166 | # When rendering, this template is given the following variables: | |
167 | # | |
168 | # * server_name: the homeserver's name. | |
169 | # | |
170 | # * user_id: the user's matrix proposed ID. | |
171 | # | |
172 | # * user_profile.display_name: the user's proposed display name, if any. | |
173 | # | |
174 | # * consent_version: the version of the terms that the user will be | |
175 | # shown | |
176 | # | |
177 | # * terms_url: a link to the page showing the terms. | |
178 | # | |
179 | # The template should render a form which submits the following fields: | |
180 | # | |
181 | # * accepted_version: the version of the terms accepted by the user | |
182 | # (ie, 'consent_version' from the input variables). | |
183 | # | |
126 | 184 | # * HTML page for a confirmation step before redirecting back to the client |
127 | 185 | # with the login token: 'sso_redirect_confirm.html'. |
128 | 186 | # |
129 | # When rendering, this template is given three variables: | |
130 | # * redirect_url: the URL the user is about to be redirected to. Needs | |
131 | # manual escaping (see | |
132 | # https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping). | |
187 | # When rendering, this template is given the following variables: | |
188 | # | |
189 | # * redirect_url: the URL the user is about to be redirected to. | |
133 | 190 | # |
134 | 191 | # * display_url: the same as `redirect_url`, but with the query |
135 | 192 | # parameters stripped. The intention is to have a |
136 | 193 | # human-readable URL to show to users, not to use it as |
137 | # the final address to redirect to. Needs manual escaping | |
138 | # (see https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping). | |
139 | # | |
140 | # * server_name: the homeserver's name. | |
194 | # the final address to redirect to. | |
195 | # | |
196 | # * server_name: the homeserver's name. | |
197 | # | |
198 | # * new_user: a boolean indicating whether this is the user's first time | |
199 | # logging in. | |
200 | # | |
201 | # * user_id: the user's matrix ID. | |
202 | # | |
203 | # * user_profile.avatar_url: an MXC URI for the user's avatar, if any. | |
204 | # None if the user has not set an avatar. | |
205 | # | |
206 | # * user_profile.display_name: the user's display name. None if the user | |
207 | # has not set a display name. | |
141 | 208 | # |
142 | 209 | # * HTML page which notifies the user that they are authenticating to confirm |
143 | 210 | # an operation on their account during the user interactive authentication |
144 | 211 | # process: 'sso_auth_confirm.html'. |
145 | 212 | # |
146 | 213 | # When rendering, this template is given the following variables: |
147 | # * redirect_url: the URL the user is about to be redirected to. Needs | |
148 | # manual escaping (see | |
149 | # https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping). | |
214 | # * redirect_url: the URL the user is about to be redirected to. | |
150 | 215 | # |
151 | 216 | # * description: the operation which the user is being asked to confirm |
217 | # | |
218 | # * idp: details of the Identity Provider that we will use to confirm | |
219 | # the user's identity: an object with the following attributes: | |
220 | # | |
221 | # * idp_id: unique identifier for the IdP | |
222 | # * idp_name: user-facing name for the IdP | |
223 | # * idp_icon: if specified in the IdP config, an MXC URI for an icon | |
224 | # for the IdP | |
225 | # * idp_brand: if specified in the IdP config, a textual identifier | |
226 | # for the brand of the IdP | |
152 | 227 | # |
153 | 228 | # * HTML page shown after a successful user interactive authentication session: |
154 | 229 | # 'sso_auth_success.html'. |
124 | 124 | self._no_verify_ssl_context = _no_verify_ssl.getContext() |
125 | 125 | self._no_verify_ssl_context.set_info_callback(_context_info_cb) |
126 | 126 | |
127 | self._should_verify = self._config.federation_verify_certificates | |
128 | ||
129 | self._federation_certificate_verification_whitelist = ( | |
130 | self._config.federation_certificate_verification_whitelist | |
131 | ) | |
132 | ||
127 | 133 | def get_options(self, host: bytes): |
128 | ||
129 | 134 | # IPolicyForHTTPS.get_options takes bytes, but we want to compare |
130 | 135 | # against the str whitelist. The hostnames in the whitelist are already |
131 | 136 | # IDNA-encoded like the hosts will be here. |
132 | 137 | ascii_host = host.decode("ascii") |
133 | 138 | |
134 | 139 | # Check if certificate verification has been enabled |
135 | should_verify = self._config.federation_verify_certificates | |
140 | should_verify = self._should_verify | |
136 | 141 | |
137 | 142 | # Check if we've disabled certificate verification for this host |
138 | if should_verify: | |
139 | for regex in self._config.federation_certificate_verification_whitelist: | |
143 | if self._should_verify: | |
144 | for regex in self._federation_certificate_verification_whitelist: | |
140 | 145 | if regex.match(ascii_host): |
141 | 146 | should_verify = False |
142 | 147 | break |
17 | 17 | import itertools |
18 | 18 | import logging |
19 | 19 | from typing import ( |
20 | TYPE_CHECKING, | |
20 | 21 | Any, |
21 | 22 | Awaitable, |
22 | 23 | Callable, |
25 | 26 | List, |
26 | 27 | Mapping, |
27 | 28 | Optional, |
28 | Sequence, | |
29 | 29 | Tuple, |
30 | 30 | TypeVar, |
31 | 31 | Union, |
60 | 60 | from synapse.util.caches.expiringcache import ExpiringCache |
61 | 61 | from synapse.util.retryutils import NotRetryingDestination |
62 | 62 | |
63 | if TYPE_CHECKING: | |
64 | from synapse.app.homeserver import HomeServer | |
65 | ||
63 | 66 | logger = logging.getLogger(__name__) |
64 | 67 | |
65 | 68 | sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["type"]) |
79 | 82 | |
80 | 83 | |
81 | 84 | class FederationClient(FederationBase): |
82 | def __init__(self, hs): | |
85 | def __init__(self, hs: "HomeServer"): | |
83 | 86 | super().__init__(hs) |
84 | 87 | |
85 | self.pdu_destination_tried = {} | |
88 | self.pdu_destination_tried = {} # type: Dict[str, Dict[str, int]] | |
86 | 89 | self._clock.looping_call(self._clear_tried_cache, 60 * 1000) |
87 | 90 | self.state = hs.get_state_handler() |
88 | 91 | self.transport_layer = hs.get_federation_transport_client() |
115 | 118 | self.pdu_destination_tried[event_id] = destination_dict |
116 | 119 | |
117 | 120 | @log_function |
118 | def make_query( | |
121 | async def make_query( | |
119 | 122 | self, |
120 | destination, | |
121 | query_type, | |
122 | args, | |
123 | retry_on_dns_fail=False, | |
124 | ignore_backoff=False, | |
125 | ): | |
123 | destination: str, | |
124 | query_type: str, | |
125 | args: dict, | |
126 | retry_on_dns_fail: bool = False, | |
127 | ignore_backoff: bool = False, | |
128 | ) -> JsonDict: | |
126 | 129 | """Sends a federation query to a remote homeserver of the given type
127 | 130 | and arguments. |
128 | 131 | |
129 | 132 | Args: |
130 | destination (str): Domain name of the remote homeserver | |
131 | query_type (str): Category of the query type; should match the | |
133 | destination: Domain name of the remote homeserver | |
134 | query_type: Category of the query type; should match the | |
132 | 135 | handler name used in register_query_handler(). |
133 | args (dict): Mapping of strings to strings containing the details | |
136 | args: Mapping of strings to strings containing the details | |
134 | 137 | of the query request. |
135 | ignore_backoff (bool): true to ignore the historical backoff data | |
138 | ignore_backoff: true to ignore the historical backoff data | |
136 | 139 | and try the request anyway. |
137 | 140 | |
138 | 141 | Returns: |
139 | a Awaitable which will eventually yield a JSON object from the | |
140 | response | |
142 | The JSON object from the response | |
141 | 143 | """ |
142 | 144 | sent_queries_counter.labels(query_type).inc() |
143 | 145 | |
144 | return self.transport_layer.make_query( | |
146 | return await self.transport_layer.make_query( | |
145 | 147 | destination, |
146 | 148 | query_type, |
147 | 149 | args, |
150 | 152 | ) |
151 | 153 | |
152 | 154 | @log_function |
153 | def query_client_keys(self, destination, content, timeout): | |
155 | async def query_client_keys( | |
156 | self, destination: str, content: JsonDict, timeout: int | |
157 | ) -> JsonDict: | |
154 | 158 | """Query device keys for a device hosted on a remote server. |
155 | 159 | |
156 | 160 | Args: |
157 | destination (str): Domain name of the remote homeserver | |
158 | content (dict): The query content. | |
159 | ||
160 | Returns: | |
161 | an Awaitable which will eventually yield a JSON object from the | |
162 | response | |
161 | destination: Domain name of the remote homeserver | |
162 | content: The query content. | |
163 | ||
164 | Returns: | |
165 | The JSON object from the response | |
163 | 166 | """ |
164 | 167 | sent_queries_counter.labels("client_device_keys").inc() |
165 | return self.transport_layer.query_client_keys(destination, content, timeout) | |
168 | return await self.transport_layer.query_client_keys( | |
169 | destination, content, timeout | |
170 | ) | |
166 | 171 | |
167 | 172 | @log_function |
168 | def query_user_devices(self, destination, user_id, timeout=30000): | |
173 | async def query_user_devices( | |
174 | self, destination: str, user_id: str, timeout: int = 30000 | |
175 | ) -> JsonDict: | |
169 | 176 | """Query the device keys for the given user ID hosted on a remote
170 | 177 | server.
171 | 178 | """ |
172 | 179 | sent_queries_counter.labels("user_devices").inc() |
173 | return self.transport_layer.query_user_devices(destination, user_id, timeout) | |
180 | return await self.transport_layer.query_user_devices( | |
181 | destination, user_id, timeout | |
182 | ) | |
174 | 183 | |
175 | 184 | @log_function |
176 | def claim_client_keys(self, destination, content, timeout): | |
185 | async def claim_client_keys( | |
186 | self, destination: str, content: JsonDict, timeout: int | |
187 | ) -> JsonDict: | |
177 | 188 | """Claims one-time keys for a device hosted on a remote server. |
178 | 189 | |
179 | 190 | Args: |
180 | destination (str): Domain name of the remote homeserver | |
181 | content (dict): The query content. | |
182 | ||
183 | Returns: | |
184 | an Awaitable which will eventually yield a JSON object from the | |
185 | response | |
191 | destination: Domain name of the remote homeserver | |
192 | content: The query content. | |
193 | ||
194 | Returns: | |
195 | The JSON object from the response | |
186 | 196 | """ |
187 | 197 | sent_queries_counter.labels("client_one_time_keys").inc() |
188 | return self.transport_layer.claim_client_keys(destination, content, timeout) | |
198 | return await self.transport_layer.claim_client_keys( | |
199 | destination, content, timeout | |
200 | ) | |
189 | 201 | |
190 | 202 | async def backfill( |
191 | 203 | self, dest: str, room_id: str, limit: int, extremities: Iterable[str] |
194 | 206 | given destination server. |
195 | 207 | |
196 | 208 | Args: |
197 | dest (str): The remote homeserver to ask. | |
198 | room_id (str): The room_id to backfill. | |
199 | limit (int): The maximum number of events to return. | |
200 | extremities (list): our current backwards extremities, to backfill from | |
209 | dest: The remote homeserver to ask. | |
210 | room_id: The room_id to backfill. | |
211 | limit: The maximum number of events to return. | |
212 | extremities: our current backwards extremities, to backfill from | |
201 | 213 | """ |
202 | 214 | logger.debug("backfill extrem=%s", extremities) |
203 | 215 | |
369 | 381 | for events that have failed their checks |
370 | 382 | |
371 | 383 | Returns: |
372 | Deferred : A list of PDUs that have valid signatures and hashes. | |
384 | A list of PDUs that have valid signatures and hashes. | |
373 | 385 | """ |
374 | 386 | deferreds = self._check_sigs_and_hashes(room_version, pdus) |
375 | 387 | |
417 | 429 | else: |
418 | 430 | return [p for p in valid_pdus if p] |
419 | 431 | |
420 | async def get_event_auth(self, destination, room_id, event_id): | |
432 | async def get_event_auth( | |
433 | self, destination: str, room_id: str, event_id: str | |
434 | ) -> List[EventBase]: | |
421 | 435 | res = await self.transport_layer.get_event_auth(destination, room_id, event_id) |
422 | 436 | |
423 | 437 | room_version = await self.store.get_room_version(room_id) |
699 | 713 | |
700 | 714 | return await self._try_destination_list("send_join", destinations, send_request) |
701 | 715 | |
702 | async def _do_send_join(self, destination: str, pdu: EventBase): | |
716 | async def _do_send_join(self, destination: str, pdu: EventBase) -> JsonDict: | |
703 | 717 | time_now = self._clock.time_msec() |
704 | 718 | |
705 | 719 | try: |
706 | content = await self.transport_layer.send_join_v2( | |
720 | return await self.transport_layer.send_join_v2( | |
707 | 721 | destination=destination, |
708 | 722 | room_id=pdu.room_id, |
709 | 723 | event_id=pdu.event_id, |
710 | 724 | content=pdu.get_pdu_json(time_now), |
711 | 725 | ) |
712 | ||
713 | return content | |
714 | 726 | except HttpResponseException as e: |
715 | 727 | if e.code in [400, 404]: |
716 | 728 | err = e.to_synapse_error() |
768 | 780 | time_now = self._clock.time_msec() |
769 | 781 | |
770 | 782 | try: |
771 | content = await self.transport_layer.send_invite_v2( | |
783 | return await self.transport_layer.send_invite_v2( | |
772 | 784 | destination=destination, |
773 | 785 | room_id=pdu.room_id, |
774 | 786 | event_id=pdu.event_id, |
778 | 790 | "invite_room_state": pdu.unsigned.get("invite_room_state", []), |
779 | 791 | }, |
780 | 792 | ) |
781 | return content | |
782 | 793 | except HttpResponseException as e: |
783 | 794 | if e.code in [400, 404]: |
784 | 795 | err = e.to_synapse_error() |
798 | 809 | "User's homeserver does not support this room version", |
799 | 810 | Codes.UNSUPPORTED_ROOM_VERSION, |
800 | 811 | ) |
801 | elif e.code == 403: | |
812 | elif e.code in (403, 429): | |
802 | 813 | raise e.to_synapse_error() |
803 | 814 | else: |
804 | 815 | raise |
841 | 852 | "send_leave", destinations, send_request |
842 | 853 | ) |
843 | 854 | |
844 | async def _do_send_leave(self, destination, pdu): | |
855 | async def _do_send_leave(self, destination: str, pdu: EventBase) -> JsonDict: | |
845 | 856 | time_now = self._clock.time_msec() |
846 | 857 | |
847 | 858 | try: |
848 | content = await self.transport_layer.send_leave_v2( | |
859 | return await self.transport_layer.send_leave_v2( | |
849 | 860 | destination=destination, |
850 | 861 | room_id=pdu.room_id, |
851 | 862 | event_id=pdu.event_id, |
852 | 863 | content=pdu.get_pdu_json(time_now), |
853 | 864 | ) |
854 | ||
855 | return content | |
856 | 865 | except HttpResponseException as e: |
857 | 866 | if e.code in [400, 404]: |
858 | 867 | err = e.to_synapse_error() |
878 | 887 | # content. |
879 | 888 | return resp[1] |
880 | 889 | |
881 | def get_public_rooms( | |
890 | async def get_public_rooms( | |
882 | 891 | self, |
883 | 892 | remote_server: str, |
884 | 893 | limit: Optional[int] = None, |
886 | 895 | search_filter: Optional[Dict] = None, |
887 | 896 | include_all_networks: bool = False, |
888 | 897 | third_party_instance_id: Optional[str] = None, |
889 | ): | |
898 | ) -> JsonDict: | |
890 | 899 | """Get the list of public rooms from a remote homeserver |
891 | 900 | |
892 | 901 | Args: |
900 | 909 | party instance |
901 | 910 | |
902 | 911 | Returns: |
903 | Awaitable[Dict[str, Any]]: The response from the remote server, or None if | |
904 | `remote_server` is the same as the local server_name | |
912 | The response from the remote server. | |
905 | 913 | |
906 | 914 | Raises: |
907 | 915 | HttpResponseException: There was an exception returned from the remote server |
909 | 917 | requests over federation |
910 | 918 | |
911 | 919 | """ |
912 | return self.transport_layer.get_public_rooms( | |
920 | return await self.transport_layer.get_public_rooms( | |
913 | 921 | remote_server, |
914 | 922 | limit, |
915 | 923 | since_token, |
922 | 930 | self, |
923 | 931 | destination: str, |
924 | 932 | room_id: str, |
925 | earliest_events_ids: Sequence[str], | |
933 | earliest_events_ids: Iterable[str], | |
926 | 934 | latest_events: Iterable[EventBase], |
927 | 935 | limit: int, |
928 | 936 | min_depth: int, |
973 | 981 | |
974 | 982 | return signed_events |
975 | 983 | |
976 | async def forward_third_party_invite(self, destinations, room_id, event_dict): | |
984 | async def forward_third_party_invite( | |
985 | self, destinations: Iterable[str], room_id: str, event_dict: JsonDict | |
986 | ) -> None: | |
977 | 987 | for destination in destinations: |
978 | 988 | if destination == self.server_name: |
979 | 989 | continue |
982 | 992 | await self.transport_layer.exchange_third_party_invite( |
983 | 993 | destination=destination, room_id=room_id, event_dict=event_dict |
984 | 994 | ) |
985 | return None | |
995 | return | |
986 | 996 | except CodeMessageException: |
987 | 997 | raise |
988 | 998 | except Exception as e: |
994 | 1004 | |
995 | 1005 | async def get_room_complexity( |
996 | 1006 | self, destination: str, room_id: str |
997 | ) -> Optional[dict]: | |
1007 | ) -> Optional[JsonDict]: | |
998 | 1008 | """ |
999 | 1009 | Fetch the complexity of a remote room from another server. |
1000 | 1010 | |
1007 | 1017 | could not fetch the complexity. |
1008 | 1018 | """ |
1009 | 1019 | try: |
1010 | complexity = await self.transport_layer.get_room_complexity( | |
1020 | return await self.transport_layer.get_room_complexity( | |
1011 | 1021 | destination=destination, room_id=room_id |
1012 | 1022 | ) |
1013 | return complexity | |
1014 | 1023 | except CodeMessageException as e: |
1015 | 1024 | # We didn't manage to get it -- probably a 404. We are okay if other |
1016 | 1025 | # servers don't give it to us. |
141 | 141 | self._wake_destinations_needing_catchup, |
142 | 142 | ) |
143 | 143 | |
144 | self._external_cache = hs.get_external_cache() | |
145 | ||
144 | 146 | def _get_per_destination_queue(self, destination: str) -> PerDestinationQueue: |
145 | 147 | """Get or create a PerDestinationQueue for the given destination |
146 | 148 | |
196 | 198 | if not event.internal_metadata.should_proactively_send(): |
197 | 199 | return |
198 | 200 | |
199 | try: | |
200 | # Get the state from before the event. | |
201 | # We need to make sure that this is the state from before | |
202 | # the event and not from after it. | |
203 | # Otherwise if the last member on a server in a room is | |
204 | # banned then it won't receive the event because it won't | |
205 | # be in the room after the ban. | |
206 | destinations = await self.state.get_hosts_in_room_at_events( | |
207 | event.room_id, event_ids=event.prev_event_ids() | |
201 | destinations = None # type: Optional[Set[str]] | |
202 | if not event.prev_event_ids(): | |
203 | # If there are no prev event IDs then the state is empty | |
204 | # and so there are no remote servers in the room | |
205 | destinations = set() | |
206 | else: | |
207 | # We check the external cache for the destinations, which is | |
208 | # stored per state group. | |
209 | ||
210 | sg = await self._external_cache.get( | |
211 | "event_to_prev_state_group", event.event_id | |
208 | 212 | ) |
209 | except Exception: | |
210 | logger.exception( | |
211 | "Failed to calculate hosts in room for event: %s", | |
212 | event.event_id, | |
213 | ) | |
214 | return | |
213 | if sg: | |
214 | destinations = await self._external_cache.get( | |
215 | "get_joined_hosts", str(sg) | |
216 | ) | |
217 | ||
218 | if destinations is None: | |
219 | try: | |
220 | # Get the state from before the event. | |
221 | # We need to make sure that this is the state from before | |
222 | # the event and not from after it. | |
223 | # Otherwise if the last member on a server in a room is | |
224 | # banned then it won't receive the event because it won't | |
225 | # be in the room after the ban. | |
226 | destinations = await self.state.get_hosts_in_room_at_events( | |
227 | event.room_id, event_ids=event.prev_event_ids() | |
228 | ) | |
229 | except Exception: | |
230 | logger.exception( | |
231 | "Failed to calculate hosts in room for event: %s", | |
232 | event.event_id, | |
233 | ) | |
234 | return | |
215 | 235 | |
216 | 236 | destinations = { |
217 | 237 | d |
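The federation-sender hunk above layers two lookups in the new external (Redis) cache, event ID to previous state group, then state group to joined hosts, before falling back to computing the hosts from the state before the event. A runnable sketch of that lookup order, with `FakeCache`, `FakeEvent`, and `compute_hosts` as stand-ins for the real storage and state classes:

```python
import asyncio
from typing import Optional, Set

class FakeCache:
    """Stand-in for the external cache: async get keyed by (table, key)."""
    def __init__(self, data):
        self._data = data
    async def get(self, table: str, key: str):
        return self._data.get((table, key))

class FakeEvent:
    def __init__(self, event_id, room_id, prev):
        self.event_id = event_id
        self.room_id = room_id
        self._prev = prev
    def prev_event_ids(self):
        return self._prev

async def get_destinations(event, cache, compute_hosts) -> Set[str]:
    # 1. No prev events: the state before the event is empty, so no hosts.
    if not event.prev_event_ids():
        return set()
    # 2. Try the cache: event -> prev state group -> joined hosts.
    sg = await cache.get("event_to_prev_state_group", event.event_id)
    if sg:
        hosts = await cache.get("get_joined_hosts", str(sg))
        if hosts is not None:
            return hosts
    # 3. Fall back to computing hosts from the state before the event.
    return await compute_hosts(event)
```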
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | import logging |
16 | from typing import TYPE_CHECKING | |
16 | 17 | |
17 | 18 | import twisted |
18 | 19 | import twisted.internet.error |
20 | 21 | from twisted.web.resource import Resource |
21 | 22 | |
22 | 23 | from synapse.app import check_bind_error |
24 | ||
25 | if TYPE_CHECKING: | |
26 | from synapse.app.homeserver import HomeServer | |
23 | 27 | |
24 | 28 | logger = logging.getLogger(__name__) |
25 | 29 | |
34 | 38 | |
35 | 39 | |
36 | 40 | class AcmeHandler: |
37 | def __init__(self, hs): | |
41 | def __init__(self, hs: "HomeServer"): | |
38 | 42 | self.hs = hs |
39 | 43 | self.reactor = hs.get_reactor() |
40 | 44 | self._acme_domain = hs.config.acme_domain |
41 | 45 | |
42 | async def start_listening(self): | |
46 | async def start_listening(self) -> None: | |
43 | 47 | from synapse.handlers import acme_issuing_service |
44 | 48 | |
45 | 49 | # Configure logging for txacme, if you need to debug |
84 | 88 | logger.error(ACME_REGISTER_FAIL_ERROR) |
85 | 89 | raise |
86 | 90 | |
87 | async def provision_certificate(self): | |
91 | async def provision_certificate(self) -> None: | |
88 | 92 | |
89 | 93 | logger.warning("Reprovisioning %s", self._acme_domain) |
90 | 94 | |
109 | 113 | except Exception: |
110 | 114 | logger.exception("Failed saving!") |
111 | 115 | raise |
112 | ||
113 | return True |
21 | 21 | imported conditionally. |
22 | 22 | """ |
23 | 23 | import logging |
24 | from typing import Dict, Iterable, List | |
24 | 25 | |
25 | 26 | import attr |
27 | import pem | |
26 | 28 | from cryptography.hazmat.backends import default_backend |
27 | 29 | from cryptography.hazmat.primitives import serialization |
28 | 30 | from josepy import JWKRSA |
35 | 37 | from zope.interface import implementer |
36 | 38 | |
37 | 39 | from twisted.internet import defer |
40 | from twisted.internet.interfaces import IReactorTCP | |
38 | 41 | from twisted.python.filepath import FilePath |
39 | 42 | from twisted.python.url import URL |
43 | from twisted.web.resource import IResource | |
40 | 44 | |
41 | 45 | logger = logging.getLogger(__name__) |
42 | 46 | |
43 | 47 | |
44 | def create_issuing_service(reactor, acme_url, account_key_file, well_known_resource): | |
48 | def create_issuing_service( | |
49 | reactor: IReactorTCP, | |
50 | acme_url: str, | |
51 | account_key_file: str, | |
52 | well_known_resource: IResource, | |
53 | ) -> AcmeIssuingService: | |
45 | 54 | """Create an ACME issuing service, and attach it to a web Resource |
46 | 55 | |
47 | 56 | Args: |
48 | 57 | reactor: twisted reactor |
49 | acme_url (str): URL to use to request certificates | |
50 | account_key_file (str): where to store the account key | |
51 | well_known_resource (twisted.web.IResource): web resource for .well-known. | |
58 | acme_url: URL to use to request certificates | |
59 | account_key_file: where to store the account key | |
60 | well_known_resource: web resource for .well-known. | |
52 | 61 | we will attach a child resource for "acme-challenge". |
53 | 62 | |
54 | 63 | Returns: |
82 | 91 | A store that only stores in memory. |
83 | 92 | """ |
84 | 93 | |
85 | certs = attr.ib(default=attr.Factory(dict)) | |
94 | certs = attr.ib(type=Dict[bytes, List[bytes]], default=attr.Factory(dict)) | |
86 | 95 | |
87 | def store(self, server_name, pem_objects): | |
96 | def store( | |
97 | self, server_name: bytes, pem_objects: Iterable[pem.AbstractPEMObject] | |
98 | ) -> defer.Deferred: | |
88 | 99 | self.certs[server_name] = [o.as_bytes() for o in pem_objects] |
89 | 100 | return defer.succeed(None) |
90 | 101 | |
91 | 102 | |
92 | def load_or_create_client_key(key_file): | |
103 | def load_or_create_client_key(key_file: str) -> JWKRSA: | |
93 | 104 | """Load the ACME account key from a file, creating it if it does not exist. |
94 | 105 | |
95 | 106 | Args: |
96 | key_file (str): name of the file to use as the account key | |
107 | key_file: name of the file to use as the account key | |
97 | 108 | """ |
98 | 109 | # this is based on txacme.endpoint.load_or_create_client_key, but doesn't |
99 | 110 | # hardcode the 'client.key' filename |
60 | 60 | from synapse.logging.context import defer_to_thread |
61 | 61 | from synapse.metrics.background_process_metrics import run_as_background_process |
62 | 62 | from synapse.module_api import ModuleApi |
63 | from synapse.storage.roommember import ProfileInfo | |
63 | 64 | from synapse.types import JsonDict, Requester, UserID |
64 | 65 | from synapse.util import stringutils as stringutils |
65 | 66 | from synapse.util.async_helpers import maybe_awaitable |
566 | 567 | session.session_id, login_type, result |
567 | 568 | ) |
568 | 569 | except LoginError as e: |
569 | if login_type == LoginType.EMAIL_IDENTITY: | |
570 | # riot used to have a bug where it would request a new | |
571 | # validation token (thus sending a new email) each time it | |
572 | # got a 401 with a 'flows' field. | |
573 | # (https://github.com/vector-im/vector-web/issues/2447). | |
574 | # | |
575 | # Grandfather in the old behaviour for now to avoid | |
576 | # breaking old riot deployments. | |
577 | raise | |
578 | ||
579 | 570 | # this step failed. Merge the error dict into the response |
580 | 571 | # so that the client can have another go. |
581 | 572 | errordict = e.error_dict() |
1386 | 1377 | ) |
1387 | 1378 | |
1388 | 1379 | return self._sso_auth_confirm_template.render( |
1389 | description=session.description, redirect_url=redirect_url, | |
1380 | description=session.description, | |
1381 | redirect_url=redirect_url, | |
1382 | idp=sso_auth_provider, | |
1390 | 1383 | ) |
1391 | 1384 | |
1392 | 1385 | async def complete_sso_login( |
1395 | 1388 | request: Request, |
1396 | 1389 | client_redirect_url: str, |
1397 | 1390 | extra_attributes: Optional[JsonDict] = None, |
1391 | new_user: bool = False, | |
1398 | 1392 | ): |
1399 | 1393 | """Having figured out a mxid for this user, complete the HTTP request |
1400 | 1394 | |
1405 | 1399 | process. |
1406 | 1400 | extra_attributes: Extra attributes which will be passed to the client |
1407 | 1401 | during successful login. Must be JSON serializable. |
1402 | new_user: True if we should use wording appropriate to a user who has just | |
1403 | registered. | |
1408 | 1404 | """ |
1409 | 1405 | # If the account has been deactivated, do not proceed with the login |
1410 | 1406 | # flow. |
1413 | 1409 | respond_with_html(request, 403, self._sso_account_deactivated_template) |
1414 | 1410 | return |
1415 | 1411 | |
1412 | profile = await self.store.get_profileinfo( | |
1413 | UserID.from_string(registered_user_id).localpart | |
1414 | ) | |
1415 | ||
1416 | 1416 | self._complete_sso_login( |
1417 | registered_user_id, request, client_redirect_url, extra_attributes | |
1417 | registered_user_id, | |
1418 | request, | |
1419 | client_redirect_url, | |
1420 | extra_attributes, | |
1421 | new_user=new_user, | |
1422 | user_profile_data=profile, | |
1418 | 1423 | ) |
1419 | 1424 | |
1420 | 1425 | def _complete_sso_login( |
1423 | 1428 | request: Request, |
1424 | 1429 | client_redirect_url: str, |
1425 | 1430 | extra_attributes: Optional[JsonDict] = None, |
1431 | new_user: bool = False, | |
1432 | user_profile_data: Optional[ProfileInfo] = None, | |
1426 | 1433 | ): |
1427 | 1434 | """ |
1428 | 1435 | The synchronous portion of complete_sso_login. |
1429 | 1436 | |
1430 | 1437 | This exists purely for backwards compatibility of synapse.module_api.ModuleApi. |
1431 | 1438 | """ |
1439 | ||
1440 | if user_profile_data is None: | |
1441 | user_profile_data = ProfileInfo(None, None) | |
1442 | ||
1432 | 1443 | # Store any extra attributes which will be passed in the login response. |
1433 | 1444 | # Note that this is per-user so it may overwrite a previous value, this |
1434 | 1445 | # is considered OK since the newest SSO attributes should be most valid. |
1466 | 1477 | display_url=redirect_url_no_params, |
1467 | 1478 | redirect_url=redirect_url, |
1468 | 1479 | server_name=self._server_name, |
1480 | new_user=new_user, | |
1481 | user_id=registered_user_id, | |
1482 | user_profile=user_profile_data, | |
1469 | 1483 | ) |
1470 | 1484 | respond_with_html(request, 200, html) |
1471 | 1485 |
79 | 79 | # user-facing name of this auth provider |
80 | 80 | self.idp_name = "CAS" |
81 | 81 | |
82 | # we do not currently support icons for CAS auth, but this is required by | |
82 | # we do not currently support brands/icons for CAS auth, but this is required by | |
83 | 83 | # the SsoIdentityProvider protocol type. |
84 | 84 | self.idp_icon = None |
85 | self.idp_brand = None | |
85 | 86 | |
86 | 87 | self._sso_handler = hs.get_sso_handler() |
87 | 88 | |
98 | 99 | Returns: |
99 | 100 | The URL to use as a "service" parameter. |
100 | 101 | """ |
101 | return "%s%s?%s" % ( | |
102 | self._cas_service_url, | |
103 | "/_matrix/client/r0/login/cas/ticket", | |
104 | urllib.parse.urlencode(args), | |
105 | ) | |
102 | return "%s?%s" % (self._cas_service_url, urllib.parse.urlencode(args),) | |
106 | 103 | |
107 | 104 | async def _validate_ticket( |
108 | 105 | self, ticket: str, service_args: Dict[str, str] |
14 | 14 | # See the License for the specific language governing permissions and |
15 | 15 | # limitations under the License. |
16 | 16 | import logging |
17 | from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple | |
17 | from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Tuple | |
18 | 18 | |
19 | 19 | from synapse.api import errors |
20 | 20 | from synapse.api.constants import EventTypes |
61 | 61 | self._auth_handler = hs.get_auth_handler() |
62 | 62 | |
63 | 63 | @trace |
64 | async def get_devices_by_user(self, user_id: str) -> List[Dict[str, Any]]: | |
64 | async def get_devices_by_user(self, user_id: str) -> List[JsonDict]: | |
65 | 65 | """ |
66 | 66 | Retrieve the given user's devices |
67 | 67 | |
84 | 84 | return devices |
85 | 85 | |
86 | 86 | @trace |
87 | async def get_device(self, user_id: str, device_id: str) -> Dict[str, Any]: | |
87 | async def get_device(self, user_id: str, device_id: str) -> JsonDict: | |
88 | 88 | """ Retrieve the given device |
89 | 89 | |
90 | 90 | Args: |
597 | 597 | |
598 | 598 | |
599 | 599 | def _update_device_from_client_ips( |
600 | device: Dict[str, Any], client_ips: Dict[Tuple[str, str], Dict[str, Any]] | |
600 | device: JsonDict, client_ips: Dict[Tuple[str, str], JsonDict] | |
601 | 601 | ) -> None: |
602 | 602 | ip = client_ips.get((device["user_id"], device["device_id"]), {}) |
603 | 603 | device.update({"last_seen_ts": ip.get("last_seen"), "last_seen_ip": ip.get("ip")}) |
945 | 945 | async def process_cross_signing_key_update( |
946 | 946 | self, |
947 | 947 | user_id: str, |
948 | master_key: Optional[Dict[str, Any]], | |
949 | self_signing_key: Optional[Dict[str, Any]], | |
948 | master_key: Optional[JsonDict], | |
949 | self_signing_key: Optional[JsonDict], | |
950 | 950 | ) -> List[str]: |
951 | 951 | """Process the given new master and self-signing key for the given remote user. |
952 | 952 |
15 | 15 | # limitations under the License. |
16 | 16 | |
17 | 17 | import logging |
18 | from typing import Dict, List, Optional, Tuple | |
18 | from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple | |
19 | 19 | |
20 | 20 | import attr |
21 | 21 | from canonicaljson import encode_canonical_json |
30 | 30 | from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace |
31 | 31 | from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet |
32 | 32 | from synapse.types import ( |
33 | JsonDict, | |
33 | 34 | UserID, |
34 | 35 | get_domain_from_id, |
35 | 36 | get_verify_key_from_cross_signing_key, |
39 | 40 | from synapse.util.caches.expiringcache import ExpiringCache |
40 | 41 | from synapse.util.retryutils import NotRetryingDestination |
41 | 42 | |
43 | if TYPE_CHECKING: | |
44 | from synapse.app.homeserver import HomeServer | |
45 | ||
42 | 46 | logger = logging.getLogger(__name__) |
43 | 47 | |
44 | 48 | |
45 | 49 | class E2eKeysHandler: |
46 | def __init__(self, hs): | |
50 | def __init__(self, hs: "HomeServer"): | |
47 | 51 | self.store = hs.get_datastore() |
48 | 52 | self.federation = hs.get_federation_client() |
49 | 53 | self.device_handler = hs.get_device_handler() |
77 | 81 | ) |
78 | 82 | |
79 | 83 | @trace |
80 | async def query_devices(self, query_body, timeout, from_user_id): | |
84 | async def query_devices( | |
85 | self, query_body: JsonDict, timeout: int, from_user_id: str | |
86 | ) -> JsonDict: | |
81 | 87 | """ Handle a device key query from a client |
82 | 88 | |
83 | 89 | { |
97 | 103 | } |
98 | 104 | |
99 | 105 | Args: |
100 | from_user_id (str): the user making the query. This is used when | |
106 | from_user_id: the user making the query. This is used when | |
101 | 107 | adding cross-signing signatures to limit what signatures users |
102 | 108 | can see. |
103 | 109 | """ |
104 | 110 | |
105 | device_keys_query = query_body.get("device_keys", {}) | |
111 | device_keys_query = query_body.get( | |
112 | "device_keys", {} | |
113 | ) # type: Dict[str, Iterable[str]] | |
106 | 114 | |
107 | 115 | # separate users by domain. |
108 | 116 | # make a map from domain to user_id to device_ids |
120 | 128 | set_tag("remote_key_query", remote_queries) |
121 | 129 | |
122 | 130 | # First get local devices. |
123 | failures = {} | |
131 | # A map of destination -> failure response. | |
132 | failures = {} # type: Dict[str, JsonDict] | |
124 | 133 | results = {} |
125 | 134 | if local_query: |
126 | 135 | local_result = await self.query_local_devices(local_query) |
134 | 143 | ) |
135 | 144 | |
136 | 145 | # Now attempt to get any remote devices from our local cache. |
137 | remote_queries_not_in_cache = {} | |
146 | # A map of destination -> user ID -> device IDs. | |
147 | remote_queries_not_in_cache = {} # type: Dict[str, Dict[str, Iterable[str]]] | |
138 | 148 | if remote_queries: |
139 | query_list = [] | |
149 | query_list = [] # type: List[Tuple[str, Optional[str]]] | |
140 | 150 | for user_id, device_ids in remote_queries.items(): |
141 | 151 | if device_ids: |
142 | 152 | query_list.extend((user_id, device_id) for device_id in device_ids) |
283 | 293 | return ret |
284 | 294 | |
285 | 295 | async def get_cross_signing_keys_from_cache( |
286 | self, query, from_user_id | |
296 | self, query: Iterable[str], from_user_id: Optional[str] | |
287 | 297 | ) -> Dict[str, Dict[str, dict]]: |
288 | 298 | """Get cross-signing keys for users from the database |
289 | 299 | |
290 | 300 | Args: |
291 | query (Iterable[string]) an iterable of user IDs. A dict whose keys | |
301 | query: an iterable of user IDs. A dict whose keys | |
292 | 302 | are user IDs satisfies this, so the query format used for |
293 | 303 | query_devices can be used here. |
294 | from_user_id (str): the user making the query. This is used when | |
304 | from_user_id: the user making the query. This is used when | |
295 | 305 | adding cross-signing signatures to limit what signatures users |
296 | 306 | can see. |
297 | 307 | |
314 | 324 | if "self_signing" in user_info: |
315 | 325 | self_signing_keys[user_id] = user_info["self_signing"] |
316 | 326 | |
317 | if ( | |
318 | from_user_id in keys | |
319 | and keys[from_user_id] is not None | |
320 | and "user_signing" in keys[from_user_id] | |
321 | ): | |
322 | # users can see other users' master and self-signing keys, but can | |
323 | # only see their own user-signing keys | |
324 | user_signing_keys[from_user_id] = keys[from_user_id]["user_signing"] | |
327 | # users can see other users' master and self-signing keys, but can | |
328 | # only see their own user-signing keys | |
329 | if from_user_id: | |
330 | from_user_key = keys.get(from_user_id) | |
331 | if from_user_key and "user_signing" in from_user_key: | |
332 | user_signing_keys[from_user_id] = from_user_key["user_signing"] | |
325 | 333 | |
326 | 334 | return { |
327 | 335 | "master_keys": master_keys, |
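The cross-signing hunk above fixes two edge cases in the old membership test: `from_user_id` may be `None`, and the user may have no entry in `keys` at all. The guarded lookup can be isolated as a small helper (hypothetical, for illustration only):

```python
def visible_user_signing_keys(keys, from_user_id):
    # Users can see other users' master and self-signing keys, but only
    # their own user-signing key. from_user_id may be None (e.g. for
    # queries not made on behalf of a local user), and the requesting
    # user may have no entry in keys, so guard both lookups.
    result = {}
    if from_user_id:
        from_user_key = keys.get(from_user_id)
        if from_user_key and "user_signing" in from_user_key:
            result[from_user_id] = from_user_key["user_signing"]
    return result
```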
343 | 351 | A map from user_id -> device_id -> device details |
344 | 352 | """ |
345 | 353 | set_tag("local_query", query) |
346 | local_query = [] | |
347 | ||
348 | result_dict = {} | |
354 | local_query = [] # type: List[Tuple[str, Optional[str]]] | |
355 | ||
356 | result_dict = {} # type: Dict[str, Dict[str, dict]] | |
349 | 357 | for user_id, device_ids in query.items(): |
350 | 358 | # we use UserID.from_string to catch invalid user ids |
351 | 359 | if not self.is_mine(UserID.from_string(user_id)): |
379 | 387 | log_kv(results) |
380 | 388 | return result_dict |
381 | 389 | |
382 | async def on_federation_query_client_keys(self, query_body): | |
390 | async def on_federation_query_client_keys( | |
391 | self, query_body: Dict[str, Dict[str, Optional[List[str]]]] | |
392 | ) -> JsonDict: | |
383 | 393 | """ Handle a device key query from a federated server |
384 | 394 | """ |
385 | device_keys_query = query_body.get("device_keys", {}) | |
395 | device_keys_query = query_body.get( | |
396 | "device_keys", {} | |
397 | ) # type: Dict[str, Optional[List[str]]] | |
386 | 398 | res = await self.query_local_devices(device_keys_query) |
387 | 399 | ret = {"device_keys": res} |
388 | 400 | |
396 | 408 | return ret |
397 | 409 | |
398 | 410 | @trace |
399 | async def claim_one_time_keys(self, query, timeout): | |
400 | local_query = [] | |
401 | remote_queries = {} | |
402 | ||
403 | for user_id, device_keys in query.get("one_time_keys", {}).items(): | |
411 | async def claim_one_time_keys( | |
412 | self, query: Dict[str, Dict[str, Dict[str, str]]], timeout: int | |
413 | ) -> JsonDict: | |
414 | local_query = [] # type: List[Tuple[str, str, str]] | |
415 | remote_queries = {} # type: Dict[str, Dict[str, Dict[str, str]]] | |
416 | ||
417 | for user_id, one_time_keys in query.get("one_time_keys", {}).items(): | |
404 | 418 | # we use UserID.from_string to catch invalid user ids |
405 | 419 | if self.is_mine(UserID.from_string(user_id)): |
406 | for device_id, algorithm in device_keys.items(): | |
420 | for device_id, algorithm in one_time_keys.items(): | |
407 | 421 | local_query.append((user_id, device_id, algorithm)) |
408 | 422 | else: |
409 | 423 | domain = get_domain_from_id(user_id) |
410 | remote_queries.setdefault(domain, {})[user_id] = device_keys | |
424 | remote_queries.setdefault(domain, {})[user_id] = one_time_keys | |
411 | 425 | |
412 | 426 | set_tag("local_key_query", local_query) |
413 | 427 | set_tag("remote_key_query", remote_queries) |
414 | 428 | |
415 | 429 | results = await self.store.claim_e2e_one_time_keys(local_query) |
416 | 430 | |
417 | json_result = {} | |
418 | failures = {} | |
431 | # A map of user ID -> device ID -> key ID -> key. | |
432 | json_result = {} # type: Dict[str, Dict[str, Dict[str, JsonDict]]] | |
433 | failures = {} # type: Dict[str, JsonDict] | |
419 | 434 | for user_id, device_keys in results.items(): |
420 | 435 | for device_id, keys in device_keys.items(): |
421 | for key_id, json_bytes in keys.items(): | |
436 | for key_id, json_str in keys.items(): | |
422 | 437 | json_result.setdefault(user_id, {})[device_id] = { |
423 | key_id: json_decoder.decode(json_bytes) | |
438 | key_id: json_decoder.decode(json_str) | |
424 | 439 | } |
425 | 440 | |
426 | 441 | @trace |
467 | 482 | return {"one_time_keys": json_result, "failures": failures} |
468 | 483 | |
469 | 484 | @tag_args |
470 | async def upload_keys_for_user(self, user_id, device_id, keys): | |
485 | async def upload_keys_for_user( | |
486 | self, user_id: str, device_id: str, keys: JsonDict | |
487 | ) -> JsonDict: | |
471 | 488 | |
472 | 489 | time_now = self.clock.time_msec() |
473 | 490 | |
542 | 559 | return {"one_time_key_counts": result} |
543 | 560 | |
544 | 561 | async def _upload_one_time_keys_for_user( |
545 | self, user_id, device_id, time_now, one_time_keys | |
546 | ): | |
562 | self, user_id: str, device_id: str, time_now: int, one_time_keys: JsonDict | |
563 | ) -> None: | |
547 | 564 | logger.info( |
548 | 565 | "Adding one_time_keys %r for device %r for user %r at %d", |
549 | 566 | one_time_keys.keys(), |
584 | 601 | log_kv({"message": "Inserting new one_time_keys.", "keys": new_keys}) |
585 | 602 | await self.store.add_e2e_one_time_keys(user_id, device_id, time_now, new_keys) |
586 | 603 | |
587 | async def upload_signing_keys_for_user(self, user_id, keys): | |
604 | async def upload_signing_keys_for_user( | |
605 | self, user_id: str, keys: JsonDict | |
606 | ) -> JsonDict: | |
588 | 607 | """Upload signing keys for cross-signing |
589 | 608 | |
590 | 609 | Args: |
591 | user_id (string): the user uploading the keys | |
592 | keys (dict[string, dict]): the signing keys | |
610 | user_id: the user uploading the keys | |
611 | keys: the signing keys | |
593 | 612 | """ |
594 | 613 | |
595 | 614 | # if a master key is uploaded, then check it. Otherwise, load the |
666 | 685 | |
667 | 686 | return {} |
668 | 687 | |
669 | async def upload_signatures_for_device_keys(self, user_id, signatures): | |
688 | async def upload_signatures_for_device_keys( | |
689 | self, user_id: str, signatures: JsonDict | |
690 | ) -> JsonDict: | |
670 | 691 | """Upload device signatures for cross-signing |
671 | 692 | |
672 | 693 | Args: |
673 | user_id (string): the user uploading the signatures | |
674 | signatures (dict[string, dict[string, dict]]): map of users to | |
675 | devices to signed keys. This is the submission from the user; an | |
676 | exception will be raised if it is malformed. | |
694 | user_id: the user uploading the signatures | |
695 | signatures: map of users to devices to signed keys. This is the submission | |
696 | from the user; an exception will be raised if it is malformed. | |
677 | 697 | Returns: |
678 | dict: response to be sent back to the client. The response will have | |
698 | The response to be sent back to the client. The response will have | |
679 | 699 | a "failures" key, which will be a dict mapping users to devices |
680 | 700 | to errors for the signatures that failed. |
681 | 701 | Raises: |
718 | 738 | |
719 | 739 | return {"failures": failures} |
720 | 740 | |
721 | async def _process_self_signatures(self, user_id, signatures): | |
741 | async def _process_self_signatures( | |
742 | self, user_id: str, signatures: JsonDict | |
743 | ) -> Tuple[List["SignatureListItem"], Dict[str, Dict[str, dict]]]: | |
722 | 744 | """Process uploaded signatures of the user's own keys. |
723 | 745 | |
724 | 746 | Signatures of the user's own keys from this API come in two forms: |
730 | 752 | signatures (dict[string, dict]): map of devices to signed keys |
731 | 753 | |
732 | 754 | Returns: |
733 | (list[SignatureListItem], dict[string, dict[string, dict]]): | |
734 | a list of signatures to store, and a map of users to devices to failure | |
735 | reasons | |
755 | A tuple of a list of signatures to store, and a map of users to | |
756 | devices to failure reasons | |
736 | 757 | |
737 | 758 | Raises: |
738 | 759 | SynapseError: if the input is malformed |
739 | 760 | """ |
740 | signature_list = [] | |
741 | failures = {} | |
761 | signature_list = [] # type: List[SignatureListItem] | |
762 | failures = {} # type: Dict[str, Dict[str, JsonDict]] | |
742 | 763 | if not signatures: |
743 | 764 | return signature_list, failures |
744 | 765 | |
833 | 854 | return signature_list, failures |
834 | 855 | |
835 | 856 | def _check_master_key_signature( |
836 | self, user_id, master_key_id, signed_master_key, stored_master_key, devices | |
837 | ): | |
857 | self, | |
858 | user_id: str, | |
859 | master_key_id: str, | |
860 | signed_master_key: JsonDict, | |
861 | stored_master_key: JsonDict, | |
862 | devices: Dict[str, Dict[str, JsonDict]], | |
863 | ) -> List["SignatureListItem"]: | |
838 | 864 | """Check signatures of a user's master key made by their devices. |
839 | 865 | |
840 | 866 | Args: |
841 | user_id (string): the user whose master key is being checked | |
842 | master_key_id (string): the ID of the user's master key | |
843 | signed_master_key (dict): the user's signed master key that was uploaded | |
844 | stored_master_key (dict): our previously-stored copy of the user's master key | |
845 | devices (iterable(dict)): the user's devices | |
867 | user_id: the user whose master key is being checked | |
868 | master_key_id: the ID of the user's master key | |
869 | signed_master_key: the user's signed master key that was uploaded | |
870 | stored_master_key: our previously-stored copy of the user's master key | |
871 | devices: the user's devices | |
846 | 872 | |
847 | 873 | Returns: |
848 | list[SignatureListItem]: a list of signatures to store | |
874 | A list of signatures to store | |
849 | 875 | |
850 | 876 | Raises: |
851 | 877 | SynapseError: if a signature is invalid |
876 | 902 | |
877 | 903 | return master_key_signature_list |
878 | 904 | |
879 | async def _process_other_signatures(self, user_id, signatures): | |
905 | async def _process_other_signatures( | |
906 | self, user_id: str, signatures: Dict[str, dict] | |
907 | ) -> Tuple[List["SignatureListItem"], Dict[str, Dict[str, dict]]]: | |
880 | 908 | """Process uploaded signatures of other users' keys. These will be the |
881 | 909 | target user's master keys, signed by the uploading user's user-signing |
882 | 910 | key. |
883 | 911 | |
884 | 912 | Args: |
885 | user_id (string): the user uploading the keys | |
886 | signatures (dict[string, dict]): map of users to devices to signed keys | |
913 | user_id: the user uploading the keys | |
914 | signatures: map of users to devices to signed keys | |
887 | 915 | |
888 | 916 | Returns: |
889 | (list[SignatureListItem], dict[string, dict[string, dict]]): | |
890 | a list of signatures to store, and a map of users to devices to failure | |
917 | A list of signatures to store, and a map of users to devices to failure | |
891 | 918 | reasons |
892 | 919 | |
893 | 920 | Raises: |
894 | 921 | SynapseError: if the input is malformed |
895 | 922 | """ |
896 | signature_list = [] | |
897 | failures = {} | |
923 | signature_list = [] # type: List[SignatureListItem] | |
924 | failures = {} # type: Dict[str, Dict[str, JsonDict]] | |
898 | 925 | if not signatures: |
899 | 926 | return signature_list, failures |
900 | 927 | |
982 | 1009 | |
983 | 1010 | async def _get_e2e_cross_signing_verify_key( |
984 | 1011 | self, user_id: str, key_type: str, from_user_id: str = None |
985 | ): | |
1012 | ) -> Tuple[JsonDict, str, VerifyKey]: | |
986 | 1013 | """Fetch locally or remotely query for a cross-signing public key. |
987 | 1014 | |
988 | 1015 | First, attempt to fetch the cross-signing public key from storage. |
996 | 1023 | This affects what signatures are fetched. |
997 | 1024 | |
998 | 1025 | Returns: |
999 | dict, str, VerifyKey: the raw key data, the key ID, and the | |
1000 | signedjson verify key | |
1026 | The raw key data, the key ID, and the signedjson verify key | |
1001 | 1027 | |
1002 | 1028 | Raises: |
1003 | 1029 | NotFoundError: if the key is not found |
1134 | 1160 | return desired_key, desired_key_id, desired_verify_key |
1135 | 1161 | |
1136 | 1162 | |
1137 | def _check_cross_signing_key(key, user_id, key_type, signing_key=None): | |
1163 | def _check_cross_signing_key( | |
1164 | key: JsonDict, user_id: str, key_type: str, signing_key: Optional[VerifyKey] = None | |
1165 | ) -> None: | |
1138 | 1166 | """Check a cross-signing key uploaded by a user. Performs some basic sanity |
1139 | 1167 | checking, and ensures that it is signed, if a signature is required. |
1140 | 1168 | |
1141 | 1169 | Args: |
1142 | key (dict): the key data to verify | |
1143 | user_id (str): the user whose key is being checked | |
1144 | key_type (str): the type of key that the key should be | |
1145 | signing_key (VerifyKey): (optional) the signing key that the key should | |
1146 | be signed with. If omitted, signatures will not be checked. | |
1170 | key: the key data to verify | |
1171 | user_id: the user whose key is being checked | |
1172 | key_type: the type of key that the key should be | |
1173 | signing_key: the signing key that the key should be signed with. If | |
1174 | omitted, signatures will not be checked. | |
1147 | 1175 | """ |
1148 | 1176 | if ( |
1149 | 1177 | key.get("user_id") != user_id |
1161 | 1189 | ) |
1162 | 1190 | |
1163 | 1191 | |
1164 | def _check_device_signature(user_id, verify_key, signed_device, stored_device): | |
1192 | def _check_device_signature( | |
1193 | user_id: str, | |
1194 | verify_key: VerifyKey, | |
1195 | signed_device: JsonDict, | |
1196 | stored_device: JsonDict, | |
1197 | ) -> None: | |
1165 | 1198 | """Check that a signature on a device or cross-signing key is correct and |
1166 | 1199 | matches the copy of the device/key that we have stored. Throws an |
1167 | 1200 | exception if an error is detected. |
1168 | 1201 | |
1169 | 1202 | Args: |
1170 | user_id (str): the user ID whose signature is being checked | |
1171 | verify_key (VerifyKey): the key to verify the device with | |
1172 | signed_device (dict): the uploaded signed device data | |
1173 | stored_device (dict): our previously stored copy of the device | |
1203 | user_id: the user ID whose signature is being checked | |
1204 | verify_key: the key to verify the device with | |
1205 | signed_device: the uploaded signed device data | |
1206 | stored_device: our previously stored copy of the device | |
1174 | 1207 | |
1175 | 1208 | Raises: |
1176 | 1209 | SynapseError: if the signature was invalid or the sent device is not the |
1200 | 1233 | raise SynapseError(400, "Invalid signature", Codes.INVALID_SIGNATURE) |
1201 | 1234 | |
1202 | 1235 | |
1203 | def _exception_to_failure(e): | |
1236 | def _exception_to_failure(e: Exception) -> JsonDict: | |
1204 | 1237 | if isinstance(e, SynapseError): |
1205 | 1238 | return {"status": e.code, "errcode": e.errcode, "message": str(e)} |
1206 | 1239 | |
1217 | 1250 | return {"status": 503, "message": str(e)} |
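The newly annotated `_exception_to_failure` maps an exception to the JSON-serialisable failure dict reported back to clients. A hedged sketch of that shape (using `getattr` instead of the real `isinstance` checks, so it runs without Synapse's error classes):

```python
# Sketch: convert an exception into a failure dict. Assumption:
# Synapse-style errors carry `.code` and `.errcode` attributes; anything
# else falls back to a 503, matching the final branch shown above.
def exception_to_failure(e: Exception) -> dict:
    status = getattr(e, "code", 503)
    errcode = getattr(e, "errcode", None)
    failure = {"status": status, "message": str(e)}
    if errcode is not None:
        failure["errcode"] = errcode
    return failure
```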
1218 | 1251 | |
1219 | 1252 | |
1220 | def _one_time_keys_match(old_key_json, new_key): | |
1253 | def _one_time_keys_match(old_key_json: str, new_key: JsonDict) -> bool: | |
1221 | 1254 | old_key = json_decoder.decode(old_key_json) |
1222 | 1255 | |
1223 | 1256 | # if either is a string rather than an object, they must match exactly |
1238 | 1271 | """An item in the signature list as used by upload_signatures_for_device_keys. |
1239 | 1272 | """ |
1240 | 1273 | |
1241 | signing_key_id = attr.ib() | |
1242 | target_user_id = attr.ib() | |
1243 | target_device_id = attr.ib() | |
1244 | signature = attr.ib() | |
1274 | signing_key_id = attr.ib(type=str) | |
1275 | target_user_id = attr.ib(type=str) | |
1276 | target_device_id = attr.ib(type=str) | |
1277 | signature = attr.ib(type=JsonDict) | |
1245 | 1278 | |
1246 | 1279 | |
1247 | 1280 | class SigningKeyEduUpdater: |
1248 | 1281 | """Handles incoming signing key updates from federation and updates the DB""" |
1249 | 1282 | |
1250 | def __init__(self, hs, e2e_keys_handler): | |
1283 | def __init__(self, hs: "HomeServer", e2e_keys_handler: E2eKeysHandler): | |
1251 | 1284 | self.store = hs.get_datastore() |
1252 | 1285 | self.federation = hs.get_federation_client() |
1253 | 1286 | self.clock = hs.get_clock() |
1256 | 1289 | self._remote_edu_linearizer = Linearizer(name="remote_signing_key") |
1257 | 1290 | |
1258 | 1291 | # user_id -> list of updates waiting to be handled. |
1259 | self._pending_updates = {} | |
1292 | self._pending_updates = {} # type: Dict[str, List[Tuple[JsonDict, JsonDict]]] | |
1260 | 1293 | |
1261 | 1294 | # Recently seen stream ids. We don't bother keeping these in the DB, |
1262 | 1295 | # but they're useful to have them about to reduce the number of spurious |
1269 | 1302 | iterable=True, |
1270 | 1303 | ) |
1271 | 1304 | |
1272 | async def incoming_signing_key_update(self, origin, edu_content): | |
1305 | async def incoming_signing_key_update( | |
1306 | self, origin: str, edu_content: JsonDict | |
1307 | ) -> None: | |
1273 | 1308 | """Called on incoming signing key update from federation. Responsible for |
1274 | 1309 | parsing the EDU and adding to pending updates list. |
1275 | 1310 | |
1276 | 1311 | Args: |
1277 | origin (string): the server that sent the EDU | |
1278 | edu_content (dict): the contents of the EDU | |
1312 | origin: the server that sent the EDU | |
1313 | edu_content: the contents of the EDU | |
1279 | 1314 | """ |
1280 | 1315 | |
1281 | 1316 | user_id = edu_content.pop("user_id") |
1298 | 1333 | |
1299 | 1334 | await self._handle_signing_key_updates(user_id) |
1300 | 1335 | |
1301 | async def _handle_signing_key_updates(self, user_id): | |
1336 | async def _handle_signing_key_updates(self, user_id: str) -> None: | |
1302 | 1337 | """Actually handle pending updates. |
1303 | 1338 | |
1304 | 1339 | Args: |
1305 | user_id (string): the user whose updates we are processing | |
1340 | user_id: the user whose updates we are processing | |
1306 | 1341 | """ |
1307 | 1342 | |
1308 | 1343 | device_handler = self.e2e_keys_handler.device_handler |
1314 | 1349 | # This can happen since we batch updates |
1315 | 1350 | return |
1316 | 1351 | |
1317 | device_ids = [] | |
1352 | device_ids = [] # type: List[str] | |
1318 | 1353 | |
1319 | 1354 | logger.info("pending updates: %r", pending_updates) |
1320 | 1355 |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | 16 | import logging |
17 | from typing import TYPE_CHECKING, List, Optional | |
17 | 18 | |
18 | 19 | from synapse.api.errors import ( |
19 | 20 | Codes, |
23 | 24 | SynapseError, |
24 | 25 | ) |
25 | 26 | from synapse.logging.opentracing import log_kv, trace |
27 | from synapse.types import JsonDict | |
26 | 28 | from synapse.util.async_helpers import Linearizer |
29 | ||
30 | if TYPE_CHECKING: | |
31 | from synapse.app.homeserver import HomeServer | |
27 | 32 | |
28 | 33 | logger = logging.getLogger(__name__) |
29 | 34 | |
36 | 41 | The actual payload of the encrypted keys is completely opaque to the handler. |
37 | 42 | """ |
38 | 43 | |
39 | def __init__(self, hs): | |
44 | def __init__(self, hs: "HomeServer"): | |
40 | 45 | self.store = hs.get_datastore() |
41 | 46 | |
42 | 47 | # Used to lock whenever a client is uploading key data. This prevents collisions |
47 | 52 | self._upload_linearizer = Linearizer("upload_room_keys_lock") |
48 | 53 | |
49 | 54 | @trace |
50 | async def get_room_keys(self, user_id, version, room_id=None, session_id=None): | |
55 | async def get_room_keys( | |
56 | self, | |
57 | user_id: str, | |
58 | version: str, | |
59 | room_id: Optional[str] = None, | |
60 | session_id: Optional[str] = None, | |
61 | ) -> List[JsonDict]: | |
51 | 62 | """Bulk get the E2E room keys for a given backup, optionally filtered to a given |
52 | 63 | room, or a given session. |
53 | 64 | See EndToEndRoomKeyStore.get_e2e_room_keys for full details. |
54 | 65 | |
55 | 66 | Args: |
56 | user_id(str): the user whose keys we're getting | |
57 | version(str): the version ID of the backup we're getting keys from | |
58 | room_id(string): room ID to get keys for, for None to get keys for all rooms | |
59 | session_id(string): session ID to get keys for, for None to get keys for all | |
67 | user_id: the user whose keys we're getting | |
68 | version: the version ID of the backup we're getting keys from | |
69 | room_id: room ID to get keys for, or None to get keys for all rooms | |
70 | session_id: session ID to get keys for, or None to get keys for all | |
60 | 71 | sessions |
61 | 72 | Raises: |
62 | 73 | NotFoundError: if the backup version does not exist |
63 | 74 | Returns: |
64 | A deferred list of dicts giving the session_data and message metadata for | |
75 | A list of dicts giving the session_data and message metadata for | |
65 | 76 | these room keys. |
66 | 77 | """ |
67 | 78 | |
85 | 96 | return results |
86 | 97 | |
87 | 98 | @trace |
88 | async def delete_room_keys(self, user_id, version, room_id=None, session_id=None): | |
99 | async def delete_room_keys( | |
100 | self, | |
101 | user_id: str, | |
102 | version: str, | |
103 | room_id: Optional[str] = None, | |
104 | session_id: Optional[str] = None, | |
105 | ) -> JsonDict: | |
89 | 106 | """Bulk delete the E2E room keys for a given backup, optionally filtered to a given |
90 | 107 | room or a given session. |
91 | 108 | See EndToEndRoomKeyStore.delete_e2e_room_keys for full details. |
92 | 109 | |
93 | 110 | Args: |
94 | user_id(str): the user whose backup we're deleting | |
95 | version(str): the version ID of the backup we're deleting | |
96 | room_id(string): room ID to delete keys for, for None to delete keys for all | |
111 | user_id: the user whose backup we're deleting | |
112 | version: the version ID of the backup we're deleting | |
113 | room_id: room ID to delete keys for, or None to delete keys for all | |
97 | 114 | rooms |
98 | session_id(string): session ID to delete keys for, for None to delete keys | |
115 | session_id: session ID to delete keys for, or None to delete keys | |
99 | 116 | for all sessions |
100 | 117 | Raises: |
101 | 118 | NotFoundError: if the backup version does not exist |
127 | 144 | return {"etag": str(version_etag), "count": count} |
128 | 145 | |
129 | 146 | @trace |
130 | async def upload_room_keys(self, user_id, version, room_keys): | |
147 | async def upload_room_keys( | |
148 | self, user_id: str, version: str, room_keys: JsonDict | |
149 | ) -> JsonDict: | |
131 | 150 | """Bulk upload a list of room keys into a given backup version, asserting |
132 | 151 | that the given version is the current backup version. room_keys are merged |
133 | 152 | into the current backup as described in RoomKeysServlet.on_PUT(). |
134 | 153 | |
135 | 154 | Args: |
136 | user_id(str): the user whose backup we're setting | |
137 | version(str): the version ID of the backup we're updating | |
138 | room_keys(dict): a nested dict describing the room_keys we're setting: | |
155 | user_id: the user whose backup we're setting | |
156 | version: the version ID of the backup we're updating | |
157 | room_keys: a nested dict describing the room_keys we're setting: | |
139 | 158 | |
140 | 159 | { |
141 | 160 | "rooms": { |
253 | 272 | return {"etag": str(version_etag), "count": count} |
254 | 273 | |
255 | 274 | @staticmethod |
256 | def _should_replace_room_key(current_room_key, room_key): | |
275 | def _should_replace_room_key( | |
276 | current_room_key: Optional[JsonDict], room_key: JsonDict | |
277 | ) -> bool: | |
257 | 278 | """ |
258 | 279 | Determine whether to replace a given current_room_key (if any) |
259 | 280 | with a newly uploaded room_key backup |
260 | 281 | |
261 | 282 | Args: |
262 | current_room_key (dict): Optional, the current room_key dict if any | |
263 | room_key (dict): The new room_key dict which may or may not be fit to | |
283 | current_room_key: Optional, the current room_key dict if any | |
284 | room_key : The new room_key dict which may or may not be fit to | |
264 | 285 | replace the current_room_key |
265 | 286 | |
266 | 287 | Returns: |
285 | 306 | return True |
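The body of `_should_replace_room_key` is elided here; a hypothetical sketch of such a replacement decision follows. The field names `first_message_index` and `forwarded_count` are assumptions drawn from the Matrix key-backup format and do not appear in this diff, and the real check also weighs other metadata:

```python
# Hypothetical sketch: decide whether an uploaded room_key should replace
# the stored one. Prefer keys usable from earlier in the megolm session,
# then keys that have been forwarded fewer times.
from typing import Optional

def should_replace_room_key(current: Optional[dict], new: dict) -> bool:
    if current is None:
        return True  # nothing stored yet: accept the uploaded key
    if new.get("first_message_index", 0) != current.get("first_message_index", 0):
        # a lower index can decrypt more of the session
        return new["first_message_index"] < current["first_message_index"]
    return new.get("forwarded_count", 0) < current.get("forwarded_count", 0)
```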
286 | 307 | |
287 | 308 | @trace |
288 | async def create_version(self, user_id, version_info): | |
309 | async def create_version(self, user_id: str, version_info: JsonDict) -> str: | |
289 | 310 | """Create a new backup version. This automatically becomes the new |
290 | 311 | backup version for the user's keys; previous backups will no longer be |
291 | 312 | writable.
292 | 313 | |
293 | 314 | Args: |
294 | user_id(str): the user whose backup version we're creating | |
295 | version_info(dict): metadata about the new version being created | |
315 | user_id: the user whose backup version we're creating | |
316 | version_info: metadata about the new version being created | |
296 | 317 | |
297 | 318 | { |
298 | 319 | "algorithm": "m.megolm_backup.v1", |
300 | 321 | } |
301 | 322 | |
302 | 323 | Returns: |
303 | A deferred of a string that gives the new version number. | |
324 | The new version number. | |
304 | 325 | """ |
305 | 326 | |
306 | 327 | # TODO: Validate the JSON to make sure it has the right keys. |
312 | 333 | ) |
313 | 334 | return new_version |
314 | 335 | |
315 | async def get_version_info(self, user_id, version=None): | |
336 | async def get_version_info( | |
337 | self, user_id: str, version: Optional[str] = None | |
338 | ) -> JsonDict: | |
316 | 339 | """Get the info about a given version of the user's backup |
317 | 340 | |
318 | 341 | Args: |
319 | user_id(str): the user whose current backup version we're querying | |
320 | version(str): Optional; if None gives the most recent version | |
342 | user_id: the user whose current backup version we're querying | |
343 | version: Optional; if None gives the most recent version | |
321 | 344 | otherwise a historical one. |
322 | 345 | Raises: |
323 | 346 | NotFoundError: if the requested backup version doesn't exist |
324 | 347 | Returns: |
325 | A deferred of a info dict that gives the info about the new version. | |
348 | An info dict that gives the info about the requested version. | |
326 | 349 | |
327 | 350 | { |
328 | 351 | "version": "1234", |
345 | 368 | return res |
346 | 369 | |
347 | 370 | @trace |
348 | async def delete_version(self, user_id, version=None): | |
371 | async def delete_version(self, user_id: str, version: Optional[str] = None) -> None: | |
349 | 372 | """Deletes a given version of the user's e2e_room_keys backup |
350 | 373 | |
351 | 374 | Args: |
365 | 388 | raise |
366 | 389 | |
367 | 390 | @trace |
368 | async def update_version(self, user_id, version, version_info): | |
391 | async def update_version( | |
392 | self, user_id: str, version: str, version_info: JsonDict | |
393 | ) -> JsonDict: | |
369 | 394 | """Update the info about a given version of the user's backup |
370 | 395 | |
371 | 396 | Args: |
372 | user_id(str): the user whose current backup version we're updating | |
373 | version(str): the backup version we're updating | |
374 | version_info(dict): the new information about the backup | |
397 | user_id: the user whose current backup version we're updating | |
398 | version: the backup version we're updating | |
399 | version_info: the new information about the backup | |
375 | 400 | Raises: |
376 | 401 | NotFoundError: if the requested backup version doesn't exist |
377 | 402 | Returns: |
378 | A deferred of an empty dict. | |
403 | An empty dict. | |
379 | 404 | """ |
380 | 405 | if "version" not in version_info: |
381 | 406 | version_info["version"] = version |
1616 | 1616 | if event.state_key == self._server_notices_mxid: |
1617 | 1617 | raise SynapseError(HTTPStatus.FORBIDDEN, "Cannot invite this user") |
1618 | 1618 | |
1619 | # We retrieve the room member handler here as to not cause a cyclic dependency | |
1620 | member_handler = self.hs.get_room_member_handler() | |
1621 | # We don't rate limit based on room ID, as that should be done by | |
1622 | # sending server. | |
1623 | member_handler.ratelimit_invite(None, event.state_key) | |
1624 | ||
1619 | 1625 | # keep a record of the room version, if we don't yet know it. |
1620 | 1626 | # (this may get overwritten if we later get a different room version in a |
1621 | 1627 | # join dance). |
2091 | 2097 | |
2092 | 2098 | if event.type == EventTypes.GuestAccess and not context.rejected: |
2093 | 2099 | await self.maybe_kick_guest_users(event) |
2100 | ||
2101 | # If we are going to send this event over federation we precalculate | |
2102 | # the joined hosts. | |
2103 | if event.internal_metadata.get_send_on_behalf_of(): | |
2104 | await self.event_creation_handler.cache_joined_hosts_for_event(event) | |
2094 | 2105 | |
2095 | 2106 | return context |
2096 | 2107 |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | 16 | import logging |
17 | from typing import TYPE_CHECKING, Dict, Iterable, List, Set | |
17 | 18 | |
18 | 19 | from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError |
19 | from synapse.types import GroupID, get_domain_from_id | |
20 | from synapse.types import GroupID, JsonDict, get_domain_from_id | |
21 | ||
22 | if TYPE_CHECKING: | |
23 | from synapse.app.homeserver import HomeServer | |
20 | 24 | |
21 | 25 | logger = logging.getLogger(__name__) |
22 | 26 | |
55 | 59 | |
56 | 60 | |
57 | 61 | class GroupsLocalWorkerHandler: |
58 | def __init__(self, hs): | |
62 | def __init__(self, hs: "HomeServer"): | |
59 | 63 | self.hs = hs |
60 | 64 | self.store = hs.get_datastore() |
61 | 65 | self.room_list_handler = hs.get_room_list_handler() |
83 | 87 | get_group_role = _create_rerouter("get_group_role") |
84 | 88 | get_group_roles = _create_rerouter("get_group_roles") |
85 | 89 | |
86 | async def get_group_summary(self, group_id, requester_user_id): | |
90 | async def get_group_summary( | |
91 | self, group_id: str, requester_user_id: str | |
92 | ) -> JsonDict: | |
87 | 93 | """Get the group summary for a group. |
88 | 94 | |
89 | 95 | If the group is remote we check that the users have valid attestations. |
136 | 142 | |
137 | 143 | return res |
138 | 144 | |
139 | async def get_users_in_group(self, group_id, requester_user_id): | |
145 | async def get_users_in_group( | |
146 | self, group_id: str, requester_user_id: str | |
147 | ) -> JsonDict: | |
140 | 148 | """Get users in a group |
141 | 149 | """ |
142 | 150 | if self.is_mine_id(group_id): |
143 | res = await self.groups_server_handler.get_users_in_group( | |
151 | return await self.groups_server_handler.get_users_in_group( | |
144 | 152 | group_id, requester_user_id |
145 | 153 | ) |
146 | return res | |
147 | 154 | |
148 | 155 | group_server_name = get_domain_from_id(group_id) |
149 | 156 | |
177 | 184 | |
178 | 185 | return res |
179 | 186 | |
180 | async def get_joined_groups(self, user_id): | |
187 | async def get_joined_groups(self, user_id: str) -> JsonDict: | |
181 | 188 | group_ids = await self.store.get_joined_groups(user_id) |
182 | 189 | return {"groups": group_ids} |
183 | 190 | |
184 | async def get_publicised_groups_for_user(self, user_id): | |
191 | async def get_publicised_groups_for_user(self, user_id: str) -> JsonDict: | |
185 | 192 | if self.hs.is_mine_id(user_id): |
186 | 193 | result = await self.store.get_publicised_groups_for_user(user_id) |
187 | 194 | |
205 | 212 | # TODO: Verify attestations |
206 | 213 | return {"groups": result} |
207 | 214 | |
208 | async def bulk_get_publicised_groups(self, user_ids, proxy=True): | |
209 | destinations = {} | |
215 | async def bulk_get_publicised_groups( | |
216 | self, user_ids: Iterable[str], proxy: bool = True | |
217 | ) -> JsonDict: | |
218 | destinations = {} # type: Dict[str, Set[str]] | |
210 | 219 | local_users = set() |
211 | 220 | |
212 | 221 | for user_id in user_ids: |
219 | 228 | raise SynapseError(400, "Some user_ids are not local") |
220 | 229 | |
221 | 230 | results = {} |
222 | failed_results = [] | |
231 | failed_results = [] # type: List[str] | |
223 | 232 | for destination, dest_user_ids in destinations.items(): |
224 | 233 | try: |
225 | 234 | r = await self.transport_client.bulk_get_publicised_groups( |
241 | 250 | |
242 | 251 | |
243 | 252 | class GroupsLocalHandler(GroupsLocalWorkerHandler): |
244 | def __init__(self, hs): | |
253 | def __init__(self, hs: "HomeServer"): | |
245 | 254 | super().__init__(hs) |
246 | 255 | |
247 | 256 | # Ensure attestations get renewed |
270 | 279 | |
271 | 280 | set_group_join_policy = _create_rerouter("set_group_join_policy") |
272 | 281 | |
273 | async def create_group(self, group_id, user_id, content): | |
282 | async def create_group( | |
283 | self, group_id: str, user_id: str, content: JsonDict | |
284 | ) -> JsonDict: | |
274 | 285 | """Create a group |
275 | 286 | """ |
276 | 287 | |
283 | 294 | local_attestation = None |
284 | 295 | remote_attestation = None |
285 | 296 | else: |
286 | local_attestation = self.attestations.create_attestation(group_id, user_id) | |
287 | content["attestation"] = local_attestation | |
288 | ||
289 | content["user_profile"] = await self.profile_handler.get_profile(user_id) | |
290 | ||
291 | try: | |
292 | res = await self.transport_client.create_group( | |
293 | get_domain_from_id(group_id), group_id, user_id, content | |
294 | ) | |
295 | except HttpResponseException as e: | |
296 | raise e.to_synapse_error() | |
297 | except RequestSendFailed: | |
298 | raise SynapseError(502, "Failed to contact group server") | |
299 | ||
300 | remote_attestation = res["attestation"] | |
301 | await self.attestations.verify_attestation( | |
302 | remote_attestation, | |
303 | group_id=group_id, | |
304 | user_id=user_id, | |
305 | server_name=get_domain_from_id(group_id), | |
306 | ) | |
297 | raise SynapseError(400, "Unable to create remote groups") | |
307 | 298 | |
308 | 299 | is_publicised = content.get("publicise", False) |
309 | 300 | token = await self.store.register_user_group_membership( |
319 | 310 | |
320 | 311 | return res |
321 | 312 | |
322 | async def join_group(self, group_id, user_id, content): | |
313 | async def join_group( | |
314 | self, group_id: str, user_id: str, content: JsonDict | |
315 | ) -> JsonDict: | |
323 | 316 | """Request to join a group |
324 | 317 | """ |
325 | 318 | if self.is_mine_id(group_id): |
364 | 357 | |
365 | 358 | return {} |
366 | 359 | |
367 | async def accept_invite(self, group_id, user_id, content): | |
360 | async def accept_invite( | |
361 | self, group_id: str, user_id: str, content: JsonDict | |
362 | ) -> JsonDict: | |
368 | 363 | """Accept an invite to a group |
369 | 364 | """ |
370 | 365 | if self.is_mine_id(group_id): |
409 | 404 | |
410 | 405 | return {} |
411 | 406 | |
412 | async def invite(self, group_id, user_id, requester_user_id, config): | |
407 | async def invite( | |
408 | self, group_id: str, user_id: str, requester_user_id: str, config: JsonDict | |
409 | ) -> JsonDict: | |
413 | 410 | """Invite a user to a group |
414 | 411 | """ |
415 | 412 | content = {"requester_user_id": requester_user_id, "config": config} |
433 | 430 | |
434 | 431 | return res |
435 | 432 | |
436 | async def on_invite(self, group_id, user_id, content): | |
433 | async def on_invite( | |
434 | self, group_id: str, user_id: str, content: JsonDict | |
435 | ) -> JsonDict: | |
437 | 436 | """One of our users was invited to a group
438 | 437 | """ |
439 | 438 | # TODO: Support auto join and rejection |
464 | 463 | return {"state": "invite", "user_profile": user_profile} |
465 | 464 | |
466 | 465 | async def remove_user_from_group( |
467 | self, group_id, user_id, requester_user_id, content | |
468 | ): | |
466 | self, group_id: str, user_id: str, requester_user_id: str, content: JsonDict | |
467 | ) -> JsonDict: | |
469 | 468 | """Remove a user from a group |
470 | 469 | """ |
471 | 470 | if user_id == requester_user_id: |
498 | 497 | |
499 | 498 | return res |
500 | 499 | |
501 | async def user_removed_from_group(self, group_id, user_id, content): | |
500 | async def user_removed_from_group( | |
501 | self, group_id: str, user_id: str, content: JsonDict | |
502 | ) -> None: | |
502 | 503 | """One of our users was removed/kicked from a group |
503 | 504 | """ |
504 | 505 | # TODO: Check if user in group |
26 | 26 | HttpResponseException, |
27 | 27 | SynapseError, |
28 | 28 | ) |
29 | from synapse.api.ratelimiting import Ratelimiter | |
29 | 30 | from synapse.config.emailconfig import ThreepidBehaviour |
30 | 31 | from synapse.http import RequestTimedOutError |
31 | 32 | from synapse.http.client import SimpleHttpClient |
33 | from synapse.http.site import SynapseRequest | |
32 | 34 | from synapse.types import JsonDict, Requester |
33 | 35 | from synapse.util import json_decoder |
34 | 36 | from synapse.util.hash import sha256_and_url_safe_base64 |
55 | 57 | self.hs = hs |
56 | 58 | |
57 | 59 | self._web_client_location = hs.config.invite_client_location |
60 | ||
61 | # Ratelimiters for `/requestToken` endpoints. | |
62 | self._3pid_validation_ratelimiter_ip = Ratelimiter( | |
63 | clock=hs.get_clock(), | |
64 | rate_hz=hs.config.ratelimiting.rc_3pid_validation.per_second, | |
65 | burst_count=hs.config.ratelimiting.rc_3pid_validation.burst_count, | |
66 | ) | |
67 | self._3pid_validation_ratelimiter_address = Ratelimiter( | |
68 | clock=hs.get_clock(), | |
69 | rate_hz=hs.config.ratelimiting.rc_3pid_validation.per_second, | |
70 | burst_count=hs.config.ratelimiting.rc_3pid_validation.burst_count, | |
71 | ) | |
72 | ||
73 | def ratelimit_request_token_requests( | |
74 | self, request: SynapseRequest, medium: str, address: str, | |
75 | ): | |
76 | """Used to ratelimit requests to `/requestToken` by IP and address. | |
77 | ||
78 | Args: | |
79 | request: The associated request | |
80 | medium: The type of threepid, e.g. "msisdn" or "email" | |
81 | address: The actual threepid ID, e.g. the phone number or email address | |
82 | """ | |
83 | ||
84 | self._3pid_validation_ratelimiter_ip.ratelimit((medium, request.getClientIP())) | |
85 | self._3pid_validation_ratelimiter_address.ratelimit((medium, address)) | |
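The hunk above adds two `Ratelimiter` instances keyed on `(medium, IP)` and `(medium, address)` and drives both from one helper. The `Ratelimiter` class itself belongs to Synapse; as a rough illustration of the `rate_hz` / `burst_count` idea only, a minimal token-bucket limiter keyed by an arbitrary tuple might look like this (all names here are hypothetical, not Synapse's API):

```python
from typing import Dict, Hashable, Tuple


class SimpleRatelimiter:
    """Token bucket: burst_count tokens, refilled at rate_hz per second."""

    def __init__(self, rate_hz: float, burst_count: int) -> None:
        self.rate_hz = rate_hz
        self.burst_count = burst_count
        # key -> (remaining tokens, time of last update)
        self._buckets = {}  # type: Dict[Hashable, Tuple[float, float]]

    def can_do_action(self, key: Hashable, now: float) -> bool:
        tokens, last = self._buckets.get(key, (float(self.burst_count), now))
        # Refill tokens for the elapsed time, capped at the burst size.
        tokens = min(self.burst_count, tokens + (now - last) * self.rate_hz)
        if tokens < 1:
            self._buckets[key] = (tokens, now)
            return False
        self._buckets[key] = (tokens - 1, now)
        return True
```

Running two such limiters with the same config but different keys, as the diff does for IP and threepid address, means a single abusive IP is throttled without blocking other requesters for the same address, and vice versa.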
58 | 86 | |
59 | 87 | async def threepid_from_creds( |
60 | 88 | self, id_server: str, creds: Dict[str, str] |
474 | 502 | raise e.to_synapse_error() |
475 | 503 | except RequestTimedOutError: |
476 | 504 | raise SynapseError(500, "Timed out contacting identity server") |
505 | ||
506 | # It is already checked that public_baseurl is configured since this code | |
507 | # should only be used if account_threepid_delegate_msisdn is true. | |
508 | assert self.hs.config.public_baseurl | |
477 | 509 | |
478 | 510 | # we need to tell the client to send the token back to us, since it doesn't |
479 | 511 | # otherwise know where to send it, so add submit_url response parameter |
173 | 173 | raise NotFoundError("Can't find event for token %s" % (at_token,)) |
174 | 174 | |
175 | 175 | visible_events = await filter_events_for_client( |
176 | self.storage, user_id, last_events, filter_send_to_client=False | |
176 | self.storage, user_id, last_events, filter_send_to_client=False, | |
177 | 177 | ) |
178 | 178 | |
179 | 179 | event = last_events[0] |
430 | 430 | self._message_handler = hs.get_message_handler() |
431 | 431 | |
432 | 432 | self._ephemeral_events_enabled = hs.config.enable_ephemeral_messages |
433 | ||
434 | self._external_cache = hs.get_external_cache() | |
433 | 435 | |
434 | 436 | async def create_event( |
435 | 437 | self, |
938 | 940 | |
939 | 941 | await self.action_generator.handle_push_actions_for_event(event, context) |
940 | 942 | |
943 | await self.cache_joined_hosts_for_event(event) | |
944 | ||
941 | 945 | try: |
942 | 946 | # If we're a worker we need to hit out to the master. |
943 | 947 | writer_instance = self._events_shard_config.get_instance(event.room_id) |
976 | 980 | # staging area, if we calculated them. |
977 | 981 | await self.store.remove_push_actions_from_staging(event.event_id) |
978 | 982 | raise |
983 | ||
984 | async def cache_joined_hosts_for_event(self, event: EventBase) -> None: | |
985 | """Precalculate the joined hosts at the event, when using Redis, so that | |
986 | external federation senders don't have to recalculate it themselves. | |
987 | """ | |
988 | ||
989 | if not self._external_cache.is_enabled(): | |
990 | return | |
991 | ||
992 | # We actually store two mappings, event ID -> prev state group, | |
993 | # state group -> joined hosts, which is much more space efficient | |
994 | # than event ID -> joined hosts. | |
995 | # | |
996 | # Note: We have to cache event ID -> prev state group, as we don't | |
997 | # store that in the DB. | |
998 | # | |
999 | # Note: We always set the state group -> joined hosts cache, even if | |
1000 | # we already set it, so that the expiry time is reset. | |
1001 | ||
1002 | state_entry = await self.state.resolve_state_groups_for_events( | |
1003 | event.room_id, event_ids=event.prev_event_ids() | |
1004 | ) | |
1005 | ||
1006 | if state_entry.state_group: | |
1007 | joined_hosts = await self.store.get_joined_hosts(event.room_id, state_entry) | |
1008 | ||
1009 | await self._external_cache.set( | |
1010 | "event_to_prev_state_group", | |
1011 | event.event_id, | |
1012 | state_entry.state_group, | |
1013 | expiry_ms=60 * 60 * 1000, | |
1014 | ) | |
1015 | await self._external_cache.set( | |
1016 | "get_joined_hosts", | |
1017 | str(state_entry.state_group), | |
1018 | list(joined_hosts), | |
1019 | expiry_ms=60 * 60 * 1000, | |
1020 | ) | |
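The comments in `cache_joined_hosts_for_event` explain the space-saving indirection: cache event ID → prev state group and state group → joined hosts, rather than event ID → joined hosts directly, since many events share a state group. A small in-memory sketch of that read/write path (illustrative only; the real code stores both mappings in the Redis-backed external cache with a one-hour expiry):

```python
from typing import Dict, List, Optional


class TwoLevelJoinedHostsCache:
    """Sketch of the event -> state group -> joined hosts indirection."""

    def __init__(self) -> None:
        self._event_to_state_group = {}  # type: Dict[str, int]
        self._state_group_to_hosts = {}  # type: Dict[int, List[str]]

    def set(self, event_id: str, state_group: int, hosts: List[str]) -> None:
        self._event_to_state_group[event_id] = state_group
        # Many events map to the same state group, so this table stays
        # far smaller than a per-event copy of the host list would.
        self._state_group_to_hosts[state_group] = hosts

    def get_joined_hosts(self, event_id: str) -> Optional[List[str]]:
        state_group = self._event_to_state_group.get(event_id)
        if state_group is None:
            return None
        return self._state_group_to_hosts.get(state_group)
```

With Redis as the shared backing store, an external federation sender can resolve both hops without recomputing the joined hosts itself, which is the point of this hunk.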
979 | 1021 | |
980 | 1022 | async def _validate_canonical_alias( |
981 | 1023 | self, directory_handler, room_alias_str: str, expected_room_id: str |
101 | 101 | ) from e |
102 | 102 | |
103 | 103 | async def handle_oidc_callback(self, request: SynapseRequest) -> None: |
104 | """Handle an incoming request to /_synapse/oidc/callback | |
104 | """Handle an incoming request to /_synapse/client/oidc/callback | |
105 | 105 | |
106 | 106 | Since we might want to display OIDC-related errors in a user-friendly |
107 | 107 | way, we don't raise SynapseError from here. Instead, we call |
272 | 272 | |
273 | 273 | # MXC URI for icon for this auth provider |
274 | 274 | self.idp_icon = provider.idp_icon |
275 | ||
276 | # optional brand identifier for this auth provider | |
277 | self.idp_brand = provider.idp_brand | |
275 | 278 | |
276 | 279 | self._sso_handler = hs.get_sso_handler() |
277 | 280 | |
639 | 642 | |
640 | 643 | - ``client_id``: the client ID set in ``oidc_config.client_id`` |
641 | 644 | - ``response_type``: ``code`` |
642 | - ``redirect_uri``: the callback URL; ``{base url}/_synapse/oidc/callback`` |
645 | - ``redirect_uri``: the callback URL; ``{base url}/_synapse/client/oidc/callback`` |
643 | 646 | - ``scope``: the list of scopes set in ``oidc_config.scopes`` |
644 | 647 | - ``state``: a random string |
645 | 648 | - ``nonce``: a random string |
680 | 683 | request.addCookie( |
681 | 684 | SESSION_COOKIE_NAME, |
682 | 685 | cookie, |
683 | path="/_synapse/oidc", | |
686 | path="/_synapse/client/oidc", | |
684 | 687 | max_age="3600", |
685 | 688 | httpOnly=True, |
686 | 689 | sameSite="lax", |
701 | 704 | async def handle_oidc_callback( |
702 | 705 | self, request: SynapseRequest, session_data: "OidcSessionData", code: str |
703 | 706 | ) -> None: |
704 | """Handle an incoming request to /_synapse/oidc/callback | |
707 | """Handle an incoming request to /_synapse/client/oidc/callback | |
705 | 708 | |
706 | 709 | By this time we have already validated the session on the synapse side, and |
707 | 710 | now need to do the provider-specific operations. This includes: |
1055 | 1058 | |
1056 | 1059 | |
1057 | 1060 | UserAttributeDict = TypedDict( |
1058 | "UserAttributeDict", {"localpart": Optional[str], "display_name": Optional[str]} | |
1061 | "UserAttributeDict", | |
1062 | {"localpart": Optional[str], "display_name": Optional[str], "emails": List[str]}, | |
1059 | 1063 | ) |
1060 | 1064 | C = TypeVar("C") |
1061 | 1065 | |
1134 | 1138 | env = Environment(finalize=jinja_finalize) |
1135 | 1139 | |
1136 | 1140 | |
1137 | @attr.s | |
1141 | @attr.s(slots=True, frozen=True) | |
1138 | 1142 | class JinjaOidcMappingConfig: |
1139 | 1143 | subject_claim = attr.ib(type=str) |
1140 | 1144 | localpart_template = attr.ib(type=Optional[Template]) |
1141 | 1145 | display_name_template = attr.ib(type=Optional[Template]) |
1146 | email_template = attr.ib(type=Optional[Template]) | |
1142 | 1147 | extra_attributes = attr.ib(type=Dict[str, Template]) |
1143 | 1148 | |
1144 | 1149 | |
1155 | 1160 | def parse_config(config: dict) -> JinjaOidcMappingConfig: |
1156 | 1161 | subject_claim = config.get("subject_claim", "sub") |
1157 | 1162 | |
1158 | localpart_template = None # type: Optional[Template] | |
1159 | if "localpart_template" in config: | |
1163 | def parse_template_config(option_name: str) -> Optional[Template]: | |
1164 | if option_name not in config: | |
1165 | return None | |
1160 | 1166 | try: |
1161 | localpart_template = env.from_string(config["localpart_template"]) | |
1167 | return env.from_string(config[option_name]) | |
1162 | 1168 | except Exception as e: |
1163 | raise ConfigError( | |
1164 | "invalid jinja template", path=["localpart_template"] | |
1165 | ) from e | |
1166 | ||
1167 | display_name_template = None # type: Optional[Template] | |
1168 | if "display_name_template" in config: | |
1169 | try: | |
1170 | display_name_template = env.from_string(config["display_name_template"]) | |
1171 | except Exception as e: | |
1172 | raise ConfigError( | |
1173 | "invalid jinja template", path=["display_name_template"] | |
1174 | ) from e | |
1169 | raise ConfigError("invalid jinja template", path=[option_name]) from e | |
1170 | ||
1171 | localpart_template = parse_template_config("localpart_template") | |
1172 | display_name_template = parse_template_config("display_name_template") | |
1173 | email_template = parse_template_config("email_template") | |
1175 | 1174 | |
1176 | 1175 | extra_attributes = {} # type: Dict[str, Template]
1177 | 1176 | if "extra_attributes" in config: |
1191 | 1190 | subject_claim=subject_claim, |
1192 | 1191 | localpart_template=localpart_template, |
1193 | 1192 | display_name_template=display_name_template, |
1193 | email_template=email_template, | |
1194 | 1194 | extra_attributes=extra_attributes, |
1195 | 1195 | ) |
1196 | 1196 | |
1212 | 1212 | # a usable mxid. |
1213 | 1213 | localpart += str(failures) if failures else "" |
1214 | 1214 | |
1215 | display_name = None # type: Optional[str] | |
1216 | if self._config.display_name_template is not None: | |
1217 | display_name = self._config.display_name_template.render( | |
1218 | user=userinfo | |
1219 | ).strip() | |
1220 | ||
1221 | if display_name == "": | |
1222 | display_name = None | |
1223 | ||
1224 | return UserAttributeDict(localpart=localpart, display_name=display_name) | |
1215 | def render_template_field(template: Optional[Template]) -> Optional[str]: | |
1216 | if template is None: | |
1217 | return None | |
1218 | return template.render(user=userinfo).strip() | |
1219 | ||
1220 | display_name = render_template_field(self._config.display_name_template) | |
1221 | if display_name == "": | |
1222 | display_name = None | |
1223 | ||
1224 | emails = [] # type: List[str] | |
1225 | email = render_template_field(self._config.email_template) | |
1226 | if email: | |
1227 | emails.append(email) | |
1228 | ||
1229 | return UserAttributeDict( | |
1230 | localpart=localpart, display_name=display_name, emails=emails | |
1231 | ) | |
1225 | 1232 | |
1226 | 1233 | async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict: |
1227 | 1234 | extras = {} # type: Dict[str, str] |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | """Contains functions for registering clients.""" |
16 | ||
16 | 17 | import logging |
17 | from typing import TYPE_CHECKING, List, Optional, Tuple | |
18 | from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple | |
18 | 19 | |
19 | 20 | from synapse import types |
20 | 21 | from synapse.api.constants import MAX_USERID_LENGTH, EventTypes, JoinRules, LoginType |
151 | 152 | user_type: Optional[str] = None, |
152 | 153 | default_display_name: Optional[str] = None, |
153 | 154 | address: Optional[str] = None, |
154 | bind_emails: List[str] = [], | |
155 | bind_emails: Iterable[str] = [], | |
155 | 156 | by_admin: bool = False, |
156 | 157 | user_agent_ips: Optional[List[Tuple[str, str]]] = None, |
157 | 158 | ) -> str: |
692 | 693 | access_token: The access token of the newly logged in device, or |
693 | 694 | None if `inhibit_login` enabled. |
694 | 695 | """ |
696 | # TODO: 3pid registration can actually happen on the workers. Consider | |
697 | # refactoring it. | |
695 | 698 | if self.hs.config.worker_app: |
696 | 699 | await self._post_registration_client( |
697 | 700 | user_id=user_id, auth_result=auth_result, access_token=access_token |
125 | 125 | |
126 | 126 | self.third_party_event_rules = hs.get_third_party_event_rules() |
127 | 127 | |
128 | self._invite_burst_count = ( | |
129 | hs.config.ratelimiting.rc_invites_per_room.burst_count | |
130 | ) | |
131 | ||
128 | 132 | async def upgrade_room( |
129 | 133 | self, requester: Requester, old_room_id: str, new_version: RoomVersion |
130 | 134 | ) -> str: |
660 | 664 | # Allow the request to go through, but remove any associated invites. |
661 | 665 | invite_3pid_list = [] |
662 | 666 | invite_list = [] |
667 | ||
668 | if len(invite_list) + len(invite_3pid_list) > self._invite_burst_count: | |
669 | raise SynapseError(400, "Cannot invite so many users at once") | |
663 | 670 | |
664 | 671 | await self.event_creation_handler.assert_accepted_privacy_policy(requester) |
665 | 672 |
84 | 84 | burst_count=hs.config.ratelimiting.rc_joins_remote.burst_count, |
85 | 85 | ) |
86 | 86 | |
87 | self._invites_per_room_limiter = Ratelimiter( | |
88 | clock=self.clock, | |
89 | rate_hz=hs.config.ratelimiting.rc_invites_per_room.per_second, | |
90 | burst_count=hs.config.ratelimiting.rc_invites_per_room.burst_count, | |
91 | ) | |
92 | self._invites_per_user_limiter = Ratelimiter( | |
93 | clock=self.clock, | |
94 | rate_hz=hs.config.ratelimiting.rc_invites_per_user.per_second, | |
95 | burst_count=hs.config.ratelimiting.rc_invites_per_user.burst_count, | |
96 | ) | |
97 | ||
87 | 98 | # This is only used to get at ratelimit function, and |
88 | 99 | # maybe_kick_guest_users. It's fine there are multiple of these as |
89 | 100 | # it doesn't store state. |
142 | 153 | room_id |
143 | 154 | """ |
144 | 155 | raise NotImplementedError() |
156 | ||
157 | def ratelimit_invite(self, room_id: Optional[str], invitee_user_id: str): | |
158 | """Ratelimit invites by room and by target user. | |
159 | ||
160 | If room ID is missing then we just rate limit by target user. | |
161 | """ | |
162 | if room_id: | |
163 | self._invites_per_room_limiter.ratelimit(room_id) | |
164 | ||
165 | self._invites_per_user_limiter.ratelimit(invitee_user_id) | |
145 | 166 | |
146 | 167 | async def _local_membership_update( |
147 | 168 | self, |
386 | 407 | raise SynapseError(403, "This room has been blocked on this server") |
387 | 408 | |
388 | 409 | if effective_membership_state == Membership.INVITE: |
410 | target_id = target.to_string() | |
411 | if ratelimit: | |
412 | # Don't ratelimit application services. | |
413 | if not requester.app_service or requester.app_service.is_rate_limited(): | |
414 | self.ratelimit_invite(room_id, target_id) | |
415 | ||
389 | 416 | # block any attempts to invite the server notices mxid |
390 | if target.to_string() == self._server_notices_mxid: | |
417 | if target_id == self._server_notices_mxid: | |
391 | 418 | raise SynapseError(HTTPStatus.FORBIDDEN, "Cannot invite this user") |
392 | 419 | |
393 | 420 | block_invite = False |
411 | 438 | block_invite = True |
412 | 439 | |
413 | 440 | if not await self.spam_checker.user_may_invite( |
414 | requester.user.to_string(), target.to_string(), room_id | |
441 | requester.user.to_string(), target_id, room_id | |
415 | 442 | ): |
416 | 443 | logger.info("Blocking invite due to spam checker") |
417 | 444 | block_invite = True |
77 | 77 | # user-facing name of this auth provider |
78 | 78 | self.idp_name = "SAML" |
79 | 79 | |
80 | # we do not currently support icons for SAML auth, but this is required by | |
80 | # we do not currently support icons/brands for SAML auth, but this is required by | |
81 | 81 | # the SsoIdentityProvider protocol type. |
82 | 82 | self.idp_icon = None |
83 | self.idp_brand = None | |
83 | 84 | |
84 | 85 | # a map from saml session id to Saml2SessionData object |
85 | 86 | self._outstanding_requests_dict = {} # type: Dict[str, Saml2SessionData] |
131 | 132 | raise Exception("prepare_for_authenticate didn't return a Location header") |
132 | 133 | |
133 | 134 | async def handle_saml_response(self, request: SynapseRequest) -> None: |
134 | """Handle an incoming request to /_matrix/saml2/authn_response | |
135 | """Handle an incoming request to /_synapse/client/saml2/authn_response | |
135 | 136 | |
136 | 137 | Args: |
137 | 138 | request: the incoming request from the browser. We'll |
14 | 14 | |
15 | 15 | import itertools |
16 | 16 | import logging |
17 | from typing import Iterable | |
17 | from typing import TYPE_CHECKING, Dict, Iterable, List, Optional | |
18 | 18 | |
19 | 19 | from unpaddedbase64 import decode_base64, encode_base64 |
20 | 20 | |
21 | 21 | from synapse.api.constants import EventTypes, Membership |
22 | 22 | from synapse.api.errors import NotFoundError, SynapseError |
23 | 23 | from synapse.api.filtering import Filter |
24 | from synapse.events import EventBase | |
24 | 25 | from synapse.storage.state import StateFilter |
26 | from synapse.types import JsonDict, UserID | |
25 | 27 | from synapse.visibility import filter_events_for_client |
26 | 28 | |
27 | 29 | from ._base import BaseHandler |
28 | 30 | |
31 | if TYPE_CHECKING: | |
32 | from synapse.app.homeserver import HomeServer | |
33 | ||
29 | 34 | logger = logging.getLogger(__name__) |
30 | 35 | |
31 | 36 | |
32 | 37 | class SearchHandler(BaseHandler): |
33 | def __init__(self, hs): | |
38 | def __init__(self, hs: "HomeServer"): | |
34 | 39 | super().__init__(hs) |
35 | 40 | self._event_serializer = hs.get_event_client_serializer() |
36 | 41 | self.storage = hs.get_storage() |
86 | 91 | |
87 | 92 | return historical_room_ids |
88 | 93 | |
89 | async def search(self, user, content, batch=None): | |
94 | async def search( | |
95 | self, user: UserID, content: JsonDict, batch: Optional[str] = None | |
96 | ) -> JsonDict: | |
90 | 97 | """Performs a full text search for a user. |
91 | 98 | |
92 | 99 | Args: |
93 | user (UserID) | |
94 | content (dict): Search parameters | |
95 | batch (str): The next_batch parameter. Used for pagination. | |
100 | user | |
101 | content: Search parameters | |
102 | batch: The next_batch parameter. Used for pagination. | |
96 | 103 | |
97 | 104 | Returns: |
98 | 105 | dict to be returned to the client with results of search |
185 | 192 | # If doing a subset of all rooms search, check if any of the rooms
186 | 193 | # are from an upgraded room, and search their contents as well |
187 | 194 | if search_filter.rooms: |
188 | historical_room_ids = [] | |
195 | historical_room_ids = [] # type: List[str] | |
189 | 196 | for room_id in search_filter.rooms: |
190 | 197 | # Add any previous rooms to the search if they exist |
191 | 198 | ids = await self.get_old_rooms_from_upgraded_room(room_id) |
208 | 215 | |
209 | 216 | rank_map = {} # event_id -> rank of event |
210 | 217 | allowed_events = [] |
211 | room_groups = {} # Holds result of grouping by room, if applicable | |
212 | sender_group = {} # Holds result of grouping by sender, if applicable | |
218 | # Holds result of grouping by room, if applicable | |
219 | room_groups = {} # type: Dict[str, JsonDict] | |
220 | # Holds result of grouping by sender, if applicable | |
221 | sender_group = {} # type: Dict[str, JsonDict] | |
213 | 222 | |
214 | 223 | # Holds the next_batch for the entire result set if one of those exists |
215 | 224 | global_next_batch = None |
253 | 262 | s["results"].append(e.event_id) |
254 | 263 | |
255 | 264 | elif order_by == "recent": |
256 | room_events = [] | |
265 | room_events = [] # type: List[EventBase] | |
257 | 266 | i = 0 |
258 | 267 | |
259 | 268 | pagination_token = batch_token |
417 | 426 | |
418 | 427 | state_results = {} |
419 | 428 | if include_state: |
420 | rooms = {e.room_id for e in allowed_events} | |
421 | for room_id in rooms: | |
429 | for room_id in {e.room_id for e in allowed_events}: | |
422 | 430 | state = await self.state_handler.get_current_state(room_id) |
423 | 431 | state_results[room_id] = list(state.values()) |
424 | ||
425 | state_results.values() | |
426 | 432 | |
427 | 433 | # We're now about to serialize the events. We should not make any |
428 | 434 | # blocking calls after this. Otherwise the 'age' will be wrong |
447 | 453 | |
448 | 454 | if state_results: |
449 | 455 | s = {} |
450 | for room_id, state in state_results.items(): | |
456 | for room_id, state_events in state_results.items(): | |
451 | 457 | s[room_id] = await self._event_serializer.serialize_events( |
452 | state, time_now | |
458 | state_events, time_now | |
453 | 459 | ) |
454 | 460 | |
455 | 461 | rooms_cat_res["state"] = s |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | from typing import Optional | |
15 | from typing import TYPE_CHECKING, Optional | |
16 | 16 | |
17 | 17 | from synapse.api.errors import Codes, StoreError, SynapseError |
18 | 18 | from synapse.types import Requester |
19 | 19 | |
20 | 20 | from ._base import BaseHandler |
21 | ||
22 | if TYPE_CHECKING: | |
23 | from synapse.app.homeserver import HomeServer | |
21 | 24 | |
22 | 25 | logger = logging.getLogger(__name__) |
23 | 26 | |
25 | 28 | class SetPasswordHandler(BaseHandler): |
26 | 29 | """Handler which deals with changing user account passwords""" |
27 | 30 | |
28 | def __init__(self, hs): | |
31 | def __init__(self, hs: "HomeServer"): | |
29 | 32 | super().__init__(hs) |
30 | 33 | self._auth_handler = hs.get_auth_handler() |
31 | 34 | self._device_handler = hs.get_device_handler() |
32 | self._password_policy_handler = hs.get_password_policy_handler() | |
33 | 35 | |
34 | 36 | async def set_password( |
35 | 37 | self, |
37 | 39 | password_hash: str, |
38 | 40 | logout_devices: bool, |
39 | 41 | requester: Optional[Requester] = None, |
40 | ): | |
42 | ) -> None: | |
41 | 43 | if not self.hs.config.password_localdb_enabled: |
42 | 44 | raise SynapseError(403, "Password change disabled", errcode=Codes.FORBIDDEN) |
43 | 45 |
13 | 13 | # limitations under the License. |
14 | 14 | import abc |
15 | 15 | import logging |
16 | from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Mapping, Optional | |
16 | from typing import ( | |
17 | TYPE_CHECKING, | |
18 | Awaitable, | |
19 | Callable, | |
20 | Dict, | |
21 | Iterable, | |
22 | Mapping, | |
23 | Optional, | |
24 | Set, | |
25 | ) | |
17 | 26 | from urllib.parse import urlencode |
18 | 27 | |
19 | 28 | import attr |
20 | 29 | from typing_extensions import NoReturn, Protocol |
21 | 30 | |
22 | 31 | from twisted.web.http import Request |
32 | from twisted.web.iweb import IRequest | |
23 | 33 | |
24 | 34 | from synapse.api.constants import LoginType |
25 | from synapse.api.errors import Codes, RedirectException, SynapseError | |
35 | from synapse.api.errors import Codes, NotFoundError, RedirectException, SynapseError | |
26 | 36 | from synapse.handlers.ui_auth import UIAuthSessionDataConstants |
27 | 37 | from synapse.http import get_request_user_agent |
28 | from synapse.http.server import respond_with_html | |
38 | from synapse.http.server import respond_with_html, respond_with_redirect | |
29 | 39 | from synapse.http.site import SynapseRequest |
30 | from synapse.types import JsonDict, UserID, contains_invalid_mxid_characters | |
40 | from synapse.types import Collection, JsonDict, UserID, contains_invalid_mxid_characters | |
31 | 41 | from synapse.util.async_helpers import Linearizer |
32 | 42 | from synapse.util.stringutils import random_string |
33 | 43 | |
77 | 87 | @property |
78 | 88 | def idp_icon(self) -> Optional[str]: |
79 | 89 | """Optional MXC URI for user-facing icon""" |
90 | return None | |
91 | ||
92 | @property | |
93 | def idp_brand(self) -> Optional[str]: | |
94 | """Optional branding identifier""" | |
80 | 95 | return None |
81 | 96 | |
82 | 97 | @abc.abstractmethod |
108 | 123 | # enter one. |
109 | 124 | localpart = attr.ib(type=Optional[str]) |
110 | 125 | display_name = attr.ib(type=Optional[str], default=None) |
111 | emails = attr.ib(type=List[str], default=attr.Factory(list)) | |
126 | emails = attr.ib(type=Collection[str], default=attr.Factory(list)) | |
112 | 127 | |
113 | 128 | |
114 | 129 | @attr.s(slots=True) |
123 | 138 | |
124 | 139 | # attributes returned by the ID mapper |
125 | 140 | display_name = attr.ib(type=Optional[str]) |
126 | emails = attr.ib(type=List[str]) | |
141 | emails = attr.ib(type=Collection[str]) | |
127 | 142 | |
128 | 143 | # An optional dictionary of extra attributes to be provided to the client in the |
129 | 144 | # login response. |
134 | 149 | |
135 | 150 | # expiry time for the session, in milliseconds |
136 | 151 | expiry_time_ms = attr.ib(type=int) |
152 | ||
153 | # choices made by the user | |
154 | chosen_localpart = attr.ib(type=Optional[str], default=None) | |
155 | use_display_name = attr.ib(type=bool, default=True) | |
156 | emails_to_use = attr.ib(type=Collection[str], default=()) | |
157 | terms_accepted_version = attr.ib(type=Optional[str], default=None) | |
137 | 158 | |
138 | 159 | |
139 | 160 | # the HTTP cookie used to track the mapping session id |
168 | 189 | |
169 | 190 | # map from idp_id to SsoIdentityProvider |
170 | 191 | self._identity_providers = {} # type: Dict[str, SsoIdentityProvider] |
192 | ||
193 | self._consent_at_registration = hs.config.consent.user_consent_at_registration | |
171 | 194 | |
172 | 195 | def register_identity_provider(self, p: SsoIdentityProvider): |
173 | 196 | p_id = p.idp_id |
234 | 257 | respond_with_html(request, code, html) |
235 | 258 | |
236 | 259 | async def handle_redirect_request( |
237 | self, request: SynapseRequest, client_redirect_url: bytes, | |
260 | self, | |
261 | request: SynapseRequest, | |
262 | client_redirect_url: bytes, | |
263 | idp_id: Optional[str], | |
238 | 264 | ) -> str: |
239 | 265 | """Handle a request to /login/sso/redirect |
240 | 266 | |
242 | 268 | request: incoming HTTP request |
243 | 269 | client_redirect_url: the URL that we should redirect the |
244 | 270 | client to after login. |
271 | idp_id: optional identity provider chosen by the client | |
245 | 272 | |
246 | 273 | Returns: |
247 | 274 | the URI to redirect to |
251 | 278 | 400, "Homeserver not configured for SSO.", errcode=Codes.UNRECOGNIZED |
252 | 279 | ) |
253 | 280 | |
281 | # if the client chose an IdP, use that | |
282 | idp = None # type: Optional[SsoIdentityProvider] | |
283 | if idp_id: | |
284 | idp = self._identity_providers.get(idp_id) | |
285 | if not idp: | |
286 | raise NotFoundError("Unknown identity provider") | |
287 | ||
254 | 288 | # if we only have one auth provider, redirect to it directly |
255 | if len(self._identity_providers) == 1: | |
256 | ap = next(iter(self._identity_providers.values())) | |
257 | return await ap.handle_redirect_request(request, client_redirect_url) | |
289 | elif len(self._identity_providers) == 1: | |
290 | idp = next(iter(self._identity_providers.values())) | |
291 | ||
292 | if idp: | |
293 | return await idp.handle_redirect_request(request, client_redirect_url) | |
258 | 294 | |
259 | 295 | # otherwise, redirect to the IDP picker |
260 | 296 | return "/_synapse/client/pick_idp?" + urlencode( |
368 | 404 | to an additional page. (e.g. to prompt for more information) |
369 | 405 | |
370 | 406 | """ |
407 | new_user = False | |
408 | ||
371 | 409 | # grab a lock while we try to find a mapping for this user. This seems... |
372 | 410 | # optimistic, especially for implementations that end up redirecting to |
373 | 411 | # interstitial pages. |
408 | 446 | get_request_user_agent(request), |
409 | 447 | request.getClientIP(), |
410 | 448 | ) |
449 | new_user = True | |
411 | 450 | |
412 | 451 | await self._auth_handler.complete_sso_login( |
413 | user_id, request, client_redirect_url, extra_login_attributes | |
452 | user_id, | |
453 | request, | |
454 | client_redirect_url, | |
455 | extra_login_attributes, | |
456 | new_user=new_user, | |
414 | 457 | ) |
415 | 458 | |
416 | 459 | async def _call_attribute_mapper( |
500 | 543 | logger.info("Recorded registration session id %s", session_id) |
501 | 544 | |
502 | 545 | # Set the cookie and redirect to the username picker |
503 | e = RedirectException(b"/_synapse/client/pick_username") | |
546 | e = RedirectException(b"/_synapse/client/pick_username/account_details") | |
504 | 547 | e.cookies.append( |
505 | 548 | b"%s=%s; path=/" |
506 | 549 | % (USERNAME_MAPPING_SESSION_COOKIE_NAME, session_id.encode("ascii")) |
628 | 671 | ) |
629 | 672 | respond_with_html(request, 200, html) |
630 | 673 | |
674 | def get_mapping_session(self, session_id: str) -> UsernameMappingSession: | |
675 | """Look up the given username mapping session | |
676 | ||
677 | If it is not found, raises a SynapseError with an http code of 400 | |
678 | ||
679 | Args: | |
680 | session_id: session to look up | |
681 | Returns: | |
682 | active mapping session | |
683 | Raises: | |
684 | SynapseError if the session is not found/has expired | |
685 | """ | |
686 | self._expire_old_sessions() | |
687 | session = self._username_mapping_sessions.get(session_id) | |
688 | if session: | |
689 | return session | |
690 | logger.info("Couldn't find session id %s", session_id) | |
691 | raise SynapseError(400, "unknown session") | |
692 | ||
631 | 693 | async def check_username_availability( |
632 | 694 | self, localpart: str, session_id: str, |
633 | 695 | ) -> bool: |
644 | 706 | |
645 | 707 | # make sure that there is a valid mapping session, to stop people dictionary- |
646 | 708 | # scanning for accounts |
647 | ||
648 | self._expire_old_sessions() | |
649 | session = self._username_mapping_sessions.get(session_id) | |
650 | if not session: | |
651 | logger.info("Couldn't find session id %s", session_id) | |
652 | raise SynapseError(400, "unknown session") | |
709 | self.get_mapping_session(session_id) | |
653 | 710 | |
654 | 711 | logger.info( |
655 | 712 | "[session %s] Checking for availability of username %s", |
666 | 723 | return not user_infos |
667 | 724 | |
668 | 725 | async def handle_submit_username_request( |
669 | self, request: SynapseRequest, localpart: str, session_id: str | |
726 | self, | |
727 | request: SynapseRequest, | |
728 | session_id: str, | |
729 | localpart: str, | |
730 | use_display_name: bool, | |
731 | emails_to_use: Iterable[str], | |
670 | 732 | ) -> None: |
671 | 733 | """Handle a request to the username-picker 'submit' endpoint |
672 | 734 | |
676 | 738 | request: HTTP request |
677 | 739 | session_id: ID of the username mapping session, extracted from a cookie
678 | 740 | localpart: localpart requested by the user
679 | """ | |
680 | self._expire_old_sessions() | |
681 | session = self._username_mapping_sessions.get(session_id) | |
682 | if not session: | |
683 | logger.info("Couldn't find session id %s", session_id) | |
684 | raise SynapseError(400, "unknown session") | |
685 | ||
686 | logger.info("[session %s] Registering localpart %s", session_id, localpart) | |
741 | use_display_name: whether the user wants to use the suggested display name | |
742 | emails_to_use: emails that the user would like to use | |
743 | """ | |
744 | session = self.get_mapping_session(session_id) | |
745 | ||
746 | # update the session with the user's choices | |
747 | session.chosen_localpart = localpart | |
748 | session.use_display_name = use_display_name | |
749 | ||
750 | emails_from_idp = set(session.emails) | |
751 | filtered_emails = set() # type: Set[str] | |
752 | ||
753 | # we iterate through the list rather than just building a set intersection, so | |
754 | # that we can log attempts to use unknown addresses | |
755 | for email in emails_to_use: | |
756 | if email in emails_from_idp: | |
757 | filtered_emails.add(email) | |
758 | else: | |
759 | logger.warning( | |
760 | "[session %s] ignoring user request to use unknown email address %r", | |
761 | session_id, | |
762 | email, | |
763 | ) | |
764 | session.emails_to_use = filtered_emails | |
765 | ||
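The email filtering above keeps only addresses the IdP actually supplied, iterating (rather than taking the set intersection directly) so that unknown addresses can be logged. Standalone, with a hypothetical function name:

```python
import logging
from typing import Iterable, Set

logger = logging.getLogger(__name__)


def filter_emails(emails_from_idp: Set[str], emails_to_use: Iterable[str]) -> Set[str]:
    """Keep only addresses the IdP supplied, logging any that weren't."""
    filtered = set()
    for email in emails_to_use:
        if email in emails_from_idp:
            filtered.add(email)
        else:
            # A plain intersection would silently drop these.
            logger.warning("ignoring unknown email address %r", email)
    return filtered
```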
766 | # we may now need to collect consent from the user, in which case, redirect | |
767 | # to the consent-extraction-unit | |
768 | if self._consent_at_registration: | |
769 | redirect_url = b"/_synapse/client/new_user_consent" | |
770 | ||
771 | # otherwise, redirect to the completion page | |
772 | else: | |
773 | redirect_url = b"/_synapse/client/sso_register" | |
774 | ||
775 | respond_with_redirect(request, redirect_url) | |
776 | ||
777 | async def handle_terms_accepted( | |
778 | self, request: Request, session_id: str, terms_version: str | |
779 | ): | |
780 | """Handle a request to the new-user 'consent' endpoint | |
781 | ||
782 | Will serve an HTTP response to the request. | |
783 | ||
784 | Args: | |
785 | request: HTTP request | |
786 | session_id: ID of the username mapping session, extracted from a cookie | |
787 | terms_version: the version of the terms which the user viewed and consented | |
788 | to | |
789 | """ | |
790 | logger.info( | |
791 | "[session %s] User consented to terms version %s", | |
792 | session_id, | |
793 | terms_version, | |
794 | ) | |
795 | session = self.get_mapping_session(session_id) | |
796 | session.terms_accepted_version = terms_version | |
797 | ||
798 | # we're done; now we can register the user | |
799 | respond_with_redirect(request, b"/_synapse/client/sso_register") | |
800 | ||
801 | async def register_sso_user(self, request: Request, session_id: str) -> None: | |
802 | """Called once we have all the info we need to register a new user. | |
803 | ||
804 | Does so and serves an HTTP response | |
805 | ||
806 | Args: | |
807 | request: HTTP request | |
808 | session_id: ID of the username mapping session, extracted from a cookie | |
809 | """ | |
810 | session = self.get_mapping_session(session_id) | |
811 | ||
812 | logger.info( | |
813 | "[session %s] Registering localpart %s", | |
814 | session_id, | |
815 | session.chosen_localpart, | |
816 | ) | |
687 | 817 | |
688 | 818 | attributes = UserAttributes( |
689 | localpart=localpart, | |
690 | display_name=session.display_name, | |
691 | emails=session.emails, | |
692 | ) | |
819 | localpart=session.chosen_localpart, emails=session.emails_to_use, | |
820 | ) | |
821 | ||
822 | if session.use_display_name: | |
823 | attributes.display_name = session.display_name | |
693 | 824 | |
694 | 825 | # the following will raise a 400 error if the username has been taken in the |
695 | 826 | # meantime. |
701 | 832 | request.getClientIP(), |
702 | 833 | ) |
703 | 834 | |
704 | logger.info("[session %s] Registered userid %s", session_id, user_id) | |
835 | logger.info( | |
836 | "[session %s] Registered userid %s with attributes %s", | |
837 | session_id, | |
838 | user_id, | |
839 | attributes, | |
840 | ) | |
705 | 841 | |
706 | 842 | # delete the mapping session and the cookie |
707 | 843 | del self._username_mapping_sessions[session_id] |
714 | 850 | path=b"/", |
715 | 851 | ) |
716 | 852 | |
853 | auth_result = {} | |
854 | if session.terms_accepted_version: | |
855 | # TODO: make this less awful. | |
856 | auth_result[LoginType.TERMS] = True | |
857 | ||
858 | await self._registration_handler.post_registration_actions( | |
859 | user_id, auth_result, access_token=None | |
860 | ) | |
861 | ||
717 | 862 | await self._auth_handler.complete_sso_login( |
718 | 863 | user_id, |
719 | 864 | request, |
720 | 865 | session.client_redirect_url, |
721 | 866 | session.extra_login_attributes, |
867 | new_user=True, | |
722 | 868 | ) |
723 | 869 | |
724 | 870 | def _expire_old_sessions(self): |
732 | 878 | for session_id in to_expire: |
733 | 879 | logger.info("Expiring mapping session %s", session_id) |
734 | 880 | del self._username_mapping_sessions[session_id] |
881 | ||
882 | ||
883 | def get_username_mapping_session_cookie_from_request(request: IRequest) -> str: | |
884 | """Extract the session ID from the cookie | |
885 | ||
886 | Raises a SynapseError if the cookie isn't found | |
887 | """ | |
888 | session_id = request.getCookie(USERNAME_MAPPING_SESSION_COOKIE_NAME) | |
889 | if not session_id: | |
890 | raise SynapseError(code=400, msg="missing session_id") | |
891 | return session_id.decode("ascii", errors="replace") |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | import logging |
16 | from typing import TYPE_CHECKING, Optional | |
17 | ||
18 | if TYPE_CHECKING: | |
19 | from synapse.app.homeserver import HomeServer | |
16 | 20 | |
17 | 21 | logger = logging.getLogger(__name__) |
18 | 22 | |
19 | 23 | |
20 | 24 | class StateDeltasHandler: |
21 | def __init__(self, hs): | |
25 | def __init__(self, hs: "HomeServer"): | |
22 | 26 | self.store = hs.get_datastore() |
23 | 27 | |
24 | async def _get_key_change(self, prev_event_id, event_id, key_name, public_value): | |
28 | async def _get_key_change( | |
29 | self, | |
30 | prev_event_id: Optional[str], | |
31 | event_id: Optional[str], | |
32 | key_name: str, | |
33 | public_value: str, | |
34 | ) -> Optional[bool]: | |
25 | 35 | """Given two events check if the `key_name` field in content changed |
26 | 36 | from not matching `public_value` to doing so. |
27 | 37 |
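The new `Optional[bool]` return annotation on `_get_key_change` encodes a tri-state result: did the field flip to the public value, flip away from it, or not change its "publicness" at all. A simplified sketch of that comparison on bare values (the real method first fetches both events by ID):

```python
from typing import Optional


def get_key_change(
    prev_value: Optional[str], new_value: Optional[str], public_value: str
) -> Optional[bool]:
    """True if the field changed to the public value, False if it changed
    away from it, None if its publicness did not change."""
    prev_public = prev_value == public_value
    new_public = new_value == public_value
    if prev_public == new_public:
        return None  # no change in publicness
    return new_public
```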
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | ||
15 | 14 | import logging |
16 | 15 | from collections import Counter |
16 | from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Tuple | |
17 | ||
18 | from typing_extensions import Counter as CounterType | |
17 | 19 | |
18 | 20 | from synapse.api.constants import EventTypes, Membership |
19 | 21 | from synapse.metrics import event_processing_positions |
20 | 22 | from synapse.metrics.background_process_metrics import run_as_background_process |
23 | from synapse.types import JsonDict | |
24 | ||
25 | if TYPE_CHECKING: | |
26 | from synapse.app.homeserver import HomeServer | |
21 | 27 | |
22 | 28 | logger = logging.getLogger(__name__) |
23 | 29 | |
30 | 36 | Heavily derived from UserDirectoryHandler |
31 | 37 | """ |
32 | 38 | |
33 | def __init__(self, hs): | |
39 | def __init__(self, hs: "HomeServer"): | |
34 | 40 | self.hs = hs |
35 | 41 | self.store = hs.get_datastore() |
36 | 42 | self.state = hs.get_state_handler() |
43 | 49 | self.stats_enabled = hs.config.stats_enabled |
44 | 50 | |
45 | 51 | # The current position in the current_state_delta stream |
46 | self.pos = None | |
52 | self.pos = None # type: Optional[int] | |
47 | 53 | |
48 | 54 | # Guard to ensure we only process deltas one at a time |
49 | 55 | self._is_processing = False |
55 | 61 | # we start populating stats |
56 | 62 | self.clock.call_later(0, self.notify_new_event) |
57 | 63 | |
58 | def notify_new_event(self): | |
64 | def notify_new_event(self) -> None: | |
59 | 65 | """Called when there may be more deltas to process |
60 | 66 | """ |
61 | 67 | if not self.stats_enabled or self._is_processing: |
71 | 77 | |
72 | 78 | run_as_background_process("stats.notify_new_event", process) |
73 | 79 | |
74 | async def _unsafe_process(self): | |
80 | async def _unsafe_process(self) -> None: | |
75 | 81 | # If self.pos is None then it means we haven't fetched it from the DB
76 | 82 | if self.pos is None: |
77 | 83 | self.pos = await self.store.get_stats_positions() |
109 | 115 | ) |
110 | 116 | |
111 | 117 | for room_id, fields in room_count.items(): |
112 | room_deltas.setdefault(room_id, {}).update(fields) | |
118 | room_deltas.setdefault(room_id, Counter()).update(fields) | |
113 | 119 | |
114 | 120 | for user_id, fields in user_count.items(): |
115 | user_deltas.setdefault(user_id, {}).update(fields) | |
121 | user_deltas.setdefault(user_id, Counter()).update(fields) | |
116 | 122 | |
117 | 123 | logger.debug("room_deltas: %s", room_deltas) |
118 | 124 | logger.debug("user_deltas: %s", user_deltas) |
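Switching the `setdefault` default from `{}` to `Counter()` matters beyond type-checking: `Counter.update` *adds* counts, whereas `dict.update` would overwrite values for repeated keys. A small sketch (names hypothetical):

```python
from collections import Counter
from typing import Dict

room_deltas = {}  # type: Dict[str, Counter]


def apply_fields(room_id: str, fields: Dict[str, int]) -> None:
    # Counter.update sums into existing counts; a plain dict.update
    # would clobber the previous delta for the same field.
    room_deltas.setdefault(room_id, Counter()).update(fields)
```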
130 | 136 | |
131 | 137 | self.pos = max_pos |
132 | 138 | |
133 | async def _handle_deltas(self, deltas): | |
139 | async def _handle_deltas( | |
140 | self, deltas: Iterable[JsonDict] | |
141 | ) -> Tuple[Dict[str, CounterType[str]], Dict[str, CounterType[str]]]: | |
134 | 142 | """Called with the state deltas to process |
135 | 143 | |
136 | 144 | Returns: |
137 | tuple[dict[str, Counter], dict[str, counter]] | |
138 | 145 | Two dicts: the room deltas and the user deltas, |
139 | 146 | mapping from room/user ID to changes in the various fields. |
140 | 147 | """ |
141 | 148 | |
142 | room_to_stats_deltas = {} | |
143 | user_to_stats_deltas = {} | |
144 | ||
145 | room_to_state_updates = {} | |
149 | room_to_stats_deltas = {} # type: Dict[str, CounterType[str]] | |
150 | user_to_stats_deltas = {} # type: Dict[str, CounterType[str]] | |
151 | ||
152 | room_to_state_updates = {} # type: Dict[str, Dict[str, Any]] | |
146 | 153 | |
147 | 154 | for delta in deltas: |
148 | 155 | typ = delta["type"] |
172 | 179 | ) |
173 | 180 | continue |
174 | 181 | |
175 | event_content = {} | |
182 | event_content = {} # type: JsonDict | |
176 | 183 | |
177 | 184 | sender = None |
178 | 185 | if event_id is not None: |
256 | 263 | ) |
257 | 264 | |
258 | 265 | if has_changed_joinedness: |
259 | delta = +1 if membership == Membership.JOIN else -1 | |
266 | membership_delta = +1 if membership == Membership.JOIN else -1 | |
260 | 267 | |
261 | 268 | user_to_stats_deltas.setdefault(user_id, Counter())[ |
262 | 269 | "joined_rooms" |
263 | ] += delta | |
264 | ||
265 | room_stats_delta["local_users_in_room"] += delta | |
270 | ] += membership_delta | |
271 | ||
272 | room_stats_delta["local_users_in_room"] += membership_delta | |
266 | 273 | |
267 | 274 | elif typ == EventTypes.Create: |
268 | 275 | room_state["is_federatable"] = ( |
14 | 14 | import logging |
15 | 15 | import random |
16 | 16 | from collections import namedtuple |
17 | from typing import TYPE_CHECKING, List, Set, Tuple | |
17 | from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Tuple | |
18 | 18 | |
19 | 19 | from synapse.api.errors import AuthError, ShadowBanError, SynapseError |
20 | 20 | from synapse.appservice import ApplicationService |
21 | 21 | from synapse.metrics.background_process_metrics import run_as_background_process |
22 | 22 | from synapse.replication.tcp.streams import TypingStream |
23 | from synapse.types import JsonDict, UserID, get_domain_from_id | |
23 | from synapse.types import JsonDict, Requester, UserID, get_domain_from_id | |
24 | 24 | from synapse.util.caches.stream_change_cache import StreamChangeCache |
25 | 25 | from synapse.util.metrics import Measure |
26 | 26 | from synapse.util.wheel_timer import WheelTimer |
64 | 64 | ) |
65 | 65 | |
66 | 66 | # map room IDs to serial numbers |
67 | self._room_serials = {} | |
67 | self._room_serials = {} # type: Dict[str, int] | |
68 | 68 | # map room IDs to sets of users currently typing |
69 | self._room_typing = {} | |
70 | ||
71 | self._member_last_federation_poke = {} | |
69 | self._room_typing = {} # type: Dict[str, Set[str]] | |
70 | ||
71 | self._member_last_federation_poke = {} # type: Dict[RoomMember, int] | |
72 | 72 | self.wheel_timer = WheelTimer(bucket_size=5000) |
73 | 73 | self._latest_room_serial = 0 |
74 | 74 | |
75 | 75 | self.clock.looping_call(self._handle_timeouts, 5000) |
76 | 76 | |
77 | def _reset(self): | |
77 | def _reset(self) -> None: | |
78 | 78 | """Reset the typing handler's data caches. |
79 | 79 | """ |
80 | 80 | # map room IDs to serial numbers |
85 | 85 | self._member_last_federation_poke = {} |
86 | 86 | self.wheel_timer = WheelTimer(bucket_size=5000) |
87 | 87 | |
88 | def _handle_timeouts(self): | |
88 | def _handle_timeouts(self) -> None: | |
89 | 89 | logger.debug("Checking for typing timeouts") |
90 | 90 | |
91 | 91 | now = self.clock.time_msec() |
95 | 95 | for member in members: |
96 | 96 | self._handle_timeout_for_member(now, member) |
97 | 97 | |
98 | def _handle_timeout_for_member(self, now: int, member: RoomMember): | |
98 | def _handle_timeout_for_member(self, now: int, member: RoomMember) -> None: | |
99 | 99 | if not self.is_typing(member): |
100 | 100 | # Nothing to do if they're no longer typing |
101 | 101 | return |
113 | 113 | # each person typing. |
114 | 114 | self.wheel_timer.insert(now=now, obj=member, then=now + 60 * 1000) |
115 | 115 | |
116 | def is_typing(self, member): | |
116 | def is_typing(self, member: RoomMember) -> bool: | |
117 | 117 | return member.user_id in self._room_typing.get(member.room_id, []) |
118 | 118 | |
119 | async def _push_remote(self, member, typing): | |
119 | async def _push_remote(self, member: RoomMember, typing: bool) -> None: | |
120 | 120 | if not self.federation: |
121 | 121 | return |
122 | 122 | |
147 | 147 | |
148 | 148 | def process_replication_rows( |
149 | 149 | self, token: int, rows: List[TypingStream.TypingStreamRow] |
150 | ): | |
150 | ) -> None: | |
151 | 151 | """Should be called whenever we receive updates for typing stream. |
152 | 152 | """ |
153 | 153 | |
177 | 177 | |
178 | 178 | async def _send_changes_in_typing_to_remotes( |
179 | 179 | self, room_id: str, prev_typing: Set[str], now_typing: Set[str] |
180 | ): | |
180 | ) -> None: | |
181 | 181 | """Process a change in typing of a room from replication, sending EDUs |
182 | 182 | for any local users. |
183 | 183 | """ |
193 | 193 | if self.is_mine_id(user_id): |
194 | 194 | await self._push_remote(RoomMember(room_id, user_id), False) |
195 | 195 | |
196 | def get_current_token(self): | |
196 | def get_current_token(self) -> int: | |
197 | 197 | return self._latest_room_serial |
198 | 198 | |
199 | 199 | |
200 | 200 | class TypingWriterHandler(FollowerTypingHandler): |
201 | def __init__(self, hs): | |
201 | def __init__(self, hs: "HomeServer"): | |
202 | 202 | super().__init__(hs) |
203 | 203 | |
204 | 204 | assert hs.config.worker.writers.typing == hs.get_instance_name() |
212 | 212 | |
213 | 213 | hs.get_distributor().observe("user_left_room", self.user_left_room) |
214 | 214 | |
215 | self._member_typing_until = {} # clock time we expect to stop | |
215 | # clock time we expect to stop | |
216 | self._member_typing_until = {} # type: Dict[RoomMember, int] | |
216 | 217 | |
217 | 218 | # caches which room_ids changed at which serials |
218 | 219 | self._typing_stream_change_cache = StreamChangeCache( |
219 | 220 | "TypingStreamChangeCache", self._latest_room_serial |
220 | 221 | ) |
221 | 222 | |
222 | def _handle_timeout_for_member(self, now: int, member: RoomMember): | |
223 | def _handle_timeout_for_member(self, now: int, member: RoomMember) -> None: | |
223 | 224 | super()._handle_timeout_for_member(now, member) |
224 | 225 | |
225 | 226 | if not self.is_typing(member): |
232 | 233 | self._stopped_typing(member) |
233 | 234 | return |
234 | 235 | |
235 | async def started_typing(self, target_user, requester, room_id, timeout): | |
236 | async def started_typing( | |
237 | self, target_user: UserID, requester: Requester, room_id: str, timeout: int | |
238 | ) -> None: | |
236 | 239 | target_user_id = target_user.to_string() |
237 | 240 | auth_user_id = requester.user.to_string() |
238 | 241 | |
262 | 265 | |
263 | 266 | if was_present: |
264 | 267 | # No point sending another notification |
265 | return None | |
268 | return | |
266 | 269 | |
267 | 270 | self._push_update(member=member, typing=True) |
268 | 271 | |
269 | async def stopped_typing(self, target_user, requester, room_id): | |
272 | async def stopped_typing( | |
273 | self, target_user: UserID, requester: Requester, room_id: str | |
274 | ) -> None: | |
270 | 275 | target_user_id = target_user.to_string() |
271 | 276 | auth_user_id = requester.user.to_string() |
272 | 277 | |
289 | 294 | |
290 | 295 | self._stopped_typing(member) |
291 | 296 | |
292 | def user_left_room(self, user, room_id): | |
297 | def user_left_room(self, user: UserID, room_id: str) -> None: | |
293 | 298 | user_id = user.to_string() |
294 | 299 | if self.is_mine_id(user_id): |
295 | 300 | member = RoomMember(room_id=room_id, user_id=user_id) |
296 | 301 | self._stopped_typing(member) |
297 | 302 | |
298 | def _stopped_typing(self, member): | |
303 | def _stopped_typing(self, member: RoomMember) -> None: | |
299 | 304 | if member.user_id not in self._room_typing.get(member.room_id, set()): |
300 | 305 | # No point |
301 | return None | |
306 | return | |
302 | 307 | |
303 | 308 | self._member_typing_until.pop(member, None) |
304 | 309 | self._member_last_federation_poke.pop(member, None) |
305 | 310 | |
306 | 311 | self._push_update(member=member, typing=False) |
307 | 312 | |
308 | def _push_update(self, member, typing): | |
313 | def _push_update(self, member: RoomMember, typing: bool) -> None: | |
309 | 314 | if self.hs.is_mine_id(member.user_id): |
310 | 315 | # Only send updates for changes to our own users. |
311 | 316 | run_as_background_process( |
314 | 319 | |
315 | 320 | self._push_update_local(member=member, typing=typing) |
316 | 321 | |
317 | async def _recv_edu(self, origin, content): | |
322 | async def _recv_edu(self, origin: str, content: JsonDict) -> None: | |
318 | 323 | room_id = content["room_id"] |
319 | 324 | user_id = content["user_id"] |
320 | 325 | |
339 | 344 | self.wheel_timer.insert(now=now, obj=member, then=now + FEDERATION_TIMEOUT) |
340 | 345 | self._push_update_local(member=member, typing=content["typing"]) |
341 | 346 | |
342 | def _push_update_local(self, member, typing): | |
347 | def _push_update_local(self, member: RoomMember, typing: bool) -> None: | |
343 | 348 | room_set = self._room_typing.setdefault(member.room_id, set()) |
344 | 349 | if typing: |
345 | 350 | room_set.add(member.user_id) |
385 | 390 | |
386 | 391 | changed_rooms = self._typing_stream_change_cache.get_all_entities_changed( |
387 | 392 | last_id |
388 | ) | |
393 | ) # type: Optional[Iterable[str]] | |
389 | 394 | |
390 | 395 | if changed_rooms is None: |
391 | 396 | changed_rooms = self._room_serials |
411 | 416 | |
412 | 417 | def process_replication_rows( |
413 | 418 | self, token: int, rows: List[TypingStream.TypingStreamRow] |
414 | ): | |
419 | ) -> None: | |
415 | 420 | # The writing process should never get updates from replication. |
416 | 421 | raise Exception("Typing writer instance got typing info over replication") |
417 | 422 | |
418 | 423 | |
419 | 424 | class TypingNotificationEventSource: |
420 | def __init__(self, hs): | |
425 | def __init__(self, hs: "HomeServer"): | |
421 | 426 | self.hs = hs |
422 | 427 | self.clock = hs.get_clock() |
423 | 428 | # We can't call get_typing_handler here because there's a cycle: |
426 | 431 | # |
427 | 432 | self.get_typing_handler = hs.get_typing_handler |
428 | 433 | |
429 | def _make_event_for(self, room_id): | |
434 | def _make_event_for(self, room_id: str) -> JsonDict: | |
430 | 435 | typing = self.get_typing_handler()._room_typing[room_id] |
431 | 436 | return { |
432 | 437 | "type": "m.typing", |
461 | 466 | |
462 | 467 | return (events, handler._latest_room_serial) |
463 | 468 | |
464 | async def get_new_events(self, from_key, room_ids, **kwargs): | |
469 | async def get_new_events( | |
470 | self, from_key: int, room_ids: Iterable[str], **kwargs | |
471 | ) -> Tuple[List[JsonDict], int]: | |
465 | 472 | with Measure(self.clock, "typing.get_new_events"): |
466 | 473 | from_key = int(from_key) |
467 | 474 | handler = self.get_typing_handler() |
477 | 484 | |
478 | 485 | return (events, handler._latest_room_serial) |
479 | 486 | |
480 | def get_current_key(self): | |
487 | def get_current_key(self) -> int: | |
481 | 488 | return self.get_typing_handler()._latest_room_serial |
144 | 144 | if self.pos is None: |
145 | 145 | self.pos = await self.store.get_user_directory_stream_pos() |
146 | 146 | |
147 | # If still None then the initial background update hasn't happened yet | |
148 | if self.pos is None: | |
149 | return None | |
150 | ||
151 | 147 | # Loop round handling deltas until we're up to date |
152 | 148 | while True: |
153 | 149 | with Measure(self.clock, "user_dir_delta"): |
232 | 228 | |
233 | 229 | if change: # The user joined |
234 | 230 | event = await self.store.get_event(event_id, allow_none=True) |
231 | # We don't expect this event to be missing, but we don't | |
232 | # want the entire background process to break if it is. | |
233 | if event is None: | |
234 | continue | |
235 | ||
235 | 236 | profile = ProfileInfo( |
236 | 237 | avatar_url=event.content.get("avatar_url"), |
237 | 238 | display_name=event.content.get("displayname"), |
21 | 21 | import urllib |
22 | 22 | from http import HTTPStatus |
23 | 23 | from io import BytesIO |
24 | from typing import Any, Callable, Dict, Iterator, List, Tuple, Union | |
24 | from typing import ( | |
25 | Any, | |
26 | Awaitable, | |
27 | Callable, | |
28 | Dict, | |
29 | Iterable, | |
30 | Iterator, | |
31 | List, | |
32 | Pattern, | |
33 | Tuple, | |
34 | Union, | |
35 | ) | |
25 | 36 | |
26 | 37 | import jinja2 |
27 | 38 | from canonicaljson import iterencode_canonical_json |
39 | from typing_extensions import Protocol | |
28 | 40 | from zope.interface import implementer |
29 | 41 | |
30 | 42 | from twisted.internet import defer, interfaces |
167 | 179 | return preserve_fn(wrapped_async_request_handler) |
168 | 180 | |
169 | 181 | |
170 | class HttpServer: | |
182 | # Type of a callback method for processing requests. | |
183 | # It is actually called with a SynapseRequest and a kwargs dict for the params, | |
184 | # but I can't figure out how to represent that. | |
185 | ServletCallback = Callable[ | |
186 | ..., Union[None, Awaitable[None], Tuple[int, Any], Awaitable[Tuple[int, Any]]] | |
187 | ] | |
188 | ||
189 | ||
190 | class HttpServer(Protocol): | |
171 | 191 | """ Interface for registering callbacks on an HTTP server
172 | 192 | """ |
173 | 193 | |
174 | def register_paths(self, method, path_patterns, callback): | |
194 | def register_paths( | |
195 | self, | |
196 | method: str, | |
197 | path_patterns: Iterable[Pattern], | |
198 | callback: ServletCallback, | |
199 | servlet_classname: str, | |
200 | ) -> None: | |
175 | 201 | """ Register a callback that gets fired if we receive an HTTP request
176 | 202 | with the given method for a path that matches the given regex. |
177 | 203 | |
179 | 205 | an unpacked tuple. |
180 | 206 | |
181 | 207 | Args: |
182 | method (str): The method to listen to. | |
183 | path_patterns (list<SRE_Pattern>): The regex used to match requests. | |
184 | callback (function): The function to fire if we receive a matched | |
208 | method: The HTTP method to listen to. | |
209 | path_patterns: The regex used to match requests. | |
210 | callback: The function to fire if we receive a matched | |
185 | 211 | request. The first argument will be the request object and |
186 | 212 | subsequent arguments will be any matched groups from the regex. |
187 | This should return a tuple of (code, response). | |
213 | This should return either a tuple of (code, response), or None. | |
214 | servlet_classname (str): The name of the handler to be used in Prometheus | |
215 | and OpenTracing logs. | |
188 | 216 | """ |
189 | 217 | pass |
190 | 218 | |
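Turning `HttpServer` into a `Protocol` means implementers satisfy the interface structurally: any class with a matching `register_paths` counts, with no inheritance required. A toy illustration of structural subtyping (names hypothetical; `typing.Protocol` is the stdlib equivalent of `typing_extensions.Protocol` on Python 3.8+):

```python
from typing import Protocol


class Greeter(Protocol):
    def greet(self, name: str) -> str:
        ...


class FriendlyGreeter:  # note: no inheritance from Greeter
    def greet(self, name: str) -> str:
        return "hello " + name


def run(g: Greeter) -> str:
    # mypy accepts FriendlyGreeter here structurally, because its
    # greet() signature matches the protocol.
    return g.greet("world")
```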
353 | 381 | |
354 | 382 | def _get_handler_for_request( |
355 | 383 | self, request: SynapseRequest |
356 | ) -> Tuple[Callable, str, Dict[str, str]]: | |
384 | ) -> Tuple[ServletCallback, str, Dict[str, str]]: | |
357 | 385 | """Finds a callback method to handle the given request. |
358 | 386 | |
359 | 387 | Returns: |
732 | 760 | request.setHeader(b"Content-Security-Policy", b"frame-ancestors 'none';") |
733 | 761 | |
734 | 762 | |
763 | def respond_with_redirect(request: Request, url: bytes) -> None: | |
764 | """Write a 302 response to the request, if it is still alive.""" | |
765 | logger.debug("Redirect to %s", url.decode("utf-8")) | |
766 | request.redirect(url) | |
767 | finish_request(request) | |
768 | ||
769 | ||
735 | 770 | def finish_request(request: Request): |
736 | 771 | """ Finish writing the response to the request. |
737 | 772 |
790 | 790 | |
791 | 791 | @wraps(func) |
792 | 792 | def _tag_args_inner(*args, **kwargs): |
793 | argspec = inspect.getargspec(func) | |
793 | argspec = inspect.getfullargspec(func) | |
794 | 794 | for i, arg in enumerate(argspec.args[1:]): |
795 | 795 | set_tag("ARG_" + arg, args[i]) |
796 | 796 | set_tag("args", args[len(argspec.args) :]) |
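The `getargspec` → `getfullargspec` change is more than cosmetic: `inspect.getargspec` was deprecated and later removed (in Python 3.11), and it raises `ValueError` for functions using keyword-only arguments or annotations. A quick sketch of what `getfullargspec` reports (function name hypothetical):

```python
import inspect


def handler(self_like, a, b, *rest, key=None):
    """Toy function standing in for a traced request handler."""


# getfullargspec handles *args and keyword-only params, which the
# removed getargspec could not.
spec = inspect.getfullargspec(handler)
positional = spec.args[1:]  # skip the first ("self"-like) argument, as _tag_args does
```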
278 | 278 | ) |
279 | 279 | |
280 | 280 | async def complete_sso_login_async( |
281 | self, registered_user_id: str, request: SynapseRequest, client_redirect_url: str | |
281 | self, | |
282 | registered_user_id: str, | |
283 | request: SynapseRequest, | |
284 | client_redirect_url: str, | |
285 | new_user: bool = False, | |
282 | 286 | ): |
283 | 287 | """Complete a SSO login by redirecting the user to a page to confirm whether they |
284 | 288 | want their access token sent to `client_redirect_url`, or redirect them to that |
290 | 294 | request: The request to respond to. |
291 | 295 | client_redirect_url: The URL to which to offer to redirect the user (or to |
292 | 296 | redirect them directly if whitelisted). |
297 | new_user: set to true to use consent wording appropriate to a user | |
298 | who has just registered. | |
293 | 299 | """ |
294 | 300 | await self._auth_handler.complete_sso_login( |
295 | registered_user_id, request, client_redirect_url, | |
301 | registered_user_id, request, client_redirect_url, new_user=new_user | |
296 | 302 | ) |
297 | 303 | |
298 | 304 | @defer.inlineCallbacks |
266 | 266 | fallback_to_members=True, |
267 | 267 | ) |
268 | 268 | |
269 | summary_text = await self.make_summary_text( | |
270 | notifs_by_room, state_by_room, notif_events, user_id, reason | |
271 | ) | |
269 | if len(notifs_by_room) == 1: | |
270 | # Only one room has new stuff | |
271 | room_id = list(notifs_by_room.keys())[0] | |
272 | ||
273 | summary_text = await self.make_summary_text_single_room( | |
274 | room_id, | |
275 | notifs_by_room[room_id], | |
276 | state_by_room[room_id], | |
277 | notif_events, | |
278 | user_id, | |
279 | ) | |
280 | else: | |
281 | summary_text = await self.make_summary_text( | |
282 | notifs_by_room, state_by_room, notif_events, reason | |
283 | ) | |
272 | 284 | |
273 | 285 | template_vars = { |
274 | 286 | "user_display_name": user_display_name, |
491 | 503 | if "url" in event.content: |
492 | 504 | messagevars["image_url"] = event.content["url"] |
493 | 505 | |
506 | async def make_summary_text_single_room( | |
507 | self, | |
508 | room_id: str, | |
509 | notifs: List[Dict[str, Any]], | |
510 | room_state_ids: StateMap[str], | |
511 | notif_events: Dict[str, EventBase], | |
512 | user_id: str, | |
513 | ) -> str: | |
514 | """ | |
515 | Make a summary text for the email when only a single room has notifications. | |
516 | ||
517 | Args: | |
518 | room_id: The ID of the room. | |
519 | notifs: The notifications for this room. | |
520 | room_state_ids: The state map for the room. | |
521 | notif_events: A map of event ID -> notification event. | |
522 | user_id: The user receiving the notification. | |
523 | ||
524 | Returns: | |
525 | The summary text. | |
526 | """ | |
527 | # If the room has some kind of name, use it, but we don't | |
528 | # want the generated-from-names one here, otherwise we'll | |
529 | # end up with "new message from Bob in the Bob room". | |
530 | room_name = await calculate_room_name( | |
531 | self.store, room_state_ids, user_id, fallback_to_members=False | |
532 | ) | |
533 | ||
534 | # See if one of the notifs is an invite event for the user | |
535 | invite_event = None | |
536 | for n in notifs: | |
537 | ev = notif_events[n["event_id"]] | |
538 | if ev.type == EventTypes.Member and ev.state_key == user_id: | |
539 | if ev.content.get("membership") == Membership.INVITE: | |
540 | invite_event = ev | |
541 | break | |
542 | ||
543 | if invite_event: | |
544 | inviter_member_event_id = room_state_ids.get( | |
545 | ("m.room.member", invite_event.sender) | |
546 | ) | |
547 | inviter_name = invite_event.sender | |
548 | if inviter_member_event_id: | |
549 | inviter_member_event = await self.store.get_event( | |
550 | inviter_member_event_id, allow_none=True | |
551 | ) | |
552 | if inviter_member_event: | |
553 | inviter_name = name_from_member_event(inviter_member_event) | |
554 | ||
555 | if room_name is None: | |
556 | return self.email_subjects.invite_from_person % { | |
557 | "person": inviter_name, | |
558 | "app": self.app_name, | |
559 | } | |
560 | ||
561 | return self.email_subjects.invite_from_person_to_room % { | |
562 | "person": inviter_name, | |
563 | "room": room_name, | |
564 | "app": self.app_name, | |
565 | } | |
566 | ||
567 | if len(notifs) == 1: | |
568 | # There is just the one notification, so give some detail | |
569 | sender_name = None | |
570 | event = notif_events[notifs[0]["event_id"]] | |
571 | if ("m.room.member", event.sender) in room_state_ids: | |
572 | state_event_id = room_state_ids[("m.room.member", event.sender)] | |
573 | state_event = await self.store.get_event(state_event_id) | |
574 | sender_name = name_from_member_event(state_event) | |
575 | ||
576 | if sender_name is not None and room_name is not None: | |
577 | return self.email_subjects.message_from_person_in_room % { | |
578 | "person": sender_name, | |
579 | "room": room_name, | |
580 | "app": self.app_name, | |
581 | } | |
582 | elif sender_name is not None: | |
583 | return self.email_subjects.message_from_person % { | |
584 | "person": sender_name, | |
585 | "app": self.app_name, | |
586 | } | |
587 | ||
588 | # The sender is unknown, just use the room name (or ID). | |
589 | return self.email_subjects.messages_in_room % { | |
590 | "room": room_name or room_id, | |
591 | "app": self.app_name, | |
592 | } | |
593 | else: | |
594 | # There's more than one notification for this room, so just | |
595 | # say there are several | |
596 | if room_name is not None: | |
597 | return self.email_subjects.messages_in_room % { | |
598 | "room": room_name, | |
599 | "app": self.app_name, | |
600 | } | |
601 | ||
602 | return await self.make_summary_text_from_member_events( | |
603 | room_id, notifs, room_state_ids, notif_events | |
604 | ) | |
605 | ||
494 | 606 | async def make_summary_text( |
495 | 607 | self, |
496 | 608 | notifs_by_room: Dict[str, List[Dict[str, Any]]], |
497 | 609 | room_state_ids: Dict[str, StateMap[str]], |
498 | 610 | notif_events: Dict[str, EventBase], |
499 | user_id: str, | |
500 | 611 | reason: Dict[str, Any], |
501 | ): | |
502 | if len(notifs_by_room) == 1: | |
503 | # Only one room has new stuff | |
504 | room_id = list(notifs_by_room.keys())[0] | |
505 | ||
506 | # If the room has some kind of name, use it, but we don't | |
507 | # want the generated-from-names one here otherwise we'll | |
508 | # end up with, "new message from Bob in the Bob room" | |
509 | room_name = await calculate_room_name( | |
510 | self.store, room_state_ids[room_id], user_id, fallback_to_members=False | |
511 | ) | |
512 | ||
513 | # See if one of the notifs is an invite event for the user | |
514 | invite_event = None | |
515 | for n in notifs_by_room[room_id]: | |
516 | ev = notif_events[n["event_id"]] | |
517 | if ev.type == EventTypes.Member and ev.state_key == user_id: | |
518 | if ev.content.get("membership") == Membership.INVITE: | |
519 | invite_event = ev | |
520 | break | |
521 | ||
522 | if invite_event: | |
523 | inviter_member_event_id = room_state_ids[room_id].get( | |
524 | ("m.room.member", invite_event.sender) | |
525 | ) | |
526 | inviter_name = invite_event.sender | |
527 | if inviter_member_event_id: | |
528 | inviter_member_event = await self.store.get_event( | |
529 | inviter_member_event_id, allow_none=True | |
530 | ) | |
531 | if inviter_member_event: | |
532 | inviter_name = name_from_member_event(inviter_member_event) | |
533 | ||
534 | if room_name is None: | |
535 | return self.email_subjects.invite_from_person % { | |
536 | "person": inviter_name, | |
537 | "app": self.app_name, | |
538 | } | |
539 | else: | |
540 | return self.email_subjects.invite_from_person_to_room % { | |
541 | "person": inviter_name, | |
542 | "room": room_name, | |
543 | "app": self.app_name, | |
544 | } | |
545 | ||
546 | sender_name = None | |
547 | if len(notifs_by_room[room_id]) == 1: | |
548 | # There is just the one notification, so give some detail | |
549 | event = notif_events[notifs_by_room[room_id][0]["event_id"]] | |
550 | if ("m.room.member", event.sender) in room_state_ids[room_id]: | |
551 | state_event_id = room_state_ids[room_id][ | |
552 | ("m.room.member", event.sender) | |
553 | ] | |
554 | state_event = await self.store.get_event(state_event_id) | |
555 | sender_name = name_from_member_event(state_event) | |
556 | ||
557 | if sender_name is not None and room_name is not None: | |
558 | return self.email_subjects.message_from_person_in_room % { | |
559 | "person": sender_name, | |
560 | "room": room_name, | |
561 | "app": self.app_name, | |
562 | } | |
563 | elif sender_name is not None: | |
564 | return self.email_subjects.message_from_person % { | |
565 | "person": sender_name, | |
566 | "app": self.app_name, | |
567 | } | |
568 | else: | |
569 | # There's more than one notification for this room, so just | |
570 | # say there are several | |
571 | if room_name is not None: | |
572 | return self.email_subjects.messages_in_room % { | |
573 | "room": room_name, | |
574 | "app": self.app_name, | |
575 | } | |
576 | else: | |
577 | # If the room doesn't have a name, say who the messages | |
578 | # are from explicitly to avoid, "messages in the Bob room" | |
579 | sender_ids = list( | |
580 | { | |
581 | notif_events[n["event_id"]].sender | |
582 | for n in notifs_by_room[room_id] | |
583 | } | |
584 | ) | |
585 | ||
586 | member_events = await self.store.get_events( | |
587 | [ | |
588 | room_state_ids[room_id][("m.room.member", s)] | |
589 | for s in sender_ids | |
590 | ] | |
591 | ) | |
592 | ||
593 | return self.email_subjects.messages_from_person % { | |
594 | "person": descriptor_from_member_events(member_events.values()), | |
595 | "app": self.app_name, | |
596 | } | |
597 | else: | |
598 | # Stuff's happened in multiple different rooms | |
599 | ||
600 | # ...but we still refer to the 'reason' room which triggered the mail | |
601 | if reason["room_name"] is not None: | |
602 | return self.email_subjects.messages_in_room_and_others % { | |
603 | "room": reason["room_name"], | |
604 | "app": self.app_name, | |
605 | } | |
606 | else: | |
607 | # If the reason room doesn't have a name, say who the messages | |
608 | # are from explicitly to avoid, "messages in the Bob room" | |
609 | room_id = reason["room_id"] | |
610 | ||
611 | sender_ids = list( | |
612 | { | |
613 | notif_events[n["event_id"]].sender | |
614 | for n in notifs_by_room[room_id] | |
615 | } | |
616 | ) | |
617 | ||
618 | member_events = await self.store.get_events( | |
619 | [room_state_ids[room_id][("m.room.member", s)] for s in sender_ids] | |
620 | ) | |
621 | ||
622 | return self.email_subjects.messages_from_person_and_others % { | |
623 | "person": descriptor_from_member_events(member_events.values()), | |
624 | "app": self.app_name, | |
625 | } | |
612 | ) -> str: | |
613 | """ | |
614 | Make a summary text for the email when multiple rooms have notifications. | |
615 | ||
616 | Args: | |
617 | notifs_by_room: A map of room ID to the notifications for that room. | |
618 | room_state_ids: A map of room ID to the state map for that room. | |
619 | notif_events: A map of event ID -> notification event. | |
620 | reason: The reason this notification is being sent. | |
621 | ||
622 | Returns: | |
623 | The summary text. | |
624 | """ | |
625 | # Stuff's happened in multiple different rooms | |
626 | # ...but we still refer to the 'reason' room which triggered the mail | |
627 | if reason["room_name"] is not None: | |
628 | return self.email_subjects.messages_in_room_and_others % { | |
629 | "room": reason["room_name"], | |
630 | "app": self.app_name, | |
631 | } | |
632 | ||
633 | room_id = reason["room_id"] | |
634 | return await self.make_summary_text_from_member_events( | |
635 | room_id, notifs_by_room[room_id], room_state_ids[room_id], notif_events | |
636 | ) | |
637 | ||
638 | async def make_summary_text_from_member_events( | |
639 | self, | |
640 | room_id: str, | |
641 | notifs: List[Dict[str, Any]], | |
642 | room_state_ids: StateMap[str], | |
643 | notif_events: Dict[str, EventBase], | |
644 | ) -> str: | |
645 | """ | |
646 | Make a summary text for the email when only a single room has notifications. | |
647 | ||
648 | Args: | |
649 | room_id: The ID of the room. | |
650 | notifs: The notifications for this room. | |
651 | room_state_ids: The state map for the room. | |
652 | notif_events: A map of event ID -> notification event. | |
653 | ||
654 | Returns: | |
655 | The summary text. | |
656 | """ | |
657 | # If the room doesn't have a name, say who the messages | |
658 | # are from explicitly to avoid "messages in the Bob room" | |
659 | sender_ids = {notif_events[n["event_id"]].sender for n in notifs} | |
660 | ||
661 | member_events = await self.store.get_events( | |
662 | [room_state_ids[("m.room.member", s)] for s in sender_ids] | |
663 | ) | |
664 | ||
665 | # There was a single sender. | |
666 | if len(sender_ids) == 1: | |
667 | return self.email_subjects.messages_from_person % { | |
668 | "person": descriptor_from_member_events(member_events.values()), | |
669 | "app": self.app_name, | |
670 | } | |
671 | ||
672 | # There was more than one sender; use the first one and a tweaked template. | |
673 | return self.email_subjects.messages_from_person_and_others % { | |
674 | "person": descriptor_from_member_events(list(member_events.values())[:1]), | |
675 | "app": self.app_name, | |
676 | } | |
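The `email_subjects` templates used throughout `make_summary_text` and friends are plain `%`-style format strings applied to a dict of named fields. A minimal sketch of how one of them expands (the template string here is hypothetical; the real ones come from the email config):

```python
# Hypothetical subject template; the real strings come from the email config.
messages_from_person_and_others = (
    "[%(app)s] You have messages from %(person)s and others"
)

# %-formatting with a dict substitutes each %(name)s placeholder by key.
subject = messages_from_person_and_others % {
    "person": "Alice",
    "app": "Matrix",
}
# subject == "[Matrix] You have messages from Alice and others"
```

Because substitution is keyed by name, the same dict can be reused across templates that consume different subsets of the fields.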
626 | 677 | |
627 | 678 | def make_room_link(self, room_id: str) -> str: |
628 | 679 | if self.hs.config.email_riot_base_url: |
667 | 718 | |
668 | 719 | |
669 | 720 | def safe_markup(raw_html: str) -> jinja2.Markup: |
721 | """ | |
722 | Sanitise a raw HTML string to a set of allowed tags and attributes, and linkify any bare URLs. | |
723 | ||
724 | Args: | |
725 | raw_html: Unsafe HTML. | |
726 | ||
727 | Returns: | |
728 | A Markup object ready to safely use in a Jinja template. | |
729 | """ | |
670 | 730 | return jinja2.Markup( |
671 | 731 | bleach.linkify( |
672 | 732 | bleach.clean( |
683 | 743 | |
684 | 744 | def safe_text(raw_text: str) -> jinja2.Markup: |
685 | 745 | """ |
686 | Process text: treat it as HTML but escape any tags (ie. just escape the | |
687 | HTML) then linkify it. | |
746 | Sanitise text (escape any HTML tags), and then linkify any bare URLs. | |
747 | ||
748 | Args: | |
749 | raw_text: Unsafe text which might include HTML markup. | |
750 | ||
751 | Returns: | |
752 | A Markup object ready to safely use in a Jinja template. | |
688 | 753 | """ |
689 | 754 | return jinja2.Markup( |
690 | 755 | bleach.linkify(bleach.clean(raw_text, tags=[], attributes={}, strip=False)) |
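`safe_text` escapes all markup with bleach and then linkifies bare URLs. As a rough stdlib-only illustration of the same escape-then-linkify idea (a naive regex linkifier standing in for `bleach.linkify`, not bleach's actual behaviour):

```python
import html
import re


def naive_safe_text(raw_text: str) -> str:
    # Escape all HTML first (the bleach.clean(..., tags=[]) step), then
    # wrap bare URLs in anchors (a crude stand-in for bleach.linkify).
    escaped = html.escape(raw_text)
    return re.sub(r"(https?://[^\s<]+)", r'<a href="\1">\1</a>', escaped)


out = naive_safe_text("<b>hi</b> see https://example.com")
# out == '&lt;b&gt;hi&lt;/b&gt; see <a href="https://example.com">https://example.com</a>'
```

Escaping before linkifying is the important ordering: the anchor tags are generated after all user-supplied markup has been neutralised, so they cannot be influenced by the input.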
16 | 16 | import re |
17 | 17 | from typing import TYPE_CHECKING, Dict, Iterable, Optional |
18 | 18 | |
19 | from synapse.api.constants import EventTypes | |
19 | from synapse.api.constants import EventTypes, Membership | |
20 | 20 | from synapse.events import EventBase |
21 | 21 | from synapse.types import StateMap |
22 | 22 | |
62 | 62 | m_room_name = await store.get_event( |
63 | 63 | room_state_ids[(EventTypes.Name, "")], allow_none=True |
64 | 64 | ) |
65 | if m_room_name and m_room_name.content and m_room_name.content["name"]: | |
65 | if m_room_name and m_room_name.content and m_room_name.content.get("name"): | |
66 | 66 | return m_room_name.content["name"] |
67 | 67 | |
68 | 68 | # does it have a canonical alias? |
73 | 73 | if ( |
74 | 74 | canon_alias |
75 | 75 | and canon_alias.content |
76 | and canon_alias.content["alias"] | |
76 | and canon_alias.content.get("alias") | |
77 | 77 | and _looks_like_an_alias(canon_alias.content["alias"]) |
78 | 78 | ): |
79 | 79 | return canon_alias.content["alias"] |
80 | ||
81 | # at this point we're going to need to search the state by all state keys | |
82 | # for an event type, so rearrange the data structure | |
83 | room_state_bytype_ids = _state_as_two_level_dict(room_state_ids) | |
84 | 80 | |
85 | 81 | if not fallback_to_members: |
86 | 82 | return None |
93 | 89 | |
94 | 90 | if ( |
95 | 91 | my_member_event is not None |
96 | and my_member_event.content["membership"] == "invite" | |
92 | and my_member_event.content.get("membership") == Membership.INVITE | |
97 | 93 | ): |
98 | 94 | if (EventTypes.Member, my_member_event.sender) in room_state_ids: |
99 | 95 | inviter_member_event = await store.get_event( |
110 | 106 | else: |
111 | 107 | return "Room Invite" |
112 | 108 | |
109 | # at this point we're going to need to search the state by all state keys | |
110 | # for an event type, so rearrange the data structure | |
111 | room_state_bytype_ids = _state_as_two_level_dict(room_state_ids) | |
112 | ||
113 | 113 | # we're going to have to generate a name based on who's in the room, |
114 | 114 | # so find out who is in the room that isn't the user. |
115 | 115 | if EventTypes.Member in room_state_bytype_ids: |
119 | 119 | all_members = [ |
120 | 120 | ev |
121 | 121 | for ev in member_events.values() |
122 | if ev.content["membership"] == "join" | |
123 | or ev.content["membership"] == "invite" | |
122 | if ev.content.get("membership") == Membership.JOIN | |
123 | or ev.content.get("membership") == Membership.INVITE | |
124 | 124 | ] |
125 | 125 | # Sort the member events oldest-first so that we name people in the
126 | 126 | # order they joined (it should at least be deterministic rather than
193 | 193 | |
194 | 194 | |
195 | 195 | def name_from_member_event(member_event: EventBase) -> str: |
196 | if ( | |
197 | member_event.content | |
198 | and "displayname" in member_event.content | |
199 | and member_event.content["displayname"] | |
200 | ): | |
196 | if member_event.content and member_event.content.get("displayname"): | |
201 | 197 | return member_event.content["displayname"] |
202 | 198 | return member_event.state_key |
203 | 199 |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import logging | |
16 | from typing import TYPE_CHECKING, Any, Optional | |
17 | ||
18 | from prometheus_client import Counter | |
19 | ||
20 | from synapse.logging.context import make_deferred_yieldable | |
21 | from synapse.util import json_decoder, json_encoder | |
22 | ||
23 | if TYPE_CHECKING: | |
24 | from synapse.server import HomeServer | |
25 | ||
26 | set_counter = Counter( | |
27 | "synapse_external_cache_set", | |
28 | "Number of times we set a cache", | |
29 | labelnames=["cache_name"], | |
30 | ) | |
31 | ||
32 | get_counter = Counter( | |
33 | "synapse_external_cache_get", | |
34 | "Number of times we get a cache", | |
35 | labelnames=["cache_name", "hit"], | |
36 | ) | |
37 | ||
38 | ||
39 | logger = logging.getLogger(__name__) | |
40 | ||
41 | ||
42 | class ExternalCache: | |
43 | """A cache backed by an external Redis. Does nothing if no Redis is | |
44 | configured. | |
45 | """ | |
46 | ||
47 | def __init__(self, hs: "HomeServer"): | |
48 | self._redis_connection = hs.get_outbound_redis_connection() | |
49 | ||
50 | def _get_redis_key(self, cache_name: str, key: str) -> str: | |
51 | return "cache_v1:%s:%s" % (cache_name, key) | |
52 | ||
53 | def is_enabled(self) -> bool: | |
54 | """Whether the external cache is used or not. | |
55 | ||
56 | It's safe to use the cache when this returns False; the methods will | |
57 | just no-op. The function is useful to avoid doing unnecessary work. | |
58 | """ | |
59 | return self._redis_connection is not None | |
60 | ||
61 | async def set(self, cache_name: str, key: str, value: Any, expiry_ms: int) -> None: | |
62 | """Add the key/value to the named cache, with the expiry time given. | |
63 | """ | |
64 | ||
65 | if self._redis_connection is None: | |
66 | return | |
67 | ||
68 | set_counter.labels(cache_name).inc() | |
69 | ||
70 | # txredisapi requires the value to be a string, bytes, or a number, so we | |
71 | # encode stuff in JSON. | |
72 | encoded_value = json_encoder.encode(value) | |
73 | ||
74 | logger.debug("Caching %s %s: %r", cache_name, key, encoded_value) | |
75 | ||
76 | return await make_deferred_yieldable( | |
77 | self._redis_connection.set( | |
78 | self._get_redis_key(cache_name, key), encoded_value, pexpire=expiry_ms, | |
79 | ) | |
80 | ) | |
81 | ||
82 | async def get(self, cache_name: str, key: str) -> Optional[Any]: | |
83 | """Look up a key/value in the named cache. | |
84 | """ | |
85 | ||
86 | if self._redis_connection is None: | |
87 | return None | |
88 | ||
89 | result = await make_deferred_yieldable( | |
90 | self._redis_connection.get(self._get_redis_key(cache_name, key)) | |
91 | ) | |
92 | ||
93 | logger.debug("Got cache result %s %s: %r", cache_name, key, result) | |
94 | ||
95 | get_counter.labels(cache_name, result is not None).inc() | |
96 | ||
97 | if not result: | |
98 | return None | |
99 | ||
100 | # txredisapi magically converts numeric results back to integers | |
101 | if isinstance(result, int): | |
102 | return result | |
103 | ||
104 | return json_decoder.decode(result) |
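Because txredisapi is created with `convertNumbers=True`, a JSON-encoded integer can come back from Redis as an `int` rather than a string, which is why `get` short-circuits on `isinstance(result, int)` before JSON-decoding. A stdlib sketch of that decode path (hypothetical helper, with `json` standing in for Synapse's `json_decoder`):

```python
import json


def decode_cache_result(result):
    # Mirrors ExternalCache.get: a missing key yields None; a value that
    # txredisapi already converted to an int needs no further decoding;
    # anything else is the JSON string we stored and must be decoded.
    if not result:
        return None
    if isinstance(result, int):
        return result
    return json.loads(result)


assert decode_cache_result(None) is None
assert decode_cache_result(42) == 42
assert decode_cache_result(json.dumps({"a": 1})) == {"a": 1}
```

Skipping the decode for ints is safe here because the only values that Redis can hand back as ints are ones that were JSON-encoded integers in the first place.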
14 | 14 | # limitations under the License. |
15 | 15 | import logging |
16 | 16 | from typing import ( |
17 | TYPE_CHECKING, | |
17 | 18 | Any, |
18 | 19 | Awaitable, |
19 | 20 | Dict, |
62 | 63 | TypingStream, |
63 | 64 | ) |
64 | 65 | |
66 | if TYPE_CHECKING: | |
67 | from synapse.server import HomeServer | |
68 | ||
65 | 69 | logger = logging.getLogger(__name__) |
66 | 70 | |
67 | 71 | |
87 | 91 | back out to connections. |
88 | 92 | """ |
89 | 93 | |
90 | def __init__(self, hs): | |
94 | def __init__(self, hs: "HomeServer"): | |
91 | 95 | self._replication_data_handler = hs.get_replication_data_handler() |
92 | 96 | self._presence_handler = hs.get_presence_handler() |
93 | 97 | self._store = hs.get_datastore() |
281 | 285 | if hs.config.redis.redis_enabled: |
282 | 286 | from synapse.replication.tcp.redis import ( |
283 | 287 | RedisDirectTcpReplicationClientFactory, |
284 | lazyConnection, | |
285 | ) | |
286 | ||
287 | logger.info( | |
288 | "Connecting to redis (host=%r port=%r)", | |
289 | hs.config.redis_host, | |
290 | hs.config.redis_port, | |
291 | 288 | ) |
292 | 289 | |
293 | 290 | # First let's ensure that we have a ReplicationStreamer started. |
298 | 295 | # connection after SUBSCRIBE is called). |
299 | 296 | |
300 | 297 | # First create the connection for sending commands. |
301 | outbound_redis_connection = lazyConnection( | |
302 | reactor=hs.get_reactor(), | |
303 | host=hs.config.redis_host, | |
304 | port=hs.config.redis_port, | |
305 | password=hs.config.redis.redis_password, | |
306 | reconnect=True, | |
307 | ) | |
298 | outbound_redis_connection = hs.get_outbound_redis_connection() | |
308 | 299 | |
309 | 300 | # Now create the factory/connection for the subscription stream. |
310 | 301 | self._factory = RedisDirectTcpReplicationClientFactory( |
14 | 14 | |
15 | 15 | import logging |
16 | 16 | from inspect import isawaitable |
17 | from typing import TYPE_CHECKING, Optional | |
17 | from typing import TYPE_CHECKING, Optional, Type, cast | |
18 | 18 | |
19 | 19 | import txredisapi |
20 | 20 | |
22 | 22 | from synapse.metrics.background_process_metrics import ( |
23 | 23 | BackgroundProcessLoggingContext, |
24 | 24 | run_as_background_process, |
25 | wrap_as_background_process, | |
25 | 26 | ) |
26 | 27 | from synapse.replication.tcp.commands import ( |
27 | 28 | Command, |
58 | 59 | immediately after initialisation. |
59 | 60 | |
60 | 61 | Attributes: |
61 | handler: The command handler to handle incoming commands. | |
62 | stream_name: The *redis* stream name to subscribe to and publish from | |
63 | (not anything to do with Synapse replication streams). | |
64 | outbound_redis_connection: The connection to redis to use to send | |
62 | synapse_handler: The command handler to handle incoming commands. | |
63 | synapse_stream_name: The *redis* stream name to subscribe to and publish | |
64 | from (not anything to do with Synapse replication streams). | |
65 | synapse_outbound_redis_connection: The connection to redis to use to send | |
65 | 66 | commands. |
66 | 67 | """ |
67 | 68 | |
68 | handler = None # type: ReplicationCommandHandler | |
69 | stream_name = None # type: str | |
70 | outbound_redis_connection = None # type: txredisapi.RedisProtocol | |
69 | synapse_handler = None # type: ReplicationCommandHandler | |
70 | synapse_stream_name = None # type: str | |
71 | synapse_outbound_redis_connection = None # type: txredisapi.RedisProtocol | |
71 | 72 | |
72 | 73 | def __init__(self, *args, **kwargs): |
73 | 74 | super().__init__(*args, **kwargs) |
87 | 88 | # it's important to make sure that we only send the REPLICATE command once we |
88 | 89 | # have successfully subscribed to the stream - otherwise we might miss the |
89 | 90 | # POSITION response sent back by the other end. |
90 | logger.info("Sending redis SUBSCRIBE for %s", self.stream_name) | |
91 | await make_deferred_yieldable(self.subscribe(self.stream_name)) | |
91 | logger.info("Sending redis SUBSCRIBE for %s", self.synapse_stream_name) | |
92 | await make_deferred_yieldable(self.subscribe(self.synapse_stream_name)) | |
92 | 93 | logger.info( |
93 | 94 | "Successfully subscribed to redis stream, sending REPLICATE command" |
94 | 95 | ) |
95 | self.handler.new_connection(self) | |
96 | self.synapse_handler.new_connection(self) | |
96 | 97 | await self._async_send_command(ReplicateCommand()) |
97 | 98 | logger.info("REPLICATE successfully sent") |
98 | 99 | |
99 | 100 | # We send out our positions when there is a new connection in case the |
100 | 101 | # other side missed updates. We do this for Redis connections as the |
101 | 102 | # other side won't know we've connected and so won't issue a REPLICATE.
102 | self.handler.send_positions_to_connection(self) | |
103 | self.synapse_handler.send_positions_to_connection(self) | |
103 | 104 | |
104 | 105 | def messageReceived(self, pattern: str, channel: str, message: str): |
105 | 106 | """Received a message from redis. |
136 | 137 | cmd: received command |
137 | 138 | """ |
138 | 139 | |
139 | cmd_func = getattr(self.handler, "on_%s" % (cmd.NAME,), None) | |
140 | cmd_func = getattr(self.synapse_handler, "on_%s" % (cmd.NAME,), None) | |
140 | 141 | if not cmd_func: |
141 | 142 | logger.warning("Unhandled command: %r", cmd) |
142 | 143 | return |
154 | 155 | def connectionLost(self, reason): |
155 | 156 | logger.info("Lost connection to redis") |
156 | 157 | super().connectionLost(reason) |
157 | self.handler.lost_connection(self) | |
158 | self.synapse_handler.lost_connection(self) | |
158 | 159 | |
159 | 160 | # mark the logging context as finished |
160 | 161 | self._logging_context.__exit__(None, None, None) |
182 | 183 | tcp_outbound_commands_counter.labels(cmd.NAME, "redis").inc() |
183 | 184 | |
184 | 185 | await make_deferred_yieldable( |
185 | self.outbound_redis_connection.publish(self.stream_name, encoded_string) | |
186 | ) | |
187 | ||
188 | ||
189 | class RedisDirectTcpReplicationClientFactory(txredisapi.SubscriberFactory): | |
186 | self.synapse_outbound_redis_connection.publish( | |
187 | self.synapse_stream_name, encoded_string | |
188 | ) | |
189 | ) | |
190 | ||
191 | ||
192 | class SynapseRedisFactory(txredisapi.RedisFactory): | |
193 | """A subclass of RedisFactory that periodically sends pings to ensure that | |
194 | we detect dead connections. | |
195 | """ | |
196 | ||
197 | def __init__( | |
198 | self, | |
199 | hs: "HomeServer", | |
200 | uuid: str, | |
201 | dbid: Optional[int], | |
202 | poolsize: int, | |
203 | isLazy: bool = False, | |
204 | handler: Type = txredisapi.ConnectionHandler, | |
205 | charset: str = "utf-8", | |
206 | password: Optional[str] = None, | |
207 | replyTimeout: int = 30, | |
208 | convertNumbers: Optional[int] = True, | |
209 | ): | |
210 | super().__init__( | |
211 | uuid=uuid, | |
212 | dbid=dbid, | |
213 | poolsize=poolsize, | |
214 | isLazy=isLazy, | |
215 | handler=handler, | |
216 | charset=charset, | |
217 | password=password, | |
218 | replyTimeout=replyTimeout, | |
219 | convertNumbers=convertNumbers, | |
220 | ) | |
221 | ||
222 | hs.get_clock().looping_call(self._send_ping, 30 * 1000) | |
223 | ||
224 | @wrap_as_background_process("redis_ping") | |
225 | async def _send_ping(self): | |
226 | for connection in self.pool: | |
227 | try: | |
228 | await make_deferred_yieldable(connection.ping()) | |
229 | except Exception: | |
230 | logger.warning("Failed to send ping to a redis connection") | |
231 | ||
232 | ||
233 | class RedisDirectTcpReplicationClientFactory(SynapseRedisFactory): | |
190 | 234 | """This is a reconnecting factory that connects to redis and immediately |
191 | 235 | subscribes to a stream. |
192 | 236 | |
205 | 249 | self, hs: "HomeServer", outbound_redis_connection: txredisapi.RedisProtocol |
206 | 250 | ): |
207 | 251 | |
208 | super().__init__() | |
209 | ||
210 | # This sets the password on the RedisFactory base class (as | |
211 | # SubscriberFactory constructor doesn't pass it through). | |
212 | self.password = hs.config.redis.redis_password | |
213 | ||
214 | self.handler = hs.get_tcp_replication() | |
215 | self.stream_name = hs.hostname | |
216 | ||
217 | self.outbound_redis_connection = outbound_redis_connection | |
252 | super().__init__( | |
253 | hs, | |
254 | uuid="subscriber", | |
255 | dbid=None, | |
256 | poolsize=1, | |
257 | replyTimeout=30, | |
258 | password=hs.config.redis.redis_password, | |
259 | ) | |
260 | ||
261 | self.synapse_handler = hs.get_tcp_replication() | |
262 | self.synapse_stream_name = hs.hostname | |
263 | ||
264 | self.synapse_outbound_redis_connection = outbound_redis_connection | |
218 | 265 | |
219 | 266 | def buildProtocol(self, addr): |
220 | p = super().buildProtocol(addr) # type: RedisSubscriber | |
267 | p = super().buildProtocol(addr) | |
268 | p = cast(RedisSubscriber, p) | |
221 | 269 | |
222 | 270 | # We do this here rather than add it to the constructor of `RedisSubscriber`
223 | 271 | # as to do so would involve overriding `buildProtocol` entirely, however |
224 | 272 | # the base method does some other things than just instantiating the |
225 | 273 | # protocol. |
226 | p.handler = self.handler | |
227 | p.outbound_redis_connection = self.outbound_redis_connection | |
228 | p.stream_name = self.stream_name | |
229 | p.password = self.password | |
274 | p.synapse_handler = self.synapse_handler | |
275 | p.synapse_outbound_redis_connection = self.synapse_outbound_redis_connection | |
276 | p.synapse_stream_name = self.synapse_stream_name | |
230 | 277 | |
231 | 278 | return p |
232 | 279 | |
233 | 280 | |
234 | 281 | def lazyConnection( |
235 | reactor, | |
282 | hs: "HomeServer", | |
236 | 283 | host: str = "localhost", |
237 | 284 | port: int = 6379, |
238 | 285 | dbid: Optional[int] = None, |
239 | 286 | reconnect: bool = True, |
240 | charset: str = "utf-8", | |
241 | 287 | password: Optional[str] = None, |
242 | connectTimeout: Optional[int] = None, | |
243 | replyTimeout: Optional[int] = None, | |
244 | convertNumbers: bool = True, | |
288 | replyTimeout: int = 30, | |
245 | 289 | ) -> txredisapi.RedisProtocol: |
246 | """Equivalent to `txredisapi.lazyConnection`, except allows specifying a | |
247 | reactor. | |
290 | """Creates a connection to Redis that is lazily set up and reconnects if the | |
291 | connection is lost. | |
248 | 292 | """ |
249 | 293 | |
250 | isLazy = True | |
251 | poolsize = 1 | |
252 | ||
253 | 294 | uuid = "%s:%d" % (host, port) |
254 | factory = txredisapi.RedisFactory( | |
255 | uuid, | |
256 | dbid, | |
257 | poolsize, | |
258 | isLazy, | |
259 | txredisapi.ConnectionHandler, | |
260 | charset, | |
261 | password, | |
262 | replyTimeout, | |
263 | convertNumbers, | |
295 | factory = SynapseRedisFactory( | |
296 | hs, | |
297 | uuid=uuid, | |
298 | dbid=dbid, | |
299 | poolsize=1, | |
300 | isLazy=True, | |
301 | handler=txredisapi.ConnectionHandler, | |
302 | password=password, | |
303 | replyTimeout=replyTimeout, | |
264 | 304 | ) |
265 | 305 | factory.continueTrying = reconnect |
266 | for x in range(poolsize): | |
267 | reactor.connectTCP(host, port, factory, connectTimeout) | |
306 | ||
307 | reactor = hs.get_reactor() | |
308 | reactor.connectTCP(host, port, factory, 30) | |
268 | 309 | |
269 | 310 | return factory.handler |
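The `_send_ping` loop in `SynapseRedisFactory` treats a failed ping as a warning rather than an error, so one dead connection cannot abort the keepalive sweep over the rest of the pool. A minimal asyncio sketch of the same pattern (fake ping coroutines standing in for txredisapi connections):

```python
import asyncio


async def send_pings(pings):
    # Ping each pooled connection in turn; a failure is recorded and
    # skipped rather than propagating, as in _send_ping above.
    statuses = []
    for ping in pings:
        try:
            await ping()
            statuses.append("ok")
        except Exception:
            statuses.append("failed")
    return statuses


async def alive():
    pass


async def dead():
    raise ConnectionError("dead connection")


statuses = asyncio.run(send_pings([alive, dead, alive]))
# statuses == ["ok", "failed", "ok"]
```

Running the sweep as a background process (via `wrap_as_background_process` in the real code) keeps the periodic pings off the request path while still detecting connections that have silently died.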
0 | body { | |
1 | font-family: "Inter", "Helvetica", "Arial", sans-serif; | |
2 | font-size: 14px; | |
3 | color: #17191C; | |
4 | } | |
5 | ||
6 | header { | |
7 | max-width: 480px; | |
8 | width: 100%; | |
9 | margin: 24px auto; | |
10 | text-align: center; | |
11 | } | |
12 | ||
13 | header p { | |
14 | color: #737D8C; | |
15 | line-height: 24px; | |
16 | } | |
17 | ||
18 | h1 { | |
19 | font-size: 24px; | |
20 | } | |
21 | ||
22 | .error_page h1 { | |
23 | color: #FE2928; | |
24 | } | |
25 | ||
26 | h2 { | |
27 | font-size: 14px; | |
28 | } | |
29 | ||
30 | h2 img { | |
31 | vertical-align: middle; | |
32 | margin-right: 8px; | |
33 | width: 24px; | |
34 | height: 24px; | |
35 | } | |
36 | ||
37 | label { | |
38 | cursor: pointer; | |
39 | } | |
40 | ||
41 | main { | |
42 | max-width: 360px; | |
43 | width: 100%; | |
44 | margin: 24px auto; | |
45 | } | |
46 | ||
47 | .primary-button { | |
48 | border: none; | |
49 | text-decoration: none; | |
50 | padding: 12px; | |
51 | color: white; | |
52 | background-color: #418DED; | |
53 | font-weight: bold; | |
54 | display: block; | |
55 | border-radius: 12px; | |
56 | width: 100%; | |
57 | box-sizing: border-box; | |
58 | margin: 16px 0; | |
59 | cursor: pointer; | |
60 | text-align: center; | |
61 | } | |
62 | ||
63 | .profile { | |
64 | display: flex; | |
65 | justify-content: center; | |
66 | margin: 24px 0; | |
67 | } | |
68 | ||
69 | .profile .avatar { | |
70 | width: 36px; | |
71 | height: 36px; | |
72 | border-radius: 100%; | |
73 | display: block; | |
74 | margin-right: 8px; | |
75 | } | |
76 | ||
77 | .profile .display-name { | |
78 | font-weight: bold; | |
79 | margin-bottom: 4px; | |
80 | } | |
81 | .profile .user-id { | |
82 | color: #737D8C; | |
83 | } | |
84 | ||
85 | .profile .display-name, .profile .user-id { | |
86 | line-height: 18px; | |
87 | } |
0 | 0 | <!DOCTYPE html> |
1 | 1 | <html lang="en"> |
2 | <head> | |
3 | <meta charset="UTF-8"> | |
4 | <title>SSO account deactivated</title> | |
5 | </head> | |
6 | <body> | |
7 | <p>This account has been deactivated.</p> | |
2 | <head> | |
3 | <meta charset="UTF-8"> | |
4 | <title>SSO account deactivated</title> | |
5 | <meta name="viewport" content="width=device-width, user-scalable=no"> | |
6 | <style type="text/css"> | |
7 | {% include "sso.css" without context %} | |
8 | </style> | |
9 | </head> | |
10 | <body class="error_page"> | |
11 | <header> | |
12 | <h1>Your account has been deactivated</h1> | |
13 | <p> | |
14 | <strong>No account found</strong> | |
15 | </p> | |
16 | <p> | |
17 | Your account might have been deactivated by the server administrator. | |
18 | You can either try to create a new account or contact the server’s | |
19 | administrator. | |
20 | </p> | |
21 | </header> | |
8 | 22 | </body> |
9 | 23 | </html> |
0 | <!DOCTYPE html> | |
1 | <html lang="en"> | |
2 | <head> | |
3 | <title>Synapse Login</title> | |
4 | <meta charset="utf-8"> | |
5 | <meta name="viewport" content="width=device-width, user-scalable=no"> | |
6 | <style type="text/css"> | |
7 | {% include "sso.css" without context %} | |
8 | ||
9 | .username_input { | |
10 | display: flex; | |
11 | border: 2px solid #418DED; | |
12 | border-radius: 8px; | |
13 | padding: 12px; | |
14 | position: relative; | |
15 | margin: 16px 0; | |
16 | align-items: center; | |
17 | font-size: 12px; | |
18 | } | |
19 | ||
20 | .username_input.invalid { | |
21 | border-color: #FE2928; | |
22 | } | |
23 | ||
24 | .username_input.invalid input, .username_input.invalid label { | |
25 | color: #FE2928; | |
26 | } | |
27 | ||
28 | .username_input div, .username_input input { | |
29 | line-height: 18px; | |
30 | font-size: 14px; | |
31 | } | |
32 | ||
33 | .username_input label { | |
34 | position: absolute; | |
35 | top: -8px; | |
36 | left: 14px; | |
37 | font-size: 80%; | |
38 | background: white; | |
39 | padding: 2px; | |
40 | } | |
41 | ||
42 | .username_input input { | |
43 | flex: 1; | |
44 | display: block; | |
45 | min-width: 0; | |
46 | border: none; | |
47 | } | |
48 | ||
49 | .username_input div { | |
50 | color: #8D99A5; | |
51 | } | |
52 | ||
53 | .idp-pick-details { | |
54 | border: 1px solid #E9ECF1; | |
55 | border-radius: 8px; | |
56 | margin: 24px 0; | |
57 | } | |
58 | ||
59 | .idp-pick-details h2 { | |
60 | margin: 0; | |
61 | padding: 8px 12px; | |
62 | } | |
63 | ||
64 | .idp-pick-details .idp-detail { | |
65 | border-top: 1px solid #E9ECF1; | |
66 | padding: 12px; | |
67 | } | |
68 | .idp-pick-details .check-row { | |
69 | display: flex; | |
70 | align-items: center; | |
71 | } | |
72 | ||
73 | .idp-pick-details .check-row .name { | |
74 | flex: 1; | |
75 | } | |
76 | ||
77 | .idp-pick-details .use, .idp-pick-details .idp-value { | |
78 | color: #737D8C; | |
79 | } | |
80 | ||
81 | .idp-pick-details .idp-value { | |
82 | margin: 0; | |
83 | margin-top: 8px; | |
84 | } | |
85 | ||
86 | .idp-pick-details .avatar { | |
87 | width: 53px; | |
88 | height: 53px; | |
89 | border-radius: 100%; | |
90 | display: block; | |
91 | margin-top: 8px; | |
92 | } | |
93 | ||
94 | output { | |
95 | padding: 0 14px; | |
96 | display: block; | |
97 | } | |
98 | ||
99 | output.error { | |
100 | color: #FE2928; | |
101 | } | |
102 | </style> | |
103 | </head> | |
104 | <body> | |
105 | <header> | |
106 | <h1>Your account is nearly ready</h1> | |
107 | <p>Check your details before creating an account on {{ server_name }}</p> | |
108 | </header> | |
109 | <main> | |
110 | <form method="post" class="form__input" id="form"> | |
111 | <div class="username_input" id="username_input"> | |
112 | <label for="field-username">Username</label> | |
113 | <div class="prefix">@</div> | |
114 | <input type="text" name="username" id="field-username" autofocus> | |
115 | <div class="postfix">:{{ server_name }}</div> | |
116 | </div> | |
117 | <output for="username_input" id="field-username-output"></output> | |
118 | <input type="submit" value="Continue" class="primary-button"> | |
119 | {% if user_attributes %} | |
120 | <section class="idp-pick-details"> | |
121 | <h2><img src="{{ idp.idp_icon | mxc_to_http(24, 24) }}"/>Information from {{ idp.idp_name }}</h2> | |
122 | {% if user_attributes.avatar_url %} | |
123 | <div class="idp-detail idp-avatar"> | |
124 | <div class="check-row"> | |
125 | <label for="idp-avatar" class="name">Avatar</label> | |
126 | <label for="idp-avatar" class="use">Use</label> | |
127 | <input type="checkbox" name="use_avatar" id="idp-avatar" value="true" checked> | |
128 | </div> | |
129 | <img src="{{ user_attributes.avatar_url }}" class="avatar" /> | |
130 | </div> | |
131 | {% endif %} | |
132 | {% if user_attributes.display_name %} | |
133 | <div class="idp-detail"> | |
134 | <div class="check-row"> | |
135 | <label for="idp-displayname" class="name">Display name</label> | |
136 | <label for="idp-displayname" class="use">Use</label> | |
137 | <input type="checkbox" name="use_display_name" id="idp-displayname" value="true" checked> | |
138 | </div> | |
139 | <p class="idp-value">{{ user_attributes.display_name }}</p> | |
140 | </div> | |
141 | {% endif %} | |
142 | {% for email in user_attributes.emails %} | |
143 | <div class="idp-detail"> | |
144 | <div class="check-row"> | |
145 | <label for="idp-email{{ loop.index }}" class="name">E-mail</label> | |
146 | <label for="idp-email{{ loop.index }}" class="use">Use</label> | |
147 | <input type="checkbox" name="use_email" id="idp-email{{ loop.index }}" value="{{ email }}" checked> | |
148 | </div> | |
149 | <p class="idp-value">{{ email }}</p> | |
150 | </div> | |
151 | {% endfor %} | |
152 | </section> | |
153 | {% endif %} | |
154 | </form> | |
155 | </main> | |
156 | <script type="text/javascript"> | |
157 | {% include "sso_auth_account_details.js" without context %} | |
158 | </script> | |
159 | </body> | |
160 | </html> |
0 | const usernameField = document.getElementById("field-username"); | |
1 | const usernameOutput = document.getElementById("field-username-output"); | |
2 | const form = document.getElementById("form"); | |
3 | ||
4 | // tracks whether the value still needs validating, so the change event can re-validate when no input event has fired |
5 | let needsValidation = true; | |
6 | let isValid = false; | |
7 | ||
8 | function throttle(fn, wait) { | |
9 | let timeout; | |
10 | const throttleFn = function() { | |
11 | const args = Array.from(arguments); | |
12 | if (timeout) { | |
13 | clearTimeout(timeout); | |
14 | } | |
15 | timeout = setTimeout(fn.bind.apply(fn, [null].concat(args)), wait); | |
16 | }; | |
17 | throttleFn.cancelQueued = function() { | |
18 | clearTimeout(timeout); | |
19 | }; | |
20 | return throttleFn; | |
21 | } | |
22 | ||
23 | function checkUsernameAvailable(username) { | |
24 | let check_uri = 'check?username=' + encodeURIComponent(username); | |
25 | return fetch(check_uri, { | |
26 | // include the cookie | |
27 | "credentials": "same-origin", | |
28 | }).then(function(response) { | |
29 | if(!response.ok) { | |
30 | // for non-200 responses, raise the body of the response as an exception | |
31 | return response.text().then((text) => { throw new Error(text); }); | |
32 | } else { | |
33 | return response.json(); | |
34 | } | |
35 | }).then(function(json) { | |
36 | if(json.error) { | |
37 | return {message: json.error}; | |
38 | } else if(json.available) { | |
39 | return {available: true}; | |
40 | } else { | |
41 | return {message: username + " is not available, please choose another."}; | |
42 | } | |
43 | }); | |
44 | } | |
45 | ||
46 | const allowedUsernameCharacters = new RegExp("^[a-z0-9\\.\\_\\-\\/\\=]+$"); | |
47 | const allowedCharactersString = "lowercase letters, digits, ., _, -, /, ="; | |
48 | ||
49 | function reportError(error) { | |
50 | throttledCheckUsernameAvailable.cancelQueued(); | |
51 | usernameOutput.innerText = error; | |
52 | usernameOutput.classList.add("error"); | |
53 | usernameField.parentElement.classList.add("invalid"); | |
54 | usernameField.focus(); | |
55 | } | |
56 | ||
57 | function validateUsername(username) { | |
58 | isValid = false; | |
59 | needsValidation = false; | |
60 | usernameOutput.innerText = ""; | |
61 | usernameField.parentElement.classList.remove("invalid"); | |
62 | usernameOutput.classList.remove("error"); | |
63 | if (!username) { | |
64 | return reportError("Please provide a username"); | |
65 | } | |
66 | if (username.length > 255) { | |
67 | return reportError("Too long, please choose something shorter"); | |
68 | } | |
69 | if (!allowedUsernameCharacters.test(username)) { | |
70 | return reportError("Invalid username, please only use " + allowedCharactersString); | |
71 | } | |
72 | usernameOutput.innerText = "Checking if username is available …"; | |
73 | throttledCheckUsernameAvailable(username); | |
74 | } | |
75 | ||
76 | const throttledCheckUsernameAvailable = throttle(function(username) { | |
77 | const handleError = function(err) { | |
78 | // don't prevent form submission on error | |
79 | usernameOutput.innerText = ""; | |
80 | isValid = true; | |
81 | }; | |
82 | try { | |
83 | checkUsernameAvailable(username).then(function(result) { | |
84 | if (!result.available) { | |
85 | reportError(result.message); | |
86 | } else { | |
87 | isValid = true; | |
88 | usernameOutput.innerText = ""; | |
89 | } | |
90 | }, handleError); | |
91 | } catch (err) { | |
92 | handleError(err); | |
93 | } | |
94 | }, 500); | |
95 | ||
96 | form.addEventListener("submit", function(evt) { | |
97 | if (needsValidation) { | |
98 | validateUsername(usernameField.value); | |
99 | evt.preventDefault(); | |
100 | return; | |
101 | } | |
102 | if (!isValid) { | |
103 | evt.preventDefault(); | |
104 | usernameField.focus(); | |
105 | return; | |
106 | } | |
107 | }); | |
108 | usernameField.addEventListener("input", function(evt) { | |
109 | validateUsername(usernameField.value); | |
110 | }); | |
111 | usernameField.addEventListener("change", function(evt) { | |
112 | if (needsValidation) { | |
113 | validateUsername(usernameField.value); | |
114 | } | |
115 | }); |
0 | <html> | |
1 | <head> | |
2 | <title>Authentication Failed</title> | |
3 | </head> | |
4 | <body> | |
5 | <div> | |
0 | <!DOCTYPE html> | |
1 | <html lang="en"> | |
2 | <head> | |
3 | <meta charset="UTF-8"> | |
4 | <title>Authentication failed</title> | |
5 | <meta name="viewport" content="width=device-width, user-scalable=no"> | |
6 | <style type="text/css"> | |
7 | {% include "sso.css" without context %} | |
8 | </style> | |
9 | </head> | |
10 | <body class="error_page"> | |
11 | <header> | |
12 | <h1>That doesn't look right</h1> | |
6 | 13 | <p> |
7 | We were unable to validate your <tt>{{server_name | e}}</tt> account via | |
8 | single-sign-on (SSO), because the SSO Identity Provider returned | |
9 | different details than when you logged in. | |
14 | <strong>We were unable to validate your {{ server_name }} account</strong> | |
15 | via single sign‑on (SSO), because the SSO Identity | |
16 | Provider returned different details than when you logged in. | |
10 | 17 | </p> |
11 | 18 | <p> |
12 | 19 | Try the operation again, and ensure that you use the same details on |
13 | 20 | the Identity Provider as when you log into your account. |
14 | 21 | </p> |
15 | </div> | |
22 | </header> | |
16 | 23 | </body> |
17 | 24 | </html> |
0 | <html> | |
1 | <head> | |
2 | <title>Authentication</title> | |
3 | </head> | |
0 | <!DOCTYPE html> | |
1 | <html lang="en"> | |
2 | <head> | |
3 | <meta charset="UTF-8"> | |
4 | <title>Authentication</title> | |
5 | <meta name="viewport" content="width=device-width, user-scalable=no"> | |
6 | <style type="text/css"> | |
7 | {% include "sso.css" without context %} | |
8 | </style> | |
9 | </head> | |
4 | 10 | <body> |
5 | <div> | |
11 | <header> | |
12 | <h1>Confirm it's you to continue</h1> | |
6 | 13 | <p> |
7 | A client is trying to {{ description | e }}. To confirm this action, | |
8 | <a href="{{ redirect_url | e }}">re-authenticate with single sign-on</a>. | |
9 | If you did not expect this, your account may be compromised! | |
14 | A client is trying to {{ description }}. To confirm this action | |
15 | re-authorize your account with single sign-on. | |
10 | 16 | </p> |
11 | </div> | |
17 | <p><strong> | |
18 | If you did not expect this, your account may be compromised. | |
19 | </strong></p> | |
20 | </header> | |
21 | <main> | |
22 | <a href="{{ redirect_url }}" class="primary-button"> | |
23 | Continue with {{ idp.idp_name }} | |
24 | </a> | |
25 | </main> | |
12 | 26 | </body> |
13 | 27 | </html> |
0 | <html> | |
1 | <head> | |
2 | <title>Authentication Successful</title> | |
3 | <script> | |
4 | if (window.onAuthDone) { | |
5 | window.onAuthDone(); | |
6 | } else if (window.opener && window.opener.postMessage) { | |
7 | window.opener.postMessage("authDone", "*"); | |
8 | } | |
9 | </script> | |
10 | </head> | |
0 | <!DOCTYPE html> | |
1 | <html lang="en"> | |
2 | <head> | |
3 | <meta charset="UTF-8"> | |
4 | <title>Authentication successful</title> | |
5 | <meta name="viewport" content="width=device-width, user-scalable=no"> | |
6 | <style type="text/css"> | |
7 | {% include "sso.css" without context %} | |
8 | </style> | |
9 | <script> | |
10 | if (window.onAuthDone) { | |
11 | window.onAuthDone(); | |
12 | } else if (window.opener && window.opener.postMessage) { | |
13 | window.opener.postMessage("authDone", "*"); | |
14 | } | |
15 | </script> | |
16 | </head> | |
11 | 17 | <body> |
12 | <div> | |
13 | <p>Thank you</p> | |
14 | <p>You may now close this window and return to the application</p> | |
15 | </div> | |
18 | <header> | |
19 | <h1>Thank you</h1> | |
20 | <p> | |
21 | Now that we know it’s you, you can close this window and return to the |
22 | application. | |
23 | </p> | |
24 | </header> | |
16 | 25 | </body> |
17 | 26 | </html> |
0 | 0 | <!DOCTYPE html> |
1 | 1 | <html lang="en"> |
2 | <head> | |
3 | <meta charset="UTF-8"> | |
4 | <title>SSO error</title> | |
5 | </head> | |
6 | <body> | |
2 | <head> | |
3 | <meta charset="UTF-8"> | |
4 | <title>Authentication failed</title> | |
5 | <meta name="viewport" content="width=device-width, user-scalable=no"> | |
6 | <style type="text/css"> | |
7 | {% include "sso.css" without context %} | |
8 | ||
9 | #error_code { | |
10 | margin-top: 56px; | |
11 | } | |
12 | </style> | |
13 | </head> | |
14 | <body class="error_page"> | |
7 | 15 | {# If an error of unauthorised is returned it means we have actively rejected their login #} |
8 | 16 | {% if error == "unauthorised" %} |
9 | <p>You are not allowed to log in here.</p> | |
17 | <header> | |
18 | <p>You are not allowed to log in here.</p> | |
19 | </header> | |
10 | 20 | {% else %} |
11 | <p> | |
12 | There was an error during authentication: | |
13 | </p> | |
14 | <div id="errormsg" style="margin:20px 80px">{{ error_description | e }}</div> | |
15 | <p> | |
16 | If you are seeing this page after clicking a link sent to you via email, make | |
17 | sure you only click the confirmation link once, and that you open the | |
18 | validation link in the same client you're logging in from. | |
19 | </p> | |
20 | <p> | |
21 | Try logging in again from your Matrix client and if the problem persists | |
22 | please contact the server's administrator. | |
23 | </p> | |
24 | <p>Error: <code>{{ error }}</code></p> | |
21 | <header> | |
22 | <h1>There was an error</h1> | |
23 | <p> | |
24 | <strong id="errormsg">{{ error_description }}</strong> | |
25 | </p> | |
26 | <p> | |
27 | If you are seeing this page after clicking a link sent to you via email, | |
28 | make sure you only click the confirmation link once, and that you open | |
29 | the validation link in the same client you're logging in from. | |
30 | </p> | |
31 | <p> | |
32 | Try logging in again from your Matrix client and if the problem persists | |
33 | please contact the server's administrator. | |
34 | </p> | |
35 | <div id="error_code"> | |
36 | <p><strong>Error code</strong></p> | |
37 | <p>{{ error }}</p> | |
38 | </div> | |
39 | </header> | |
25 | 40 | |
26 | <script type="text/javascript"> | |
27 | // Error handling to support Auth0 errors that we might get through a GET request | |
28 | // to the validation endpoint. If an error is provided, it's either going to be | |
29 | // located in the query string or in a query string-like URI fragment. | |
30 | // We try to locate the error from any of these two locations, but if we can't | |
31 | // we just don't print anything specific. | |
32 | let searchStr = ""; | |
33 | if (window.location.search) { | |
34 | // window.location.searchParams isn't always defined when | |
35 | // window.location.search is, so it's more reliable to parse the latter. | |
36 | searchStr = window.location.search; | |
37 | } else if (window.location.hash) { | |
38 | // Replace the # with a ? so that URLSearchParams does the right thing and | |
39 | // doesn't parse the first parameter incorrectly. | |
40 | searchStr = window.location.hash.replace("#", "?"); | |
41 | } | |
41 | <script type="text/javascript"> | |
42 | // Error handling to support Auth0 errors that we might get through a GET request | |
43 | // to the validation endpoint. If an error is provided, it's either going to be | |
44 | // located in the query string or in a query string-like URI fragment. | |
45 | // We try to locate the error from any of these two locations, but if we can't | |
46 | // we just don't print anything specific. | |
47 | let searchStr = ""; | |
48 | if (window.location.search) { | |
49 | // window.location.searchParams isn't always defined when | |
50 | // window.location.search is, so it's more reliable to parse the latter. | |
51 | searchStr = window.location.search; | |
52 | } else if (window.location.hash) { | |
53 | // Replace the # with a ? so that URLSearchParams does the right thing and | |
54 | // doesn't parse the first parameter incorrectly. | |
55 | searchStr = window.location.hash.replace("#", "?"); | |
56 | } | |
42 | 57 | |
43 | // We might end up with no error in the URL, so we need to check if we have one | |
44 | // to print one. | |
45 | let errorDesc = new URLSearchParams(searchStr).get("error_description") | |
46 | if (errorDesc) { | |
47 | document.getElementById("errormsg").innerText = errorDesc; | |
48 | } | |
49 | </script> | |
58 | // We might end up with no error in the URL, so we need to check if we have one | |
59 | // to print one. | |
60 | let errorDesc = new URLSearchParams(searchStr).get("error_description"); |
61 | if (errorDesc) { | |
62 | document.getElementById("errormsg").innerText = errorDesc; | |
63 | } | |
64 | </script> | |
50 | 65 | {% endif %} |
51 | 66 | </body> |
52 | 67 | </html> |
2 | 2 | <head> |
3 | 3 | <meta charset="UTF-8"> |
4 | 4 | <link rel="stylesheet" href="/_matrix/static/client/login/style.css"> |
5 | <title>{{server_name | e}} Login</title> | |
5 | <title>{{ server_name }} Login</title> | |
6 | 6 | </head> |
7 | 7 | <body> |
8 | 8 | <div id="container"> |
9 | <h1 id="title">{{server_name | e}} Login</h1> | |
9 | <h1 id="title">{{ server_name }} Login</h1> | |
10 | 10 | <div class="login_flow"> |
11 | 11 | <p>Choose one of the following identity providers:</p> |
12 | 12 | <form> |
13 | <input type="hidden" name="redirectUrl" value="{{redirect_url | e}}"> | |
13 | <input type="hidden" name="redirectUrl" value="{{ redirect_url }}"> | |
14 | 14 | <ul class="radiobuttons"> |
15 | 15 | {% for p in providers %} |
16 | 16 | <li> |
17 | <input type="radio" name="idp" id="prov{{loop.index}}" value="{{p.idp_id}}"> | |
18 | <label for="prov{{loop.index}}">{{p.idp_name | e}}</label> | |
17 | <input type="radio" name="idp" id="prov{{ loop.index }}" value="{{ p.idp_id }}"> | |
18 | <label for="prov{{ loop.index }}">{{ p.idp_name }}</label> | |
19 | 19 | {% if p.idp_icon %} |
20 | <img src="{{p.idp_icon | mxc_to_http(32, 32)}}"/> | |
20 | <img src="{{ p.idp_icon | mxc_to_http(32, 32) }}"/> | |
21 | 21 | {% endif %} |
22 | 22 | </li> |
23 | 23 | {% endfor %} |
0 | <!DOCTYPE html> | |
1 | <html lang="en"> | |
2 | <head> | |
3 | <meta charset="UTF-8"> | |
4 | <title>SSO redirect confirmation</title> | |
5 | <meta name="viewport" content="width=device-width, user-scalable=no"> | |
6 | <style type="text/css"> | |
7 | {% include "sso.css" without context %} | |
8 | ||
9 | #consent_form { | |
10 | margin-top: 56px; | |
11 | } | |
12 | </style> | |
13 | </head> | |
14 | <body> | |
15 | <header> | |
16 | <h1>Your account is nearly ready</h1> | |
17 | <p>Agree to the terms to create your account.</p> | |
18 | </header> | |
19 | <main> | |
20 | <!-- {% if user_profile.avatar_url and user_profile.display_name %} --> | |
21 | <div class="profile"> | |
22 | <img src="{{ user_profile.avatar_url | mxc_to_http(64, 64) }}" class="avatar" /> | |
23 | <div class="profile-details"> | |
24 | <div class="display-name">{{ user_profile.display_name }}</div> | |
25 | <div class="user-id">{{ user_id }}</div> | |
26 | </div> | |
27 | </div> | |
28 | <!-- {% endif %} --> | |
29 | <form method="post" action="{{ my_url }}" id="consent_form"> |
30 | <p> | |
31 | <input id="accepted_version" type="checkbox" name="accepted_version" value="{{ consent_version }}" required> | |
32 | <label for="accepted_version">I have read and agree to the <a href="{{ terms_url }}" target="_blank">terms and conditions</a>.</label> | |
33 | </p> | |
34 | <input type="submit" class="primary-button" value="Continue"/> | |
35 | </form> | |
36 | </main> | |
37 | </body> | |
38 | </html> |
2 | 2 | <head> |
3 | 3 | <meta charset="UTF-8"> |
4 | 4 | <title>SSO redirect confirmation</title> |
5 | <meta name="viewport" content="width=device-width, user-scalable=no"> | |
6 | <style type="text/css"> | |
7 | {% include "sso.css" without context %} | |
8 | </style> | |
5 | 9 | </head> |
6 | 10 | <body> |
7 | <p>The application at <span style="font-weight:bold">{{ display_url | e }}</span> is requesting full access to your <span style="font-weight:bold">{{ server_name }}</span> Matrix account.</p> | |
8 | <p>If you don't recognise this address, you should ignore this and close this tab.</p> | |
9 | <p> | |
10 | <a href="{{ redirect_url | e }}">I trust this address</a> | |
11 | </p> | |
11 | <header> | |
12 | {% if new_user %} | |
13 | <h1>Your account is now ready</h1> | |
14 | <p>You've created your account on {{ server_name }}.</p> |
15 | {% else %} | |
16 | <h1>Log in</h1> | |
17 | {% endif %} | |
18 | <p>Continue to confirm you trust <strong>{{ display_url }}</strong>.</p> | |
19 | </header> | |
20 | <main> | |
21 | {% if user_profile.avatar_url %} | |
22 | <div class="profile"> | |
23 | <img src="{{ user_profile.avatar_url | mxc_to_http(64, 64) }}" class="avatar" /> | |
24 | <div class="profile-details"> | |
25 | {% if user_profile.display_name %} | |
26 | <div class="display-name">{{ user_profile.display_name }}</div> | |
27 | {% endif %} | |
28 | <div class="user-id">{{ user_id }}</div> | |
29 | </div> | |
30 | </div> | |
31 | {% endif %} | |
32 | <a href="{{ redirect_url }}" class="primary-button">Continue</a> | |
33 | </main> | |
12 | 34 | </body> |
13 | </html>⏎ | |
35 | </html> |
0 | <!DOCTYPE html> | |
1 | <html lang="en"> | |
2 | <head> | |
3 | <title>Synapse Login</title> | |
4 | <link rel="stylesheet" href="style.css" type="text/css" /> | |
5 | </head> | |
6 | <body> | |
7 | <div class="card"> | |
8 | <form method="post" class="form__input" id="form" action="submit"> | |
9 | <label for="field-username">Please pick your username:</label> | |
10 | <input type="text" name="username" id="field-username" autofocus=""> | |
11 | <input type="submit" class="button button--full-width" id="button-submit" value="Submit"> | |
12 | </form> | |
13 | <!-- this is used for feedback --> | |
14 | <div role="alert" class="tooltip hidden" id="message"></div> |
15 | <script src="script.js"></script> | |
16 | </div> | |
17 | </body> | |
18 | </html> |
0 | let inputField = document.getElementById("field-username"); | |
1 | let inputForm = document.getElementById("form"); | |
2 | let submitButton = document.getElementById("button-submit"); | |
3 | let message = document.getElementById("message"); | |
4 | ||
5 | // Submit username and receive response | |
6 | function showMessage(messageText) { | |
7 | // Unhide the message text | |
8 | message.classList.remove("hidden"); | |
9 | ||
10 | message.textContent = messageText; | |
11 | }; | |
12 | ||
13 | function doSubmit() { | |
14 | showMessage("Success. Please wait a moment for your browser to redirect."); | |
15 | ||
16 | // remove the event handler before re-submitting the form. | |
17 | inputForm.onsubmit = null; |
18 | inputForm.submit(); | |
19 | } | |
20 | ||
21 | function onResponse(response) { | |
22 | // Display message | |
23 | showMessage(response); | |
24 | ||
25 | // Enable submit button and input field | |
26 | submitButton.classList.remove('button--disabled'); | |
27 | submitButton.value = "Submit"; | |
28 | }; | |
29 | ||
30 | let disallowedUsernameCharacters = RegExp("[^a-z0-9\\.\\_\\=\\-\\/]"); |
31 | function usernameIsValid(username) { |
32 | return !disallowedUsernameCharacters.test(username); |
33 | } | |
34 | let allowedCharactersString = "lowercase letters, digits, ., _, -, /, ="; | |
35 | ||
36 | function buildQueryString(params) { | |
37 | return Object.keys(params) | |
38 | .map(k => encodeURIComponent(k) + '=' + encodeURIComponent(params[k])) | |
39 | .join('&'); | |
40 | } | |
41 | ||
42 | function submitUsername(username) { | |
43 | if(username.length == 0) { | |
44 | onResponse("Please enter a username."); | |
45 | return; | |
46 | } | |
47 | if(!usernameIsValid(username)) { | |
48 | onResponse("Invalid username. Only the following characters are allowed: " + allowedCharactersString); | |
49 | return; | |
50 | } | |
51 | ||
52 | // if this browser doesn't support fetch, skip the availability check. | |
53 | if(!window.fetch) { | |
54 | doSubmit(); | |
55 | return; | |
56 | } | |
57 | ||
58 | let check_uri = 'check?' + buildQueryString({"username": username}); | |
59 | fetch(check_uri, { | |
60 | // include the cookie | |
61 | "credentials": "same-origin", | |
62 | }).then((response) => { | |
63 | if(!response.ok) { | |
64 | // for non-200 responses, raise the body of the response as an exception | |
65 | return response.text().then((text) => { throw text; }); | |
66 | } else { | |
67 | return response.json(); | |
68 | } | |
69 | }).then((json) => { | |
70 | if(json.error) { | |
71 | throw json.error; | |
72 | } else if(json.available) { | |
73 | doSubmit(); | |
74 | } else { | |
75 | onResponse("This username is not available, please choose another."); | |
76 | } | |
77 | }).catch((err) => { | |
78 | onResponse("Error checking username availability: " + err); | |
79 | }); | |
80 | } | |
81 | ||
82 | function clickSubmit(event) { |
83 | event.preventDefault(); | |
84 | if(submitButton.classList.contains('button--disabled')) { return; } | |
85 | ||
86 | // Disable submit button and input field | |
87 | submitButton.classList.add('button--disabled'); | |
88 | ||
89 | // Submit username | |
90 | submitButton.value = "Checking..."; | |
91 | submitUsername(inputField.value); | |
92 | }; | |
93 | ||
94 | inputForm.onsubmit = clickSubmit; |
0 | input[type="text"] { | |
1 | font-size: 100%; | |
2 | background-color: #ededf0; | |
3 | border: 1px solid #fff; | |
4 | border-radius: .2em; | |
5 | padding: .5em .9em; | |
6 | display: block; | |
7 | width: 26em; | |
8 | } | |
9 | ||
10 | .button--disabled { | |
11 | border-color: #fff; | |
12 | background-color: transparent; | |
13 | color: #000; | |
14 | text-transform: none; | |
15 | } | |
16 | ||
17 | .hidden { | |
18 | display: none; | |
19 | } | |
20 | ||
21 | .tooltip { | |
22 | background-color: #f9f9fa; | |
23 | padding: 1em; | |
24 | margin: 1em 0; | |
25 | } | |
26 |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2014-2016 OpenMarket Ltd |
2 | 2 | # Copyright 2018-2019 New Vector Ltd |
3 | # Copyright 2020, 2021 The Matrix.org Foundation C.I.C. | |
4 | ||
3 | 5 | # |
4 | 6 | # Licensed under the Apache License, Version 2.0 (the "License"); |
5 | 7 | # you may not use this file except in compliance with the License. |
35 | 37 | from synapse.rest.admin.purge_room_servlet import PurgeRoomServlet |
36 | 38 | from synapse.rest.admin.rooms import ( |
37 | 39 | DeleteRoomRestServlet, |
40 | ForwardExtremitiesRestServlet, | |
38 | 41 | JoinRoomAliasServlet, |
39 | 42 | ListRoomRestServlet, |
40 | 43 | MakeRoomAdminRestServlet, |
41 | 44 | RoomMembersRestServlet, |
42 | 45 | RoomRestServlet, |
46 | RoomStateRestServlet, | |
43 | 47 | ShutdownRoomRestServlet, |
44 | 48 | ) |
45 | 49 | from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet |
50 | 54 | PushersRestServlet, |
51 | 55 | ResetPasswordRestServlet, |
52 | 56 | SearchUsersRestServlet, |
57 | ShadowBanRestServlet, | |
53 | 58 | UserAdminServlet, |
54 | 59 | UserMediaRestServlet, |
55 | 60 | UserMembershipRestServlet, |
208 | 213 | """ |
209 | 214 | register_servlets_for_client_rest_resource(hs, http_server) |
210 | 215 | ListRoomRestServlet(hs).register(http_server) |
216 | RoomStateRestServlet(hs).register(http_server) | |
211 | 217 | RoomRestServlet(hs).register(http_server) |
212 | 218 | RoomMembersRestServlet(hs).register(http_server) |
213 | 219 | DeleteRoomRestServlet(hs).register(http_server) |
229 | 235 | EventReportsRestServlet(hs).register(http_server) |
230 | 236 | PushersRestServlet(hs).register(http_server) |
231 | 237 | MakeRoomAdminRestServlet(hs).register(http_server) |
238 | ShadowBanRestServlet(hs).register(http_server) | |
239 | ForwardExtremitiesRestServlet(hs).register(http_server) | |
232 | 240 | |
233 | 241 | |
234 | 242 | def register_servlets_for_client_rest_resource(hs, http_server): |
0 | 0 | # -*- coding: utf-8 -*- |
1 | # Copyright 2019 The Matrix.org Foundation C.I.C. | |
1 | # Copyright 2019-2021 The Matrix.org Foundation C.I.C. | |
2 | 2 | # |
3 | 3 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 4 | # you may not use this file except in compliance with the License. |
291 | 291 | return 200, ret |
292 | 292 | |
293 | 293 | |
294 | class RoomStateRestServlet(RestServlet): | |
295 | """ | |
296 | Get full state within a room. | |
297 | """ | |
298 | ||
299 | PATTERNS = admin_patterns("/rooms/(?P<room_id>[^/]+)/state") | |
300 | ||
301 | def __init__(self, hs: "HomeServer"): | |
302 | self.hs = hs | |
303 | self.auth = hs.get_auth() | |
304 | self.store = hs.get_datastore() | |
305 | self.clock = hs.get_clock() | |
306 | self._event_serializer = hs.get_event_client_serializer() | |
307 | ||
308 | async def on_GET( | |
309 | self, request: SynapseRequest, room_id: str | |
310 | ) -> Tuple[int, JsonDict]: | |
311 | requester = await self.auth.get_user_by_req(request) | |
312 | await assert_user_is_admin(self.auth, requester.user) | |
313 | ||
314 | ret = await self.store.get_room(room_id) | |
315 | if not ret: | |
316 | raise NotFoundError("Room not found") | |
317 | ||
318 | event_ids = await self.store.get_current_state_ids(room_id) | |
319 | events = await self.store.get_events(event_ids.values()) | |
320 | now = self.clock.time_msec() | |
321 | room_state = await self._event_serializer.serialize_events( | |
322 | events.values(), | |
323 | now, | |
324 | # We don't bother bundling aggregations in when asked for state | |
325 | # events, as clients won't use them. | |
326 | bundle_aggregations=False, | |
327 | ) | |
328 | ret = {"state": room_state} | |
329 | ||
330 | return 200, ret | |
331 | ||
332 | ||
294 | 333 | class JoinRoomAliasServlet(RestServlet): |
295 | 334 | |
296 | 335 | PATTERNS = admin_patterns("/join/(?P<room_identifier>[^/]*)") |
430 | 469 | if not admin_users: |
431 | 470 | raise SynapseError(400, "No local admin user in room") |
432 | 471 | |
433 | admin_user_id = admin_users[-1] | |
472 | admin_user_id = None | |
473 | ||
474 | for admin_user in reversed(admin_users): | |
475 | if room_state.get((EventTypes.Member, admin_user)): | |
476 | admin_user_id = admin_user | |
477 | break | |
478 | ||
479 | if not admin_user_id: | |
480 | raise SynapseError( | |
481 | 400, "No local admin user in room", | |
482 | ) | |
434 | 483 | |
435 | 484 | pl_content = power_levels.content |
436 | 485 | else: |
498 | 547 | ) |
499 | 548 | |
500 | 549 | return 200, {} |
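The hunk above changes `MakeRoomAdminRestServlet` to walk the power-level admins in reverse and pick the first one that still has a membership event in the room, instead of blindly taking the last list entry. The selection logic can be sketched in isolation (the helper name and the simplified `room_state` mapping are illustrative only):

```python
def pick_admin_in_room(admin_users, room_state):
    # Walk the most-recently-listed admins first and skip any admin with
    # no membership event in the current room state, as the fix above does.
    for admin_user in reversed(admin_users):
        if ("m.room.member", admin_user) in room_state:
            return admin_user
    return None  # the caller raises "No local admin user in room"
```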
550 | ||
551 | ||
552 | class ForwardExtremitiesRestServlet(RestServlet): | |
553 | """Allows a server admin to get or clear forward extremities. | |
554 | ||
555 | Clearing does not require restarting the server. | |
556 | ||
557 | Clear forward extremities: | |
558 | DELETE /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities | |
559 | ||
560 | Get forward_extremities: | |
561 | GET /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities | |
562 | """ | |
563 | ||
564 | PATTERNS = admin_patterns("/rooms/(?P<room_identifier>[^/]*)/forward_extremities") | |
565 | ||
566 | def __init__(self, hs: "HomeServer"): | |
567 | self.hs = hs | |
568 | self.auth = hs.get_auth() | |
569 | self.room_member_handler = hs.get_room_member_handler() | |
570 | self.store = hs.get_datastore() | |
571 | ||
572 | async def resolve_room_id(self, room_identifier: str) -> str: | |
573 | """Resolve to a room ID, if necessary.""" | |
574 | if RoomID.is_valid(room_identifier): | |
575 | resolved_room_id = room_identifier | |
576 | elif RoomAlias.is_valid(room_identifier): | |
577 | room_alias = RoomAlias.from_string(room_identifier) | |
578 | room_id, _ = await self.room_member_handler.lookup_room_alias(room_alias) | |
579 | resolved_room_id = room_id.to_string() | |
580 | else: | |
581 | raise SynapseError( | |
582 | 400, "%s was not a legal room ID or room alias" % (room_identifier,) |
583 | ) | |
584 | if not resolved_room_id: | |
585 | raise SynapseError( | |
586 | 400, "Unknown room ID or room alias %s" % room_identifier | |
587 | ) | |
588 | return resolved_room_id | |
589 | ||
590 | async def on_DELETE(self, request, room_identifier): | |
591 | requester = await self.auth.get_user_by_req(request) | |
592 | await assert_user_is_admin(self.auth, requester.user) | |
593 | ||
594 | room_id = await self.resolve_room_id(room_identifier) | |
595 | ||
596 | deleted_count = await self.store.delete_forward_extremities_for_room(room_id) | |
597 | return 200, {"deleted": deleted_count} | |
598 | ||
599 | async def on_GET(self, request, room_identifier): | |
600 | requester = await self.auth.get_user_by_req(request) | |
601 | await assert_user_is_admin(self.auth, requester.user) | |
602 | ||
603 | room_id = await self.resolve_room_id(room_identifier) | |
604 | ||
605 | extremities = await self.store.get_forward_extremities_for_room(room_id) | |
606 | return 200, {"count": len(extremities), "results": extremities} |
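The new servlet's GET and DELETE endpoints can be exercised from any HTTP client; the room ID or alias must be percent-encoded in the path. A minimal sketch of building the URL (the homeserver base URL below is a placeholder, not a value from this release):

```python
from urllib.parse import quote

def forward_extremities_url(base_url, room_id_or_alias):
    # Matches the PATTERNS registered by ForwardExtremitiesRestServlet.
    return "%s/_synapse/admin/v1/rooms/%s/forward_extremities" % (
        base_url.rstrip("/"),
        quote(room_id_or_alias, safe=""),
    )

# Hedged usage (requires a live homeserver and an admin access token):
#   GET    -> {"count": ..., "results": [...]}
#   DELETE -> {"deleted": ...}
```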
82 | 82 | The parameter `deactivated` can be used to include deactivated users. |
83 | 83 | """ |
84 | 84 | |
85 | def __init__(self, hs): | |
85 | def __init__(self, hs: "HomeServer"): | |
86 | 86 | self.hs = hs |
87 | 87 | self.store = hs.get_datastore() |
88 | 88 | self.auth = hs.get_auth() |
89 | 89 | self.admin_handler = hs.get_admin_handler() |
90 | 90 | |
91 | async def on_GET(self, request): | |
91 | async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]: | |
92 | 92 | await assert_requester_is_admin(self.auth, request) |
93 | 93 | |
94 | 94 | start = parse_integer(request, "from", default=0) |
95 | 95 | limit = parse_integer(request, "limit", default=100) |
96 | ||
97 | if start < 0: | |
98 | raise SynapseError( | |
99 | 400, | |
100 | "Query parameter from must be a string representing a non-negative integer.", | 
101 | errcode=Codes.INVALID_PARAM, | |
102 | ) | |
103 | ||
104 | if limit < 0: | |
105 | raise SynapseError( | |
106 | 400, | |
107 | "Query parameter limit must be a string representing a non-negative integer.", | 
108 | errcode=Codes.INVALID_PARAM, | |
109 | ) | |
110 | ||
96 | 111 | user_id = parse_string(request, "user_id", default=None) |
97 | 112 | name = parse_string(request, "name", default=None) |
98 | 113 | guests = parse_boolean(request, "guests", default=True) |
102 | 117 | start, limit, user_id, name, guests, deactivated |
103 | 118 | ) |
104 | 119 | ret = {"users": users, "total": total} |
105 | if len(users) >= limit: | |
120 | if (start + limit) < total: | |
106 | 121 | ret["next_token"] = str(start + len(users)) |
107 | 122 | |
108 | 123 | return 200, ret |
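The changed pagination check above (`(start + limit) < total` instead of `len(users) >= limit`) emits a `next_token` only when a further page actually exists. A minimal, hypothetical sketch of that logic outside Synapse (`page_response` is a made-up helper, not Synapse's API):

```python
def page_response(total: int, start: int, limit: int, page_len: int) -> dict:
    """Build one page of a paginated admin-API-style response.

    A next_token is included only when (start + limit) < total, so a
    final page that is exactly `limit` entries long does not produce a
    spurious extra token (the problem with the len(users) >= limit check).
    """
    ret = {"total": total}
    if (start + limit) < total:
        ret["next_token"] = str(start + page_len)
    return ret
```

With `total=100, start=0, limit=100` no token is returned, whereas the old length-based check would have produced one even though the listing was complete.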
874 | 889 | ) |
875 | 890 | |
876 | 891 | return 200, {"access_token": token} |
892 | ||
893 | ||
894 | class ShadowBanRestServlet(RestServlet): | |
895 | """An admin API for shadow-banning a user. | |
896 | ||
897 | A shadow-banned user receives successful responses to their client-server | 
898 | API requests, but the events are not propagated into rooms. | |
899 | ||
900 | Shadow-banning a user should be used as a tool of last resort and may lead | |
901 | to confusing or broken behaviour for the client. | |
902 | ||
903 | Example: | |
904 | ||
905 | POST /_synapse/admin/v1/users/@test:example.com/shadow_ban | |
906 | {} | |
907 | ||
908 | 200 OK | |
909 | {} | |
910 | """ | |
911 | ||
912 | PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/shadow_ban") | |
913 | ||
914 | def __init__(self, hs: "HomeServer"): | |
915 | self.hs = hs | |
916 | self.store = hs.get_datastore() | |
917 | self.auth = hs.get_auth() | |
918 | ||
919 | async def on_POST(self, request, user_id): | |
920 | await assert_requester_is_admin(self.auth, request) | |
921 | ||
922 | if not self.hs.is_mine_id(user_id): | |
923 | raise SynapseError(400, "Only local users can be shadow-banned") | |
924 | ||
925 | await self.store.set_shadow_banned(UserID.from_string(user_id), True) | |
926 | ||
927 | return 200, {} |
18 | 18 | from synapse.api.errors import Codes, LoginError, SynapseError |
19 | 19 | from synapse.api.ratelimiting import Ratelimiter |
20 | 20 | from synapse.appservice import ApplicationService |
21 | from synapse.http.server import finish_request | |
21 | from synapse.handlers.sso import SsoIdentityProvider | |
22 | from synapse.http.server import HttpServer, finish_request | |
22 | 23 | from synapse.http.servlet import ( |
23 | 24 | RestServlet, |
24 | 25 | parse_json_object_from_request, |
59 | 60 | self.saml2_enabled = hs.config.saml2_enabled |
60 | 61 | self.cas_enabled = hs.config.cas_enabled |
61 | 62 | self.oidc_enabled = hs.config.oidc_enabled |
63 | self._msc2858_enabled = hs.config.experimental.msc2858_enabled | |
62 | 64 | |
63 | 65 | self.auth = hs.get_auth() |
64 | 66 | |
65 | 67 | self.auth_handler = self.hs.get_auth_handler() |
66 | 68 | self.registration_handler = hs.get_registration_handler() |
69 | self._sso_handler = hs.get_sso_handler() | |
70 | ||
67 | 71 | self._well_known_builder = WellKnownBuilder(hs) |
68 | 72 | self._address_ratelimiter = Ratelimiter( |
69 | 73 | clock=hs.get_clock(), |
88 | 92 | flows.append({"type": LoginRestServlet.CAS_TYPE}) |
89 | 93 | |
90 | 94 | if self.cas_enabled or self.saml2_enabled or self.oidc_enabled: |
91 | flows.append({"type": LoginRestServlet.SSO_TYPE}) | |
92 | # While its valid for us to advertise this login type generally, | |
95 | sso_flow = {"type": LoginRestServlet.SSO_TYPE} # type: JsonDict | |
96 | ||
97 | if self._msc2858_enabled: | |
98 | sso_flow["org.matrix.msc2858.identity_providers"] = [ | |
99 | _get_auth_flow_dict_for_idp(idp) | |
100 | for idp in self._sso_handler.get_identity_providers().values() | |
101 | ] | |
102 | ||
103 | flows.append(sso_flow) | |
104 | ||
105 | # While it's valid for us to advertise this login type generally, | |
93 | 106 | # synapse currently only gives out these tokens as part of the |
94 | 107 | # SSO login flow. |
95 | 108 | # Generally we don't want to advertise login flows that clients |
310 | 323 | return result |
311 | 324 | |
312 | 325 | |
326 | def _get_auth_flow_dict_for_idp(idp: SsoIdentityProvider) -> JsonDict: | |
327 | """Return an entry for the login flow dict | |
328 | ||
329 | Returns an entry suitable for inclusion in "identity_providers" in the | |
330 | response to GET /_matrix/client/r0/login | |
331 | """ | |
332 | e = {"id": idp.idp_id, "name": idp.idp_name} # type: JsonDict | |
333 | if idp.idp_icon: | |
334 | e["icon"] = idp.idp_icon | |
335 | if idp.idp_brand: | |
336 | e["brand"] = idp.idp_brand | |
337 | return e | |
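To illustrate the helper above, here is a standalone sketch of the entry it builds for MSC2858's `identity_providers` list; `FakeIdp` is a made-up stand-in for `SsoIdentityProvider`, with attribute names taken from the code:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FakeIdp:
    """Stand-in for synapse.handlers.sso.SsoIdentityProvider."""

    idp_id: str
    idp_name: str
    idp_icon: Optional[str] = None
    idp_brand: Optional[str] = None


def auth_flow_dict(idp: FakeIdp) -> dict:
    # Mirrors _get_auth_flow_dict_for_idp: "icon" and "brand" are
    # optional keys, only included when the provider defines them.
    e = {"id": idp.idp_id, "name": idp.idp_name}
    if idp.idp_icon:
        e["icon"] = idp.idp_icon
    if idp.idp_brand:
        e["brand"] = idp.idp_brand
    return e
```

Keeping the optional keys absent (rather than null) matches how the entries appear under `org.matrix.msc2858.identity_providers` in the login flows response.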
338 | ||
339 | ||
313 | 340 | class SsoRedirectServlet(RestServlet): |
314 | PATTERNS = client_patterns("/login/(cas|sso)/redirect", v1=True) | |
341 | PATTERNS = client_patterns("/login/(cas|sso)/redirect$", v1=True) | |
315 | 342 | |
316 | 343 | def __init__(self, hs: "HomeServer"): |
317 | 344 | # make sure that the relevant handlers are instantiated, so that they |
323 | 350 | if hs.config.oidc_enabled: |
324 | 351 | hs.get_oidc_handler() |
325 | 352 | self._sso_handler = hs.get_sso_handler() |
326 | ||
327 | async def on_GET(self, request: SynapseRequest): | |
353 | self._msc2858_enabled = hs.config.experimental.msc2858_enabled | |
354 | ||
355 | def register(self, http_server: HttpServer) -> None: | |
356 | super().register(http_server) | |
357 | if self._msc2858_enabled: | |
358 | # expose additional endpoint for MSC2858 support | |
359 | http_server.register_paths( | |
360 | "GET", | |
361 | client_patterns( | |
362 | "/org.matrix.msc2858/login/sso/redirect/(?P<idp_id>[A-Za-z0-9_.~-]+)$", | |
363 | releases=(), | |
364 | unstable=True, | |
365 | ), | |
366 | self.on_GET, | |
367 | self.__class__.__name__, | |
368 | ) | |
369 | ||
370 | async def on_GET( | |
371 | self, request: SynapseRequest, idp_id: Optional[str] = None | |
372 | ) -> None: | |
328 | 373 | client_redirect_url = parse_string( |
329 | 374 | request, "redirectUrl", required=True, encoding=None |
330 | 375 | ) |
331 | 376 | sso_url = await self._sso_handler.handle_redirect_request( |
332 | request, client_redirect_url | |
377 | request, client_redirect_url, idp_id, | |
333 | 378 | ) |
334 | 379 | logger.info("Redirecting to %s", sso_url) |
335 | 380 | request.redirect(sso_url) |
53 | 53 | class EmailPasswordRequestTokenRestServlet(RestServlet): |
54 | 54 | PATTERNS = client_patterns("/account/password/email/requestToken$") |
55 | 55 | |
56 | def __init__(self, hs): | |
56 | def __init__(self, hs: "HomeServer"): | |
57 | 57 | super().__init__() |
58 | 58 | self.hs = hs |
59 | 59 | self.datastore = hs.get_datastore() |
101 | 101 | if next_link: |
102 | 102 | # Raise if the provided next_link value isn't valid |
103 | 103 | assert_valid_next_link(self.hs, next_link) |
104 | ||
105 | self.identity_handler.ratelimit_request_token_requests(request, "email", email) | |
104 | 106 | |
105 | 107 | # The email will be sent to the stored address. |
106 | 108 | # This avoids a potential account hijack by requesting a password reset to |
378 | 380 | Codes.THREEPID_DENIED, |
379 | 381 | ) |
380 | 382 | |
383 | self.identity_handler.ratelimit_request_token_requests(request, "email", email) | |
384 | ||
381 | 385 | if next_link: |
382 | 386 | # Raise if the provided next_link value isn't valid |
383 | 387 | assert_valid_next_link(self.hs, next_link) |
429 | 433 | class MsisdnThreepidRequestTokenRestServlet(RestServlet): |
430 | 434 | PATTERNS = client_patterns("/account/3pid/msisdn/requestToken$") |
431 | 435 | |
432 | def __init__(self, hs): | |
436 | def __init__(self, hs: "HomeServer"): | |
433 | 437 | self.hs = hs |
434 | 438 | super().__init__() |
435 | 439 | self.store = self.hs.get_datastore() |
456 | 460 | "Account phone numbers are not authorized on this server", |
457 | 461 | Codes.THREEPID_DENIED, |
458 | 462 | ) |
463 | ||
464 | self.identity_handler.ratelimit_request_token_requests( | |
465 | request, "msisdn", msisdn | |
466 | ) | |
459 | 467 | |
460 | 468 | if next_link: |
461 | 469 | # Raise if the provided next_link value isn't valid |
125 | 125 | Codes.THREEPID_DENIED, |
126 | 126 | ) |
127 | 127 | |
128 | self.identity_handler.ratelimit_request_token_requests(request, "email", email) | |
129 | ||
128 | 130 | existing_user_id = await self.hs.get_datastore().get_user_id_by_threepid( |
129 | 131 | "email", email |
130 | 132 | ) |
203 | 205 | "Phone numbers are not authorized to register on this server", |
204 | 206 | Codes.THREEPID_DENIED, |
205 | 207 | ) |
208 | ||
209 | self.identity_handler.ratelimit_request_token_requests( | |
210 | request, "msisdn", msisdn | |
211 | ) | |
206 | 212 | |
207 | 213 | existing_user_id = await self.hs.get_datastore().get_user_id_by_threepid( |
208 | 214 | "msisdn", msisdn |
99 | 99 | |
100 | 100 | consent_template_directory = hs.config.user_consent_template_dir |
101 | 101 | |
102 | # TODO: switch to synapse.util.templates.build_jinja_env | |
102 | 103 | loader = jinja2.FileSystemLoader(consent_template_directory) |
103 | 104 | self._jinja_env = jinja2.Environment( |
104 | 105 | loader=loader, autoescape=jinja2.select_autoescape(["html", "htm", "xml"]) |
299 | 299 | thumbnail_height (int) |
300 | 300 | thumbnail_method (str) |
301 | 301 | thumbnail_type (str): Content type of thumbnail, e.g. image/png |
302 | thumbnail_length (int): The size of the thumbnail, in bytes. | 
302 | 303 | """ |
303 | 304 | |
304 | 305 | def __init__( |
311 | 312 | thumbnail_height=None, |
312 | 313 | thumbnail_method=None, |
313 | 314 | thumbnail_type=None, |
315 | thumbnail_length=None, | |
314 | 316 | ): |
315 | 317 | self.server_name = server_name |
316 | 318 | self.file_id = file_id |
320 | 322 | self.thumbnail_height = thumbnail_height |
321 | 323 | self.thumbnail_method = thumbnail_method |
322 | 324 | self.thumbnail_type = thumbnail_type |
325 | self.thumbnail_length = thumbnail_length | |
323 | 326 | |
324 | 327 | |
325 | 328 | def get_filename_from_headers(headers: Dict[bytes, List[bytes]]) -> Optional[str]: |
385 | 385 | """ |
386 | 386 | Check whether the URL should be downloaded as oEmbed content instead. |
387 | 387 | |
388 | Params: | |
388 | Args: | |
389 | 389 | url: The URL to check. |
390 | 390 | |
391 | 391 | Returns: |
402 | 402 | """ |
403 | 403 | Request content from an oEmbed endpoint. |
404 | 404 | |
405 | Params: | |
405 | Args: | |
406 | 406 | endpoint: The oEmbed API endpoint. |
407 | 407 | url: The URL to pass to the API. |
408 | 408 | |
691 | 691 | def decode_and_calc_og( |
692 | 692 | body: bytes, media_uri: str, request_encoding: Optional[str] = None |
693 | 693 | ) -> Dict[str, Optional[str]]: |
694 | """ | |
695 | Calculate metadata for an HTML document. | |
696 | ||
697 | This uses lxml to parse the HTML document into the OG response. If errors | |
698 | occur during processing of the document, an empty response is returned. | |
699 | ||
700 | Args: | |
701 | body: The HTML document, as bytes. | |
702 | media_uri: The URI used to download the body. | 
703 | request_encoding: The character encoding of the body, as a string. | |
704 | ||
705 | Returns: | |
706 | The OG response as a dictionary. | |
707 | """ | |
694 | 708 | # If there's no body, nothing useful is going to be found. |
695 | 709 | if not body: |
696 | 710 | return {} |
697 | 711 | |
698 | 712 | from lxml import etree |
699 | 713 | |
714 | # Create an HTML parser. If this fails, log and return no metadata. | |
700 | 715 | try: |
701 | 716 | parser = etree.HTMLParser(recover=True, encoding=request_encoding) |
702 | tree = etree.fromstring(body, parser) | |
703 | og = _calc_og(tree, media_uri) | |
717 | except LookupError: | |
718 | # blindly consider the encoding as utf-8. | |
719 | parser = etree.HTMLParser(recover=True, encoding="utf-8") | |
720 | except Exception as e: | |
721 | logger.warning("Unable to create HTML parser: %s", e) | 
722 | return {} | |
723 | ||
724 | def _attempt_calc_og(body_attempt: Union[bytes, str]) -> Dict[str, Optional[str]]: | |
725 | # Attempt to parse the body. If this fails, log and return no metadata. | |
726 | tree = etree.fromstring(body_attempt, parser) | |
727 | return _calc_og(tree, media_uri) | |
728 | ||
729 | # Attempt to parse the body. If this fails, log and return no metadata. | |
730 | try: | |
731 | return _attempt_calc_og(body) | |
704 | 732 | except UnicodeDecodeError: |
705 | 733 | # blindly try decoding the body as utf-8, which seems to fix |
706 | 734 | # the charset mismatches on https://google.com |
707 | parser = etree.HTMLParser(recover=True, encoding=request_encoding) | |
708 | tree = etree.fromstring(body.decode("utf-8", "ignore"), parser) | |
709 | og = _calc_og(tree, media_uri) | |
710 | ||
711 | return og | |
712 | ||
713 | ||
714 | def _calc_og(tree, media_uri: str) -> Dict[str, Optional[str]]: | |
735 | return _attempt_calc_og(body.decode("utf-8", "ignore")) | |
736 | ||
737 | ||
738 | def _calc_og(tree: "etree.Element", media_uri: str) -> Dict[str, Optional[str]]: | |
715 | 739 | # suck our tree into lxml and define our OG response. |
716 | 740 | |
717 | 741 | # if we see any image URLs in the OG response, then spider them |
15 | 15 | |
16 | 16 | |
17 | 17 | import logging |
18 | from typing import TYPE_CHECKING | |
18 | from typing import TYPE_CHECKING, Any, Dict, List, Optional | |
19 | 19 | |
20 | 20 | from twisted.web.http import Request |
21 | 21 | |
105 | 105 | return |
106 | 106 | |
107 | 107 | thumbnail_infos = await self.store.get_local_media_thumbnails(media_id) |
108 | ||
109 | if thumbnail_infos: | |
110 | thumbnail_info = self._select_thumbnail( | |
111 | width, height, method, m_type, thumbnail_infos | |
112 | ) | |
113 | ||
114 | file_info = FileInfo( | |
115 | server_name=None, | |
116 | file_id=media_id, | |
117 | url_cache=media_info["url_cache"], | |
118 | thumbnail=True, | |
119 | thumbnail_width=thumbnail_info["thumbnail_width"], | |
120 | thumbnail_height=thumbnail_info["thumbnail_height"], | |
121 | thumbnail_type=thumbnail_info["thumbnail_type"], | |
122 | thumbnail_method=thumbnail_info["thumbnail_method"], | |
123 | ) | |
124 | ||
125 | t_type = file_info.thumbnail_type | |
126 | t_length = thumbnail_info["thumbnail_length"] | |
127 | ||
128 | responder = await self.media_storage.fetch_media(file_info) | |
129 | await respond_with_responder(request, responder, t_type, t_length) | |
130 | else: | |
131 | logger.info("Couldn't find any generated thumbnails") | |
132 | respond_404(request) | |
108 | await self._select_and_respond_with_thumbnail( | |
109 | request, | |
110 | width, | |
111 | height, | |
112 | method, | |
113 | m_type, | |
114 | thumbnail_infos, | |
115 | media_id, | |
116 | url_cache=media_info["url_cache"], | |
117 | server_name=None, | |
118 | ) | |
133 | 119 | |
134 | 120 | async def _select_or_generate_local_thumbnail( |
135 | 121 | self, |
275 | 261 | thumbnail_infos = await self.store.get_remote_media_thumbnails( |
276 | 262 | server_name, media_id |
277 | 263 | ) |
278 | ||
264 | await self._select_and_respond_with_thumbnail( | |
265 | request, | |
266 | width, | |
267 | height, | |
268 | method, | |
269 | m_type, | |
270 | thumbnail_infos, | |
271 | media_info["filesystem_id"], | |
272 | url_cache=None, | |
273 | server_name=server_name, | |
274 | ) | |
275 | ||
276 | async def _select_and_respond_with_thumbnail( | |
277 | self, | |
278 | request: Request, | |
279 | desired_width: int, | |
280 | desired_height: int, | |
281 | desired_method: str, | |
282 | desired_type: str, | |
283 | thumbnail_infos: List[Dict[str, Any]], | |
284 | file_id: str, | |
285 | url_cache: Optional[str] = None, | |
286 | server_name: Optional[str] = None, | |
287 | ) -> None: | |
288 | """ | |
289 | Respond to a request with an appropriate thumbnail from the previously generated thumbnails. | |
290 | ||
291 | Args: | |
292 | request: The incoming request. | |
293 | desired_width: The desired width, the returned thumbnail may be larger than this. | |
294 | desired_height: The desired height, the returned thumbnail may be larger than this. | |
295 | desired_method: The desired method used to generate the thumbnail. | |
296 | desired_type: The desired content-type of the thumbnail. | |
297 | thumbnail_infos: A list of dictionaries of candidate thumbnails. | |
298 | file_id: The ID of the media that a thumbnail is being requested for. | |
299 | url_cache: The URL cache value. | |
300 | server_name: The server name, if this is a remote thumbnail. | |
301 | """ | |
279 | 302 | if thumbnail_infos: |
280 | thumbnail_info = self._select_thumbnail( | |
281 | width, height, method, m_type, thumbnail_infos | |
303 | file_info = self._select_thumbnail( | |
304 | desired_width, | |
305 | desired_height, | |
306 | desired_method, | |
307 | desired_type, | |
308 | thumbnail_infos, | |
309 | file_id, | |
310 | url_cache, | |
311 | server_name, | |
282 | 312 | ) |
283 | file_info = FileInfo( | |
313 | if not file_info: | |
314 | logger.info("Couldn't find a thumbnail matching the desired inputs") | |
315 | respond_404(request) | |
316 | return | |
317 | ||
318 | responder = await self.media_storage.fetch_media(file_info) | |
319 | await respond_with_responder( | |
320 | request, responder, file_info.thumbnail_type, file_info.thumbnail_length | |
321 | ) | |
322 | else: | |
323 | logger.info("Failed to find any generated thumbnails") | |
324 | respond_404(request) | |
325 | ||
326 | def _select_thumbnail( | |
327 | self, | |
328 | desired_width: int, | |
329 | desired_height: int, | |
330 | desired_method: str, | |
331 | desired_type: str, | |
332 | thumbnail_infos: List[Dict[str, Any]], | |
333 | file_id: str, | |
334 | url_cache: Optional[str], | |
335 | server_name: Optional[str], | |
336 | ) -> Optional[FileInfo]: | |
337 | """ | |
338 | Choose an appropriate thumbnail from the previously generated thumbnails. | |
339 | ||
340 | Args: | |
341 | desired_width: The desired width, the returned thumbnail may be larger than this. | |
342 | desired_height: The desired height, the returned thumbnail may be larger than this. | |
343 | desired_method: The desired method used to generate the thumbnail. | |
344 | desired_type: The desired content-type of the thumbnail. | |
345 | thumbnail_infos: A list of dictionaries of candidate thumbnails. | |
346 | file_id: The ID of the media that a thumbnail is being requested for. | |
347 | url_cache: The URL cache value. | |
348 | server_name: The server name, if this is a remote thumbnail. | |
349 | ||
350 | Returns: | |
351 | The thumbnail which best matches the desired parameters. | |
352 | """ | |
353 | desired_method = desired_method.lower() | |
354 | ||
355 | # The chosen thumbnail. | |
356 | thumbnail_info = None | |
357 | ||
358 | d_w = desired_width | |
359 | d_h = desired_height | |
360 | ||
361 | if desired_method == "crop": | |
362 | # Thumbnails that match equal or larger sizes of desired width/height. | |
363 | crop_info_list = [] | |
364 | # Other thumbnails. | |
365 | crop_info_list2 = [] | |
366 | for info in thumbnail_infos: | |
367 | # Skip thumbnails generated with different methods. | |
368 | if info["thumbnail_method"] != "crop": | |
369 | continue | |
370 | ||
371 | t_w = info["thumbnail_width"] | |
372 | t_h = info["thumbnail_height"] | |
373 | aspect_quality = abs(d_w * t_h - d_h * t_w) | |
374 | min_quality = 0 if d_w <= t_w and d_h <= t_h else 1 | |
375 | size_quality = abs((d_w - t_w) * (d_h - t_h)) | |
376 | type_quality = desired_type != info["thumbnail_type"] | |
377 | length_quality = info["thumbnail_length"] | |
378 | if t_w >= d_w or t_h >= d_h: | |
379 | crop_info_list.append( | |
380 | ( | |
381 | aspect_quality, | |
382 | min_quality, | |
383 | size_quality, | |
384 | type_quality, | |
385 | length_quality, | |
386 | info, | |
387 | ) | |
388 | ) | |
389 | else: | |
390 | crop_info_list2.append( | |
391 | ( | |
392 | aspect_quality, | |
393 | min_quality, | |
394 | size_quality, | |
395 | type_quality, | |
396 | length_quality, | |
397 | info, | |
398 | ) | |
399 | ) | |
400 | if crop_info_list: | |
401 | thumbnail_info = min(crop_info_list)[-1] | |
402 | elif crop_info_list2: | |
403 | thumbnail_info = min(crop_info_list2)[-1] | |
404 | elif desired_method == "scale": | |
405 | # Thumbnails that match equal or larger sizes of desired width/height. | |
406 | info_list = [] | |
407 | # Other thumbnails. | |
408 | info_list2 = [] | |
409 | ||
410 | for info in thumbnail_infos: | |
411 | # Skip thumbnails generated with different methods. | |
412 | if info["thumbnail_method"] != "scale": | |
413 | continue | |
414 | ||
415 | t_w = info["thumbnail_width"] | |
416 | t_h = info["thumbnail_height"] | |
417 | size_quality = abs((d_w - t_w) * (d_h - t_h)) | |
418 | type_quality = desired_type != info["thumbnail_type"] | |
419 | length_quality = info["thumbnail_length"] | |
420 | if t_w >= d_w or t_h >= d_h: | |
421 | info_list.append((size_quality, type_quality, length_quality, info)) | |
422 | else: | |
423 | info_list2.append( | |
424 | (size_quality, type_quality, length_quality, info) | |
425 | ) | |
426 | if info_list: | |
427 | thumbnail_info = min(info_list)[-1] | |
428 | elif info_list2: | |
429 | thumbnail_info = min(info_list2)[-1] | |
430 | ||
431 | if thumbnail_info: | |
432 | return FileInfo( | |
433 | file_id=file_id, | |
434 | url_cache=url_cache, | |
284 | 435 | server_name=server_name, |
285 | file_id=media_info["filesystem_id"], | |
286 | 436 | thumbnail=True, |
287 | 437 | thumbnail_width=thumbnail_info["thumbnail_width"], |
288 | 438 | thumbnail_height=thumbnail_info["thumbnail_height"], |
289 | 439 | thumbnail_type=thumbnail_info["thumbnail_type"], |
290 | 440 | thumbnail_method=thumbnail_info["thumbnail_method"], |
441 | thumbnail_length=thumbnail_info["thumbnail_length"], | |
291 | 442 | ) |
292 | 443 | |
293 | t_type = file_info.thumbnail_type | |
294 | t_length = thumbnail_info["thumbnail_length"] | |
295 | ||
296 | responder = await self.media_storage.fetch_media(file_info) | |
297 | await respond_with_responder(request, responder, t_type, t_length) | |
298 | else: | |
299 | logger.info("Failed to find any generated thumbnails") | |
300 | respond_404(request) | |
301 | ||
302 | def _select_thumbnail( | |
303 | self, | |
304 | desired_width: int, | |
305 | desired_height: int, | |
306 | desired_method: str, | |
307 | desired_type: str, | |
308 | thumbnail_infos, | |
309 | ) -> dict: | |
310 | d_w = desired_width | |
311 | d_h = desired_height | |
312 | ||
313 | if desired_method.lower() == "crop": | |
314 | crop_info_list = [] | |
315 | crop_info_list2 = [] | |
316 | for info in thumbnail_infos: | |
317 | t_w = info["thumbnail_width"] | |
318 | t_h = info["thumbnail_height"] | |
319 | t_method = info["thumbnail_method"] | |
320 | if t_method == "crop": | |
321 | aspect_quality = abs(d_w * t_h - d_h * t_w) | |
322 | min_quality = 0 if d_w <= t_w and d_h <= t_h else 1 | |
323 | size_quality = abs((d_w - t_w) * (d_h - t_h)) | |
324 | type_quality = desired_type != info["thumbnail_type"] | |
325 | length_quality = info["thumbnail_length"] | |
326 | if t_w >= d_w or t_h >= d_h: | |
327 | crop_info_list.append( | |
328 | ( | |
329 | aspect_quality, | |
330 | min_quality, | |
331 | size_quality, | |
332 | type_quality, | |
333 | length_quality, | |
334 | info, | |
335 | ) | |
336 | ) | |
337 | else: | |
338 | crop_info_list2.append( | |
339 | ( | |
340 | aspect_quality, | |
341 | min_quality, | |
342 | size_quality, | |
343 | type_quality, | |
344 | length_quality, | |
345 | info, | |
346 | ) | |
347 | ) | |
348 | if crop_info_list: | |
349 | return min(crop_info_list)[-1] | |
350 | else: | |
351 | return min(crop_info_list2)[-1] | |
352 | else: | |
353 | info_list = [] | |
354 | info_list2 = [] | |
355 | for info in thumbnail_infos: | |
356 | t_w = info["thumbnail_width"] | |
357 | t_h = info["thumbnail_height"] | |
358 | t_method = info["thumbnail_method"] | |
359 | size_quality = abs((d_w - t_w) * (d_h - t_h)) | |
360 | type_quality = desired_type != info["thumbnail_type"] | |
361 | length_quality = info["thumbnail_length"] | |
362 | if t_method == "scale" and (t_w >= d_w or t_h >= d_h): | |
363 | info_list.append((size_quality, type_quality, length_quality, info)) | |
364 | elif t_method == "scale": | |
365 | info_list2.append( | |
366 | (size_quality, type_quality, length_quality, info) | |
367 | ) | |
368 | if info_list: | |
369 | return min(info_list)[-1] | |
370 | else: | |
371 | return min(info_list2)[-1] | |
444 | # No matching thumbnail was found. | |
445 | return None |
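The refactored `_select_thumbnail` above ranks candidates by comparing quality tuples and taking the minimum, preferring thumbnails at least as large as the request. A condensed, hypothetical sketch of the `scale` branch (type matching omitted for brevity; dict keys as in the code above):

```python
from typing import Any, Dict, List, Optional


def select_scale_thumbnail(
    d_w: int, d_h: int, infos: List[Dict[str, Any]]
) -> Optional[Dict[str, Any]]:
    """Pick the best "scale" thumbnail for a d_w x d_h request."""
    at_least_as_large, smaller = [], []
    for info in infos:
        if info["thumbnail_method"] != "scale":
            continue  # skip thumbnails generated with other methods
        t_w, t_h = info["thumbnail_width"], info["thumbnail_height"]
        quality = (
            abs((d_w - t_w) * (d_h - t_h)),  # closeness to requested size
            info["thumbnail_length"],  # smaller file wins ties
        )
        bucket = at_least_as_large if (t_w >= d_w or t_h >= d_h) else smaller
        bucket.append((quality, info))
    # Prefer candidates that are at least as large as the request.
    pool = at_least_as_large or smaller
    if not pool:
        return None
    return min(pool, key=lambda item: item[0])[1]
```

Using a `key=` on `min()` avoids ever comparing the candidate dicts themselves when two quality tuples happen to tie.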
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2020 Quentin Gliech | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | import logging | |
15 | ||
16 | from twisted.web.resource import Resource | |
17 | ||
18 | from synapse.rest.oidc.callback_resource import OIDCCallbackResource | |
19 | ||
20 | logger = logging.getLogger(__name__) | |
21 | ||
22 | ||
23 | class OIDCResource(Resource): | |
24 | def __init__(self, hs): | |
25 | Resource.__init__(self) | |
26 | self.putChild(b"callback", OIDCCallbackResource(hs)) |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2020 Quentin Gliech | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | import logging | |
15 | ||
16 | from synapse.http.server import DirectServeHtmlResource | |
17 | ||
18 | logger = logging.getLogger(__name__) | |
19 | ||
20 | ||
21 | class OIDCCallbackResource(DirectServeHtmlResource): | |
22 | isLeaf = 1 | |
23 | ||
24 | def __init__(self, hs): | |
25 | super().__init__() | |
26 | self._oidc_handler = hs.get_oidc_handler() | |
27 | ||
28 | async def _async_render_GET(self, request): | |
29 | await self._oidc_handler.handle_oidc_callback(request) |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | import logging | |
15 | ||
16 | from twisted.web.resource import Resource | |
17 | ||
18 | from synapse.rest.saml2.metadata_resource import SAML2MetadataResource | |
19 | from synapse.rest.saml2.response_resource import SAML2ResponseResource | |
20 | ||
21 | logger = logging.getLogger(__name__) | |
22 | ||
23 | ||
24 | class SAML2Resource(Resource): | |
25 | def __init__(self, hs): | |
26 | Resource.__init__(self) | |
27 | self.putChild(b"metadata.xml", SAML2MetadataResource(hs)) | |
28 | self.putChild(b"authn_response", SAML2ResponseResource(hs)) |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | ||
16 | import saml2.metadata | |
17 | ||
18 | from twisted.web.resource import Resource | |
19 | ||
20 | ||
21 | class SAML2MetadataResource(Resource): | |
22 | """A Twisted web resource which renders the SAML metadata""" | |
23 | ||
24 | isLeaf = 1 | |
25 | ||
26 | def __init__(self, hs): | |
27 | Resource.__init__(self) | |
28 | self.sp_config = hs.config.saml2_sp_config | |
29 | ||
30 | def render_GET(self, request): | |
31 | metadata_xml = saml2.metadata.create_metadata_string( | |
32 | configfile=None, config=self.sp_config | |
33 | ) | |
34 | request.setHeader(b"Content-Type", b"text/xml; charset=utf-8") | |
35 | return metadata_xml |
0 | # -*- coding: utf-8 -*- | |
1 | # | |
2 | # Copyright 2018 New Vector Ltd | |
3 | # | |
4 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
5 | # you may not use this file except in compliance with the License. | |
6 | # You may obtain a copy of the License at | |
7 | # | |
8 | # http://www.apache.org/licenses/LICENSE-2.0 | |
9 | # | |
10 | # Unless required by applicable law or agreed to in writing, software | |
11 | # distributed under the License is distributed on an "AS IS" BASIS, | |
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
13 | # See the License for the specific language governing permissions and | |
14 | # limitations under the License. | |
15 | ||
16 | from synapse.http.server import DirectServeHtmlResource | |
17 | ||
18 | ||
19 | class SAML2ResponseResource(DirectServeHtmlResource): | |
20 | """A Twisted web resource which handles the SAML response""" | |
21 | ||
22 | isLeaf = 1 | |
23 | ||
24 | def __init__(self, hs): | |
25 | super().__init__() | |
26 | self._saml_handler = hs.get_saml_handler() | |
27 | ||
28 | async def _async_render_GET(self, request): | |
29 | # We're not expecting any GET request on that resource if everything goes right, | |
30 | # but some IdPs sometimes end up responding with a 302 redirect on this endpoint. | |
31 | # In this case, just tell the user that something went wrong and they should | |
32 | # try to authenticate again. | |
33 | self._saml_handler._render_error( | |
34 | request, "unexpected_get", "Unexpected GET request on /saml2/authn_response" | |
35 | ) | |
36 | ||
37 | async def _async_render_POST(self, request): | |
38 | await self._saml_handler.handle_saml_response(request) |
0 | 0 | # -*- coding: utf-8 -*- |
1 | # Copyright 2020 The Matrix.org Foundation C.I.C. | |
1 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
2 | 2 | # |
3 | 3 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 4 | # you may not use this file except in compliance with the License. |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | ||
15 | from typing import TYPE_CHECKING, Mapping | |
16 | ||
17 | from twisted.web.resource import Resource | |
18 | ||
19 | from synapse.rest.synapse.client.new_user_consent import NewUserConsentResource | |
20 | from synapse.rest.synapse.client.pick_idp import PickIdpResource | |
21 | from synapse.rest.synapse.client.pick_username import pick_username_resource | |
22 | from synapse.rest.synapse.client.sso_register import SsoRegisterResource | |
23 | ||
24 | if TYPE_CHECKING: | |
25 | from synapse.server import HomeServer | |
26 | ||
27 | ||
28 | def build_synapse_client_resource_tree(hs: "HomeServer") -> Mapping[str, Resource]: | |
29 | """Builds a resource tree to include synapse-specific client resources | |
30 | ||
31 | These are resources which should be loaded on all workers that expose a C-S API: | |
32 | i.e. the main process, and any generic workers so configured. | |
33 | ||
34 | Returns: | |
35 | map from path to Resource. | |
36 | """ | |
37 | resources = { | |
38 | # SSO bits. These are always loaded, whether or not SSO login is actually | |
39 | # enabled (they just won't work very well if it's not) | |
40 | "/_synapse/client/pick_idp": PickIdpResource(hs), | |
41 | "/_synapse/client/pick_username": pick_username_resource(hs), | |
42 | "/_synapse/client/new_user_consent": NewUserConsentResource(hs), | |
43 | "/_synapse/client/sso_register": SsoRegisterResource(hs), | |
44 | } | |
45 | ||
46 | # provider-specific SSO bits. Only load these if they are enabled, since they | |
47 | # rely on optional dependencies. | |
48 | if hs.config.oidc_enabled: | |
49 | from synapse.rest.synapse.client.oidc import OIDCResource | |
50 | ||
51 | resources["/_synapse/client/oidc"] = OIDCResource(hs) | |
52 | ||
53 | if hs.config.saml2_enabled: | |
54 | from synapse.rest.synapse.client.saml2 import SAML2Resource | |
55 | ||
56 | res = SAML2Resource(hs) | |
57 | resources["/_synapse/client/saml2"] = res | |
58 | ||
59 | # This is also mounted under '/_matrix' for backwards-compatibility. | |
60 | resources["/_matrix/saml2"] = res | |
61 | ||
62 | return resources | |
63 | ||
64 | ||
65 | __all__ = ["build_synapse_client_resource_tree"] |
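The mapping returned by `build_synapse_client_resource_tree` is flat; the caller is responsible for mounting each entry at its path. A minimal sketch of that mounting step, using plain dicts in place of Twisted resources (the `mount_resource_tree` helper here is hypothetical, not Synapse's actual `create_resource_tree`):

```python
# Hypothetical sketch: turn a flat {path: resource} map, like the one
# returned by build_synapse_client_resource_tree, into a nested tree of
# dicts keyed by path segment (standing in for Resource.putChild calls).
def mount_resource_tree(flat):
    root = {}
    for path, resource in flat.items():
        node = root
        segments = [s for s in path.split("/") if s]
        # Walk/create intermediate nodes, then attach the leaf resource.
        for segment in segments[:-1]:
            node = node.setdefault(segment, {})
        node[segments[-1]] = resource
    return root

tree = mount_resource_tree({
    "/_synapse/client/pick_idp": "PickIdpResource",
    "/_synapse/client/sso_register": "SsoRegisterResource",
})
```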
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | import logging | |
15 | from typing import TYPE_CHECKING | |
16 | ||
17 | from twisted.web.http import Request | |
18 | ||
19 | from synapse.api.errors import SynapseError | |
20 | from synapse.handlers.sso import get_username_mapping_session_cookie_from_request | |
21 | from synapse.http.server import DirectServeHtmlResource, respond_with_html | |
22 | from synapse.http.servlet import parse_string | |
23 | from synapse.types import UserID | |
24 | from synapse.util.templates import build_jinja_env | |
25 | ||
26 | if TYPE_CHECKING: | |
27 | from synapse.server import HomeServer | |
28 | ||
29 | logger = logging.getLogger(__name__) | |
30 | ||
31 | ||
32 | class NewUserConsentResource(DirectServeHtmlResource): | |
33 | """A resource which collects consent to the server's terms from a new user | |
34 | ||
35 | This resource gets mounted at /_synapse/client/new_user_consent, and is shown | |
36 | when we are automatically creating a new user due to an SSO login. | |
37 | ||
38 | It shows a template which prompts the user to go and read the terms and | |
39 | conditions, and tick a checkbox to confirm that they have done so. | |
40 | """ | |
41 | ||
42 | def __init__(self, hs: "HomeServer"): | |
43 | super().__init__() | |
44 | self._sso_handler = hs.get_sso_handler() | |
45 | self._server_name = hs.hostname | |
46 | self._consent_version = hs.config.consent.user_consent_version | |
47 | ||
48 | def template_search_dirs(): | |
49 | if hs.config.sso.sso_template_dir: | |
50 | yield hs.config.sso.sso_template_dir | |
51 | yield hs.config.sso.default_template_dir | |
52 | ||
53 | self._jinja_env = build_jinja_env(template_search_dirs(), hs.config) | |
54 | ||
55 | async def _async_render_GET(self, request: Request) -> None: | |
56 | try: | |
57 | session_id = get_username_mapping_session_cookie_from_request(request) | |
58 | session = self._sso_handler.get_mapping_session(session_id) | |
59 | except SynapseError as e: | |
60 | logger.warning("Error fetching session: %s", e) | |
61 | self._sso_handler.render_error(request, "bad_session", e.msg, code=e.code) | |
62 | return | |
63 | ||
64 | user_id = UserID(session.chosen_localpart, self._server_name) | |
65 | user_profile = { | |
66 | "display_name": session.display_name, | |
67 | } | |
68 | ||
69 | template_params = { | |
70 | "user_id": user_id.to_string(), | |
71 | "user_profile": user_profile, | |
72 | "consent_version": self._consent_version, | |
73 | "terms_url": "/_matrix/consent?v=%s" % (self._consent_version,), | |
74 | } | |
75 | ||
76 | template = self._jinja_env.get_template("sso_new_user_consent.html") | |
77 | html = template.render(template_params) | |
78 | respond_with_html(request, 200, html) | |
79 | ||
80 | async def _async_render_POST(self, request: Request): | |
81 | try: | |
82 | session_id = get_username_mapping_session_cookie_from_request(request) | |
83 | except SynapseError as e: | |
84 | logger.warning("Error fetching session cookie: %s", e) | |
85 | self._sso_handler.render_error(request, "bad_session", e.msg, code=e.code) | |
86 | return | |
87 | ||
88 | try: | |
89 | accepted_version = parse_string(request, "accepted_version", required=True) | |
90 | except SynapseError as e: | |
91 | self._sso_handler.render_error(request, "bad_param", e.msg, code=e.code) | |
92 | return | |
93 | ||
94 | await self._sso_handler.handle_terms_accepted( | |
95 | request, session_id, accepted_version | |
96 | ) |
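`NewUserConsentResource` (and the username picker below) build their Jinja environment from `template_search_dirs()`, which yields the admin's custom `sso_template_dir` before the default directory. A sketch of the resulting first-match precedence (the `resolve_template` helper is hypothetical; the real lookup goes through a Jinja2 loader built by `build_jinja_env`):

```python
import os
import tempfile

# Hypothetical first-match resolver: directories are searched in the order
# template_search_dirs() yields them, so a custom SSO template dir shadows
# the default one.
def resolve_template(name, search_dirs):
    for d in search_dirs:
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(name)

custom_dir = tempfile.mkdtemp()
default_dir = tempfile.mkdtemp()

# Only the default dir ships the template to begin with...
with open(os.path.join(default_dir, "sso_new_user_consent.html"), "w") as f:
    f.write("default")
before_override = resolve_template("sso_new_user_consent.html", [custom_dir, default_dir])

# ...until the admin drops an override into the custom dir.
with open(os.path.join(custom_dir, "sso_new_user_consent.html"), "w") as f:
    f.write("custom")
after_override = resolve_template("sso_new_user_consent.html", [custom_dir, default_dir])
```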
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2020 Quentin Gliech | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import logging | |
16 | ||
17 | from twisted.web.resource import Resource | |
18 | ||
19 | from synapse.rest.synapse.client.oidc.callback_resource import OIDCCallbackResource | |
20 | ||
21 | logger = logging.getLogger(__name__) | |
22 | ||
23 | ||
24 | class OIDCResource(Resource): | |
25 | def __init__(self, hs): | |
26 | Resource.__init__(self) | |
27 | self.putChild(b"callback", OIDCCallbackResource(hs)) | |
28 | ||
29 | ||
30 | __all__ = ["OIDCResource"] |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2020 Quentin Gliech | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | import logging | |
15 | ||
16 | from synapse.http.server import DirectServeHtmlResource | |
17 | ||
18 | logger = logging.getLogger(__name__) | |
19 | ||
20 | ||
21 | class OIDCCallbackResource(DirectServeHtmlResource): | |
22 | isLeaf = 1 | |
23 | ||
24 | def __init__(self, hs): | |
25 | super().__init__() | |
26 | self._oidc_handler = hs.get_oidc_handler() | |
27 | ||
28 | async def _async_render_GET(self, request): | |
29 | await self._oidc_handler.handle_oidc_callback(request) |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | from typing import TYPE_CHECKING | |
15 | 14 | |
16 | import pkg_resources | |
15 | import logging | |
16 | from typing import TYPE_CHECKING, List | |
17 | 17 | |
18 | 18 | from twisted.web.http import Request |
19 | 19 | from twisted.web.resource import Resource |
20 | from twisted.web.static import File | |
21 | 20 | |
22 | 21 | from synapse.api.errors import SynapseError |
23 | from synapse.handlers.sso import USERNAME_MAPPING_SESSION_COOKIE_NAME | |
24 | from synapse.http.server import DirectServeHtmlResource, DirectServeJsonResource | |
25 | from synapse.http.servlet import parse_string | |
22 | from synapse.handlers.sso import get_username_mapping_session_cookie_from_request | |
23 | from synapse.http.server import ( | |
24 | DirectServeHtmlResource, | |
25 | DirectServeJsonResource, | |
26 | respond_with_html, | |
27 | ) | |
28 | from synapse.http.servlet import parse_boolean, parse_string | |
26 | 29 | from synapse.http.site import SynapseRequest |
30 | from synapse.util.templates import build_jinja_env | |
27 | 31 | |
28 | 32 | if TYPE_CHECKING: |
29 | 33 | from synapse.server import HomeServer |
34 | ||
35 | logger = logging.getLogger(__name__) | |
30 | 36 | |
31 | 37 | |
32 | 38 | def pick_username_resource(hs: "HomeServer") -> Resource: |
33 | 39 | """Factory method to generate the username picker resource. |
34 | 40 | |
35 | This resource gets mounted under /_synapse/client/pick_username. The top-level | |
36 | resource is just a File resource which serves up the static files in the resources | |
37 | "res" directory, but it has a couple of children: | |
41 | This resource gets mounted under /_synapse/client/pick_username and has two | |
42 | children: | |
38 | 43 | |
39 | * "submit", which does the mechanics of registering the new user, and redirects the | |
40 | browser back to the client URL | |
41 | ||
42 | * "check": checks if a userid is free. | |
44 | * "account_details": renders the form and handles the POSTed response | |
45 | * "check": a JSON endpoint which checks if a userid is free. | |
43 | 46 | """ |
44 | 47 | |
45 | # XXX should we make this path customisable so that admins can restyle it? | |
46 | base_path = pkg_resources.resource_filename("synapse", "res/username_picker") | |
47 | ||
48 | res = File(base_path) | |
49 | res.putChild(b"submit", SubmitResource(hs)) | |
48 | res = Resource() | |
49 | res.putChild(b"account_details", AccountDetailsResource(hs)) | |
50 | 50 | res.putChild(b"check", AvailabilityCheckResource(hs)) |
51 | 51 | |
52 | 52 | return res |
60 | 60 | async def _async_render_GET(self, request: Request): |
61 | 61 | localpart = parse_string(request, "username", required=True) |
62 | 62 | |
63 | session_id = request.getCookie(USERNAME_MAPPING_SESSION_COOKIE_NAME) | |
64 | if not session_id: | |
65 | raise SynapseError(code=400, msg="missing session_id") | |
63 | session_id = get_username_mapping_session_cookie_from_request(request) | |
66 | 64 | |
67 | 65 | is_available = await self._sso_handler.check_username_availability( |
68 | localpart, session_id.decode("ascii", errors="replace") | |
66 | localpart, session_id | |
69 | 67 | ) |
70 | 68 | return 200, {"available": is_available} |
71 | 69 | |
72 | 70 | |
73 | class SubmitResource(DirectServeHtmlResource): | |
71 | class AccountDetailsResource(DirectServeHtmlResource): | |
74 | 72 | def __init__(self, hs: "HomeServer"): |
75 | 73 | super().__init__() |
76 | 74 | self._sso_handler = hs.get_sso_handler() |
77 | 75 | |
76 | def template_search_dirs(): | |
77 | if hs.config.sso.sso_template_dir: | |
78 | yield hs.config.sso.sso_template_dir | |
79 | yield hs.config.sso.default_template_dir | |
80 | ||
81 | self._jinja_env = build_jinja_env(template_search_dirs(), hs.config) | |
82 | ||
83 | async def _async_render_GET(self, request: Request) -> None: | |
84 | try: | |
85 | session_id = get_username_mapping_session_cookie_from_request(request) | |
86 | session = self._sso_handler.get_mapping_session(session_id) | |
87 | except SynapseError as e: | |
88 | logger.warning("Error fetching session: %s", e) | |
89 | self._sso_handler.render_error(request, "bad_session", e.msg, code=e.code) | |
90 | return | |
91 | ||
92 | idp_id = session.auth_provider_id | |
93 | template_params = { | |
94 | "idp": self._sso_handler.get_identity_providers()[idp_id], | |
95 | "user_attributes": { | |
96 | "display_name": session.display_name, | |
97 | "emails": session.emails, | |
98 | }, | |
99 | } | |
100 | ||
101 | template = self._jinja_env.get_template("sso_auth_account_details.html") | |
102 | html = template.render(template_params) | |
103 | respond_with_html(request, 200, html) | |
104 | ||
78 | 105 | async def _async_render_POST(self, request: SynapseRequest): |
79 | localpart = parse_string(request, "username", required=True) | |
106 | try: | |
107 | session_id = get_username_mapping_session_cookie_from_request(request) | |
108 | except SynapseError as e: | |
109 | logger.warning("Error fetching session cookie: %s", e) | |
110 | self._sso_handler.render_error(request, "bad_session", e.msg, code=e.code) | |
111 | return | |
80 | 112 | |
81 | session_id = request.getCookie(USERNAME_MAPPING_SESSION_COOKIE_NAME) | |
82 | if not session_id: | |
83 | raise SynapseError(code=400, msg="missing session_id") | |
113 | try: | |
114 | localpart = parse_string(request, "username", required=True) | |
115 | use_display_name = parse_boolean(request, "use_display_name", default=False) | |
116 | ||
117 | try: | |
118 | emails_to_use = [ | |
119 | val.decode("utf-8") for val in request.args.get(b"use_email", []) | |
120 | ] # type: List[str] | |
121 | except ValueError: | |
122 | raise SynapseError(400, "Query parameter use_email must be utf-8") | |
123 | except SynapseError as e: | |
124 | logger.warning("[session %s] bad param: %s", session_id, e) | |
125 | self._sso_handler.render_error(request, "bad_param", e.msg, code=e.code) | |
126 | return | |
84 | 127 | |
85 | 128 | await self._sso_handler.handle_submit_username_request( |
86 | request, localpart, session_id.decode("ascii", errors="replace") | |
129 | request, session_id, localpart, use_display_name, emails_to_use | |
87 | 130 | ) |
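The `use_email` handling above reads a multi-valued form field: Twisted's `request.args` maps each key to a *list* of values, so repeated `use_email` fields all survive. The stdlib `parse_qs` produces the same shape for a query string, which makes the list semantics easy to see:

```python
from urllib.parse import parse_qs

# parse_qs, like Twisted's request.args, maps each key to a list of values;
# repeated use_email fields are collected rather than overwritten.
args = parse_qs("username=alice&use_display_name=true&use_email=a%40x.org&use_email=b%40x.org")
emails_to_use = args.get("use_email", [])
use_display_name = args.get("use_display_name", ["false"])[0] == "true"
```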
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import logging | |
16 | ||
17 | from twisted.web.resource import Resource | |
18 | ||
19 | from synapse.rest.synapse.client.saml2.metadata_resource import SAML2MetadataResource | |
20 | from synapse.rest.synapse.client.saml2.response_resource import SAML2ResponseResource | |
21 | ||
22 | logger = logging.getLogger(__name__) | |
23 | ||
24 | ||
25 | class SAML2Resource(Resource): | |
26 | def __init__(self, hs): | |
27 | Resource.__init__(self) | |
28 | self.putChild(b"metadata.xml", SAML2MetadataResource(hs)) | |
29 | self.putChild(b"authn_response", SAML2ResponseResource(hs)) | |
30 | ||
31 | ||
32 | __all__ = ["SAML2Resource"] |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | ||
16 | import saml2.metadata | |
17 | ||
18 | from twisted.web.resource import Resource | |
19 | ||
20 | ||
21 | class SAML2MetadataResource(Resource): | |
22 | """A Twisted web resource which renders the SAML metadata""" | |
23 | ||
24 | isLeaf = 1 | |
25 | ||
26 | def __init__(self, hs): | |
27 | Resource.__init__(self) | |
28 | self.sp_config = hs.config.saml2_sp_config | |
29 | ||
30 | def render_GET(self, request): | |
31 | metadata_xml = saml2.metadata.create_metadata_string( | |
32 | configfile=None, config=self.sp_config | |
33 | ) | |
34 | request.setHeader(b"Content-Type", b"text/xml; charset=utf-8") | |
35 | return metadata_xml |
0 | # -*- coding: utf-8 -*- | |
1 | # | |
2 | # Copyright 2018 New Vector Ltd | |
3 | # | |
4 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
5 | # you may not use this file except in compliance with the License. | |
6 | # You may obtain a copy of the License at | |
7 | # | |
8 | # http://www.apache.org/licenses/LICENSE-2.0 | |
9 | # | |
10 | # Unless required by applicable law or agreed to in writing, software | |
11 | # distributed under the License is distributed on an "AS IS" BASIS, | |
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
13 | # See the License for the specific language governing permissions and | |
14 | # limitations under the License. | |
15 | ||
16 | from synapse.http.server import DirectServeHtmlResource | |
17 | ||
18 | ||
19 | class SAML2ResponseResource(DirectServeHtmlResource): | |
20 | """A Twisted web resource which handles the SAML response""" | |
21 | ||
22 | isLeaf = 1 | |
23 | ||
24 | def __init__(self, hs): | |
25 | super().__init__() | |
26 | self._saml_handler = hs.get_saml_handler() | |
27 | ||
28 | async def _async_render_GET(self, request): | |
29 | # We don't expect any GET requests to this resource if everything goes right, | |
30 | # but some IdPs end up responding with a 302 redirect to this endpoint. | |
31 | # In that case, just tell the user that something went wrong and that they | |
32 | # should try to authenticate again. | |
33 | self._saml_handler._render_error( | |
34 | request, "unexpected_get", "Unexpected GET request on /saml2/authn_response" | |
35 | ) | |
36 | ||
37 | async def _async_render_POST(self, request): | |
38 | await self._saml_handler.handle_saml_response(request) |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import logging | |
16 | from typing import TYPE_CHECKING | |
17 | ||
18 | from twisted.web.http import Request | |
19 | ||
20 | from synapse.api.errors import SynapseError | |
21 | from synapse.handlers.sso import get_username_mapping_session_cookie_from_request | |
22 | from synapse.http.server import DirectServeHtmlResource | |
23 | ||
24 | if TYPE_CHECKING: | |
25 | from synapse.server import HomeServer | |
26 | ||
27 | logger = logging.getLogger(__name__) | |
28 | ||
29 | ||
30 | class SsoRegisterResource(DirectServeHtmlResource): | |
31 | """A resource which completes SSO registration | |
32 | ||
33 | This resource gets mounted at /_synapse/client/sso_register, and is shown | |
34 | after we collect a username and/or consent for a new SSO user. It (finally) | |
35 | registers the user, and confirms the redirect back to the client. | |
36 | """ | |
37 | ||
38 | def __init__(self, hs: "HomeServer"): | |
39 | super().__init__() | |
40 | self._sso_handler = hs.get_sso_handler() | |
41 | ||
42 | async def _async_render_GET(self, request: Request) -> None: | |
43 | try: | |
44 | session_id = get_username_mapping_session_cookie_from_request(request) | |
45 | except SynapseError as e: | |
46 | logger.warning("Error fetching session cookie: %s", e) | |
47 | self._sso_handler.render_error(request, "bad_session", e.msg, code=e.code) | |
48 | return | |
49 | await self._sso_handler.register_sso_user(request, session_id) |
33 | 33 | self._config = hs.config |
34 | 34 | |
35 | 35 | def get_well_known(self): |
36 | # if we don't have a public_baseurl, we can't help much here. | |
37 | if self._config.public_baseurl is None: | |
38 | return None | |
39 | ||
36 | 40 | result = {"m.homeserver": {"base_url": self._config.public_baseurl}} |
37 | 41 | |
38 | 42 | if self._config.default_identity_server: |
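The new guard means `get_well_known` returns `None` when no `public_baseurl` is configured, rather than advertising a guessed URL (the behaviour reverted from v1.26.0). A simplified, config-free stand-in for the method, using the field names from the Matrix `.well-known/matrix/client` schema:

```python
# Simplified stand-in for the well-known builder above: with no
# public_baseurl there is nothing trustworthy to advertise, so the
# .well-known response is suppressed entirely.
def get_well_known(public_baseurl, default_identity_server=None):
    if public_baseurl is None:
        return None
    result = {"m.homeserver": {"base_url": public_baseurl}}
    if default_identity_server:
        result["m.identity_server"] = {"base_url": default_identity_server}
    return result
```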
102 | 102 | from synapse.push.action_generator import ActionGenerator |
103 | 103 | from synapse.push.pusherpool import PusherPool |
104 | 104 | from synapse.replication.tcp.client import ReplicationDataHandler |
105 | from synapse.replication.tcp.external_cache import ExternalCache | |
105 | 106 | from synapse.replication.tcp.handler import ReplicationCommandHandler |
106 | 107 | from synapse.replication.tcp.resource import ReplicationStreamer |
107 | 108 | from synapse.replication.tcp.streams import STREAMS_MAP, Stream |
127 | 128 | logger = logging.getLogger(__name__) |
128 | 129 | |
129 | 130 | if TYPE_CHECKING: |
131 | from txredisapi import RedisProtocol | |
132 | ||
130 | 133 | from synapse.handlers.oidc_handler import OidcHandler |
131 | 134 | from synapse.handlers.saml_handler import SamlHandler |
132 | 135 | |
715 | 718 | def get_account_data_handler(self) -> AccountDataHandler: |
716 | 719 | return AccountDataHandler(self) |
717 | 720 | |
721 | @cache_in_self | |
722 | def get_external_cache(self) -> ExternalCache: | |
723 | return ExternalCache(self) | |
724 | ||
725 | @cache_in_self | |
726 | def get_outbound_redis_connection(self) -> Optional["RedisProtocol"]: | |
727 | if not self.config.redis.redis_enabled: | |
728 | return None | |
729 | ||
730 | # We only want to import redis module if we're using it, as we have | |
731 | # `txredisapi` as an optional dependency. | |
732 | from synapse.replication.tcp.redis import lazyConnection | |
733 | ||
734 | logger.info( | |
735 | "Connecting to redis (host=%r port=%r) for external cache", | |
736 | self.config.redis_host, | |
737 | self.config.redis_port, | |
738 | ) | |
739 | ||
740 | return lazyConnection( | |
741 | hs=self, | |
742 | host=self.config.redis_host, | |
743 | port=self.config.redis_port, | |
744 | password=self.config.redis.redis_password, | |
745 | reconnect=True, | |
746 | ) | |
747 | ||
718 | 748 | async def remove_pusher(self, app_id: str, push_key: str, user_id: str): |
719 | 749 | return await self.get_pusherpool().remove_pusher(app_id, push_key, user_id) |
720 | 750 |
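Both new getters are wrapped in `@cache_in_self`, so the HomeServer builds each dependency at most once — important for `get_outbound_redis_connection`, where repeated calls must not open extra Redis connections. A hedged sketch of that memoisation pattern (not Synapse's exact decorator, which has more bookkeeping):

```python
import functools

# Hypothetical sketch of the @cache_in_self pattern: the first call builds
# the dependency and stores it on the instance; later calls return the
# cached object.
def cache_in_self(builder):
    attr = "_cached_" + builder.__name__

    @functools.wraps(builder)
    def wrapper(self):
        if not hasattr(self, attr):
            setattr(self, attr, builder(self))
        return getattr(self, attr)

    return wrapper

class FakeHomeServer:
    built = 0

    @cache_in_self
    def get_external_cache(self):
        # Count constructions to show the builder runs only once.
        FakeHomeServer.built += 1
        return object()

hs = FakeHomeServer()
first = hs.get_external_cache()
second = hs.get_external_cache()
```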
309 | 309 | state_group_before_event = None |
310 | 310 | state_group_before_event_prev_group = None |
311 | 311 | deltas_to_state_group_before_event = None |
312 | entry = None | |
312 | 313 | |
313 | 314 | else: |
314 | 315 | # otherwise, we'll need to resolve the state across the prev_events. |
339 | 340 | current_state_ids=state_ids_before_event, |
340 | 341 | ) |
341 | 342 | |
342 | # XXX: can we update the state cache entry for the new state group? or | |
343 | # could we set a flag on resolve_state_groups_for_events to tell it to | |
344 | # always make a state group? | |
343 | # Assign the new state group to the cached state entry. | |
344 | # | |
345 | # Note that this can race in that we could generate multiple state | |
346 | # groups for the same state entry, but that is just inefficient | |
347 | # rather than dangerous. | |
348 | if entry and entry.state_group is None: | |
349 | entry.state_group = state_group_before_event | |
345 | 350 | |
346 | 351 | # |
347 | 352 | # now if it's not a state event, we're done |
261 | 261 | return self.txn.description |
262 | 262 | |
263 | 263 | def execute_batch(self, sql: str, args: Iterable[Iterable[Any]]) -> None: |
264 | """Similar to `executemany`, except `txn.rowcount` will not be correct | |
265 | afterwards. | |
266 | ||
267 | More efficient than `executemany` when running against PostgreSQL. | |
268 | """ | |
269 | ||
264 | 270 | if isinstance(self.database_engine, PostgresEngine): |
265 | 271 | from psycopg2.extras import execute_batch # type: ignore |
266 | 272 | |
267 | 273 | self._do_execute(lambda *x: execute_batch(self.txn, *x), sql, args) |
268 | 274 | else: |
269 | for val in args: | |
270 | self.execute(sql, val) | |
275 | self.executemany(sql, args) | |
271 | 276 | |
272 | 277 | def execute_values(self, sql: str, *args: Any) -> List[Tuple]: |
273 | 278 | """Corresponds to psycopg2.extras.execute_values. Only available when |
887 | 892 | ", ".join("?" for _ in keys[0]), |
888 | 893 | ) |
889 | 894 | |
890 | txn.executemany(sql, vals) | |
895 | txn.execute_batch(sql, vals) | |
891 | 896 | |
892 | 897 | async def simple_upsert( |
893 | 898 | self, |
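The fallback branch of `execute_batch` now delegates to the DB-API `executemany` instead of looping over `execute` in Python. Its effect can be seen with the stdlib SQLite driver:

```python
import sqlite3

# On SQLite there is no psycopg2 execute_batch, so the same effect comes
# from the DB-API executemany, which applies one parameterised statement
# to every row of arguments in a single call.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device_lists (user_id TEXT, device_id TEXT)")
rows = [("@a:hs", "DEV1"), ("@b:hs", "DEV2"), ("@c:hs", "DEV3")]
conn.executemany("INSERT INTO device_lists VALUES (?, ?)", rows)
count = conn.execute("SELECT COUNT(*) FROM device_lists").fetchone()[0]
```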
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2014-2016 OpenMarket Ltd |
2 | 2 | # Copyright 2018 New Vector Ltd |
3 | # Copyright 2019 The Matrix.org Foundation C.I.C. | |
3 | # Copyright 2019-2021 The Matrix.org Foundation C.I.C. | |
4 | 4 | # |
5 | 5 | # Licensed under the Apache License, Version 2.0 (the "License"); |
6 | 6 | # you may not use this file except in compliance with the License. |
42 | 42 | from .event_federation import EventFederationStore |
43 | 43 | from .event_push_actions import EventPushActionsStore |
44 | 44 | from .events_bg_updates import EventsBackgroundUpdatesStore |
45 | from .events_forward_extremities import EventForwardExtremitiesStore | |
45 | 46 | from .filtering import FilteringStore |
46 | 47 | from .group_server import GroupServerStore |
47 | 48 | from .keys import KeyStore |
117 | 118 | UIAuthStore, |
118 | 119 | CacheInvalidationWorkerStore, |
119 | 120 | ServerMetricsStore, |
121 | EventForwardExtremitiesStore, | |
120 | 122 | ): |
121 | 123 | def __init__(self, database: DatabasePool, db_conn, hs): |
122 | 124 | self.hs = hs |
896 | 896 | DELETE FROM device_lists_outbound_last_success |
897 | 897 | WHERE destination = ? AND user_id = ? |
898 | 898 | """ |
899 | txn.executemany(sql, ((row[0], row[1]) for row in rows)) | |
899 | txn.execute_batch(sql, ((row[0], row[1]) for row in rows)) | |
900 | 900 | |
901 | 901 | logger.info("Pruned %d device list outbound pokes", count) |
902 | 902 | |
1342 | 1342 | |
1343 | 1343 | # Delete older entries in the table, as we really only care about |
1344 | 1344 | # when the latest change happened. |
1345 | txn.executemany( | |
1345 | txn.execute_batch( | |
1346 | 1346 | """ |
1347 | 1347 | DELETE FROM device_lists_stream |
1348 | 1348 | WHERE user_id = ? AND device_id = ? AND stream_id < ? |
633 | 633 | |
634 | 634 | async def get_e2e_cross_signing_keys_bulk( |
635 | 635 | self, user_ids: List[str], from_user_id: Optional[str] = None |
636 | ) -> Dict[str, Dict[str, dict]]: | |
636 | ) -> Dict[str, Optional[Dict[str, dict]]]: | |
637 | 637 | """Returns the cross-signing keys for a set of users. |
638 | 638 | |
639 | 639 | Args: |
723 | 723 | |
724 | 724 | async def claim_e2e_one_time_keys( |
725 | 725 | self, query_list: Iterable[Tuple[str, str, str]] |
726 | ) -> Dict[str, Dict[str, Dict[str, bytes]]]: | |
726 | ) -> Dict[str, Dict[str, Dict[str, str]]]: | |
727 | 727 | """Take a list of one time keys out of the database. |
728 | 728 | |
729 | 729 | Args: |
486 | 486 | VALUES (?, ?, ?, ?, ?, ?) |
487 | 487 | """ |
488 | 488 | |
489 | txn.executemany( | |
489 | txn.execute_batch( | |
490 | 490 | sql, |
491 | 491 | ( |
492 | 492 | _gen_entry(user_id, actions) |
802 | 802 | ], |
803 | 803 | ) |
804 | 804 | |
805 | txn.executemany( | |
805 | txn.execute_batch( | |
806 | 806 | """ |
807 | 807 | UPDATE event_push_summary |
808 | 808 | SET notif_count = ?, unread_count = ?, stream_ordering = ? |
472 | 472 | txn, self.db_pool, event_to_room_id, event_to_types, event_to_auth_chain, |
473 | 473 | ) |
474 | 474 | |
475 | @staticmethod | |
475 | @classmethod | |
476 | 476 | def _add_chain_cover_index( |
477 | cls, | |
477 | 478 | txn, |
478 | 479 | db_pool: DatabasePool, |
479 | 480 | event_to_room_id: Dict[str, str], |
613 | 614 | if not events_to_calc_chain_id_for: |
614 | 615 | return |
615 | 616 | |
616 | # We now calculate the chain IDs/sequence numbers for the events. We | |
617 | # do this by looking at the chain ID and sequence number of any auth | |
618 | # event with the same type/state_key and incrementing the sequence | |
619 | # number by one. If there was no match or the chain ID/sequence | |
620 | # number is already taken we generate a new chain. | |
621 | # | |
622 | # We need to do this in a topologically sorted order as we want to | |
623 | # generate chain IDs/sequence numbers of an event's auth events | |
624 | # before the event itself. | |
625 | chains_tuples_allocated = set() # type: Set[Tuple[int, int]] | |
626 | new_chain_tuples = {} # type: Dict[str, Tuple[int, int]] | |
627 | for event_id in sorted_topologically( | |
628 | events_to_calc_chain_id_for, event_to_auth_chain | |
629 | ): | |
630 | existing_chain_id = None | |
631 | for auth_id in event_to_auth_chain.get(event_id, []): | |
632 | if event_to_types.get(event_id) == event_to_types.get(auth_id): | |
633 | existing_chain_id = chain_map[auth_id] | |
634 | break | |
635 | ||
636 | new_chain_tuple = None | |
637 | if existing_chain_id: | |
638 | # We found a chain ID/sequence number candidate, check its | |
639 | # not already taken. | |
640 | proposed_new_id = existing_chain_id[0] | |
641 | proposed_new_seq = existing_chain_id[1] + 1 | |
642 | if (proposed_new_id, proposed_new_seq) not in chains_tuples_allocated: | |
643 | already_allocated = db_pool.simple_select_one_onecol_txn( | |
644 | txn, | |
645 | table="event_auth_chains", | |
646 | keyvalues={ | |
647 | "chain_id": proposed_new_id, | |
648 | "sequence_number": proposed_new_seq, | |
649 | }, | |
650 | retcol="event_id", | |
651 | allow_none=True, | |
652 | ) | |
653 | if already_allocated: | |
654 | # Mark it as already allocated so we don't need to hit | |
655 | # the DB again. | |
656 | chains_tuples_allocated.add((proposed_new_id, proposed_new_seq)) | |
657 | else: | |
658 | new_chain_tuple = ( | |
659 | proposed_new_id, | |
660 | proposed_new_seq, | |
661 | ) | |
662 | ||
663 | if not new_chain_tuple: | |
664 | new_chain_tuple = (db_pool.event_chain_id_gen.get_next_id_txn(txn), 1) | |
665 | ||
666 | chains_tuples_allocated.add(new_chain_tuple) | |
667 | ||
668 | chain_map[event_id] = new_chain_tuple | |
669 | new_chain_tuples[event_id] = new_chain_tuple | |
617 | # Allocate chain ID/sequence numbers to each new event. | |
618 | new_chain_tuples = cls._allocate_chain_ids( | |
619 | txn, | |
620 | db_pool, | |
621 | event_to_room_id, | |
622 | event_to_types, | |
623 | event_to_auth_chain, | |
624 | events_to_calc_chain_id_for, | |
625 | chain_map, | |
626 | ) | |
627 | chain_map.update(new_chain_tuples) | |
670 | 628 | |
671 | 629 | db_pool.simple_insert_many_txn( |
672 | 630 | txn, |
793 | 751 | ], |
794 | 752 | ) |
795 | 753 | |
754 | @staticmethod | |
755 | def _allocate_chain_ids( | |
756 | txn, | |
757 | db_pool: DatabasePool, | |
758 | event_to_room_id: Dict[str, str], | |
759 | event_to_types: Dict[str, Tuple[str, str]], | |
760 | event_to_auth_chain: Dict[str, List[str]], | |
761 | events_to_calc_chain_id_for: Set[str], | |
762 | chain_map: Dict[str, Tuple[int, int]], | |
763 | ) -> Dict[str, Tuple[int, int]]: | |
764 | """Allocates, but does not persist, chain ID/sequence numbers for the | |
765 | events in `events_to_calc_chain_id_for`. (cf. _add_chain_cover_index | 
766 | for info on args) | |
767 | """ | |
768 | ||
769 | # We now calculate the chain IDs/sequence numbers for the events. We do | |
770 | # this by looking at the chain ID and sequence number of any auth event | |
771 | # with the same type/state_key and incrementing the sequence number by | |
772 | # one. If there was no match or the chain ID/sequence number is already | |
773 | # taken we generate a new chain. | |
774 | # | |
775 | # We try to reduce the number of times that we hit the database by | |
776 | # batching up calls, to make this more efficient when persisting large | |
777 | # numbers of state events (e.g. during joins). | |
778 | # | |
779 | # We do this by: | |
780 | # 1. Calculating for each event which auth event will be used to | |
781 | # inherit the chain ID, i.e. converting the auth chain graph to a | |
782 | # tree that we can allocate chains on. We also keep track of which | |
783 | # existing chain IDs have been referenced. | |
784 | # 2. Fetching the max allocated sequence number for each referenced | |
785 | # existing chain ID, generating a map from chain ID to the max | |
786 | # allocated sequence number. | |
787 | # 3. Iterating over the tree and allocating a chain ID/seq no. to the | |
788 | # new event, by incrementing the sequence number from the | |
789 | # referenced event's chain ID/seq no. and checking that the | |
790 | # incremented sequence number hasn't already been allocated (by | |
791 | # looking in the map generated in the previous step). We generate a | |
792 | # new chain if the sequence number has already been allocated. | |
793 | # | |
794 | ||
795 | existing_chains = set() # type: Set[int] | |
796 | tree = [] # type: List[Tuple[str, Optional[str]]] | |
797 | ||
798 | # We need to do this in a topologically sorted order as we want to | |
799 | # generate chain IDs/sequence numbers of an event's auth events before | |
800 | # the event itself. | |
801 | for event_id in sorted_topologically( | |
802 | events_to_calc_chain_id_for, event_to_auth_chain | |
803 | ): | |
804 | for auth_id in event_to_auth_chain.get(event_id, []): | |
805 | if event_to_types.get(event_id) == event_to_types.get(auth_id): | |
806 | existing_chain_id = chain_map.get(auth_id) | |
807 | if existing_chain_id: | |
808 | existing_chains.add(existing_chain_id[0]) | |
809 | ||
810 | tree.append((event_id, auth_id)) | |
811 | break | |
812 | else: | |
813 | tree.append((event_id, None)) | |
814 | ||
815 | # Fetch the current max sequence number for each existing referenced chain. | |
816 | sql = """ | |
817 | SELECT chain_id, MAX(sequence_number) FROM event_auth_chains | |
818 | WHERE %s | |
819 | GROUP BY chain_id | |
820 | """ | |
821 | clause, args = make_in_list_sql_clause( | |
822 | db_pool.engine, "chain_id", existing_chains | |
823 | ) | |
824 | txn.execute(sql % (clause,), args) | |
825 | ||
826 | chain_to_max_seq_no = {row[0]: row[1] for row in txn} # type: Dict[Any, int] | |
827 | ||
828 | # Allocate the new events chain ID/sequence numbers. | |
829 | # | |
830 | # To reduce the number of calls to the database we don't allocate a | |
831 | # chain ID number in the loop, instead we use a temporary `object()` for | |
832 | # each new chain ID. Once we've done the loop we generate the necessary | |
833 | # number of new chain IDs in one call, replacing all temporary | |
834 | # objects with real allocated chain IDs. | |
835 | ||
836 | unallocated_chain_ids = set() # type: Set[object] | |
837 | new_chain_tuples = {} # type: Dict[str, Tuple[Any, int]] | |
838 | for event_id, auth_event_id in tree: | |
839 | # If we reference an auth_event_id we fetch the allocated chain ID, | |
840 | # either from the existing `chain_map` or the newly generated | |
841 | # `new_chain_tuples` map. | |
842 | existing_chain_id = None | |
843 | if auth_event_id: | |
844 | existing_chain_id = new_chain_tuples.get(auth_event_id) | |
845 | if not existing_chain_id: | |
846 | existing_chain_id = chain_map[auth_event_id] | |
847 | ||
848 | new_chain_tuple = None # type: Optional[Tuple[Any, int]] | |
849 | if existing_chain_id: | |
850 | # We found a chain ID/sequence number candidate, check it's | 
851 | # not already taken. | |
852 | proposed_new_id = existing_chain_id[0] | |
853 | proposed_new_seq = existing_chain_id[1] + 1 | |
854 | ||
855 | if chain_to_max_seq_no[proposed_new_id] < proposed_new_seq: | |
856 | new_chain_tuple = ( | |
857 | proposed_new_id, | |
858 | proposed_new_seq, | |
859 | ) | |
860 | ||
861 | # If we need to start a new chain we allocate a temporary chain ID. | |
862 | if not new_chain_tuple: | |
863 | new_chain_tuple = (object(), 1) | |
864 | unallocated_chain_ids.add(new_chain_tuple[0]) | |
865 | ||
866 | new_chain_tuples[event_id] = new_chain_tuple | |
867 | chain_to_max_seq_no[new_chain_tuple[0]] = new_chain_tuple[1] | |
868 | ||
869 | # Generate new chain IDs for all unallocated chain IDs. | |
870 | newly_allocated_chain_ids = db_pool.event_chain_id_gen.get_next_mult_txn( | |
871 | txn, len(unallocated_chain_ids) | |
872 | ) | |
873 | ||
874 | # Map from potentially temporary chain ID to real chain ID | |
875 | chain_id_to_allocated_map = dict( | |
876 | zip(unallocated_chain_ids, newly_allocated_chain_ids) | |
877 | ) # type: Dict[Any, int] | |
878 | chain_id_to_allocated_map.update((c, c) for c in existing_chains) | |
879 | ||
880 | return { | |
881 | event_id: (chain_id_to_allocated_map[chain_id], seq) | |
882 | for event_id, (chain_id, seq) in new_chain_tuples.items() | |
883 | } | |
884 | ||
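The two-phase allocation described in the comments above can be sketched in isolation. This is a minimal, illustrative model, not Synapse's actual API: the helper name, the hard-coded range standing in for the database's ID generator, and the simplified inputs are all assumptions.

```python
def allocate_chain_ids(tree, chain_map, chain_to_max_seq):
    # Sketch of the two-phase trick: allocate temporary object() chain
    # IDs while walking the (topologically sorted) tree, then swap in
    # "real" IDs in a single batch at the end.
    unallocated = []
    new_tuples = {}
    for event_id, auth_id in tree:
        candidate = None
        if auth_id is not None:
            chain, seq = new_tuples.get(auth_id) or chain_map[auth_id]
            # Extend the parent's chain only if seq + 1 is still free.
            if chain_to_max_seq.get(chain, 0) < seq + 1:
                candidate = (chain, seq + 1)
        if candidate is None:
            tmp = object()  # placeholder for a not-yet-allocated chain
            unallocated.append(tmp)
            candidate = (tmp, 1)
        new_tuples[event_id] = candidate
        chain_to_max_seq[candidate[0]] = candidate[1]
    # Stand-in for one DB call handing back len(unallocated) fresh IDs.
    fresh = iter(range(1000, 1000 + len(unallocated)))
    real = {tmp: next(fresh) for tmp in unallocated}
    real.update((c, c) for c in chain_to_max_seq if isinstance(c, int))
    return {ev: (real[c], seq) for ev, (c, seq) in new_tuples.items()}
```

Because the placeholders are only resolved once, the loop itself never touches the database, which is the point of the batching described in steps 1-3 above.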
796 | 885 | def _persist_transaction_ids_txn( |
797 | 886 | self, |
798 | 887 | txn: LoggingTransaction, |
875 | 964 | WHERE room_id = ? AND type = ? AND state_key = ? |
876 | 965 | ) |
877 | 966 | """ |
878 | txn.executemany( | |
967 | txn.execute_batch( | |
879 | 968 | sql, |
880 | 969 | ( |
881 | 970 | ( |
894 | 983 | ) |
895 | 984 | # Now we actually update the current_state_events table |
896 | 985 | |
897 | txn.executemany( | |
986 | txn.execute_batch( | |
898 | 987 | "DELETE FROM current_state_events" |
899 | 988 | " WHERE room_id = ? AND type = ? AND state_key = ?", |
900 | 989 | ( |
906 | 995 | # We include the membership in the current state table, hence we do |
907 | 996 | # a lookup when we insert. This assumes that all events have already |
908 | 997 | # been inserted into room_memberships. |
909 | txn.executemany( | |
998 | txn.execute_batch( | |
910 | 999 | """INSERT INTO current_state_events |
911 | 1000 | (room_id, type, state_key, event_id, membership) |
912 | 1001 | VALUES (?, ?, ?, ?, (SELECT membership FROM room_memberships WHERE event_id = ?)) |
926 | 1015 | # we have no record of the fact the user *was* a member of the |
927 | 1016 | # room but got, say, state reset out of it. |
928 | 1017 | if to_delete or to_insert: |
929 | txn.executemany( | |
1018 | txn.execute_batch( | |
930 | 1019 | "DELETE FROM local_current_membership" |
931 | 1020 | " WHERE room_id = ? AND user_id = ?", |
932 | 1021 | ( |
937 | 1026 | ) |
938 | 1027 | |
939 | 1028 | if to_insert: |
940 | txn.executemany( | |
1029 | txn.execute_batch( | |
941 | 1030 | """INSERT INTO local_current_membership |
942 | 1031 | (room_id, user_id, event_id, membership) |
943 | 1032 | VALUES (?, ?, ?, (SELECT membership FROM room_memberships WHERE event_id = ?)) |
1737 | 1826 | """ |
1738 | 1827 | |
1739 | 1828 | if events_and_contexts: |
1740 | txn.executemany( | |
1829 | txn.execute_batch( | |
1741 | 1830 | sql, |
1742 | 1831 | ( |
1743 | 1832 | ( |
1766 | 1855 | |
1767 | 1856 | # Now we delete the staging area for *all* events that were being |
1768 | 1857 | # persisted. |
1769 | txn.executemany( | |
1858 | txn.execute_batch( | |
1770 | 1859 | "DELETE FROM event_push_actions_staging WHERE event_id = ?", |
1771 | 1860 | ((event.event_id,) for event, _ in all_events_and_contexts), |
1772 | 1861 | ) |
1885 | 1974 | " )" |
1886 | 1975 | ) |
1887 | 1976 | |
1888 | txn.executemany( | |
1977 | txn.execute_batch( | |
1889 | 1978 | query, |
1890 | 1979 | [ |
1891 | 1980 | (e_id, ev.room_id, e_id, ev.room_id, e_id, ev.room_id, False) |
1899 | 1988 | "DELETE FROM event_backward_extremities" |
1900 | 1989 | " WHERE event_id = ? AND room_id = ?" |
1901 | 1990 | ) |
1902 | txn.executemany( | |
1991 | txn.execute_batch( | |
1903 | 1992 | query, |
1904 | 1993 | [ |
1905 | 1994 | (ev.event_id, ev.room_id) |
138 | 138 | max_stream_id = progress["max_stream_id_exclusive"] |
139 | 139 | rows_inserted = progress.get("rows_inserted", 0) |
140 | 140 | |
141 | INSERT_CLUMP_SIZE = 1000 | |
142 | ||
143 | 141 | def reindex_txn(txn): |
144 | 142 | sql = ( |
145 | 143 | "SELECT stream_ordering, event_id, json FROM events" |
177 | 175 | |
178 | 176 | sql = "UPDATE events SET sender = ?, contains_url = ? WHERE event_id = ?" |
179 | 177 | |
180 | for index in range(0, len(update_rows), INSERT_CLUMP_SIZE): | |
181 | clump = update_rows[index : index + INSERT_CLUMP_SIZE] | |
182 | txn.executemany(sql, clump) | |
178 | txn.execute_batch(sql, update_rows) | |
183 | 179 | |
184 | 180 | progress = { |
185 | 181 | "target_min_stream_id_inclusive": target_min_stream_id, |
208 | 204 | target_min_stream_id = progress["target_min_stream_id_inclusive"] |
209 | 205 | max_stream_id = progress["max_stream_id_exclusive"] |
210 | 206 | rows_inserted = progress.get("rows_inserted", 0) |
211 | ||
212 | INSERT_CLUMP_SIZE = 1000 | |
213 | 207 | |
214 | 208 | def reindex_search_txn(txn): |
215 | 209 | sql = ( |
255 | 249 | |
256 | 250 | sql = "UPDATE events SET origin_server_ts = ? WHERE event_id = ?" |
257 | 251 | |
258 | for index in range(0, len(rows_to_update), INSERT_CLUMP_SIZE): | |
259 | clump = rows_to_update[index : index + INSERT_CLUMP_SIZE] | |
260 | txn.executemany(sql, clump) | |
252 | txn.execute_batch(sql, rows_to_update) | |
261 | 253 | |
262 | 254 | progress = { |
263 | 255 | "target_min_stream_id_inclusive": target_min_stream_id, |
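The hunks above replace manual `INSERT_CLUMP_SIZE` loops over `txn.executemany` with a single `txn.execute_batch` call, which handles batching internally. A minimal sketch of the idea against `sqlite3`; the `execute_batch` helper here is an illustrative stand-in for Synapse's transaction wrapper (which delegates to psycopg2's batch support on PostgreSQL), not the real implementation.

```python
import sqlite3

def execute_batch(cur, sql, args):
    # Portable sketch: on SQLite, a plain executemany already applies
    # all parameter sets in one call, so no manual clumping is needed.
    cur.executemany(sql, args)

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (event_id TEXT PRIMARY KEY, sender TEXT)")
cur.executemany("INSERT INTO events VALUES (?, '')", [("$a",), ("$b",)])

# One call replaces the old clump-by-1000 loop.
execute_batch(
    cur,
    "UPDATE events SET sender = ? WHERE event_id = ?",
    [("@alice:example.org", "$a"), ("@bob:example.org", "$b")],
)
```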
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import logging | |
16 | from typing import Dict, List | |
17 | ||
18 | from synapse.api.errors import SynapseError | |
19 | from synapse.storage._base import SQLBaseStore | |
20 | ||
21 | logger = logging.getLogger(__name__) | |
22 | ||
23 | ||
24 | class EventForwardExtremitiesStore(SQLBaseStore): | |
25 | async def delete_forward_extremities_for_room(self, room_id: str) -> int: | |
26 | """Delete any extra forward extremities for a room. | |
27 | ||
28 | Invalidates the "get_latest_event_ids_in_room" cache if any forward | |
29 | extremities were deleted. | |
30 | ||
31 | Returns the number of extremities deleted. | 
32 | """ | |
33 | ||
34 | def delete_forward_extremities_for_room_txn(txn): | |
35 | # First, find the event_id of the extremity we want to keep | 
36 | sql = """ | |
37 | SELECT event_id FROM event_forward_extremities | |
38 | INNER JOIN events USING (room_id, event_id) | |
39 | WHERE room_id = ? | |
40 | ORDER BY stream_ordering DESC | |
41 | LIMIT 1 | |
42 | """ | |
43 | txn.execute(sql, (room_id,)) | |
44 | rows = txn.fetchall() | |
45 | try: | |
46 | event_id = rows[0][0] | |
47 | logger.debug( | |
48 | "Found event_id %s as the forward extremity to keep for room %s", | |
49 | event_id, | |
50 | room_id, | |
51 | ) | |
52 | except IndexError: | 
53 | msg = "No forward extremity event found for room %s" % room_id | |
54 | logger.warning(msg) | |
55 | raise SynapseError(400, msg) | |
56 | ||
57 | # Now delete the extra forward extremities | |
58 | sql = """ | |
59 | DELETE FROM event_forward_extremities | |
60 | WHERE event_id != ? AND room_id = ? | |
61 | """ | |
62 | ||
63 | txn.execute(sql, (event_id, room_id)) | |
64 | logger.info( | |
65 | "Deleted %s extra forward extremities for room %s", | |
66 | txn.rowcount, | |
67 | room_id, | |
68 | ) | |
69 | ||
70 | if txn.rowcount > 0: | |
71 | # Invalidate the cache | |
72 | self._invalidate_cache_and_stream( | |
73 | txn, self.get_latest_event_ids_in_room, (room_id,), | |
74 | ) | |
75 | ||
76 | return txn.rowcount | |
77 | ||
78 | return await self.db_pool.runInteraction( | |
79 | "delete_forward_extremities_for_room", | |
80 | delete_forward_extremities_for_room_txn, | |
81 | ) | |
82 | ||
83 | async def get_forward_extremities_for_room(self, room_id: str) -> List[Dict]: | |
84 | """Get list of forward extremities for a room.""" | |
85 | ||
86 | def get_forward_extremities_for_room_txn(txn): | |
87 | sql = """ | |
88 | SELECT event_id, state_group, depth, received_ts | |
89 | FROM event_forward_extremities | |
90 | INNER JOIN event_to_state_groups USING (event_id) | |
91 | INNER JOIN events USING (room_id, event_id) | |
92 | WHERE room_id = ? | |
93 | """ | |
94 | ||
95 | txn.execute(sql, (room_id,)) | |
96 | return self.db_pool.cursor_to_dict(txn) | |
97 | ||
98 | return await self.db_pool.runInteraction( | |
99 | "get_forward_extremities_for_room", get_forward_extremities_for_room_txn, | |
100 | ) |
416 | 416 | " WHERE media_origin = ? AND media_id = ?" |
417 | 417 | ) |
418 | 418 | |
419 | txn.executemany( | |
419 | txn.execute_batch( | |
420 | 420 | sql, |
421 | 421 | ( |
422 | 422 | (time_ms, media_origin, media_id) |
429 | 429 | " WHERE media_id = ?" |
430 | 430 | ) |
431 | 431 | |
432 | txn.executemany(sql, ((time_ms, media_id) for media_id in local_media)) | |
432 | txn.execute_batch(sql, ((time_ms, media_id) for media_id in local_media)) | |
433 | 433 | |
434 | 434 | return await self.db_pool.runInteraction( |
435 | 435 | "update_cached_last_access_time", update_cache_txn |
556 | 556 | sql = "DELETE FROM local_media_repository_url_cache WHERE media_id = ?" |
557 | 557 | |
558 | 558 | def _delete_url_cache_txn(txn): |
559 | txn.executemany(sql, [(media_id,) for media_id in media_ids]) | |
559 | txn.execute_batch(sql, [(media_id,) for media_id in media_ids]) | |
560 | 560 | |
561 | 561 | return await self.db_pool.runInteraction( |
562 | 562 | "delete_url_cache", _delete_url_cache_txn |
585 | 585 | def _delete_url_cache_media_txn(txn): |
586 | 586 | sql = "DELETE FROM local_media_repository WHERE media_id = ?" |
587 | 587 | |
588 | txn.executemany(sql, [(media_id,) for media_id in media_ids]) | |
588 | txn.execute_batch(sql, [(media_id,) for media_id in media_ids]) | |
589 | 589 | |
590 | 590 | sql = "DELETE FROM local_media_repository_thumbnails WHERE media_id = ?" |
591 | 591 | |
592 | txn.executemany(sql, [(media_id,) for media_id in media_ids]) | |
592 | txn.execute_batch(sql, [(media_id,) for media_id in media_ids]) | |
593 | 593 | |
594 | 594 | return await self.db_pool.runInteraction( |
595 | 595 | "delete_url_cache_media", _delete_url_cache_media_txn |
85 | 85 | |
86 | 86 | _excess_state_events_collecter.update_data( |
87 | 87 | (x[0] - 1) * x[1] for x in res if x[1] |
88 | ) | |
89 | ||
90 | async def count_daily_e2ee_messages(self): | |
91 | """ | |
92 | Returns an estimate of the number of messages sent in the last day. | |
93 | ||
94 | If it has been significantly less or more than one day since the last | |
95 | call to this function, it will return None. | |
96 | """ | |
97 | ||
98 | def _count_messages(txn): | |
99 | sql = """ | |
100 | SELECT COALESCE(COUNT(*), 0) FROM events | |
101 | WHERE type = 'm.room.encrypted' | |
102 | AND stream_ordering > ? | |
103 | """ | |
104 | txn.execute(sql, (self.stream_ordering_day_ago,)) | |
105 | (count,) = txn.fetchone() | |
106 | return count | |
107 | ||
108 | return await self.db_pool.runInteraction("count_e2ee_messages", _count_messages) | |
109 | ||
110 | async def count_daily_sent_e2ee_messages(self): | |
111 | def _count_messages(txn): | |
112 | # This is good enough: if you have silly characters in your own | 
113 | # hostname then that's your own fault. | 
114 | like_clause = "%:" + self.hs.hostname | |
115 | ||
116 | sql = """ | |
117 | SELECT COALESCE(COUNT(*), 0) FROM events | |
118 | WHERE type = 'm.room.encrypted' | |
119 | AND sender LIKE ? | |
120 | AND stream_ordering > ? | |
121 | """ | |
122 | ||
123 | txn.execute(sql, (like_clause, self.stream_ordering_day_ago)) | |
124 | (count,) = txn.fetchone() | |
125 | return count | |
126 | ||
127 | return await self.db_pool.runInteraction( | |
128 | "count_daily_sent_e2ee_messages", _count_messages | |
129 | ) | |
130 | ||
131 | async def count_daily_active_e2ee_rooms(self): | |
132 | def _count(txn): | |
133 | sql = """ | |
134 | SELECT COALESCE(COUNT(DISTINCT room_id), 0) FROM events | |
135 | WHERE type = 'm.room.encrypted' | |
136 | AND stream_ordering > ? | |
137 | """ | |
138 | txn.execute(sql, (self.stream_ordering_day_ago,)) | |
139 | (count,) = txn.fetchone() | |
140 | return count | |
141 | ||
142 | return await self.db_pool.runInteraction( | |
143 | "count_daily_active_e2ee_rooms", _count | |
88 | 144 | ) |
89 | 145 | |
90 | 146 | async def count_daily_messages(self): |
171 | 171 | ) |
172 | 172 | |
173 | 173 | # Update backward extremities
174 | txn.executemany( | |
174 | txn.execute_batch( | |
175 | 175 | "INSERT INTO event_backward_extremities (room_id, event_id)" |
176 | 176 | " VALUES (?, ?)", |
177 | 177 | [(room_id, event_id) for event_id, in new_backwards_extrems], |
343 | 343 | txn, self.get_if_user_has_pusher, (user_id,) |
344 | 344 | ) |
345 | 345 | |
346 | self.db_pool.simple_delete_one_txn( | |
346 | # It is expected that there is exactly one pusher to delete, but | |
347 | # if it isn't there (or there are multiple) delete them all. | |
348 | self.db_pool.simple_delete_txn( | |
347 | 349 | txn, |
348 | 350 | "pushers", |
349 | 351 | {"app_id": app_id, "pushkey": pushkey, "user_name": user_id}, |
359 | 359 | |
360 | 360 | await self.db_pool.runInteraction("set_server_admin", set_server_admin_txn) |
361 | 361 | |
362 | async def set_shadow_banned(self, user: UserID, shadow_banned: bool) -> None: | |
363 | """Sets whether a user is shadow-banned. | 
364 | ||
365 | Args: | |
366 | user: user ID of the user to update | 
367 | shadow_banned: true iff the user is to be shadow-banned, false otherwise. | |
368 | """ | |
369 | ||
370 | def set_shadow_banned_txn(txn): | |
371 | self.db_pool.simple_update_one_txn( | |
372 | txn, | |
373 | table="users", | |
374 | keyvalues={"name": user.to_string()}, | |
375 | updatevalues={"shadow_banned": shadow_banned}, | |
376 | ) | |
377 | # In order for this to apply immediately, clear the cache for this user. | |
378 | tokens = self.db_pool.simple_select_onecol_txn( | |
379 | txn, | |
380 | table="access_tokens", | |
381 | keyvalues={"user_id": user.to_string()}, | |
382 | retcol="token", | |
383 | ) | |
384 | for token in tokens: | |
385 | self._invalidate_cache_and_stream( | |
386 | txn, self.get_user_by_access_token, (token,) | |
387 | ) | |
388 | ||
389 | await self.db_pool.runInteraction("set_shadow_banned", set_shadow_banned_txn) | |
390 | ||
362 | 391 | def _query_for_auth(self, txn, token: str) -> Optional[TokenLookupResult]: |
363 | 392 | sql = """ |
364 | 393 | SELECT users.name as user_id, |
441 | 470 | return dict(txn) |
442 | 471 | |
443 | 472 | return await self.db_pool.runInteraction("get_users_by_id_case_insensitive", f) |
473 | ||
474 | async def record_user_external_id( | |
475 | self, auth_provider: str, external_id: str, user_id: str | |
476 | ) -> None: | |
477 | """Record a mapping from an external user id to a mxid | |
478 | ||
479 | Args: | |
480 | auth_provider: identifier for the remote auth provider | |
481 | external_id: id on that system | |
482 | user_id: complete mxid that it is mapped to | |
483 | """ | |
484 | await self.db_pool.simple_insert( | |
485 | table="user_external_ids", | |
486 | values={ | |
487 | "auth_provider": auth_provider, | |
488 | "external_id": external_id, | |
489 | "user_id": user_id, | |
490 | }, | |
491 | desc="record_user_external_id", | |
492 | ) | |
444 | 493 | |
445 | 494 | async def get_user_by_external_id( |
446 | 495 | self, auth_provider: str, external_id: str |
1103 | 1152 | FROM user_threepids |
1104 | 1153 | """ |
1105 | 1154 | |
1106 | txn.executemany(sql, [(id_server,) for id_server in id_servers]) | |
1155 | txn.execute_batch(sql, [(id_server,) for id_server in id_servers]) | |
1107 | 1156 | |
1108 | 1157 | if id_servers: |
1109 | 1158 | await self.db_pool.runInteraction( |
1370 | 1419 | |
1371 | 1420 | self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,)) |
1372 | 1421 | |
1373 | async def record_user_external_id( | |
1374 | self, auth_provider: str, external_id: str, user_id: str | |
1375 | ) -> None: | |
1376 | """Record a mapping from an external user id to a mxid | |
1377 | ||
1378 | Args: | |
1379 | auth_provider: identifier for the remote auth provider | |
1380 | external_id: id on that system | |
1381 | user_id: complete mxid that it is mapped to | |
1382 | """ | |
1383 | await self.db_pool.simple_insert( | |
1384 | table="user_external_ids", | |
1385 | values={ | |
1386 | "auth_provider": auth_provider, | |
1387 | "external_id": external_id, | |
1388 | "user_id": user_id, | |
1389 | }, | |
1390 | desc="record_user_external_id", | |
1391 | ) | |
1392 | ||
1393 | 1422 | async def user_set_password_hash( |
1394 | 1423 | self, user_id: str, password_hash: Optional[str] |
1395 | 1424 | ) -> None: |
872 | 872 | "max_stream_id_exclusive", self._stream_order_on_start + 1 |
873 | 873 | ) |
874 | 874 | |
875 | INSERT_CLUMP_SIZE = 1000 | |
876 | ||
877 | 875 | def add_membership_profile_txn(txn): |
878 | 876 | sql = """ |
879 | 877 | SELECT stream_ordering, event_id, events.room_id, event_json.json |
914 | 912 | UPDATE room_memberships SET display_name = ?, avatar_url = ? |
915 | 913 | WHERE event_id = ? AND room_id = ? |
916 | 914 | """ |
917 | for index in range(0, len(to_update), INSERT_CLUMP_SIZE): | |
918 | clump = to_update[index : index + INSERT_CLUMP_SIZE] | |
919 | txn.executemany(to_update_sql, clump) | |
915 | txn.execute_batch(to_update_sql, to_update) | |
920 | 916 | |
921 | 917 | progress = { |
922 | 918 | "target_min_stream_id_inclusive": target_min_stream_id, |
54 | 54 | # { "ignored_users": "@someone:example.org": {} } |
55 | 55 | ignored_users = content.get("ignored_users", {}) |
56 | 56 | if isinstance(ignored_users, dict) and ignored_users: |
57 | cur.executemany(insert_sql, [(user_id, u) for u in ignored_users]) | |
57 | cur.execute_batch(insert_sql, [(user_id, u) for u in ignored_users]) | |
58 | 58 | |
59 | 59 | # Add indexes after inserting data for efficiency. |
60 | 60 | logger.info("Adding constraints to ignored_users table") |
23 | 23 | from synapse.storage.database import DatabasePool |
24 | 24 | from synapse.storage.databases.main.events_worker import EventRedactBehaviour |
25 | 25 | from synapse.storage.engines import PostgresEngine, Sqlite3Engine |
26 | from synapse.types import Collection | |
26 | 27 | |
27 | 28 | logger = logging.getLogger(__name__) |
28 | 29 | |
62 | 63 | for entry in entries |
63 | 64 | ) |
64 | 65 | |
65 | txn.executemany(sql, args) | |
66 | txn.execute_batch(sql, args) | |
66 | 67 | |
67 | 68 | elif isinstance(self.database_engine, Sqlite3Engine): |
68 | 69 | sql = ( |
74 | 75 | for entry in entries |
75 | 76 | ) |
76 | 77 | |
77 | txn.executemany(sql, args) | |
78 | txn.execute_batch(sql, args) | |
78 | 79 | else: |
79 | 80 | # This should be unreachable. |
80 | 81 | raise Exception("Unrecognized database engine") |
459 | 460 | |
460 | 461 | async def search_rooms( |
461 | 462 | self, |
462 | room_ids: List[str], | |
463 | room_ids: Collection[str], | |
463 | 464 | search_term: str, |
464 | 465 | keys: List[str], |
465 | 466 | limit, |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | 16 | import logging |
17 | from collections import Counter | |
18 | 17 | from enum import Enum |
19 | 18 | from itertools import chain |
20 | 19 | from typing import Any, Dict, List, Optional, Tuple |
20 | ||
21 | from typing_extensions import Counter | |
21 | 22 | |
22 | 23 | from twisted.internet.defer import DeferredLock |
23 | 24 | |
318 | 319 | return slice_list |
319 | 320 | |
320 | 321 | @cached() |
321 | async def get_earliest_token_for_stats(self, stats_type: str, id: str) -> int: | |
322 | async def get_earliest_token_for_stats( | |
323 | self, stats_type: str, id: str | |
324 | ) -> Optional[int]: | |
322 | 325 | """ |
323 | 326 | Fetch the "earliest token". This is used by the room stats delta |
324 | 327 | processor to ignore deltas that have been processed between the |
338 | 341 | ) |
339 | 342 | |
340 | 343 | async def bulk_update_stats_delta( |
341 | self, ts: int, updates: Dict[str, Dict[str, Dict[str, Counter]]], stream_id: int | |
344 | self, ts: int, updates: Dict[str, Dict[str, Counter[str]]], stream_id: int | |
342 | 345 | ) -> None: |
343 | 346 | """Bulk update stats tables for a given stream_id and updates the stats |
344 | 347 | incremental position. |
664 | 667 | |
665 | 668 | async def get_changes_room_total_events_and_bytes( |
666 | 669 | self, min_pos: int, max_pos: int |
667 | ) -> Dict[str, Dict[str, int]]: | |
670 | ) -> Tuple[Dict[str, Dict[str, int]], Dict[str, Dict[str, int]]]: | |
668 | 671 | """Fetches the counts of events in the given range of stream IDs. |
669 | 672 | |
670 | 673 | Args: |
682 | 685 | max_pos, |
683 | 686 | ) |
684 | 687 | |
685 | def get_changes_room_total_events_and_bytes_txn(self, txn, low_pos, high_pos): | |
688 | def get_changes_room_total_events_and_bytes_txn( | |
689 | self, txn, low_pos: int, high_pos: int | |
690 | ) -> Tuple[Dict[str, Dict[str, int]], Dict[str, Dict[str, int]]]: | |
686 | 691 | """Gets the total_events and total_event_bytes counts for rooms and |
687 | 692 | senders, in a range of stream_orderings (including backfilled events). |
688 | 693 | |
689 | 694 | Args: |
690 | 695 | txn |
691 | low_pos (int): Low stream ordering | |
692 | high_pos (int): High stream ordering | |
696 | low_pos: Low stream ordering | |
697 | high_pos: High stream ordering | |
693 | 698 | |
694 | 699 | Returns: |
695 | tuple[dict[str, dict[str, int]], dict[str, dict[str, int]]]: The | |
696 | room and user deltas for total_events/total_event_bytes in the | |
700 | The room and user deltas for total_events/total_event_bytes in the | |
697 | 701 | format of `stats_id` -> fields |
698 | 702 | """ |
699 | 703 |
539 | 539 | desc="get_user_in_directory", |
540 | 540 | ) |
541 | 541 | |
542 | async def update_user_directory_stream_pos(self, stream_id: str) -> None: | |
542 | async def update_user_directory_stream_pos(self, stream_id: int) -> None: | |
543 | 543 | await self.db_pool.simple_update_one( |
544 | 544 | table="user_directory_stream_pos", |
545 | 545 | keyvalues={}, |
564 | 564 | ) |
565 | 565 | |
566 | 566 | logger.info("[purge] removing redundant state groups") |
567 | txn.executemany( | |
567 | txn.execute_batch( | |
568 | 568 | "DELETE FROM state_groups_state WHERE state_group = ?", |
569 | 569 | ((sg,) for sg in state_groups_to_delete), |
570 | 570 | ) |
571 | txn.executemany( | |
571 | txn.execute_batch( | |
572 | 572 | "DELETE FROM state_groups WHERE id = ?", |
573 | 573 | ((sg,) for sg in state_groups_to_delete), |
574 | 574 | ) |
14 | 14 | import heapq |
15 | 15 | import logging |
16 | 16 | import threading |
17 | from collections import deque | |
17 | from collections import OrderedDict | |
18 | 18 | from contextlib import contextmanager |
19 | 19 | from typing import Dict, List, Optional, Set, Tuple, Union |
20 | 20 | |
21 | 21 | import attr |
22 | from typing_extensions import Deque | |
23 | 22 | |
24 | 23 | from synapse.metrics.background_process_metrics import run_as_background_process |
25 | 24 | from synapse.storage.database import DatabasePool, LoggingTransaction |
100 | 99 | self._current = (max if step > 0 else min)( |
101 | 100 | self._current, _load_current_id(db_conn, table, column, step) |
102 | 101 | ) |
103 | self._unfinished_ids = deque() # type: Deque[int] | |
102 | ||
103 | # We use this as an ordered set, as we want to efficiently append items, | |
104 | # remove items and get the first item. Since we insert IDs in order, the | |
105 | # insertion ordering will ensure it's in the correct order. | 
106 | # | |
107 | # The keys and values are the same, but we never look at the values. | 
108 | self._unfinished_ids = OrderedDict() # type: OrderedDict[int, int] | |
104 | 109 | |
105 | 110 | def get_next(self): |
106 | 111 | """ |
112 | 117 | self._current += self._step |
113 | 118 | next_id = self._current |
114 | 119 | |
115 | self._unfinished_ids.append(next_id) | |
120 | self._unfinished_ids[next_id] = next_id | |
116 | 121 | |
117 | 122 | @contextmanager |
118 | 123 | def manager(): |
120 | 125 | yield next_id |
121 | 126 | finally: |
122 | 127 | with self._lock: |
123 | self._unfinished_ids.remove(next_id) | |
128 | self._unfinished_ids.pop(next_id) | |
124 | 129 | |
125 | 130 | return _AsyncCtxManagerWrapper(manager()) |
126 | 131 | |
139 | 144 | self._current += n * self._step |
140 | 145 | |
141 | 146 | for next_id in next_ids: |
142 | self._unfinished_ids.append(next_id) | |
147 | self._unfinished_ids[next_id] = next_id | |
143 | 148 | |
144 | 149 | @contextmanager |
145 | 150 | def manager(): |
148 | 153 | finally: |
149 | 154 | with self._lock: |
150 | 155 | for next_id in next_ids: |
151 | self._unfinished_ids.remove(next_id) | |
156 | self._unfinished_ids.pop(next_id) | |
152 | 157 | |
153 | 158 | return _AsyncCtxManagerWrapper(manager()) |
154 | 159 | |
161 | 166 | """ |
162 | 167 | with self._lock: |
163 | 168 | if self._unfinished_ids: |
164 | return self._unfinished_ids[0] - self._step | |
169 | return next(iter(self._unfinished_ids)) - self._step | |
165 | 170 | |
166 | 171 | return self._current |
167 | 172 |
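The `OrderedDict`-as-ordered-set trick in the hunk above can be demonstrated in isolation (the stream IDs here are illustrative values):

```python
from collections import OrderedDict

unfinished = OrderedDict()            # used purely as an ordered set
for stream_id in (5, 6, 7):
    unfinished[stream_id] = stream_id  # append in allocation order

unfinished.pop(6)                     # O(1) removal when 6 completes
oldest = next(iter(unfinished))       # first outstanding ID, i.e. 5
```

Compared with the old `deque`, removal of an arbitrary ID drops from O(n) (`deque.remove`) to O(1) (`dict.pop`), while insertion order still gives the first-item lookup the old code relied on.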
69 | 69 | ... |
70 | 70 | |
71 | 71 | @abc.abstractmethod |
72 | def get_next_mult_txn(self, txn: Cursor, n: int) -> List[int]: | |
73 | """Get the next `n` IDs in the sequence""" | |
74 | ... | |
75 | ||
76 | @abc.abstractmethod | |
72 | 77 | def check_consistency( |
73 | 78 | self, |
74 | 79 | db_conn: "LoggingDatabaseConnection", |
218 | 223 | self._current_max_id += 1 |
219 | 224 | return self._current_max_id |
220 | 225 | |
226 | def get_next_mult_txn(self, txn: Cursor, n: int) -> List[int]: | |
227 | with self._lock: | |
228 | if self._current_max_id is None: | |
229 | assert self._callback is not None | |
230 | self._current_max_id = self._callback(txn) | |
231 | self._callback = None | |
232 | ||
233 | first_id = self._current_max_id + 1 | |
234 | self._current_max_id += n | |
235 | return [first_id + i for i in range(n)] | |
236 | ||
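The new `get_next_mult_txn` reserves `n` consecutive IDs in one locked step. A hedged sketch of the same allocation logic, with the transaction argument dropped and a plain callable standing in for the lazy max-ID loader (names are illustrative):

```python
import threading
from typing import Callable, List, Optional


class LocalSequence:
    def __init__(self, load_current_max: Callable[[], int]):
        self._lock = threading.Lock()
        self._current_max_id = None  # type: Optional[int]
        # Called once, lazily, to find the highest ID already used.
        self._callback = load_current_max  # type: Optional[Callable[[], int]]

    def get_next_mult(self, n: int) -> List[int]:
        with self._lock:
            if self._current_max_id is None:
                assert self._callback is not None
                self._current_max_id = self._callback()
                self._callback = None

            # Reserve n consecutive IDs and hand them out as a list.
            first_id = self._current_max_id + 1
            self._current_max_id += n
            return [first_id + i for i in range(n)]
```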
221 | 237 | def check_consistency( |
222 | 238 | self, |
223 | 239 | db_conn: Connection, |
48 | 48 | module = importlib.import_module(module) |
49 | 49 | provider_class = getattr(module, clz) |
50 | 50 | |
51 | module_config = provider.get("config") | |
51 | # Load the module config. If None, pass an empty dictionary instead | |
52 | module_config = provider.get("config") or {} | |
52 | 53 | try: |
53 | 54 | provider_config = provider_class.parse_config(module_config) |
54 | 55 | except jsonschema.ValidationError as e: |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | """Utilities for dealing with jinja2 templates""" | |
16 | ||
17 | import time | |
18 | import urllib.parse | |
19 | from typing import TYPE_CHECKING, Callable, Iterable, Optional, Union | |
20 | ||
21 | import jinja2 | |
22 | ||
23 | if TYPE_CHECKING: | |
24 | from synapse.config.homeserver import HomeServerConfig | |
25 | ||
26 | ||
27 | def build_jinja_env( | |
28 | template_search_directories: Iterable[str], | |
29 | config: "HomeServerConfig", | |
30 | autoescape: Union[bool, Callable[[str], bool], None] = None, | |
31 | ) -> jinja2.Environment: | |
32 | """Set up a Jinja2 environment to load templates from the given search path | |
33 | ||
34 | The returned environment defines the following filters: | |
35 | - format_ts: formats timestamps as strings in the server's local timezone | |
36 | (XXX: why is that useful??) | |
37 | - mxc_to_http: converts mxc: uris to http URIs. Args are: | |
38 | (uri, width, height, resize_method="crop") | |
39 | ||
40 | and the following global variables: | |
41 | - server_name: matrix server name | |
42 | ||
43 | Args: | |
44 | template_search_directories: directories to search for templates | |
45 | ||
46 | config: homeserver config, for things like `server_name` and `public_baseurl` | |
47 | ||
48 | autoescape: whether template variables should be autoescaped. bool, or | |
49 | a function mapping from template name to bool. Defaults to escaping templates | |
50 | whose names end in .html, .xml or .htm. | |
51 | ||
52 | Returns: | |
53 | jinja environment | |
54 | """ | |
55 | ||
56 | if autoescape is None: | |
57 | autoescape = jinja2.select_autoescape() | |
58 | ||
59 | loader = jinja2.FileSystemLoader(template_search_directories) | |
60 | env = jinja2.Environment(loader=loader, autoescape=autoescape) | |
61 | ||
62 | # Update the environment with our custom filters | |
63 | env.filters.update( | |
64 | { | |
65 | "format_ts": _format_ts_filter, | |
66 | "mxc_to_http": _create_mxc_to_http_filter(config.public_baseurl), | |
67 | } | |
68 | ) | |
69 | ||
70 | # common variables for all templates | |
71 | env.globals.update({"server_name": config.server_name}) | |
72 | ||
73 | return env | |
74 | ||
75 | ||
76 | def _create_mxc_to_http_filter( | |
77 | public_baseurl: Optional[str], | |
78 | ) -> Callable[[str, int, int, str], str]: | |
79 | """Create and return a jinja2 filter that converts MXC urls to HTTP | |
80 | ||
81 | Args: | |
82 | public_baseurl: The public, accessible base URL of the homeserver | |
83 | """ | |
84 | ||
85 | def mxc_to_http_filter( | |
86 | value: str, width: int, height: int, resize_method: str = "crop" | |
87 | ) -> str: | |
88 | if not public_baseurl: | |
89 | raise RuntimeError( | |
90 | "public_baseurl must be set in the homeserver config to convert MXC URLs to HTTP URLs." | |
91 | ) | |
92 | ||
93 | if value[0:6] != "mxc://": | |
94 | return "" | |
95 | ||
96 | server_and_media_id = value[6:] | |
97 | fragment = None | |
98 | if "#" in server_and_media_id: | |
99 | server_and_media_id, fragment = server_and_media_id.split("#", 1) | |
100 | fragment = "#" + fragment | |
101 | ||
102 | params = {"width": width, "height": height, "method": resize_method} | |
103 | return "%s_matrix/media/v1/thumbnail/%s?%s%s" % ( | |
104 | public_baseurl, | |
105 | server_and_media_id, | |
106 | urllib.parse.urlencode(params), | |
107 | fragment or "", | |
108 | ) | |
109 | ||
110 | return mxc_to_http_filter | |
111 | ||
112 | ||
113 | def _format_ts_filter(value: int, format: str): | |
114 | return time.strftime(format, time.localtime(value / 1000)) |
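To make the URL shape produced by `_create_mxc_to_http_filter` concrete, here is a standalone version of the conversion with `public_baseurl` passed directly (the base URL in the example is hypothetical; the real filter is closed over the configured value and raises if it is unset):

```python
import urllib.parse


def mxc_to_http(public_baseurl: str, value: str, width: int, height: int,
                resize_method: str = "crop") -> str:
    """Convert an mxc:// URI into a thumbnail URL on the homeserver."""
    if value[0:6] != "mxc://":
        return ""

    server_and_media_id = value[6:]
    fragment = ""
    if "#" in server_and_media_id:
        server_and_media_id, frag = server_and_media_id.split("#", 1)
        fragment = "#" + frag

    params = {"width": width, "height": height, "method": resize_method}
    return "%s_matrix/media/v1/thumbnail/%s?%s%s" % (
        public_baseurl,
        server_and_media_id,
        urllib.parse.urlencode(params),
        fragment,
    )
```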
61 | 61 | |
62 | 62 | # check that the auth handler got called as expected |
63 | 63 | auth_handler.complete_sso_login.assert_called_once_with( |
64 | "@test_user:test", request, "redirect_uri", None | |
64 | "@test_user:test", request, "redirect_uri", None, new_user=True | |
65 | 65 | ) |
66 | 66 | |
67 | 67 | def test_map_cas_user_to_existing_user(self): |
84 | 84 | |
85 | 85 | # check that the auth handler got called as expected |
86 | 86 | auth_handler.complete_sso_login.assert_called_once_with( |
87 | "@test_user:test", request, "redirect_uri", None | |
87 | "@test_user:test", request, "redirect_uri", None, new_user=False | |
88 | 88 | ) |
89 | 89 | |
90 | 90 | # Subsequent calls should map to the same mxid. |
93 | 93 | self.handler._handle_cas_response(request, cas_response, "redirect_uri", "") |
94 | 94 | ) |
95 | 95 | auth_handler.complete_sso_login.assert_called_once_with( |
96 | "@test_user:test", request, "redirect_uri", None | |
96 | "@test_user:test", request, "redirect_uri", None, new_user=False | |
97 | 97 | ) |
98 | 98 | |
99 | 99 | def test_map_cas_user_to_invalid_localpart(self): |
111 | 111 | |
112 | 112 | # check that the auth handler got called as expected |
113 | 113 | auth_handler.complete_sso_login.assert_called_once_with( |
114 | "@f=c3=b6=c3=b6:test", request, "redirect_uri", None | |
114 | "@f=c3=b6=c3=b6:test", request, "redirect_uri", None, new_user=True | |
115 | 115 | ) |
116 | 116 | |
117 | 117 |
15 | 15 | from unittest import TestCase |
16 | 16 | |
17 | 17 | from synapse.api.constants import EventTypes |
18 | from synapse.api.errors import AuthError, Codes, SynapseError | |
18 | from synapse.api.errors import AuthError, Codes, LimitExceededError, SynapseError | |
19 | 19 | from synapse.api.room_versions import RoomVersions |
20 | 20 | from synapse.events import EventBase |
21 | 21 | from synapse.federation.federation_base import event_from_pdu_json |
189 | 189 | sg2 = self.successResultOf(self.store._get_state_group_for_event(ev.event_id)) |
190 | 190 | |
191 | 191 | self.assertEqual(sg, sg2) |
192 | ||
193 | @unittest.override_config( | |
194 | {"rc_invites": {"per_user": {"per_second": 0.5, "burst_count": 3}}} | |
195 | ) | |
196 | def test_invite_by_user_ratelimit(self): | |
197 | """Tests that invites from federation to a particular user are | |
198 | actually rate-limited. | |
199 | """ | |
200 | other_server = "otherserver" | |
201 | other_user = "@otheruser:" + other_server | |
202 | ||
203 | # create the room | |
204 | user_id = self.register_user("kermit", "test") | |
205 | tok = self.login("kermit", "test") | |
206 | ||
207 | def create_invite(): | |
208 | room_id = self.helper.create_room_as(room_creator=user_id, tok=tok) | |
209 | room_version = self.get_success(self.store.get_room_version(room_id)) | |
210 | return event_from_pdu_json( | |
211 | { | |
212 | "type": EventTypes.Member, | |
213 | "content": {"membership": "invite"}, | |
214 | "room_id": room_id, | |
215 | "sender": other_user, | |
216 | "state_key": "@user:test", | |
217 | "depth": 32, | |
218 | "prev_events": [], | |
219 | "auth_events": [], | |
220 | "origin_server_ts": self.clock.time_msec(), | |
221 | }, | |
222 | room_version, | |
223 | ) | |
224 | ||
225 | for i in range(3): | |
226 | event = create_invite() | |
227 | self.get_success( | |
228 | self.handler.on_invite_request(other_server, event, event.room_version,) | |
229 | ) | |
230 | ||
231 | event = create_invite() | |
232 | self.get_failure( | |
233 | self.handler.on_invite_request(other_server, event, event.room_version,), | |
234 | exc=LimitExceededError, | |
235 | ) | |
192 | 236 | |
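The test above relies on `per_second: 0.5, burst_count: 3` meaning three invites succeed immediately and the fourth is rejected. An illustrative token-bucket sketch of those semantics (this is not Synapse's `Ratelimiter`, just the model the config values describe):

```python
class TokenBucket:
    def __init__(self, per_second: float, burst_count: int):
        self.rate = per_second        # tokens refilled per second
        self.burst = burst_count      # maximum tokens held at once
        self.tokens = float(burst_count)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With these numbers, a fourth request at the same instant fails, and succeeds again once two seconds have passed (2 s × 0.5/s = 1 token).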
193 | 237 | def _build_and_send_join_event(self, other_server, other_user, room_id): |
194 | 238 | join_event = self.get_success( |
39 | 39 | CLIENT_ID = "test-client-id" |
40 | 40 | CLIENT_SECRET = "test-client-secret" |
41 | 41 | BASE_URL = "https://synapse/" |
42 | CALLBACK_URL = BASE_URL + "_synapse/oidc/callback" | |
42 | CALLBACK_URL = BASE_URL + "_synapse/client/oidc/callback" | |
43 | 43 | SCOPES = ["openid"] |
44 | 44 | |
45 | 45 | AUTHORIZATION_ENDPOINT = ISSUER + "authorize" |
55 | 55 | "token_endpoint": TOKEN_ENDPOINT, |
56 | 56 | "jwks_uri": JWKS_URI, |
57 | 57 | } |
58 | ||
59 | ||
60 | # The cookie name and path don't really matter, just that it has to be coherent | |
61 | # between the callback & redirect handlers. | |
62 | COOKIE_NAME = b"oidc_session" | |
63 | COOKIE_PATH = "/_synapse/oidc" | |
64 | 58 | |
65 | 59 | |
66 | 60 | class TestMappingProvider: |
339 | 333 | # For some reason, call.args does not work with python3.5 |
340 | 334 | args = calls[0][0] |
341 | 335 | kwargs = calls[0][1] |
342 | self.assertEqual(args[0], COOKIE_NAME) | |
343 | self.assertEqual(kwargs["path"], COOKIE_PATH) | |
336 | ||
336 ||
337 | # The cookie name and path don't really matter; they just have to be consistent | |
338 | # between the callback & redirect handlers. | |
339 | self.assertEqual(args[0], b"oidc_session") | |
340 | self.assertEqual(kwargs["path"], "/_synapse/client/oidc") | |
344 | 341 | cookie = args[1] |
345 | 342 | |
346 | 343 | macaroon = pymacaroons.Macaroon.deserialize(cookie) |
418 | 415 | self.get_success(self.handler.handle_oidc_callback(request)) |
419 | 416 | |
420 | 417 | auth_handler.complete_sso_login.assert_called_once_with( |
421 | expected_user_id, request, client_redirect_url, None, | |
418 | expected_user_id, request, client_redirect_url, None, new_user=True | |
422 | 419 | ) |
423 | 420 | self.provider._exchange_code.assert_called_once_with(code) |
424 | 421 | self.provider._parse_id_token.assert_called_once_with(token, nonce=nonce) |
449 | 446 | self.get_success(self.handler.handle_oidc_callback(request)) |
450 | 447 | |
451 | 448 | auth_handler.complete_sso_login.assert_called_once_with( |
452 | expected_user_id, request, client_redirect_url, None, | |
449 | expected_user_id, request, client_redirect_url, None, new_user=False | |
453 | 450 | ) |
454 | 451 | self.provider._exchange_code.assert_called_once_with(code) |
455 | 452 | self.provider._parse_id_token.assert_not_called() |
622 | 619 | self.get_success(self.handler.handle_oidc_callback(request)) |
623 | 620 | |
624 | 621 | auth_handler.complete_sso_login.assert_called_once_with( |
625 | "@foo:test", request, client_redirect_url, {"phone": "1234567"}, | |
622 | "@foo:test", | |
623 | request, | |
624 | client_redirect_url, | |
625 | {"phone": "1234567"}, | |
626 | new_user=True, | |
626 | 627 | ) |
627 | 628 | |
628 | 629 | def test_map_userinfo_to_user(self): |
636 | 637 | } |
637 | 638 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) |
638 | 639 | auth_handler.complete_sso_login.assert_called_once_with( |
639 | "@test_user:test", ANY, ANY, None, | |
640 | "@test_user:test", ANY, ANY, None, new_user=True | |
640 | 641 | ) |
641 | 642 | auth_handler.complete_sso_login.reset_mock() |
642 | 643 | |
647 | 648 | } |
648 | 649 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) |
649 | 650 | auth_handler.complete_sso_login.assert_called_once_with( |
650 | "@test_user_2:test", ANY, ANY, None, | |
651 | "@test_user_2:test", ANY, ANY, None, new_user=True | |
651 | 652 | ) |
652 | 653 | auth_handler.complete_sso_login.reset_mock() |
653 | 654 | |
684 | 685 | } |
685 | 686 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) |
686 | 687 | auth_handler.complete_sso_login.assert_called_once_with( |
687 | user.to_string(), ANY, ANY, None, | |
688 | user.to_string(), ANY, ANY, None, new_user=False | |
688 | 689 | ) |
689 | 690 | auth_handler.complete_sso_login.reset_mock() |
690 | 691 | |
691 | 692 | # Subsequent calls should map to the same mxid. |
692 | 693 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) |
693 | 694 | auth_handler.complete_sso_login.assert_called_once_with( |
694 | user.to_string(), ANY, ANY, None, | |
695 | user.to_string(), ANY, ANY, None, new_user=False | |
695 | 696 | ) |
696 | 697 | auth_handler.complete_sso_login.reset_mock() |
697 | 698 | |
706 | 707 | } |
707 | 708 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) |
708 | 709 | auth_handler.complete_sso_login.assert_called_once_with( |
709 | user.to_string(), ANY, ANY, None, | |
710 | user.to_string(), ANY, ANY, None, new_user=False | |
710 | 711 | ) |
711 | 712 | auth_handler.complete_sso_login.reset_mock() |
712 | 713 | |
742 | 743 | |
743 | 744 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) |
744 | 745 | auth_handler.complete_sso_login.assert_called_once_with( |
745 | "@TEST_USER_2:test", ANY, ANY, None, | |
746 | "@TEST_USER_2:test", ANY, ANY, None, new_user=False | |
746 | 747 | ) |
747 | 748 | |
748 | 749 | def test_map_userinfo_to_invalid_localpart(self): |
778 | 779 | |
779 | 780 | # test_user is already taken, so test_user1 gets registered instead. |
780 | 781 | auth_handler.complete_sso_login.assert_called_once_with( |
781 | "@test_user1:test", ANY, ANY, None, | |
782 | "@test_user1:test", ANY, ANY, None, new_user=True | |
782 | 783 | ) |
783 | 784 | auth_handler.complete_sso_login.reset_mock() |
784 | 785 |
130 | 130 | |
131 | 131 | # check that the auth handler got called as expected |
132 | 132 | auth_handler.complete_sso_login.assert_called_once_with( |
133 | "@test_user:test", request, "redirect_uri", None | |
133 | "@test_user:test", request, "redirect_uri", None, new_user=True | |
134 | 134 | ) |
135 | 135 | |
136 | 136 | @override_config({"saml2_config": {"grandfathered_mxid_source_attribute": "mxid"}}) |
156 | 156 | |
157 | 157 | # check that the auth handler got called as expected |
158 | 158 | auth_handler.complete_sso_login.assert_called_once_with( |
159 | "@test_user:test", request, "", None | |
159 | "@test_user:test", request, "", None, new_user=False | |
160 | 160 | ) |
161 | 161 | |
162 | 162 | # Subsequent calls should map to the same mxid. |
165 | 165 | self.handler._handle_authn_response(request, saml_response, "") |
166 | 166 | ) |
167 | 167 | auth_handler.complete_sso_login.assert_called_once_with( |
168 | "@test_user:test", request, "", None | |
168 | "@test_user:test", request, "", None, new_user=False | |
169 | 169 | ) |
170 | 170 | |
171 | 171 | def test_map_saml_response_to_invalid_localpart(self): |
213 | 213 | |
214 | 214 | # test_user is already taken, so test_user1 gets registered instead. |
215 | 215 | auth_handler.complete_sso_login.assert_called_once_with( |
216 | "@test_user1:test", request, "", None | |
216 | "@test_user1:test", request, "", None, new_user=True | |
217 | 217 | ) |
218 | 218 | auth_handler.complete_sso_login.reset_mock() |
219 | 219 |
186 | 186 | # We should get emailed about those messages |
187 | 187 | self._check_for_mail() |
188 | 188 | |
189 | def test_multiple_rooms(self): | |
190 | # We want to test multiple notifications from multiple rooms, so we pause | |
191 | # processing of push while we send messages. | |
192 | self.pusher._pause_processing() | |
193 | ||
194 | # Create two rooms, each with a different other user | |
195 | rooms = [ | |
196 | self.helper.create_room_as(self.user_id, tok=self.access_token), | |
197 | self.helper.create_room_as(self.user_id, tok=self.access_token), | |
198 | ] | |
199 | ||
200 | for r, other in zip(rooms, self.others): | |
201 | self.helper.invite( | |
202 | room=r, src=self.user_id, tok=self.access_token, targ=other.id | |
203 | ) | |
204 | self.helper.join(room=r, user=other.id, tok=other.token) | |
205 | ||
206 | # The other users send some messages | |
207 | self.helper.send(rooms[0], body="Hi!", tok=self.others[0].token) | |
208 | self.helper.send(rooms[1], body="There!", tok=self.others[1].token) | |
209 | self.helper.send(rooms[1], body="There!", tok=self.others[1].token) | |
210 | ||
211 | # Nothing should have happened yet, as we're paused. | |
212 | assert not self.email_attempts | |
213 | ||
214 | self.pusher._resume_processing() | |
215 | ||
216 | # We should get emailed about those messages | |
217 | self._check_for_mail() | |
218 | ||
189 | 219 | def test_encrypted_message(self): |
190 | 220 | room = self.helper.create_room_as(self.user_id, tok=self.access_token) |
191 | 221 | self.helper.invite( |
0 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. | |
13 | ||
14 | from typing import Iterable, Optional, Tuple | |
15 | ||
16 | from synapse.api.constants import EventTypes, Membership | |
17 | from synapse.api.room_versions import RoomVersions | |
18 | from synapse.events import FrozenEvent | |
19 | from synapse.push.presentable_names import calculate_room_name | |
20 | from synapse.types import StateKey, StateMap | |
21 | ||
22 | from tests import unittest | |
23 | ||
24 | ||
25 | class MockDataStore: | |
26 | """ | |
27 | A fake data store which stores a mapping of state key to event content. | |
28 | (I.e. the state key is used as the event ID.) | |
29 | """ | |
30 | ||
31 | def __init__(self, events: Iterable[Tuple[StateKey, dict]]): | |
32 | """ | |
33 | Args: | |
34 | events: A state map to event contents. | |
35 | """ | |
36 | self._events = {} | |
37 | ||
38 | for i, (event_id, content) in enumerate(events): | |
39 | self._events[event_id] = FrozenEvent( | |
40 | { | |
41 | "event_id": "$event_id", | |
42 | "type": event_id[0], | |
43 | "sender": "@user:test", | |
44 | "state_key": event_id[1], | |
45 | "room_id": "#room:test", | |
46 | "content": content, | |
47 | "origin_server_ts": i, | |
48 | }, | |
49 | RoomVersions.V1, | |
50 | ) | |
51 | ||
52 | async def get_event( | |
53 | self, event_id: StateKey, allow_none: bool = False | |
54 | ) -> Optional[FrozenEvent]: | |
55 | assert allow_none, "Mock not configured for allow_none = False" | |
56 | ||
57 | return self._events.get(event_id) | |
58 | ||
59 | async def get_events(self, event_ids: Iterable[StateKey]): | |
60 | # This is cheating since it just returns all events. | |
61 | return self._events | |
62 | ||
63 | ||
64 | class PresentableNamesTestCase(unittest.HomeserverTestCase): | |
65 | USER_ID = "@test:test" | |
66 | OTHER_USER_ID = "@user:test" | |
67 | ||
68 | def _calculate_room_name( | |
69 | self, | |
70 | events: StateMap[dict], | |
71 | user_id: str = "", | |
72 | fallback_to_members: bool = True, | |
73 | fallback_to_single_member: bool = True, | |
74 | ): | |
75 | # This isn't 100% accurate, but works with MockDataStore. | |
76 | room_state_ids = {k[0]: k[0] for k in events} | |
77 | ||
78 | return self.get_success( | |
79 | calculate_room_name( | |
80 | MockDataStore(events), | |
81 | room_state_ids, | |
82 | user_id or self.USER_ID, | |
83 | fallback_to_members, | |
84 | fallback_to_single_member, | |
85 | ) | |
86 | ) | |
87 | ||
88 | def test_name(self): | |
89 | """A room name event should be used.""" | |
90 | events = [ | |
91 | ((EventTypes.Name, ""), {"name": "test-name"}), | |
92 | ] | |
93 | self.assertEqual("test-name", self._calculate_room_name(events)) | |
94 | ||
95 | # Check if the event content has garbage. | |
96 | events = [((EventTypes.Name, ""), {"foo": 1})] | |
97 | self.assertEqual("Empty Room", self._calculate_room_name(events)) | |
98 | ||
99 | events = [((EventTypes.Name, ""), {"name": 1})] | |
100 | self.assertEqual(1, self._calculate_room_name(events)) | |
101 | ||
102 | def test_canonical_alias(self): | |
103 | """An canonical alias should be used.""" | |
104 | events = [ | |
105 | ((EventTypes.CanonicalAlias, ""), {"alias": "#test-name:test"}), | |
106 | ] | |
107 | self.assertEqual("#test-name:test", self._calculate_room_name(events)) | |
108 | ||
109 | # Check if the event content has garbage. | |
110 | events = [((EventTypes.CanonicalAlias, ""), {"foo": 1})] | |
111 | self.assertEqual("Empty Room", self._calculate_room_name(events)) | |
112 | ||
113 | events = [((EventTypes.CanonicalAlias, ""), {"alias": "test-name"})] | |
114 | self.assertEqual("Empty Room", self._calculate_room_name(events)) | |
115 | ||
116 | def test_invite(self): | |
117 | """An invite has special behaviour.""" | |
118 | events = [ | |
119 | ((EventTypes.Member, self.USER_ID), {"membership": Membership.INVITE}), | |
120 | ((EventTypes.Member, self.OTHER_USER_ID), {"displayname": "Other User"}), | |
121 | ] | |
122 | self.assertEqual("Invite from Other User", self._calculate_room_name(events)) | |
123 | self.assertIsNone( | |
124 | self._calculate_room_name(events, fallback_to_single_member=False) | |
125 | ) | |
126 | # Ensure this logic is skipped if we don't fall back to members. | |
127 | self.assertIsNone(self._calculate_room_name(events, fallback_to_members=False)) | |
128 | ||
129 | # Check if the event content has garbage. | |
130 | events = [ | |
131 | ((EventTypes.Member, self.USER_ID), {"membership": Membership.INVITE}), | |
132 | ((EventTypes.Member, self.OTHER_USER_ID), {"foo": 1}), | |
133 | ] | |
134 | self.assertEqual("Invite from @user:test", self._calculate_room_name(events)) | |
135 | ||
136 | # No member event for sender. | |
137 | events = [ | |
138 | ((EventTypes.Member, self.USER_ID), {"membership": Membership.INVITE}), | |
139 | ] | |
140 | self.assertEqual("Room Invite", self._calculate_room_name(events)) | |
141 | ||
142 | def test_no_members(self): | |
143 | """Behaviour of an empty room.""" | |
144 | events = [] | |
145 | self.assertEqual("Empty Room", self._calculate_room_name(events)) | |
146 | ||
147 | # Note that events with invalid (or missing) membership are ignored. | |
148 | events = [ | |
149 | ((EventTypes.Member, self.OTHER_USER_ID), {"foo": 1}), | |
150 | ((EventTypes.Member, "@foo:test"), {"membership": "foo"}), | |
151 | ] | |
152 | self.assertEqual("Empty Room", self._calculate_room_name(events)) | |
153 | ||
154 | def test_no_other_members(self): | |
155 | """Behaviour of a room with no other members in it.""" | |
156 | events = [ | |
157 | ( | |
158 | (EventTypes.Member, self.USER_ID), | |
159 | {"membership": Membership.JOIN, "displayname": "Me"}, | |
160 | ), | |
161 | ] | |
162 | self.assertEqual("Me", self._calculate_room_name(events)) | |
163 | ||
164 | # Check if the event content has no displayname. | |
165 | events = [ | |
166 | ((EventTypes.Member, self.USER_ID), {"membership": Membership.JOIN}), | |
167 | ] | |
168 | self.assertEqual("@test:test", self._calculate_room_name(events)) | |
169 | ||
170 | # 3pid invite, use the other user (who is set as the sender). | |
171 | events = [ | |
172 | ((EventTypes.Member, self.OTHER_USER_ID), {"membership": Membership.JOIN}), | |
173 | ] | |
174 | self.assertEqual( | |
175 | "nobody", self._calculate_room_name(events, user_id=self.OTHER_USER_ID) | |
176 | ) | |
177 | ||
178 | events = [ | |
179 | ((EventTypes.Member, self.OTHER_USER_ID), {"membership": Membership.JOIN}), | |
180 | ((EventTypes.ThirdPartyInvite, self.OTHER_USER_ID), {}), | |
181 | ] | |
182 | self.assertEqual( | |
183 | "Inviting email address", | |
184 | self._calculate_room_name(events, user_id=self.OTHER_USER_ID), | |
185 | ) | |
186 | ||
187 | def test_one_other_member(self): | |
188 | """Behaviour of a room with a single other member.""" | |
189 | events = [ | |
190 | ((EventTypes.Member, self.USER_ID), {"membership": Membership.JOIN}), | |
191 | ( | |
192 | (EventTypes.Member, self.OTHER_USER_ID), | |
193 | {"membership": Membership.JOIN, "displayname": "Other User"}, | |
194 | ), | |
195 | ] | |
196 | self.assertEqual("Other User", self._calculate_room_name(events)) | |
197 | self.assertIsNone( | |
198 | self._calculate_room_name(events, fallback_to_single_member=False) | |
199 | ) | |
200 | ||
201 | # Check if the event content has no displayname and is an invite. | |
202 | events = [ | |
203 | ((EventTypes.Member, self.USER_ID), {"membership": Membership.JOIN}), | |
204 | ( | |
205 | (EventTypes.Member, self.OTHER_USER_ID), | |
206 | {"membership": Membership.INVITE}, | |
207 | ), | |
208 | ] | |
209 | self.assertEqual("@user:test", self._calculate_room_name(events)) | |
210 | ||
211 | def test_other_members(self): | |
212 | """Behaviour of a room with multiple other members.""" | |
213 | # Two other members. | |
214 | events = [ | |
215 | ((EventTypes.Member, self.USER_ID), {"membership": Membership.JOIN}), | |
216 | ( | |
217 | (EventTypes.Member, self.OTHER_USER_ID), | |
218 | {"membership": Membership.JOIN, "displayname": "Other User"}, | |
219 | ), | |
220 | ((EventTypes.Member, "@foo:test"), {"membership": Membership.JOIN}), | |
221 | ] | |
222 | self.assertEqual("Other User and @foo:test", self._calculate_room_name(events)) | |
223 | ||
224 | # Three or more other members. | |
225 | events.append( | |
226 | ((EventTypes.Member, "@fourth:test"), {"membership": Membership.INVITE}) | |
227 | ) | |
228 | self.assertEqual("Other User and 2 others", self._calculate_room_name(events)) |
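The tests above pin down a fallback order for room names: explicit name, then canonical alias, then other members, then "Empty Room". A hedged sketch of that order (simplified; the real logic, with invite handling and content validation, lives in `synapse.push.presentable_names.calculate_room_name`):

```python
from typing import Optional, Sequence


def room_name(name: Optional[str] = None,
              alias: Optional[str] = None,
              other_members: Sequence[str] = ()) -> str:
    """Pick a presentable room name using the fallback order the tests exercise."""
    if name:
        return name
    if alias:
        return alias
    if other_members:
        if len(other_members) == 1:
            return other_members[0]
        if len(other_members) == 2:
            return "%s and %s" % (other_members[0], other_members[1])
        # Three or more: name one member and count the rest.
        return "%s and %d others" % (other_members[0], len(other_members) - 1)
    return "Empty Room"
```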
28 | 28 | "type": "m.room.history_visibility", |
29 | 29 | "sender": "@user:test", |
30 | 30 | "state_key": "", |
31 | "room_id": "@room:test", | |
31 | "room_id": "#room:test", | |
32 | 32 | "content": content, |
33 | 33 | }, |
34 | 34 | RoomVersions.V1, |
211 | 211 | # Fake in-memory Redis server that servers can connect to.
212 | 212 | self._redis_server = FakeRedisPubSubServer() |
213 | 213 | |
214 | # There may already be an attempt to connect to Redis for the external cache. | |
215 | self.connect_any_redis_attempts() | |
216 | ||
214 | 217 | store = self.hs.get_datastore() |
215 | 218 | self.database_pool = store.db_pool |
216 | 219 | |
400 | 403 | fake one. |
401 | 404 | """ |
402 | 405 | clients = self.reactor.tcpClients |
403 | self.assertEqual(len(clients), 1) | |
404 | (host, port, client_factory, _timeout, _bindAddress) = clients.pop(0) | |
405 | self.assertEqual(host, "localhost") | |
406 | self.assertEqual(port, 6379) | |
407 | ||
408 | client_protocol = client_factory.buildProtocol(None) | |
409 | server_protocol = self._redis_server.buildProtocol(None) | |
410 | ||
411 | client_to_server_transport = FakeTransport( | |
412 | server_protocol, self.reactor, client_protocol | |
413 | ) | |
414 | client_protocol.makeConnection(client_to_server_transport) | |
415 | ||
416 | server_to_client_transport = FakeTransport( | |
417 | client_protocol, self.reactor, server_protocol | |
418 | ) | |
419 | server_protocol.makeConnection(server_to_client_transport) | |
420 | ||
421 | return client_to_server_transport, server_to_client_transport | |
406 | while clients: | |
407 | (host, port, client_factory, _timeout, _bindAddress) = clients.pop(0) | |
408 | self.assertEqual(host, "localhost") | |
409 | self.assertEqual(port, 6379) | |
410 | ||
411 | client_protocol = client_factory.buildProtocol(None) | |
412 | server_protocol = self._redis_server.buildProtocol(None) | |
413 | ||
414 | client_to_server_transport = FakeTransport( | |
415 | server_protocol, self.reactor, client_protocol | |
416 | ) | |
417 | client_protocol.makeConnection(client_to_server_transport) | |
418 | ||
419 | server_to_client_transport = FakeTransport( | |
420 | client_protocol, self.reactor, server_protocol | |
421 | ) | |
422 | server_protocol.makeConnection(server_to_client_transport) | |
422 | 423 | |
423 | 424 | |
424 | 425 | class TestReplicationDataHandler(GenericWorkerReplicationHandler): |
623 | 624 | (channel,) = args |
624 | 625 | self._server.add_subscriber(self) |
625 | 626 | self.send(["subscribe", channel, 1]) |
627 | ||
628 | # Since we use SET/GET to cache things, we can safely no-op them. | |
629 | elif command == b"SET": | |
630 | self.send("OK") | |
631 | elif command == b"GET": | |
632 | self.send(None) | |
626 | 633 | else: |
627 | 634 | raise Exception("Unknown command") |
628 | 635 | |
644 | 651 | # We assume bytes are just unicode strings. |
645 | 652 | obj = obj.decode("utf-8") |
646 | 653 | |
654 | if obj is None: | |
655 | return "$-1\r\n" | |
647 | 656 | if isinstance(obj, str): |
648 | 657 | return "${len}\r\n{str}\r\n".format(len=len(obj), str=obj) |
649 | 658 | if isinstance(obj, int): |
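The fake server's new `$-1\r\n` case is the RESP (Redis serialization protocol) "null bulk string", which is how a real Redis answers a `GET` for a missing key. A self-contained sketch of the encoder the hunk extends:

```python
def resp_encode(obj) -> str:
    """Encode a Python value in RESP, matching the fake Redis server's subset."""
    if isinstance(obj, bytes):
        # We assume bytes are just unicode strings.
        obj = obj.decode("utf-8")

    if obj is None:
        return "$-1\r\n"          # null bulk string (e.g. GET on a missing key)
    if isinstance(obj, str):
        return "${len}\r\n{str}\r\n".format(len=len(obj), str=obj)
    if isinstance(obj, int):
        return ":{}\r\n".format(obj)
    raise TypeError("unsupported type: %r" % (obj,))
```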
1178 | 1178 | ["@admin:test", "@bar:test", "@foobar:test"], channel.json_body["members"] |
1179 | 1179 | ) |
1180 | 1180 | self.assertEqual(channel.json_body["total"], 3) |
1181 | ||
1182 | def test_room_state(self): | |
1183 | """Test that room state can be requested correctly""" | |
1184 | # Create a test room | |
1185 | room_id = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok) | |
1186 | ||
1187 | url = "/_synapse/admin/v1/rooms/%s/state" % (room_id,) | |
1188 | channel = self.make_request( | |
1189 | "GET", url.encode("ascii"), access_token=self.admin_user_tok, | |
1190 | ) | |
1191 | self.assertEqual(200, channel.code, msg=channel.json_body) | |
1192 | self.assertIn("state", channel.json_body) | |
1193 | # Testing that the state events match is painful and not done here. We assume | |
1194 | # that create_room already does the right thing, so there is no need to verify | |
1195 | # the state events it created. | |
1181 | 1196 | |
1182 | 1197 | |
1183 | 1198 | class JoinAliasRoomTestCase(unittest.HomeserverTestCase): |
27 | 27 | from synapse.api.room_versions import RoomVersions |
28 | 28 | from synapse.rest.client.v1 import login, logout, profile, room |
29 | 29 | from synapse.rest.client.v2_alpha import devices, sync |
30 | from synapse.types import JsonDict | |
30 | 31 | |
31 | 32 | from tests import unittest |
32 | 33 | from tests.test_utils import make_awaitable |
467 | 468 | self.admin_user = self.register_user("admin", "pass", admin=True) |
468 | 469 | self.admin_user_tok = self.login("admin", "pass") |
469 | 470 | |
470 | self.user1 = self.register_user( | |
471 | "user1", "pass1", admin=False, displayname="Name 1" | |
472 | ) | |
473 | self.user2 = self.register_user( | |
474 | "user2", "pass2", admin=False, displayname="Name 2" | |
475 | ) | |
476 | ||
477 | 471 | def test_no_auth(self): |
478 | 472 | """ |
479 | 473 | Try to list users without authentication. |
487 | 481 | """ |
488 | 482 | If the user is not a server admin, an error is returned. |
489 | 483 | """ |
484 | self._create_users(1) | |
490 | 485 | other_user_token = self.login("user1", "pass1") |
491 | 486 | |
492 | 487 | channel = self.make_request("GET", self.url, access_token=other_user_token) |
498 | 493 | """ |
499 | 494 | List all users, including deactivated users. |
500 | 495 | """ |
496 | self._create_users(2) | |
497 | ||
501 | 498 | channel = self.make_request( |
502 | 499 | "GET", |
503 | 500 | self.url + "?deactivated=true", |
510 | 507 | self.assertEqual(3, channel.json_body["total"]) |
511 | 508 | |
512 | 509 | # Check that all fields are available |
513 | for u in channel.json_body["users"]: | |
514 | self.assertIn("name", u) | |
515 | self.assertIn("is_guest", u) | |
516 | self.assertIn("admin", u) | |
517 | self.assertIn("user_type", u) | |
518 | self.assertIn("deactivated", u) | |
519 | self.assertIn("displayname", u) | |
520 | self.assertIn("avatar_url", u) | |
510 | self._check_fields(channel.json_body["users"]) | |
521 | 511 | |
522 | 512 | def test_search_term(self): |
523 | 513 | """Test that searching for a users works correctly""" |
548 | 538 | |
549 | 539 | # Check that users were returned |
550 | 540 | self.assertTrue("users" in channel.json_body) |
541 | self._check_fields(channel.json_body["users"]) | |
551 | 542 | users = channel.json_body["users"] |
552 | 543 | |
553 | 544 | # Check that the expected number of users were returned |
560 | 551 | u = users[0] |
561 | 552 | self.assertEqual(expected_user_id, u["name"]) |
562 | 553 | |
554 | self._create_users(2) | |
555 | ||
556 | user1 = "@user1:test" | |
557 | user2 = "@user2:test" | |
558 | ||
563 | 559 | # Perform search tests |
564 | _search_test(self.user1, "er1") | |
565 | _search_test(self.user1, "me 1") | |
566 | ||
567 | _search_test(self.user2, "er2") | |
568 | _search_test(self.user2, "me 2") | |
569 | ||
570 | _search_test(self.user1, "er1", "user_id") | |
571 | _search_test(self.user2, "er2", "user_id") | |
560 | _search_test(user1, "er1") | |
561 | _search_test(user1, "me 1") | |
562 | ||
563 | _search_test(user2, "er2") | |
564 | _search_test(user2, "me 2") | |
565 | ||
566 | _search_test(user1, "er1", "user_id") | |
567 | _search_test(user2, "er2", "user_id") | |
572 | 568 | |
573 | 569 | # Test case insensitive |
574 | _search_test(self.user1, "ER1") | |
575 | _search_test(self.user1, "NAME 1") | |
576 | ||
577 | _search_test(self.user2, "ER2") | |
578 | _search_test(self.user2, "NAME 2") | |
579 | ||
580 | _search_test(self.user1, "ER1", "user_id") | |
581 | _search_test(self.user2, "ER2", "user_id") | |
570 | _search_test(user1, "ER1") | |
571 | _search_test(user1, "NAME 1") | |
572 | ||
573 | _search_test(user2, "ER2") | |
574 | _search_test(user2, "NAME 2") | |
575 | ||
576 | _search_test(user1, "ER1", "user_id") | |
577 | _search_test(user2, "ER2", "user_id") | |
582 | 578 | |
583 | 579 | _search_test(None, "foo") |
584 | 580 | _search_test(None, "bar") |
585 | 581 | |
586 | 582 | _search_test(None, "foo", "user_id") |
587 | 583 | _search_test(None, "bar", "user_id") |
584 | ||
585 | def test_invalid_parameter(self): | |
586 | """ | |
587 | If parameters are invalid, an error is returned. | |
588 | """ | |
589 | ||
590 | # negative limit | |
591 | channel = self.make_request( | |
592 | "GET", self.url + "?limit=-5", access_token=self.admin_user_tok, | |
593 | ) | |
594 | ||
595 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) | |
596 | self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"]) | |
597 | ||
598 | # negative from | |
599 | channel = self.make_request( | |
600 | "GET", self.url + "?from=-5", access_token=self.admin_user_tok, | |
601 | ) | |
602 | ||
603 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) | |
604 | self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"]) | |
605 | ||
606 | # invalid guests | |
607 | channel = self.make_request( | |
608 | "GET", self.url + "?guests=not_bool", access_token=self.admin_user_tok, | |
609 | ) | |
610 | ||
611 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) | |
612 | self.assertEqual(Codes.UNKNOWN, channel.json_body["errcode"]) | |
613 | ||
614 | # invalid deactivated | |
615 | channel = self.make_request( | |
616 | "GET", self.url + "?deactivated=not_bool", access_token=self.admin_user_tok, | |
617 | ) | |
618 | ||
619 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) | |
620 | self.assertEqual(Codes.UNKNOWN, channel.json_body["errcode"]) | |
621 | ||
622 | def test_limit(self): | |
623 | """ | |
624 | Testing list of users with limit | |
625 | """ | |
626 | ||
627 | number_users = 20 | |
628 | # Create one less user (since there's already an admin user). | |
629 | self._create_users(number_users - 1) | |
630 | ||
631 | channel = self.make_request( | |
632 | "GET", self.url + "?limit=5", access_token=self.admin_user_tok, | |
633 | ) | |
634 | ||
635 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
636 | self.assertEqual(channel.json_body["total"], number_users) | |
637 | self.assertEqual(len(channel.json_body["users"]), 5) | |
638 | self.assertEqual(channel.json_body["next_token"], "5") | |
639 | self._check_fields(channel.json_body["users"]) | |
640 | ||
641 | def test_from(self): | |
642 | """ | |
643 | Testing list of users with a defined starting point (from) | |
644 | """ | |
645 | ||
646 | number_users = 20 | |
647 | # Create one less user (since there's already an admin user). | |
648 | self._create_users(number_users - 1) | |
649 | ||
650 | channel = self.make_request( | |
651 | "GET", self.url + "?from=5", access_token=self.admin_user_tok, | |
652 | ) | |
653 | ||
654 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
655 | self.assertEqual(channel.json_body["total"], number_users) | |
656 | self.assertEqual(len(channel.json_body["users"]), 15) | |
657 | self.assertNotIn("next_token", channel.json_body) | |
658 | self._check_fields(channel.json_body["users"]) | |
659 | ||
660 | def test_limit_and_from(self): | |
661 | """ | |
662 | Testing list of users with a defined starting point and limit | |
663 | """ | |
664 | ||
665 | number_users = 20 | |
666 | # Create one less user (since there's already an admin user). | |
667 | self._create_users(number_users - 1) | |
668 | ||
669 | channel = self.make_request( | |
670 | "GET", self.url + "?from=5&limit=10", access_token=self.admin_user_tok, | |
671 | ) | |
672 | ||
673 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
674 | self.assertEqual(channel.json_body["total"], number_users) | |
675 | self.assertEqual(channel.json_body["next_token"], "15") | |
676 | self.assertEqual(len(channel.json_body["users"]), 10) | |
677 | self._check_fields(channel.json_body["users"]) | |
678 | ||
679 | def test_next_token(self): | |
680 | """ | |
681 | Testing that `next_token` appears at the right place | |
682 | """ | |
683 | ||
684 | number_users = 20 | |
685 | # Create one less user (since there's already an admin user). | |
686 | self._create_users(number_users - 1) | |
687 | ||
688 | # `next_token` does not appear | |
689 | # Number of results is the number of entries | |
690 | channel = self.make_request( | |
691 | "GET", self.url + "?limit=20", access_token=self.admin_user_tok, | |
692 | ) | |
693 | ||
694 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
695 | self.assertEqual(channel.json_body["total"], number_users) | |
696 | self.assertEqual(len(channel.json_body["users"]), number_users) | |
697 | self.assertNotIn("next_token", channel.json_body) | |
698 | ||
699 | # `next_token` does not appear | |
700 | # The maximum number of results is larger than the number of entries | 
701 | channel = self.make_request( | |
702 | "GET", self.url + "?limit=21", access_token=self.admin_user_tok, | |
703 | ) | |
704 | ||
705 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
706 | self.assertEqual(channel.json_body["total"], number_users) | |
707 | self.assertEqual(len(channel.json_body["users"]), number_users) | |
708 | self.assertNotIn("next_token", channel.json_body) | |
709 | ||
710 | # `next_token` does appear | |
711 | # The maximum number of results is smaller than the number of entries | 
712 | channel = self.make_request( | |
713 | "GET", self.url + "?limit=19", access_token=self.admin_user_tok, | |
714 | ) | |
715 | ||
716 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
717 | self.assertEqual(channel.json_body["total"], number_users) | |
718 | self.assertEqual(len(channel.json_body["users"]), 19) | |
719 | self.assertEqual(channel.json_body["next_token"], "19") | |
720 | ||
721 | # Check | |
722 | # Set `from` to the value of `next_token` to request the remaining entries | 
723 | # `next_token` does not appear | |
724 | channel = self.make_request( | |
725 | "GET", self.url + "?from=19", access_token=self.admin_user_tok, | |
726 | ) | |
727 | ||
728 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
729 | self.assertEqual(channel.json_body["total"], number_users) | |
730 | self.assertEqual(len(channel.json_body["users"]), 1) | |
731 | self.assertNotIn("next_token", channel.json_body) | |
732 | ||
733 | def _check_fields(self, content: JsonDict): | |
734 | """Checks that the expected user attributes are present in content | |
735 | Args: | |
736 | content: List that is checked for content | |
737 | """ | |
738 | for u in content: | |
739 | self.assertIn("name", u) | |
740 | self.assertIn("is_guest", u) | |
741 | self.assertIn("admin", u) | |
742 | self.assertIn("user_type", u) | |
743 | self.assertIn("deactivated", u) | |
744 | self.assertIn("displayname", u) | |
745 | self.assertIn("avatar_url", u) | |
746 | ||
747 | def _create_users(self, number_users: int): | |
748 | """ | |
749 | Create a number of users | |
750 | Args: | |
751 | number_users: Number of users to be created | |
752 | """ | |
753 | for i in range(1, number_users + 1): | |
754 | self.register_user( | |
755 | "user%d" % i, "pass%d" % i, admin=False, displayname="Name %d" % i, | |
756 | ) | |
588 | 757 | |
589 | 758 | |
590 | 759 | class DeactivateAccountTestCase(unittest.HomeserverTestCase): |
2210 | 2379 | self.assertEqual(200, channel.code, msg=channel.json_body) |
2211 | 2380 | self.assertEqual(self.other_user, channel.json_body["user_id"]) |
2212 | 2381 | self.assertIn("devices", channel.json_body) |
2382 | ||
2383 | ||
2384 | class ShadowBanRestTestCase(unittest.HomeserverTestCase): | |
2385 | ||
2386 | servlets = [ | |
2387 | synapse.rest.admin.register_servlets, | |
2388 | login.register_servlets, | |
2389 | ] | |
2390 | ||
2391 | def prepare(self, reactor, clock, hs): | |
2392 | self.store = hs.get_datastore() | |
2393 | ||
2394 | self.admin_user = self.register_user("admin", "pass", admin=True) | |
2395 | self.admin_user_tok = self.login("admin", "pass") | |
2396 | ||
2397 | self.other_user = self.register_user("user", "pass") | |
2398 | ||
2399 | self.url = "/_synapse/admin/v1/users/%s/shadow_ban" % urllib.parse.quote( | |
2400 | self.other_user | |
2401 | ) | |
2402 | ||
2403 | def test_no_auth(self): | |
2404 | """ | |
2405 | Try to shadow-ban a user without authentication. | 
2406 | """ | |
2407 | channel = self.make_request("POST", self.url) | |
2408 | self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"]) | |
2409 | self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"]) | |
2410 | ||
2411 | def test_requester_is_not_admin(self): | |
2412 | """ | |
2413 | If the user is not a server admin, an error is returned. | |
2414 | """ | |
2415 | other_user_token = self.login("user", "pass") | |
2416 | ||
2417 | channel = self.make_request("POST", self.url, access_token=other_user_token) | |
2418 | self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"]) | |
2419 | self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"]) | |
2420 | ||
2421 | def test_user_is_not_local(self): | |
2422 | """ | |
2423 | Tests that shadow-banning a user who is not local returns a 400 | 
2424 | """ | |
2425 | url = "/_synapse/admin/v1/whois/@unknown_person:unknown_domain" | |
2426 | ||
2427 | channel = self.make_request("POST", url, access_token=self.admin_user_tok) | |
2428 | self.assertEqual(400, channel.code, msg=channel.json_body) | |
2429 | ||
2430 | def test_success(self): | |
2431 | """ | |
2432 | Shadow-banning should succeed for an admin. | |
2433 | """ | |
2434 | # The user starts off as not shadow-banned. | |
2435 | other_user_token = self.login("user", "pass") | |
2436 | result = self.get_success(self.store.get_user_by_access_token(other_user_token)) | |
2437 | self.assertFalse(result.shadow_banned) | |
2438 | ||
2439 | channel = self.make_request("POST", self.url, access_token=self.admin_user_tok) | |
2440 | self.assertEqual(200, channel.code, msg=channel.json_body) | |
2441 | self.assertEqual({}, channel.json_body) | |
2442 | ||
2443 | # Ensure the user is shadow-banned (and the cache was cleared). | |
2444 | result = self.get_success(self.store.get_user_by_access_token(other_user_token)) | |
2445 | self.assertTrue(result.shadow_banned) |
17 | 17 | from synapse.api.constants import EventTypes |
18 | 18 | from synapse.rest.client.v1 import directory, login, profile, room |
19 | 19 | from synapse.rest.client.v2_alpha import room_upgrade_rest_servlet |
20 | from synapse.types import UserID | |
20 | 21 | |
21 | 22 | from tests import unittest |
22 | 23 | |
30 | 31 | self.store = self.hs.get_datastore() |
31 | 32 | |
32 | 33 | self.get_success( |
33 | self.store.db_pool.simple_update( | |
34 | table="users", | |
35 | keyvalues={"name": self.banned_user_id}, | |
36 | updatevalues={"shadow_banned": True}, | |
37 | desc="shadow_ban", | |
38 | ) | |
34 | self.store.set_shadow_banned(UserID.from_string(self.banned_user_id), True) | |
39 | 35 | ) |
40 | 36 | |
41 | 37 | self.other_user_id = self.register_user("otheruser", "pass") |
28 | 28 | from synapse.rest.client.v1 import login, logout |
29 | 29 | from synapse.rest.client.v2_alpha import devices, register |
30 | 30 | from synapse.rest.client.v2_alpha.account import WhoamiRestServlet |
31 | from synapse.rest.synapse.client.pick_idp import PickIdpResource | |
32 | from synapse.rest.synapse.client.pick_username import pick_username_resource | |
31 | from synapse.rest.synapse.client import build_synapse_client_resource_tree | |
33 | 32 | from synapse.types import create_requester |
34 | 33 | |
35 | 34 | from tests import unittest |
73 | 72 | |
74 | 73 | # the query params in TEST_CLIENT_REDIRECT_URL |
75 | 74 | EXPECTED_CLIENT_REDIRECT_URL_PARAMS = [("<ab c>", ""), ('q" =+"', '"fö&=o"')] |
75 | ||
76 | # (possibly experimental) login flows we expect to appear in the list after the normal | |
77 | # ones | |
78 | ADDITIONAL_LOGIN_FLOWS = [{"type": "uk.half-shot.msc2778.login.application_service"}] | |
76 | 79 | |
77 | 80 | |
78 | 81 | class LoginRestServletTestCase(unittest.HomeserverTestCase): |
418 | 421 | return config |
419 | 422 | |
420 | 423 | def create_resource_dict(self) -> Dict[str, Resource]: |
421 | from synapse.rest.oidc import OIDCResource | |
422 | ||
423 | 424 | d = super().create_resource_dict() |
424 | d["/_synapse/client/pick_idp"] = PickIdpResource(self.hs) | |
425 | d["/_synapse/oidc"] = OIDCResource(self.hs) | |
425 | d.update(build_synapse_client_resource_tree(self.hs)) | |
426 | 426 | return d |
427 | ||
428 | def test_get_login_flows(self): | |
429 | """GET /login should return password and SSO flows""" | |
430 | channel = self.make_request("GET", "/_matrix/client/r0/login") | |
431 | self.assertEqual(channel.code, 200, channel.result) | |
432 | ||
433 | expected_flows = [ | |
434 | {"type": "m.login.cas"}, | |
435 | {"type": "m.login.sso"}, | |
436 | {"type": "m.login.token"}, | |
437 | {"type": "m.login.password"}, | |
438 | ] + ADDITIONAL_LOGIN_FLOWS | |
439 | ||
440 | self.assertCountEqual(channel.json_body["flows"], expected_flows) | |
441 | ||
442 | @override_config({"experimental_features": {"msc2858_enabled": True}}) | |
443 | def test_get_msc2858_login_flows(self): | |
444 | """The SSO flow should include IdP info if MSC2858 is enabled""" | |
445 | channel = self.make_request("GET", "/_matrix/client/r0/login") | |
446 | self.assertEqual(channel.code, 200, channel.result) | |
447 | ||
448 | # stick the flow results in a dict, keyed by type | 
449 | flow_results = {} # type: Dict[str, Any] | |
450 | for f in channel.json_body["flows"]: | |
451 | flow_type = f["type"] | |
452 | self.assertNotIn( | |
453 | flow_type, flow_results, "duplicate flow type %s" % (flow_type,) | |
454 | ) | |
455 | flow_results[flow_type] = f | |
456 | ||
457 | self.assertIn("m.login.sso", flow_results, "m.login.sso was not returned") | |
458 | sso_flow = flow_results.pop("m.login.sso") | |
459 | # we should have a set of IdPs | |
460 | self.assertCountEqual( | |
461 | sso_flow["org.matrix.msc2858.identity_providers"], | |
462 | [ | |
463 | {"id": "cas", "name": "CAS"}, | |
464 | {"id": "saml", "name": "SAML"}, | |
465 | {"id": "oidc-idp1", "name": "IDP1"}, | |
466 | {"id": "oidc", "name": "OIDC"}, | |
467 | ], | |
468 | ) | |
469 | ||
470 | # the rest of the flows are simple | |
471 | expected_flows = [ | |
472 | {"type": "m.login.cas"}, | |
473 | {"type": "m.login.token"}, | |
474 | {"type": "m.login.password"}, | |
475 | ] + ADDITIONAL_LOGIN_FLOWS | |
476 | ||
477 | self.assertCountEqual(flow_results.values(), expected_flows) | |
427 | 478 | |
428 | 479 | def test_multi_sso_redirect(self): |
429 | 480 | """/login/sso/redirect should redirect to an identity picker""" |
563 | 614 | ) |
564 | 615 | self.assertEqual(channel.code, 400, channel.result) |
565 | 616 | |
617 | def test_client_idp_redirect_msc2858_disabled(self): | |
618 | """If the client tries to pick an IdP but MSC2858 is disabled, return a 400""" | |
619 | channel = self.make_request( | |
620 | "GET", | |
621 | "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/oidc?redirectUrl=" | |
622 | + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL), | |
623 | ) | |
624 | self.assertEqual(channel.code, 400, channel.result) | |
625 | self.assertEqual(channel.json_body["errcode"], "M_UNRECOGNIZED") | |
626 | ||
627 | @override_config({"experimental_features": {"msc2858_enabled": True}}) | |
628 | def test_client_idp_redirect_to_unknown(self): | |
629 | """If the client tries to pick an unknown IdP, return a 404""" | |
630 | channel = self.make_request( | |
631 | "GET", | |
632 | "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/xxx?redirectUrl=" | |
633 | + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL), | |
634 | ) | |
635 | self.assertEqual(channel.code, 404, channel.result) | |
636 | self.assertEqual(channel.json_body["errcode"], "M_NOT_FOUND") | |
637 | ||
638 | @override_config({"experimental_features": {"msc2858_enabled": True}}) | |
639 | def test_client_idp_redirect_to_oidc(self): | |
640 | """If the client pick a known IdP, redirect to it""" | |
641 | channel = self.make_request( | |
642 | "GET", | |
643 | "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/oidc?redirectUrl=" | |
644 | + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL), | |
645 | ) | |
646 | ||
647 | self.assertEqual(channel.code, 302, channel.result) | |
648 | oidc_uri = channel.headers.getRawHeaders("Location")[0] | |
649 | oidc_uri_path, oidc_uri_query = oidc_uri.split("?", 1) | |
650 | ||
651 | # it should redirect us to the auth page of the OIDC server | |
652 | self.assertEqual(oidc_uri_path, TEST_OIDC_AUTH_ENDPOINT) | |
653 | ||
566 | 654 | @staticmethod |
567 | 655 | def _get_value_from_macaroon(macaroon: pymacaroons.Macaroon, key: str) -> str: |
568 | 656 | prefix = key + " = " |
583 | 671 | self.redirect_path = "_synapse/client/login/sso/redirect/confirm" |
584 | 672 | |
585 | 673 | config = self.default_config() |
674 | config["public_baseurl"] = ( | |
675 | config.get("public_baseurl") or "https://matrix.goodserver.com:8448" | |
676 | ) | |
586 | 677 | config["cas_config"] = { |
587 | 678 | "enabled": True, |
588 | 679 | "server_url": CAS_SERVER, |
589 | "service_url": "https://matrix.goodserver.com:8448", | |
590 | 680 | } |
591 | 681 | |
592 | 682 | cas_user_id = "username" |
1118 | 1208 | return config |
1119 | 1209 | |
1120 | 1210 | def create_resource_dict(self) -> Dict[str, Resource]: |
1121 | from synapse.rest.oidc import OIDCResource | |
1122 | ||
1123 | 1211 | d = super().create_resource_dict() |
1124 | d["/_synapse/client/pick_username"] = pick_username_resource(self.hs) | |
1125 | d["/_synapse/oidc"] = OIDCResource(self.hs) | |
1212 | d.update(build_synapse_client_resource_tree(self.hs)) | |
1126 | 1213 | return d |
1127 | 1214 | |
1128 | 1215 | def test_username_picker(self): |
1136 | 1223 | # that should redirect to the username picker |
1137 | 1224 | self.assertEqual(channel.code, 302, channel.result) |
1138 | 1225 | picker_url = channel.headers.getRawHeaders("Location")[0] |
1139 | self.assertEqual(picker_url, "/_synapse/client/pick_username") | |
1226 | self.assertEqual(picker_url, "/_synapse/client/pick_username/account_details") | |
1140 | 1227 | |
1141 | 1228 | # ... with a username_mapping_session cookie |
1142 | 1229 | cookies = {} # type: Dict[str,str] |
1160 | 1247 | self.assertApproximates(session.expiry_time_ms, expected_expiry, tolerance=1000) |
1161 | 1248 | |
1162 | 1249 | # Now, submit a username to the username picker, which should serve a redirect |
1163 | # back to the client | |
1164 | submit_path = picker_url + "/submit" | |
1250 | # to the completion page | |
1165 | 1251 | content = urlencode({b"username": b"bobby"}).encode("utf8") |
1166 | 1252 | chan = self.make_request( |
1167 | 1253 | "POST", |
1168 | path=submit_path, | |
1254 | path=picker_url, | |
1169 | 1255 | content=content, |
1170 | 1256 | content_is_form=True, |
1171 | 1257 | custom_headers=[ |
1177 | 1263 | ) |
1178 | 1264 | self.assertEqual(chan.code, 302, chan.result) |
1179 | 1265 | location_headers = chan.headers.getRawHeaders("Location") |
1266 | ||
1267 | # send a request to the completion page, which should 302 to the client redirectUrl | |
1268 | chan = self.make_request( | |
1269 | "GET", | |
1270 | path=location_headers[0], | |
1271 | custom_headers=[("Cookie", "username_mapping_session=" + session_id)], | |
1272 | ) | |
1273 | self.assertEqual(chan.code, 302, chan.result) | |
1274 | location_headers = chan.headers.getRawHeaders("Location") | |
1275 | ||
1180 | 1276 | # ensure that the returned location matches the requested redirect URL |
1181 | 1277 | path, query = location_headers[0].split("?", 1) |
1182 | 1278 | self.assertEqual(path, "https://x") |
615 | 615 | self.assertEquals(json.loads(content), channel.json_body) |
616 | 616 | |
617 | 617 | |
618 | class RoomInviteRatelimitTestCase(RoomBase): | |
619 | user_id = "@sid1:red" | |
620 | ||
621 | servlets = [ | |
622 | admin.register_servlets, | |
623 | profile.register_servlets, | |
624 | room.register_servlets, | |
625 | ] | |
626 | ||
627 | @unittest.override_config( | |
628 | {"rc_invites": {"per_room": {"per_second": 0.5, "burst_count": 3}}} | |
629 | ) | |
630 | def test_invites_by_rooms_ratelimit(self): | |
631 | """Tests that invites in a room are actually rate-limited.""" | |
632 | room_id = self.helper.create_room_as(self.user_id) | |
633 | ||
634 | for i in range(3): | |
635 | self.helper.invite(room_id, self.user_id, "@user-%s:red" % (i,)) | |
636 | ||
637 | self.helper.invite(room_id, self.user_id, "@user-4:red", expect_code=429) | |
638 | ||
639 | @unittest.override_config( | |
640 | {"rc_invites": {"per_user": {"per_second": 0.5, "burst_count": 3}}} | |
641 | ) | |
642 | def test_invites_by_users_ratelimit(self): | |
643 | """Tests that invites to a specific user are actually rate-limited.""" | |
644 | ||
645 | for i in range(3): | |
646 | room_id = self.helper.create_room_as(self.user_id) | |
647 | self.helper.invite(room_id, self.user_id, "@other-users:red") | |
648 | ||
649 | room_id = self.helper.create_room_as(self.user_id) | |
650 | self.helper.invite(room_id, self.user_id, "@other-users:red", expect_code=429) | |
651 | ||
652 | ||
618 | 653 | class RoomJoinRatelimitTestCase(RoomBase): |
619 | 654 | user_id = "@sid1:red" |
620 | 655 |
23 | 23 | |
24 | 24 | import synapse.rest.admin |
25 | 25 | from synapse.api.constants import LoginType, Membership |
26 | from synapse.api.errors import Codes | |
26 | from synapse.api.errors import Codes, HttpResponseException | |
27 | 27 | from synapse.rest.client.v1 import login, room |
28 | 28 | from synapse.rest.client.v2_alpha import account, register |
29 | 29 | from synapse.rest.synapse.client.password_reset import PasswordResetSubmitTokenResource |
111 | 111 | # Assert we can't log in with the old password |
112 | 112 | self.attempt_wrong_password_login("kermit", old_password) |
113 | 113 | |
114 | @override_config({"rc_3pid_validation": {"burst_count": 3}}) | |
115 | def test_ratelimit_by_email(self): | |
116 | """Test that we ratelimit /requestToken for the same email. | |
117 | """ | |
118 | old_password = "monkey" | |
119 | new_password = "kangeroo" | |
120 | ||
121 | user_id = self.register_user("kermit", old_password) | |
122 | self.login("kermit", old_password) | |
123 | ||
124 | email = "test1@example.com" | |
125 | ||
126 | # Add a threepid | |
127 | self.get_success( | |
128 | self.store.user_add_threepid( | |
129 | user_id=user_id, | |
130 | medium="email", | |
131 | address=email, | |
132 | validated_at=0, | |
133 | added_at=0, | |
134 | ) | |
135 | ) | |
136 | ||
137 | def reset(ip): | |
138 | client_secret = "foobar" | |
139 | session_id = self._request_token(email, client_secret, ip) | |
140 | ||
141 | self.assertEquals(len(self.email_attempts), 1) | |
142 | link = self._get_link_from_email() | |
143 | ||
144 | self._validate_token(link) | |
145 | ||
146 | self._reset_password(new_password, session_id, client_secret) | |
147 | ||
148 | self.email_attempts.clear() | |
149 | ||
150 | # We expect to be able to make three requests before getting rate | |
151 | # limited. | |
152 | # | |
153 | # We change IPs to ensure that we're not being ratelimited due to the | |
154 | # same IP | |
155 | reset("127.0.0.1") | |
156 | reset("127.0.0.2") | |
157 | reset("127.0.0.3") | |
158 | ||
159 | with self.assertRaises(HttpResponseException) as cm: | |
160 | reset("127.0.0.4") | |
161 | ||
162 | self.assertEqual(cm.exception.code, 429) | |
163 | ||
114 | 164 | def test_basic_password_reset_canonicalise_email(self): |
115 | 165 | """Test basic password reset flow |
116 | 166 | Request password reset with different spelling |
238 | 288 | |
239 | 289 | self.assertIsNotNone(session_id) |
240 | 290 | |
241 | def _request_token(self, email, client_secret): | |
291 | def _request_token(self, email, client_secret, ip="127.0.0.1"): | |
242 | 292 | channel = self.make_request( |
243 | 293 | "POST", |
244 | 294 | b"account/password/email/requestToken", |
245 | 295 | {"client_secret": client_secret, "email": email, "send_attempt": 1}, |
246 | ) | |
247 | self.assertEquals(200, channel.code, channel.result) | |
296 | client_ip=ip, | |
297 | ) | |
298 | ||
299 | if channel.code != 200: | |
300 | raise HttpResponseException( | |
301 | channel.code, channel.result["reason"], channel.result["body"], | |
302 | ) | |
248 | 303 | |
249 | 304 | return channel.json_body["sid"] |
250 | 305 | |
508 | 563 | def test_address_trim(self): |
509 | 564 | self.get_success(self._add_email(" foo@test.bar ", "foo@test.bar")) |
510 | 565 | |
566 | @override_config({"rc_3pid_validation": {"burst_count": 3}}) | |
567 | def test_ratelimit_by_ip(self): | |
568 | """Tests that adding emails is ratelimited by IP | |
569 | """ | |
570 | ||
571 | # We expect to be able to set three emails before getting ratelimited. | |
572 | self.get_success(self._add_email("foo1@test.bar", "foo1@test.bar")) | |
573 | self.get_success(self._add_email("foo2@test.bar", "foo2@test.bar")) | |
574 | self.get_success(self._add_email("foo3@test.bar", "foo3@test.bar")) | |
575 | ||
576 | with self.assertRaises(HttpResponseException) as cm: | |
577 | self.get_success(self._add_email("foo4@test.bar", "foo4@test.bar")) | |
578 | ||
579 | self.assertEqual(cm.exception.code, 429) | |
580 | ||
511 | 581 | def test_add_email_if_disabled(self): |
512 | 582 | """Test adding email to profile when doing so is disallowed |
513 | 583 | """ |
776 | 846 | body["next_link"] = next_link |
777 | 847 | |
778 | 848 | channel = self.make_request("POST", b"account/3pid/email/requestToken", body,) |
779 | self.assertEquals(expect_code, channel.code, channel.result) | |
849 | ||
850 | if channel.code != expect_code: | |
851 | raise HttpResponseException( | |
852 | channel.code, channel.result["reason"], channel.result["body"], | |
853 | ) | |
780 | 854 | |
781 | 855 | return channel.json_body.get("sid") |
782 | 856 | |
822 | 896 | def _add_email(self, request_email, expected_email): |
823 | 897 | """Test adding an email to profile |
824 | 898 | """ |
899 | previous_email_attempts = len(self.email_attempts) | |
900 | ||
825 | 901 | client_secret = "foobar" |
826 | 902 | session_id = self._request_token(request_email, client_secret) |
827 | 903 | |
828 | self.assertEquals(len(self.email_attempts), 1) | |
904 | self.assertEquals(len(self.email_attempts) - previous_email_attempts, 1) | |
829 | 905 | link = self._get_link_from_email() |
830 | 906 | |
831 | 907 | self._validate_token(link) |
854 | 930 | |
855 | 931 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
856 | 932 | self.assertEqual("email", channel.json_body["threepids"][0]["medium"]) |
857 | self.assertEqual(expected_email, channel.json_body["threepids"][0]["address"]) | |
933 | ||
934 | threepids = {threepid["address"] for threepid in channel.json_body["threepids"]} | |
935 | self.assertIn(expected_email, threepids) |
21 | 21 | from synapse.handlers.ui_auth.checkers import UserInteractiveAuthChecker |
22 | 22 | from synapse.rest.client.v1 import login |
23 | 23 | from synapse.rest.client.v2_alpha import auth, devices, register |
24 | from synapse.rest.oidc import OIDCResource | |
24 | from synapse.rest.synapse.client import build_synapse_client_resource_tree | |
25 | 25 | from synapse.types import JsonDict, UserID |
26 | 26 | |
27 | 27 | from tests import unittest |
172 | 172 | |
173 | 173 | def create_resource_dict(self): |
174 | 174 | resource_dict = super().create_resource_dict() |
175 | if HAS_OIDC: | |
176 | # mount the OIDC resource at /_synapse/oidc | |
177 | resource_dict["/_synapse/oidc"] = OIDCResource(self.hs) | |
175 | resource_dict.update(build_synapse_client_resource_tree(self.hs)) | |
178 | 176 | return resource_dict |
179 | 177 | |
180 | 178 | def prepare(self, reactor, clock, hs): |
201 | 201 | |
202 | 202 | config = self.default_config() |
203 | 203 | config["media_store_path"] = self.media_store_path |
204 | config["thumbnail_requirements"] = {} | |
205 | 204 | config["max_image_pixels"] = 2000000 |
206 | 205 | |
207 | 206 | provider_config = { |
312 | 311 | self.assertEqual(headers.getRawHeaders(b"Content-Disposition"), None) |
313 | 312 | |
314 | 313 | def test_thumbnail_crop(self): |
314 | """Test that a cropped remote thumbnail is available.""" | |
315 | 315 | self._test_thumbnail( |
316 | 316 | "crop", self.test_image.expected_cropped, self.test_image.expected_found |
317 | 317 | ) |
318 | 318 | |
319 | 319 | def test_thumbnail_scale(self): |
320 | """Test that a scaled remote thumbnail is available.""" | |
320 | 321 | self._test_thumbnail( |
321 | 322 | "scale", self.test_image.expected_scaled, self.test_image.expected_found |
322 | 323 | ) |
324 | ||
325 | def test_invalid_type(self): | |
326 | """An invalid thumbnail type is never available.""" | |
327 | self._test_thumbnail("invalid", None, False) | |
328 | ||
329 | @unittest.override_config( | |
330 | {"thumbnail_sizes": [{"width": 32, "height": 32, "method": "scale"}]} | |
331 | ) | |
332 | def test_no_thumbnail_crop(self): | |
333 | """ | |
334 | Override the config to generate only scaled thumbnails, but request a cropped one. | |
335 | """ | |
336 | self._test_thumbnail("crop", None, False) | |
337 | ||
338 | @unittest.override_config( | |
339 | {"thumbnail_sizes": [{"width": 32, "height": 32, "method": "crop"}]} | |
340 | ) | |
341 | def test_no_thumbnail_scale(self): | |
342 | """ | |
343 | Override the config to generate only cropped thumbnails, but request a scaled one. | |
344 | """ | |
345 | self._test_thumbnail("scale", None, False) | |
323 | 346 | |
324 | 347 | def _test_thumbnail(self, method, expected_body, expected_found): |
325 | 348 | params = "?width=32&height=32&method=" + method |
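The thumbnail tests above distinguish the "scale" method (fit inside the requested box, preserving aspect ratio) from "crop" (fill the box exactly), and the new `override_config` cases assert that only pre-generated methods are served. As a minimal sketch of the dimension math a "scale" thumbnail implies (illustrative only, not Synapse's actual implementation):

```python
def scale_dimensions(width, height, max_width, max_height):
    """Fit (width, height) inside (max_width, max_height), keeping aspect ratio.

    Sketch of the intent of a "scale" thumbnail: the result never exceeds
    the requested box and preserves the source aspect ratio.
    """
    if width <= max_width and height <= max_height:
        # Already small enough; don't upscale.
        return width, height
    ratio = min(max_width / width, max_height / height)
    return max(1, int(width * ratio)), max(1, int(height * ratio))
```

A "crop" thumbnail, by contrast, would scale to cover the box and then trim the overflow, which is why the two methods are cached and requested separately.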
39 | 39 | "m.identity_server": {"base_url": "https://testis"}, |
40 | 40 | }, |
41 | 41 | ) |
42 | ||
43 | def test_well_known_no_public_baseurl(self): | |
44 | self.hs.config.public_baseurl = None | |
45 | ||
46 | channel = self.make_request( | |
47 | "GET", "/.well-known/matrix/client", shorthand=False | |
48 | ) | |
49 | ||
50 | self.assertEqual(channel.code, 404) |
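This new test asserts that `/.well-known/matrix/client` returns a 404 once `public_baseurl` is unset (consistent with the v1.26.0 auto-calculation being reverted). A hedged sketch of the handler logic the test exercises, with hypothetical names rather than Synapse's actual code:

```python
import json

def well_known_response(public_baseurl):
    """Return (status, body) for /.well-known/matrix/client.

    Sketch of the behaviour the test above asserts: with no public_baseurl
    configured there is nothing useful to advertise, so respond 404.
    """
    if public_baseurl is None:
        return 404, None
    return 200, json.dumps({"m.homeserver": {"base_url": public_baseurl}})
```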
46 | 46 | site = attr.ib(type=Site) |
47 | 47 | _reactor = attr.ib() |
48 | 48 | result = attr.ib(type=dict, default=attr.Factory(dict)) |
49 | _ip = attr.ib(type=str, default="127.0.0.1") | |
49 | 50 | _producer = None |
50 | 51 | |
51 | 52 | @property |
119 | 120 | def getPeer(self): |
120 | 121 | # We give an address so that getClientIP returns a non-null entry,
121 | 122 | # causing us to record the MAU
122 | return address.IPv4Address("TCP", "127.0.0.1", 3423) | |
123 | return address.IPv4Address("TCP", self._ip, 3423) | |
123 | 124 | |
124 | 125 | def getHost(self): |
125 | 126 | return None |
195 | 196 | custom_headers: Optional[ |
196 | 197 | Iterable[Tuple[Union[bytes, str], Union[bytes, str]]] |
197 | 198 | ] = None, |
199 | client_ip: str = "127.0.0.1", | |
198 | 200 | ) -> FakeChannel: |
199 | 201 | """ |
200 | 202 | Make a web request using the given method, path and content, and render it |
222 | 224 | will pump the reactor until the renderer tells the channel the request
223 | 225 | is finished. |
224 | 226 | |
227 | client_ip: The IP to use as the requesting IP. Useful for testing | |
228 | ratelimiting. | |
229 | ||
225 | 230 | Returns: |
226 | 231 | channel |
227 | 232 | """ |
249 | 254 | if isinstance(content, str): |
250 | 255 | content = content.encode("utf8") |
251 | 256 | |
252 | channel = FakeChannel(site, reactor) | |
257 | channel = FakeChannel(site, reactor, ip=client_ip) | |
253 | 258 | |
254 | 259 | req = request(channel) |
255 | 260 | req.content = BytesIO(content) |
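The hunks above thread a `client_ip` argument from `make_request` down into `FakeChannel`, whose `getPeer` now reports it, so per-IP ratelimiting and MAU tracking can be exercised with distinct simulated clients. In the real class the attrs-defined private `_ip` attribute becomes an `ip` keyword argument, hence `FakeChannel(site, reactor, ip=client_ip)`. A stdlib-only sketch of the same configurable-peer pattern (toy stand-in, not the Synapse class):

```python
from collections import namedtuple

# Minimal stand-in for twisted.internet.address.IPv4Address.
IPv4Address = namedtuple("IPv4Address", ["type", "host", "port"])

class FakeChannelSketch:
    """Toy test double showing the configurable-peer pattern from the diff."""

    def __init__(self, ip="127.0.0.1"):
        self._ip = ip

    def getPeer(self):
        # Reporting a per-test address lets ratelimiters see each
        # simulated client as a distinct remote host.
        return IPv4Address("TCP", self._ip, 3423)
```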
260 | 260 | html = "" |
261 | 261 | og = decode_and_calc_og(html, "http://example.com/test.html") |
262 | 262 | self.assertEqual(og, {}) |
263 | ||
264 | def test_invalid_encoding(self): | |
265 | """An invalid character encoding should be ignored and treated as UTF-8, if possible.""" | |
266 | html = """ | |
267 | <html> | |
268 | <head><title>Foo</title></head> | |
269 | <body> | |
270 | Some text. | |
271 | </body> | |
272 | </html> | |
273 | """ | |
274 | og = decode_and_calc_og( | |
275 | html, "http://example.com/test.html", "invalid-encoding" | |
276 | ) | |
277 | self.assertEqual(og, {"og:title": "Foo", "og:description": "Some text."}) | |
278 | ||
279 | def test_invalid_encoding2(self): | |
280 | """A body which doesn't match the sent character encoding.""" | |
281 | # Note that this contains an invalid UTF-8 sequence in the title. | |
282 | html = b""" | |
283 | <html> | |
284 | <head><title>\xff\xff Foo</title></head> | |
285 | <body> | |
286 | Some text. | |
287 | </body> | |
288 | </html> | |
289 | """ | |
290 | og = decode_and_calc_og(html, "http://example.com/test.html") | |
291 | self.assertEqual(og, {"og:title": "ÿÿ Foo", "og:description": "Some text."}) |
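`test_invalid_encoding2` expects the invalid UTF-8 bytes `\xff\xff` to surface as `ÿÿ`, which is consistent with falling back to a Windows-1252 decode when UTF-8 fails (0xff is invalid in UTF-8 but maps to `ÿ` in Windows-1252). A sketch of that fallback with a hypothetical helper name, not `decode_and_calc_og` itself:

```python
def decode_with_fallback(body: bytes) -> str:
    """Decode an HTML body as UTF-8, falling back to Windows-1252.

    Sketch of the fallback the tests above imply: bytes that are not
    valid UTF-8 still decode under the single-byte Windows-1252 codec.
    """
    try:
        return body.decode("utf-8")
    except UnicodeDecodeError:
        return body.decode("windows-1252")
```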
385 | 385 | custom_headers: Optional[ |
386 | 386 | Iterable[Tuple[Union[bytes, str], Union[bytes, str]]] |
387 | 387 | ] = None, |
388 | client_ip: str = "127.0.0.1", | |
388 | 389 | ) -> FakeChannel: |
389 | 390 | """ |
390 | 391 | Create a SynapseRequest at the path using the method and containing the |
408 | 409 | tells the channel the request is finished. |
409 | 410 | |
410 | 411 | custom_headers: (name, value) pairs to add as request headers |
412 | ||
413 | client_ip: The IP to use as the requesting IP. Useful for testing | |
414 | ratelimiting. | |
411 | 415 | |
412 | 416 | Returns: |
413 | 417 | The FakeChannel object which stores the result of the request. |
425 | 429 | content_is_form, |
426 | 430 | await_result, |
427 | 431 | custom_headers, |
432 | client_ip, | |
428 | 433 | ) |
429 | 434 | |
430 | 435 | def setup_test_homeserver(self, *args, **kwargs): |
32 | 32 | from synapse.config.database import DatabaseConnectionConfig |
33 | 33 | from synapse.config.homeserver import HomeServerConfig |
34 | 34 | from synapse.config.server import DEFAULT_ROOM_VERSION |
35 | from synapse.http.server import HttpServer | |
36 | 35 | from synapse.logging.context import current_context, set_current_context |
37 | 36 | from synapse.server import HomeServer |
38 | 37 | from synapse.storage import DataStore |
157 | 156 | "local": {"per_second": 10000, "burst_count": 10000}, |
158 | 157 | "remote": {"per_second": 10000, "burst_count": 10000}, |
159 | 158 | }, |
159 | "rc_3pid_validation": {"per_second": 10000, "burst_count": 10000}, | |
160 | 160 | "saml2_enabled": False, |
161 | "public_baseurl": None, | |
161 | 162 | "default_identity_server": None, |
162 | 163 | "key_refresh_interval": 24 * 60 * 60 * 1000, |
163 | 164 | "old_signing_keys": {}, |
350 | 351 | |
351 | 352 | |
352 | 353 | # This is a mock /resource/ not an entire server |
353 | class MockHttpResource(HttpServer): | |
354 | class MockHttpResource: | |
354 | 355 | def __init__(self, prefix=""): |
355 | 356 | self.callbacks = [] # 3-tuple of method/pattern/function |
356 | 357 | self.prefix = prefix |
17 | 17 | # installed on that). |
18 | 18 | # |
19 | 19 | # anyway, make sure that we have a recent enough setuptools. |
20 | setuptools>=18.5 | |
20 | setuptools>=18.5 ; python_version >= '3.6' | |
21 | setuptools>=18.5,<51.0.0 ; python_version < '3.6' | |
21 | 22 | |
22 | 23 | # we also need a semi-recent version of pip, because old ones fail to |
23 | 24 | # install the "enum34" dependency of cryptography. |
24 | pip>=10 | |
25 | pip>=10 ; python_version >= '3.6' | |
26 | pip>=10,<21.0 ; python_version < '3.6' | |
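The pinned requirements now carry PEP 508 environment markers, so Python 3.5 keeps a setuptools/pip range that still supports it (setuptools dropped 3.5 in 51.0.0, pip in 21.0) while newer interpreters stay unconstrained. pip evaluates the marker against the running interpreter; the effect can be sketched as a simple version check (illustrative only):

```python
import sys

def select_pip_requirement(version_info=sys.version_info):
    """Mimic the effect of the markers in the diff:
    pip>=10,<21.0 on Python < 3.6, plain pip>=10 otherwise.
    (Illustrative; pip evaluates PEP 508 markers itself.)
    """
    if version_info < (3, 6):
        return "pip>=10,<21.0"
    return "pip>=10"
```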
25 | 27 | |
26 | # directories/files we run the linters on | |
28 | # directories/files we run the linters on. | |
29 | # if you update this list, make sure to do the same in scripts-dev/lint.sh | |
27 | 30 | lint_targets = |
28 | 31 | setup.py |
29 | 32 | synapse |
102 | 105 | [testenv:py35-old] |
103 | 106 | skip_install=True |
104 | 107 | deps = |
105 | # Ensure a version of setuptools that supports Python 3.5 is installed. | |
106 | setuptools < 51.0.0 | |
107 | ||
108 | 108 | # Old automat version for Twisted |
109 | 109 | Automat == 0.3.0 |
110 | ||
111 | 110 | lxml |
112 | coverage | |
113 | coverage-enable-subprocess==1.0 | |
111 | {[base]deps} | |
114 | 112 | |
115 | 113 | commands = |
116 | 114 | # Make all greater-thans equals so we test the oldest version of our direct |
167 | 165 | skip_install = True |
168 | 166 | deps = |
169 | 167 | coverage |
168 | pip>=10 ; python_version >= '3.6' | |
169 | pip>=10,<21.0 ; python_version < '3.6' | |
170 | 170 | commands= |
171 | 171 | coverage combine |
172 | 172 | coverage report |