New upstream version 1.29.0
Andrej Shadura
2021-03-08
5 | 5 | *.egg |
6 | 6 | *.egg-info |
7 | 7 | *.lock |
8 | *.pyc | |
8 | *.py[cod] | |
9 | 9 | *.snap |
10 | 10 | *.tac |
11 | 11 | _trial_temp/ |
12 | 12 | _trial_temp*/ |
13 | 13 | /out |
14 | 14 | .DS_Store |
15 | __pycache__/ | |
15 | 16 | |
16 | 17 | # stuff that is likely to exist when you run a server locally |
17 | 18 | /*.db |
0 | Synapse 1.29.0 (2021-03-08) | |
1 | =========================== | |
2 | ||
3 | Note that synapse now expects an `X-Forwarded-Proto` header when used with a reverse proxy. Please see [UPGRADE.rst](UPGRADE.rst#upgrading-to-v1290) for more details on this change. | |
4 | ||
5 | ||
6 | No significant changes. | |
7 | ||
8 | ||
9 | Synapse 1.29.0rc1 (2021-03-04) | |
10 | ============================== | |
11 | ||
12 | Features | |
13 | -------- | |
14 | ||
15 | - Add rate limiters to cross-user key sharing requests. ([\#8957](https://github.com/matrix-org/synapse/issues/8957)) | |
16 | - Add `order_by` to the admin API `GET /_synapse/admin/v1/users/<user_id>/media`. Contributed by @dklimpel. ([\#8978](https://github.com/matrix-org/synapse/issues/8978)) | |
17 | - Add some configuration settings to make users' profile data more private. ([\#9203](https://github.com/matrix-org/synapse/issues/9203)) | |
18 | - The `no_proxy` and `NO_PROXY` environment variables are now respected in proxied HTTP clients, with the lowercase form taking precedence if both are present. Additionally, the lowercase `https_proxy` environment variable is now respected in proxied HTTP clients on top of existing support for the uppercase `HTTPS_PROXY` form, and takes precedence if both are present. Contributed by Timothy Leung. ([\#9372](https://github.com/matrix-org/synapse/issues/9372)) | |
19 | - Add a configuration option, `user_directory.prefer_local_users`, which when enabled will make it more likely for users on the same server as you to appear above other users. ([\#9383](https://github.com/matrix-org/synapse/issues/9383), [\#9385](https://github.com/matrix-org/synapse/issues/9385)) | |
20 | - Add support for regenerating thumbnails if they have been deleted but the original image is still stored. ([\#9438](https://github.com/matrix-org/synapse/issues/9438)) | |
21 | - Add support for `X-Forwarded-Proto` header when using a reverse proxy. ([\#9472](https://github.com/matrix-org/synapse/issues/9472), [\#9501](https://github.com/matrix-org/synapse/issues/9501), [\#9512](https://github.com/matrix-org/synapse/issues/9512), [\#9539](https://github.com/matrix-org/synapse/issues/9539)) | |
22 | ||
23 | ||
24 | Bugfixes | |
25 | -------- | |
26 | ||
27 | - Fix a bug where users' pushers were not all deleted when they deactivated their account. ([\#9285](https://github.com/matrix-org/synapse/issues/9285), [\#9516](https://github.com/matrix-org/synapse/issues/9516)) | |
28 | - Fix a bug where a lot of unnecessary presence updates were sent when joining a room. ([\#9402](https://github.com/matrix-org/synapse/issues/9402)) | |
29 | - Fix a bug that caused multiple calls to the experimental `shared_rooms` endpoint to return stale results. ([\#9416](https://github.com/matrix-org/synapse/issues/9416)) | |
30 | - Fix a bug in single sign-on which could cause a "No session cookie found" error. ([\#9436](https://github.com/matrix-org/synapse/issues/9436)) | |
31 | - Fix bug introduced in v1.27.0 where allowing a user to choose their own username when logging in via single sign-on did not work unless an `idp_icon` was defined. ([\#9440](https://github.com/matrix-org/synapse/issues/9440)) | |
32 | - Fix a bug introduced in v1.26.0 where some sequences were not properly configured when running `synapse_port_db`. ([\#9449](https://github.com/matrix-org/synapse/issues/9449)) | |
33 | - Fix deleting pushers when using sharded pushers. ([\#9465](https://github.com/matrix-org/synapse/issues/9465), [\#9466](https://github.com/matrix-org/synapse/issues/9466), [\#9479](https://github.com/matrix-org/synapse/issues/9479), [\#9536](https://github.com/matrix-org/synapse/issues/9536)) | |
34 | - Fix missing startup checks for the consistency of certain PostgreSQL sequences. ([\#9470](https://github.com/matrix-org/synapse/issues/9470)) | |
35 | - Fix a long-standing bug where the media repository could leak file descriptors while previewing media. ([\#9497](https://github.com/matrix-org/synapse/issues/9497)) | |
36 | - Properly purge the event chain cover index when purging history. ([\#9498](https://github.com/matrix-org/synapse/issues/9498)) | |
37 | - Fix missing chain cover index due to a schema delta not being applied correctly. Only affected servers that ran development versions. ([\#9503](https://github.com/matrix-org/synapse/issues/9503)) | |
38 | - Fix a bug introduced in v1.25.0 where `/_synapse/admin/join/` would fail when given a room alias. ([\#9506](https://github.com/matrix-org/synapse/issues/9506)) | |
39 | - Prevent presence background jobs from running when presence is disabled. ([\#9530](https://github.com/matrix-org/synapse/issues/9530)) | |
40 | - Fix rare edge case that caused a background update to fail if the server had rejected an event that had duplicate auth events. ([\#9537](https://github.com/matrix-org/synapse/issues/9537)) | |
41 | ||
42 | ||
43 | Improved Documentation | |
44 | ---------------------- | |
45 | ||
46 | - Update the example systemd config to propagate reloads to individual units. ([\#9463](https://github.com/matrix-org/synapse/issues/9463)) | |
47 | ||
48 | ||
49 | Internal Changes | |
50 | ---------------- | |
51 | ||
52 | - Add documentation and type hints to `parse_duration`. ([\#9432](https://github.com/matrix-org/synapse/issues/9432)) | |
53 | - Remove vestiges of `uploads_path` configuration setting. ([\#9462](https://github.com/matrix-org/synapse/issues/9462)) | |
54 | - Add a comment about systemd-python. ([\#9464](https://github.com/matrix-org/synapse/issues/9464)) | |
55 | - Test that we require validated email for email pushers. ([\#9496](https://github.com/matrix-org/synapse/issues/9496)) | |
56 | - Allow python to generate bytecode for synapse. ([\#9502](https://github.com/matrix-org/synapse/issues/9502)) | |
57 | - Fix incorrect type hints. ([\#9515](https://github.com/matrix-org/synapse/issues/9515), [\#9518](https://github.com/matrix-org/synapse/issues/9518)) | |
58 | - Add type hints to device and event report admin API. ([\#9519](https://github.com/matrix-org/synapse/issues/9519)) | |
59 | - Add type hints to user admin API. ([\#9521](https://github.com/matrix-org/synapse/issues/9521)) | |
60 | - Bump the versions of mypy and mypy-zope used for static type checking. ([\#9529](https://github.com/matrix-org/synapse/issues/9529)) | |
61 | ||
62 | ||
0 | 63 | Synapse 1.28.0 (2021-02-25) |
1 | 64 | =========================== |
2 | 65 |
83 | 83 | # replace `1.3.0` and `stretch` accordingly: |
84 | 84 | wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb |
85 | 85 | dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb |
86 | ||
87 | Upgrading to v1.29.0 | |
88 | ==================== | |
89 | ||
90 | Requirement for X-Forwarded-Proto header | |
91 | ---------------------------------------- | |
92 | ||
93 | When using Synapse with a reverse proxy (in particular, when using the | |
94 | `x_forwarded` option on an HTTP listener), Synapse now expects to receive an | |
95 | `X-Forwarded-Proto` header on incoming HTTP requests. If it is not set, Synapse | |
96 | will log a warning on each received request. | |
97 | ||
98 | To avoid the warning, administrators using a reverse proxy should ensure that | |
99 | the reverse proxy sets the `X-Forwarded-Proto` header to `https` or `http` to | |
100 | indicate the protocol used by the client. See the `reverse proxy documentation | |
101 | <docs/reverse_proxy.md>`_, where the example configurations have been updated to | |
102 | show how to set this header. | |
103 | ||
104 | (Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it | |
105 | sets `X-Forwarded-Proto` by default.) | |
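
To make the new requirement concrete, here is a minimal Python sketch of the behaviour described above, assuming a listener with `x_forwarded` enabled; the function name and the `"http"` fallback are illustrative assumptions, not Synapse's actual implementation:

```python
# Minimal sketch of the documented behaviour: take the client-facing scheme
# from X-Forwarded-Proto, warning when a proxied request lacks the header.
import logging

logger = logging.getLogger(__name__)

def forwarded_scheme(headers: dict) -> str:
    proto = headers.get("X-Forwarded-Proto")
    if proto is None:
        # Synapse logs a warning on each request that arrives without it.
        logger.warning("request from reverse proxy lacks X-Forwarded-Proto")
        return "http"  # illustrative fallback; an assumption, not upstream behaviour
    return proto.lower()
```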
86 | 106 | |
87 | 107 | Upgrading to v1.27.0 |
88 | 108 | ==================== |
57 | 57 | cp -r tests "$tmpdir" |
58 | 58 | |
59 | 59 | PYTHONPATH="$tmpdir" \ |
60 | "${TARGET_PYTHON}" -B -m twisted.trial --reporter=text -j2 tests | |
60 | "${TARGET_PYTHON}" -m twisted.trial --reporter=text -j2 tests | |
61 | 61 | |
62 | 62 | # build the config file |
63 | "${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_config" \ | |
63 | "${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_config" \ | |
64 | 64 | --config-dir="/etc/matrix-synapse" \ |
65 | 65 | --data-dir="/var/lib/matrix-synapse" | |
66 | 66 | perl -pe ' |
86 | 86 | ' > "${PACKAGE_BUILD_DIR}/etc/matrix-synapse/homeserver.yaml" |
87 | 87 | |
88 | 88 | # build the log config file |
89 | "${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_log_config" \ | |
89 | "${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_log_config" \ | |
90 | 90 | --output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml" |
91 | 91 | |
92 | 92 | # add a dependency on the right version of python to substvars. |
0 | matrix-synapse-py3 (1.29.0) stable; urgency=medium | |
1 | ||
2 | [ Jonathan de Jong ] | |
3 | * Remove the python -B flag (which suppressed bytecode generation) from scripts and documentation. | |
4 | ||
5 | [ Synapse Packaging team ] | |
6 | * New synapse release 1.29.0. | |
7 | ||
8 | -- Synapse Packaging team <packages@matrix.org> Mon, 08 Mar 2021 13:51:50 +0000 | |
9 | ||
0 | 10 | matrix-synapse-py3 (1.28.0) stable; urgency=medium |
1 | 11 | |
2 | 12 | * New synapse release 1.28.0. |
43 | 43 | . |
44 | 44 | .nf |
45 | 45 | |
46 | $ python \-B \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name> | |
46 | $ python \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name> | |
47 | 47 | . |
48 | 48 | .fi |
49 | 49 | . |
40 | 40 | |
41 | 41 | Configuration file may be generated as follows: |
42 | 42 | |
43 | $ python -B -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name> | |
43 | $ python -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name> | |
44 | 44 | |
45 | 45 | ## ENVIRONMENT |
46 | 46 |
10 | 10 | By default, the image expects a single volume, located at ``/data``, that will hold: |
11 | 11 | |
12 | 12 | * configuration files; |
13 | * temporary files during uploads; | |
14 | 13 | * uploaded media and thumbnails; |
15 | 14 | * the SQLite database if you do not configure postgres; |
16 | 15 | * the appservices configuration. |
88 | 88 | ## Files ## |
89 | 89 | |
90 | 90 | media_store_path: "/data/media" |
91 | uploads_path: "/data/uploads" | |
92 | 91 | max_upload_size: "{{ SYNAPSE_MAX_UPLOAD_SIZE or "50M" }}" |
93 | 92 | max_image_pixels: "32M" |
94 | 93 | dynamic_thumbnails: false |
378 | 378 | - ``total`` - Number of rooms. |
379 | 379 | |
380 | 380 | |
381 | List media of an user | |
382 | ================================ | |
381 | List media of a user | |
382 | ==================== | |
383 | 383 | Gets a list of all local media that a specific ``user_id`` has created. |
384 | The response is ordered by creation date descending and media ID descending. | |
385 | The newest media is on top. | |
384 | By default, the response is ordered by descending creation date and ascending media ID. | |
385 | The newest media is on top. You can change the order with parameters | |
386 | ``order_by`` and ``dir``. | |
386 | 387 | |
387 | 388 | The API is:: |
388 | 389 | |
439 | 440 | denoting the offset in the returned results. This should be treated as an opaque value and |
440 | 441 | not explicitly set to anything other than the return value of ``next_token`` from a previous call. |
441 | 442 | Defaults to ``0``. |
443 | - ``order_by`` - The method by which to sort the returned list of media. | |
444 | If the ordered field has duplicates, the second order is always by ascending ``media_id``, | |
445 | which guarantees a stable ordering. Valid values are: | |
446 | ||
447 | - ``media_id`` - Media are ordered alphabetically by ``media_id``. | |
448 | - ``upload_name`` - Media are ordered alphabetically by the name the media was uploaded with. | |
449 | - ``created_ts`` - Media are ordered by the upload timestamp (in ms). | |
450 | Smallest to largest. This is the default. | |
451 | - ``last_access_ts`` - Media are ordered by the last-access timestamp (in ms). | |
452 | Smallest to largest. | |
453 | - ``media_length`` - Media are ordered by length of the media in bytes. | |
454 | Smallest to largest. | |
455 | - ``media_type`` - Media are ordered alphabetically by MIME-type. | |
456 | - ``quarantined_by`` - Media are ordered alphabetically by the user ID that | |
457 | initiated the quarantine request for this media. | |
458 | - ``safe_from_quarantine`` - Media are ordered by whether the media is marked | |
459 | as safe from quarantining. | |
460 | ||
461 | - ``dir`` - Direction of media order. Either ``f`` for forwards or ``b`` for backwards. | |
462 | Setting this value to ``b`` will reverse the above sort order. Defaults to ``f``. | |
463 | ||
464 | If neither ``order_by`` nor ``dir`` is set, the default order is newest media on top | |
465 | (corresponds to ``order_by`` = ``created_ts`` and ``dir`` = ``b``). | |
466 | ||
467 | Caution. The database only has indexes on the columns ``media_id``, | |
468 | ``user_id`` and ``created_ts``. This means that if a different sort order is used | |
469 | (``upload_name``, ``last_access_ts``, ``media_length``, ``media_type``, | |
470 | ``quarantined_by`` or ``safe_from_quarantine``), this can cause a large load on the | |
471 | database, especially for large environments. | |
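
As a usage illustration for the parameters above, the following hedged Python snippet lists a user's media ordered by file size, largest first; the homeserver URL, user ID and admin token are placeholders, and the `media`/`media_id`/`media_length` response fields are as documented for this endpoint:

```python
import requests
from urllib.parse import quote

resp = requests.get(
    "https://matrix.example.com/_synapse/admin/v1/users/%s/media"
    % quote("@alice:example.com"),
    params={"order_by": "media_length", "dir": "b", "limit": 10},
    headers={"Authorization": "Bearer <admin_access_token>"},
)
resp.raise_for_status()
for item in resp.json()["media"]:
    print(item["media_id"], item["media_length"])
```

Note that this sorts on an unindexed column, so per the caution above it is best avoided on large servers.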
442 | 472 | |
443 | 473 | **Response** |
444 | 474 |
8 | 8 | (443) to Matrix clients without needing to run Synapse with root |
9 | 9 | privileges. |
10 | 10 | |
11 | You should configure your reverse proxy to forward requests to `/_matrix` or | |
12 | `/_synapse/client` to Synapse, and have it set the `X-Forwarded-For` and | |
13 | `X-Forwarded-Proto` request headers. | |
14 | ||
15 | You should remember that Matrix clients and other Matrix servers do not | |
16 | necessarily need to connect to your server via the same server name or | |
17 | port. Indeed, clients will use port 443 by default, whereas servers default to | |
18 | port 8448. Where these are different, we refer to the 'client port' and the | |
19 | 'federation port'. See [the Matrix | |
20 | specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names) | |
21 | for more details of the algorithm used for federation connections, and | |
22 | [delegate.md](<delegate.md>) for instructions on setting up delegation. | |
23 | ||
11 | 24 | **NOTE**: Your reverse proxy must not `canonicalise` or `normalise` |
12 | 25 | the requested URI in any way (for example, by decoding `%xx` escapes). |
13 | 26 | Beware that Apache *will* canonicalise URIs unless you specify |
14 | 27 | `nocanon`. |
15 | ||
16 | When setting up a reverse proxy, remember that Matrix clients and other | |
17 | Matrix servers do not necessarily need to connect to your server via the | |
18 | same server name or port. Indeed, clients will use port 443 by default, | |
19 | whereas servers default to port 8448. Where these are different, we | |
20 | refer to the 'client port' and the 'federation port'. See [the Matrix | |
21 | specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names) | |
22 | for more details of the algorithm used for federation connections, and | |
23 | [delegate.md](<delegate.md>) for instructions on setting up delegation. | |
24 | ||
25 | Endpoints that are part of the standardised Matrix specification are | |
26 | located under `/_matrix`, whereas endpoints specific to Synapse are | |
27 | located under `/_synapse/client`. | |
28 | 28 | |
29 | 29 | Let's assume that we expect clients to connect to our server at |
30 | 30 | `https://matrix.example.com`, and other servers to connect at |
51 | 51 | location ~* ^(\/_matrix|\/_synapse\/client) { |
52 | 52 | proxy_pass http://localhost:8008; |
53 | 53 | proxy_set_header X-Forwarded-For $remote_addr; |
54 | proxy_set_header X-Forwarded-Proto $scheme; | |
55 | proxy_set_header Host $host; | |
56 | ||
54 | 57 | # Nginx by default only allows file uploads up to 1M in size |
55 | 58 | # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml |
56 | 59 | client_max_body_size 50M; |
101 | 104 | SSLEngine on |
102 | 105 | ServerName matrix.example.com; |
103 | 106 | |
107 | RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME} | |
104 | 108 | AllowEncodedSlashes NoDecode |
105 | 109 | ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon |
106 | 110 | ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix |
112 | 116 | SSLEngine on |
113 | 117 | ServerName example.com; |
114 | 118 | |
119 | RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME} | |
115 | 120 | AllowEncodedSlashes NoDecode |
116 | 121 | ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon |
117 | 122 | ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix |
133 | 138 | ``` |
134 | 139 | frontend https |
135 | 140 | bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1 |
141 | http-request set-header X-Forwarded-Proto https if { ssl_fc } | |
142 | http-request set-header X-Forwarded-Proto http if !{ ssl_fc } | |
143 | http-request set-header X-Forwarded-For %[src] | |
136 | 144 | |
137 | 145 | # Matrix client traffic |
138 | 146 | acl matrix-host hdr(host) -i matrix.example.com |
143 | 151 | |
144 | 152 | frontend matrix-federation |
145 | 153 | bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1 |
154 | http-request set-header X-Forwarded-Proto https if { ssl_fc } | |
155 | http-request set-header X-Forwarded-Proto http if !{ ssl_fc } | |
156 | http-request set-header X-Forwarded-For %[src] | |
157 | ||
146 | 158 | default_backend matrix |
147 | 159 | |
148 | 160 | backend matrix |
99 | 99 | # requesting server. Defaults to 'false'. |
100 | 100 | # |
101 | 101 | #limit_profile_requests_to_users_who_share_rooms: true |
102 | ||
103 | # Uncomment to prevent a user's profile data from being retrieved and | |
104 | # displayed in a room until they have joined it. By default, a user's | |
105 | # profile data is included in an invite event, regardless of the values | |
106 | # of the above two settings, and whether or not the users share a server. | |
107 | # Defaults to 'true'. | |
108 | # | |
109 | #include_profile_data_on_invite: false | |
102 | 110 | |
103 | 111 | # If set to 'true', removes the need for authentication to access the server's |
104 | 112 | # public rooms directory through the client API, meaning that anyone can |
697 | 705 | #federation_metrics_domains: |
698 | 706 | # - matrix.org |
699 | 707 | # - example.com |
708 | ||
709 | # Uncomment to disable profile lookup over federation. By default, the | |
710 | # Federation API allows other homeservers to obtain profile data of any user | |
711 | # on this homeserver. Defaults to 'true'. | |
712 | # | |
713 | #allow_profile_lookup_over_federation: false | |
700 | 714 | |
701 | 715 | |
702 | 716 | ## Caching ## |
2529 | 2543 | |
2530 | 2544 | # User Directory configuration |
2531 | 2545 | # |
2532 | # 'enabled' defines whether users can search the user directory. If | |
2533 | # false then empty responses are returned to all queries. Defaults to | |
2534 | # true. | |
2535 | # | |
2536 | # 'search_all_users' defines whether to search all users visible to your HS | |
2537 | # when searching the user directory, rather than limiting to users visible | |
2538 | # in public rooms. Defaults to false. If you set it True, you'll have to | |
2539 | # rebuild the user_directory search indexes, see | |
2540 | # https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md | |
2541 | # | |
2542 | #user_directory: | |
2543 | # enabled: true | |
2544 | # search_all_users: false | |
2546 | user_directory: | |
2547 | # Defines whether users can search the user directory. If false then | |
2548 | # empty responses are returned to all queries. Defaults to true. | |
2549 | # | |
2550 | # Uncomment to disable the user directory. | |
2551 | # | |
2552 | #enabled: false | |
2553 | ||
2554 | # Defines whether to search all users visible to your HS when searching | |
2555 | # the user directory, rather than limiting to users visible in public | |
2556 | # rooms. Defaults to false. | |
2557 | # | |
2558 | # If you set it true, you'll have to rebuild the user_directory search | |
2559 | # indexes, see: | |
2560 | # https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md | |
2561 | # | |
2562 | # Uncomment to return search results containing all known users, even if that | |
2563 | # user does not share a room with the requester. | |
2564 | # | |
2565 | #search_all_users: true | |
2566 | ||
2567 | # Defines whether to prefer local users in search query results. | |
2568 | # If True, local users are more likely to appear above remote users | |
2569 | # when searching the user directory. Defaults to false. | |
2570 | # | |
2571 | # Uncomment to prefer local over remote users in user directory search | |
2572 | # results. | |
2573 | # | |
2574 | #prefer_local_users: true | |
2545 | 2575 | |
2546 | 2576 | |
2547 | 2577 | # User Consent configuration |
24 | 24 | * `check_username_for_spam` |
25 | 25 | * `check_registration_for_spam` |
26 | 26 | |
27 | The details of the each of these methods (as well as their inputs and outputs) | |
27 | The details of each of these methods (as well as their inputs and outputs) | |
28 | 28 | are documented in the `synapse.events.spamcheck.SpamChecker` class. |
29 | 29 | |
30 | 30 | The `ModuleApi` class provides a way for the custom spam checker class to |
3 | 3 | |
4 | 4 | # This service should be restarted when the synapse target is restarted. |
5 | 5 | PartOf=matrix-synapse.target |
6 | ReloadPropagatedFrom=matrix-synapse.target | |
6 | 7 | |
7 | 8 | # if this is started at the same time as the main, let the main process start |
8 | 9 | # first, to initialise the database schema. |
2 | 2 | |
3 | 3 | # This service should be restarted when the synapse target is restarted. |
4 | 4 | PartOf=matrix-synapse.target |
5 | ReloadPropagatedFrom=matrix-synapse.target | |
5 | 6 | |
6 | 7 | [Service] |
7 | 8 | Type=notify |
219 | 219 | |
220 | 220 | Acknowledge receipt of some federation data |
221 | 221 | |
222 | #### REMOVE_PUSHER (C) | |
223 | ||
224 | Inform the server a pusher should be removed | |
225 | ||
226 | 222 | ### REMOTE_SERVER_UP (S, C) |
227 | 223 | |
228 | 224 | Inform other processes that a remote server may have come back online. |
21 | 21 | import sys |
22 | 22 | import time |
23 | 23 | import traceback |
24 | from typing import Dict, Optional, Set | |
24 | from typing import Dict, Iterable, Optional, Set | |
25 | 25 | |
26 | 26 | import yaml |
27 | 27 | |
46 | 46 | from synapse.storage.databases.main.media_repository import ( |
47 | 47 | MediaRepositoryBackgroundUpdateStore, |
48 | 48 | ) |
49 | from synapse.storage.databases.main.pusher import PusherWorkerStore | |
49 | 50 | from synapse.storage.databases.main.registration import ( |
50 | 51 | RegistrationBackgroundUpdateStore, |
51 | 52 | find_max_generated_user_id_localpart, |
176 | 177 | UserDirectoryBackgroundUpdateStore, |
177 | 178 | EndToEndKeyBackgroundStore, |
178 | 179 | StatsStore, |
180 | PusherWorkerStore, | |
179 | 181 | ): |
180 | 182 | def execute(self, f, *args, **kwargs): |
181 | 183 | return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs) |
628 | 630 | await self._setup_state_group_id_seq() |
629 | 631 | await self._setup_user_id_seq() |
630 | 632 | await self._setup_events_stream_seqs() |
631 | await self._setup_device_inbox_seq() | |
633 | await self._setup_sequence( | |
634 | "device_inbox_sequence", ("device_inbox", "device_federation_outbox") | |
635 | ) | |
636 | await self._setup_sequence( | |
637 | "account_data_sequence", ("room_account_data", "room_tags_revisions", "account_data")) | |
638 | await self._setup_sequence("receipts_sequence", ("receipts_linearized", )) | |
639 | await self._setup_auth_chain_sequence() | |
632 | 640 | |
633 | 641 | # Step 3. Get tables. |
634 | 642 | self.progress.set_state("Fetching tables") |
853 | 861 | |
854 | 862 | return done, remaining + done |
855 | 863 | |
856 | async def _setup_state_group_id_seq(self): | |
864 | async def _setup_state_group_id_seq(self) -> None: | |
857 | 865 | curr_id = await self.sqlite_store.db_pool.simple_select_one_onecol( |
858 | 866 | table="state_groups", keyvalues={}, retcol="MAX(id)", allow_none=True |
859 | 867 | ) |
867 | 875 | |
868 | 876 | await self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r) |
869 | 877 | |
870 | async def _setup_user_id_seq(self): | |
878 | async def _setup_user_id_seq(self) -> None: | |
871 | 879 | curr_id = await self.sqlite_store.db_pool.runInteraction( |
872 | 880 | "setup_user_id_seq", find_max_generated_user_id_localpart |
873 | 881 | ) |
876 | 884 | next_id = curr_id + 1 |
877 | 885 | txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,)) |
878 | 886 | |
879 | return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r) | |
880 | ||
881 | async def _setup_events_stream_seqs(self): | |
887 | await self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r) | |
888 | ||
889 | async def _setup_events_stream_seqs(self) -> None: | |
882 | 890 | """Set the event stream sequences to the correct values. |
883 | 891 | """ |
884 | 892 | |
907 | 915 | (curr_backward_id + 1,), |
908 | 916 | ) |
909 | 917 | |
910 | return await self.postgres_store.db_pool.runInteraction( | |
918 | await self.postgres_store.db_pool.runInteraction( | |
911 | 919 | "_setup_events_stream_seqs", _setup_events_stream_seqs_set_pos, |
912 | 920 | ) |
913 | 921 | |
914 | async def _setup_device_inbox_seq(self): | |
915 | """Set the device inbox sequence to the correct value. | |
922 | async def _setup_sequence(self, sequence_name: str, stream_id_tables: Iterable[str]) -> None: | |
923 | """Set a sequence to the correct value. | |
916 | 924 | """ |
917 | curr_local_id = await self.sqlite_store.db_pool.simple_select_one_onecol( | |
918 | table="device_inbox", | |
919 | keyvalues={}, | |
920 | retcol="COALESCE(MAX(stream_id), 1)", | |
921 | allow_none=True, | |
922 | ) | |
923 | ||
924 | curr_federation_id = await self.sqlite_store.db_pool.simple_select_one_onecol( | |
925 | table="device_federation_outbox", | |
926 | keyvalues={}, | |
927 | retcol="COALESCE(MAX(stream_id), 1)", | |
928 | allow_none=True, | |
929 | ) | |
930 | ||
931 | next_id = max(curr_local_id, curr_federation_id) + 1 | |
925 | current_stream_ids = [] | |
926 | for stream_id_table in stream_id_tables: | |
927 | max_stream_id = await self.sqlite_store.db_pool.simple_select_one_onecol( | |
928 | table=stream_id_table, | |
929 | keyvalues={}, | |
930 | retcol="COALESCE(MAX(stream_id), 1)", | |
931 | allow_none=True, | |
932 | ) | |
933 | current_stream_ids.append(max_stream_id) | |
934 | ||
935 | next_id = max(current_stream_ids) + 1 | |
936 | ||
937 | def r(txn): | |
938 | sql = "ALTER SEQUENCE %s RESTART WITH" % (sequence_name, ) | |
939 | txn.execute(sql + " %s", (next_id, )) | |
940 | ||
941 | await self.postgres_store.db_pool.runInteraction("_setup_%s" % (sequence_name,), r) | |
942 | ||
943 | async def _setup_auth_chain_sequence(self) -> None: | |
944 | curr_chain_id = await self.sqlite_store.db_pool.simple_select_one_onecol( | |
945 | table="event_auth_chains", keyvalues={}, retcol="MAX(chain_id)", allow_none=True | |
946 | ) | |
932 | 947 | |
933 | 948 | def r(txn): |
934 | 949 | txn.execute( |
935 | "ALTER SEQUENCE device_inbox_sequence RESTART WITH %s", (next_id,) | |
936 | ) | |
937 | ||
938 | return self.postgres_store.db_pool.runInteraction("_setup_device_inbox_seq", r) | |
950 | "ALTER SEQUENCE event_auth_chain_id RESTART WITH %s", | |
951 | (curr_chain_id,), | |
952 | ) | |
953 | ||
954 | await self.postgres_store.db_pool.runInteraction( | |
955 | "_setup_event_auth_chain_id", r, | |
956 | ) | |
957 | ||
939 | 958 | |
940 | 959 | |
941 | 960 | ############################################## |
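
To illustrate what the generalised `_setup_sequence` above ends up doing for, say, the device inbox: it takes the maximum stream id across the listed SQLite tables and restarts the corresponding Postgres sequence one past it. The values below are hypothetical; only the SQL shape matches the code shown:

```python
# Hypothetical per-table MAX(stream_id) values read from the SQLite store.
stream_maxes = {"device_inbox": 42, "device_federation_outbox": 17}
next_id = max(stream_maxes.values()) + 1

sql = "ALTER SEQUENCE %s RESTART WITH" % ("device_inbox_sequence",)
print(sql + " %d" % next_id)
# -> ALTER SEQUENCE device_inbox_sequence RESTART WITH 43
```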
101 | 101 | "flake8", |
102 | 102 | ] |
103 | 103 | |
104 | CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.790", "mypy-zope==0.2.8"] | |
104 | CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.11"] | |
105 | 105 | |
106 | 106 | # Dependencies which are exclusively required by unit test code. This is |
107 | 107 | # NOT a list of all modules that are necessary to run the unit tests. |
47 | 47 | except ImportError: |
48 | 48 | pass |
49 | 49 | |
50 | __version__ = "1.28.0" | |
50 | __version__ = "1.29.0" | |
51 | 51 | |
52 | 52 | if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): |
53 | 53 | # We import here so that we don't have to install a bunch of deps when |
97 | 97 | |
98 | 98 | Retention = "m.room.retention" |
99 | 99 | |
100 | Dummy = "org.matrix.dummy_event" | |
101 | ||
102 | ||
103 | class EduTypes: | |
100 | 104 | Presence = "m.presence" |
101 | ||
102 | Dummy = "org.matrix.dummy_event" | |
105 | RoomKeyRequest = "m.room_key_request" | |
103 | 106 | |
104 | 107 | |
105 | 108 | class RejectedReason: |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | from collections import OrderedDict |
16 | from typing import Any, Optional, Tuple | |
16 | from typing import Hashable, Optional, Tuple | |
17 | 17 | |
18 | 18 | from synapse.api.errors import LimitExceededError |
19 | 19 | from synapse.types import Requester |
41 | 41 | # * How many times an action has occurred since a point in time |
42 | 42 | # * The point in time |
43 | 43 | # * The rate_hz of this particular entry. This can vary per request |
44 | self.actions = OrderedDict() # type: OrderedDict[Any, Tuple[float, int, float]] | |
44 | self.actions = ( | |
45 | OrderedDict() | |
46 | ) # type: OrderedDict[Hashable, Tuple[float, int, float]] | |
45 | 47 | |
46 | 48 | def can_requester_do_action( |
47 | 49 | self, |
81 | 83 | |
82 | 84 | def can_do_action( |
83 | 85 | self, |
84 | key: Any, | |
86 | key: Hashable, | |
85 | 87 | rate_hz: Optional[float] = None, |
86 | 88 | burst_count: Optional[int] = None, |
87 | 89 | update: bool = True, |
174 | 176 | |
175 | 177 | def ratelimit( |
176 | 178 | self, |
177 | key: Any, | |
179 | key: Hashable, | |
178 | 180 | rate_hz: Optional[float] = None, |
179 | 181 | burst_count: Optional[int] = None, |
180 | 182 | update: bool = True, |
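
The comment block above describes the per-key state as (actions since a point in time, that point in time, the rate for that entry). Here is a simplified, self-contained sketch of that bookkeeping, draining at `rate_hz` and capping at `burst_count`; it assumes a monotonic clock and omits Synapse's per-request overrides:

```python
import time
from collections import OrderedDict
from typing import Hashable, Tuple

class TinyRatelimiter:
    """Leaky-bucket-style limiter keyed by any hashable value (a sketch)."""

    def __init__(self, rate_hz: float, burst_count: int) -> None:
        self.rate_hz = rate_hz
        self.burst_count = burst_count
        # key -> (outstanding action count, time of last allowed action)
        self.actions = OrderedDict()  # type: OrderedDict[Hashable, Tuple[float, float]]

    def can_do_action(self, key: Hashable) -> bool:
        now = time.monotonic()
        count, last = self.actions.get(key, (0.0, now))
        # Drain the bucket at rate_hz for the time elapsed since last update.
        count = max(0.0, count - (now - last) * self.rate_hz)
        if count >= self.burst_count:
            return False
        self.actions[key] = (count + 1.0, now)
        return True
```

Narrowing the key type from `Any` to `Hashable`, as the diff does, documents the one real requirement the dictionary imposes on callers.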
15 | 15 | import sys |
16 | 16 | |
17 | 17 | from synapse import python_dependencies # noqa: E402 |
18 | ||
19 | sys.dont_write_bytecode = True | |
20 | 18 | |
21 | 19 | logger = logging.getLogger(__name__) |
22 | 20 |
209 | 209 | config.update_user_directory = False |
210 | 210 | config.run_background_tasks = False |
211 | 211 | config.start_pushers = False |
212 | config.pusher_shard_config.instances = [] | |
212 | 213 | config.send_federation = False |
214 | config.federation_shard_config.instances = [] | |
213 | 215 | |
214 | 216 | synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts |
215 | 217 |
22 | 22 | |
23 | 23 | from twisted.internet import address |
24 | 24 | from twisted.web.resource import IResource |
25 | from twisted.web.server import Request | |
25 | 26 | |
26 | 27 | import synapse |
27 | 28 | import synapse.events |
189 | 190 | self.http_client = hs.get_simple_http_client() |
190 | 191 | self.main_uri = hs.config.worker_main_http_uri |
191 | 192 | |
192 | async def on_POST(self, request, device_id): | |
193 | async def on_POST(self, request: Request, device_id: Optional[str]): | |
193 | 194 | requester = await self.auth.get_user_by_req(request, allow_guest=True) |
194 | 195 | user_id = requester.user.to_string() |
195 | 196 | body = parse_json_object_from_request(request) |
222 | 223 | header: request.requestHeaders.getRawHeaders(header, []) |
223 | 224 | for header in (b"Authorization", b"User-Agent") |
224 | 225 | } |
225 | # Add the previous hop the the X-Forwarded-For header. | |
226 | # Add the previous hop to the X-Forwarded-For header. | |
226 | 227 | x_forwarded_for = request.requestHeaders.getRawHeaders( |
227 | 228 | b"X-Forwarded-For", [] |
228 | 229 | ) |
230 | # we use request.client here, since we want the previous hop, not the | |
231 | # original client (as returned by request.getClientAddress()). | |
229 | 232 | if isinstance(request.client, (address.IPv4Address, address.IPv6Address)): |
230 | 233 | previous_host = request.client.host.encode("ascii") |
231 | 234 | # If the header exists, add to the comma-separated list of the first |
237 | 240 | else: |
238 | 241 | x_forwarded_for = [previous_host] |
239 | 242 | headers[b"X-Forwarded-For"] = x_forwarded_for |
243 | ||
244 | # Replicate the original X-Forwarded-Proto header. Note that | |
245 | # XForwardedForRequest overrides isSecure() to give us the original protocol | |
246 | # used by the client, as opposed to the protocol used by our upstream proxy | |
247 | # - which is what we want here. | |
248 | headers[b"X-Forwarded-Proto"] = [ | |
249 | b"https" if request.isSecure() else b"http" | |
250 | ] | |
240 | 251 | |
241 | 252 | try: |
242 | 253 | result = await self.http_client.post_json_get_json( |
643 | 654 | logger.warning("Unsupported listener type: %s", listener.type) |
644 | 655 | |
645 | 656 | self.get_tcp_replication().start_replication(self) |
646 | ||
647 | async def remove_pusher(self, app_id, push_key, user_id): | |
648 | self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id) | |
649 | 657 | |
650 | 658 | @cache_in_self |
651 | 659 | def get_replication_data_handler(self): |
921 | 929 | # For other worker types we force this to off. |
922 | 930 | config.appservice.notify_appservices = False |
923 | 931 | |
924 | if config.worker_app == "synapse.app.pusher": | |
925 | if config.server.start_pushers: | |
926 | sys.stderr.write( | |
927 | "\nThe pushers must be disabled in the main synapse process" | |
928 | "\nbefore they can be run in a separate worker." | |
929 | "\nPlease add ``start_pushers: false`` to the main config" | |
930 | "\n" | |
931 | ) | |
932 | sys.exit(1) | |
933 | ||
934 | # Force the pushers to start since they will be disabled in the main config | |
935 | config.server.start_pushers = True | |
936 | else: | |
937 | # For other worker types we force this to off. | |
938 | config.server.start_pushers = False | |
939 | ||
940 | 932 | if config.worker_app == "synapse.app.user_dir": |
941 | 933 | if config.server.update_user_directory: |
942 | 934 | sys.stderr.write( |
953 | 945 | # For other worker types we force this to off. |
954 | 946 | config.server.update_user_directory = False |
955 | 947 | |
956 | if config.worker_app == "synapse.app.federation_sender": | |
957 | if config.worker.send_federation: | |
958 | sys.stderr.write( | |
959 | "\nThe send_federation must be disabled in the main synapse process" | |
960 | "\nbefore they can be run in a separate worker." | |
961 | "\nPlease add ``send_federation: false`` to the main config" | |
962 | "\n" | |
963 | ) | |
964 | sys.exit(1) | |
965 | ||
966 | # Force the pushers to start since they will be disabled in the main config | |
967 | config.worker.send_federation = True | |
968 | else: | |
969 | # For other worker types we force this to off. | |
970 | config.worker.send_federation = False | |
971 | ||
972 | 948 | synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts |
973 | 949 | |
974 | 950 | hs = GenericWorkerServer( |
20 | 20 | from collections import OrderedDict |
21 | 21 | from hashlib import sha256 |
22 | 22 | from textwrap import dedent |
23 | from typing import Any, Iterable, List, MutableMapping, Optional | |
23 | from typing import Any, Iterable, List, MutableMapping, Optional, Union | |
24 | 24 | |
25 | 25 | import attr |
26 | 26 | import jinja2 |
146 | 146 | return int(value) * size |
147 | 147 | |
148 | 148 | @staticmethod |
149 | def parse_duration(value): | |
149 | def parse_duration(value: Union[str, int]) -> int: | |
150 | """Convert a duration as a string or integer to a number of milliseconds. | |
151 | ||
152 | If an integer is provided it is treated as milliseconds and is unchanged. | |
153 | ||
154 | String durations can have a suffix of 's', 'm', 'h', 'd', 'w', or 'y'. | |
155 | No suffix is treated as milliseconds. | |
156 | ||
157 | Args: | |
158 | value: The duration to parse. | |
159 | ||
160 | Returns: | |
161 | The number of milliseconds in the duration. | |
162 | """ | |
150 | 163 | if isinstance(value, int): |
151 | 164 | return value |
152 | 165 | second = 1000 |
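
The docstring added above fully specifies the semantics, so a self-contained sketch is easy to check against it; this is illustrative, not the code being diffed:

```python
SECOND_MS = 1000
UNITS_MS = {
    "s": SECOND_MS,
    "m": 60 * SECOND_MS,
    "h": 60 * 60 * SECOND_MS,
    "d": 24 * 60 * 60 * SECOND_MS,
    "w": 7 * 24 * 60 * 60 * SECOND_MS,
    "y": 365 * 24 * 60 * 60 * SECOND_MS,
}

def parse_duration(value):
    # Integers are already milliseconds and pass through unchanged.
    if isinstance(value, int):
        return value
    # Strings may carry a unit suffix; no suffix means milliseconds.
    if value and value[-1] in UNITS_MS:
        return int(value[:-1]) * UNITS_MS[value[-1]]
    return int(value)

assert parse_duration("2h") == 7_200_000
assert parse_duration("500") == 500
assert parse_duration(250) == 250
```

(The 365-day year is an assumption consistent with the suffix list; the diffed hunk does not show the unit table itself.)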
830 | 843 | |
831 | 844 | def should_handle(self, instance_name: str, key: str) -> bool: |
832 | 845 | """Whether this instance is responsible for handling the given key.""" |
833 | # If multiple instances are not defined we always return true | |
834 | if not self.instances or len(self.instances) == 1: | |
835 | return True | |
836 | ||
837 | return self.get_instance(key) == instance_name | |
838 | ||
839 | def get_instance(self, key: str) -> str: | |
846 | # If no instances are defined we assume some other worker is handling | |
847 | # this. | |
848 | if not self.instances: | |
849 | return False | |
850 | ||
851 | return self._get_instance(key) == instance_name | |
852 | ||
853 | def _get_instance(self, key: str) -> str: | |
840 | 854 | """Get the instance responsible for handling the given key. |
841 | 855 | |
842 | Note: For things like federation sending the config for which instance | |
843 | is sending is known only to the sender instance if there is only one. | |
844 | Therefore `should_handle` should be used where possible. | |
856 | Note: For federation sending and pushers the config for which instance | |
857 | is sending is known only to the sender instance, so we don't expose this | |
858 | method by default. | |
845 | 859 | """ |
846 | 860 | |
847 | 861 | if not self.instances: |
848 | return "master" | |
862 | raise Exception("Unknown worker") | |
849 | 863 | |
850 | 864 | if len(self.instances) == 1: |
851 | 865 | return self.instances[0] |
862 | 876 | return self.instances[remainder] |
863 | 877 | |
864 | 878 | |
879 | @attr.s | |
880 | class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig): | |
881 | """A version of `ShardedWorkerHandlingConfig` that is used for config | |
882 | options where all instances know which instances are responsible for the | |
883 | sharded work. | |
884 | """ | |
885 | ||
886 | def __attrs_post_init__(self): | |
887 | # We require that `self.instances` is non-empty. | |
888 | if not self.instances: | |
889 | raise Exception("Got empty list of instances for shard config") | |
890 | ||
891 | def get_instance(self, key: str) -> str: | |
892 | """Get the instance responsible for handling the given key.""" | |
893 | return self._get_instance(key) | |
894 | ||
895 | ||
865 | 896 | __all__ = ["Config", "RootConfig", "ShardedWorkerHandlingConfig"] |
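
The visible tail of `_get_instance` above (`return self.instances[remainder]`) shows a hash-and-modulo scheme: every instance computes the same owner for a given key, which is what makes `should_handle` safe to evaluate locally. A hedged sketch, where the specific hash (SHA-256, little-endian) is an assumption for illustration:

```python
import hashlib
from typing import List

def get_instance(instances: List[str], key: str) -> str:
    """Deterministically pick the instance responsible for `key`."""
    if len(instances) == 1:
        return instances[0]
    digest = hashlib.sha256(key.encode("utf8")).digest()
    remainder = int.from_bytes(digest, byteorder="little") % len(instances)
    return instances[remainder]

# The mapping is stable: the same key always lands on the same worker.
workers = ["sender1", "sender2", "sender3"]
assert get_instance(workers, "@alice:example.com") == get_instance(
    workers, "@alice:example.com"
)
```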
148 | 148 | instances: List[str] |
149 | 149 | def __init__(self, instances: List[str]) -> None: ... |
150 | 150 | def should_handle(self, instance_name: str, key: str) -> bool: ... |
151 | ||
152 | class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig): | |
151 | 153 | def get_instance(self, key: str) -> str: ... |
40 | 40 | ) |
41 | 41 | self.federation_metrics_domains = set(federation_metrics_domains) |
42 | 42 | |
43 | self.allow_profile_lookup_over_federation = config.get( | |
44 | "allow_profile_lookup_over_federation", True | |
45 | ) | |
46 | ||
43 | 47 | def generate_config_section(self, config_dir_path, server_name, **kwargs): |
44 | 48 | return """\ |
45 | 49 | ## Federation ## |
65 | 69 | #federation_metrics_domains: |
66 | 70 | # - matrix.org |
67 | 71 | # - example.com |
72 | ||
73 | # Uncomment to disable profile lookup over federation. By default, the | |
74 | # Federation API allows other homeservers to obtain profile data of any user | |
75 | # on this homeserver. Defaults to 'true'. | |
76 | # | |
77 | #allow_profile_lookup_over_federation: false | |
68 | 78 | """ |
69 | 79 | |
70 | 80 |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | from ._base import Config, ShardedWorkerHandlingConfig | |
16 | from ._base import Config | |
17 | 17 | |
18 | 18 | |
19 | 19 | class PushConfig(Config): |
25 | 25 | self.push_group_unread_count_by_room = push_config.get( |
26 | 26 | "group_unread_count_by_room", True |
27 | 27 | ) |
28 | ||
29 | pusher_instances = config.get("pusher_instances") or [] | |
30 | self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances) | |
31 | 28 | |
32 | 29 | # There was a 'redact_content' setting, but it was mistakenly read from the
33 | 30 | # 'email' section. Check for the flag in the 'push' section, and log,
99 | 99 | self.rc_joins_remote = RateLimitConfig( |
100 | 100 | config.get("rc_joins", {}).get("remote", {}), |
101 | 101 | defaults={"per_second": 0.01, "burst_count": 3}, |
102 | ) | |
103 | ||
104 | # Ratelimit cross-user key requests: | |
105 | # * For local requests this is keyed by the sending device. | |
106 | # * For requests received over federation this is keyed by the origin. | |
107 | # | |
108 | # Note that this isn't exposed in the configuration as it is obscure. | |
109 | self.rc_key_requests = RateLimitConfig( | |
110 | config.get("rc_key_requests", {}), | |
111 | defaults={"per_second": 20, "burst_count": 100}, | |
102 | 112 | ) |
103 | 113 | |
104 | 114 | self.rc_3pid_validation = RateLimitConfig( |
205 | 205 | |
206 | 206 | def generate_config_section(self, data_dir_path, **kwargs): |
207 | 207 | media_store = os.path.join(data_dir_path, "media_store") |
208 | uploads_path = os.path.join(data_dir_path, "uploads") | |
209 | 208 | |
210 | 209 | formatted_thumbnail_sizes = "".join( |
211 | 210 | THUMBNAIL_SIZE_YAML % s for s in DEFAULT_THUMBNAIL_SIZES |
262 | 262 | False, |
263 | 263 | ) |
264 | 264 | |
265 | # Whether to retrieve and display profile data for a user when they | |
266 | # are invited to a room | |
267 | self.include_profile_data_on_invite = config.get( | |
268 | "include_profile_data_on_invite", True | |
269 | ) | |
270 | ||
265 | 271 | if "restrict_public_rooms_to_local_users" in config and ( |
266 | 272 | "allow_public_rooms_without_auth" in config |
267 | 273 | or "allow_public_rooms_over_federation" in config |
390 | 396 | if self.public_baseurl is not None: |
391 | 397 | if self.public_baseurl[-1] != "/": |
392 | 398 | self.public_baseurl += "/" |
393 | self.start_pushers = config.get("start_pushers", True) | |
394 | 399 | |
395 | 400 | # (undocumented) option for torturing the worker-mode replication a bit, |
396 | 401 | # for testing. The value defines the number of milliseconds to pause before |
847 | 852 | # |
848 | 853 | #limit_profile_requests_to_users_who_share_rooms: true |
849 | 854 | |
855 | # Uncomment to prevent a user's profile data from being retrieved and | |
856 | # displayed in a room until they have joined it. By default, a user's | |
857 | # profile data is included in an invite event, regardless of the values | |
858 | # of the above two settings, and whether or not the users share a server. | |
859 | # Defaults to 'true'. | |
860 | # | |
861 | #include_profile_data_on_invite: false | |
862 | ||
850 | 863 | # If set to 'true', removes the need for authentication to access the server's |
851 | 864 | # public rooms directory through the client API, meaning that anyone can |
852 | 865 | # query the room directory. Defaults to 'false'. |
23 | 23 | section = "userdirectory" |
24 | 24 | |
25 | 25 | def read_config(self, config, **kwargs): |
26 | self.user_directory_search_enabled = True | |
27 | self.user_directory_search_all_users = False | |
28 | user_directory_config = config.get("user_directory", None) | |
29 | if user_directory_config: | |
30 | self.user_directory_search_enabled = user_directory_config.get( | |
31 | "enabled", True | |
32 | ) | |
33 | self.user_directory_search_all_users = user_directory_config.get( | |
34 | "search_all_users", False | |
35 | ) | |
26 | user_directory_config = config.get("user_directory") or {} | |
27 | self.user_directory_search_enabled = user_directory_config.get("enabled", True) | |
28 | self.user_directory_search_all_users = user_directory_config.get( | |
29 | "search_all_users", False | |
30 | ) | |
31 | self.user_directory_search_prefer_local_users = user_directory_config.get( | |
32 | "prefer_local_users", False | |
33 | ) | |
36 | 34 | |
37 | 35 | def generate_config_section(self, config_dir_path, server_name, **kwargs): |
38 | 36 | return """ |
39 | 37 | # User Directory configuration |
40 | 38 | # |
41 | # 'enabled' defines whether users can search the user directory. If | |
42 | # false then empty responses are returned to all queries. Defaults to | |
43 | # true. | |
44 | # | |
45 | # 'search_all_users' defines whether to search all users visible to your HS | |
46 | # when searching the user directory, rather than limiting to users visible | |
47 | # in public rooms. Defaults to false. If you set it True, you'll have to | |
48 | # rebuild the user_directory search indexes, see | |
49 | # https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md | |
50 | # | |
51 | #user_directory: | |
52 | # enabled: true | |
53 | # search_all_users: false | |
39 | user_directory: | |
40 | # Defines whether users can search the user directory. If false then | |
41 | # empty responses are returned to all queries. Defaults to true. | |
42 | # | |
43 | # Uncomment to disable the user directory. | |
44 | # | |
45 | #enabled: false | |
46 | ||
47 | # Defines whether to search all users visible to your HS when searching | |
48 | # the user directory, rather than limiting to users visible in public | |
49 | # rooms. Defaults to false. | |
50 | # | |
51 | # If you set it true, you'll have to rebuild the user_directory search | |
52 | # indexes, see: | |
53 | # https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md | |
54 | # | |
55 | # Uncomment to return search results containing all known users, even if that | |
56 | # user does not share a room with the requester. | |
57 | # | |
58 | #search_all_users: true | |
59 | ||
60 | # Defines whether to prefer local users in search query results. | |
61 | # If True, local users are more likely to appear above remote users | |
62 | # when searching the user directory. Defaults to false. | |
63 | # | |
64 | # Uncomment to prefer local over remote users in user directory search | |
65 | # results. | |
66 | # | |
67 | #prefer_local_users: true | |
54 | 68 | """ |
16 | 16 | |
17 | 17 | import attr |
18 | 18 | |
19 | from ._base import Config, ConfigError, ShardedWorkerHandlingConfig | |
19 | from ._base import ( | |
20 | Config, | |
21 | ConfigError, | |
22 | RoutableShardedWorkerHandlingConfig, | |
23 | ShardedWorkerHandlingConfig, | |
24 | ) | |
20 | 25 | from .server import ListenerConfig, parse_listener_def |
26 | ||
27 | _FEDERATION_SENDER_WITH_SEND_FEDERATION_ENABLED_ERROR = """ | |
28 | The send_federation config option must be disabled in the main | |
29 | synapse process before federation sending can be run in a separate worker. | |
30 | ||
31 | Please add ``send_federation: false`` to the main config | |
32 | """ | |
33 | ||
34 | _PUSHER_WITH_START_PUSHERS_ENABLED_ERROR = """ | |
35 | The start_pushers config option must be disabled in the main | |
36 | synapse process before pushers can be run in a separate worker. | |
37 | ||
38 | Please add ``start_pushers: false`` to the main config | |
39 | """ | |
21 | 40 | |
22 | 41 | |
23 | 42 | def _instance_to_list_converter(obj: Union[str, List[str]]) -> List[str]: |
102 | 121 | self.worker_replication_secret = config.get("worker_replication_secret", None) |
103 | 122 | |
104 | 123 | self.worker_name = config.get("worker_name", self.worker_app) |
124 | self.instance_name = self.worker_name or "master" | |
105 | 125 | |
106 | 126 | self.worker_main_http_uri = config.get("worker_main_http_uri", None) |
107 | 127 | |
117 | 137 | ) |
118 | 138 | ) |
119 | 139 | |
120 | # Whether to send federation traffic out in this process. This only | |
121 | # applies to some federation traffic, and so shouldn't be used to | |
122 | # "disable" federation | |
123 | self.send_federation = config.get("send_federation", True) | |
124 | ||
125 | federation_sender_instances = config.get("federation_sender_instances") or [] | |
140 | # Handle federation sender configuration. | |
141 | # | |
142 | # There are two ways of configuring which instances handle federation | |
143 | # sending: | |
144 | # 1. The old way where "send_federation" is set to false and running a | |
145 | # `synapse.app.federation_sender` worker app. | |
146 | # 2. Specifying the workers sending federation in | |
147 | # `federation_sender_instances`. | |
148 | # | |
149 | ||
150 | send_federation = config.get("send_federation", True) | |
151 | ||
152 | federation_sender_instances = config.get("federation_sender_instances") | |
153 | if federation_sender_instances is None: | |
154 | # Default to an empty list, which means "another, unknown, worker is | |
155 | # responsible for it". | |
156 | federation_sender_instances = [] | |
157 | ||
158 | # If no federation sender instances are set we check if | |
159 | # `send_federation` is set, which means use master | |
160 | if send_federation: | |
161 | federation_sender_instances = ["master"] | |
162 | ||
163 | if self.worker_app == "synapse.app.federation_sender": | |
164 | if send_federation: | |
165 | # If we're running federation senders, and not using | |
166 | # `federation_sender_instances`, then we should have | |
167 | # explicitly set `send_federation` to false. | |
168 | raise ConfigError( | |
169 | _FEDERATION_SENDER_WITH_SEND_FEDERATION_ENABLED_ERROR | |
170 | ) | |
171 | ||
172 | federation_sender_instances = [self.worker_name] | |
173 | ||
174 | self.send_federation = self.instance_name in federation_sender_instances | |
126 | 175 | self.federation_shard_config = ShardedWorkerHandlingConfig( |
127 | 176 | federation_sender_instances |
128 | 177 | ) |
163 | 212 | "Must only specify one instance to handle `receipts` messages." |
164 | 213 | ) |
165 | 214 | |
166 | self.events_shard_config = ShardedWorkerHandlingConfig(self.writers.events) | |
215 | if len(self.writers.events) == 0: | |
216 | raise ConfigError("Must specify at least one instance to handle `events`.") | |
217 | ||
218 | self.events_shard_config = RoutableShardedWorkerHandlingConfig( | |
219 | self.writers.events | |
220 | ) | |
221 | ||
222 | # Handle sharded push | |
223 | start_pushers = config.get("start_pushers", True) | |
224 | pusher_instances = config.get("pusher_instances") | |
225 | if pusher_instances is None: | |
226 | # Default to an empty list, which means "another, unknown, worker is | |
227 | # responsible for it". | |
228 | pusher_instances = [] | |
229 | ||
230 | # If no pushers instances are set we check if `start_pushers` is | |
231 | # set, which means use master | |
232 | if start_pushers: | |
233 | pusher_instances = ["master"] | |
234 | ||
235 | if self.worker_app == "synapse.app.pusher": | |
236 | if start_pushers: | |
237 | # If we're running pushers, and not using | |
238 | # `pusher_instances`, then we should have explicitly set | |
239 | # `start_pushers` to false. | |
240 | raise ConfigError(_PUSHER_WITH_START_PUSHERS_ENABLED_ERROR) | |
241 | ||
242 | pusher_instances = [self.instance_name] | |
243 | ||
244 | self.start_pushers = self.instance_name in pusher_instances | |
245 | self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances) | |
167 | 246 | |
168 | 247 | # Whether this worker should run background tasks or not. |
169 | 248 | # |
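
Both blocks above follow the same pattern, once for federation sending and once for pushers: a legacy on/off flag, an optional explicit instance list, and a dedicated legacy worker app that claims the work for itself. A condensed sketch of that resolution, with illustrative names (the real code raises `ConfigError` and repeats this per subsystem):

```python
from typing import List, Optional, Tuple

def resolve_shard(
    legacy_flag: bool,              # e.g. send_federation / start_pushers
    explicit: Optional[List[str]],  # e.g. federation_sender_instances
    worker_app: Optional[str],
    legacy_worker_app: str,         # e.g. "synapse.app.federation_sender"
    instance_name: str,
) -> Tuple[bool, List[str]]:
    instances = explicit
    if instances is None:
        # No explicit list: the flag decides between "master does it" and
        # "another, unknown, worker is responsible for it".
        instances = ["master"] if legacy_flag else []
    if worker_app == legacy_worker_app:
        if legacy_flag:
            # Mirrors the ConfigError above: the flag must be disabled in
            # the main process before running the dedicated worker app.
            raise ValueError("disable the legacy flag in the main config first")
        instances = [instance_name]
    return instance_name in instances, instances

# A plain single-process deployment with defaults handles the work itself:
assert resolve_shard(True, None, None, "synapse.app.federation_sender", "master") \
    == (True, ["master"])
```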
33 | 33 | from twisted.internet.abstract import isIPAddress |
34 | 34 | from twisted.python import failure |
35 | 35 | |
36 | from synapse.api.constants import EventTypes, Membership | |
36 | from synapse.api.constants import EduTypes, EventTypes, Membership | |
37 | 37 | from synapse.api.errors import ( |
38 | 38 | AuthError, |
39 | 39 | Codes, |
43 | 43 | SynapseError, |
44 | 44 | UnsupportedRoomVersionError, |
45 | 45 | ) |
46 | from synapse.api.ratelimiting import Ratelimiter | |
46 | 47 | from synapse.api.room_versions import KNOWN_ROOM_VERSIONS |
47 | 48 | from synapse.events import EventBase |
48 | 49 | from synapse.federation.federation_base import FederationBase, event_from_pdu_json |
868 | 869 | # EDU received. |
869 | 870 | self._edu_type_to_instance = {} # type: Dict[str, List[str]] |
870 | 871 | |
872 | # A rate limiter for incoming room key requests per origin. | |
873 | self._room_key_request_rate_limiter = Ratelimiter( | |
874 | clock=self.clock, | |
875 | rate_hz=self.config.rc_key_requests.per_second, | |
876 | burst_count=self.config.rc_key_requests.burst_count, | |
877 | ) | |
878 | ||
871 | 879 | def register_edu_handler( |
872 | 880 | self, edu_type: str, handler: Callable[[str, JsonDict], Awaitable[None]] |
873 | 881 | ): |
916 | 924 | self._edu_type_to_instance[edu_type] = instance_names |
917 | 925 | |
918 | 926 | async def on_edu(self, edu_type: str, origin: str, content: dict): |
919 | if not self.config.use_presence and edu_type == "m.presence": | |
927 | if not self.config.use_presence and edu_type == EduTypes.Presence: | |
928 | return | |
929 | ||
930 | # If the incoming room key requests from a particular origin are over | |
931 | # the limit, drop them. | |
932 | if ( | |
933 | edu_type == EduTypes.RoomKeyRequest | |
934 | and not self._room_key_request_rate_limiter.can_do_action(origin) | |
935 | ): | |
920 | 936 | return |
921 | 937 | |
922 | 938 | # Check if we have a handler on this instance |
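
To see the effect of the per-origin limit added to `on_edu` above, here is a toy, runnable illustration; the refill half of the limiter is omitted for brevity, and the burst of 100 matches the `rc_key_requests` defaults introduced in this release:

```python
class BurstLimiter:
    """Counts actions per key and allows at most burst_count (no refill)."""

    def __init__(self, burst_count: int) -> None:
        self.burst_count = burst_count
        self.counts = {}  # type: dict

    def can_do_action(self, key) -> bool:
        n = self.counts.get(key, 0)
        if n >= self.burst_count:
            return False
        self.counts[key] = n + 1
        return True

# 101 rapid m.room_key_request EDUs from one origin: the last one is dropped.
limiter = BurstLimiter(burst_count=100)
allowed = [limiter.can_do_action("remote.example.org") for _ in range(101)]
assert allowed.count(True) == 100 and allowed[-1] is False
```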
473 | 473 | self._processing_pending_presence = False |
474 | 474 | |
475 | 475 | def send_presence_to_destinations( |
476 | self, states: List[UserPresenceState], destinations: List[str] | |
476 | self, states: Iterable[UserPresenceState], destinations: Iterable[str] | |
477 | 477 | ) -> None: |
478 | 478 | """Send the given presence states to the given destinations. |
479 | 479 | destinations (list[str]) |
483 | 483 | |
484 | 484 | # This is when we receive a server-server Query |
485 | 485 | async def on_GET(self, origin, content, query, query_type): |
486 | return await self.handler.on_query_request( | |
487 | query_type, | |
488 | {k.decode("utf8"): v[0].decode("utf-8") for k, v in query.items()}, | |
489 | ) | |
486 | args = {k.decode("utf8"): v[0].decode("utf-8") for k, v in query.items()} | |
487 | args["origin"] = origin | |
488 | return await self.handler.on_query_request(query_type, args) | |
490 | 489 | |
491 | 490 | |
492 | 491 | class FederationMakeJoinServlet(BaseFederationServlet): |
35 | 35 | import bcrypt |
36 | 36 | import pymacaroons |
37 | 37 | |
38 | from twisted.web.http import Request | |
38 | from twisted.web.server import Request | |
39 | 39 | |
40 | 40 | from synapse.api.constants import LoginType |
41 | 41 | from synapse.api.errors import ( |
480 | 480 | sid = authdict["session"] |
481 | 481 | |
482 | 482 | # Convert the URI and method to strings. |
483 | uri = request.uri.decode("utf-8") | |
483 | uri = request.uri.decode("utf-8") # type: ignore | |
484 | 484 | method = request.method.decode("utf-8") |
485 | 485 | |
486 | 486 | # If there's no session ID, create a new session. |
119 | 119 | |
120 | 120 | await self.store.user_set_password_hash(user_id, None) |
121 | 121 | |
122 | # Most of the pushers will have been deleted when we logged out the | |
123 | # associated devices above, but we still need to delete pushers not | |
124 | # associated with devices, e.g. email pushers. | |
125 | await self.store.delete_all_pushers_for_user(user_id) | |
126 | ||
122 | 127 | # Add the user to a table of users pending deactivation (ie. |
123 | 128 | # removal from all the rooms they're a member of) |
124 | 129 | await self.store.add_user_pending_deactivation(user_id) |
15 | 15 | import logging |
16 | 16 | from typing import TYPE_CHECKING, Any, Dict |
17 | 17 | |
18 | from synapse.api.constants import EduTypes | |
18 | 19 | from synapse.api.errors import SynapseError |
20 | from synapse.api.ratelimiting import Ratelimiter | |
19 | 21 | from synapse.logging.context import run_in_background |
20 | 22 | from synapse.logging.opentracing import ( |
21 | 23 | get_active_span_text_map, |
24 | 26 | start_active_span, |
25 | 27 | ) |
26 | 28 | from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet |
27 | from synapse.types import JsonDict, UserID, get_domain_from_id | |
29 | from synapse.types import JsonDict, Requester, UserID, get_domain_from_id | |
28 | 30 | from synapse.util import json_encoder |
29 | 31 | from synapse.util.stringutils import random_string |
30 | 32 | |
77 | 79 | ReplicationUserDevicesResyncRestServlet.make_client(hs) |
78 | 80 | ) |
79 | 81 | |
82 | self._ratelimiter = Ratelimiter( | |
83 | clock=hs.get_clock(), | |
84 | rate_hz=hs.config.rc_key_requests.per_second, | |
85 | burst_count=hs.config.rc_key_requests.burst_count, | |
86 | ) | |
87 | ||
80 | 88 | async def on_direct_to_device_edu(self, origin: str, content: JsonDict) -> None: |
81 | 89 | local_messages = {} |
82 | 90 | sender_user_id = content["sender"] |
167 | 175 | |
168 | 176 | async def send_device_message( |
169 | 177 | self, |
170 | sender_user_id: str, | |
178 | requester: Requester, | |
171 | 179 | message_type: str, |
172 | 180 | messages: Dict[str, Dict[str, JsonDict]], |
173 | 181 | ) -> None: |
182 | sender_user_id = requester.user.to_string() | |
183 | ||
174 | 184 | set_tag("number_of_messages", len(messages)) |
175 | 185 | set_tag("sender", sender_user_id) |
176 | 186 | local_messages = {} |
177 | 187 | remote_messages = {} # type: Dict[str, Dict[str, Dict[str, JsonDict]]] |
178 | 188 | for user_id, by_device in messages.items(): |
189 | # Ratelimit local cross-user key requests by the sending device. | |
190 | if ( | |
191 | message_type == EduTypes.RoomKeyRequest | |
192 | and user_id != sender_user_id | |
193 | and not self._ratelimiter.can_do_action( |
194 | (sender_user_id, requester.device_id) | |
195 | ) | |
196 | ): | |
197 | continue | |
198 | ||
179 | 199 | # we use UserID.from_string to catch invalid user ids |
180 | 200 | if self.is_mine(UserID.from_string(user_id)): |
181 | 201 | messages_by_device = { |
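Note the sense of the guard above: a recipient is skipped only when the limiter refuses the action, mirroring the per-origin check in the federation registry. For illustration, a limiter keyed on `(user_id, device_id)` behaves like this (reusing the `MiniRatelimiter` sketch above; the numbers are hypothetical, the real values come from the `rc_key_requests` config):

    limiter = MiniRatelimiter(rate_hz=0.1, burst_count=3)
    key = ("@alice:example.org", "ADEVICEID")
    results = [limiter.can_do_action(key) for _ in range(5)]
    # results == [True, True, True, False, False]: the burst of 3 is spent,
    # and the 0.1 Hz refill is negligible within this loop.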
16 | 16 | import random |
17 | 17 | from typing import TYPE_CHECKING, Iterable, List, Optional |
18 | 18 | |
19 | from synapse.api.constants import EventTypes, Membership | |
19 | from synapse.api.constants import EduTypes, EventTypes, Membership | |
20 | 20 | from synapse.api.errors import AuthError, SynapseError |
21 | 21 | from synapse.events import EventBase |
22 | 22 | from synapse.handlers.presence import format_user_presence_state |
112 | 112 | states = await presence_handler.get_states(users) |
113 | 113 | to_add.extend( |
114 | 114 | { |
115 | "type": EventTypes.Presence, | |
115 | "type": EduTypes.Presence, | |
116 | 116 | "content": format_user_presence_state(state, time_now), |
117 | 117 | } |
118 | 118 | for state in states |
17 | 17 | |
18 | 18 | from twisted.internet import defer |
19 | 19 | |
20 | from synapse.api.constants import EventTypes, Membership | |
20 | from synapse.api.constants import EduTypes, EventTypes, Membership | |
21 | 21 | from synapse.api.errors import SynapseError |
22 | 22 | from synapse.events.validator import EventValidator |
23 | 23 | from synapse.handlers.presence import format_user_presence_state |
411 | 411 | |
412 | 412 | return [ |
413 | 413 | { |
414 | "type": EventTypes.Presence, | |
414 | "type": EduTypes.Presence, | |
415 | 415 | "content": format_user_presence_state(s, time_now), |
416 | 416 | } |
417 | 417 | for s in states |
386 | 386 | |
387 | 387 | self.room_invite_state_types = self.hs.config.room_invite_state_types |
388 | 388 | |
389 | self.membership_types_to_include_profile_data_in = ( | |
390 | {Membership.JOIN, Membership.INVITE} | |
391 | if self.hs.config.include_profile_data_on_invite | |
392 | else {Membership.JOIN} | |
393 | ) | |
394 | ||
389 | 395 | self.send_event = ReplicationSendEventRestServlet.make_client(hs) |
390 | 396 | |
391 | 397 | # This is only used to get at ratelimit function, and maybe_kick_guest_users |
499 | 505 | membership = builder.content.get("membership", None) |
500 | 506 | target = UserID.from_string(builder.state_key) |
501 | 507 | |
502 | if membership in {Membership.JOIN, Membership.INVITE}: | |
508 | if membership in self.membership_types_to_include_profile_data_in: | |
503 | 509 | # If event doesn't include a display name, add one. |
504 | 510 | profile = self.profile_handler |
505 | 511 | content = builder.content |
273 | 273 | |
274 | 274 | self.external_sync_linearizer = Linearizer(name="external_sync_linearizer") |
275 | 275 | |
276 | # Start a LoopingCall in 30s that fires every 5s. | |
277 | # The initial delay is to allow disconnected clients a chance to | |
278 | # reconnect before we treat them as offline. | |
279 | def run_timeout_handler(): | |
280 | return run_as_background_process( | |
281 | "handle_presence_timeouts", self._handle_timeouts | |
282 | ) | |
283 | ||
284 | self.clock.call_later(30, self.clock.looping_call, run_timeout_handler, 5000) | |
285 | ||
286 | def run_persister(): | |
287 | return run_as_background_process( | |
288 | "persist_presence_changes", self._persist_unpersisted_changes | |
289 | ) | |
290 | ||
291 | self.clock.call_later(60, self.clock.looping_call, run_persister, 60 * 1000) | |
276 | if self._presence_enabled: | |
277 | # Start a LoopingCall in 30s that fires every 5s. | |
278 | # The initial delay is to allow disconnected clients a chance to | |
279 | # reconnect before we treat them as offline. | |
280 | def run_timeout_handler(): | |
281 | return run_as_background_process( | |
282 | "handle_presence_timeouts", self._handle_timeouts | |
283 | ) | |
284 | ||
285 | self.clock.call_later( | |
286 | 30, self.clock.looping_call, run_timeout_handler, 5000 | |
287 | ) | |
288 | ||
289 | def run_persister(): | |
290 | return run_as_background_process( | |
291 | "persist_presence_changes", self._persist_unpersisted_changes | |
292 | ) | |
293 | ||
294 | self.clock.call_later(60, self.clock.looping_call, run_persister, 60 * 1000) | |
292 | 295 | |
293 | 296 | LaterGauge( |
294 | 297 | "synapse_handlers_presence_wheel_timer_size", |
298 | 301 | ) |
299 | 302 | |
300 | 303 | # Used to handle sending of presence to newly joined users/servers |
301 | if hs.config.use_presence: | |
304 | if self._presence_enabled: | |
302 | 305 | self.notifier.add_replication_callback(self.notify_new_event) |
303 | 306 | |
304 | 307 | # Presence is best effort and quickly heals itself, so lets just always |
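The `call_later`/`looping_call` combination above delays the start of a periodic task. In plain Twisted the same pattern looks like this (a sketch; Synapse's `Clock` wraps these reactor primitives and takes its loop interval in milliseconds):

    from twisted.internet import reactor, task

    def handle_timeouts():
        ...  # mark users as offline, etc.

    def start_timeout_loop():
        loop = task.LoopingCall(handle_timeouts)
        loop.start(5.0)  # fire every 5 seconds

    # Wait 30s before starting the loop, giving disconnected clients
    # a chance to reconnect before we treat them as offline.
    reactor.callLater(30, start_timeout_loop)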
848 | 851 | """Process current state deltas to find new joins that need to be |
849 | 852 | handled. |
850 | 853 | """ |
854 | # A map of destination to a set of user state that they should receive | |
855 | presence_destinations = {} # type: Dict[str, Set[UserPresenceState]] | |
856 | ||
851 | 857 | for delta in deltas: |
852 | 858 | typ = delta["type"] |
853 | 859 | state_key = delta["state_key"] |
857 | 863 | |
858 | 864 | logger.debug("Handling: %r %r, %s", typ, state_key, event_id) |
859 | 865 | |
866 | # Drop any event that isn't a membership join | |
860 | 867 | if typ != EventTypes.Member: |
861 | 868 | continue |
862 | 869 | |
879 | 886 | # Ignore changes to join events. |
880 | 887 | continue |
881 | 888 | |
882 | await self._on_user_joined_room(room_id, state_key) | |
883 | ||
884 | async def _on_user_joined_room(self, room_id: str, user_id: str) -> None: | |
889 | # Retrieve any user presence state updates that need to be sent as a result, | |
890 | # and the destinations that need to receive it | |
891 | destinations, user_presence_states = await self._on_user_joined_room( | |
892 | room_id, state_key | |
893 | ) | |
894 | ||
895 | # Insert the destinations and respective updates into our destinations dict | |
896 | for destination in destinations: | |
897 | presence_destinations.setdefault(destination, set()).update( | |
898 | user_presence_states | |
899 | ) | |
900 | ||
901 | # Send out user presence updates for each destination | |
902 | for destination, user_state_set in presence_destinations.items(): | |
903 | self.federation.send_presence_to_destinations( | |
904 | destinations=[destination], states=user_state_set | |
905 | ) | |
906 | ||
907 | async def _on_user_joined_room( | |
908 | self, room_id: str, user_id: str | |
909 | ) -> Tuple[List[str], List[UserPresenceState]]: | |
885 | 910 | """Called when we detect a user joining the room via the current state |
886 | delta stream. | |
887 | """ | |
888 | ||
911 | delta stream. Returns the destinations that need to be updated and the | |
912 | presence updates to send to them. | |
913 | ||
914 | Args: | |
915 | room_id: The ID of the room that the user has joined. | |
916 | user_id: The ID of the user that has joined the room. | |
917 | ||
918 | Returns: | |
919 | A tuple of destinations and presence updates to send to them. | |
920 | """ | |
889 | 921 | if self.is_mine_id(user_id): |
890 | 922 | # If this is a local user then we need to send their presence |
891 | 923 | # out to hosts in the room (who don't already have it) |
893 | 925 | # TODO: We should be able to filter the hosts down to those that |
894 | 926 | # haven't previously seen the user |
895 | 927 | |
928 | remote_hosts = await self.state.get_current_hosts_in_room(room_id) | |
929 | ||
930 | # Filter out ourselves. | |
931 | filtered_remote_hosts = [ | |
932 | host for host in remote_hosts if host != self.server_name | |
933 | ] | |
934 | ||
896 | 935 | state = await self.current_state_for_user(user_id) |
897 | hosts = await self.state.get_current_hosts_in_room(room_id) | |
898 | ||
899 | # Filter out ourselves. | |
900 | hosts = {host for host in hosts if host != self.server_name} | |
901 | ||
902 | self.federation.send_presence_to_destinations( | |
903 | states=[state], destinations=hosts | |
904 | ) | |
936 | return filtered_remote_hosts, [state] | |
905 | 937 | else: |
906 | 938 | # A remote user has joined the room, so we need to: |
907 | 939 | # 1. Check if this is a new server in the room |
913 | 945 | |
914 | 946 | # TODO: Check that this is actually a new server joining the |
915 | 947 | # room. |
948 | ||
949 | remote_host = get_domain_from_id(user_id) | |
916 | 950 | |
917 | 951 | users = await self.state.get_current_users_in_room(room_id) |
918 | 952 | user_ids = list(filter(self.is_mine_id, users)) |
933 | 967 | or state.status_msg is not None |
934 | 968 | ] |
935 | 969 | |
936 | if states: | |
937 | self.federation.send_presence_to_destinations( | |
938 | states=states, destinations=[get_domain_from_id(user_id)] | |
939 | ) | |
970 | return [remote_host], states | |
940 | 971 | |
941 | 972 | |
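The refactor batches presence updates per destination before sending, rather than firing a federation call per join. The grouping idiom in isolation (`joins` stands in for the results of the delta loop: an iterable of (destinations, updates) pairs):

    presence_destinations = {}
    for destinations, updates in joins:
        for destination in destinations:
            presence_destinations.setdefault(destination, set()).update(updates)

    # One federation call per destination, with its deduplicated set of states.
    for destination, user_state_set in presence_destinations.items():
        federation.send_presence_to_destinations(
            destinations=[destination], states=user_state_set
        )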
942 | 973 | def should_notify(old_state, new_state): |
309 | 309 | await self._update_join_states(requester, target_user) |
310 | 310 | |
311 | 311 | async def on_profile_query(self, args: JsonDict) -> JsonDict: |
312 | """Handles federation profile query requests.""" | |
313 | ||
314 | if not self.hs.config.allow_profile_lookup_over_federation: | |
315 | raise SynapseError( | |
316 | 403, | |
317 | "Profile lookup over federation is disabled on this homeserver", | |
318 | Codes.FORBIDDEN, | |
319 | ) | |
320 | ||
312 | 321 | user = UserID.from_string(args["user_id"]) |
313 | 322 | if not self.hs.is_mine(user): |
314 | 323 | raise SynapseError(400, "User is not hosted on this homeserver") |
30 | 30 | import attr |
31 | 31 | from typing_extensions import NoReturn, Protocol |
32 | 32 | |
33 | from twisted.web.http import Request | |
34 | 33 | from twisted.web.iweb import IRequest |
34 | from twisted.web.server import Request | |
35 | 35 | |
36 | 36 | from synapse.api.constants import LoginType |
37 | 37 | from synapse.api.errors import Codes, NotFoundError, RedirectException, SynapseError |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | 15 | import re |
16 | from typing import Union | |
16 | 17 | |
17 | from twisted.internet import task | |
18 | from twisted.internet import address, task | |
18 | 19 | from twisted.web.client import FileBodyProducer |
19 | 20 | from twisted.web.iweb import IRequest |
20 | 21 | |
52 | 53 | pass |
53 | 54 | |
54 | 55 | |
56 | def get_request_uri(request: IRequest) -> bytes: | |
57 | """Return the full URI that was requested by the client""" | |
58 | return b"%s://%s%s" % ( | |
59 | b"https" if request.isSecure() else b"http", | |
60 | _get_requested_host(request), | |
61 | # despite its name, "request.uri" is only the path and query-string. | |
62 | request.uri, | |
63 | ) | |
64 | ||
65 | ||
66 | def _get_requested_host(request: IRequest) -> bytes: | |
67 | hostname = request.getHeader(b"host") | |
68 | if hostname: | |
69 | return hostname | |
70 | ||
71 | # no Host header, use the address/port that the request arrived on | |
72 | host = request.getHost() # type: Union[address.IPv4Address, address.IPv6Address] | |
73 | ||
74 | hostname = host.host.encode("ascii") | |
75 | ||
76 | if request.isSecure() and host.port == 443: | |
77 | # default port for https | |
78 | return hostname | |
79 | ||
80 | if not request.isSecure() and host.port == 80: | |
81 | # default port for http | |
82 | return hostname | |
83 | ||
84 | return b"%s:%i" % ( | |
85 | hostname, | |
86 | host.port, | |
87 | ) | |
88 | ||
89 | ||
55 | 90 | def get_request_user_agent(request: IRequest, default: str = "") -> str: |
56 | 91 | """Return the last User-Agent header, or the given default.""" |
57 | 92 | # There could be raw utf-8 bytes in the User-Agent header. |
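To illustrate the default-port elision in the helpers above, here is a minimal invented stand-in for an `IRequest` (the class and its values are hypothetical, not from the codebase):

    from twisted.internet import address

    class FakeRequest:
        """Invented stand-in for an IRequest, to exercise the helpers above."""
        uri = b"/path?q=1"

        def isSecure(self):
            return True

        def getHeader(self, name):
            return None  # no Host header was supplied

        def getHost(self):
            return address.IPv4Address("TCP", "203.0.113.1", 443)

    # https on the default port 443, so the port suffix is omitted:
    # get_request_uri(FakeRequest()) == b"https://203.0.113.1/path?q=1"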
288 | 288 | treq_args: Dict[str, Any] = {}, |
289 | 289 | ip_whitelist: Optional[IPSet] = None, |
290 | 290 | ip_blacklist: Optional[IPSet] = None, |
291 | http_proxy: Optional[bytes] = None, | |
292 | https_proxy: Optional[bytes] = None, | |
291 | use_proxy: bool = False, | |
293 | 292 | ): |
294 | 293 | """ |
295 | 294 | Args: |
299 | 298 | we may not request. |
300 | 299 | ip_whitelist: The whitelisted IP addresses, that we can |
301 | 300 | request if it were otherwise caught in a blacklist. |
302 | http_proxy: proxy server to use for http connections. host[:port] | |
303 | https_proxy: proxy server to use for https connections. host[:port] | |
301 | use_proxy: Whether proxy settings should be discovered and used | |
302 | from conventional environment variables. | |
304 | 303 | """ |
305 | 304 | self.hs = hs |
306 | 305 | |
344 | 343 | connectTimeout=15, |
345 | 344 | contextFactory=self.hs.get_http_client_context_factory(), |
346 | 345 | pool=pool, |
347 | http_proxy=http_proxy, | |
348 | https_proxy=https_proxy, | |
346 | use_proxy=use_proxy, | |
349 | 347 | ) |
350 | 348 | |
351 | 349 | if self._ip_blacklist: |
749 | 747 | """The maximum allowed size of the HTTP body was exceeded.""" |
750 | 748 | |
751 | 749 | |
750 | class _DiscardBodyWithMaxSizeProtocol(protocol.Protocol): | |
751 | """A protocol which immediately errors upon receiving data.""" | |
752 | ||
753 | def __init__(self, deferred: defer.Deferred): | |
754 | self.deferred = deferred | |
755 | ||
756 | def _maybe_fail(self): | |
757 | """ | |
758 | Report a max size exceeded error and disconnect, the first time this is called. |
759 | """ | |
760 | if not self.deferred.called: | |
761 | self.deferred.errback(BodyExceededMaxSize()) | |
762 | # Close the connection (forcefully) since all the data will get | |
763 | # discarded anyway. | |
764 | self.transport.abortConnection() | |
765 | ||
766 | def dataReceived(self, data: bytes) -> None: | |
767 | self._maybe_fail() | |
768 | ||
769 | def connectionLost(self, reason: Failure) -> None: | |
770 | self._maybe_fail() | |
771 | ||
772 | ||
752 | 773 | class _ReadBodyWithMaxSizeProtocol(protocol.Protocol): |
774 | """A protocol which reads body to a stream, erroring if the body exceeds a maximum size.""" | |
775 | ||
753 | 776 | def __init__( |
754 | 777 | self, stream: BinaryIO, deferred: defer.Deferred, max_size: Optional[int] |
755 | 778 | ): |
806 | 829 | Returns: |
807 | 830 | A Deferred which resolves to the length of the read body. |
808 | 831 | """ |
832 | d = defer.Deferred() | |
833 | ||
809 | 834 | # If the Content-Length header gives a size larger than the maximum allowed |
810 | 835 | # size, do not bother downloading the body. |
811 | 836 | if max_size is not None and response.length != UNKNOWN_LENGTH: |
812 | 837 | if response.length > max_size: |
813 | return defer.fail(BodyExceededMaxSize()) | |
814 | ||
815 | d = defer.Deferred() | |
838 | response.deliverBody(_DiscardBodyWithMaxSizeProtocol(d)) | |
839 | return d | |
840 | ||
816 | 841 | response.deliverBody(_ReadBodyWithMaxSizeProtocol(stream, d, max_size)) |
817 | 842 | return d |
818 | 843 |
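A sketch of how a caller might consume `read_body_with_max_size` (here `response` is a Twisted `IResponse`; the function name and error handling are illustrative, not lifted from the codebase):

    from io import BytesIO

    async def fetch_limited(response, max_size=1024 * 1024):
        """Buffer a response body, refusing anything over max_size."""
        stream = BytesIO()
        try:
            # Resolves to the number of bytes read, or errbacks with
            # BodyExceededMaxSize if the limit is hit.
            await read_body_with_max_size(response, stream, max_size)
        except BodyExceededMaxSize:
            raise ValueError("response body exceeded %d bytes" % max_size)
        return stream.getvalue()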
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | 15 | import urllib.parse |
16 | from typing import List, Optional | |
16 | from typing import Any, Generator, List, Optional | |
17 | 17 | |
18 | 18 | from netaddr import AddrFormatError, IPAddress, IPSet |
19 | 19 | from zope.interface import implementer |
115 | 115 | uri: bytes, |
116 | 116 | headers: Optional[Headers] = None, |
117 | 117 | bodyProducer: Optional[IBodyProducer] = None, |
118 | ) -> defer.Deferred: | |
118 | ) -> Generator[defer.Deferred, Any, defer.Deferred]: | |
119 | 119 | """ |
120 | 120 | Args: |
121 | 121 | method: HTTP method: GET/POST/etc |
176 | 176 | # We need to make sure the host header is set to the netloc of the |
177 | 177 | # server and that a user-agent is provided. |
178 | 178 | if headers is None: |
179 | headers = Headers() | |
179 | request_headers = Headers() | |
180 | 180 | else: |
181 | headers = headers.copy() | |
182 | ||
183 | if not headers.hasHeader(b"host"): | |
184 | headers.addRawHeader(b"host", parsed_uri.netloc) | |
185 | if not headers.hasHeader(b"user-agent"): | |
186 | headers.addRawHeader(b"user-agent", self.user_agent) | |
181 | request_headers = headers.copy() | |
182 | ||
183 | if not request_headers.hasHeader(b"host"): | |
184 | request_headers.addRawHeader(b"host", parsed_uri.netloc) | |
185 | if not request_headers.hasHeader(b"user-agent"): | |
186 | request_headers.addRawHeader(b"user-agent", self.user_agent) | |
187 | 187 | |
188 | 188 | res = yield make_deferred_yieldable( |
189 | self._agent.request(method, uri, headers, bodyProducer) | |
189 | self._agent.request(method, uri, request_headers, bodyProducer) | |
190 | 190 | ) |
191 | 191 | |
192 | 192 | return res |
1048 | 1048 | RequestSendFailed: if the Content-Type header is missing or isn't JSON |
1049 | 1049 | |
1050 | 1050 | """ |
1051 | c_type = headers.getRawHeaders(b"Content-Type") | |
1052 | if c_type is None: | |
1051 | content_type_headers = headers.getRawHeaders(b"Content-Type") | |
1052 | if content_type_headers is None: | |
1053 | 1053 | raise RequestSendFailed( |
1054 | 1054 | RuntimeError("No Content-Type header received from remote server"), |
1055 | 1055 | can_retry=False, |
1056 | 1056 | ) |
1057 | 1057 | |
1058 | c_type = c_type[0].decode("ascii") # only the first header | |
1058 | c_type = content_type_headers[0].decode("ascii") # only the first header | |
1059 | 1059 | val, options = cgi.parse_header(c_type) |
1060 | 1060 | if val != "application/json": |
1061 | 1061 | raise RequestSendFailed( |
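For reference, `cgi.parse_header` splits a media type from its parameters, which is what the `application/json` comparison above relies on:

    import cgi

    val, options = cgi.parse_header("application/json; charset=utf-8")
    # val == "application/json", options == {"charset": "utf-8"}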
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | 15 | import re |
16 | from urllib.request import getproxies_environment, proxy_bypass_environment | |
16 | 17 | |
17 | 18 | from zope.interface import implementer |
18 | 19 | |
57 | 58 | |
58 | 59 | pool (HTTPConnectionPool|None): connection pool to be used. If None, a |
59 | 60 | non-persistent pool instance will be created. |
61 | ||
62 | use_proxy (bool): Whether proxy settings should be discovered and used | |
63 | from conventional environment variables. | |
60 | 64 | """ |
61 | 65 | |
62 | 66 | def __init__( |
67 | 71 | connectTimeout=None, |
68 | 72 | bindAddress=None, |
69 | 73 | pool=None, |
70 | http_proxy=None, | |
71 | https_proxy=None, | |
74 | use_proxy=False, | |
72 | 75 | ): |
73 | 76 | _AgentBase.__init__(self, reactor, pool) |
74 | 77 | |
83 | 86 | if bindAddress is not None: |
84 | 87 | self._endpoint_kwargs["bindAddress"] = bindAddress |
85 | 88 | |
89 | http_proxy = None | |
90 | https_proxy = None | |
91 | no_proxy = None | |
92 | if use_proxy: | |
93 | proxies = getproxies_environment() | |
94 | http_proxy = proxies["http"].encode() if "http" in proxies else None | |
95 | https_proxy = proxies["https"].encode() if "https" in proxies else None | |
96 | no_proxy = proxies["no"] if "no" in proxies else None | |
97 | ||
86 | 98 | self.http_proxy_endpoint = _http_proxy_endpoint( |
87 | 99 | http_proxy, self.proxy_reactor, **self._endpoint_kwargs |
88 | 100 | ) |
90 | 102 | self.https_proxy_endpoint = _http_proxy_endpoint( |
91 | 103 | https_proxy, self.proxy_reactor, **self._endpoint_kwargs |
92 | 104 | ) |
105 | ||
106 | self.no_proxy = no_proxy | |
93 | 107 | |
94 | 108 | self._policy_for_https = contextFactory |
95 | 109 | self._reactor = reactor |
138 | 152 | pool_key = (parsed_uri.scheme, parsed_uri.host, parsed_uri.port) |
139 | 153 | request_path = parsed_uri.originForm |
140 | 154 | |
141 | if parsed_uri.scheme == b"http" and self.http_proxy_endpoint: | |
155 | should_skip_proxy = False | |
156 | if self.no_proxy is not None: | |
157 | should_skip_proxy = proxy_bypass_environment( | |
158 | parsed_uri.host.decode(), | |
159 | proxies={"no": self.no_proxy}, | |
160 | ) | |
161 | ||
162 | if ( | |
163 | parsed_uri.scheme == b"http" | |
164 | and self.http_proxy_endpoint | |
165 | and not should_skip_proxy | |
166 | ): | |
142 | 167 | # Cache *all* connections under the same key, since we are only |
143 | 168 | # connecting to a single destination, the proxy: |
144 | 169 | pool_key = ("http-proxy", self.http_proxy_endpoint) |
145 | 170 | endpoint = self.http_proxy_endpoint |
146 | 171 | request_path = uri |
147 | elif parsed_uri.scheme == b"https" and self.https_proxy_endpoint: | |
172 | elif ( | |
173 | parsed_uri.scheme == b"https" | |
174 | and self.https_proxy_endpoint | |
175 | and not should_skip_proxy | |
176 | ): | |
148 | 177 | endpoint = HTTPConnectProxyEndpoint( |
149 | 178 | self.proxy_reactor, |
150 | 179 | self.https_proxy_endpoint, |
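A quick sketch of the stdlib helpers the agent now relies on. With the environment variables `https_proxy=http://proxy.example.com:8888` and `no_proxy=matrix.org` set (and no other `*_proxy` variables):

    from urllib.request import getproxies_environment, proxy_bypass_environment

    proxies = getproxies_environment()
    # proxies == {"https": "http://proxy.example.com:8888", "no": "matrix.org"}

    proxy_bypass_environment("matrix.org", proxies={"no": "matrix.org"})   # True
    proxy_bypass_environment("example.com", proxies={"no": "matrix.org"})  # False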
20 | 20 | import types |
21 | 21 | import urllib |
22 | 22 | from http import HTTPStatus |
23 | from inspect import isawaitable | |
23 | 24 | from io import BytesIO |
24 | 25 | from typing import ( |
25 | 26 | Any, |
29 | 30 | Iterable, |
30 | 31 | Iterator, |
31 | 32 | List, |
33 | Optional, | |
32 | 34 | Pattern, |
33 | 35 | Tuple, |
34 | 36 | Union, |
78 | 80 | """Sends a JSON error response to clients.""" |
79 | 81 | |
80 | 82 | if f.check(SynapseError): |
81 | error_code = f.value.code | |
82 | error_dict = f.value.error_dict() | |
83 | ||
84 | logger.info("%s SynapseError: %s - %s", request, error_code, f.value.msg) | |
83 | # mypy doesn't understand that f.check asserts the type. | |
84 | exc = f.value # type: SynapseError # type: ignore | |
85 | error_code = exc.code | |
86 | error_dict = exc.error_dict() | |
87 | ||
88 | logger.info("%s SynapseError: %s - %s", request, error_code, exc.msg) | |
85 | 89 | else: |
86 | 90 | error_code = 500 |
87 | 91 | error_dict = {"error": "Internal server error", "errcode": Codes.UNKNOWN} |
90 | 94 | "Failed handle request via %r: %r", |
91 | 95 | request.request_metrics.name, |
92 | 96 | request, |
93 | exc_info=(f.type, f.value, f.getTracebackObject()), | |
97 | exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore | |
94 | 98 | ) |
95 | 99 | |
96 | 100 | # Only respond with an error response if we haven't already started writing, |
127 | 131 | `{msg}` placeholders), or a jinja2 template |
128 | 132 | """ |
129 | 133 | if f.check(CodeMessageException): |
130 | cme = f.value | |
134 | # mypy doesn't understand that f.check asserts the type. | |
135 | cme = f.value # type: CodeMessageException # type: ignore | |
131 | 136 | code = cme.code |
132 | 137 | msg = cme.msg |
133 | 138 | |
141 | 146 | logger.error( |
142 | 147 | "Failed handle request %r", |
143 | 148 | request, |
144 | exc_info=(f.type, f.value, f.getTracebackObject()), | |
149 | exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore | |
145 | 150 | ) |
146 | 151 | else: |
147 | 152 | code = HTTPStatus.INTERNAL_SERVER_ERROR |
150 | 155 | logger.error( |
151 | 156 | "Failed handle request %r", |
152 | 157 | request, |
153 | exc_info=(f.type, f.value, f.getTracebackObject()), | |
158 | exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore | |
154 | 159 | ) |
155 | 160 | |
156 | 161 | if isinstance(error_template, str): |
277 | 282 | raw_callback_return = method_handler(request) |
278 | 283 | |
279 | 284 | # Is it synchronous? We'll allow this for now. |
280 | if isinstance(raw_callback_return, (defer.Deferred, types.CoroutineType)): | |
285 | if isawaitable(raw_callback_return): | |
281 | 286 | callback_return = await raw_callback_return |
282 | 287 | else: |
283 | 288 | callback_return = raw_callback_return # type: ignore |
398 | 403 | A tuple of the callback to use, the name of the servlet, and the |
399 | 404 | key word arguments to pass to the callback |
400 | 405 | """ |
406 | # At this point the path must be bytes. | |
407 | request_path_bytes = request.path # type: bytes # type: ignore | |
408 | request_path = request_path_bytes.decode("ascii") | |
401 | 409 | # Treat HEAD requests as GET requests. |
402 | request_path = request.path.decode("ascii") | |
403 | 410 | request_method = request.method |
404 | 411 | if request_method == b"HEAD": |
405 | 412 | request_method = b"GET" |
550 | 557 | request: Request, |
551 | 558 | iterator: Iterator[bytes], |
552 | 559 | ): |
553 | self._request = request | |
560 | self._request = request # type: Optional[Request] | |
554 | 561 | self._iterator = iterator |
555 | 562 | self._paused = False |
556 | 563 | |
562 | 569 | """ |
563 | 570 | Send a list of bytes as a chunk of a response. |
564 | 571 | """ |
565 | if not data: | |
572 | if not data or not self._request: | |
566 | 573 | return |
567 | 574 | self._request.write(b"".join(data)) |
568 | 575 |
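The `isawaitable` check above generalises the old isinstance test: it accepts coroutines as well as Twisted Deferreds (which implement `__await__`). The dispatch pattern in miniature (`handler` is a hypothetical callable):

    from inspect import isawaitable

    async def call_handler(handler, request):
        """Run a handler that may return a plain value or an awaitable."""
        raw = handler(request)
        # Coroutines and Twisted Deferreds both count as awaitable here.
        return await raw if isawaitable(raw) else raw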
13 | 13 | import contextlib |
14 | 14 | import logging |
15 | 15 | import time |
16 | from typing import Optional, Union | |
17 | ||
16 | from typing import Optional, Type, Union | |
17 | ||
18 | import attr | |
19 | from zope.interface import implementer | |
20 | ||
21 | from twisted.internet.interfaces import IAddress | |
18 | 22 | from twisted.python.failure import Failure |
19 | 23 | from twisted.web.server import Request, Site |
20 | 24 | |
52 | 56 | |
53 | 57 | def __init__(self, channel, *args, **kw): |
54 | 58 | Request.__init__(self, channel, *args, **kw) |
55 | self.site = channel.site | |
59 | self.site = channel.site # type: SynapseSite | |
56 | 60 | self._channel = channel # this is used by the tests |
57 | 61 | self.start_time = 0.0 |
58 | 62 | |
91 | 95 | def get_request_id(self): |
92 | 96 | return "%s-%i" % (self.get_method(), self.request_seq) |
93 | 97 | |
94 | def get_redacted_uri(self): | |
95 | uri = self.uri | |
98 | def get_redacted_uri(self) -> str: | |
99 | """Gets the redacted URI associated with the request (or placeholder if the URI | |
100 | has not yet been received). | |
101 | ||
102 | Note: This is necessary as the placeholder value in twisted is str | |
103 | rather than bytes, so we need to sanitise `self.uri`. | |
104 | ||
105 | Returns: | |
106 | The redacted URI as a string. | |
107 | """ | |
108 | uri = self.uri # type: Union[bytes, str] | |
96 | 109 | if isinstance(uri, bytes): |
97 | uri = self.uri.decode("ascii", errors="replace") | |
110 | uri = uri.decode("ascii", errors="replace") | |
98 | 111 | return redact_uri(uri) |
99 | 112 | |
100 | def get_method(self): | |
101 | """Gets the method associated with the request (or placeholder if not | |
102 | method has yet been received). | |
113 | def get_method(self) -> str: | |
114 | """Gets the method associated with the request (or placeholder if method | |
115 | has not yet been received). | |
103 | 116 | |
104 | 117 | Note: This is necessary as the placeholder value in twisted is str |
105 | 118 | rather than bytes, so we need to sanitise `self.method`. |
106 | 119 | |
107 | 120 | Returns: |
108 | str | |
109 | """ | |
110 | method = self.method | |
121 | The request method as a string. | |
122 | """ | |
123 | method = self.method # type: Union[bytes, str] | |
111 | 124 | if isinstance(method, bytes): |
112 | method = self.method.decode("ascii") | |
125 | return self.method.decode("ascii") | |
113 | 126 | return method |
114 | 127 | |
115 | 128 | def render(self, resrc): |
332 | 345 | |
333 | 346 | |
334 | 347 | class XForwardedForRequest(SynapseRequest): |
335 | def __init__(self, *args, **kw): | |
336 | SynapseRequest.__init__(self, *args, **kw) | |
337 | ||
348 | """Request object which honours proxy headers | |
349 | ||
350 | Extends SynapseRequest to replace getClientIP, getClientAddress, and isSecure with | |
351 | information from request headers. | |
338 | 352 | """ |
339 | Add a layer on top of another request that only uses the value of an | |
340 | X-Forwarded-For header as the result of C{getClientIP}. | |
341 | """ | |
342 | ||
343 | def getClientIP(self): | |
344 | """ | |
345 | @return: The client address (the first address) in the value of the | |
346 | I{X-Forwarded-For header}. If the header is not present, return | |
347 | C{b"-"}. | |
348 | """ | |
349 | return ( | |
350 | self.requestHeaders.getRawHeaders(b"x-forwarded-for", [b"-"])[0] | |
351 | .split(b",")[0] | |
352 | .strip() | |
353 | .decode("ascii") | |
353 | ||
354 | # the client IP and ssl flag, as extracted from the headers. | |
355 | _forwarded_for = None # type: Optional[_XForwardedForAddress] | |
356 | _forwarded_https = False # type: bool | |
357 | ||
358 | def requestReceived(self, command, path, version): | |
359 | # this method is called by the Channel once the full request has been | |
360 | # received, to dispatch the request to a resource. | |
361 | # We can use it to set the IP address and protocol according to the | |
362 | # headers. | |
363 | self._process_forwarded_headers() | |
364 | return super().requestReceived(command, path, version) | |
365 | ||
366 | def _process_forwarded_headers(self): | |
367 | headers = self.requestHeaders.getRawHeaders(b"x-forwarded-for") | |
368 | if not headers: | |
369 | return | |
370 | ||
371 | # for now, we just use the first x-forwarded-for header. Really, we ought | |
372 | # to start from the client IP address, and check whether it is trusted; if it | |
373 | # is, work backwards through the headers until we find an untrusted address. | |
374 | # see https://github.com/matrix-org/synapse/issues/9471 | |
375 | self._forwarded_for = _XForwardedForAddress( | |
376 | headers[0].split(b",")[0].strip().decode("ascii") | |
354 | 377 | ) |
378 | ||
379 | # if we got an x-forwarded-for header, also look for an x-forwarded-proto header | |
380 | header = self.getHeader(b"x-forwarded-proto") | |
381 | if header is not None: | |
382 | self._forwarded_https = header.lower() == b"https" | |
383 | else: | |
384 | # this is done largely for backwards-compatibility so that people that | |
385 | # haven't set an x-forwarded-proto header don't get a redirect loop. | |
386 | logger.warning( | |
387 | "forwarded request lacks an x-forwarded-proto header: assuming https" | |
388 | ) | |
389 | self._forwarded_https = True | |
390 | ||
391 | def isSecure(self): | |
392 | if self._forwarded_https: | |
393 | return True | |
394 | return super().isSecure() | |
395 | ||
396 | def getClientIP(self) -> str: | |
397 | """ | |
398 | Return the IP address of the client who submitted this request. | |
399 | ||
400 | This method is deprecated. Use getClientAddress() instead. | |
401 | """ | |
402 | if self._forwarded_for is not None: | |
403 | return self._forwarded_for.host | |
404 | return super().getClientIP() | |
405 | ||
406 | def getClientAddress(self) -> IAddress: | |
407 | """ | |
408 | Return the address of the client who submitted this request. | |
409 | """ | |
410 | if self._forwarded_for is not None: | |
411 | return self._forwarded_for | |
412 | return super().getClientAddress() | |
413 | ||
414 | ||
415 | @implementer(IAddress) | |
416 | @attr.s(frozen=True, slots=True) | |
417 | class _XForwardedForAddress: | |
418 | host = attr.ib(type=str) | |
355 | 419 | |
356 | 420 | |
357 | 421 | class SynapseSite(Site): |
376 | 440 | |
377 | 441 | assert config.http_options is not None |
378 | 442 | proxied = config.http_options.x_forwarded |
379 | self.requestFactory = XForwardedForRequest if proxied else SynapseRequest | |
443 | self.requestFactory = ( | |
444 | XForwardedForRequest if proxied else SynapseRequest | |
445 | ) # type: Type[Request] | |
380 | 446 | self.access_logger = logging.getLogger(logger_name) |
381 | 447 | self.server_version_string = server_version_string.encode("ascii") |
382 | 448 |
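A minimal sketch of the forwarded-header handling above: take the first (client-most) entry of `X-Forwarded-For`, and trust `X-Forwarded-Proto` for the scheme (the function name is illustrative):

    def parse_forwarded(xff: bytes, xfp: bytes):
        """Return (client_ip, is_https) from the two proxy headers."""
        client_ip = xff.split(b",")[0].strip().decode("ascii")
        return client_ip, xfp.lower() == b"https"

    # parse_forwarded(b"203.0.113.9, 10.0.0.1", b"http") == ("203.0.113.9", False)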
31 | 31 | TCP4ClientEndpoint, |
32 | 32 | TCP6ClientEndpoint, |
33 | 33 | ) |
34 | from twisted.internet.interfaces import IPushProducer, ITransport | |
34 | from twisted.internet.interfaces import IPushProducer, IStreamClientEndpoint, ITransport | |
35 | 35 | from twisted.internet.protocol import Factory, Protocol |
36 | 36 | from twisted.python.failure import Failure |
37 | 37 | |
120 | 120 | try: |
121 | 121 | ip = ip_address(self.host) |
122 | 122 | if isinstance(ip, IPv4Address): |
123 | endpoint = TCP4ClientEndpoint(_reactor, self.host, self.port) | |
123 | endpoint = TCP4ClientEndpoint( | |
124 | _reactor, self.host, self.port | |
125 | ) # type: IStreamClientEndpoint | |
124 | 126 | elif isinstance(ip, IPv6Address): |
125 | 127 | endpoint = TCP6ClientEndpoint(_reactor, self.host, self.port) |
126 | 128 | else: |
526 | 526 | REGISTRY.register(ReactorLastSeenMetric()) |
527 | 527 | |
528 | 528 | |
529 | def runUntilCurrentTimer(func): | |
529 | def runUntilCurrentTimer(reactor, func): | |
530 | 530 | @functools.wraps(func) |
531 | 531 | def f(*args, **kwargs): |
532 | 532 | now = reactor.seconds() |
589 | 589 | |
590 | 590 | try: |
591 | 591 | # Ensure the reactor has all the attributes we expect |
592 | reactor.runUntilCurrent | |
593 | reactor._newTimedCalls | |
594 | reactor.threadCallQueue | |
592 | reactor.seconds # type: ignore | |
593 | reactor.runUntilCurrent # type: ignore | |
594 | reactor._newTimedCalls # type: ignore | |
595 | reactor.threadCallQueue # type: ignore | |
595 | 596 | |
596 | 597 | # runUntilCurrent is called when we have pending calls. It is called once |
597 | 598 | # per iteration after fd polling.
598 | reactor.runUntilCurrent = runUntilCurrentTimer(reactor.runUntilCurrent) | |
599 | reactor.runUntilCurrent = runUntilCurrentTimer(reactor, reactor.runUntilCurrent) # type: ignore | |
599 | 600 | |
600 | 601 | # We manually run the GC each reactor tick so that we can get some metrics |
601 | 602 | # about time spent doing GC, |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | 15 | import logging |
16 | from typing import TYPE_CHECKING, Iterable, Optional, Tuple | |
16 | from typing import TYPE_CHECKING, Any, Generator, Iterable, Optional, Tuple | |
17 | 17 | |
18 | 18 | from twisted.internet import defer |
19 | 19 | |
306 | 306 | @defer.inlineCallbacks |
307 | 307 | def get_state_events_in_room( |
308 | 308 | self, room_id: str, types: Iterable[Tuple[str, Optional[str]]] |
309 | ) -> defer.Deferred: | |
309 | ) -> Generator[defer.Deferred, Any, defer.Deferred]: | |
310 | 310 | """Gets current state events for the given room. |
311 | 311 | |
312 | 312 | (This is exposed for compatibility with the old SpamCheckerApi. We should |
14 | 14 | # limitations under the License. |
15 | 15 | import logging |
16 | 16 | import urllib.parse |
17 | from typing import TYPE_CHECKING, Any, Dict, Iterable, Union | |
17 | from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Union | |
18 | 18 | |
19 | 19 | from prometheus_client import Counter |
20 | 20 | |
21 | 21 | from twisted.internet.error import AlreadyCalled, AlreadyCancelled |
22 | from twisted.internet.interfaces import IDelayedCall | |
22 | 23 | |
23 | 24 | from synapse.api.constants import EventTypes |
24 | 25 | from synapse.events import EventBase |
70 | 71 | self.data = pusher_config.data |
71 | 72 | self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC |
72 | 73 | self.failing_since = pusher_config.failing_since |
73 | self.timed_call = None | |
74 | self.timed_call = None # type: Optional[IDelayedCall] | |
74 | 75 | self._is_processing = False |
75 | 76 | self._group_unread_count_by_room = hs.config.push_group_unread_count_by_room |
77 | self._pusherpool = hs.get_pusherpool() | |
76 | 78 | |
77 | 79 | self.data = pusher_config.data |
78 | 80 | if self.data is None: |
298 | 300 | ) |
299 | 301 | else: |
300 | 302 | logger.info("Pushkey %s was rejected: removing", pk) |
301 | await self.hs.remove_pusher(self.app_id, pk, self.user_id) | |
303 | await self._pusherpool.remove_pusher(self.app_id, pk, self.user_id) | |
302 | 304 | return True |
303 | 305 | |
304 | 306 | async def _build_notification_dict( |
18 | 18 | |
19 | 19 | from prometheus_client import Gauge |
20 | 20 | |
21 | from synapse.api.errors import Codes, SynapseError | |
21 | 22 | from synapse.metrics.background_process_metrics import ( |
22 | 23 | run_as_background_process, |
23 | 24 | wrap_as_background_process, |
24 | 25 | ) |
25 | 26 | from synapse.push import Pusher, PusherConfig, PusherConfigException |
26 | 27 | from synapse.push.pusher import PusherFactory |
28 | from synapse.replication.http.push import ReplicationRemovePusherRestServlet | |
27 | 29 | from synapse.types import JsonDict, RoomStreamToken |
28 | 30 | from synapse.util.async_helpers import concurrently_execute |
29 | 31 | |
57 | 59 | def __init__(self, hs: "HomeServer"): |
58 | 60 | self.hs = hs |
59 | 61 | self.pusher_factory = PusherFactory(hs) |
60 | self._should_start_pushers = hs.config.start_pushers | |
61 | 62 | self.store = self.hs.get_datastore() |
62 | 63 | self.clock = self.hs.get_clock() |
63 | 64 | |
66 | 67 | # We shard the handling of push notifications by user ID. |
67 | 68 | self._pusher_shard_config = hs.config.push.pusher_shard_config |
68 | 69 | self._instance_name = hs.get_instance_name() |
70 | self._should_start_pushers = ( | |
71 | self._instance_name in self._pusher_shard_config.instances | |
72 | ) | |
73 | ||
74 | # We can only delete pushers on master. | |
75 | self._remove_pusher_client = None | |
76 | if hs.config.worker.worker_app: | |
77 | self._remove_pusher_client = ReplicationRemovePusherRestServlet.make_client( | |
78 | hs | |
79 | ) | |
69 | 80 | |
70 | 81 | # Record the last stream ID that we were poked about so we can get |
71 | 82 | # changes since then. We set this to the current max stream ID on |
101 | 112 | Returns: |
102 | 113 | The newly created pusher. |
103 | 114 | """ |
115 | ||
116 | if kind == "email": | |
117 | email_owner = await self.store.get_user_id_by_threepid("email", pushkey) | |
118 | if email_owner != user_id: | |
119 | raise SynapseError(400, "Email not found", Codes.THREEPID_NOT_FOUND) | |
104 | 120 | |
105 | 121 | time_now_msec = self.clock.time_msec() |
106 | 122 | |
174 | 190 | user_id: user to remove pushers for |
175 | 191 | access_tokens: access token *ids* to remove pushers for |
176 | 192 | """ |
177 | if not self._pusher_shard_config.should_handle(self._instance_name, user_id): | |
178 | return | |
179 | ||
180 | 193 | tokens = set(access_tokens) |
181 | 194 | for p in await self.store.get_pushers_by_user_id(user_id): |
182 | 195 | if p.access_token in tokens: |
379 | 392 | |
380 | 393 | synapse_pushers.labels(type(pusher).__name__, pusher.app_id).dec() |
381 | 394 | |
382 | await self.store.delete_pusher_by_app_id_pushkey_user_id( | |
383 | app_id, pushkey, user_id | |
384 | ) | |
395 | # We can only delete pushers on master. | |
396 | if self._remove_pusher_client: | |
397 | await self._remove_pusher_client( | |
398 | app_id=app_id, pushkey=pushkey, user_id=user_id | |
399 | ) | |
400 | else: | |
401 | await self.store.delete_pusher_by_app_id_pushkey_user_id( | |
402 | app_id, pushkey, user_id | |
403 | ) |
105 | 105 | "pysaml2>=4.5.0;python_version>='3.6'", |
106 | 106 | ], |
107 | 107 | "oidc": ["authlib>=0.14.0"], |
108 | # systemd-python is necessary for logging to the systemd journal via | |
109 | # `systemd.journal.JournalHandler`, as is documented in | |
110 | # `contrib/systemd/log_config.yaml`. | |
108 | 111 | "systemd": ["systemd-python>=231"], |
109 | 112 | "url_preview": ["lxml>=3.5.0"], |
110 | 113 | "sentry": ["sentry-sdk>=0.7.2"], |
20 | 20 | login, |
21 | 21 | membership, |
22 | 22 | presence, |
23 | push, | |
23 | 24 | register, |
24 | 25 | send_event, |
25 | 26 | streams, |
41 | 42 | membership.register_servlets(hs, self) |
42 | 43 | streams.register_servlets(hs, self) |
43 | 44 | account_data.register_servlets(hs, self) |
45 | push.register_servlets(hs, self) | |
44 | 46 | |
45 | 47 | # The following can't currently be instantiated on workers. |
46 | 48 | if hs.config.worker.worker_app is None: |
212 | 212 | content = parse_json_object_from_request(request) |
213 | 213 | |
214 | 214 | args = content["args"] |
215 | ||
216 | logger.info("Got %r query", query_type) | |
215 | args["origin"] = content["origin"] | |
216 | ||
217 | logger.info("Got %r query from %s", query_type, args["origin"]) | |
217 | 218 | |
218 | 219 | result = await self.registry.on_query(query_type, args) |
219 | 220 |
14 | 14 | import logging |
15 | 15 | from typing import TYPE_CHECKING, List, Optional, Tuple |
16 | 16 | |
17 | from twisted.web.http import Request | |
17 | from twisted.web.server import Request | |
18 | 18 | |
19 | 19 | from synapse.http.servlet import parse_json_object_from_request |
20 | from synapse.http.site import SynapseRequest | |
20 | 21 | from synapse.replication.http._base import ReplicationEndpoint |
21 | 22 | from synapse.types import JsonDict, Requester, UserID |
22 | 23 | from synapse.util.distributor import user_left_room |
77 | 78 | } |
78 | 79 | |
79 | 80 | async def _handle_request( # type: ignore |
80 | self, request: Request, room_id: str, user_id: str | |
81 | self, request: SynapseRequest, room_id: str, user_id: str | |
81 | 82 | ) -> Tuple[int, JsonDict]: |
82 | 83 | content = parse_json_object_from_request(request) |
83 | 84 | |
85 | 86 | event_content = content["content"] |
86 | 87 | |
87 | 88 | requester = Requester.deserialize(self.store, content["requester"]) |
88 | ||
89 | 89 | request.requester = requester |
90 | 90 | |
91 | 91 | logger.info("remote_join: %s into room: %s", user_id, room_id) |
146 | 146 | } |
147 | 147 | |
148 | 148 | async def _handle_request( # type: ignore |
149 | self, request: Request, invite_event_id: str | |
149 | self, request: SynapseRequest, invite_event_id: str | |
150 | 150 | ) -> Tuple[int, JsonDict]: |
151 | 151 | content = parse_json_object_from_request(request) |
152 | 152 | |
154 | 154 | event_content = content["content"] |
155 | 155 | |
156 | 156 | requester = Requester.deserialize(self.store, content["requester"]) |
157 | ||
158 | 157 | request.requester = requester |
159 | 158 | |
160 | 159 | # hopefully we're now on the master, so this won't recurse! |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import logging | |
16 | from typing import TYPE_CHECKING | |
17 | ||
18 | from synapse.http.servlet import parse_json_object_from_request | |
19 | from synapse.replication.http._base import ReplicationEndpoint | |
20 | ||
21 | if TYPE_CHECKING: | |
22 | from synapse.server import HomeServer | |
23 | ||
24 | logger = logging.getLogger(__name__) | |
25 | ||
26 | ||
27 | class ReplicationRemovePusherRestServlet(ReplicationEndpoint): | |
28 | """Deletes the given pusher. | |
29 | ||
30 | Request format: | |
31 | ||
32 | POST /_synapse/replication/remove_pusher/:user_id | |
33 | ||
34 | { | |
35 | "app_id": "<some_id>", | |
36 | "pushkey": "<some_key>" | |
37 | } | |
38 | ||
39 | """ | |
40 | ||
41 | NAME = "add_user_account_data" | |
42 | PATH_ARGS = ("user_id",) | |
43 | CACHE = False | |
44 | ||
45 | def __init__(self, hs: "HomeServer"): | |
46 | super().__init__(hs) | |
47 | ||
48 | self.pusher_pool = hs.get_pusherpool() | |
49 | ||
50 | @staticmethod | |
51 | async def _serialize_payload(app_id, pushkey, user_id): | |
52 | payload = { | |
53 | "app_id": app_id, | |
54 | "pushkey": pushkey, | |
55 | } | |
56 | ||
57 | return payload | |
58 | ||
59 | async def _handle_request(self, request, user_id): | |
60 | content = parse_json_object_from_request(request) | |
61 | ||
62 | app_id = content["app_id"] | |
63 | pushkey = content["pushkey"] | |
64 | ||
65 | await self.pusher_pool.remove_pusher(app_id, pushkey, user_id) | |
66 | ||
67 | return 200, {} | |
68 | ||
69 | ||
70 | def register_servlets(hs, http_server): | |
71 | ReplicationRemovePusherRestServlet(hs).register(http_server) |
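A sketch of how a worker would call this endpoint (`ReplicationEndpoint.make_client` builds the HTTP client from `NAME` and `PATH_ARGS`, which is why `NAME` must match the `/remove_pusher` path documented above; the argument values are illustrative):

    remove_pusher_client = ReplicationRemovePusherRestServlet.make_client(hs)

    # POSTs to /_synapse/replication/remove_pusher/%40alice%3Aexample.org
    await remove_pusher_client(
        app_id="m.http", pushkey="a_push_key", user_id="@alice:example.org"
    )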
107 | 107 | |
108 | 108 | # Map from stream to list of deferreds waiting for the stream to |
109 | 109 | # arrive at a particular position. The lists are sorted by stream position. |
110 | self._streams_to_waiters = ( | |
111 | {} | |
112 | ) # type: Dict[str, List[Tuple[int, Deferred[None]]]] | |
110 | self._streams_to_waiters = {} # type: Dict[str, List[Tuple[int, Deferred]]] | |
113 | 111 | |
114 | 112 | async def on_rdata( |
115 | 113 | self, stream_name: str, instance_name: str, token: int, rows: list |
324 | 324 | return "%s %s" % (self.instance_name, self.token) |
325 | 325 | |
326 | 326 | |
327 | class RemovePusherCommand(Command): | |
328 | """Sent by the client to request the master remove the given pusher. | |
329 | ||
330 | Format:: | |
331 | ||
332 | REMOVE_PUSHER <app_id> <push_key> <user_id> | |
333 | """ | |
334 | ||
335 | NAME = "REMOVE_PUSHER" | |
336 | ||
337 | def __init__(self, app_id, push_key, user_id): | |
338 | self.user_id = user_id | |
339 | self.app_id = app_id | |
340 | self.push_key = push_key | |
341 | ||
342 | @classmethod | |
343 | def from_line(cls, line): | |
344 | app_id, push_key, user_id = line.split(" ", 2) | |
345 | ||
346 | return cls(app_id, push_key, user_id) | |
347 | ||
348 | def to_line(self): | |
349 | return " ".join((self.app_id, self.push_key, self.user_id)) | |
350 | ||
351 | ||
352 | 327 | class UserIpCommand(Command): |
353 | 328 | """Sent periodically when a worker sees activity from a client. |
354 | 329 | |
415 | 390 | ReplicateCommand, |
416 | 391 | UserSyncCommand, |
417 | 392 | FederationAckCommand, |
418 | RemovePusherCommand, | |
419 | 393 | UserIpCommand, |
420 | 394 | RemoteServerUpCommand, |
421 | 395 | ClearUserSyncsCommand, |
442 | 416 | UserSyncCommand.NAME, |
443 | 417 | ClearUserSyncsCommand.NAME, |
444 | 418 | FederationAckCommand.NAME, |
445 | RemovePusherCommand.NAME, | |
446 | 419 | UserIpCommand.NAME, |
447 | 420 | ErrorCommand.NAME, |
448 | 421 | RemoteServerUpCommand.NAME, |
43 | 43 | PositionCommand, |
44 | 44 | RdataCommand, |
45 | 45 | RemoteServerUpCommand, |
46 | RemovePusherCommand, | |
47 | 46 | ReplicateCommand, |
48 | 47 | UserIpCommand, |
49 | 48 | UserSyncCommand, |
372 | 371 | if self._federation_sender: |
373 | 372 | self._federation_sender.federation_ack(cmd.instance_name, cmd.token) |
374 | 373 | |
375 | def on_REMOVE_PUSHER( | |
376 | self, conn: AbstractConnection, cmd: RemovePusherCommand | |
377 | ) -> Optional[Awaitable[None]]: | |
378 | remove_pusher_counter.inc() | |
379 | ||
380 | if self._is_master: | |
381 | return self._handle_remove_pusher(cmd) | |
382 | else: | |
383 | return None | |
384 | ||
385 | async def _handle_remove_pusher(self, cmd: RemovePusherCommand): | |
386 | await self._store.delete_pusher_by_app_id_pushkey_user_id( | |
387 | app_id=cmd.app_id, pushkey=cmd.push_key, user_id=cmd.user_id | |
388 | ) | |
389 | ||
390 | self._notifier.on_new_replication_data() | |
391 | ||
392 | 374 | def on_USER_IP( |
393 | 375 | self, conn: AbstractConnection, cmd: UserIpCommand |
394 | 376 | ) -> Optional[Awaitable[None]]: |
683 | 665 | UserSyncCommand(instance_id, user_id, is_syncing, last_sync_ms) |
684 | 666 | ) |
685 | 667 | |
686 | def send_remove_pusher(self, app_id: str, push_key: str, user_id: str): | |
687 | """Poke the master to remove a pusher for a user""" | |
688 | cmd = RemovePusherCommand(app_id, push_key, user_id) | |
689 | self.send_command(cmd) | |
690 | ||
691 | 668 | def send_user_ip( |
692 | 669 | self, |
693 | 670 | user_id: str, |
501 | 501 | """Global or per room account data was changed""" |
502 | 502 | |
503 | 503 | AccountDataStreamRow = namedtuple( |
504 | "AccountDataStream", | |
504 | "AccountDataStreamRow", | |
505 | 505 | ("user_id", "room_id", "data_type"), # str # Optional[str] # str |
506 | 506 | ) |
507 | 507 |
144 | 144 | <input type="submit" value="Continue" class="primary-button"> |
145 | 145 | {% if user_attributes.avatar_url or user_attributes.display_name or user_attributes.emails %} |
146 | 146 | <section class="idp-pick-details"> |
147 | <h2><img src="{{ idp.idp_icon | mxc_to_http(24, 24) }}"/>Information from {{ idp.idp_name }}</h2> | |
147 | <h2>{% if idp.idp_icon %}<img src="{{ idp.idp_icon | mxc_to_http(24, 24) }}"/>{% endif %}Information from {{ idp.idp_name }}</h2> | |
148 | 148 | {% if user_attributes.avatar_url %} |
149 | 149 | <label class="idp-detail idp-avatar" for="idp-avatar"> |
150 | 150 | <div class="check-row"> |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | from typing import TYPE_CHECKING, Tuple | |
15 | 16 | |
16 | 17 | from synapse.api.errors import NotFoundError, SynapseError |
17 | 18 | from synapse.http.servlet import ( |
19 | 20 | assert_params_in_dict, |
20 | 21 | parse_json_object_from_request, |
21 | 22 | ) |
23 | from synapse.http.site import SynapseRequest | |
22 | 24 | from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin |
23 | from synapse.types import UserID | |
25 | from synapse.types import JsonDict, UserID | |
26 | ||
27 | if TYPE_CHECKING: | |
28 | from synapse.server import HomeServer | |
24 | 29 | |
25 | 30 | logger = logging.getLogger(__name__) |
26 | 31 | |
34 | 39 | "/users/(?P<user_id>[^/]*)/devices/(?P<device_id>[^/]*)$", "v2" |
35 | 40 | ) |
36 | 41 | |
37 | def __init__(self, hs): | |
42 | def __init__(self, hs: "HomeServer"): | |
38 | 43 | super().__init__() |
39 | 44 | self.hs = hs |
40 | 45 | self.auth = hs.get_auth() |
41 | 46 | self.device_handler = hs.get_device_handler() |
42 | 47 | self.store = hs.get_datastore() |
43 | 48 | |
44 | async def on_GET(self, request, user_id, device_id): | |
49 | async def on_GET( | |
50 | self, request: SynapseRequest, user_id, device_id: str | |
51 | ) -> Tuple[int, JsonDict]: | |
45 | 52 | await assert_requester_is_admin(self.auth, request) |
46 | 53 | |
47 | 54 | target_user = UserID.from_string(user_id) |
57 | 64 | ) |
58 | 65 | return 200, device |
59 | 66 | |
60 | async def on_DELETE(self, request, user_id, device_id): | |
67 | async def on_DELETE( | |
68 | self, request: SynapseRequest, user_id: str, device_id: str | |
69 | ) -> Tuple[int, JsonDict]: | |
61 | 70 | await assert_requester_is_admin(self.auth, request) |
62 | 71 | |
63 | 72 | target_user = UserID.from_string(user_id) |
71 | 80 | await self.device_handler.delete_device(target_user.to_string(), device_id) |
72 | 81 | return 200, {} |
73 | 82 | |
74 | async def on_PUT(self, request, user_id, device_id): | |
83 | async def on_PUT( | |
84 | self, request: SynapseRequest, user_id: str, device_id: str | |
85 | ) -> Tuple[int, JsonDict]: | |
75 | 86 | await assert_requester_is_admin(self.auth, request) |
76 | 87 | |
77 | 88 | target_user = UserID.from_string(user_id) |
96 | 107 | |
97 | 108 | PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/devices$", "v2") |
98 | 109 | |
99 | def __init__(self, hs): | |
110 | def __init__(self, hs: "HomeServer"): | |
100 | 111 | """ |
101 | 112 | Args: |
102 | 113 | hs (synapse.server.HomeServer): server |
106 | 117 | self.device_handler = hs.get_device_handler() |
107 | 118 | self.store = hs.get_datastore() |
108 | 119 | |
109 | async def on_GET(self, request, user_id): | |
120 | async def on_GET( | |
121 | self, request: SynapseRequest, user_id: str | |
122 | ) -> Tuple[int, JsonDict]: | |
110 | 123 | await assert_requester_is_admin(self.auth, request) |
111 | 124 | |
112 | 125 | target_user = UserID.from_string(user_id) |
129 | 142 | |
130 | 143 | PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/delete_devices$", "v2") |
131 | 144 | |
132 | def __init__(self, hs): | |
145 | def __init__(self, hs: "HomeServer"): | |
133 | 146 | self.hs = hs |
134 | 147 | self.auth = hs.get_auth() |
135 | 148 | self.device_handler = hs.get_device_handler() |
136 | 149 | self.store = hs.get_datastore() |
137 | 150 | |
138 | async def on_POST(self, request, user_id): | |
151 | async def on_POST( | |
152 | self, request: SynapseRequest, user_id: str | |
153 | ) -> Tuple[int, JsonDict]: | |
139 | 154 | await assert_requester_is_admin(self.auth, request) |
140 | 155 | |
141 | 156 | target_user = UserID.from_string(user_id) |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | import logging |
16 | from typing import TYPE_CHECKING, Tuple | |
16 | 17 | |
17 | 18 | from synapse.api.errors import Codes, NotFoundError, SynapseError |
18 | 19 | from synapse.http.servlet import RestServlet, parse_integer, parse_string |
20 | from synapse.http.site import SynapseRequest | |
19 | 21 | from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin |
22 | from synapse.types import JsonDict | |
23 | ||
24 | if TYPE_CHECKING: | |
25 | from synapse.server import HomeServer | |
20 | 26 | |
21 | 27 | logger = logging.getLogger(__name__) |
22 | 28 | |
44 | 50 | |
45 | 51 | PATTERNS = admin_patterns("/event_reports$") |
46 | 52 | |
47 | def __init__(self, hs): | |
53 | def __init__(self, hs: "HomeServer"): | |
48 | 54 | self.hs = hs |
49 | 55 | self.auth = hs.get_auth() |
50 | 56 | self.store = hs.get_datastore() |
51 | 57 | |
52 | async def on_GET(self, request): | |
58 | async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]: | |
53 | 59 | await assert_requester_is_admin(self.auth, request) |
54 | 60 | |
55 | 61 | start = parse_integer(request, "from", default=0) |
105 | 111 | |
106 | 112 | PATTERNS = admin_patterns("/event_reports/(?P<report_id>[^/]*)$") |
107 | 113 | |
108 | def __init__(self, hs): | |
114 | def __init__(self, hs: "HomeServer"): | |
109 | 115 | self.hs = hs |
110 | 116 | self.auth = hs.get_auth() |
111 | 117 | self.store = hs.get_datastore() |
112 | 118 | |
113 | async def on_GET(self, request, report_id): | |
119 | async def on_GET( | |
120 | self, request: SynapseRequest, report_id: str | |
121 | ) -> Tuple[int, JsonDict]: | |
114 | 122 | await assert_requester_is_admin(self.auth, request) |
115 | 123 | |
116 | 124 | message = ( |
117 | 125 | "The report_id parameter must be a string representing a positive integer." |
118 | 126 | ) |
119 | 127 | try: |
120 | report_id = int(report_id) | |
128 | resolved_report_id = int(report_id) | |
121 | 129 | except ValueError: |
122 | 130 | raise SynapseError(400, message, errcode=Codes.INVALID_PARAM) |
123 | 131 | |
124 | if report_id < 0: | |
132 | if resolved_report_id < 0: | |
125 | 133 | raise SynapseError(400, message, errcode=Codes.INVALID_PARAM) |
126 | 134 | |
127 | ret = await self.store.get_event_report(report_id) | |
135 | ret = await self.store.get_event_report(resolved_report_id) | |
128 | 136 | if not ret: |
129 | 137 | raise NotFoundError("Event report not found") |
130 | 138 |
16 | 16 | import logging |
17 | 17 | from typing import TYPE_CHECKING, Tuple |
18 | 18 | |
19 | from twisted.web.http import Request | |
19 | from twisted.web.server import Request | |
20 | 20 | |
21 | 21 | from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError |
22 | 22 | from synapse.http.servlet import RestServlet, parse_boolean, parse_integer |
41 | 41 | |
42 | 42 | |
43 | 43 | logger = logging.getLogger(__name__) |
44 | ||
45 | ||
46 | class ResolveRoomIdMixin: | |
47 | def __init__(self, hs: "HomeServer"): | |
48 | self.room_member_handler = hs.get_room_member_handler() | |
49 | ||
50 | async def resolve_room_id( | |
51 | self, room_identifier: str, remote_room_hosts: Optional[List[str]] = None | |
52 | ) -> Tuple[str, Optional[List[str]]]: | |
53 | """ | |
54 | Resolve a room identifier to a room ID, if necessary. | |
55 | ||
56 | This also performs checks to ensure the room ID is of the proper form. | |
57 | ||
58 | Args: | |
59 | room_identifier: The room ID or alias. | |
60 | remote_room_hosts: The potential remote room hosts to use. | |
61 | ||
62 | Returns: | |
63 | The resolved room ID. | |
64 | ||
65 | Raises: | |
66 | SynapseError if the room ID is of the wrong form. | |
67 | """ | |
68 | if RoomID.is_valid(room_identifier): | |
69 | resolved_room_id = room_identifier | |
70 | elif RoomAlias.is_valid(room_identifier): | |
71 | room_alias = RoomAlias.from_string(room_identifier) | |
72 | ( | |
73 | room_id, | |
74 | remote_room_hosts, | |
75 | ) = await self.room_member_handler.lookup_room_alias(room_alias) | |
76 | resolved_room_id = room_id.to_string() | |
77 | else: | |
78 | raise SynapseError( | |
79 | 400, "%s was not legal room ID or room alias" % (room_identifier,) | |
80 | ) | |
81 | if not resolved_room_id: | |
82 | raise SynapseError( | |
83 | 400, "Unknown room ID or room alias %s" % room_identifier | |
84 | ) | |
85 | return resolved_room_id, remote_room_hosts | |
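
The admin servlets further down (`JoinRoomAliasServlet`, `MakeRoomAdminRestServlet`, `ForwardExtremitiesRestServlet`) all consume this new mixin the same way. A condensed sketch of the pattern, with a hypothetical servlet name and payload:

```python
# Sketch only: how a servlet built on the new ResolveRoomIdMixin resolves a
# room identifier. ExampleServlet is hypothetical; the calls mirror the
# real servlets below.
class ExampleServlet(ResolveRoomIdMixin, RestServlet):
    def __init__(self, hs: "HomeServer"):
        super().__init__(hs)  # wires up self.room_member_handler
        self.auth = hs.get_auth()

    async def on_GET(self, request: SynapseRequest, room_identifier: str):
        # Accepts "!roomid:server" as-is, resolves "#alias:server" via the
        # room member handler, and raises SynapseError(400) otherwise.
        room_id, remote_room_hosts = await self.resolve_room_id(room_identifier)
        return 200, {"room_id": room_id}
```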
44 | 86 | |
45 | 87 | |
46 | 88 | class ShutdownRoomRestServlet(RestServlet): |
333 | 375 | return 200, ret |
334 | 376 | |
335 | 377 | |
336 | class JoinRoomAliasServlet(RestServlet): | |
378 | class JoinRoomAliasServlet(ResolveRoomIdMixin, RestServlet): | |
337 | 379 | |
338 | 380 | PATTERNS = admin_patterns("/join/(?P<room_identifier>[^/]*)") |
339 | 381 | |
340 | 382 | def __init__(self, hs: "HomeServer"): |
341 | self.hs = hs | |
342 | self.auth = hs.get_auth() | |
343 | self.room_member_handler = hs.get_room_member_handler() | |
383 | super().__init__(hs) | |
384 | self.hs = hs | |
385 | self.auth = hs.get_auth() | |
344 | 386 | self.admin_handler = hs.get_admin_handler() |
345 | 387 | self.state_handler = hs.get_state_handler() |
346 | 388 | |
361 | 403 | if not await self.admin_handler.get_user(target_user): |
362 | 404 | raise NotFoundError("User not found") |
363 | 405 | |
364 | if RoomID.is_valid(room_identifier): | |
365 | room_id = room_identifier | |
366 | try: | |
367 | remote_room_hosts = [ | |
368 | x.decode("ascii") for x in request.args[b"server_name"] | |
369 | ] # type: Optional[List[str]] | |
370 | except Exception: | |
371 | remote_room_hosts = None | |
372 | elif RoomAlias.is_valid(room_identifier): | |
373 | handler = self.room_member_handler | |
374 | room_alias = RoomAlias.from_string(room_identifier) | |
375 | room_id, remote_room_hosts = await handler.lookup_room_alias(room_alias) | |
376 | else: | |
377 | raise SynapseError( | |
378 | 400, "%s was not legal room ID or room alias" % (room_identifier,) | |
379 | ) | |
406 | # Get the room ID from the identifier. | |
407 | try: | |
408 | remote_room_hosts = [ | |
409 | x.decode("ascii") for x in request.args[b"server_name"] | |
410 | ] # type: Optional[List[str]] | |
411 | except Exception: | |
412 | remote_room_hosts = None | |
413 | room_id, remote_room_hosts = await self.resolve_room_id( | |
414 | room_identifier, remote_room_hosts | |
415 | ) | |
380 | 416 | |
381 | 417 | fake_requester = create_requester( |
382 | 418 | target_user, authenticated_entity=requester.authenticated_entity |
411 | 447 | return 200, {"room_id": room_id} |
412 | 448 | |
413 | 449 | |
414 | class MakeRoomAdminRestServlet(RestServlet): | |
450 | class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet): | |
415 | 451 | """Allows a server admin to get power in a room if a local user has power in |
416 | 452 | a room. Will also invite the user if they're not in the room and it's a |
417 | 453 | private room. Can specify another user (rather than the admin user) to be |
426 | 462 | PATTERNS = admin_patterns("/rooms/(?P<room_identifier>[^/]*)/make_room_admin") |
427 | 463 | |
428 | 464 | def __init__(self, hs: "HomeServer"): |
429 | self.hs = hs | |
430 | self.auth = hs.get_auth() | |
431 | self.room_member_handler = hs.get_room_member_handler() | |
465 | super().__init__(hs) | |
466 | self.hs = hs | |
467 | self.auth = hs.get_auth() | |
432 | 468 | self.event_creation_handler = hs.get_event_creation_handler() |
433 | 469 | self.state_handler = hs.get_state_handler() |
434 | 470 | self.is_mine_id = hs.is_mine_id |
435 | 471 | |
436 | async def on_POST(self, request, room_identifier): | |
472 | async def on_POST( | |
473 | self, request: SynapseRequest, room_identifier: str | |
474 | ) -> Tuple[int, JsonDict]: | |
437 | 475 | requester = await self.auth.get_user_by_req(request) |
438 | 476 | await assert_user_is_admin(self.auth, requester.user) |
439 | 477 | content = parse_json_object_from_request(request, allow_empty_body=True) |
440 | 478 | |
441 | # Resolve to a room ID, if necessary. | |
442 | if RoomID.is_valid(room_identifier): | |
443 | room_id = room_identifier | |
444 | elif RoomAlias.is_valid(room_identifier): | |
445 | room_alias = RoomAlias.from_string(room_identifier) | |
446 | room_id, _ = await self.room_member_handler.lookup_room_alias(room_alias) | |
447 | room_id = room_id.to_string() | |
448 | else: | |
449 | raise SynapseError( | |
450 | 400, "%s was not legal room ID or room alias" % (room_identifier,) | |
451 | ) | |
479 | room_id, _ = await self.resolve_room_id(room_identifier) | |
452 | 480 | |
453 | 481 | # Which user to grant room admin rights to. |
454 | 482 | user_to_add = content.get("user_id", requester.user.to_string()) |
555 | 583 | return 200, {} |
556 | 584 | |
557 | 585 | |
558 | class ForwardExtremitiesRestServlet(RestServlet): | |
586 | class ForwardExtremitiesRestServlet(ResolveRoomIdMixin, RestServlet): | |
559 | 587 | """Allows a server admin to get or clear forward extremities. |
560 | 588 | |
561 | 589 | Clearing does not require restarting the server. |
570 | 598 | PATTERNS = admin_patterns("/rooms/(?P<room_identifier>[^/]*)/forward_extremities") |
571 | 599 | |
572 | 600 | def __init__(self, hs: "HomeServer"): |
573 | self.hs = hs | |
574 | self.auth = hs.get_auth() | |
575 | self.room_member_handler = hs.get_room_member_handler() | |
601 | super().__init__(hs) | |
602 | self.hs = hs | |
603 | self.auth = hs.get_auth() | |
576 | 604 | self.store = hs.get_datastore() |
577 | 605 | |
578 | async def resolve_room_id(self, room_identifier: str) -> str: | |
579 | """Resolve to a room ID, if necessary.""" | |
580 | if RoomID.is_valid(room_identifier): | |
581 | resolved_room_id = room_identifier | |
582 | elif RoomAlias.is_valid(room_identifier): | |
583 | room_alias = RoomAlias.from_string(room_identifier) | |
584 | room_id, _ = await self.room_member_handler.lookup_room_alias(room_alias) | |
585 | resolved_room_id = room_id.to_string() | |
586 | else: | |
587 | raise SynapseError( | |
588 | 400, "%s was not legal room ID or room alias" % (room_identifier,) | |
589 | ) | |
590 | if not resolved_room_id: | |
591 | raise SynapseError( | |
592 | 400, "Unknown room ID or room alias %s" % room_identifier | |
593 | ) | |
594 | return resolved_room_id | |
595 | ||
596 | async def on_DELETE(self, request, room_identifier): | |
597 | requester = await self.auth.get_user_by_req(request) | |
598 | await assert_user_is_admin(self.auth, requester.user) | |
599 | ||
600 | room_id = await self.resolve_room_id(room_identifier) | |
606 | async def on_DELETE( | |
607 | self, request: SynapseRequest, room_identifier: str | |
608 | ) -> Tuple[int, JsonDict]: | |
609 | requester = await self.auth.get_user_by_req(request) | |
610 | await assert_user_is_admin(self.auth, requester.user) | |
611 | ||
612 | room_id, _ = await self.resolve_room_id(room_identifier) | |
601 | 613 | |
602 | 614 | deleted_count = await self.store.delete_forward_extremities_for_room(room_id) |
603 | 615 | return 200, {"deleted": deleted_count} |
604 | 616 | |
605 | async def on_GET(self, request, room_identifier): | |
606 | requester = await self.auth.get_user_by_req(request) | |
607 | await assert_user_is_admin(self.auth, requester.user) | |
608 | ||
609 | room_id = await self.resolve_room_id(room_identifier) | |
617 | async def on_GET( | |
618 | self, request: SynapseRequest, room_identifier: str | |
619 | ) -> Tuple[int, JsonDict]: | |
620 | requester = await self.auth.get_user_by_req(request) | |
621 | await assert_user_is_admin(self.auth, requester.user) | |
622 | ||
623 | room_id, _ = await self.resolve_room_id(room_identifier) | |
610 | 624 | |
611 | 625 | extremities = await self.store.get_forward_extremities_for_room(room_id) |
612 | 626 | return 200, {"count": len(extremities), "results": extremities} |
622 | 636 | |
623 | 637 | PATTERNS = admin_patterns("/rooms/(?P<room_id>[^/]*)/context/(?P<event_id>[^/]*)$") |
624 | 638 | |
625 | def __init__(self, hs): | |
639 | def __init__(self, hs: "HomeServer"): | |
626 | 640 | super().__init__() |
627 | 641 | self.clock = hs.get_clock() |
628 | 642 | self.room_context_handler = hs.get_room_context_handler() |
629 | 643 | self._event_serializer = hs.get_event_client_serializer() |
630 | 644 | self.auth = hs.get_auth() |
631 | 645 | |
632 | async def on_GET(self, request, room_id, event_id): | |
646 | async def on_GET( | |
647 | self, request: SynapseRequest, room_id: str, event_id: str | |
648 | ) -> Tuple[int, JsonDict]: | |
633 | 649 | requester = await self.auth.get_user_by_req(request, allow_guest=False) |
634 | 650 | await assert_user_is_admin(self.auth, requester.user) |
635 | 651 |
15 | 15 | import hmac |
16 | 16 | import logging |
17 | 17 | from http import HTTPStatus |
18 | from typing import TYPE_CHECKING, Tuple | |
18 | from typing import TYPE_CHECKING, Dict, List, Optional, Tuple | |
19 | 19 | |
20 | 20 | from synapse.api.constants import UserTypes |
21 | 21 | from synapse.api.errors import Codes, NotFoundError, SynapseError |
34 | 34 | assert_user_is_admin, |
35 | 35 | ) |
36 | 36 | from synapse.rest.client.v2_alpha._base import client_patterns |
37 | from synapse.storage.databases.main.media_repository import MediaSortOrder | |
37 | 38 | from synapse.types import JsonDict, UserID |
38 | 39 | |
39 | 40 | if TYPE_CHECKING: |
45 | 46 | class UsersRestServlet(RestServlet): |
46 | 47 | PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)$") |
47 | 48 | |
48 | def __init__(self, hs): | |
49 | def __init__(self, hs: "HomeServer"): | |
49 | 50 | self.hs = hs |
50 | 51 | self.store = hs.get_datastore() |
51 | 52 | self.auth = hs.get_auth() |
52 | 53 | self.admin_handler = hs.get_admin_handler() |
53 | 54 | |
54 | async def on_GET(self, request, user_id): | |
55 | async def on_GET( | |
56 | self, request: SynapseRequest, user_id: str | |
57 | ) -> Tuple[int, List[JsonDict]]: | |
55 | 58 | target_user = UserID.from_string(user_id) |
56 | 59 | await assert_requester_is_admin(self.auth, request) |
57 | 60 | |
151 | 154 | otherwise an error. |
152 | 155 | """ |
153 | 156 | |
154 | def __init__(self, hs): | |
157 | def __init__(self, hs: "HomeServer"): | |
155 | 158 | self.hs = hs |
156 | 159 | self.auth = hs.get_auth() |
157 | 160 | self.admin_handler = hs.get_admin_handler() |
163 | 166 | self.registration_handler = hs.get_registration_handler() |
164 | 167 | self.pusher_pool = hs.get_pusherpool() |
165 | 168 | |
166 | async def on_GET(self, request, user_id): | |
169 | async def on_GET( | |
170 | self, request: SynapseRequest, user_id: str | |
171 | ) -> Tuple[int, JsonDict]: | |
167 | 172 | await assert_requester_is_admin(self.auth, request) |
168 | 173 | |
169 | 174 | target_user = UserID.from_string(user_id) |
177 | 182 | |
178 | 183 | return 200, ret |
179 | 184 | |
180 | async def on_PUT(self, request, user_id): | |
185 | async def on_PUT( | |
186 | self, request: SynapseRequest, user_id: str | |
187 | ) -> Tuple[int, JsonDict]: | |
181 | 188 | requester = await self.auth.get_user_by_req(request) |
182 | 189 | await assert_user_is_admin(self.auth, requester.user) |
183 | 190 | |
271 | 278 | ) |
272 | 279 | |
273 | 280 | user = await self.admin_handler.get_user(target_user) |
281 | assert user is not None | |
282 | ||
274 | 283 | return 200, user |
275 | 284 | |
276 | 285 | else: # create user |
328 | 337 | target_user, requester, body["avatar_url"], True |
329 | 338 | ) |
330 | 339 | |
331 | ret = await self.admin_handler.get_user(target_user) | |
332 | ||
333 | return 201, ret | |
340 | user = await self.admin_handler.get_user(target_user) | |
341 | assert user is not None | |
342 | ||
343 | return 201, user | |
334 | 344 | |
335 | 345 | |
336 | 346 | class UserRegisterServlet(RestServlet): |
344 | 354 | PATTERNS = admin_patterns("/register") |
345 | 355 | NONCE_TIMEOUT = 60 |
346 | 356 | |
347 | def __init__(self, hs): | |
357 | def __init__(self, hs: "HomeServer"): | |
348 | 358 | self.auth_handler = hs.get_auth_handler() |
349 | 359 | self.reactor = hs.get_reactor() |
350 | self.nonces = {} | |
360 | self.nonces = {} # type: Dict[str, int] | |
351 | 361 | self.hs = hs |
352 | 362 | |
353 | 363 | def _clear_old_nonces(self): |
360 | 370 | if now - v > self.NONCE_TIMEOUT: |
361 | 371 | del self.nonces[k] |
362 | 372 | |
363 | def on_GET(self, request): | |
373 | def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]: | |
364 | 374 | """ |
365 | 375 | Generate a new nonce. |
366 | 376 | """ |
370 | 380 | self.nonces[nonce] = int(self.reactor.seconds()) |
371 | 381 | return 200, {"nonce": nonce} |
372 | 382 | |
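
For reference, the handshake this servlet implements is: `GET` a nonce (valid for `NONCE_TIMEOUT` seconds), then `POST` the registration parameters together with an HMAC-SHA1 over them. A client-side sketch of the MAC construction, following the documented shared-secret registration scheme (treat the exact layout as illustrative; all values are placeholders):

```python
# Illustrative only: building the `mac` field for the shared-secret
# register flow. Fields are NUL-separated and keyed on the homeserver's
# registration_shared_secret.
import hmac
from hashlib import sha1

def registration_mac(
    shared_secret: str, nonce: str, user: str, password: str, admin: bool
) -> str:
    payload = b"\x00".join(
        [
            nonce.encode("utf8"),
            user.encode("utf8"),
            password.encode("utf8"),
            b"admin" if admin else b"notadmin",
        ]
    )
    return hmac.new(shared_secret.encode("utf8"), payload, sha1).hexdigest()
```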
373 | async def on_POST(self, request): | |
383 | async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]: | |
374 | 384 | self._clear_old_nonces() |
375 | 385 | |
376 | 386 | if not self.hs.config.registration_shared_secret: |
476 | 486 | client_patterns("/admin" + path_regex, v1=True) |
477 | 487 | ) |
478 | 488 | |
479 | def __init__(self, hs): | |
489 | def __init__(self, hs: "HomeServer"): | |
480 | 490 | self.hs = hs |
481 | 491 | self.auth = hs.get_auth() |
482 | 492 | self.admin_handler = hs.get_admin_handler() |
483 | 493 | |
484 | async def on_GET(self, request, user_id): | |
494 | async def on_GET( | |
495 | self, request: SynapseRequest, user_id: str | |
496 | ) -> Tuple[int, JsonDict]: | |
485 | 497 | target_user = UserID.from_string(user_id) |
486 | 498 | requester = await self.auth.get_user_by_req(request) |
487 | 499 | auth_user = requester.user |
506 | 518 | self.is_mine = hs.is_mine |
507 | 519 | self.store = hs.get_datastore() |
508 | 520 | |
509 | async def on_POST(self, request: str, target_user_id: str) -> Tuple[int, JsonDict]: | |
521 | async def on_POST( | |
522 | self, request: SynapseRequest, target_user_id: str | |
523 | ) -> Tuple[int, JsonDict]: | |
510 | 524 | requester = await self.auth.get_user_by_req(request) |
511 | 525 | await assert_user_is_admin(self.auth, requester.user) |
512 | 526 | |
548 | 562 | self.account_activity_handler = hs.get_account_validity_handler() |
549 | 563 | self.auth = hs.get_auth() |
550 | 564 | |
551 | async def on_POST(self, request): | |
565 | async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]: | |
552 | 566 | await assert_requester_is_admin(self.auth, request) |
553 | 567 | |
554 | 568 | body = parse_json_object_from_request(request) |
582 | 596 | |
583 | 597 | PATTERNS = admin_patterns("/reset_password/(?P<target_user_id>[^/]*)") |
584 | 598 | |
585 | def __init__(self, hs): | |
599 | def __init__(self, hs: "HomeServer"): | |
586 | 600 | self.store = hs.get_datastore() |
587 | 601 | self.hs = hs |
588 | 602 | self.auth = hs.get_auth() |
589 | 603 | self.auth_handler = hs.get_auth_handler() |
590 | 604 | self._set_password_handler = hs.get_set_password_handler() |
591 | 605 | |
592 | async def on_POST(self, request, target_user_id): | |
606 | async def on_POST( | |
607 | self, request: SynapseRequest, target_user_id: str | |
608 | ) -> Tuple[int, JsonDict]: | |
593 | 609 | """Post request to allow an administrator reset password for a user. |
594 | 610 | This needs user to have administrator access in Synapse. |
595 | 611 | """ |
624 | 640 | |
625 | 641 | PATTERNS = admin_patterns("/search_users/(?P<target_user_id>[^/]*)") |
626 | 642 | |
627 | def __init__(self, hs): | |
628 | self.hs = hs | |
629 | self.store = hs.get_datastore() | |
630 | self.auth = hs.get_auth() | |
631 | ||
632 | async def on_GET(self, request, target_user_id): | |
643 | def __init__(self, hs: "HomeServer"): | |
644 | self.hs = hs | |
645 | self.store = hs.get_datastore() | |
646 | self.auth = hs.get_auth() | |
647 | ||
648 | async def on_GET( | |
649 | self, request: SynapseRequest, target_user_id: str | |
650 | ) -> Tuple[int, Optional[List[JsonDict]]]: | |
633 | 651 | """Get request to search user table for specific users according to |
634 | 652 | search term. |
635 | 653 | This needs user to have a administrator access in Synapse. |
680 | 698 | |
681 | 699 | PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/admin$") |
682 | 700 | |
683 | def __init__(self, hs): | |
684 | self.hs = hs | |
685 | self.store = hs.get_datastore() | |
686 | self.auth = hs.get_auth() | |
687 | ||
688 | async def on_GET(self, request, user_id): | |
701 | def __init__(self, hs: "HomeServer"): | |
702 | self.hs = hs | |
703 | self.store = hs.get_datastore() | |
704 | self.auth = hs.get_auth() | |
705 | ||
706 | async def on_GET( | |
707 | self, request: SynapseRequest, user_id: str | |
708 | ) -> Tuple[int, JsonDict]: | |
689 | 709 | await assert_requester_is_admin(self.auth, request) |
690 | 710 | |
691 | 711 | target_user = UserID.from_string(user_id) |
697 | 717 | |
698 | 718 | return 200, {"admin": is_admin} |
699 | 719 | |
700 | async def on_PUT(self, request, user_id): | |
720 | async def on_PUT( | |
721 | self, request: SynapseRequest, user_id: str | |
722 | ) -> Tuple[int, JsonDict]: | |
701 | 723 | requester = await self.auth.get_user_by_req(request) |
702 | 724 | await assert_user_is_admin(self.auth, requester.user) |
703 | 725 | auth_user = requester.user |
728 | 750 | |
729 | 751 | PATTERNS = admin_patterns("/users/(?P<user_id>[^/]+)/joined_rooms$") |
730 | 752 | |
731 | def __init__(self, hs): | |
753 | def __init__(self, hs: "HomeServer"): | |
732 | 754 | self.is_mine = hs.is_mine |
733 | 755 | self.auth = hs.get_auth() |
734 | 756 | self.store = hs.get_datastore() |
735 | 757 | |
736 | async def on_GET(self, request, user_id): | |
758 | async def on_GET( | |
759 | self, request: SynapseRequest, user_id: str | |
760 | ) -> Tuple[int, JsonDict]: | |
737 | 761 | await assert_requester_is_admin(self.auth, request) |
738 | 762 | |
739 | 763 | room_ids = await self.store.get_rooms_for_user(user_id) |
756 | 780 | |
757 | 781 | PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/pushers$") |
758 | 782 | |
759 | def __init__(self, hs): | |
783 | def __init__(self, hs: "HomeServer"): | |
760 | 784 | self.is_mine = hs.is_mine |
761 | 785 | self.store = hs.get_datastore() |
762 | 786 | self.auth = hs.get_auth() |
797 | 821 | |
798 | 822 | PATTERNS = admin_patterns("/users/(?P<user_id>[^/]+)/media$") |
799 | 823 | |
800 | def __init__(self, hs): | |
824 | def __init__(self, hs: "HomeServer"): | |
801 | 825 | self.is_mine = hs.is_mine |
802 | 826 | self.auth = hs.get_auth() |
803 | 827 | self.store = hs.get_datastore() |
831 | 855 | errcode=Codes.INVALID_PARAM, |
832 | 856 | ) |
833 | 857 | |
858 | # If neither `order_by` nor `dir` is set, default to showing the | |
859 | # newest media on top, for backward compatibility. | |
860 | if b"order_by" not in request.args and b"dir" not in request.args: | |
861 | order_by = MediaSortOrder.CREATED_TS.value | |
862 | direction = "b" | |
863 | else: | |
864 | order_by = parse_string( | |
865 | request, | |
866 | "order_by", | |
867 | default=MediaSortOrder.CREATED_TS.value, | |
868 | allowed_values=( | |
869 | MediaSortOrder.MEDIA_ID.value, | |
870 | MediaSortOrder.UPLOAD_NAME.value, | |
871 | MediaSortOrder.CREATED_TS.value, | |
872 | MediaSortOrder.LAST_ACCESS_TS.value, | |
873 | MediaSortOrder.MEDIA_LENGTH.value, | |
874 | MediaSortOrder.MEDIA_TYPE.value, | |
875 | MediaSortOrder.QUARANTINED_BY.value, | |
876 | MediaSortOrder.SAFE_FROM_QUARANTINE.value, | |
877 | ), | |
878 | ) | |
879 | direction = parse_string( | |
880 | request, "dir", default="f", allowed_values=("f", "b") | |
881 | ) | |
882 | ||
834 | 883 | media, total = await self.store.get_local_media_by_user_paginate( |
835 | start, limit, user_id | |
884 | start, limit, user_id, order_by, direction | |
836 | 885 | ) |
837 | 886 | |
838 | 887 | ret = {"media": media, "total": total} |
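
A hypothetical request exercising the new parameters on the user media admin API (host, user ID and token are placeholders):

```python
# Fetch @alice's media, largest files first, using the new order_by/dir
# query parameters. Everything here is a placeholder value.
import json
from urllib.request import Request, urlopen

url = (
    "https://homeserver.example.com/_synapse/admin/v1"
    "/users/@alice:example.com/media?order_by=media_length&dir=b"
)
req = Request(url, headers={"Authorization": "Bearer <admin_access_token>"})
with urlopen(req) as resp:
    body = json.load(resp)
print(body["total"], [m["media_id"] for m in body["media"]])
```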
864 | 913 | self.auth = hs.get_auth() |
865 | 914 | self.auth_handler = hs.get_auth_handler() |
866 | 915 | |
867 | async def on_POST(self, request, user_id): | |
916 | async def on_POST( | |
917 | self, request: SynapseRequest, user_id: str | |
918 | ) -> Tuple[int, JsonDict]: | |
868 | 919 | requester = await self.auth.get_user_by_req(request) |
869 | 920 | await assert_user_is_admin(self.auth, requester.user) |
870 | 921 | auth_user = requester.user |
916 | 967 | self.store = hs.get_datastore() |
917 | 968 | self.auth = hs.get_auth() |
918 | 969 | |
919 | async def on_POST(self, request, user_id): | |
970 | async def on_POST( | |
971 | self, request: SynapseRequest, user_id: str | |
972 | ) -> Tuple[int, JsonDict]: | |
920 | 973 | await assert_requester_is_admin(self.auth, request) |
921 | 974 | |
922 | 975 | if not self.hs.is_mine_id(user_id): |
19 | 19 | from synapse.api.ratelimiting import Ratelimiter |
20 | 20 | from synapse.appservice import ApplicationService |
21 | 21 | from synapse.handlers.sso import SsoIdentityProvider |
22 | from synapse.http import get_request_uri | |
22 | 23 | from synapse.http.server import HttpServer, finish_request |
23 | 24 | from synapse.http.servlet import ( |
24 | 25 | RestServlet, |
353 | 354 | hs.get_oidc_handler() |
354 | 355 | self._sso_handler = hs.get_sso_handler() |
355 | 356 | self._msc2858_enabled = hs.config.experimental.msc2858_enabled |
357 | self._public_baseurl = hs.config.public_baseurl | |
356 | 358 | |
357 | 359 | def register(self, http_server: HttpServer) -> None: |
358 | 360 | super().register(http_server) |
372 | 374 | async def on_GET( |
373 | 375 | self, request: SynapseRequest, idp_id: Optional[str] = None |
374 | 376 | ) -> None: |
377 | if not self._public_baseurl: | |
378 | raise SynapseError(400, "SSO requires a valid public_baseurl") | |
379 | ||
380 | # if this isn't the expected hostname, redirect to the right one, so that we | |
381 | # get our cookies back. | |
382 | requested_uri = get_request_uri(request) | |
383 | baseurl_bytes = self._public_baseurl.encode("utf-8") | |
384 | if not requested_uri.startswith(baseurl_bytes): | |
385 | # swap out the incorrect base URL for the right one. | |
386 | # | |
387 | # The idea here is to redirect from | |
388 | # https://foo.bar/whatever/_matrix/... | |
389 | # to | |
390 | # https://public.baseurl/_matrix/... | |
391 | # | |
392 | i = requested_uri.index(b"/_matrix") | |
393 | new_uri = baseurl_bytes[:-1] + requested_uri[i:] | |
394 | logger.info( | |
395 | "Requested URI %s is not canonical: redirecting to %s", | |
396 | requested_uri.decode("utf-8", errors="replace"), | |
397 | new_uri.decode("utf-8", errors="replace"), | |
398 | ) | |
399 | request.redirect(new_uri) | |
400 | finish_request(request) | |
401 | return | |
402 | ||
375 | 403 | client_redirect_url = parse_string( |
376 | 404 | request, "redirectUrl", required=True, encoding=None |
377 | 405 | ) |
17 | 17 | from functools import wraps |
18 | 18 | from typing import TYPE_CHECKING, Optional, Tuple |
19 | 19 | |
20 | from twisted.web.http import Request | |
20 | from twisted.web.server import Request | |
21 | 21 | |
22 | 22 | from synapse.api.constants import ( |
23 | 23 | MAX_GROUP_CATEGORYID_LENGTH, |
55 | 55 | content = parse_json_object_from_request(request) |
56 | 56 | assert_params_in_dict(content, ("messages",)) |
57 | 57 | |
58 | sender_user_id = requester.user.to_string() | |
59 | ||
60 | 58 | await self.device_message_handler.send_device_message( |
61 | sender_user_id, message_type, content["messages"] | |
59 | requester, message_type, content["messages"] | |
62 | 60 | ) |
63 | 61 | |
64 | 62 | response = (200, {}) # type: Tuple[int, dict] |
20 | 20 | |
21 | 21 | from twisted.internet.interfaces import IConsumer |
22 | 22 | from twisted.protocols.basic import FileSender |
23 | from twisted.web.http import Request | |
23 | from twisted.web.server import Request | |
24 | 24 | |
25 | 25 | from synapse.api.errors import Codes, SynapseError, cs_error |
26 | 26 | from synapse.http.server import finish_request, respond_with_json |
48 | 48 | |
49 | 49 | def parse_media_id(request: Request) -> Tuple[str, str, Optional[str]]: |
50 | 50 | try: |
51 | # The type on postpath seems incorrect in Twisted 21.2.0. | |
52 | postpath = request.postpath # type: List[bytes] # type: ignore | |
53 | assert postpath | |
54 | ||
51 | 55 | # This allows users to append e.g. /test.png to the URL. Useful for |
52 | 56 | # clients that parse the URL to see content type. |
53 | server_name, media_id = request.postpath[:2] | |
54 | ||
55 | if isinstance(server_name, bytes): | |
56 | server_name = server_name.decode("utf-8") | |
57 | media_id = media_id.decode("utf8") | |
57 | server_name_bytes, media_id_bytes = postpath[:2] | |
58 | server_name = server_name_bytes.decode("utf-8") | |
59 | media_id = media_id_bytes.decode("utf8") | |
58 | 60 | |
59 | 61 | file_name = None |
60 | if len(request.postpath) > 2: | |
62 | if len(postpath) > 2: | |
61 | 63 | try: |
62 | file_name = urllib.parse.unquote(request.postpath[-1].decode("utf-8")) | |
64 | file_name = urllib.parse.unquote(postpath[-1].decode("utf-8")) | |
63 | 65 | except UnicodeDecodeError: |
64 | 66 | pass |
65 | 67 | return server_name, media_id, file_name |
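
A worked example of the decoding the rewrite makes explicit, for a download path such as `.../example.com/abcdefg/test.png` (values hypothetical):

```python
# The path segments arrive as bytes; the first two are required, and the
# optional trailing segment becomes the suggested file name.
import urllib.parse

postpath = [b"example.com", b"abcdefg", b"test.png"]
server_name = postpath[0].decode("utf-8")      # "example.com"
media_id = postpath[1].decode("utf8")          # "abcdefg"
file_name = (
    urllib.parse.unquote(postpath[-1].decode("utf-8"))
    if len(postpath) > 2
    else None
)                                              # "test.png"
print((server_name, media_id, file_name))
```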
16 | 16 | |
17 | 17 | from typing import TYPE_CHECKING |
18 | 18 | |
19 | from twisted.web.http import Request | |
19 | from twisted.web.server import Request | |
20 | 20 | |
21 | 21 | from synapse.http.server import DirectServeJsonResource, respond_with_json |
22 | 22 |
15 | 15 | import logging |
16 | 16 | from typing import TYPE_CHECKING |
17 | 17 | |
18 | from twisted.web.http import Request | |
18 | from twisted.web.server import Request | |
19 | 19 | |
20 | 20 | from synapse.http.server import DirectServeJsonResource, set_cors_headers |
21 | 21 | from synapse.http.servlet import parse_boolean |
21 | 21 | |
22 | 22 | import twisted.internet.error |
23 | 23 | import twisted.web.http |
24 | from twisted.web.http import Request | |
25 | 24 | from twisted.web.resource import Resource |
25 | from twisted.web.server import Request | |
26 | 26 | |
27 | 27 | from synapse.api.errors import ( |
28 | 28 | FederationDeniedError, |
508 | 508 | t_height: int, |
509 | 509 | t_method: str, |
510 | 510 | t_type: str, |
511 | url_cache: str, | |
511 | url_cache: Optional[str], | |
512 | 512 | ) -> Optional[str]: |
513 | 513 | input_path = await self.media_storage.ensure_media_is_in_local_cache( |
514 | 514 | FileInfo(None, media_id, url_cache=url_cache) |
243 | 243 | await consumer.wait() |
244 | 244 | return local_path |
245 | 245 | |
246 | raise Exception("file could not be found") | |
246 | raise NotFoundError() | |
247 | 247 | |
248 | 248 | def _file_info_to_path(self, file_info: FileInfo) -> str: |
249 | 249 | """Converts file_info into a relative path. |
28 | 28 | import attr |
29 | 29 | |
30 | 30 | from twisted.internet.error import DNSLookupError |
31 | from twisted.web.http import Request | |
31 | from twisted.web.server import Request | |
32 | 32 | |
33 | 33 | from synapse.api.errors import Codes, SynapseError |
34 | 34 | from synapse.http.client import SimpleHttpClient |
148 | 148 | treq_args={"browser_like_redirects": True}, |
149 | 149 | ip_whitelist=hs.config.url_preview_ip_range_whitelist, |
150 | 150 | ip_blacklist=hs.config.url_preview_ip_range_blacklist, |
151 | http_proxy=os.getenvb(b"http_proxy"), | |
152 | https_proxy=os.getenvb(b"HTTPS_PROXY"), | |
151 | use_proxy=True, | |
153 | 152 | ) |
154 | 153 | self.media_repo = media_repo |
155 | 154 | self.primary_base_path = media_repo.primary_base_path |
17 | 17 | import logging |
18 | 18 | from typing import TYPE_CHECKING, Any, Dict, List, Optional |
19 | 19 | |
20 | from twisted.web.http import Request | |
20 | from twisted.web.server import Request | |
21 | 21 | |
22 | 22 | from synapse.api.errors import SynapseError |
23 | 23 | from synapse.http.server import DirectServeJsonResource, set_cors_headers |
112 | 112 | method, |
113 | 113 | m_type, |
114 | 114 | thumbnail_infos, |
115 | media_id, | |
115 | 116 | media_id, |
116 | 117 | url_cache=media_info["url_cache"], |
117 | 118 | server_name=None, |
268 | 269 | method, |
269 | 270 | m_type, |
270 | 271 | thumbnail_infos, |
272 | media_id, | |
271 | 273 | media_info["filesystem_id"], |
272 | 274 | url_cache=None, |
273 | 275 | server_name=server_name, |
281 | 283 | desired_method: str, |
282 | 284 | desired_type: str, |
283 | 285 | thumbnail_infos: List[Dict[str, Any]], |
286 | media_id: str, | |
284 | 287 | file_id: str, |
285 | 288 | url_cache: Optional[str] = None, |
286 | 289 | server_name: Optional[str] = None, |
316 | 319 | return |
317 | 320 | |
318 | 321 | responder = await self.media_storage.fetch_media(file_info) |
322 | if responder: | |
323 | await respond_with_responder( | |
324 | request, | |
325 | responder, | |
326 | file_info.thumbnail_type, | |
327 | file_info.thumbnail_length, | |
328 | ) | |
329 | return | |
330 | ||
331 | # If we can't find the thumbnail we regenerate it. This can happen | |
332 | # if e.g. we've deleted the thumbnails but still have the original | |
333 | # image somewhere. | |
334 | # | |
335 | # Since we have an entry for the thumbnail in the DB we a) know we | |
336 | # have successfully generated the thumbnail in the past (so we | |
337 | # don't need to worry about repeatedly failing to generate | |
338 | # thumbnails), and b) have already calculated the appropriate | |
339 | # width/height/method so we can just call the "generate exact" | |
340 | # methods. | |
341 | ||
342 | # First let's check that we do actually have the original image | |
343 | # still. This will throw a 404 if we don't. | |
344 | # TODO: We should refetch the thumbnails for remote media. | |
345 | await self.media_storage.ensure_media_is_in_local_cache( | |
346 | FileInfo(server_name, file_id, url_cache=url_cache) | |
347 | ) | |
348 | ||
349 | if server_name: | |
350 | await self.media_repo.generate_remote_exact_thumbnail( | |
351 | server_name, | |
352 | file_id=file_id, | |
353 | media_id=media_id, | |
354 | t_width=file_info.thumbnail_width, | |
355 | t_height=file_info.thumbnail_height, | |
356 | t_method=file_info.thumbnail_method, | |
357 | t_type=file_info.thumbnail_type, | |
358 | ) | |
359 | else: | |
360 | await self.media_repo.generate_local_exact_thumbnail( | |
361 | media_id=media_id, | |
362 | t_width=file_info.thumbnail_width, | |
363 | t_height=file_info.thumbnail_height, | |
364 | t_method=file_info.thumbnail_method, | |
365 | t_type=file_info.thumbnail_type, | |
366 | url_cache=url_cache, | |
367 | ) | |
368 | ||
369 | responder = await self.media_storage.fetch_media(file_info) | |
319 | 370 | await respond_with_responder( |
320 | request, responder, file_info.thumbnail_type, file_info.thumbnail_length | |
371 | request, | |
372 | responder, | |
373 | file_info.thumbnail_type, | |
374 | file_info.thumbnail_length, | |
321 | 375 | ) |
322 | 376 | else: |
323 | 377 | logger.info("Failed to find any generated thumbnails") |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | 16 | import logging |
17 | from typing import TYPE_CHECKING | |
17 | from typing import IO, TYPE_CHECKING | |
18 | 18 | |
19 | from twisted.web.http import Request | |
19 | from twisted.web.server import Request | |
20 | 20 | |
21 | 21 | from synapse.api.errors import Codes, SynapseError |
22 | 22 | from synapse.http.server import DirectServeJsonResource, respond_with_json |
78 | 78 | headers = request.requestHeaders |
79 | 79 | |
80 | 80 | if headers.hasHeader(b"Content-Type"): |
81 | media_type = headers.getRawHeaders(b"Content-Type")[0].decode("ascii") | |
81 | content_type_headers = headers.getRawHeaders(b"Content-Type") | |
82 | assert content_type_headers # for mypy | |
83 | media_type = content_type_headers[0].decode("ascii") | |
82 | 84 | else: |
83 | 85 | raise SynapseError(msg="Upload request missing 'Content-Type'", code=400) |
84 | 86 | |
87 | 89 | # TODO(markjh): parse content-disposition
88 | 90 | |
89 | 91 | try: |
92 | content = request.content # type: IO # type: ignore | |
90 | 93 | content_uri = await self.media_repo.create_content( |
91 | media_type, upload_name, request.content, content_length, requester.user | |
94 | media_type, upload_name, content, content_length, requester.user | |
92 | 95 | ) |
93 | 96 | except SpamMediaException: |
94 | 97 | # For uploading of media we want to respond with a 400, instead of |
14 | 14 | import logging |
15 | 15 | from typing import TYPE_CHECKING |
16 | 16 | |
17 | from twisted.web.http import Request | |
17 | from twisted.web.server import Request | |
18 | 18 | |
19 | 19 | from synapse.api.errors import SynapseError |
20 | 20 | from synapse.handlers.sso import get_username_mapping_session_cookie_from_request |
14 | 14 | import logging |
15 | 15 | from typing import TYPE_CHECKING, Tuple |
16 | 16 | |
17 | from twisted.web.http import Request | |
17 | from twisted.web.server import Request | |
18 | 18 | |
19 | 19 | from synapse.api.errors import ThreepidValidationError |
20 | 20 | from synapse.config.emailconfig import ThreepidBehaviour |
15 | 15 | import logging |
16 | 16 | from typing import TYPE_CHECKING, List |
17 | 17 | |
18 | from twisted.web.http import Request | |
19 | 18 | from twisted.web.resource import Resource |
19 | from twisted.web.server import Request | |
20 | 20 | |
21 | 21 | from synapse.api.errors import SynapseError |
22 | 22 | from synapse.handlers.sso import get_username_mapping_session_cookie_from_request |
15 | 15 | import logging |
16 | 16 | from typing import TYPE_CHECKING |
17 | 17 | |
18 | from twisted.web.http import Request | |
18 | from twisted.web.server import Request | |
19 | 19 | |
20 | 20 | from synapse.api.errors import SynapseError |
21 | 21 | from synapse.handlers.sso import get_username_mapping_session_cookie_from_request |
23 | 23 | import abc |
24 | 24 | import functools |
25 | 25 | import logging |
26 | import os | |
27 | 26 | from typing import ( |
28 | 27 | TYPE_CHECKING, |
29 | 28 | Any, |
38 | 37 | |
39 | 38 | import twisted.internet.base |
40 | 39 | import twisted.internet.tcp |
40 | from twisted.internet import defer | |
41 | 41 | from twisted.mail.smtp import sendmail |
42 | 42 | from twisted.web.iweb import IPolicyForHTTPS |
43 | 43 | |
247 | 247 | self.start_time = None # type: Optional[int] |
248 | 248 | |
249 | 249 | self._instance_id = random_string(5) |
250 | self._instance_name = config.worker_name or "master" | |
250 | self._instance_name = config.worker.instance_name | |
251 | 251 | |
252 | 252 | self.version_string = version_string |
253 | 253 | |
369 | 369 | """ |
370 | 370 | An HTTP client that uses configured HTTP(S) proxies. |
371 | 371 | """ |
372 | return SimpleHttpClient( | |
373 | self, | |
374 | http_proxy=os.getenvb(b"http_proxy"), | |
375 | https_proxy=os.getenvb(b"HTTPS_PROXY"), | |
376 | ) | |
372 | return SimpleHttpClient(self, use_proxy=True) | |
377 | 373 | |
378 | 374 | @cache_in_self |
379 | 375 | def get_proxied_blacklisted_http_client(self) -> SimpleHttpClient: |
385 | 381 | self, |
386 | 382 | ip_whitelist=self.config.ip_range_whitelist, |
387 | 383 | ip_blacklist=self.config.ip_range_blacklist, |
388 | http_proxy=os.getenvb(b"http_proxy"), | |
389 | https_proxy=os.getenvb(b"HTTPS_PROXY"), | |
384 | use_proxy=True, | |
390 | 385 | ) |
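
With `use_proxy=True`, the proxy settings that these call sites (and the URL-preview hunk earlier) used to read explicitly are now picked up by `SimpleHttpClient` itself from the standard environment variables. A hypothetical deployment snippet:

```python
# Hypothetical: set the same variables the deleted code consulted before
# starting the homeserver; clients built with use_proxy=True will use them.
import os

os.environ["http_proxy"] = "http://proxy.example.com:3128"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:3128"
```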
391 | 386 | |
392 | 387 | @cache_in_self |
408 | 403 | return RoomShutdownHandler(self) |
409 | 404 | |
410 | 405 | @cache_in_self |
411 | def get_sendmail(self) -> sendmail: | |
406 | def get_sendmail(self) -> Callable[..., defer.Deferred]: | |
412 | 407 | return sendmail |
413 | 408 | |
414 | 409 | @cache_in_self |
757 | 752 | reconnect=True, |
758 | 753 | ) |
759 | 754 | |
760 | async def remove_pusher(self, app_id: str, push_key: str, user_id: str): | |
761 | return await self.get_pusherpool().remove_pusher(app_id, push_key, user_id) | |
762 | ||
763 | 755 | def should_send_federation(self) -> bool: |
764 | 756 | "Should this server be sending federation traffic directly?" |
765 | return self.config.send_federation and ( | |
766 | not self.config.worker_app | |
767 | or self.config.worker_app == "synapse.app.federation_sender" | |
768 | ) | |
757 | return self.config.send_federation |
48 | 48 | from synapse.storage.background_updates import BackgroundUpdater |
49 | 49 | from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine |
50 | 50 | from synapse.storage.types import Connection, Cursor |
51 | from synapse.storage.util.sequence import build_sequence_generator | |
52 | 51 | from synapse.types import Collection |
53 | 52 | |
54 | 53 | # python 3 does not have a maximum int value |
380 | 379 | _TXN_ID = 0 |
381 | 380 | |
382 | 381 | def __init__( |
383 | self, hs, database_config: DatabaseConnectionConfig, engine: BaseDatabaseEngine | |
382 | self, | |
383 | hs, | |
384 | database_config: DatabaseConnectionConfig, | |
385 | engine: BaseDatabaseEngine, | |
384 | 386 | ): |
385 | 387 | self.hs = hs |
386 | 388 | self._clock = hs.get_clock() |
418 | 420 | "upsert_safety_check", |
419 | 421 | self._check_safe_to_upsert, |
420 | 422 | ) |
421 | ||
422 | # We define this sequence here so that it can be referenced from both | |
423 | # the DataStore and PersistEventStore. | |
424 | def get_chain_id_txn(txn): | |
425 | txn.execute("SELECT COALESCE(max(chain_id), 0) FROM event_auth_chains") | |
426 | return txn.fetchone()[0] | |
427 | ||
428 | self.event_chain_id_gen = build_sequence_generator( | |
429 | engine, get_chain_id_txn, "event_auth_chain_id" | |
430 | ) | |
431 | 423 | |
432 | 424 | def is_running(self) -> bool: |
433 | 425 | """Is the database pool currently running""" |
78 | 78 | # If we're on a process that can persist events also |
79 | 79 | # instantiate a `PersistEventsStore` |
80 | 80 | if hs.get_instance_name() in hs.config.worker.writers.events: |
81 | persist_events = PersistEventsStore(hs, database, main) | |
81 | persist_events = PersistEventsStore(hs, database, main, db_conn) | |
82 | 82 | |
83 | 83 | if "state" in database_config.databases: |
84 | 84 | logger.info( |
15 | 15 | # limitations under the License. |
16 | 16 | |
17 | 17 | import logging |
18 | from typing import Any, Dict, List, Optional, Tuple | |
18 | from typing import List, Optional, Tuple | |
19 | 19 | |
20 | 20 | from synapse.api.constants import PresenceState |
21 | 21 | from synapse.config.homeserver import HomeServerConfig |
26 | 26 | MultiWriterIdGenerator, |
27 | 27 | StreamIdGenerator, |
28 | 28 | ) |
29 | from synapse.types import get_domain_from_id | |
29 | from synapse.types import JsonDict, get_domain_from_id | |
30 | 30 | from synapse.util.caches.stream_change_cache import StreamChangeCache |
31 | 31 | |
32 | 32 | from .account_data import AccountDataStore |
263 | 263 | |
264 | 264 | return [UserPresenceState(**row) for row in rows] |
265 | 265 | |
266 | async def get_users(self) -> List[Dict[str, Any]]: | |
266 | async def get_users(self) -> List[JsonDict]: | |
267 | 267 | """Function to retrieve a list of users in users table. |
268 | 268 | |
269 | 269 | Returns: |
291 | 291 | name: Optional[str] = None, |
292 | 292 | guests: bool = True, |
293 | 293 | deactivated: bool = False, |
294 | ) -> Tuple[List[Dict[str, Any]], int]: | |
294 | ) -> Tuple[List[JsonDict], int]: | |
295 | 295 | """Function to retrieve a paginated list of users from |
296 | 296 | the users list. This will return a json list of users and the
297 | 297 | total number of users matching the filter criteria. |
352 | 352 | "get_users_paginate_txn", get_users_paginate_txn |
353 | 353 | ) |
354 | 354 | |
355 | async def search_users(self, term: str) -> Optional[List[Dict[str, Any]]]: | |
355 | async def search_users(self, term: str) -> Optional[List[JsonDict]]: | |
356 | 356 | """Function to search users list for one or more users with |
357 | 357 | the matched term. |
358 | 358 |
41 | 41 | from synapse.storage._base import db_to_json, make_in_list_sql_clause |
42 | 42 | from synapse.storage.database import DatabasePool, LoggingTransaction |
43 | 43 | from synapse.storage.databases.main.search import SearchEntry |
44 | from synapse.storage.types import Connection | |
44 | 45 | from synapse.storage.util.id_generators import MultiWriterIdGenerator |
46 | from synapse.storage.util.sequence import SequenceGenerator | |
45 | 47 | from synapse.types import StateMap, get_domain_from_id |
46 | 48 | from synapse.util import json_encoder |
47 | 49 | from synapse.util.iterutils import batch_iter, sorted_topologically |
89 | 91 | """ |
90 | 92 | |
91 | 93 | def __init__( |
92 | self, hs: "HomeServer", db: DatabasePool, main_data_store: "DataStore" | |
94 | self, | |
95 | hs: "HomeServer", | |
96 | db: DatabasePool, | |
97 | main_data_store: "DataStore", | |
98 | db_conn: Connection, | |
93 | 99 | ): |
94 | 100 | self.hs = hs |
95 | 101 | self.db_pool = db |
473 | 479 | self._add_chain_cover_index( |
474 | 480 | txn, |
475 | 481 | self.db_pool, |
482 | self.store.event_chain_id_gen, | |
476 | 483 | event_to_room_id, |
477 | 484 | event_to_types, |
478 | 485 | event_to_auth_chain, |
483 | 490 | cls, |
484 | 491 | txn, |
485 | 492 | db_pool: DatabasePool, |
493 | event_chain_id_gen: SequenceGenerator, | |
486 | 494 | event_to_room_id: Dict[str, str], |
487 | 495 | event_to_types: Dict[str, Tuple[str, str]], |
488 | 496 | event_to_auth_chain: Dict[str, List[str]], |
629 | 637 | new_chain_tuples = cls._allocate_chain_ids( |
630 | 638 | txn, |
631 | 639 | db_pool, |
640 | event_chain_id_gen, | |
632 | 641 | event_to_room_id, |
633 | 642 | event_to_types, |
634 | 643 | event_to_auth_chain, |
767 | 776 | def _allocate_chain_ids( |
768 | 777 | txn, |
769 | 778 | db_pool: DatabasePool, |
779 | event_chain_id_gen: SequenceGenerator, | |
770 | 780 | event_to_room_id: Dict[str, str], |
771 | 781 | event_to_types: Dict[str, Tuple[str, str]], |
772 | 782 | event_to_auth_chain: Dict[str, List[str]], |
879 | 889 | chain_to_max_seq_no[new_chain_tuple[0]] = new_chain_tuple[1] |
880 | 890 | |
881 | 891 | # Generate new chain IDs for all unallocated chain IDs. |
882 | newly_allocated_chain_ids = db_pool.event_chain_id_gen.get_next_mult_txn( | |
892 | newly_allocated_chain_ids = event_chain_id_gen.get_next_mult_txn( | |
883 | 893 | txn, len(unallocated_chain_ids) |
884 | 894 | ) |
885 | 895 |
695 | 695 | ) |
696 | 696 | |
697 | 697 | if not has_event_auth: |
698 | for auth_id in event.auth_event_ids(): | |
698 | # Old, dodgy events may have duplicate auth events, which we | |
699 | # need to deduplicate as we have a unique constraint. | |
700 | for auth_id in set(event.auth_event_ids()): | |
699 | 701 | auth_events.append( |
700 | 702 | { |
701 | 703 | "room_id": event.room_id, |
916 | 918 | PersistEventsStore._add_chain_cover_index( |
917 | 919 | txn, |
918 | 920 | self.db_pool, |
921 | self.event_chain_id_gen, | |
919 | 922 | event_to_room_id, |
920 | 923 | event_to_types, |
921 | 924 | event_to_auth_chain, |
44 | 44 | from synapse.storage.database import DatabasePool |
45 | 45 | from synapse.storage.engines import PostgresEngine |
46 | 46 | from synapse.storage.util.id_generators import MultiWriterIdGenerator, StreamIdGenerator |
47 | from synapse.storage.util.sequence import build_sequence_generator | |
47 | 48 | from synapse.types import Collection, JsonDict, get_domain_from_id |
48 | 49 | from synapse.util.caches.descriptors import cached |
49 | 50 | from synapse.util.caches.lrucache import LruCache |
155 | 156 | self._event_fetch_list = [] |
156 | 157 | self._event_fetch_ongoing = 0 |
157 | 158 | |
159 | # We define this sequence here so that it can be referenced from both | |
160 | # the DataStore and PersistEventStore. | |
161 | def get_chain_id_txn(txn): | |
162 | txn.execute("SELECT COALESCE(max(chain_id), 0) FROM event_auth_chains") | |
163 | return txn.fetchone()[0] | |
164 | ||
165 | self.event_chain_id_gen = build_sequence_generator( | |
166 | db_conn, | |
167 | database.engine, | |
168 | get_chain_id_txn, | |
169 | "event_auth_chain_id", | |
170 | table="event_auth_chains", | |
171 | id_column="chain_id", | |
172 | ) | |
173 | ||
158 | 174 | def process_replication_rows(self, stream_name, instance_name, token, rows): |
159 | 175 | if stream_name == EventsStream.NAME: |
160 | 176 | self._stream_id_gen.advance(instance_name, token) |
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | from enum import Enum | |
15 | 16 | from typing import Any, Dict, Iterable, List, Optional, Tuple |
16 | 17 | |
17 | 18 | from synapse.storage._base import SQLBaseStore |
20 | 21 | BG_UPDATE_REMOVE_MEDIA_REPO_INDEX_WITHOUT_METHOD = ( |
21 | 22 | "media_repository_drop_index_wo_method" |
22 | 23 | ) |
24 | ||
25 | ||
26 | class MediaSortOrder(Enum): | |
27 | """ | |
28 | Enum to define the sorting method used when returning media with | |
29 | get_local_media_by_user_paginate | |
30 | """ | |
31 | ||
32 | MEDIA_ID = "media_id" | |
33 | UPLOAD_NAME = "upload_name" | |
34 | CREATED_TS = "created_ts" | |
35 | LAST_ACCESS_TS = "last_access_ts" | |
36 | MEDIA_LENGTH = "media_length" | |
37 | MEDIA_TYPE = "media_type" | |
38 | QUARANTINED_BY = "quarantined_by" | |
39 | SAFE_FROM_QUARANTINE = "safe_from_quarantine" | |
23 | 40 | |
24 | 41 | |
25 | 42 | class MediaRepositoryBackgroundUpdateStore(SQLBaseStore): |
117 | 134 | ) |
118 | 135 | |
119 | 136 | async def get_local_media_by_user_paginate( |
120 | self, start: int, limit: int, user_id: str | |
137 | self, | |
138 | start: int, | |
139 | limit: int, | |
140 | user_id: str, | |
141 | order_by: str = MediaSortOrder.CREATED_TS.value, | |
142 | direction: str = "f", | |
121 | 143 | ) -> Tuple[List[Dict[str, Any]], int]: |
122 | 144 | """Get a paginated list of metadata for a local piece of media |
123 | 145 | which an user_id has uploaded |
126 | 148 | start: offset in the list |
127 | 149 | limit: maximum amount of media_ids to retrieve |
128 | 150 | user_id: fully-qualified user id |
151 | order_by: the sort order of the returned list | |
152 | direction: sort ascending or descending | |
129 | 153 | Returns: |
130 | 154 | A paginated list of all metadata of the user's media,
131 | 155 | plus the total count of all the user's media |
132 | 156 | """ |
133 | 157 | |
134 | 158 | def get_local_media_by_user_paginate_txn(txn): |
159 | ||
160 | # Set ordering | |
161 | order_by_column = MediaSortOrder(order_by).value | |
162 | ||
163 | if direction == "b": | |
164 | order = "DESC" | |
165 | else: | |
166 | order = "ASC" | |
135 | 167 | |
136 | 168 | args = [user_id] |
137 | 169 | sql = """ |
154 | 186 | "safe_from_quarantine" |
155 | 187 | FROM local_media_repository |
156 | 188 | WHERE user_id = ? |
157 | ORDER BY created_ts DESC, media_id DESC | |
189 | ORDER BY {order_by_column} {order}, media_id ASC | |
158 | 190 | LIMIT ? OFFSET ? |
159 | """ | |
191 | """.format( | |
192 | order_by_column=order_by_column, | |
193 | order=order, | |
194 | ) | |
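
Interpolating `order_by_column` with `.format()` is safe here because the value can only ever be one of the `MediaSortOrder` members defined above: constructing the enum from the query value raises `ValueError` for anything else, and `parse_string` has already restricted the input at the REST layer. A self-contained illustration:

```python
# The column name is forced through the enum before it reaches the SQL
# string, so raw request input can never be formatted into the query.
from enum import Enum

class MediaSortOrder(Enum):  # abridged copy of the enum defined above
    MEDIA_ID = "media_id"
    CREATED_TS = "created_ts"

print(MediaSortOrder("created_ts").value)           # "created_ts"
try:
    MediaSortOrder("x; DROP TABLE local_media_repository")
except ValueError as exc:
    print("rejected:", exc)
```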
160 | 195 | |
161 | 196 | args += [limit, start] |
162 | 197 | txn.execute(sql, args) |
343 | 378 | thumbnail_method, |
344 | 379 | thumbnail_length, |
345 | 380 | ): |
346 | await self.db_pool.simple_insert( | |
347 | "local_media_repository_thumbnails", | |
348 | { | |
381 | await self.db_pool.simple_upsert( | |
382 | table="local_media_repository_thumbnails", | |
383 | keyvalues={ | |
349 | 384 | "media_id": media_id, |
350 | 385 | "thumbnail_width": thumbnail_width, |
351 | 386 | "thumbnail_height": thumbnail_height, |
352 | 387 | "thumbnail_method": thumbnail_method, |
353 | 388 | "thumbnail_type": thumbnail_type, |
354 | "thumbnail_length": thumbnail_length, | |
355 | 389 | }, |
390 | values={"thumbnail_length": thumbnail_length}, | |
356 | 391 | desc="store_local_thumbnail", |
357 | 392 | ) |
358 | 393 | |
497 | 532 | thumbnail_method, |
498 | 533 | thumbnail_length, |
499 | 534 | ): |
500 | await self.db_pool.simple_insert( | |
501 | "remote_media_cache_thumbnails", | |
502 | { | |
535 | await self.db_pool.simple_upsert( | |
536 | table="remote_media_cache_thumbnails", | |
537 | keyvalues={ | |
503 | 538 | "media_origin": origin, |
504 | 539 | "media_id": media_id, |
505 | 540 | "thumbnail_width": thumbnail_width, |
506 | 541 | "thumbnail_height": thumbnail_height, |
507 | 542 | "thumbnail_method": thumbnail_method, |
508 | 543 | "thumbnail_type": thumbnail_type, |
509 | "thumbnail_length": thumbnail_length, | |
510 | "filesystem_id": filesystem_id, | |
511 | 544 | }, |
545 | values={"thumbnail_length": thumbnail_length}, | |
546 | insertion_values={"filesystem_id": filesystem_id}, | |
512 | 547 | desc="store_remote_media_thumbnail", |
513 | 548 | ) |
514 | 549 |
27 | 27 | async def purge_history( |
28 | 28 | self, room_id: str, token: str, delete_local_events: bool |
29 | 29 | ) -> Set[int]: |
30 | """Deletes room history before a certain point | |
30 | """Deletes room history before a certain point. | |
31 | ||
32 | Note that only a single purge can occur at once; this is guaranteed at | |
33 | a higher level (in the PaginationHandler). | |
31 | 34 | |
32 | 35 | Args: |
33 | 36 | room_id: |
51 | 54 | delete_local_events, |
52 | 55 | ) |
53 | 56 | |
54 | def _purge_history_txn(self, txn, room_id, token, delete_local_events): | |
57 | def _purge_history_txn( | |
58 | self, txn, room_id: str, token: RoomStreamToken, delete_local_events: bool | |
59 | ) -> Set[int]: | |
55 | 60 | # Tables that should be pruned: |
56 | 61 | # event_auth |
57 | 62 | # event_backward_extremities |
102 | 107 | if max_depth < token.topological: |
103 | 108 | # We need to ensure we don't delete all the events from the database |
104 | 109 | # otherwise we wouldn't be able to send any events (due to not |
105 | # having any backwards extremeties) | |
110 | # having any backwards extremities) | |
106 | 111 | raise SynapseError( |
107 | 112 | 400, "topological_ordering is greater than forward extremities"
108 | 113 | ) |
153 | 158 | |
154 | 159 | logger.info("[purge] Finding new backward extremities") |
155 | 160 | |
156 | # We calculate the new entries for the backward extremeties by finding | |
161 | # We calculate the new entries for the backward extremities by finding | |
157 | 162 | # events to be purged that are pointed to by events we're not going to |
158 | 163 | # purge. |
159 | 164 | txn.execute( |
295 | 300 | "purge_room", self._purge_room_txn, room_id |
296 | 301 | ) |
297 | 302 | |
298 | def _purge_room_txn(self, txn, room_id): | |
303 | def _purge_room_txn(self, txn, room_id: str) -> List[int]: | |
299 | 304 | # First we fetch all the state groups that should be deleted, before |
300 | 305 | # we delete that information. |
301 | 306 | txn.execute( |
308 | 313 | ) |
309 | 314 | |
310 | 315 | state_groups = [row[0] for row in txn] |
316 | ||
317 | # Get all the auth chains that are referenced by events that are to be | |
318 | # deleted. | |
319 | txn.execute( | |
320 | """ | |
321 | SELECT chain_id, sequence_number FROM events | |
322 | LEFT JOIN event_auth_chains USING (event_id) | |
323 | WHERE room_id = ? | |
324 | """, | |
325 | (room_id,), | |
326 | ) | |
327 | referenced_chain_id_tuples = list(txn) | |
328 | ||
329 | logger.info("[purge] removing events from event_auth_chain_links") | |
330 | txn.executemany( | |
331 | """ | |
332 | DELETE FROM event_auth_chain_links WHERE | |
333 | (origin_chain_id = ? AND origin_sequence_number = ?) OR | |
334 | (target_chain_id = ? AND target_sequence_number = ?) | |
335 | """, | |
336 | ( | |
337 | (chain_id, seq_num, chain_id, seq_num) | |
338 | for (chain_id, seq_num) in referenced_chain_id_tuples | |
339 | ), | |
340 | ) | |
311 | 341 | |
312 | 342 | # Now we delete tables which lack an index on room_id but have one on event_id |
313 | 343 | for table in ( |
318 | 348 | "event_reference_hashes", |
319 | 349 | "event_relations", |
320 | 350 | "event_to_state_groups", |
351 | "event_auth_chains", | |
352 | "event_auth_chain_to_calculate", | |
321 | 353 | "redactions", |
322 | 354 | "rejections", |
323 | 355 | "state_events", |
36 | 36 | super().__init__(database, db_conn, hs) |
37 | 37 | self._pushers_id_gen = StreamIdGenerator( |
38 | 38 | db_conn, "pushers", "id", extra_tables=[("deleted_pushers", "stream_id")] |
39 | ) | |
40 | ||
41 | self.db_pool.updates.register_background_update_handler( | |
42 | "remove_deactivated_pushers", | |
43 | self._remove_deactivated_pushers, | |
44 | ) | |
45 | ||
46 | self.db_pool.updates.register_background_update_handler( | |
47 | "remove_stale_pushers", | |
48 | self._remove_stale_pushers, | |
39 | 49 | ) |
40 | 50 | |
41 | 51 | def _decode_pushers_rows(self, rows: Iterable[dict]) -> Iterator[PusherConfig]: |
282 | 292 | desc="set_throttle_params", |
283 | 293 | lock=False, |
284 | 294 | ) |
295 | ||
296 | async def _remove_deactivated_pushers(self, progress: dict, batch_size: int) -> int: | |
297 | """A background update that deletes all pushers for deactivated users. | |
298 | ||
299 | Note that we don't proactively tell the pusherpool that we've deleted | |
300 | these (just because it's a bit of a faff to do from here), but they will | |
301 | get cleaned up at the next restart. | |
302 | """ | |
303 | ||
304 | last_user = progress.get("last_user", "") | |
305 | ||
306 | def _delete_pushers(txn) -> int: | |
307 | ||
308 | sql = """ | |
309 | SELECT name FROM users | |
310 | WHERE deactivated = ? and name > ? | |
311 | ORDER BY name ASC | |
312 | LIMIT ? | |
313 | """ | |
314 | ||
315 | txn.execute(sql, (1, last_user, batch_size)) | |
316 | users = [row[0] for row in txn] | |
317 | ||
318 | self.db_pool.simple_delete_many_txn( | |
319 | txn, | |
320 | table="pushers", | |
321 | column="user_name", | |
322 | iterable=users, | |
323 | keyvalues={}, | |
324 | ) | |
325 | ||
326 | if users: | |
327 | self.db_pool.updates._background_update_progress_txn( | |
328 | txn, "remove_deactivated_pushers", {"last_user": users[-1]} | |
329 | ) | |
330 | ||
331 | return len(users) | |
332 | ||
333 | number_deleted = await self.db_pool.runInteraction( | |
334 | "_remove_deactivated_pushers", _delete_pushers | |
335 | ) | |
336 | ||
337 | if number_deleted < batch_size: | |
338 | await self.db_pool.updates._end_background_update( | |
339 | "remove_deactivated_pushers" | |
340 | ) | |
341 | ||
342 | return number_deleted | |
343 | ||
344 | async def _remove_stale_pushers(self, progress: dict, batch_size: int) -> int: | |
345 | """A background update that deletes all pushers for logged out devices. | |
346 | ||
347 | Note that we don't proactively tell the pusherpool that we've deleted | |
348 | these (just because it's a bit of a faff to do from here), but they will | |
349 | get cleaned up at the next restart. | |
350 | """ | |
351 | ||
352 | last_pusher = progress.get("last_pusher", 0) | |
353 | ||
354 | def _delete_pushers(txn) -> int: | |
355 | ||
356 | sql = """ | |
357 | SELECT p.id, access_token FROM pushers AS p | |
358 | LEFT JOIN access_tokens AS a ON (p.access_token = a.id) | |
359 | WHERE p.id > ? | |
360 | ORDER BY p.id ASC | |
361 | LIMIT ? | |
362 | """ | |
363 | ||
364 | txn.execute(sql, (last_pusher, batch_size)) | |
365 | pushers = [(row[0], row[1]) for row in txn] | |
366 | ||
367 | self.db_pool.simple_delete_many_txn( | |
368 | txn, | |
369 | table="pushers", | |
370 | column="id", | |
371 | iterable=(pusher_id for pusher_id, token in pushers if token is None), | |
372 | keyvalues={}, | |
373 | ) | |
374 | ||
375 | if pushers: | |
376 | self.db_pool.updates._background_update_progress_txn( | |
377 | txn, "remove_stale_pushers", {"last_pusher": pushers[-1][0]} | |
378 | ) | |
379 | ||
380 | return len(pushers) | |
381 | ||
382 | number_deleted = await self.db_pool.runInteraction( | |
383 | "_remove_stale_pushers", _delete_pushers | |
384 | ) | |
385 | ||
386 | if number_deleted < batch_size: | |
387 | await self.db_pool.updates._end_background_update("remove_stale_pushers") | |
388 | ||
389 | return number_deleted | |
285 | 390 | |
286 | 391 | |
287 | 392 | class PusherStore(PusherWorkerStore): |
372 | 477 | await self.db_pool.runInteraction( |
373 | 478 | "delete_pusher", delete_pusher_txn, stream_id |
374 | 479 | ) |
480 | ||
481 | async def delete_all_pushers_for_user(self, user_id: str) -> None: | |
482 | """Delete all pushers associated with an account.""" | |
483 | ||
484 | # We want to generate a row in `deleted_pushers` for each pusher we're | |
485 | # deleting, so we fetch the list now so we can generate the appropriate | |
486 | # number of stream IDs. | |
487 | # | |
488 | # Note: technically there could be a race here between adding/deleting | |
489 | # pushers, but a) in the worst case we don't stop a pusher until the | |
490 | # next restart, and b) this is only called when we're deactivating an | |
491 | # account. | |
492 | pushers = list(await self.get_pushers_by_user_id(user_id)) | |
493 | ||
494 | def delete_pushers_txn(txn, stream_ids): | |
495 | self._invalidate_cache_and_stream( # type: ignore | |
496 | txn, self.get_if_user_has_pusher, (user_id,) | |
497 | ) | |
498 | ||
499 | self.db_pool.simple_delete_txn( | |
500 | txn, | |
501 | table="pushers", | |
502 | keyvalues={"user_name": user_id}, | |
503 | ) | |
504 | ||
505 | self.db_pool.simple_insert_many_txn( | |
506 | txn, | |
507 | table="deleted_pushers", | |
508 | values=[ | |
509 | { | |
510 | "stream_id": stream_id, | |
511 | "app_id": pusher.app_id, | |
512 | "pushkey": pusher.pushkey, | |
513 | "user_id": user_id, | |
514 | } | |
515 | for stream_id, pusher in zip(stream_ids, pushers) | |
516 | ], | |
517 | ) | |
518 | ||
519 | async with self._pushers_id_gen.get_next_mult(len(pushers)) as stream_ids: | |
520 | await self.db_pool.runInteraction( | |
521 | "delete_all_pushers_for_user", delete_pushers_txn, stream_ids | |
522 | ) |
22 | 22 | from synapse.api.constants import UserTypes |
23 | 23 | from synapse.api.errors import Codes, StoreError, SynapseError, ThreepidValidationError |
24 | 24 | from synapse.metrics.background_process_metrics import wrap_as_background_process |
25 | from synapse.storage.database import DatabasePool | |
25 | from synapse.storage.database import DatabasePool, LoggingDatabaseConnection | |
26 | 26 | from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore |
27 | 27 | from synapse.storage.databases.main.stats import StatsStore |
28 | 28 | from synapse.storage.types import Connection, Cursor |
69 | 69 | |
70 | 70 | |
71 | 71 | class RegistrationWorkerStore(CacheInvalidationWorkerStore): |
72 | def __init__(self, database: DatabasePool, db_conn: Connection, hs: "HomeServer"): | |
72 | def __init__( | |
73 | self, | |
74 | database: DatabasePool, | |
75 | db_conn: LoggingDatabaseConnection, | |
76 | hs: "HomeServer", | |
77 | ): | |
73 | 78 | super().__init__(database, db_conn, hs) |
74 | 79 | |
75 | 80 | self.config = hs.config |
78 | 83 | # call `find_max_generated_user_id_localpart` each time, which is |
79 | 84 | # expensive if there are many entries. |
80 | 85 | self._user_id_seq = build_sequence_generator( |
86 | db_conn, | |
81 | 87 | database.engine, |
82 | 88 | find_max_generated_user_id_localpart, |
83 | 89 | "user_id_seq", |
90 | table=None, | |
91 | id_column=None, | |
84 | 92 | ) |
85 | 93 | |
86 | 94 | self._account_validity = hs.config.account_validity |
1035 | 1043 | |
1036 | 1044 | |
1037 | 1045 | class RegistrationBackgroundUpdateStore(RegistrationWorkerStore): |
1038 | def __init__(self, database: DatabasePool, db_conn: Connection, hs: "HomeServer"): | |
1046 | def __init__( | |
1047 | self, | |
1048 | database: DatabasePool, | |
1049 | db_conn: LoggingDatabaseConnection, | |
1050 | hs: "HomeServer", | |
1051 | ): | |
1039 | 1052 | super().__init__(database, db_conn, hs) |
1040 | 1053 | |
1041 | 1054 | self._clock = hs.get_clock() |
0 | /* Copyright 2020 The Matrix.org Foundation C.I.C | |
1 | * | |
2 | * Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | * you may not use this file except in compliance with the License. | |
4 | * You may obtain a copy of the License at | |
5 | * | |
6 | * http://www.apache.org/licenses/LICENSE-2.0 | |
7 | * | |
8 | * Unless required by applicable law or agreed to in writing, software | |
9 | * distributed under the License is distributed on an "AS IS" BASIS, | |
10 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | * See the License for the specific language governing permissions and | |
12 | * limitations under the License. | |
13 | */ | |
14 | ||
15 | INSERT INTO background_updates (ordering, update_name, progress_json) VALUES | |
16 | (5828, 'rejected_events_metadata', '{}'); |
0 | /* Copyright 2021 The Matrix.org Foundation C.I.C | |
1 | * | |
2 | * Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | * you may not use this file except in compliance with the License. | |
4 | * You may obtain a copy of the License at | |
5 | * | |
6 | * http://www.apache.org/licenses/LICENSE-2.0 | |
7 | * | |
8 | * Unless required by applicable law or agreed to in writing, software | |
9 | * distributed under the License is distributed on an "AS IS" BASIS, | |
10 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | * See the License for the specific language governing permissions and | |
12 | * limitations under the License. | |
13 | */ | |
14 | ||
15 | ||
16 | -- We may not have deleted all pushers for deactivated accounts, so we set up a | |
17 | -- background job to delete them. | |
18 | INSERT INTO background_updates (ordering, update_name, progress_json) VALUES | |
19 | (5908, 'remove_deactivated_pushers', '{}'); |
0 | /* Copyright 2021 The Matrix.org Foundation C.I.C | |
1 | * | |
2 | * Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | * you may not use this file except in compliance with the License. | |
4 | * You may obtain a copy of the License at | |
5 | * | |
6 | * http://www.apache.org/licenses/LICENSE-2.0 | |
7 | * | |
8 | * Unless required by applicable law or agreed to in writing, software | |
9 | * distributed under the License is distributed on an "AS IS" BASIS, | |
10 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | * See the License for the specific language governing permissions and | |
12 | * limitations under the License. | |
13 | */ | |
14 | ||
15 | ||
16 | -- Delete all pushers associated with deleted devices. This is to clear up after | |
17 | -- a bug where they weren't correctly deleted when using workers. | |
18 | INSERT INTO background_updates (ordering, update_name, progress_json) VALUES | |
19 | (5908, 'remove_stale_pushers', '{}'); |
0 | /* Copyright 2020 The Matrix.org Foundation C.I.C | |
1 | * | |
2 | * Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | * you may not use this file except in compliance with the License. | |
4 | * You may obtain a copy of the License at | |
5 | * | |
6 | * http://www.apache.org/licenses/LICENSE-2.0 | |
7 | * | |
8 | * Unless required by applicable law or agreed to in writing, software | |
9 | * distributed under the License is distributed on an "AS IS" BASIS, | |
10 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | * See the License for the specific language governing permissions and | |
12 | * limitations under the License. | |
13 | */ | |
14 | ||
15 | -- This originally was in 58/, but landed after 59/ was created, and so some | |
16 | -- servers running develop didn't run this delta. Running it again should be | |
17 | -- safe. | |
18 | -- | |
19 | -- We first delete any in progress `rejected_events_metadata` background update, | |
20 | -- to ensure that we don't conflict when trying to insert the new one. (We could | |
21 | -- alternatively do an ON CONFLICT DO NOTHING, but that syntax isn't supported | |
22 | -- by older SQLite versions. Plus, this should be a rare case). | |
23 | DELETE FROM background_updates WHERE update_name = 'rejected_events_metadata'; | |
24 | INSERT INTO background_updates (ordering, update_name, progress_json) VALUES | |
25 | (5828, 'rejected_events_metadata', '{}'); |
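
The delete-then-insert pair above is the portable form of an upsert:
`INSERT ... ON CONFLICT DO NOTHING` only arrived in SQLite 3.24, as the
comment notes. A self-contained sketch of the same idea via Python's
sqlite3 module (the table here is a minimal stand-in for the real one):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE background_updates "
        "(ordering INT, update_name TEXT PRIMARY KEY, progress_json TEXT)"
    )
    # Delete any existing row first, then insert: works on every SQLite
    # version, unlike the ON CONFLICT syntax.
    conn.execute(
        "DELETE FROM background_updates WHERE update_name = ?",
        ("rejected_events_metadata",),
    )
    conn.execute(
        "INSERT INTO background_updates (ordering, update_name, progress_json) "
        "VALUES (?, ?, ?)",
        (5828, "rejected_events_metadata", "{}"),
    )
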
496 | 496 | async def add_users_in_public_rooms( |
497 | 497 | self, room_id: str, user_ids: Iterable[str] |
498 | 498 | ) -> None: |
499 | """Insert entries into the users_who_share_private_rooms table. The first | |
500 | user should be a local user. | |
499 | """Insert entries into the users_in_public_rooms table. | |
501 | 500 | |
502 | 501 | Args: |
503 | 502 | room_id |
555 | 554 | def __init__(self, database: DatabasePool, db_conn, hs): |
556 | 555 | super().__init__(database, db_conn, hs) |
557 | 556 | |
557 | self._prefer_local_users_in_search = ( | |
558 | hs.config.user_directory_search_prefer_local_users | |
559 | ) | |
560 | self._server_name = hs.config.server_name | |
561 | ||
558 | 562 | async def remove_from_user_dir(self, user_id: str) -> None: |
559 | 563 | def _remove_from_user_dir_txn(txn): |
560 | 564 | self.db_pool.simple_delete_txn( |
664 | 668 | users.update(rows) |
665 | 669 | return list(users) |
666 | 670 | |
667 | @cached() | |
668 | 671 | async def get_shared_rooms_for_users( |
669 | 672 | self, user_id: str, other_user_id: str |
670 | 673 | ) -> Set[str]: |
753 | 756 | ) |
754 | 757 | """ |
755 | 758 | |
759 | # We allow manipulating the ranking algorithm by injecting statements | |
760 | # based on config options. | |
761 | additional_ordering_statements = [] | |
762 | ordering_arguments = () | |
763 | ||
756 | 764 | if isinstance(self.database_engine, PostgresEngine): |
757 | 765 | full_query, exact_query, prefix_query = _parse_query_postgres(search_term) |
766 | ||
767 | # If enabled, this config option will rank local users higher than those on | |
768 | # remote instances. | |
769 | if self._prefer_local_users_in_search: | |
770 | # This statement checks whether a given user's user ID contains a server name | |
771 | # that matches the local server | |
772 | statement = "* (CASE WHEN user_id LIKE ? THEN 2.0 ELSE 1.0 END)" | |
773 | additional_ordering_statements.append(statement) | |
774 | ||
775 | ordering_arguments += ("%:" + self._server_name,) | |
758 | 776 | |
759 | 777 | # We order by rank and then if they have profile info |
760 | 778 | # The ranking algorithm is hand tweaked for "best" results. Broadly |
766 | 784 | FROM user_directory_search as t |
767 | 785 | INNER JOIN user_directory AS d USING (user_id) |
768 | 786 | WHERE |
769 | %s | |
787 | %(where_clause)s | |
770 | 788 | AND vector @@ to_tsquery('simple', ?) |
771 | 789 | ORDER BY |
772 | 790 | (CASE WHEN d.user_id IS NOT NULL THEN 4.0 ELSE 1.0 END) |
786 | 804 | 8 |
787 | 805 | ) |
788 | 806 | ) |
807 | %(order_case_statements)s | |
789 | 808 | DESC, |
790 | 809 | display_name IS NULL, |
791 | 810 | avatar_url IS NULL |
792 | 811 | LIMIT ? |
793 | """ % ( | |
794 | where_clause, | |
795 | ) | |
796 | args = join_args + (full_query, exact_query, prefix_query, limit + 1) | |
812 | """ % { | |
813 | "where_clause": where_clause, | |
814 | "order_case_statements": " ".join(additional_ordering_statements), | |
815 | } | |
816 | args = ( | |
817 | join_args | |
818 | + (full_query, exact_query, prefix_query) | |
819 | + ordering_arguments | |
820 | + (limit + 1,) | |
821 | ) | |
797 | 822 | elif isinstance(self.database_engine, Sqlite3Engine): |
798 | 823 | search_query = _parse_query_sqlite(search_term) |
824 | ||
825 | # If enabled, this config option will rank local users higher than those on | |
826 | # remote instances. | |
827 | if self._prefer_local_users_in_search: | |
828 | # This statement checks whether a given user's user ID contains a server name | |
829 | # that matches the local server | |
830 | # | |
831 | # Note that we need to include a comma at the end for valid SQL | |
832 | statement = "user_id LIKE ? DESC," | |
833 | additional_ordering_statements.append(statement) | |
834 | ||
835 | ordering_arguments += ("%:" + self._server_name,) | |
799 | 836 | |
800 | 837 | sql = """ |
801 | 838 | SELECT d.user_id AS user_id, display_name, avatar_url |
802 | 839 | FROM user_directory_search as t |
803 | 840 | INNER JOIN user_directory AS d USING (user_id) |
804 | 841 | WHERE |
805 | %s | |
842 | %(where_clause)s | |
806 | 843 | AND value MATCH ? |
807 | 844 | ORDER BY |
808 | 845 | rank(matchinfo(user_directory_search)) DESC, |
846 | %(order_statements)s | |
809 | 847 | display_name IS NULL, |
810 | 848 | avatar_url IS NULL |
811 | 849 | LIMIT ? |
812 | """ % ( | |
813 | where_clause, | |
814 | ) | |
815 | args = join_args + (search_query, limit + 1) | |
850 | """ % { | |
851 | "where_clause": where_clause, | |
852 | "order_statements": " ".join(additional_ordering_statements), | |
853 | } | |
854 | args = join_args + (search_query,) + ordering_arguments + (limit + 1,) | |
816 | 855 | else: |
817 | 856 | # This should be unreachable. |
818 | 857 | raise Exception("Unrecognized database engine") |
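
The switch from positional `%s` to named `%(...)s` placeholders above is
what lets the optional ordering clause be spliced into the query without
disturbing the rest of it. A toy, SQLite-flavoured illustration of the
same pattern (table and values are made up):

    additional_ordering_statements = []
    ordering_arguments = ()

    prefer_local = True  # stands in for self._prefer_local_users_in_search
    if prefer_local:
        # Trailing comma keeps the ORDER BY list valid, as noted above.
        additional_ordering_statements.append("user_id LIKE ? DESC,")
        ordering_arguments += ("%:example.com",)

    sql = """
        SELECT user_id FROM user_directory_search
        WHERE %(where_clause)s AND value MATCH ?
        ORDER BY
            %(order_statements)s
            user_id
        LIMIT ?
    """ % {
        "where_clause": "1=1",
        "order_statements": " ".join(additional_ordering_statements),
    }
    # Bind parameters must follow placeholder order: the MATCH term, then
    # the injected LIKE pattern, then the limit.
    args = ("alice*",) + ordering_arguments + (10,)
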
96 | 96 | return txn.fetchone()[0] |
97 | 97 | |
98 | 98 | self._state_group_seq_gen = build_sequence_generator( |
99 | self.database_engine, get_max_state_group_txn, "state_group_id_seq" | |
100 | ) | |
101 | self._state_group_seq_gen.check_consistency( | |
102 | db_conn, table="state_groups", id_column="id" | |
99 | db_conn, | |
100 | self.database_engine, | |
101 | get_max_state_group_txn, | |
102 | "state_group_id_seq", | |
103 | table="state_groups", | |
104 | id_column="id", | |
103 | 105 | ) |
104 | 106 | |
105 | 107 | @cached(max_entries=10000, iterable=True) |
72 | 72 | Returns: |
73 | 73 | The set of state groups that can be deleted. |
74 | 74 | """ |
75 | # Graph of state group -> previous group | |
76 | graph = {} | |
77 | ||
78 | 75 | # Set of state groups that we have found to be referenced by events |
79 | 76 | referenced_groups = set() |
80 | 77 | |
110 | 107 | next_to_search |= prevs |
111 | 108 | state_groups_seen |= prevs |
112 | 109 | |
113 | graph.update(edges) | |
114 | ||
115 | 110 | to_delete = state_groups_seen - referenced_groups |
116 | 111 | |
117 | 112 | return to_delete |
24 | 24 | ) |
25 | 25 | |
26 | 26 | GetRoomsForUserWithStreamOrdering = namedtuple( |
27 | "_GetRoomsForUserWithStreamOrdering", ("room_id", "event_pos") | |
27 | "GetRoomsForUserWithStreamOrdering", ("room_id", "event_pos") | |
28 | 28 | ) |
29 | 29 | |
30 | 30 |
250 | 250 | |
251 | 251 | |
252 | 252 | def build_sequence_generator( |
253 | db_conn: "LoggingDatabaseConnection", | |
253 | 254 | database_engine: BaseDatabaseEngine, |
254 | 255 | get_first_callback: GetFirstCallbackType, |
255 | 256 | sequence_name: str, |
257 | table: Optional[str], | |
258 | id_column: Optional[str], | |
259 | stream_name: Optional[str] = None, | |
260 | positive: bool = True, | |
256 | 261 | ) -> SequenceGenerator: |
257 | 262 | """Get the best impl of SequenceGenerator available |
258 | 263 | |
264 | 269 | get_first_callback: a callback which gets the next sequence ID. Used if |
265 | 270 | we're on sqlite. |
266 | 271 | sequence_name: the name of a postgres sequence to use. |
272 | table, id_column, stream_name, positive: If set then `check_consistency` | |
273 | is called on the created sequence. See docstring for | |
274 | `check_consistency` details. | |
267 | 275 | """ |
268 | 276 | if isinstance(database_engine, PostgresEngine): |
269 | return PostgresSequenceGenerator(sequence_name) | |
277 | seq = PostgresSequenceGenerator(sequence_name) # type: SequenceGenerator | |
270 | 278 | else: |
271 | return LocalSequenceGenerator(get_first_callback) | |
279 | seq = LocalSequenceGenerator(get_first_callback) | |
280 | ||
281 | if table: | |
282 | assert id_column | |
283 | seq.check_consistency( | |
284 | db_conn=db_conn, | |
285 | table=table, | |
286 | id_column=id_column, | |
287 | stream_name=stream_name, | |
288 | positive=positive, | |
289 | ) | |
290 | ||
291 | return seq |
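
With the new signature, callers hand `build_sequence_generator` the
`db_conn` plus optional `table`/`id_column`, and the factory runs
`check_consistency` itself. Both call shapes from the hunks above, for
reference:

    # Consistency check runs: table/id_column are set (state store).
    seq = build_sequence_generator(
        db_conn,
        database_engine,
        get_max_state_group_txn,   # SQLite fallback for the first value
        "state_group_id_seq",      # Postgres sequence name
        table="state_groups",
        id_column="id",
    )

    # Consistency check skipped: table/id_column are None (registration store).
    user_id_seq = build_sequence_generator(
        db_conn,
        database_engine,
        find_max_generated_user_id_localpart,
        "user_id_seq",
        table=None,
        id_column=None,
    )
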
29 | 29 | |
30 | 30 | from synapse.config import find_config_files |
31 | 31 | |
32 | SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"] | |
32 | SYNAPSE = [sys.executable, "-m", "synapse.app.homeserver"] | |
33 | 33 | |
34 | 34 | GREEN = "\x1b[1;32m" |
35 | 35 | YELLOW = "\x1b[1;33m" |
116 | 116 | |
117 | 117 | args = [ |
118 | 118 | sys.executable, |
119 | "-B", | |
120 | 119 | "-m", |
121 | 120 | app, |
122 | 121 | "-c", |
520 | 520 | ) |
521 | 521 | self.assertEqual(expected_state.state, PresenceState.ONLINE) |
522 | 522 | self.federation_sender.send_presence_to_destinations.assert_called_once_with( |
523 | destinations=["server2"], states=[expected_state] | |
523 | destinations=["server2"], states={expected_state} | |
524 | 524 | ) |
525 | 525 | |
526 | 526 | # |
532 | 532 | |
533 | 533 | self.federation_sender.send_presence.assert_not_called() |
534 | 534 | self.federation_sender.send_presence_to_destinations.assert_called_once_with( |
535 | destinations=["server3"], states=[expected_state] | |
535 | destinations=["server3"], states={expected_state} | |
536 | 536 | ) |
537 | 537 | |
538 | 538 | def test_remote_gets_presence_when_local_user_joins(self): |
583 | 583 | self.presence_handler.current_state_for_user("@test2:server") |
584 | 584 | ) |
585 | 585 | self.assertEqual(expected_state.state, PresenceState.ONLINE) |
586 | self.federation_sender.send_presence_to_destinations.assert_called_once_with( | |
587 | destinations={"server2", "server3"}, states=[expected_state] | |
586 | self.assertEqual( | |
587 | self.federation_sender.send_presence_to_destinations.call_count, 2 | |
588 | ) | |
589 | self.federation_sender.send_presence_to_destinations.assert_any_call( | |
590 | destinations=["server3"], states={expected_state} | |
591 | ) | |
592 | self.federation_sender.send_presence_to_destinations.assert_any_call( | |
593 | destinations=["server2"], states={expected_state} | |
588 | 594 | ) |
589 | 595 | |
590 | 596 | def _add_new_user(self, room_id, user_id): |
160 | 160 | |
161 | 161 | response = self.get_success( |
162 | 162 | self.query_handlers["profile"]( |
163 | {"user_id": "@caroline:test", "field": "displayname"} | |
163 | { | |
164 | "user_id": "@caroline:test", | |
165 | "field": "displayname", | |
166 | "origin": "servername.tld", | |
167 | } | |
164 | 168 | ) |
165 | 169 | ) |
166 | 170 |
17 | 17 | |
18 | 18 | import synapse.rest.admin |
19 | 19 | from synapse.api.constants import EventTypes, RoomEncryptionAlgorithms, UserTypes |
20 | from synapse.api.room_versions import RoomVersion, RoomVersions | |
20 | 21 | from synapse.rest.client.v1 import login, room |
21 | 22 | from synapse.rest.client.v2_alpha import user_directory |
22 | 23 | from synapse.storage.roommember import ProfileInfo |
45 | 46 | def prepare(self, reactor, clock, hs): |
46 | 47 | self.store = hs.get_datastore() |
47 | 48 | self.handler = hs.get_user_directory_handler() |
49 | self.event_builder_factory = self.hs.get_event_builder_factory() | |
50 | self.event_creation_handler = self.hs.get_event_creation_handler() | |
48 | 51 | |
49 | 52 | def test_handle_local_profile_change_with_support_user(self): |
50 | 53 | support_user_id = "@support:test" |
546 | 549 | s = self.get_success(self.handler.search_users(u1, u4, 10)) |
547 | 550 | self.assertEqual(len(s["results"]), 1) |
548 | 551 | |
552 | @override_config( | |
553 | { | |
554 | "user_directory": { | |
555 | "enabled": True, | |
556 | "search_all_users": True, | |
557 | "prefer_local_users": True, | |
558 | } | |
559 | } | |
560 | ) | |
561 | def test_prefer_local_users(self): | |
562 | """Tests that local users are shown higher in search results when | |
563 | user_directory.prefer_local_users is True. | |
564 | """ | |
565 | # Create a room and a few users to test the directory with | |
566 | searching_user = self.register_user("searcher", "password") | |
567 | searching_user_tok = self.login("searcher", "password") | |
568 | ||
569 | room_id = self.helper.create_room_as( | |
570 | searching_user, | |
571 | room_version=RoomVersions.V1.identifier, | |
572 | tok=searching_user_tok, | |
573 | ) | |
574 | ||
575 | # Create a few local users and join them to the room | |
576 | local_user_1 = self.register_user("user_xxxxx", "password") | |
577 | local_user_2 = self.register_user("user_bbbbb", "password") | |
578 | local_user_3 = self.register_user("user_zzzzz", "password") | |
579 | ||
580 | self._add_user_to_room(room_id, RoomVersions.V1, local_user_1) | |
581 | self._add_user_to_room(room_id, RoomVersions.V1, local_user_2) | |
582 | self._add_user_to_room(room_id, RoomVersions.V1, local_user_3) | |
583 | ||
584 | # Create a few "remote" users and join them to the room | |
585 | remote_user_1 = "@user_aaaaa:remote_server" | |
586 | remote_user_2 = "@user_yyyyy:remote_server" | |
587 | remote_user_3 = "@user_ccccc:remote_server" | |
588 | self._add_user_to_room(room_id, RoomVersions.V1, remote_user_1) | |
589 | self._add_user_to_room(room_id, RoomVersions.V1, remote_user_2) | |
590 | self._add_user_to_room(room_id, RoomVersions.V1, remote_user_3) | |
591 | ||
592 | local_users = [local_user_1, local_user_2, local_user_3] | |
593 | remote_users = [remote_user_1, remote_user_2, remote_user_3] | |
594 | ||
595 | # Populate the user directory via background update | |
596 | self._add_background_updates() | |
597 | while not self.get_success( | |
598 | self.store.db_pool.updates.has_completed_background_updates() | |
599 | ): | |
600 | self.get_success( | |
601 | self.store.db_pool.updates.do_next_background_update(100), by=0.1 | |
602 | ) | |
603 | ||
604 | # The local searching user searches for the term "user", which other users have | |
605 | # in their user id | |
606 | results = self.get_success( | |
607 | self.handler.search_users(searching_user, "user", 20) | |
608 | )["results"] | |
609 | received_user_id_ordering = [result["user_id"] for result in results] | |
610 | ||
611 | # Typically we'd expect Synapse to return users in lexicographical order, | |
612 | # assuming they have similar user IDs, display names, and profile information. | |
613 | ||
614 | # Check that the order of returned results using our module is as we expect, | |
615 | # i.e. our local users show up first, despite all users having lexicographically | |
616 | # mixed user IDs. | |
617 | [self.assertIn(user, local_users) for user in received_user_id_ordering[:3]] | |
618 | [self.assertIn(user, remote_users) for user in received_user_id_ordering[3:]] | |
619 | ||
620 | def _add_user_to_room( | |
621 | self, | |
622 | room_id: str, | |
623 | room_version: RoomVersion, | |
624 | user_id: str, | |
625 | ): | |
626 | # Add a user to the room. | |
627 | builder = self.event_builder_factory.for_room_version( | |
628 | room_version, | |
629 | { | |
630 | "type": "m.room.member", | |
631 | "sender": user_id, | |
632 | "state_key": user_id, | |
633 | "room_id": room_id, | |
634 | "content": {"membership": "join"}, | |
635 | }, | |
636 | ) | |
637 | ||
638 | event, context = self.get_success( | |
639 | self.event_creation_handler.create_new_client_event(builder) | |
640 | ) | |
641 | ||
642 | self.get_success( | |
643 | self.hs.get_storage().persistence.persist_event(event, context) | |
644 | ) | |
645 | ||
549 | 646 | |
550 | 647 | class TestUserDirSearchDisabled(unittest.HomeserverTestCase): |
551 | 648 | user_id = "@test:test" |
25 | 25 | |
26 | 26 | |
27 | 27 | class ReadBodyWithMaxSizeTests(TestCase): |
28 | def setUp(self): | |
28 | def _build_response(self, length=UNKNOWN_LENGTH): | |
29 | 29 | """Start reading the body, returns the response, result and proto""" |
30 | response = Mock(length=UNKNOWN_LENGTH) | |
31 | self.result = BytesIO() | |
32 | self.deferred = read_body_with_max_size(response, self.result, 6) | |
30 | response = Mock(length=length) | |
31 | result = BytesIO() | |
32 | deferred = read_body_with_max_size(response, result, 6) | |
33 | 33 | |
34 | 34 | # Fish the protocol out of the response. |
35 | self.protocol = response.deliverBody.call_args[0][0] | |
36 | self.protocol.transport = Mock() | |
35 | protocol = response.deliverBody.call_args[0][0] | |
36 | protocol.transport = Mock() | |
37 | 37 | |
38 | def _cleanup_error(self): | |
38 | return result, deferred, protocol | |
39 | ||
40 | def _assert_error(self, deferred, protocol): | |
41 | """Ensure that the expected error is received.""" | |
42 | self.assertIsInstance(deferred.result, Failure) | |
43 | self.assertIsInstance(deferred.result.value, BodyExceededMaxSize) | |
44 | protocol.transport.abortConnection.assert_called_once() | |
45 | ||
46 | def _cleanup_error(self, deferred): | |
39 | 47 | """Ensure that the error in the Deferred is handled gracefully.""" |
40 | 48 | called = [False] |
41 | 49 | |
42 | 50 | def errback(f): |
43 | 51 | called[0] = True |
44 | 52 | |
45 | self.deferred.addErrback(errback) | |
53 | deferred.addErrback(errback) | |
46 | 54 | self.assertTrue(called[0]) |
47 | 55 | |
48 | 56 | def test_no_error(self): |
49 | 57 | """A response that is NOT too large.""" |
58 | result, deferred, protocol = self._build_response() | |
50 | 59 | |
51 | 60 | # Start sending data. |
52 | self.protocol.dataReceived(b"12345") | |
61 | protocol.dataReceived(b"12345") | |
53 | 62 | # Close the connection. |
54 | self.protocol.connectionLost(Failure(ResponseDone())) | |
63 | protocol.connectionLost(Failure(ResponseDone())) | |
55 | 64 | |
56 | self.assertEqual(self.result.getvalue(), b"12345") | |
57 | self.assertEqual(self.deferred.result, 5) | |
65 | self.assertEqual(result.getvalue(), b"12345") | |
66 | self.assertEqual(deferred.result, 5) | |
58 | 67 | |
59 | 68 | def test_too_large(self): |
60 | 69 | """A response which is too large raises an exception.""" |
70 | result, deferred, protocol = self._build_response() | |
61 | 71 | |
62 | 72 | # Start sending data. |
63 | self.protocol.dataReceived(b"1234567890") | |
64 | # Close the connection. | |
65 | self.protocol.connectionLost(Failure(ResponseDone())) | |
73 | protocol.dataReceived(b"1234567890") | |
66 | 74 | |
67 | self.assertEqual(self.result.getvalue(), b"1234567890") | |
68 | self.assertIsInstance(self.deferred.result, Failure) | |
69 | self.assertIsInstance(self.deferred.result.value, BodyExceededMaxSize) | |
70 | self._cleanup_error() | |
75 | self.assertEqual(result.getvalue(), b"1234567890") | |
76 | self._assert_error(deferred, protocol) | |
77 | self._cleanup_error(deferred) | |
71 | 78 | |
72 | 79 | def test_multiple_packets(self): |
73 | """Data should be accummulated through mutliple packets.""" | |
80 | """Data should be accumulated through mutliple packets.""" | |
81 | result, deferred, protocol = self._build_response() | |
74 | 82 | |
75 | 83 | # Start sending data. |
76 | self.protocol.dataReceived(b"12") | |
77 | self.protocol.dataReceived(b"34") | |
84 | protocol.dataReceived(b"12") | |
85 | protocol.dataReceived(b"34") | |
78 | 86 | # Close the connection. |
79 | self.protocol.connectionLost(Failure(ResponseDone())) | |
87 | protocol.connectionLost(Failure(ResponseDone())) | |
80 | 88 | |
81 | self.assertEqual(self.result.getvalue(), b"1234") | |
82 | self.assertEqual(self.deferred.result, 4) | |
89 | self.assertEqual(result.getvalue(), b"1234") | |
90 | self.assertEqual(deferred.result, 4) | |
83 | 91 | |
84 | 92 | def test_additional_data(self): |
85 | 93 | """A connection can receive data after being closed.""" |
94 | result, deferred, protocol = self._build_response() | |
86 | 95 | |
87 | 96 | # Start sending data. |
88 | self.protocol.dataReceived(b"1234567890") | |
89 | self.assertIsInstance(self.deferred.result, Failure) | |
90 | self.assertIsInstance(self.deferred.result.value, BodyExceededMaxSize) | |
91 | self.protocol.transport.abortConnection.assert_called_once() | |
97 | protocol.dataReceived(b"1234567890") | |
98 | self._assert_error(deferred, protocol) | |
92 | 99 | |
93 | 100 | # More data might have come in. |
94 | self.protocol.dataReceived(b"1234567890") | |
95 | # Close the connection. | |
96 | self.protocol.connectionLost(Failure(ResponseDone())) | |
101 | protocol.dataReceived(b"1234567890") | |
97 | 102 | |
98 | self.assertEqual(self.result.getvalue(), b"1234567890") | |
99 | self.assertIsInstance(self.deferred.result, Failure) | |
100 | self.assertIsInstance(self.deferred.result.value, BodyExceededMaxSize) | |
101 | self._cleanup_error() | |
103 | self.assertEqual(result.getvalue(), b"1234567890") | |
104 | self._assert_error(deferred, protocol) | |
105 | self._cleanup_error(deferred) | |
106 | ||
107 | def test_content_length(self): | |
108 | """The body shouldn't be read (at all) if the Content-Length header is too large.""" | |
109 | result, deferred, protocol = self._build_response(length=10) | |
110 | ||
111 | # Deferred shouldn't be called yet. | |
112 | self.assertFalse(deferred.called) | |
113 | ||
114 | # Start sending data. | |
115 | protocol.dataReceived(b"12345") | |
116 | self._assert_error(deferred, protocol) | |
117 | self._cleanup_error(deferred) | |
118 | ||
119 | # The data is never consumed. | |
120 | self.assertEqual(result.getvalue(), b"") |
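
A hedged sketch of the contract these tests pin down (assuming the helper
and exception are importable from synapse.http.client, as in this release):
the deferred resolves with the byte count on success, and fails with
BodyExceededMaxSize (aborting the connection) once more than max_size
bytes arrive, or immediately if the advertised Content-Length already
exceeds it:

    from io import BytesIO

    from synapse.http.client import BodyExceededMaxSize, read_body_with_max_size

    def fetch_body(response, max_size=6):
        # `response` is a twisted.web IResponse, e.g. from an Agent request.
        output = BytesIO()
        d = read_body_with_max_size(response, output, max_size)
        d.addCallbacks(
            lambda length: length,                   # number of bytes read
            lambda f: f.trap(BodyExceededMaxSize),   # connection already aborted
        )
        return d
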
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | import os | |
16 | from unittest.mock import patch | |
15 | 17 | |
16 | 18 | import treq |
17 | 19 | from netaddr import IPSet |
99 | 101 | |
100 | 102 | return http_protocol |
101 | 103 | |
102 | def test_http_request(self): | |
103 | agent = ProxyAgent(self.reactor) | |
104 | ||
105 | self.reactor.lookups["test.com"] = "1.2.3.4" | |
106 | d = agent.request(b"GET", b"http://test.com") | |
104 | def _test_request_direct_connection(self, agent, scheme, hostname, path): | |
105 | """Runs a test case for a direct connection not going through a proxy. | |
106 | ||
107 | Args: | |
108 | agent (ProxyAgent): the proxy agent being tested | |
109 | ||
110 | scheme (bytes): expected to be either "http" or "https" | |
111 | ||
112 | hostname (bytes): the hostname to connect to in the test | |
113 | ||
114 | path (bytes): the path to connect to in the test | |
115 | """ | |
116 | is_https = scheme == b"https" | |
117 | ||
118 | self.reactor.lookups[hostname.decode()] = "1.2.3.4" | |
119 | d = agent.request(b"GET", scheme + b"://" + hostname + b"/" + path) | |
107 | 120 | |
108 | 121 | # there should be a pending TCP connection |
109 | 122 | clients = self.reactor.tcpClients |
110 | 123 | self.assertEqual(len(clients), 1) |
111 | 124 | (host, port, client_factory, _timeout, _bindAddress) = clients[0] |
112 | 125 | self.assertEqual(host, "1.2.3.4") |
113 | self.assertEqual(port, 80) | |
114 | ||
115 | # make a test server, and wire up the client | |
116 | http_server = self._make_connection( | |
117 | client_factory, _get_test_protocol_factory() | |
118 | ) | |
119 | ||
120 | # the FakeTransport is async, so we need to pump the reactor | |
121 | self.reactor.advance(0) | |
122 | ||
123 | # now there should be a pending request | |
124 | self.assertEqual(len(http_server.requests), 1) | |
125 | ||
126 | request = http_server.requests[0] | |
127 | self.assertEqual(request.method, b"GET") | |
128 | self.assertEqual(request.path, b"/") | |
129 | self.assertEqual(request.requestHeaders.getRawHeaders(b"host"), [b"test.com"]) | |
130 | request.write(b"result") | |
131 | request.finish() | |
132 | ||
133 | self.reactor.advance(0) | |
134 | ||
135 | resp = self.successResultOf(d) | |
136 | body = self.successResultOf(treq.content(resp)) | |
137 | self.assertEqual(body, b"result") | |
138 | ||
139 | def test_https_request(self): | |
140 | agent = ProxyAgent(self.reactor, contextFactory=get_test_https_policy()) | |
141 | ||
142 | self.reactor.lookups["test.com"] = "1.2.3.4" | |
143 | d = agent.request(b"GET", b"https://test.com/abc") | |
144 | ||
145 | # there should be a pending TCP connection | |
146 | clients = self.reactor.tcpClients | |
147 | self.assertEqual(len(clients), 1) | |
148 | (host, port, client_factory, _timeout, _bindAddress) = clients[0] | |
149 | self.assertEqual(host, "1.2.3.4") | |
150 | self.assertEqual(port, 443) | |
126 | self.assertEqual(port, 443 if is_https else 80) | |
151 | 127 | |
152 | 128 | # make a test server, and wire up the client |
153 | 129 | http_server = self._make_connection( |
154 | 130 | client_factory, |
155 | 131 | _get_test_protocol_factory(), |
156 | ssl=True, | |
157 | expected_sni=b"test.com", | |
132 | ssl=is_https, | |
133 | expected_sni=hostname if is_https else None, | |
158 | 134 | ) |
159 | 135 | |
160 | 136 | # the FakeTransport is async, so we need to pump the reactor |
165 | 141 | |
166 | 142 | request = http_server.requests[0] |
167 | 143 | self.assertEqual(request.method, b"GET") |
168 | self.assertEqual(request.path, b"/abc") | |
169 | self.assertEqual(request.requestHeaders.getRawHeaders(b"host"), [b"test.com"]) | |
144 | self.assertEqual(request.path, b"/" + path) | |
145 | self.assertEqual(request.requestHeaders.getRawHeaders(b"host"), [hostname]) | |
170 | 146 | request.write(b"result") |
171 | 147 | request.finish() |
172 | 148 | |
176 | 152 | body = self.successResultOf(treq.content(resp)) |
177 | 153 | self.assertEqual(body, b"result") |
178 | 154 | |
155 | def test_http_request(self): | |
156 | agent = ProxyAgent(self.reactor) | |
157 | self._test_request_direct_connection(agent, b"http", b"test.com", b"") | |
158 | ||
159 | def test_https_request(self): | |
160 | agent = ProxyAgent(self.reactor, contextFactory=get_test_https_policy()) | |
161 | self._test_request_direct_connection(agent, b"https", b"test.com", b"abc") | |
162 | ||
163 | def test_http_request_use_proxy_empty_environment(self): | |
164 | agent = ProxyAgent(self.reactor, use_proxy=True) | |
165 | self._test_request_direct_connection(agent, b"http", b"test.com", b"") | |
166 | ||
167 | @patch.dict(os.environ, {"http_proxy": "proxy.com:8888", "NO_PROXY": "test.com"}) | |
168 | def test_http_request_via_uppercase_no_proxy(self): | |
169 | agent = ProxyAgent(self.reactor, use_proxy=True) | |
170 | self._test_request_direct_connection(agent, b"http", b"test.com", b"") | |
171 | ||
172 | @patch.dict( | |
173 | os.environ, {"http_proxy": "proxy.com:8888", "no_proxy": "test.com,unused.com"} | |
174 | ) | |
175 | def test_http_request_via_no_proxy(self): | |
176 | agent = ProxyAgent(self.reactor, use_proxy=True) | |
177 | self._test_request_direct_connection(agent, b"http", b"test.com", b"") | |
178 | ||
179 | @patch.dict( | |
180 | os.environ, {"https_proxy": "proxy.com", "no_proxy": "test.com,unused.com"} | |
181 | ) | |
182 | def test_https_request_via_no_proxy(self): | |
183 | agent = ProxyAgent( | |
184 | self.reactor, | |
185 | contextFactory=get_test_https_policy(), | |
186 | use_proxy=True, | |
187 | ) | |
188 | self._test_request_direct_connection(agent, b"https", b"test.com", b"abc") | |
189 | ||
190 | @patch.dict(os.environ, {"http_proxy": "proxy.com:8888", "no_proxy": "*"}) | |
191 | def test_http_request_via_no_proxy_star(self): | |
192 | agent = ProxyAgent(self.reactor, use_proxy=True) | |
193 | self._test_request_direct_connection(agent, b"http", b"test.com", b"") | |
194 | ||
195 | @patch.dict(os.environ, {"https_proxy": "proxy.com", "no_proxy": "*"}) | |
196 | def test_https_request_via_no_proxy_star(self): | |
197 | agent = ProxyAgent( | |
198 | self.reactor, | |
199 | contextFactory=get_test_https_policy(), | |
200 | use_proxy=True, | |
201 | ) | |
202 | self._test_request_direct_connection(agent, b"https", b"test.com", b"abc") | |
203 | ||
204 | @patch.dict(os.environ, {"http_proxy": "proxy.com:8888", "no_proxy": "unused.com"}) | |
179 | 205 | def test_http_request_via_proxy(self): |
180 | agent = ProxyAgent(self.reactor, http_proxy=b"proxy.com:8888") | |
206 | agent = ProxyAgent(self.reactor, use_proxy=True) | |
181 | 207 | |
182 | 208 | self.reactor.lookups["proxy.com"] = "1.2.3.5" |
183 | 209 | d = agent.request(b"GET", b"http://test.com") |
213 | 239 | body = self.successResultOf(treq.content(resp)) |
214 | 240 | self.assertEqual(body, b"result") |
215 | 241 | |
242 | @patch.dict(os.environ, {"https_proxy": "proxy.com", "no_proxy": "unused.com"}) | |
216 | 243 | def test_https_request_via_proxy(self): |
217 | 244 | agent = ProxyAgent( |
218 | 245 | self.reactor, |
219 | 246 | contextFactory=get_test_https_policy(), |
220 | https_proxy=b"proxy.com", | |
247 | use_proxy=True, | |
221 | 248 | ) |
222 | 249 | |
223 | 250 | self.reactor.lookups["proxy.com"] = "1.2.3.5" |
293 | 320 | body = self.successResultOf(treq.content(resp)) |
294 | 321 | self.assertEqual(body, b"result") |
295 | 322 | |
323 | @patch.dict(os.environ, {"http_proxy": "proxy.com:8888"}) | |
296 | 324 | def test_http_request_via_proxy_with_blacklist(self): |
297 | 325 | # The blacklist includes the configured proxy IP. |
298 | 326 | agent = ProxyAgent( |
300 | 328 | self.reactor, ip_whitelist=None, ip_blacklist=IPSet(["1.0.0.0/8"]) |
301 | 329 | ), |
302 | 330 | self.reactor, |
303 | http_proxy=b"proxy.com:8888", | |
331 | use_proxy=True, | |
304 | 332 | ) |
305 | 333 | |
306 | 334 | self.reactor.lookups["proxy.com"] = "1.2.3.5" |
337 | 365 | body = self.successResultOf(treq.content(resp)) |
338 | 366 | self.assertEqual(body, b"result") |
339 | 367 | |
340 | def test_https_request_via_proxy_with_blacklist(self): | |
368 | @patch.dict(os.environ, {"HTTPS_PROXY": "proxy.com"}) | |
369 | def test_https_request_via_uppercase_proxy_with_blacklist(self): | |
341 | 370 | # The blacklist includes the configured proxy IP. |
342 | 371 | agent = ProxyAgent( |
343 | 372 | BlacklistingReactorWrapper( |
345 | 374 | ), |
346 | 375 | self.reactor, |
347 | 376 | contextFactory=get_test_https_policy(), |
348 | https_proxy=b"proxy.com", | |
377 | use_proxy=True, | |
349 | 378 | ) |
350 | 379 | |
351 | 380 | self.reactor.lookups["proxy.com"] = "1.2.3.5" |
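
For reference, a minimal sketch of the lookup order these proxy tests
exercise; it mirrors the behaviour described in the changelog (the
lowercase form beats the uppercase one, and no_proxy/NO_PROXY with "*"
exempts every host), not Synapse's exact implementation:

    import os
    from typing import Optional

    def proxy_url_for(scheme: str) -> Optional[str]:
        # The lowercase form takes precedence if both are present.
        return os.environ.get("%s_proxy" % scheme) or os.environ.get(
            "%s_PROXY" % scheme.upper()
        )

    def host_is_exempt(host: str) -> bool:
        no_proxy = os.environ.get("no_proxy") or os.environ.get("NO_PROXY") or ""
        if no_proxy.strip() == "*":
            return True
        return host in {h.strip() for h in no_proxy.split(",") if h.strip()}

    # With http_proxy=proxy.com:8888 and no_proxy=test.com, a request to
    # test.com connects directly, matching test_http_request_via_no_proxy.
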
20 | 20 | from twisted.internet.defer import Deferred |
21 | 21 | |
22 | 22 | import synapse.rest.admin |
23 | from synapse.api.errors import Codes, SynapseError | |
23 | 24 | from synapse.rest.client.v1 import login, room |
24 | 25 | |
25 | 26 | from tests.unittest import HomeserverTestCase |
99 | 100 | user_tuple = self.get_success( |
100 | 101 | self.hs.get_datastore().get_user_by_access_token(self.access_token) |
101 | 102 | ) |
102 | token_id = user_tuple.token_id | |
103 | self.token_id = user_tuple.token_id | |
104 | ||
105 | # We need to add email to account before we can create a pusher. | |
106 | self.get_success( | |
107 | hs.get_datastore().user_add_threepid( | |
108 | self.user_id, "email", "a@example.com", 0, 0 | |
109 | ) | |
110 | ) | |
103 | 111 | |
104 | 112 | self.pusher = self.get_success( |
105 | 113 | self.hs.get_pusherpool().add_pusher( |
106 | 114 | user_id=self.user_id, |
107 | access_token=token_id, | |
115 | access_token=self.token_id, | |
108 | 116 | kind="email", |
109 | 117 | app_id="m.email", |
110 | 118 | app_display_name="Email Notifications", |
115 | 123 | ) |
116 | 124 | ) |
117 | 125 | |
126 | def test_need_validated_email(self): | |
127 | """Test that we can only add an email pusher if the user has validated | |
128 | their email. | |
129 | """ | |
130 | with self.assertRaises(SynapseError) as cm: | |
131 | self.get_success_or_raise( | |
132 | self.hs.get_pusherpool().add_pusher( | |
133 | user_id=self.user_id, | |
134 | access_token=self.token_id, | |
135 | kind="email", | |
136 | app_id="m.email", | |
137 | app_display_name="Email Notifications", | |
138 | device_display_name="b@example.com", | |
139 | pushkey="b@example.com", | |
140 | lang=None, | |
141 | data={}, | |
142 | ) | |
143 | ) | |
144 | ||
145 | self.assertEqual(400, cm.exception.code) | |
146 | self.assertEqual(Codes.THREEPID_NOT_FOUND, cm.exception.errcode) | |
147 | ||
118 | 148 | def test_simple_sends_email(self): |
119 | 149 | # Create a simple room with two users |
120 | 150 | room = self.helper.create_room_as(self.user_id, tok=self.access_token) |
23 | 23 | # enable federation sending on the worker |
24 | 24 | config = super()._get_worker_hs_config() |
25 | 25 | # TODO: make it so we don't need both of these |
26 | config["send_federation"] = True | |
26 | config["send_federation"] = False | |
27 | 27 | config["worker_app"] = "synapse.app.federation_sender" |
28 | 28 | return config |
29 | 29 |
26 | 26 | def default_config(self) -> dict: |
27 | 27 | config = super().default_config() |
28 | 28 | config["worker_app"] = "synapse.app.federation_sender" |
29 | config["send_federation"] = True | |
29 | config["send_federation"] = False | |
30 | 30 | return config |
31 | 31 | |
32 | 32 | def make_homeserver(self, reactor, clock): |
48 | 48 | |
49 | 49 | self.make_worker_hs( |
50 | 50 | "synapse.app.federation_sender", |
51 | {"send_federation": True}, | |
51 | {"send_federation": False}, | |
52 | 52 | federation_http_client=mock_client, |
53 | 53 | ) |
54 | 54 |
94 | 94 | |
95 | 95 | self.make_worker_hs( |
96 | 96 | "synapse.app.pusher", |
97 | {"start_pushers": True}, | |
97 | {"start_pushers": False}, | |
98 | 98 | proxied_blacklisted_http_client=http_client_mock, |
99 | 99 | ) |
100 | 100 |
17 | 17 | import json |
18 | 18 | import urllib.parse |
19 | 19 | from binascii import unhexlify |
20 | from typing import Optional | |
20 | from typing import List, Optional | |
21 | 21 | |
22 | 22 | from mock import Mock |
23 | 23 | |
30 | 30 | from synapse.types import JsonDict |
31 | 31 | |
32 | 32 | from tests import unittest |
33 | from tests.server import FakeSite, make_request | |
33 | 34 | from tests.test_utils import make_awaitable |
34 | 35 | from tests.unittest import override_config |
35 | 36 | |
1953 | 1954 | ] |
1954 | 1955 | |
1955 | 1956 | def prepare(self, reactor, clock, hs): |
1957 | self.store = hs.get_datastore() | |
1956 | 1958 | self.media_repo = hs.get_media_repository_resource() |
1957 | 1959 | |
1958 | 1960 | self.admin_user = self.register_user("admin", "pass", admin=True) |
2023 | 2025 | |
2024 | 2026 | number_media = 20 |
2025 | 2027 | other_user_tok = self.login("user", "pass") |
2026 | self._create_media(other_user_tok, number_media) | |
2028 | self._create_media_for_user(other_user_tok, number_media) | |
2027 | 2029 | |
2028 | 2030 | channel = self.make_request( |
2029 | 2031 | "GET", |
2044 | 2046 | |
2045 | 2047 | number_media = 20 |
2046 | 2048 | other_user_tok = self.login("user", "pass") |
2047 | self._create_media(other_user_tok, number_media) | |
2049 | self._create_media_for_user(other_user_tok, number_media) | |
2048 | 2050 | |
2049 | 2051 | channel = self.make_request( |
2050 | 2052 | "GET", |
2065 | 2067 | |
2066 | 2068 | number_media = 20 |
2067 | 2069 | other_user_tok = self.login("user", "pass") |
2068 | self._create_media(other_user_tok, number_media) | |
2070 | self._create_media_for_user(other_user_tok, number_media) | |
2069 | 2071 | |
2070 | 2072 | channel = self.make_request( |
2071 | 2073 | "GET", |
2079 | 2081 | self.assertEqual(len(channel.json_body["media"]), 10) |
2080 | 2082 | self._check_fields(channel.json_body["media"]) |
2081 | 2083 | |
2082 | def test_limit_is_negative(self): | |
2083 | """ | |
2084 | Testing that a negative limit parameter returns a 400 | |
2085 | """ | |
2086 | ||
2084 | def test_invalid_parameter(self): | |
2085 | """ | |
2086 | If parameters are invalid, an error is returned. | |
2087 | """ | |
2088 | # unknown order_by | |
2089 | channel = self.make_request( | |
2090 | "GET", | |
2091 | self.url + "?order_by=bar", | |
2092 | access_token=self.admin_user_tok, | |
2093 | ) | |
2094 | ||
2095 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) | |
2096 | self.assertEqual(Codes.UNKNOWN, channel.json_body["errcode"]) | |
2097 | ||
2098 | # invalid search order | |
2099 | channel = self.make_request( | |
2100 | "GET", | |
2101 | self.url + "?dir=bar", | |
2102 | access_token=self.admin_user_tok, | |
2103 | ) | |
2104 | ||
2105 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) | |
2106 | self.assertEqual(Codes.UNKNOWN, channel.json_body["errcode"]) | |
2107 | ||
2108 | # negative limit | |
2087 | 2109 | channel = self.make_request( |
2088 | 2110 | "GET", |
2089 | 2111 | self.url + "?limit=-5", |
2093 | 2115 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) |
2094 | 2116 | self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"]) |
2095 | 2117 | |
2096 | def test_from_is_negative(self): | |
2097 | """ | |
2098 | Testing that a negative from parameter returns a 400 | |
2099 | """ | |
2100 | ||
2118 | # negative from | |
2101 | 2119 | channel = self.make_request( |
2102 | 2120 | "GET", |
2103 | 2121 | self.url + "?from=-5", |
2114 | 2132 | |
2115 | 2133 | number_media = 20 |
2116 | 2134 | other_user_tok = self.login("user", "pass") |
2117 | self._create_media(other_user_tok, number_media) | |
2135 | self._create_media_for_user(other_user_tok, number_media) | |
2118 | 2136 | |
2119 | 2137 | # `next_token` does not appear |
2120 | 2138 | # Number of results is the number of entries |
2192 | 2210 | |
2193 | 2211 | number_media = 5 |
2194 | 2212 | other_user_tok = self.login("user", "pass") |
2195 | self._create_media(other_user_tok, number_media) | |
2213 | self._create_media_for_user(other_user_tok, number_media) | |
2196 | 2214 | |
2197 | 2215 | channel = self.make_request( |
2198 | 2216 | "GET", |
2206 | 2224 | self.assertNotIn("next_token", channel.json_body) |
2207 | 2225 | self._check_fields(channel.json_body["media"]) |
2208 | 2226 | |
2209 | def _create_media(self, user_token, number_media): | |
2227 | def test_order_by(self): | |
2228 | """ | |
2229 | Test ordering the list with the `order_by` parameter. | |
2230 | """ | |
2231 | ||
2232 | other_user_tok = self.login("user", "pass") | |
2233 | ||
2234 | # Resolution: 1×1, MIME type: image/png, Extension: png, Size: 67 B | |
2235 | image_data1 = unhexlify( | |
2236 | b"89504e470d0a1a0a0000000d4948445200000001000000010806" | |
2237 | b"0000001f15c4890000000a49444154789c63000100000500010d" | |
2238 | b"0a2db40000000049454e44ae426082" | |
2239 | ) | |
2240 | # Resolution: 1×1, MIME type: image/gif, Extension: gif, Size: 35 B | |
2241 | image_data2 = unhexlify( | |
2242 | b"47494638376101000100800100000000" | |
2243 | b"ffffff2c00000000010001000002024c" | |
2244 | b"01003b" | |
2245 | ) | |
2246 | # Resolution: 1×1, MIME type: image/bmp, Extension: bmp, Size: 54 B | |
2247 | image_data3 = unhexlify( | |
2248 | b"424d3a0000000000000036000000280000000100000001000000" | |
2249 | b"0100180000000000040000000000000000000000000000000000" | |
2250 | b"0000" | |
2251 | ) | |
2252 | ||
2253 | # create media and make sure they do not have the same timestamp | |
2254 | media1 = self._create_media_and_access(other_user_tok, image_data1, "image.png") | |
2255 | self.pump(1.0) | |
2256 | media2 = self._create_media_and_access(other_user_tok, image_data2, "image.gif") | |
2257 | self.pump(1.0) | |
2258 | media3 = self._create_media_and_access(other_user_tok, image_data3, "image.bmp") | |
2259 | self.pump(1.0) | |
2260 | ||
2261 | # Mark one media as safe from quarantine. | |
2262 | self.get_success(self.store.mark_local_media_as_safe(media2)) | |
2263 | # Quarantine one media | |
2264 | self.get_success( | |
2265 | self.store.quarantine_media_by_id("test", media3, self.admin_user) | |
2266 | ) | |
2267 | ||
2268 | # order by default ("created_ts") | |
2269 | # default is backwards | |
2270 | self._order_test([media3, media2, media1], None) | |
2271 | self._order_test([media1, media2, media3], None, "f") | |
2272 | self._order_test([media3, media2, media1], None, "b") | |
2273 | ||
2274 | # sort by media_id | |
2275 | sorted_media = sorted([media1, media2, media3], reverse=False) | |
2276 | sorted_media_reverse = sorted(sorted_media, reverse=True) | |
2277 | ||
2278 | # order by media_id | |
2279 | self._order_test(sorted_media, "media_id") | |
2280 | self._order_test(sorted_media, "media_id", "f") | |
2281 | self._order_test(sorted_media_reverse, "media_id", "b") | |
2282 | ||
2283 | # order by upload_name | |
2284 | self._order_test([media3, media2, media1], "upload_name") | |
2285 | self._order_test([media3, media2, media1], "upload_name", "f") | |
2286 | self._order_test([media1, media2, media3], "upload_name", "b") | |
2287 | ||
2288 | # order by media_type | |
2289 | # results are ordered by media_id, | |
2290 | # because the uploaded media_type is always 'application/json' | |
2291 | self._order_test(sorted_media, "media_type") | |
2292 | self._order_test(sorted_media, "media_type", "f") | |
2293 | self._order_test(sorted_media, "media_type", "b") | |
2294 | ||
2295 | # order by media_length | |
2296 | self._order_test([media2, media3, media1], "media_length") | |
2297 | self._order_test([media2, media3, media1], "media_length", "f") | |
2298 | self._order_test([media1, media3, media2], "media_length", "b") | |
2299 | ||
2300 | # order by created_ts | |
2301 | self._order_test([media1, media2, media3], "created_ts") | |
2302 | self._order_test([media1, media2, media3], "created_ts", "f") | |
2303 | self._order_test([media3, media2, media1], "created_ts", "b") | |
2304 | ||
2305 | # order by last_access_ts | |
2306 | self._order_test([media1, media2, media3], "last_access_ts") | |
2307 | self._order_test([media1, media2, media3], "last_access_ts", "f") | |
2308 | self._order_test([media3, media2, media1], "last_access_ts", "b") | |
2309 | ||
2310 | # order by quarantined_by | |
2311 | # one media is in quarantine, others are ordered by media_ids | |
2312 | ||
2313 | # SQLite and PostgreSQL sort NULLs differently: | |
2314 | # if a media item is not in quarantine, `quarantined_by` is NULL. | |
2315 | # SQLite considers NULL to be smaller than any other value; | |
2316 | # PostgreSQL considers NULL to be larger than any other value. | |
2317 | ||
2318 | # self._order_test(sorted([media1, media2]) + [media3], "quarantined_by") | |
2319 | # self._order_test(sorted([media1, media2]) + [media3], "quarantined_by", "f") | |
2320 | # self._order_test([media3] + sorted([media1, media2]), "quarantined_by", "b") | |
2321 | ||
2322 | # order by safe_from_quarantine | |
2323 | # one media is safe from quarantine, others are ordered by media_ids | |
2324 | self._order_test(sorted([media1, media3]) + [media2], "safe_from_quarantine") | |
2325 | self._order_test( | |
2326 | sorted([media1, media3]) + [media2], "safe_from_quarantine", "f" | |
2327 | ) | |
2328 | self._order_test( | |
2329 | [media2] + sorted([media1, media3]), "safe_from_quarantine", "b" | |
2330 | ) | |
2331 | ||
2332 | def _create_media_for_user(self, user_token: str, number_media: int): | |
2210 | 2333 | """ |
2211 | 2334 | Create a number of media for a specific user |
2212 | """ | |
2213 | upload_resource = self.media_repo.children[b"upload"] | |
2335 | Args: | |
2336 | user_token: Access token of the user | |
2337 | number_media: Number of media to be created for the user | |
2338 | """ | |
2214 | 2339 | for i in range(number_media): |
2215 | 2340 | # file size is 67 Byte |
2216 | 2341 | image_data = unhexlify( |
2219 | 2344 | b"0a2db40000000049454e44ae426082" |
2220 | 2345 | ) |
2221 | 2346 | |
2222 | # Upload some media into the room | |
2223 | self.helper.upload_media( | |
2224 | upload_resource, image_data, tok=user_token, expect_code=200 | |
2225 | ) | |
2226 | ||
2227 | def _check_fields(self, content): | |
2228 | """Checks that all attributes are present in content""" | |
2347 | self._create_media_and_access(user_token, image_data) | |
2348 | ||
2349 | def _create_media_and_access( | |
2350 | self, | |
2351 | user_token: str, | |
2352 | image_data: bytes, | |
2353 | filename: str = "image1.png", | |
2354 | ) -> str: | |
2355 | """ | |
2356 | Create one media item for a specific user, access it, and return its `media_id` | |
2357 | Args: | |
2358 | user_token: Access token of the user | |
2359 | image_data: binary data of image | |
2360 | filename: The filename of the media to be uploaded | |
2361 | Returns: | |
2362 | The ID of the newly created media. | |
2363 | """ | |
2364 | upload_resource = self.media_repo.children[b"upload"] | |
2365 | download_resource = self.media_repo.children[b"download"] | |
2366 | ||
2367 | # Upload some media into the room | |
2368 | response = self.helper.upload_media( | |
2369 | upload_resource, image_data, user_token, filename, expect_code=200 | |
2370 | ) | |
2371 | ||
2372 | # Extract media ID from the response | |
2373 | server_and_media_id = response["content_uri"][6:] # Cut off 'mxc://' | |
2374 | media_id = server_and_media_id.split("/")[1] | |
2375 | ||
2376 | # Access the media once so that `last_access_ts` gets populated | |
2377 | channel = make_request( | |
2378 | self.reactor, | |
2379 | FakeSite(download_resource), | |
2380 | "GET", | |
2381 | server_and_media_id, | |
2382 | shorthand=False, | |
2383 | access_token=user_token, | |
2384 | ) | |
2385 | ||
2386 | self.assertEqual( | |
2387 | 200, | |
2388 | channel.code, | |
2389 | msg=( | |
2390 | "Expected to receive a 200 on accessing media: %s" % server_and_media_id | |
2391 | ), | |
2392 | ) | |
2393 | ||
2394 | return media_id | |
2395 | ||
2396 | def _check_fields(self, content: JsonDict): | |
2397 | """Checks that the expected user attributes are present in content | |
2398 | Args: | |
2399 | content: List that is checked for content | |
2400 | """ | |
2229 | 2401 | for m in content: |
2230 | 2402 | self.assertIn("media_id", m) |
2231 | 2403 | self.assertIn("media_type", m) |
2235 | 2407 | self.assertIn("last_access_ts", m) |
2236 | 2408 | self.assertIn("quarantined_by", m) |
2237 | 2409 | self.assertIn("safe_from_quarantine", m) |
2410 | ||
2411 | def _order_test( | |
2412 | self, | |
2413 | expected_media_list: List[str], | |
2414 | order_by: Optional[str], | |
2415 | dir: Optional[str] = None, | |
2416 | ): | |
2417 | """Request the list of media in a certain order. Assert that order is what | |
2418 | we expect. | |
2419 | Args: | |
2420 | expected_media_list: The list of media_ids in the order we expect to get | |
2421 | back from the server | |
2422 | order_by: The type of ordering to give the server | |
2423 | dir: The direction of ordering to give the server | |
2424 | """ | |
2425 | ||
2426 | url = self.url + "?" | |
2427 | if order_by is not None: | |
2428 | url += "order_by=%s&" % (order_by,) | |
2429 | if dir is not None and dir in ("b", "f"): | |
2430 | url += "dir=%s" % (dir,) | |
2431 | channel = self.make_request( | |
2432 | "GET", | |
2433 | url.encode("ascii"), | |
2434 | access_token=self.admin_user_tok, | |
2435 | ) | |
2436 | self.assertEqual(200, channel.code, msg=channel.json_body) | |
2437 | self.assertEqual(channel.json_body["total"], len(expected_media_list)) | |
2438 | ||
2439 | returned_order = [row["media_id"] for row in channel.json_body["media"]] | |
2440 | self.assertEqual(expected_media_list, returned_order) | |
2441 | self._check_fields(channel.json_body["media"]) | |
2238 | 2442 | |
2239 | 2443 | |
2240 | 2444 | class UserTokenRestTestCase(unittest.HomeserverTestCase): |
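
A hedged example of calling the admin endpoint these tests cover (the
homeserver URL and token are placeholders; order_by and dir are the query
parameters exercised by _order_test above):

    import json
    import urllib.parse
    import urllib.request

    user_id = urllib.parse.quote("@user:example.com", safe="")
    url = (
        "https://homeserver.example/_synapse/admin/v1/users/%s/media"
        "?order_by=media_length&dir=b&limit=10" % user_id
    )
    req = urllib.request.Request(
        url, headers={"Authorization": "Bearer <admin token>"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    print(body["total"], [m["media_id"] for m in body["media"]])
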
14 | 14 | |
15 | 15 | import time |
16 | 16 | import urllib.parse |
17 | from typing import Any, Dict, List, Union | |
17 | from typing import Any, Dict, List, Optional, Union | |
18 | 18 | from urllib.parse import urlencode |
19 | 19 | |
20 | 20 | from mock import Mock |
46 | 46 | HAS_JWT = False |
47 | 47 | |
48 | 48 | |
49 | # public_base_url used in some tests | |
50 | BASE_URL = "https://synapse/" | |
49 | # synapse server name: used to populate public_baseurl in some tests | |
50 | SYNAPSE_SERVER_PUBLIC_HOSTNAME = "synapse" | |
51 | ||
52 | # public_baseurl for some tests. It uses an http:// scheme because | |
53 | # FakeChannel.isSecure() returns False, so synapse will see the requested uri as | |
54 | # http://...; using http in the public_baseurl stops Synapse from trying to redirect to | |
55 | # https://.... | |
56 | BASE_URL = "http://%s/" % (SYNAPSE_SERVER_PUBLIC_HOSTNAME,) | |
51 | 57 | |
52 | 58 | # CAS server used in some tests |
53 | 59 | CAS_SERVER = "https://fake.test" |
479 | 485 | def test_multi_sso_redirect(self): |
480 | 486 | """/login/sso/redirect should redirect to an identity picker""" |
481 | 487 | # first hit the redirect url, which should redirect to our idp picker |
482 | channel = self.make_request( | |
483 | "GET", | |
484 | "/_matrix/client/r0/login/sso/redirect?redirectUrl=" | |
485 | + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL), | |
486 | ) | |
488 | channel = self._make_sso_redirect_request(False, None) | |
487 | 489 | self.assertEqual(channel.code, 302, channel.result) |
488 | 490 | uri = channel.headers.getRawHeaders("Location")[0] |
489 | 491 | |
519 | 521 | shorthand=False, |
520 | 522 | ) |
521 | 523 | self.assertEqual(channel.code, 302, channel.result) |
522 | cas_uri = channel.headers.getRawHeaders("Location")[0] | |
524 | location_headers = channel.headers.getRawHeaders("Location") | |
525 | assert location_headers | |
526 | cas_uri = location_headers[0] | |
523 | 527 | cas_uri_path, cas_uri_query = cas_uri.split("?", 1) |
524 | 528 | |
525 | 529 | # it should redirect us to the login page of the cas server |
542 | 546 | + "&idp=saml", |
543 | 547 | ) |
544 | 548 | self.assertEqual(channel.code, 302, channel.result) |
545 | saml_uri = channel.headers.getRawHeaders("Location")[0] | |
549 | location_headers = channel.headers.getRawHeaders("Location") | |
550 | assert location_headers | |
551 | saml_uri = location_headers[0] | |
546 | 552 | saml_uri_path, saml_uri_query = saml_uri.split("?", 1) |
547 | 553 | |
548 | 554 | # it should redirect us to the login page of the SAML server |
564 | 570 | + "&idp=oidc", |
565 | 571 | ) |
566 | 572 | self.assertEqual(channel.code, 302, channel.result) |
567 | oidc_uri = channel.headers.getRawHeaders("Location")[0] | |
573 | location_headers = channel.headers.getRawHeaders("Location") | |
574 | assert location_headers | |
575 | oidc_uri = location_headers[0] | |
568 | 576 | oidc_uri_path, oidc_uri_query = oidc_uri.split("?", 1) |
569 | 577 | |
570 | 578 | # it should redirect us to the auth page of the OIDC server |
571 | 579 | self.assertEqual(oidc_uri_path, TEST_OIDC_AUTH_ENDPOINT) |
572 | 580 | |
573 | 581 | # ... and should have set a cookie including the redirect url |
574 | cookies = dict( | |
575 | h.split(";")[0].split("=", maxsplit=1) | |
576 | for h in channel.headers.getRawHeaders("Set-Cookie") | |
577 | ) | |
582 | cookie_headers = channel.headers.getRawHeaders("Set-Cookie") | |
583 | assert cookie_headers | |
584 | cookies = {} # type: Dict[str, str] | |
585 | for h in cookie_headers: | |
586 | key, value = h.split(";")[0].split("=", maxsplit=1) | |
587 | cookies[key] = value | |
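The same Set-Cookie parsing could lean on the standard library; a small alternative sketch using `http.cookies.SimpleCookie` (not what the patch uses):

    from http.cookies import SimpleCookie

    cookies = {}
    for h in cookie_headers:
        jar = SimpleCookie()
        jar.load(h)  # parses one Set-Cookie header value, attributes included
        for key, morsel in jar.items():
            cookies[key] = morsel.value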
578 | 588 | |
579 | 589 | oidc_session_cookie = cookies["oidc_session"] |
580 | 590 | macaroon = pymacaroons.Macaroon.deserialize(oidc_session_cookie) |
587 | 597 | |
588 | 598 | # that should serve a confirmation page |
589 | 599 | self.assertEqual(channel.code, 200, channel.result) |
590 | self.assertTrue( | |
591 | channel.headers.getRawHeaders("Content-Type")[-1].startswith("text/html") | |
592 | ) | |
600 | content_type_headers = channel.headers.getRawHeaders("Content-Type") | |
601 | assert content_type_headers | |
602 | self.assertTrue(content_type_headers[-1].startswith("text/html")) | |
593 | 603 | p = TestHtmlParser() |
594 | 604 | p.feed(channel.text_body) |
595 | 605 | p.close() |
627 | 637 | |
628 | 638 | def test_client_idp_redirect_msc2858_disabled(self): |
629 | 639 | """If the client tries to pick an IdP but MSC2858 is disabled, return a 400""" |
630 | channel = self.make_request( | |
631 | "GET", | |
632 | "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/oidc?redirectUrl=" | |
633 | + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL), | |
634 | ) | |
640 | channel = self._make_sso_redirect_request(True, "oidc") | |
635 | 641 | self.assertEqual(channel.code, 400, channel.result) |
636 | 642 | self.assertEqual(channel.json_body["errcode"], "M_UNRECOGNIZED") |
637 | 643 | |
638 | 644 | @override_config({"experimental_features": {"msc2858_enabled": True}}) |
639 | 645 | def test_client_idp_redirect_to_unknown(self): |
640 | 646 | """If the client tries to pick an unknown IdP, return a 404""" |
641 | channel = self.make_request( | |
642 | "GET", | |
643 | "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/xxx?redirectUrl=" | |
644 | + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL), | |
645 | ) | |
647 | channel = self._make_sso_redirect_request(True, "xxx") | |
646 | 648 | self.assertEqual(channel.code, 404, channel.result) |
647 | 649 | self.assertEqual(channel.json_body["errcode"], "M_NOT_FOUND") |
648 | 650 | |
649 | 651 | @override_config({"experimental_features": {"msc2858_enabled": True}}) |
650 | 652 | def test_client_idp_redirect_to_oidc(self): |
651 | 653 | """If the client pick a known IdP, redirect to it""" |
652 | channel = self.make_request( | |
653 | "GET", | |
654 | "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/oidc?redirectUrl=" | |
655 | + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL), | |
656 | ) | |
657 | ||
654 | channel = self._make_sso_redirect_request(True, "oidc") | |
658 | 655 | self.assertEqual(channel.code, 302, channel.result) |
659 | 656 | oidc_uri = channel.headers.getRawHeaders("Location")[0] |
660 | 657 | oidc_uri_path, oidc_uri_query = oidc_uri.split("?", 1) |
661 | 658 | |
662 | 659 | # it should redirect us to the auth page of the OIDC server |
663 | 660 | self.assertEqual(oidc_uri_path, TEST_OIDC_AUTH_ENDPOINT) |
661 | ||
662 | def _make_sso_redirect_request( | |
663 | self, unstable_endpoint: bool = False, idp_prov: Optional[str] = None | |
664 | ): | |
665 | """Send a request to /_matrix/client/r0/login/sso/redirect | |
666 | ||
667 | ... or the unstable equivalent | |
668 | ||
669 | ... possibly specifying an IdP | |
670 | """ | |
671 | endpoint = ( | |
672 | "/_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect" | |
673 | if unstable_endpoint | |
674 | else "/_matrix/client/r0/login/sso/redirect" | |
675 | ) | |
676 | if idp_prov is not None: | |
677 | endpoint += "/" + idp_prov | |
678 | endpoint += "?redirectUrl=" + urllib.parse.quote_plus(TEST_CLIENT_REDIRECT_URL) | |
679 | ||
680 | return self.make_request( | |
681 | "GET", | |
682 | endpoint, | |
683 | custom_headers=[("Host", SYNAPSE_SERVER_PUBLIC_HOSTNAME)], | |
684 | ) | |
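For reference, the two endpoint shapes the helper builds (illustrative):

    # _make_sso_redirect_request(False, None):
    #   /_matrix/client/r0/login/sso/redirect?redirectUrl=<encoded url>
    # _make_sso_redirect_request(True, "oidc"):
    #   /_matrix/client/unstable/org.matrix.msc2858/login/sso/redirect/oidc?redirectUrl=<encoded url>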
664 | 685 | |
665 | 686 | @staticmethod |
666 | 687 | def _get_value_from_macaroon(macaroon: pymacaroons.Macaroon, key: str) -> str: |
792 | 813 | |
793 | 814 | self.assertEqual(channel.code, 302) |
794 | 815 | location_headers = channel.headers.getRawHeaders("Location") |
816 | assert location_headers | |
795 | 817 | self.assertEqual(location_headers[0][: len(redirect_url)], redirect_url) |
796 | 818 | |
797 | 819 | @override_config({"sso": {"client_whitelist": ["https://legit-site.com/"]}}) |
1234 | 1256 | |
1235 | 1257 | # that should redirect to the username picker |
1236 | 1258 | self.assertEqual(channel.code, 302, channel.result) |
1237 | picker_url = channel.headers.getRawHeaders("Location")[0] | |
1259 | location_headers = channel.headers.getRawHeaders("Location") | |
1260 | assert location_headers | |
1261 | picker_url = location_headers[0] | |
1238 | 1262 | self.assertEqual(picker_url, "/_synapse/client/pick_username/account_details") |
1239 | 1263 | |
1240 | 1264 | # ... with a username_mapping_session cookie |
1277 | 1301 | ) |
1278 | 1302 | self.assertEqual(chan.code, 302, chan.result) |
1279 | 1303 | location_headers = chan.headers.getRawHeaders("Location") |
1304 | assert location_headers | |
1280 | 1305 | |
1281 | 1306 | # send a request to the completion page, which should 302 to the client redirectUrl |
1282 | 1307 | chan = self.make_request( |
1286 | 1311 | ) |
1287 | 1312 | self.assertEqual(chan.code, 302, chan.result) |
1288 | 1313 | location_headers = chan.headers.getRawHeaders("Location") |
1314 | assert location_headers | |
1289 | 1315 | |
1290 | 1316 | # ensure that the returned location matches the requested redirect URL |
1291 | 1317 | path, query = location_headers[0].split("?", 1) |
541 | 541 | if client_redirect_url: |
542 | 542 | params["redirectUrl"] = client_redirect_url |
543 | 543 | |
544 | # hit the redirect url (which will issue a cookie and state) | |
544 | # hit the redirect url (which should redirect back to the redirect url). This | |
545 | # is the easiest way of figuring out what the Host header ought to be set to | |
546 | # in order to keep Synapse happy. | |
545 | 547 | channel = make_request( |
546 | 548 | self.hs.get_reactor(), |
547 | 549 | self.site, |
548 | 550 | "GET", |
549 | 551 | "/_matrix/client/r0/login/sso/redirect?" + urllib.parse.urlencode(params), |
552 | ) | |
553 | assert channel.code == 302 | |
554 | ||
555 | # hit the redirect url again with the right Host header, which should now issue | |
556 | # a cookie and redirect to the SSO provider. | |
557 | location = channel.headers.getRawHeaders("Location")[0] | |
558 | parts = urllib.parse.urlsplit(location) | |
559 | channel = make_request( | |
560 | self.hs.get_reactor(), | |
561 | self.site, | |
562 | "GET", | |
563 | urllib.parse.urlunsplit(("", "") + parts[2:]), | |
564 | custom_headers=[ | |
565 | ("Host", parts[1]), | |
566 | ], | |
550 | 567 | ) |
551 | 568 | |
552 | 569 | assert channel.code == 302 |
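The `urlsplit`/`urlunsplit` dance above replays the request against the same path while taking the Host header from the redirect target; how the pieces map (a stdlib illustration, not part of the patch):

    from urllib.parse import urlsplit, urlunsplit

    parts = urlsplit("http://synapse/_matrix/client/r0/login/sso/redirect?x=1")
    # SplitResult fields: (scheme, netloc, path, query, fragment)
    host = parts[1]                                    # "synapse" -> Host header
    path_and_query = urlunsplit(("", "") + parts[2:])
    # -> "/_matrix/client/r0/login/sso/redirect?x=1"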
160 | 160 | |
161 | 161 | def default_config(self): |
162 | 162 | config = super().default_config() |
163 | config["public_baseurl"] = "https://synapse.test" | |
163 | ||
164 | # public_baseurl uses an http:// scheme because FakeChannel.isSecure() returns | |
165 | # False, so synapse will see the requested uri as http://...; using http in | |
166 | # the public_baseurl stops Synapse trying to redirect to https. | |
167 | config["public_baseurl"] = "http://synapse.test" | |
164 | 168 | |
165 | 169 | if HAS_OIDC: |
166 | 170 | # we enable OIDC as a way of testing SSO flows |
53 | 53 | A room should show up in the shared list of rooms between two users |
54 | 54 | if it is public. |
55 | 55 | """ |
56 | u1 = self.register_user("user1", "pass") | |
57 | u1_token = self.login(u1, "pass") | |
58 | u2 = self.register_user("user2", "pass") | |
59 | u2_token = self.login(u2, "pass") | |
60 | ||
61 | room = self.helper.create_room_as(u1, is_public=True, tok=u1_token) | |
62 | self.helper.invite(room, src=u1, targ=u2, tok=u1_token) | |
63 | self.helper.join(room, user=u2, tok=u2_token) | |
64 | ||
65 | channel = self._get_shared_rooms(u1_token, u2) | |
66 | self.assertEquals(200, channel.code, channel.result) | |
67 | self.assertEquals(len(channel.json_body["joined"]), 1) | |
68 | self.assertEquals(channel.json_body["joined"][0], room) | |
56 | self._check_shared_rooms_with(room_one_is_public=True, room_two_is_public=True) | |
69 | 57 | |
70 | 58 | def test_shared_room_list_private(self): |
71 | 59 | """ |
72 | 60 | A room should show up in the shared list of rooms between two users |
73 | 61 | if it is private. |
74 | 62 | """ |
75 | u1 = self.register_user("user1", "pass") | |
76 | u1_token = self.login(u1, "pass") | |
77 | u2 = self.register_user("user2", "pass") | |
78 | u2_token = self.login(u2, "pass") | |
79 | ||
80 | room = self.helper.create_room_as(u1, is_public=False, tok=u1_token) | |
81 | self.helper.invite(room, src=u1, targ=u2, tok=u1_token) | |
82 | self.helper.join(room, user=u2, tok=u2_token) | |
83 | ||
84 | channel = self._get_shared_rooms(u1_token, u2) | |
85 | self.assertEquals(200, channel.code, channel.result) | |
86 | self.assertEquals(len(channel.json_body["joined"]), 1) | |
87 | self.assertEquals(channel.json_body["joined"][0], room) | |
63 | self._check_shared_rooms_with( | |
64 | room_one_is_public=False, room_two_is_public=False | |
65 | ) | |
88 | 66 | |
89 | 67 | def test_shared_room_list_mixed(self): |
90 | 68 | """ |
91 | 69 | The shared room list between two users should contain both public and private |
92 | 70 | rooms. |
93 | 71 | """ |
72 | self._check_shared_rooms_with(room_one_is_public=True, room_two_is_public=False) | |
73 | ||
74 | def _check_shared_rooms_with( | |
75 | self, room_one_is_public: bool, room_two_is_public: bool | |
76 | ): | |
77 | """Checks that shared public or private rooms between two users appear in | |
78 | their shared room lists | |
79 | """ | |
94 | 80 | u1 = self.register_user("user1", "pass") |
95 | 81 | u1_token = self.login(u1, "pass") |
96 | 82 | u2 = self.register_user("user2", "pass") |
97 | 83 | u2_token = self.login(u2, "pass") |
98 | 84 | |
99 | room_public = self.helper.create_room_as(u1, is_public=True, tok=u1_token) | |
100 | room_private = self.helper.create_room_as(u2, is_public=False, tok=u2_token) | |
101 | self.helper.invite(room_public, src=u1, targ=u2, tok=u1_token) | |
102 | self.helper.invite(room_private, src=u2, targ=u1, tok=u2_token) | |
103 | self.helper.join(room_public, user=u2, tok=u2_token) | |
104 | self.helper.join(room_private, user=u1, tok=u1_token) | |
85 | # Create a room. user1 invites user2, who joins | |
86 | room_id_one = self.helper.create_room_as( | |
87 | u1, is_public=room_one_is_public, tok=u1_token | |
88 | ) | |
89 | self.helper.invite(room_id_one, src=u1, targ=u2, tok=u1_token) | |
90 | self.helper.join(room_id_one, user=u2, tok=u2_token) | |
105 | 91 | |
92 | # Check shared rooms from user1's perspective. | |
93 | # We should see the one room in common | |
94 | channel = self._get_shared_rooms(u1_token, u2) | |
95 | self.assertEquals(200, channel.code, channel.result) | |
96 | self.assertEquals(len(channel.json_body["joined"]), 1) | |
97 | self.assertEquals(channel.json_body["joined"][0], room_id_one) | |
98 | ||
99 | # Create another room and invite user2 to it | |
100 | room_id_two = self.helper.create_room_as( | |
101 | u1, is_public=room_two_is_public, tok=u1_token | |
102 | ) | |
103 | self.helper.invite(room_id_two, src=u1, targ=u2, tok=u1_token) | |
104 | self.helper.join(room_id_two, user=u2, tok=u2_token) | |
105 | ||
106 | # Check shared rooms again. We should now see both rooms. | |
106 | 107 | channel = self._get_shared_rooms(u1_token, u2) |
107 | 108 | self.assertEquals(200, channel.code, channel.result) |
108 | 109 | self.assertEquals(len(channel.json_body["joined"]), 2) |
109 | self.assertTrue(room_public in channel.json_body["joined"]) | |
110 | self.assertTrue(room_private in channel.json_body["joined"]) | |
110 | for room_id in channel.json_body["joined"]: | |
111 | self.assertIn(room_id, [room_id_one, room_id_two]) | |
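Given the length assertion just above, the membership loop could equally be a single `assertCountEqual` (a hypothetical tightening, not what the patch does):

    self.assertCountEqual(channel.json_body["joined"], [room_id_one, room_id_two])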
111 | 112 | |
112 | 113 | def test_shared_room_list_after_leave(self): |
113 | 114 | """ |
131 | 132 | |
132 | 133 | self.helper.leave(room, user=u1, tok=u1_token) |
133 | 134 | |
135 | # Check user1's view of shared rooms with user2 | |
136 | channel = self._get_shared_rooms(u1_token, u2) | |
137 | self.assertEquals(200, channel.code, channel.result) | |
138 | self.assertEquals(len(channel.json_body["joined"]), 0) | |
139 | ||
140 | # Check user2's view of shared rooms with user1 | |
134 | 141 | channel = self._get_shared_rooms(u2_token, u1) |
135 | 142 | self.assertEquals(200, channel.code, channel.result) |
136 | 143 | self.assertEquals(len(channel.json_body["joined"]), 0) |
230 | 230 | |
231 | 231 | def prepare(self, reactor, clock, hs): |
232 | 232 | |
233 | self.media_repo = hs.get_media_repository_resource() | |
234 | self.download_resource = self.media_repo.children[b"download"] | |
235 | self.thumbnail_resource = self.media_repo.children[b"thumbnail"] | |
233 | media_resource = hs.get_media_repository_resource() | |
234 | self.download_resource = media_resource.children[b"download"] | |
235 | self.thumbnail_resource = media_resource.children[b"thumbnail"] | |
236 | self.store = hs.get_datastore() | |
237 | self.media_repo = hs.get_media_repository() | |
236 | 238 | |
237 | 239 | self.media_id = "example.com/12345" |
238 | 240 | |
355 | 357 | Override the config to generate only cropped thumbnails, but request a scaled one. |
356 | 358 | """ |
357 | 359 | self._test_thumbnail("scale", None, False) |
360 | ||
361 | def test_thumbnail_repeated_thumbnail(self): | |
362 | """Test that fetching the same thumbnail works, and deleting the on disk | |
363 | thumbnail regenerates it. | |
364 | """ | |
365 | self._test_thumbnail( | |
366 | "scale", self.test_image.expected_scaled, self.test_image.expected_found | |
367 | ) | |
368 | ||
369 | if not self.test_image.expected_found: | |
370 | return | |
371 | ||
372 | # Fetching again should work, without re-requesting the image from the | |
373 | # remote. | |
374 | params = "?width=32&height=32&method=scale" | |
375 | channel = make_request( | |
376 | self.reactor, | |
377 | FakeSite(self.thumbnail_resource), | |
378 | "GET", | |
379 | self.media_id + params, | |
380 | shorthand=False, | |
381 | await_result=False, | |
382 | ) | |
383 | self.pump() | |
384 | ||
385 | self.assertEqual(channel.code, 200) | |
386 | if self.test_image.expected_scaled: | |
387 | self.assertEqual( | |
388 | channel.result["body"], | |
389 | self.test_image.expected_scaled, | |
390 | channel.result["body"], | |
391 | ) | |
392 | ||
393 | # Deleting the thumbnail on disk and then re-requesting it should work, as | |
394 | # Synapse regenerates missing thumbnails. | |
395 | origin, media_id = self.media_id.split("/") | |
396 | info = self.get_success(self.store.get_cached_remote_media(origin, media_id)) | |
397 | file_id = info["filesystem_id"] | |
398 | ||
399 | thumbnail_dir = self.media_repo.filepaths.remote_media_thumbnail_dir( | |
400 | origin, file_id | |
401 | ) | |
402 | shutil.rmtree(thumbnail_dir, ignore_errors=True) | |
403 | ||
404 | channel = make_request( | |
405 | self.reactor, | |
406 | FakeSite(self.thumbnail_resource), | |
407 | "GET", | |
408 | self.media_id + params, | |
409 | shorthand=False, | |
410 | await_result=False, | |
411 | ) | |
412 | self.pump() | |
413 | ||
414 | self.assertEqual(channel.code, 200) | |
415 | if self.test_image.expected_scaled: | |
416 | self.assertEqual( | |
417 | channel.result["body"], | |
418 | self.test_image.expected_scaled, | |
419 | channel.result["body"], | |
420 | ) | |
358 | 421 | |
359 | 422 | def _test_thumbnail(self, method, expected_body, expected_found): |
360 | 423 | params = "?width=32&height=32&method=" + method |
123 | 123 | return address.IPv4Address("TCP", self._ip, 3423) |
124 | 124 | |
125 | 125 | def getHost(self): |
126 | return None | |
126 | # this is called by Request.__init__ to configure Request.host. | |
127 | return address.IPv4Address("TCP", "127.0.0.1", 8888) | |
128 | ||
129 | def isSecure(self): | |
130 | return False | |
127 | 131 | |
128 | 132 | @property |
129 | 133 | def transport(self): |
113 | 113 | "server_name": name, |
114 | 114 | "send_federation": False, |
115 | 115 | "media_store_path": "media", |
116 | "uploads_path": "uploads", | |
117 | 116 | # the test signing key is just an arbitrary ed25519 key to keep the config |
118 | 117 | # parser happy |
119 | 118 | "signing_key": "ed25519 a_lPym qvioDNmfExFBRPgdTU+wtFYKq4JfwFRv7sYVgWvmgJg", |