Update upstream source from tag 'upstream/1.36.0'
Update to upstream version '1.36.0'
with Debian dir 8af317fdf161708fb514a7208659a47682d1077f
Andrej Shadura
40 | 40 | - dockerhubuploadlatest: |
41 | 41 |     filters: |
42 | 42 |       branches: |
43 |         only: master | |
43 |         only: [ master, main ] | |
44 | 44 | |
45 | 45 | commands: |
46 | 46 |   docker_prepare: |
0 | name: Deploy the documentation | |
1 | ||
2 | on: | |
3 |   push: | |
4 |     branches: | |
5 |       - develop | |
6 | ||
7 |   workflow_dispatch: | |
8 | ||
9 | jobs: | |
10 |   pages: | |
11 |     name: GitHub Pages | |
12 |     runs-on: ubuntu-latest | |
13 |     steps: | |
14 |       - uses: actions/checkout@v2 | |
15 | ||
16 |       - name: Setup mdbook | |
17 |         uses: peaceiris/actions-mdbook@4b5ef36b314c2599664ca107bb8c02412548d79d # v1.1.14 | |
18 |         with: | |
19 |           mdbook-version: '0.4.9' | |
20 | ||
21 |       - name: Build the documentation | |
22 |         run: mdbook build | |
23 | ||
24 |       - name: Deploy latest documentation | |
25 |         uses: peaceiris/actions-gh-pages@068dc23d9710f1ba62e86896f84735d869951305 # v3.8.0 | |
26 |         with: | |
27 |           github_token: ${{ secrets.GITHUB_TOKEN }} | |
28 |           keep_files: true | |
29 |           publish_dir: ./book | |
30 |           destination_dir: ./develop |
33 | 33 | if: ${{ github.base_ref == 'develop' || contains(github.base_ref, 'release-') }} |
34 | 34 | runs-on: ubuntu-latest |
35 | 35 | steps: |
36 |   - uses: actions/checkout@v2 | |
36 |   # Note: This and the script can be simplified once we drop Buildkite. See: | |
37 |   # https://github.com/actions/checkout/issues/266#issuecomment-638346893 | |
38 |   # https://github.com/actions/checkout/issues/416 | |
39 |   - uses: actions/checkout@v2 | |
40 |     with: | |
41 |       ref: ${{ github.event.pull_request.head.sha }} | |
42 |       fetch-depth: 0 | |
37 | 43 |   - uses: actions/setup-python@v2
38 | 44 |   - run: pip install tox
39 | 45 |   - name: Patch Buildkite-specific test script
225 | 231 | - name: Run SyTest
226 | 232 |   run: /bootstrap.sh synapse
227 | 233 |   working-directory: /src
228 | - name: Dump results.tap | |
234 | - name: Summarise results.tap | |
229 | 235 |   if: ${{ always() }}
230 |   run: cat /logs/results.tap | |
236 |   run: /sytest/scripts/tap_to_gha.pl /logs/results.tap | |
231 | 237 | - name: Upload SyTest logs
232 | 238 |   uses: actions/upload-artifact@v2
233 | 239 |   if: ${{ always() }}
0 | Synapse 1.36.0 (2021-06-15) | |
1 | =========================== | |
2 | ||
3 | No significant changes. | |
4 | ||
5 | ||
6 | Synapse 1.36.0rc2 (2021-06-11) | |
7 | ============================== | |
8 | ||
9 | Bugfixes | |
10 | -------- | |
11 | ||
12 | - Fix a bug which caused presence updates to stop working some time after a restart, when using a presence writer worker. Broke in v1.33.0. ([\#10149](https://github.com/matrix-org/synapse/issues/10149)) | |
13 | - Fix a bug when using a federation sender worker where it would send out more presence updates than necessary, leading to high resource usage. Broke in v1.33.0. ([\#10163](https://github.com/matrix-org/synapse/issues/10163)) | |
14 | - Fix a bug where Synapse could send the same presence update to a remote twice. ([\#10165](https://github.com/matrix-org/synapse/issues/10165)) | |
15 | ||
16 | ||
17 | Synapse 1.36.0rc1 (2021-06-08) | |
18 | ============================== | |
19 | ||
20 | Features | |
21 | -------- | |
22 | ||
23 | - Add new endpoint `/_matrix/client/r0/rooms/{roomId}/aliases` from Client-Server API r0.6.1 (previously [MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432)). ([\#9224](https://github.com/matrix-org/synapse/issues/9224)) | |
24 | - Improve performance of incoming federation transactions in large rooms. ([\#9953](https://github.com/matrix-org/synapse/issues/9953), [\#9973](https://github.com/matrix-org/synapse/issues/9973)) | |
25 | - Rewrite logic around verifying JSON object and fetching server keys to be more performant and use less memory. ([\#10035](https://github.com/matrix-org/synapse/issues/10035)) | |
26 | - Add new admin APIs for unprotecting local media from quarantine. Contributed by @dklimpel. ([\#10040](https://github.com/matrix-org/synapse/issues/10040)) | |
27 | - Add new admin APIs to remove media by media ID from quarantine. Contributed by @dklimpel. ([\#10044](https://github.com/matrix-org/synapse/issues/10044)) | |
28 | - Make reason and score parameters optional for reporting content. Implements [MSC2414](https://github.com/matrix-org/matrix-doc/pull/2414). Contributed by Callum Brown. ([\#10077](https://github.com/matrix-org/synapse/issues/10077)) | |
29 | - Add support for routing more requests to workers. ([\#10084](https://github.com/matrix-org/synapse/issues/10084)) | |
30 | - Report OpenTracing spans for database activity. ([\#10113](https://github.com/matrix-org/synapse/issues/10113), [\#10136](https://github.com/matrix-org/synapse/issues/10136), [\#10141](https://github.com/matrix-org/synapse/issues/10141)) | |
31 | - Significantly reduce memory usage of joining large remote rooms. ([\#10117](https://github.com/matrix-org/synapse/issues/10117)) | |
32 | ||
33 | ||
34 | Bugfixes | |
35 | -------- | |
36 | ||
37 | - Fixed a bug causing replication requests to fail when receiving a lot of events via federation. ([\#10082](https://github.com/matrix-org/synapse/issues/10082)) | |
38 | - Fix a bug in the `force_tracing_for_users` option introduced in Synapse v1.35 which meant that the OpenTracing spans produced were missing most tags. ([\#10092](https://github.com/matrix-org/synapse/issues/10092)) | |
39 | - Fixed a bug that could cause Synapse to stop notifying application services. Contributed by Willem Mulder. ([\#10107](https://github.com/matrix-org/synapse/issues/10107)) | |
40 | - Fix bug where the server would attempt to fetch the same history in the room from a remote server multiple times in parallel. ([\#10116](https://github.com/matrix-org/synapse/issues/10116)) | |
41 | - Fix a bug introduced in Synapse 1.33.0 which caused replication requests to fail when receiving a lot of very large events via federation. ([\#10118](https://github.com/matrix-org/synapse/issues/10118)) | |
42 | - Fix bug when using workers where pagination requests failed if a remote server returned zero events from `/backfill`. Introduced in 1.35.0. ([\#10133](https://github.com/matrix-org/synapse/issues/10133)) | |
43 | ||
44 | ||
45 | Improved Documentation | |
46 | ---------------------- | |
47 | ||
48 | - Clarify security note regarding hosting Synapse on the same domain as other web applications. ([\#9221](https://github.com/matrix-org/synapse/issues/9221)) | |
49 | - Update CAPTCHA documentation to mention turning off the verify origin feature. Contributed by @aaronraimist. ([\#10046](https://github.com/matrix-org/synapse/issues/10046)) | |
50 | - Tweak wording of database recommendation in `INSTALL.md`. Contributed by @aaronraimist. ([\#10057](https://github.com/matrix-org/synapse/issues/10057)) | |
51 | - Add initial infrastructure for rendering Synapse documentation with mdbook. ([\#10086](https://github.com/matrix-org/synapse/issues/10086)) | |
52 | - Convert the remaining Admin API documentation files to markdown. ([\#10089](https://github.com/matrix-org/synapse/issues/10089)) | |
53 | - Make a link in docs use HTTPS. Contributed by @RhnSharma. ([\#10130](https://github.com/matrix-org/synapse/issues/10130)) | |
54 | - Fix broken link in Docker docs. ([\#10132](https://github.com/matrix-org/synapse/issues/10132)) | |
55 | ||
56 | ||
57 | Deprecations and Removals | |
58 | ------------------------- | |
59 | ||
60 | - Remove the experimental `spaces_enabled` flag. The spaces features are always available now. ([\#10063](https://github.com/matrix-org/synapse/issues/10063)) | |
61 | ||
62 | ||
63 | Internal Changes | |
64 | ---------------- | |
65 | ||
66 | - Tell CircleCI to build Docker images from `main` branch. ([\#9906](https://github.com/matrix-org/synapse/issues/9906)) | |
67 | - Simplify naming convention for release branches to only include the major and minor version numbers. ([\#10013](https://github.com/matrix-org/synapse/issues/10013)) | |
68 | - Add `parse_strings_from_args` for parsing an array from query parameters. ([\#10048](https://github.com/matrix-org/synapse/issues/10048), [\#10137](https://github.com/matrix-org/synapse/issues/10137)) | |
69 | - Remove some dead code regarding TLS certificate handling. ([\#10054](https://github.com/matrix-org/synapse/issues/10054)) | |
70 | - Remove redundant, unmaintained `convert_server_keys` script. ([\#10055](https://github.com/matrix-org/synapse/issues/10055)) | |
71 | - Improve the error message printed by synctl when synapse fails to start. ([\#10059](https://github.com/matrix-org/synapse/issues/10059)) | |
72 | - Fix GitHub Actions lint for newsfragments. ([\#10069](https://github.com/matrix-org/synapse/issues/10069)) | |
73 | - Update opentracing to inject the right context into the carrier. ([\#10074](https://github.com/matrix-org/synapse/issues/10074)) | |
74 | - Fix up `BatchingQueue` implementation. ([\#10078](https://github.com/matrix-org/synapse/issues/10078)) | |
75 | - Log method and path when dropping request due to size limit. ([\#10091](https://github.com/matrix-org/synapse/issues/10091)) | |
76 | - In GitHub Actions workflows, summarise the SyTest results in an easy-to-read format. ([\#10094](https://github.com/matrix-org/synapse/issues/10094)) | |
77 | - Make `/sync` do fewer state resolutions. ([\#10102](https://github.com/matrix-org/synapse/issues/10102)) | |
78 | - Add missing type hints to the admin API servlets. ([\#10105](https://github.com/matrix-org/synapse/issues/10105)) | |
79 | - Improve opentracing annotations for `Notifier`. ([\#10111](https://github.com/matrix-org/synapse/issues/10111)) | |
80 | - Enable Prometheus metrics for the jaeger client library. ([\#10112](https://github.com/matrix-org/synapse/issues/10112)) | |
81 | - Work to improve the responsiveness of `/sync` requests. ([\#10124](https://github.com/matrix-org/synapse/issues/10124)) | |
82 | - OpenTracing: use a consistent name for background processes. ([\#10135](https://github.com/matrix-org/synapse/issues/10135)) | |
83 | ||
84 | ||
0 | 85 | Synapse 1.35.1 (2021-06-03) |
1 | 86 | =========================== |
2 | 87 |
398 | 398 | |
399 | 399 | ### Using PostgreSQL |
400 | 400 | |
401 | By default Synapse uses [SQLite](https://sqlite.org/) and in doing so trades performance for convenience. | |
402 | SQLite is only recommended in Synapse for testing purposes or for servers with | |
403 | very light workloads. | |
404 | ||
405 | Almost all installations should opt to use [PostgreSQL](https://www.postgresql.org). Advantages include: | |
401 | By default Synapse uses an [SQLite](https://sqlite.org/) database and in doing so trades | |
402 | performance for convenience. Almost all installations should opt to use [PostgreSQL](https://www.postgresql.org) | |
403 | instead. Advantages include: | |
406 | 404 | |
407 | 405 | - significant performance improvements due to the superior threading and |
408 | 406 | caching model, smarter query optimiser |
410 | 408 | |
411 | 409 | For information on how to install and use PostgreSQL in Synapse, please see |
412 | 410 | [docs/postgres.md](docs/postgres.md) |
411 | ||
412 | SQLite is only acceptable for testing purposes and should not be used in | |
413 | a production server: Synapse will perform poorly when using | |
414 | SQLite, especially when participating in large rooms. | |
413 | 415 | |
414 | 416 | ### TLS certificates |
415 | 417 |
39 | 39 | exclude sytest-blacklist |
40 | 40 | exclude test_postgresql.sh |
41 | 41 | |
42 | include book.toml | |
42 | 43 | include pyproject.toml |
43 | 44 | recursive-include changelog.d * |
44 | 45 |
148 | 148 | automatically, please see `<docs/ACME.md>`_. |
149 | 149 | |
150 | 150 | |
151 | Security Note | |
151 | Security note | |
152 | 152 | ============= |
153 | 153 | |
154 | Matrix serves raw user generated data in some APIs - specifically the `content | |
155 | repository endpoints <https://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid>`_. | |
156 | ||
157 | Whilst we have tried to mitigate against possible XSS attacks (e.g. | |
158 | https://github.com/matrix-org/synapse/pull/1021) we recommend running | |
159 | matrix homeservers on a dedicated domain name, to limit any malicious user generated | |
160 | content served to web browsers a matrix API from being able to attack webapps hosted | |
161 | on the same domain. This is particularly true of sharing a matrix webclient and | |
162 | server on the same domain. | |
163 | ||
164 | See https://github.com/vector-im/riot-web/issues/1977 and | |
165 | https://developer.github.com/changes/2014-04-25-user-content-security for more details. | |
154 | Matrix serves raw, user-supplied data in some APIs -- specifically the `content | |
155 | repository endpoints`_. | |
156 | ||
157 | .. _content repository endpoints: https://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid | |
158 | ||
159 | Whilst we make a reasonable effort to mitigate against XSS attacks (for | |
160 | instance, by using `CSP`_), a Matrix homeserver should not be hosted on a | |
161 | domain hosting other web applications. This especially applies to sharing | |
162 | the domain with Matrix web clients and other sensitive applications like | |
163 | webmail. See | |
164 | https://developer.github.com/changes/2014-04-25-user-content-security for more | |
165 | information. | |
166 | ||
167 | .. _CSP: https://github.com/matrix-org/synapse/pull/1021 | |
168 | ||
169 | Ideally, the homeserver should not simply be on a different subdomain, but on | |
170 | a completely different `registered domain`_ (also known as top-level site or | |
171 | eTLD+1). This is because `some attacks`_ are still possible as long as the two | |
172 | applications share the same registered domain. | |
173 | ||
174 | .. _registered domain: https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-03#section-2.3 | |
175 | ||
176 | .. _some attacks: https://en.wikipedia.org/wiki/Session_fixation#Attacks_using_cross-subdomain_cookie | |
177 | ||
178 | To illustrate this with an example, if your Element Web or other sensitive web | |
179 | application is hosted on ``A.example1.com``, you should ideally host Synapse on | |
180 | ``example2.com``. Some amount of protection is offered by hosting on | |
181 | ``B.example1.com`` instead, so this is also acceptable in some scenarios. | |
182 | However, you should *not* host your Synapse on ``A.example1.com``. | |
183 | ||
184 | Note that all of the above refers exclusively to the domain used in Synapse's | |
185 | ``public_baseurl`` setting. In particular, it has no bearing on the domain | |
186 | mentioned in MXIDs hosted on that server. | |
187 | ||
188 | Following this advice ensures that even if an XSS is found in Synapse, the | |
189 | impact to other applications will be minimal. | |
166 | 190 | |
167 | 191 | |
168 | 192 | Upgrading an existing Synapse |
0 | # Documentation for possible options in this file is at | |
1 | # https://rust-lang.github.io/mdBook/format/config.html | |
2 | [book] | |
3 | title = "Synapse" | |
4 | authors = ["The Matrix.org Foundation C.I.C."] | |
5 | language = "en" | |
6 | multilingual = false | |
7 | ||
8 | # The directory that documentation files are stored in | |
9 | src = "docs" | |
10 | ||
11 | [build] | |
12 | # Prevent markdown pages from being automatically generated when they're | |
13 | # linked to in SUMMARY.md | |
14 | create-missing = false | |
15 | ||
16 | [output.html] | |
17 | # The URL visitors will be directed to when they try to edit a page | |
18 | edit-url-template = "https://github.com/matrix-org/synapse/edit/develop/{path}" | |
19 | ||
20 | # Remove the numbers that appear before each item in the sidebar, as they can | |
21 | # get quite messy as we nest deeper | |
22 | no-section-label = true | |
23 | ||
24 | # The source code URL of the repository | |
25 | git-repository-url = "https://github.com/matrix-org/synapse" | |
26 | ||
27 | # The path that the docs are hosted on | |
28 | site-url = "/synapse/" | |
29 | ||
30 | # Additional HTML, JS, CSS that's injected into each page of the book. | |
31 | # More information available in docs/website_files/README.md | |
32 | additional-css = [ | |
33 | "docs/website_files/table-of-contents.css", | |
34 | "docs/website_files/remove-nav-buttons.css", | |
35 | "docs/website_files/indent-section-headers.css", | |
36 | ] | |
37 | additional-js = ["docs/website_files/table-of-contents.js"] | |
38 | theme = "docs/website_files/theme"
225 | 225 | ## Using jemalloc |
226 | 226 | |
227 | 227 | Jemalloc is embedded in the image and will be used instead of the default allocator. |
228 | You can read about jemalloc by reading the Synapse [README](../README.md). | |
228 | You can read about jemalloc by reading the Synapse [README](../README.rst). |
0 | 0 | # Overview |
1 | Captcha can be enabled for this home server. This file explains how to do that. | |
2 | The captcha mechanism used is Google's ReCaptcha. This requires API keys from Google. | |
1 | A captcha can be enabled on your homeserver to help prevent bots from registering | |
2 | accounts. Synapse currently uses Google's reCAPTCHA service which requires API keys | |
3 | from Google. | |
3 | 4 | |
4 | ## Getting keys | |
5 | ## Getting API keys | |
5 | 6 | |
6 | Requires a site/secret key pair from: | |
7 | ||
8 | <https://developers.google.com/recaptcha/> | |
9 | ||
10 | Must be a reCAPTCHA v2 key using the "I'm not a robot" Checkbox option | |
11 | ||
12 | ## Setting ReCaptcha Keys | |
13 | ||
14 | The keys are a config option on the home server config. If they are not | |
15 | visible, you can generate them via `--generate-config`. Set the following value: | |
16 | ||
7 | 1. Create a new site at <https://www.google.com/recaptcha/admin/create> | |
8 | 1. Set the label to anything you want | |
9 | 1. Set the type to reCAPTCHA v2 using the "I'm not a robot" Checkbox option. | |
10 |    This is the only type of captcha that works with Synapse. | |
11 | 1. Add the public hostname for your server, as set in `public_baseurl` | |
12 |    in `homeserver.yaml`, to the list of authorized domains. If you have not set | |
13 |    `public_baseurl`, use `server_name`. | |
14 | 1. Agree to the terms of service and submit. | |
15 | 1. Copy your site key and secret key and add them to your `homeserver.yaml` | |
16 |    configuration file | |
17 |    ``` | |
17 | 18 |    recaptcha_public_key: YOUR_SITE_KEY
18 | 19 |    recaptcha_private_key: YOUR_SECRET_KEY
19 | ||
20 | In addition, you MUST enable captchas via: | |
21 | ||
20 |    ``` | |
21 | 1. Enable the CAPTCHA for new registrations | |
22 |    ``` | |
22 | 23 |    enable_registration_captcha: true
24 |    ``` | |
25 | 1. Go to the settings page for the CAPTCHA you just created | |
26 | 1. Uncheck the "Verify the origin of reCAPTCHA solutions" checkbox so that the | |
27 |    captcha can be displayed in any client. If you do not disable this option then you | |
28 |    must specify the domains of every client that is allowed to display the CAPTCHA. | |
23 | 29 | |
24 | 30 | ## Configuring IP used for auth |
25 | 31 | |
26 | The ReCaptcha API requires that the IP address of the user who solved the | |
27 | captcha is sent. If the client is connecting through a proxy or load balancer, | |
32 | The reCAPTCHA API requires that the IP address of the user who solved the | |
33 | CAPTCHA is sent. If the client is connecting through a proxy or load balancer, | |
28 | 34 | it may be required to use the `X-Forwarded-For` (XFF) header instead of the origin |
29 | 35 | IP address. This can be configured using the `x_forwarded` directive in the |
30 | listeners section of the homeserver.yaml configuration file. | |
36 | listeners section of the `homeserver.yaml` configuration file. |
0 | 0 | # Synapse Documentation |
1 | 1 | |
2 | This directory contains documentation specific to the `synapse` homeserver. | |
2 | **The documentation is currently hosted [here](https://matrix-org.github.io/synapse).** | |
3 | Please update any links to point to the new website instead. | |
3 | 4 | |
4 | All matrix-generic documentation now lives in its own project, located at [matrix-org/matrix-doc](https://github.com/matrix-org/matrix-doc) | |
5 | ## About | |
5 | 6 | |
6 | (Note: some items here may be moved to [matrix-org/matrix-doc](https://github.com/matrix-org/matrix-doc) at some point in the future.) | |
7 | This directory currently holds a series of markdown files documenting how to install, use | |
8 | and develop Synapse, the reference Matrix homeserver. The documentation is readable directly | |
9 | from this repository, but it is recommended to instead browse through the | |
10 | [website](https://matrix-org.github.io/synapse) for easier discoverability. | |
11 | ||
12 | ## Adding to the documentation | |
13 | ||
14 | Most of the documentation currently exists as top-level files; when organising them into | |
15 | a structured website, these files were kept in place so that existing links would not break. | |
16 | The rest of the documentation is stored in folders such as `setup`, `usage`, and | |
17 | `development`. **All new documentation files should be placed in structured folders.** For example: | |
18 | ||
19 | To create a new user-facing documentation page about a new Single Sign-On protocol named | |
20 | "MyCoolProtocol", one should create a new file with a relevant name, such as "my_cool_protocol.md". | |
21 | This file might fit into the documentation structure at: | |
22 | ||
23 | - Usage | |
24 |   - Configuration | |
25 |     - User Authentication | |
26 |       - Single Sign-On | |
27 |         - **My Cool Protocol** | |
28 | ||
29 | Given that, one would place the new file under | |
30 | `usage/configuration/user_authentication/single_sign_on/my_cool_protocol.md`. | |
31 | ||
32 | Note that the structure of the documentation (and thus the left sidebar on the website) is determined | |
33 | by the list in [SUMMARY.md](SUMMARY.md). The final thing to do when adding a new page is to add a new | |
34 | line linking to the new documentation file: | |
35 | ||
36 | ```markdown | |
37 | - [My Cool Protocol](usage/configuration/user_authentication/single_sign_on/my_cool_protocol.md) | |
38 | ``` | |
39 | ||
40 | ## Building the documentation | |
41 | ||
42 | The documentation is built with [mdbook](https://rust-lang.github.io/mdBook/), and the outline of the | |
43 | documentation is determined by the structure of [SUMMARY.md](SUMMARY.md). | |
44 | ||
45 | First, [get mdbook](https://github.com/rust-lang/mdBook#installation). Then, **from the root of the repository**, | |
46 | build the documentation with: | |
47 | ||
48 | ```sh | |
49 | mdbook build | |
50 | ``` | |
51 | ||
52 | The rendered contents will be output to a new `book/` directory at the root of the repository. You can | |
53 | browse the book by opening `book/index.html` in a web browser. | |
54 | ||
55 | You can also have mdbook host the docs on a local webserver with hot-reload functionality via: | |
56 | ||
57 | ```sh | |
58 | mdbook serve | |
59 | ``` | |
60 | ||
61 | The URL at which the docs can be viewed will be logged. | |
62 | ||
63 | ## Configuration and theming | |
64 | ||
65 | The look and behaviour of the website is configured by the [book.toml](../book.toml) file | |
66 | at the root of the repository. See | |
67 | [mdbook's documentation on configuration](https://rust-lang.github.io/mdBook/format/config.html) | |
68 | for available options. | |
69 | ||
70 | The site can be themed and additionally extended with extra UI and features. See | |
71 | [website_files/README.md](website_files/README.md) for details. |
0 | # Summary | |
1 | ||
2 | # Introduction | |
3 | - [Welcome and Overview](welcome_and_overview.md) | |
4 | ||
5 | # Setup | |
6 | - [Installation](setup/installation.md) | |
7 | - [Using Postgres](postgres.md) | |
8 | - [Configuring a Reverse Proxy](reverse_proxy.md) | |
9 | - [Configuring a Turn Server](turn-howto.md) | |
10 | - [Delegation](delegate.md) | |
11 | ||
12 | # Upgrading | |
13 | - [Upgrading between Synapse Versions](upgrading/README.md) | |
14 | - [Upgrading from pre-Synapse 1.0](MSC1711_certificates_FAQ.md) | |
15 | ||
16 | # Usage | |
17 | - [Federation](federate.md) | |
18 | - [Configuration](usage/configuration/README.md) | |
19 |   - [Homeserver Sample Config File](usage/configuration/homeserver_sample_config.md) | |
20 |   - [Logging Sample Config File](usage/configuration/logging_sample_config.md) | |
21 |   - [Structured Logging](structured_logging.md) | |
22 |   - [User Authentication](usage/configuration/user_authentication/README.md) | |
23 |     - [Single Sign-On]() | |
24 |       - [OpenID Connect](openid.md) | |
25 |       - [SAML]() | |
26 |       - [CAS]() | |
27 |       - [SSO Mapping Providers](sso_mapping_providers.md) | |
28 |     - [Password Auth Providers](password_auth_providers.md) | |
29 |     - [JSON Web Tokens](jwt.md) | |
30 |   - [Registration Captcha](CAPTCHA_SETUP.md) | |
31 |   - [Application Services](application_services.md) | |
32 |   - [Server Notices](server_notices.md) | |
33 |   - [Consent Tracking](consent_tracking.md) | |
34 |   - [URL Previews](url_previews.md) | |
35 |   - [User Directory](user_directory.md) | |
36 |   - [Message Retention Policies](message_retention_policies.md) | |
37 |   - [Pluggable Modules]() | |
38 |     - [Third Party Rules]() | |
39 |     - [Spam Checker](spam_checker.md) | |
40 |     - [Presence Router](presence_router_module.md) | |
41 |     - [Media Storage Providers]() | |
42 |   - [Workers](workers.md) | |
43 |     - [Using `synctl` with Workers](synctl_workers.md) | |
44 |     - [Systemd](systemd-with-workers/README.md) | |
45 | - [Administration](usage/administration/README.md) | |
46 |   - [Admin API](usage/administration/admin_api/README.md) | |
47 |     - [Account Validity](admin_api/account_validity.md) | |
48 |     - [Delete Group](admin_api/delete_group.md) | |
49 |     - [Event Reports](admin_api/event_reports.md) | |
50 |     - [Media](admin_api/media_admin_api.md) | |
51 |     - [Purge History](admin_api/purge_history_api.md) | |
52 |     - [Purge Rooms](admin_api/purge_room.md) | |
53 |     - [Register Users](admin_api/register_api.md) | |
54 |     - [Manipulate Room Membership](admin_api/room_membership.md) | |
55 |     - [Rooms](admin_api/rooms.md) | |
56 |     - [Server Notices](admin_api/server_notices.md) | |
57 |     - [Shutdown Room](admin_api/shutdown_room.md) | |
58 |     - [Statistics](admin_api/statistics.md) | |
59 |     - [Users](admin_api/user_admin_api.md) | |
60 |     - [Server Version](admin_api/version_api.md) | |
61 |   - [Manhole](manhole.md) | |
62 |   - [Monitoring](metrics-howto.md) | |
63 |   - [Scripts]() | |
64 | ||
65 | # Development | |
66 | - [Contributing Guide](development/contributing_guide.md) | |
67 | - [Code Style](code_style.md) | |
68 | - [Git Usage](dev/git.md) | |
69 | - [Testing]() | |
70 | - [OpenTracing](opentracing.md) | |
71 | - [Synapse Architecture]() | |
72 |   - [Log Contexts](log_contexts.md) | |
73 |   - [Replication](replication.md) | |
74 |   - [TCP Replication](tcp_replication.md) | |
75 | - [Internal Documentation](development/internal_documentation/README.md) | |
76 |   - [Single Sign-On]() | |
77 |     - [SAML](dev/saml.md) | |
78 |     - [CAS](dev/cas.md) | |
79 |   - [State Resolution]() | |
80 |     - [The Auth Chain Difference Algorithm](auth_chain_difference_algorithm.md) | |
81 |   - [Media Repository](media_repository.md) | |
82 |   - [Room and User Statistics](room_and_user_statistics.md) | |
83 |   - [Scripts]() | |
84 | ||
85 | # Other | |
86 | - [Dependency Deprecation Policy](deprecation_policy.md)
0 | 0 | Admin APIs |
1 | 1 | ========== |
2 | 2 | |
3 | **Note**: The latest documentation can be viewed `here <https://matrix-org.github.io/synapse>`_. | |
4 | See `docs/README.md <../docs/README.md>`_ for more information. | |
5 | ||
6 | **Please update links to point to the website instead.** Existing files in this directory | |
7 | are preserved to maintain historical links, but may be moved in the future. | |
8 | ||
3 | 9 | This directory includes documentation for the various Synapse-specific admin
4 | APIs available. | |
10 | APIs available. Updates to the existing Admin API documentation should still | |
11 | be made to these files, but any new documentation files should instead be placed under | |
12 | `docs/usage/administration/admin_api <../docs/usage/administration/admin_api>`_. | |
5 | 13 | |
6 | Authenticating as a server admin | |
7 | -------------------------------- | |
8 | ||
9 | Many of the API calls in the admin api will require an `access_token` for a | |
10 | server admin. (Note that a server admin is distinct from a room admin.) | |
11 | ||
12 | A user can be marked as a server admin by updating the database directly, e.g.: | |
13 | ||
14 | .. code-block:: sql | |
15 | ||
16 | UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'; | |
17 | ||
18 | A new server admin user can also be created using the | |
19 | ``register_new_matrix_user`` script. | |
20 | ||
21 | Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings. | |
22 | ||
23 | Once you have your `access_token`, to include it in a request, the best option is to add the token to a request header: | |
24 | ||
25 | ``curl --header "Authorization: Bearer <access_token>" <the_rest_of_your_API_request>`` | |
26 | ||
27 | For more details, please refer to the complete `matrix spec documentation <https://matrix.org/docs/spec/client_server/r0.5.0#using-access-tokens>`_.
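For illustration, the same header can be sent from Python: a minimal sketch
using the third-party ``requests`` library against the Server Version admin
API (the homeserver URL and token below are placeholders, not part of the API):

.. code-block:: python

    import requests

    # Any admin endpoint accepts the token via the Authorization header.
    resp = requests.get(
        "https://matrix.example.com/_synapse/admin/v1/server_version",
        headers={"Authorization": "Bearer <access_token>"},
    )
    print(resp.json())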
0 | # Account validity API | |
1 | ||
2 | This API allows a server administrator to manage the validity of an account. To | |
3 | use it, you must enable the account validity feature (under | |
4 | `account_validity`) in Synapse's configuration. | |
5 | ||
6 | ## Renew account | |
7 | ||
8 | This API extends the validity of an account by as much time as configured in the | |
9 | `period` parameter from the `account_validity` configuration. | |
10 | ||
11 | The API is: | |
12 | ||
13 | ``` | |
14 | POST /_synapse/admin/v1/account_validity/validity | |
15 | ``` | |
16 | ||
17 | with the following body: | |
18 | ||
19 | ```json | |
20 | { | |
21 | "user_id": "<user ID for the account to renew>", | |
22 | "expiration_ts": 0, | |
23 | "enable_renewal_emails": true | |
24 | } | |
25 | ``` | |
26 | ||
27 | ||
28 | `expiration_ts` is an optional parameter and overrides the expiration date, | |
29 | which otherwise defaults to now + validity period. | |
30 | ||
31 | `enable_renewal_emails` is also an optional parameter and enables/disables | |
32 | sending renewal emails to the user. Defaults to true. | |
33 | ||
34 | The API returns with the new expiration date for this account, as a timestamp in | |
35 | milliseconds since epoch: | |
36 | ||
37 | ```json | |
38 | { | |
39 | "expiration_ts": 0 | |
40 | } | |
41 | ``` |
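For illustration, a minimal sketch of calling this endpoint from Python with
the third-party `requests` library; the homeserver URL, admin token, and user
ID are placeholders, not part of the API:

```python
import requests

BASE_URL = "https://matrix.example.com"               # placeholder homeserver
HEADERS = {"Authorization": "Bearer <access_token>"}  # admin access token

# Renew the account; omitting `expiration_ts` defaults to now + validity period.
resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/account_validity/validity",
    headers=HEADERS,
    json={"user_id": "@user:example.com", "enable_renewal_emails": True},
)
resp.raise_for_status()
print(resp.json()["expiration_ts"])  # new expiration, in ms since epoch
```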
0 | Account validity API | |
1 | ==================== | |
2 | ||
3 | This API allows a server administrator to manage the validity of an account. To | |
4 | use it, you must enable the account validity feature (under | |
5 | ``account_validity``) in Synapse's configuration. | |
6 | ||
7 | Renew account | |
8 | ------------- | |
9 | ||
10 | This API extends the validity of an account by as much time as configured in the | |
11 | ``period`` parameter from the ``account_validity`` configuration. | |
12 | ||
13 | The API is:: | |
14 | ||
15 | POST /_synapse/admin/v1/account_validity/validity | |
16 | ||
17 | with the following body: | |
18 | ||
19 | .. code:: json | |
20 | ||
21 | { | |
22 | "user_id": "<user ID for the account to renew>", | |
23 | "expiration_ts": 0, | |
24 | "enable_renewal_emails": true | |
25 | } | |
26 | ||
27 | ||
28 | ``expiration_ts`` is an optional parameter and overrides the expiration date, | |
29 | which otherwise defaults to now + validity period. | |
30 | ||
31 | ``enable_renewal_emails`` is also an optional parameter and enables/disables | |
32 | sending renewal emails to the user. Defaults to true. | |
33 | ||
34 | The API returns with the new expiration date for this account, as a timestamp in | |
35 | milliseconds since epoch: | |
36 | ||
37 | .. code:: json | |
38 | ||
39 | { | |
40 | "expiration_ts": 0 | |
41 | } |
10 | 10 | ``` |
11 | 11 | |
12 | 12 | To use it, you will need to authenticate by providing an `access_token` for a |
13 | server admin: see [README.rst](README.rst). | |
13 | server admin: see [Admin API](../../usage/administration/admin_api). |
6 | 6 | GET /_synapse/admin/v1/event_reports?from=0&limit=10 |
7 | 7 | ``` |
8 | 8 | To use it, you will need to authenticate by providing an `access_token` for a |
9 | server admin: see [README.rst](README.rst). | |
9 | server admin: see [Admin API](../../usage/administration/admin_api). | |
10 | 10 | |
11 | 11 | It returns a JSON body like the following: |
12 | 12 | |
74 | 74 | * `name`: string - The name of the room. |
75 | 75 | * `event_id`: string - The ID of the reported event. |
76 | 76 | * `user_id`: string - This is the user who reported the event and wrote the reason. |
77 | * `reason`: string - Comment made by the `user_id` in this report. May be blank. | |
77 | * `reason`: string - Comment made by the `user_id` in this report. May be blank or `null`. | |
78 | 78 | * `score`: integer - Content is reported based upon a negative score, where -100 is |
79 | "most offensive" and 0 is "inoffensive". | |
79 | "most offensive" and 0 is "inoffensive". May be `null`. | |
80 | 80 | * `sender`: string - This is the ID of the user who sent the original message/event that |
81 | 81 | was reported. |
82 | 82 | * `canonical_alias`: string - The canonical alias of the room. `null` if the room does not |
94 | 94 | GET /_synapse/admin/v1/event_reports/<report_id> |
95 | 95 | ``` |
96 | 96 | To use it, you will need to authenticate by providing an `access_token` for a |
97 | server admin: see [README.rst](README.rst). | |
97 | server admin: see [Admin API](../../usage/administration/admin_api). | |
98 | 98 | |
99 | 99 | It returns a JSON body like the following: |
100 | 100 |
3 | 3 |   * [List all media uploaded by a user](#list-all-media-uploaded-by-a-user)
4 | 4 | - [Quarantine media](#quarantine-media)
5 | 5 |   * [Quarantining media by ID](#quarantining-media-by-id)
6 |   * [Remove media from quarantine by ID](#remove-media-from-quarantine-by-id) | |
6 | 7 |   * [Quarantining media in a room](#quarantining-media-in-a-room)
7 | 8 |   * [Quarantining all media of a user](#quarantining-all-media-of-a-user)
8 | 9 |   * [Protecting media from being quarantined](#protecting-media-from-being-quarantined)
10 |   * [Unprotecting media from being quarantined](#unprotecting-media-from-being-quarantined) | |
9 | 11 | - [Delete local media](#delete-local-media)
10 | 12 |   * [Delete a specific local media](#delete-a-specific-local-media)
11 | 13 |   * [Delete local media by date or size](#delete-local-media-by-date-or-size)
25 | 27 | GET /_synapse/admin/v1/room/<room_id>/media |
26 | 28 | ``` |
27 | 29 | To use it, you will need to authenticate by providing an `access_token` for a |
28 | server admin: see [README.rst](README.rst). | |
30 | server admin: see [Admin API](../../usage/administration/admin_api). | |
29 | 31 | |
30 | 32 | The API returns a JSON body like the following: |
31 | 33 | ```json |
75 | 77 | {} |
76 | 78 | ``` |
77 | 79 | |
80 | ## Remove media from quarantine by ID | |
81 | ||
82 | This API removes a single piece of local or remote media from quarantine. | |
83 | ||
84 | Request: | |
85 | ||
86 | ``` | |
87 | POST /_synapse/admin/v1/media/unquarantine/<server_name>/<media_id> | |
88 | ||
89 | {} | |
90 | ``` | |
91 | ||
92 | Where `server_name` is in the form of `example.org`, and `media_id` is in the | |
93 | form of `abcdefg12345...`. | |
94 | ||
95 | Response: | |
96 | ||
97 | ```json | |
98 | {} | |
99 | ``` | |
100 | ||
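As an illustration, the unquarantine request above could be issued from Python
(a sketch with the third-party `requests` library; the URL, token, server name
and media ID are placeholders):

```python
import requests

BASE_URL = "https://matrix.example.com"               # placeholder homeserver
HEADERS = {"Authorization": "Bearer <access_token>"}  # admin access token

resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/media/unquarantine/example.org/abcdefg12345",
    headers=HEADERS,
    json={},  # the endpoint takes an empty JSON body
)
resp.raise_for_status()  # an empty JSON object ({}) indicates success
```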
78 | 101 | ## Quarantining media in a room |
79 | 102 | |
80 | 103 | This API quarantines all local and remote media in a room. |
146 | 169 | |
147 | 170 | ``` |
148 | 171 | POST /_synapse/admin/v1/media/protect/<media_id> |
172 | ||
173 | {} | |
174 | ``` | |
175 | ||
176 | Where `media_id` is in the form of `abcdefg12345...`. | |
177 | ||
178 | Response: | |
179 | ||
180 | ```json | |
181 | {} | |
182 | ``` | |
183 | ||
184 | ## Unprotecting media from being quarantined | |
185 | ||
186 | This API reverts the protection of a media item. | |
187 | ||
188 | Request: | |
189 | ||
190 | ``` | |
191 | POST /_synapse/admin/v1/media/unprotect/<media_id> | |
149 | 192 | |
150 | 193 | {} |
151 | 194 | ``` |
267 | 310 | * `deleted`: integer - The number of media items successfully deleted |
268 | 311 | |
269 | 312 | To use it, you will need to authenticate by providing an `access_token` for a |
270 | server admin: see [README.rst](README.rst). | |
313 | server admin: see [Admin API](../../usage/administration/admin_api). | |
271 | 314 | |
272 | 315 | If the user re-requests purged remote media, synapse will re-request the media |
273 | 316 | from the originating server. |
0 | # Purge History API | |
1 | ||
2 | The purge history API allows server admins to purge historic events from their | |
3 | database, reclaiming disk space. | |
4 | ||
5 | Depending on the amount of history being purged a call to the API may take | |
6 | several minutes or longer. During this period users will not be able to | |
7 | paginate further back in the room from the point being purged from. | |
8 | ||
9 | Note that Synapse requires at least one message in each room, so it will never | |
10 | delete the last message in a room. | |
11 | ||
12 | The API is: | |
13 | ||
14 | ``` | |
15 | POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>] | |
16 | ``` | |
17 | ||
18 | To use it, you will need to authenticate by providing an `access_token` for a | |
19 | server admin: [Admin API](../../usage/administration/admin_api) | |
20 | ||
21 | By default, events sent by local users are not deleted, as they may represent | |
22 | the only copies of this content in existence. (Events sent by remote users are | |
23 | deleted.) | |
24 | ||
25 | Room state data (such as joins, leaves, topic) is always preserved. | |
26 | ||
27 | To delete local message events as well, set `delete_local_events` in the body: | |
28 | ||
29 | ``` | |
30 | { | |
31 | "delete_local_events": true | |
32 | } | |
33 | ``` | |
34 | ||
35 | The caller must specify the point in the room to purge up to. This can be | |
36 | specified by including an event_id in the URI, or by setting a | |
37 | `purge_up_to_event_id` or `purge_up_to_ts` in the request body. If an event | |
38 | id is given, that event (and others at the same graph depth) will be retained. | |
39 | If `purge_up_to_ts` is given, it should be a timestamp since the unix epoch, | |
40 | in milliseconds. | |
41 | ||
42 | The API starts the purge running, and returns immediately with a JSON body with | |
43 | a purge id: | |
44 | ||
45 | ```json | |
46 | { | |
47 | "purge_id": "<opaque id>" | |
48 | } | |
49 | ``` | |
50 | ||
51 | ## Purge status query | |
52 | ||
53 | It is possible to poll for updates on recent purges with a second API: | |
54 | ||
55 | ``` | |
56 | GET /_synapse/admin/v1/purge_history_status/<purge_id> | |
57 | ``` | |
58 | ||
59 | Again, you will need to authenticate by providing an `access_token` for a | |
60 | server admin. | |
61 | ||
62 | This API returns a JSON body like the following: | |
63 | ||
64 | ```json | |
65 | { | |
66 | "status": "active" | |
67 | } | |
68 | ``` | |
69 | ||
70 | The status will be one of `active`, `complete`, or `failed`. | |
71 | ||
72 | ## Reclaim disk space (Postgres) | |
73 | ||
74 | To reclaim the disk space and return it to the operating system, you need to run | |
75 | `VACUUM FULL;` on the database. | |
76 | ||
77 | <https://www.postgresql.org/docs/current/sql-vacuum.html> |
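To make the flow concrete, here is a hedged sketch of starting a purge and
then polling its status from Python with the third-party `requests` library
(the homeserver URL, token, room ID and timestamp are placeholders):

```python
import time

import requests

BASE_URL = "https://matrix.example.com"               # placeholder homeserver
HEADERS = {"Authorization": "Bearer <access_token>"}  # admin access token
room_id = "!room:example.com"

# Start the purge, keeping events newer than the given timestamp (ms since epoch).
resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/purge_history/{room_id}",
    headers=HEADERS,
    json={"delete_local_events": False, "purge_up_to_ts": 1417222374433},
)
purge_id = resp.json()["purge_id"]

# Poll the status endpoint until the purge is no longer active.
while True:
    status = requests.get(
        f"{BASE_URL}/_synapse/admin/v1/purge_history_status/{purge_id}",
        headers=HEADERS,
    ).json()["status"]
    if status in ("complete", "failed"):
        break
    time.sleep(5)
print(status)
```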
0 | Purge History API | |
1 | ================= | |
2 | ||
3 | The purge history API allows server admins to purge historic events from their | |
4 | database, reclaiming disk space. | |
5 | ||
6 | Depending on the amount of history being purged a call to the API may take | |
7 | several minutes or longer. During this period users will not be able to | |
8 | paginate further back in the room from the point being purged from. | |
9 | ||
10 | Note that Synapse requires at least one message in each room, so it will never | |
11 | delete the last message in a room. | |
12 | ||
13 | The API is: | |
14 | ||
15 | ``POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]`` | |
16 | ||
17 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
18 | server admin: see `README.rst <README.rst>`_. | |
19 | ||
20 | By default, events sent by local users are not deleted, as they may represent | |
21 | the only copies of this content in existence. (Events sent by remote users are | |
22 | deleted.) | |
23 | ||
24 | Room state data (such as joins, leaves, topic) is always preserved. | |
25 | ||
26 | To delete local message events as well, set ``delete_local_events`` in the body: | |
27 | ||
28 | .. code:: json | |
29 | ||
30 | { | |
31 | "delete_local_events": true | |
32 | } | |
33 | ||
34 | The caller must specify the point in the room to purge up to. This can be | |
35 | specified by including an event_id in the URI, or by setting a | |
36 | ``purge_up_to_event_id`` or ``purge_up_to_ts`` in the request body. If an event | |
37 | id is given, that event (and others at the same graph depth) will be retained. | |
38 | If ``purge_up_to_ts`` is given, it should be a timestamp since the unix epoch, | |
39 | in milliseconds. | |
40 | ||
41 | The API starts the purge running, and returns immediately with a JSON body with | |
42 | a purge id: | |
43 | ||
44 | .. code:: json | |
45 | ||
46 | { | |
47 | "purge_id": "<opaque id>" | |
48 | } | |
49 | ||
50 | Purge status query | |
51 | ------------------ | |
52 | ||
53 | It is possible to poll for updates on recent purges with a second API; | |
54 | ||
55 | ``GET /_synapse/admin/v1/purge_history_status/<purge_id>`` | |
56 | ||
57 | Again, you will need to authenticate by providing an ``access_token`` for a | |
58 | server admin. | |
59 | ||
60 | This API returns a JSON body like the following: | |
61 | ||
62 | .. code:: json | |
63 | ||
64 | { | |
65 | "status": "active" | |
66 | } | |
67 | ||
68 | The status will be one of ``active``, ``complete``, or ``failed``. | |
69 | ||
70 | Reclaim disk space (Postgres) | |
71 | ----------------------------- | |
72 | ||
73 | To reclaim the disk space and return it to the operating system, you need to run | |
74 | `VACUUM FULL;` on the database. | |
75 | ||
76 | https://www.postgresql.org/docs/current/sql-vacuum.html |
0 | # Shared-Secret Registration | |
1 | ||
2 | This API allows for the creation of users in an administrative and | |
3 | non-interactive way. This is generally used for bootstrapping a Synapse | |
4 | instance with administrator accounts. | |
5 | ||
6 | To authenticate yourself to the server, you will need both the shared secret | |
7 | (`registration_shared_secret` in the homeserver configuration), and a | |
8 | one-time nonce. If the registration shared secret is not configured, this API | |
9 | is not enabled. | |
10 | ||
11 | To fetch the nonce, you need to request one from the API: | |
12 | ||
13 | ``` | |
14 | > GET /_synapse/admin/v1/register | |
15 | ||
16 | < {"nonce": "thisisanonce"} | |
17 | ``` | |
18 | ||
19 | Once you have the nonce, you can make a `POST` to the same URL with a JSON | |
20 | body containing the nonce, username, password, whether they are an admin | |
21 | (optional, False by default), and an HMAC digest of the content. You can also | |
22 | set the displayname (optional, `username` by default). | |
23 | ||
24 | As an example: | |
25 | ||
26 | ``` | |
27 | > POST /_synapse/admin/v1/register | |
28 | > { | |
29 |     "nonce": "thisisanonce", | |
30 |     "username": "pepper_roni", | |
31 |     "displayname": "Pepper Roni", | |
32 |     "password": "pizza", | |
33 |     "admin": true, | |
34 |     "mac": "mac_digest_here" | |
35 |   } | |
36 | ||
37 | < { | |
38 |     "access_token": "token_here", | |
39 |     "user_id": "@pepper_roni:localhost", | |
40 |     "home_server": "test", | |
41 |     "device_id": "device_id_here" | |
42 |   } | |
43 | ``` | |
44 | ||
45 | The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being | |
46 | the shared secret and the content being the nonce, user, password, either the | |
47 | string "admin" or "notadmin", and optionally the user_type, | |
48 | each separated by NULs. For an example of generation in Python: | |
49 | ||
50 | ```python | |
51 | import hmac, hashlib | |
52 | ||
53 | def generate_mac(nonce, user, password, admin=False, user_type=None): | |
54 | ||
55 |     mac = hmac.new( | |
56 |         key=shared_secret,  # the registration shared secret, as bytes | |
57 |         digestmod=hashlib.sha1, | |
58 |     ) | |
59 | ||
60 |     mac.update(nonce.encode('utf8')) | |
61 |     mac.update(b"\x00") | |
62 |     mac.update(user.encode('utf8')) | |
63 |     mac.update(b"\x00") | |
64 |     mac.update(password.encode('utf8')) | |
65 |     mac.update(b"\x00") | |
66 |     mac.update(b"admin" if admin else b"notadmin") | |
67 |     if user_type: | |
68 |         mac.update(b"\x00") | |
69 |         mac.update(user_type.encode('utf8')) | |
70 | ||
71 |     return mac.hexdigest() | |
72 | ```
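Putting the pieces together, a hedged sketch of the full flow: fetch a nonce,
compute the MAC with `generate_mac()` above, then register, using the
third-party `requests` library (the homeserver URL and secret are placeholders):

```python
import requests

BASE_URL = "https://matrix.example.com"          # placeholder homeserver
shared_secret = b"<registration_shared_secret>"  # from homeserver.yaml, as bytes

nonce = requests.get(f"{BASE_URL}/_synapse/admin/v1/register").json()["nonce"]

resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/register",
    json={
        "nonce": nonce,
        "username": "pepper_roni",
        "displayname": "Pepper Roni",
        "password": "pizza",
        "admin": True,
        "mac": generate_mac(nonce, "pepper_roni", "pizza", admin=True),
    },
)
print(resp.json())  # access_token, user_id, home_server, device_id
```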
0 | Shared-Secret Registration | |
1 | ========================== | |
2 | ||
3 | This API allows for the creation of users in an administrative and | |
4 | non-interactive way. This is generally used for bootstrapping a Synapse | |
5 | instance with administrator accounts. | |
6 | ||
7 | To authenticate yourself to the server, you will need both the shared secret | |
8 | (``registration_shared_secret`` in the homeserver configuration), and a | |
9 | one-time nonce. If the registration shared secret is not configured, this API | |
10 | is not enabled. | |
11 | ||
12 | To fetch the nonce, you need to request one from the API:: | |
13 | ||
14 | > GET /_synapse/admin/v1/register | |
15 | ||
16 | < {"nonce": "thisisanonce"} | |
17 | ||
18 | Once you have the nonce, you can make a ``POST`` to the same URL with a JSON | |
19 | body containing the nonce, username, password, whether they are an admin | |
20 | (optional, False by default), and a HMAC digest of the content. Also you can | |
21 | set the displayname (optional, ``username`` by default). | |
22 | ||
23 | As an example:: | |
24 | ||
25 | > POST /_synapse/admin/v1/register | |
26 | > { | |
27 | "nonce": "thisisanonce", | |
28 | "username": "pepper_roni", | |
29 | "displayname": "Pepper Roni", | |
30 | "password": "pizza", | |
31 | "admin": true, | |
32 | "mac": "mac_digest_here" | |
33 | } | |
34 | ||
35 | < { | |
36 | "access_token": "token_here", | |
37 | "user_id": "@pepper_roni:localhost", | |
38 | "home_server": "test", | |
39 | "device_id": "device_id_here" | |
40 | } | |
41 | ||
42 | The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being | |
43 | the shared secret and the content being the nonce, user, password, either the | |
44 | string "admin" or "notadmin", and optionally the user_type | |
45 | each separated by NULs. For an example of generation in Python:: | |
46 | ||
47 | import hmac, hashlib | |
48 | ||
49 | def generate_mac(nonce, user, password, admin=False, user_type=None): | |
50 | ||
51 | mac = hmac.new( | |
52 | key=shared_secret, | |
53 | digestmod=hashlib.sha1, | |
54 | ) | |
55 | ||
56 | mac.update(nonce.encode('utf8')) | |
57 | mac.update(b"\x00") | |
58 | mac.update(user.encode('utf8')) | |
59 | mac.update(b"\x00") | |
60 | mac.update(password.encode('utf8')) | |
61 | mac.update(b"\x00") | |
62 | mac.update(b"admin" if admin else b"notadmin") | |
63 | if user_type: | |
64 | mac.update(b"\x00") | |
65 | mac.update(user_type.encode('utf8')) | |
66 | ||
67 | return mac.hexdigest() |
23 | 23 | ``` |
24 | 24 | |
25 | 25 | To use it, you will need to authenticate by providing an `access_token` for a |
26 | server admin: see [README.rst](README.rst). | |
26 | server admin: see [Admin API](../../usage/administration/admin_api). | |
27 | 27 | |
28 | 28 | Response: |
29 | 29 |
442 | 442 | ``` |
443 | 443 | |
444 | 444 | To use it, you will need to authenticate by providing an ``access_token`` for a |
445 | server admin: see [README.rst](README.rst). | |
445 | server admin: see [Admin API](../../usage/administration/admin_api). | |
446 | 446 | |
447 | 447 | A response body like the following is returned: |
448 | 448 |
9 | 9 | ``` |
10 | 10 | |
11 | 11 | To use it, you will need to authenticate by providing an `access_token` |
12 | for a server admin: see [README.rst](README.rst). | |
12 | for a server admin: see [Admin API](../../usage/administration/admin_api). | |
13 | 13 | |
14 | 14 | A response body like the following is returned: |
15 | 15 |
0 | # User Admin API | |
1 | ||
2 | ## Query User Account | |
3 | ||
4 | This API returns information about a specific user account. | |
5 | ||
6 | The API is: | |
7 | ||
8 | ``` | |
9 | GET /_synapse/admin/v2/users/<user_id> | |
10 | ``` | |
11 | ||
12 | To use it, you will need to authenticate by providing an `access_token` for a | |
13 | server admin: [Admin API](../../usage/administration/admin_api) | |
14 | ||
15 | It returns a JSON body like the following: | |
16 | ||
17 | ```json | |
18 | { | |
19 |     "displayname": "User", | |
20 |     "threepids": [ | |
21 |         { | |
22 |             "medium": "email", | |
23 |             "address": "<user_mail_1>" | |
24 |         }, | |
25 |         { | |
26 |             "medium": "email", | |
27 |             "address": "<user_mail_2>" | |
28 |         } | |
29 |     ], | |
30 |     "avatar_url": "<avatar_url>", | |
31 |     "admin": 0, | |
32 |     "deactivated": 0, | |
33 |     "shadow_banned": 0, | |
34 |     "password_hash": "$2b$12$p9B4GkqYdRTPGD", | |
35 |     "creation_ts": 1560432506, | |
36 |     "appservice_id": null, | |
37 |     "consent_server_notice_sent": null, | |
38 |     "consent_version": null | |
39 | } | |
40 | ``` | |
41 | ||
42 | URL parameters: | |
43 | ||
44 | - `user_id`: fully-qualified user id: for example, `@user:server.com`. | |
45 | ||
46 | ## Create or modify Account | |
47 | ||
48 | This API allows an administrator to create or modify a user account with a | |
49 | specific `user_id`. | |
50 | ||
51 | This API is: | |
52 | ||
53 | ``` | |
54 | PUT /_synapse/admin/v2/users/<user_id> | |
55 | ``` | |
56 | ||
57 | with a body of: | |
58 | ||
59 | ```json | |
60 | { | |
61 |     "password": "user_password", | |
62 |     "displayname": "User", | |
63 |     "threepids": [ | |
64 |         { | |
65 |             "medium": "email", | |
66 |             "address": "<user_mail_1>" | |
67 |         }, | |
68 |         { | |
69 |             "medium": "email", | |
70 |             "address": "<user_mail_2>" | |
71 |         } | |
72 |     ], | |
73 |     "avatar_url": "<avatar_url>", | |
74 |     "admin": false, | |
75 |     "deactivated": false | |
76 | } | |
77 | ``` | |
78 | ||
79 | To use it, you will need to authenticate by providing an `access_token` for a | |
80 | server admin: [Admin API](../../usage/administration/admin_api) | |
81 | ||
82 | URL parameters: | |
83 | ||
84 | - `user_id`: fully-qualified user id: for example, `@user:server.com`. | |
85 | ||
86 | Body parameters: | |
87 | ||
88 | - `password`, optional. If provided, the user's password is updated and all | |
89 | devices are logged out. | |
90 | ||
91 | - `displayname`, optional, defaults to the value of `user_id`. | |
92 | ||
93 | - `threepids`, optional, allows setting the third-party IDs (email, msisdn) | |
94 | belonging to a user. | |
95 | ||
96 | - `avatar_url`, optional, must be a | |
97 | [MXC URI](https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris). | |
98 | ||
99 | - `admin`, optional, defaults to `false`. | |
100 | ||
101 | - `deactivated`, optional. If unspecified, deactivation state will be left | |
102 | unchanged on existing accounts and set to `false` for new accounts. | |
103 | A user cannot be erased by deactivating with this API. For details on | |
104 | deactivating users see [Deactivate Account](#deactivate-account). | |
105 | ||
106 | If the user already exists then optional parameters default to the current value. | |
107 | ||
108 | In order to re-activate an account, `deactivated` must be set to `false`. If | |
109 | users do not log in via single sign-on, a new `password` must be provided. | |
110 | ||
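As an illustration, the create-or-modify call could be made from Python (a
sketch using the third-party `requests` library; the URL, token, and user ID
are placeholders). Because unspecified optional parameters keep their current
values, it is safe to send only the fields you want to change:

```python
import requests

BASE_URL = "https://matrix.example.com"               # placeholder homeserver
HEADERS = {"Authorization": "Bearer <access_token>"}  # admin access token

resp = requests.put(
    f"{BASE_URL}/_synapse/admin/v2/users/@user:example.com",
    headers=HEADERS,
    json={"displayname": "User", "admin": False},
)
resp.raise_for_status()
```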
111 | ## List Accounts | |
112 | ||
113 | This API returns all local user accounts. | |
114 | By default, the response is ordered by ascending user ID. | |
115 | ||
116 | ``` | |
117 | GET /_synapse/admin/v2/users?from=0&limit=10&guests=false | |
118 | ``` | |
119 | ||
120 | To use it, you will need to authenticate by providing an `access_token` for a | |
121 | server admin: [Admin API](../../usage/administration/admin_api) | |
122 | ||
123 | A response body like the following is returned: | |
124 | ||
125 | ```json | |
126 | { | |
127 |     "users": [ | |
128 |         { | |
129 |             "name": "<user_id1>", | |
130 |             "is_guest": 0, | |
131 |             "admin": 0, | |
132 |             "user_type": null, | |
133 |             "deactivated": 0, | |
134 |             "shadow_banned": 0, | |
135 |             "displayname": "<User One>", | |
136 |             "avatar_url": null | |
137 |         }, { | |
138 |             "name": "<user_id2>", | |
139 |             "is_guest": 0, | |
140 |             "admin": 1, | |
141 |             "user_type": null, | |
142 |             "deactivated": 0, | |
143 |             "shadow_banned": 0, | |
144 |             "displayname": "<User Two>", | |
145 |             "avatar_url": "<avatar_url>" | |
146 |         } | |
147 |     ], | |
148 |     "next_token": "100", | |
149 |     "total": 200 | |
150 | } | |
151 | ``` | |
152 | ||
153 | To paginate, check for `next_token` and if present, call the endpoint again | |
154 | with `from` set to the value of `next_token`. This will return a new page. | |
155 | ||
156 | If the endpoint does not return a `next_token` then there are no more users | |
157 | to paginate through. | |
158 | ||
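For illustration, a minimal pagination loop in Python with the third-party
`requests` library (the homeserver URL and token are placeholders):

```python
import requests

BASE_URL = "https://matrix.example.com"               # placeholder homeserver
HEADERS = {"Authorization": "Bearer <access_token>"}  # admin access token

users = []
params = {"from": 0, "limit": 100, "guests": "false"}
while True:
    page = requests.get(
        f"{BASE_URL}/_synapse/admin/v2/users",
        headers=HEADERS,
        params=params,
    ).json()
    users.extend(page["users"])
    if "next_token" not in page:
        break  # no more pages
    params["from"] = page["next_token"]
print(f"fetched {len(users)} of {page['total']} users")
```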
159 | **Parameters** | |
160 | ||
161 | The following parameters should be set in the URL: | |
162 | ||
163 | - `user_id` - Is optional and filters to only return users with user IDs | |
164 | that contain this value. This parameter is ignored when using the `name` parameter. | |
165 | - `name` - Is optional and filters to only return users with user ID localparts | |
166 | **or** displaynames that contain this value. | |
167 | - `guests` - string representing a bool - Is optional and if `false` will **exclude** guest users. | |
168 | Defaults to `true` to include guest users. | |
169 | - `deactivated` - string representing a bool - Is optional and if `true` will **include** deactivated users. | |
170 | Defaults to `false` to exclude deactivated users. | |
171 | - `limit` - string representing a positive integer - Is optional but is used for pagination, | |
172 | denoting the maximum number of items to return in this call. Defaults to `100`. | |
173 | - `from` - string representing a positive integer - Is optional but used for pagination, | |
174 | denoting the offset in the returned results. This should be treated as an opaque value and | |
175 | not explicitly set to anything other than the return value of `next_token` from a previous call. | |
176 | Defaults to `0`. | |
177 | - `order_by` - The method by which to sort the returned list of users. | |
178 | If the ordered field has duplicates, the second order is always by ascending `name`, | |
179 | which guarantees a stable ordering. Valid values are: | |
180 | ||
181 | - `name` - Users are ordered alphabetically by `name`. This is the default. | |
182 | - `is_guest` - Users are ordered by `is_guest` status. | |
183 | - `admin` - Users are ordered by `admin` status. | |
184 | - `user_type` - Users are ordered alphabetically by `user_type`. | |
185 | - `deactivated` - Users are ordered by `deactivated` status. | |
186 | - `shadow_banned` - Users are ordered by `shadow_banned` status. | |
187 | - `displayname` - Users are ordered alphabetically by `displayname`. | |
188 | - `avatar_url` - Users are ordered alphabetically by avatar URL. | |
189 | ||
190 | - `dir` - Direction of user order. Either `f` for forwards or `b` for backwards. | |
191 | Setting this value to `b` will reverse the above sort order. Defaults to `f`. | |
192 | ||
193 | Caution. The database only has indexes on the columns `name` and `created_ts`. | |
194 | This means that if a different sort order is used (`is_guest`, `admin`, | |
195 | `user_type`, `deactivated`, `shadow_banned`, `avatar_url` or `displayname`), | |
196 | this can cause a large load on the database, especially for large environments. | |
197 | ||
198 | **Response** | |
199 | ||
200 | The following fields are returned in the JSON response body: | |
201 | ||
202 | - `users` - An array of objects, each containing information about a user. | |
203 | User objects contain the following fields: | |
204 | ||
205 | - `name` - string - Fully-qualified user ID (ex. `@user:server.com`). | |
206 | - `is_guest` - bool - Whether the user is a guest account. | |
207 | - `admin` - bool - Whether the user is a server administrator. | |
208 | - `user_type` - string - Type of the user. Normal users are type `None`. | |
209 | This allows user type specific behaviour. There are also types `support` and `bot`. | |
210 | - `deactivated` - bool - Status if that user has been marked as deactivated. | |
211 | - `shadow_banned` - bool - Status if that user has been marked as shadow banned. | |
212 | - `displayname` - string - The user's display name if they have set one. | |
213 | - `avatar_url` - string - The user's avatar URL if they have set one. | |
214 | ||
215 | - `next_token`: string representing a positive integer - Indication for pagination. See above. | |
216 | - `total` - integer - Total number of users. | |
217 | ||
218 | ||
219 | ## Query current sessions for a user | |
220 | ||
221 | This API returns information about the active sessions for a specific user. | |
222 | ||
223 | The endpoints are: | |
224 | ||
225 | ``` | |
226 | GET /_synapse/admin/v1/whois/<user_id> | |
227 | ``` | |
228 | ||
229 | and: | |
230 | ||
231 | ``` | |
232 | GET /_matrix/client/r0/admin/whois/<userId> | |
233 | ``` | |
234 | ||
235 | See also: [Client Server | |
236 | API Whois](https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid). | |
237 | ||
238 | To use it, you will need to authenticate by providing an `access_token` for a | |
239 | server admin: [Admin API](../../usage/administration/admin_api) | |
240 | ||
241 | It returns a JSON body like the following: | |
242 | ||
243 | ```json | |
244 | { | |
245 | "user_id": "<user_id>", | |
246 | "devices": { | |
247 | "": { | |
248 | "sessions": [ | |
249 | { | |
250 | "connections": [ | |
251 | { | |
252 | "ip": "1.2.3.4", | |
253 | "last_seen": 1417222374433, | |
254 | "user_agent": "Mozilla/5.0 ..." | |
255 | }, | |
256 | { | |
257 | "ip": "1.2.3.10", | |
258 | "last_seen": 1417222374500, | |
259 | "user_agent": "Dalvik/2.1.0 ..." | |
260 | } | |
261 | ] | |
262 | } | |
263 | ] | |
264 | } | |
265 | } | |
266 | } | |
267 | ``` | |
268 | ||
269 | `last_seen` is measured in milliseconds since the Unix epoch. | |
270 | ||
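As a sketch (reusing the hypothetical `BASE_URL` and `HEADERS` from the pagination example above), the nested report can be flattened into a list of recent connections:

```python
def recent_connections(user_id):
    """Flatten the whois report into (ip, last_seen, user_agent) tuples."""
    resp = requests.get(
        f"{BASE_URL}/_synapse/admin/v1/whois/{user_id}", headers=HEADERS
    )
    resp.raise_for_status()
    connections = []
    for device in resp.json()["devices"].values():
        for session in device["sessions"]:
            connections.extend(
                (c["ip"], c["last_seen"], c["user_agent"])
                for c in session["connections"]
            )
    # Newest first; last_seen is milliseconds since the Unix epoch.
    return sorted(connections, key=lambda c: c[1], reverse=True)
```
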
271 | ## Deactivate Account | |
272 | ||
273 | This API deactivates an account. It removes active access tokens, resets the | |
274 | password, and deletes third-party IDs (to prevent the user requesting a | |
275 | password reset). | |
276 | ||
277 | It can also mark the user as GDPR-erased. This means messages sent by the | |
278 | user will still be visible by anyone that was in the room when these messages | |
279 | were sent, but hidden from users joining the room afterwards. | |
280 | ||
281 | The API is: | |
282 | ||
283 | ``` | |
284 | POST /_synapse/admin/v1/deactivate/<user_id> | |
285 | ``` | |
286 | ||
287 | with a body of: | |
288 | ||
289 | ```json | |
290 | { | |
291 | "erase": true | |
292 | } | |
293 | ``` | |
294 | ||
295 | To use it, you will need to authenticate by providing an `access_token` for a | |
296 | server admin: [Admin API](../../usage/administration/admin_api) | |
297 | ||
298 | The `erase` parameter is optional and defaults to `false`. | |
299 | An empty body may be passed for backwards compatibility. | |
300 | ||
301 | The following actions are performed when deactivating a user: | |
302 | ||
303 | - Try to unbind 3PIDs from the identity server | |
304 | - Remove all 3PIDs from the homeserver | |
305 | - Delete all devices and E2EE keys | |
306 | - Delete all access tokens | |
307 | - Delete the password hash | |
308 | - Remove the user from all rooms they are a member of | |
309 | - Remove the user from the user directory | |
310 | - Reject all pending invites | |
311 | - Remove all account validity information related to the user | |
312 | ||
313 | The following additional actions are performed during deactivation if `erase` | |
314 | is set to `true`: | |
315 | ||
316 | - Remove the user's display name | |
317 | - Remove the user's avatar URL | |
318 | - Mark the user as erased | |
319 | ||
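A sketch of calling the endpoint from Python (same hypothetical placeholders as the earlier examples); note that erasure cannot be undone:

```python
def deactivate_user(user_id, erase=False):
    """Deactivate an account; erase=True additionally marks it GDPR-erased."""
    resp = requests.post(
        f"{BASE_URL}/_synapse/admin/v1/deactivate/{user_id}",
        headers=HEADERS,
        json={"erase": erase},
    )
    resp.raise_for_status()
```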
320 | ||
321 | ## Reset password | |
322 | ||
323 | Changes the password of another user. This will automatically log the user out of all their devices. | |
324 | ||
325 | The API is: | |
326 | ||
327 | ``` | |
328 | POST /_synapse/admin/v1/reset_password/<user_id> | |
329 | ``` | |
330 | ||
331 | with a body of: | |
332 | ||
333 | ```json | |
334 | { | |
335 | "new_password": "<secret>", | |
336 | "logout_devices": true | |
337 | } | |
338 | ``` | |
339 | ||
340 | To use it, you will need to authenticate by providing an `access_token` for a | |
341 | server admin: [Admin API](../../usage/administration/admin_api) | |
342 | ||
343 | The parameter `new_password` is required. | |
344 | The parameter `logout_devices` is optional and defaults to `true`. | |
345 | ||
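A sketch (same hypothetical placeholders as above) that sets a random password, optionally keeping existing sessions alive via `logout_devices`:

```python
import secrets

def reset_password(user_id, keep_sessions=False):
    """Set a random password; keep_sessions=True leaves devices logged in."""
    new_password = secrets.token_urlsafe(24)
    resp = requests.post(
        f"{BASE_URL}/_synapse/admin/v1/reset_password/{user_id}",
        headers=HEADERS,
        json={"new_password": new_password, "logout_devices": not keep_sessions},
    )
    resp.raise_for_status()
    return new_password
```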
346 | ||
347 | ## Get whether a user is a server administrator or not | |
348 | ||
349 | The API is: | |
350 | ||
351 | ``` | |
352 | GET /_synapse/admin/v1/users/<user_id>/admin | |
353 | ``` | |
354 | ||
355 | To use it, you will need to authenticate by providing an `access_token` for a | |
356 | server admin: [Admin API](../../usage/administration/admin_api) | |
357 | ||
358 | A response body like the following is returned: | |
359 | ||
360 | ```json | |
361 | { | |
362 | "admin": true | |
363 | } | |
364 | ``` | |
365 | ||
366 | ||
367 | ## Change whether a user is a server administrator or not | |
368 | ||
369 | Note that you cannot demote yourself. | |
370 | ||
371 | The API is: | |
372 | ||
373 | ``` | |
374 | PUT /_synapse/admin/v1/users/<user_id>/admin | |
375 | ``` | |
376 | ||
377 | with a body of: | |
378 | ||
379 | ```json | |
380 | { | |
381 | "admin": true | |
382 | } | |
383 | ``` | |
384 | ||
385 | To use it, you will need to authenticate by providing an `access_token` for a | |
386 | server admin: [Admin API](../../usage/administration/admin_api) | |
387 | ||
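As a sketch (hypothetical placeholders as above), an admin flag can be set and then read back through the `GET` endpoint described in the previous section:

```python
def set_admin(user_id, admin):
    """Grant or revoke server-admin status, then read it back to confirm."""
    resp = requests.put(
        f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/admin",
        headers=HEADERS,
        json={"admin": admin},
    )
    resp.raise_for_status()
    check = requests.get(
        f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/admin", headers=HEADERS
    )
    check.raise_for_status()
    return check.json()["admin"]
```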
388 | ||
389 | ## List room memberships of a user | |
390 | ||
391 | Gets a list of all `room_id` that a specific `user_id` is a member of. | |
392 | ||
393 | The API is: | |
394 | ||
395 | ``` | |
396 | GET /_synapse/admin/v1/users/<user_id>/joined_rooms | |
397 | ``` | |
398 | ||
399 | To use it, you will need to authenticate by providing an `access_token` for a | |
400 | server admin: [Admin API](../../usage/administration/admin_api) | |
401 | ||
402 | A response body like the following is returned: | |
403 | ||
404 | ```json | |
405 | { | |
406 | "joined_rooms": [ | |
407 | "!DuGcnbhHGaSZQoNQR:matrix.org", | |
408 | "!ZtSaPCawyWtxfWiIy:matrix.org" | |
409 | ], | |
410 | "total": 2 | |
411 | } | |
412 | ``` | |
413 | ||
414 | The server returns the list of rooms of which both the user and the | |
415 | server are members. If the user is local, all of the rooms of which the | |
416 | user is a member are returned. | |
417 | ||
418 | **Parameters** | |
419 | ||
420 | The following parameters should be set in the URL: | |
421 | ||
422 | - `user_id` - fully qualified: for example, `@user:server.com`. | |
423 | ||
424 | **Response** | |
425 | ||
426 | The following fields are returned in the JSON response body: | |
427 | ||
428 | - `joined_rooms` - An array of `room_id`. | |
429 | - `total` - Number of rooms. | |
430 | ||
431 | ||
432 | ## List media of a user | |
433 | Gets a list of all local media that a specific `user_id` has created. | |
434 | By default, the response is ordered by descending creation date and ascending media ID. | |
435 | The newest media is on top. You can change the order with parameters | |
436 | `order_by` and `dir`. | |
437 | ||
438 | The API is: | |
439 | ||
440 | ``` | |
441 | GET /_synapse/admin/v1/users/<user_id>/media | |
442 | ``` | |
443 | ||
444 | To use it, you will need to authenticate by providing an `access_token` for a | |
445 | server admin: [Admin API](../../usage/administration/admin_api) | |
446 | ||
447 | A response body like the following is returned: | |
448 | ||
449 | ```json | |
450 | { | |
451 | "media": [ | |
452 | { | |
453 | "created_ts": 100400, | |
454 | "last_access_ts": null, | |
455 | "media_id": "qXhyRzulkwLsNHTbpHreuEgo", | |
456 | "media_length": 67, | |
457 | "media_type": "image/png", | |
458 | "quarantined_by": null, | |
459 | "safe_from_quarantine": false, | |
460 | "upload_name": "test1.png" | |
461 | }, | |
462 | { | |
463 | "created_ts": 200400, | |
464 | "last_access_ts": null, | |
465 | "media_id": "FHfiSnzoINDatrXHQIXBtahw", | |
466 | "media_length": 67, | |
467 | "media_type": "image/png", | |
468 | "quarantined_by": null, | |
469 | "safe_from_quarantine": false, | |
470 | "upload_name": "test2.png" | |
471 | } | |
472 | ], | |
473 | "next_token": 3, | |
474 | "total": 2 | |
475 | } | |
476 | ``` | |
477 | ||
478 | To paginate, check for `next_token` and if present, call the endpoint again | |
479 | with `from` set to the value of `next_token`. This will return a new page. | |
480 | ||
481 | If the endpoint does not return a `next_token` then there are no more | |
482 | media to paginate through. | |
483 | ||
484 | **Parameters** | |
485 | ||
486 | The following parameters should be set in the URL: | |
487 | ||
488 | - `user_id` - string - fully qualified: for example, `@user:server.com`. | |
489 | - `limit`: string representing a positive integer - Is optional but is used for pagination, | |
490 | denoting the maximum number of items to return in this call. Defaults to `100`. | |
491 | - `from`: string representing a positive integer - Is optional but used for pagination, | |
492 | denoting the offset in the returned results. This should be treated as an opaque value and | |
493 | not explicitly set to anything other than the return value of `next_token` from a previous call. | |
494 | Defaults to `0`. | |
495 | - `order_by` - The method by which to sort the returned list of media. | |
496 | If the ordered field has duplicates, the second order is always by ascending `media_id`, | |
497 | which guarantees a stable ordering. Valid values are: | |
498 | ||
499 | - `media_id` - Media are ordered alphabetically by `media_id`. | |
500 | - `upload_name` - Media are ordered alphabetically by the name the media was uploaded with. | |
501 | - `created_ts` - Media are ordered by when the content was uploaded in ms. | |
502 | Smallest to largest. This is the default. | |
503 | - `last_access_ts` - Media are ordered by when the content was last accessed in ms. | |
504 | Smallest to largest. | |
505 | - `media_length` - Media are ordered by length of the media in bytes. | |
506 | Smallest to largest. | |
507 | - `media_type` - Media are ordered alphabetically by MIME-type. | |
508 | - `quarantined_by` - Media are ordered alphabetically by the user ID that | |
509 | initiated the quarantine request for this media. | |
510 | - `safe_from_quarantine` - Media are ordered by whether the media is marked | |
511 | as safe from quarantining. | |
512 | ||
513 | - `dir` - Direction of media order. Either `f` for forwards or `b` for backwards. | |
514 | Setting this value to `b` will reverse the above sort order. Defaults to `f`. | |
515 | ||
516 | If neither `order_by` nor `dir` is set, the default order is newest media on top | |
517 | (corresponds to `order_by` = `created_ts` and `dir` = `b`). | |
518 | ||
519 | Caution. The database only has indexes on the columns `media_id`, | |
520 | `user_id` and `created_ts`. This means that if a different sort order is used | |
521 | (`upload_name`, `last_access_ts`, `media_length`, `media_type`, | |
522 | `quarantined_by` or `safe_from_quarantine`), this can cause a large load on the | |
523 | database, especially for large environments. | |
524 | ||
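As an illustration (hypothetical placeholders as in the earlier sketches), the pages can be walked to compute, say, the total storage used by a user; ordering by `created_ts` keeps the query on an indexed column:

```python
def total_media_bytes(user_id):
    """Sum media_length over every page of the user's local media."""
    total = 0
    params = {"limit": "100", "order_by": "created_ts", "dir": "f"}
    while True:
        resp = requests.get(
            f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/media",
            headers=HEADERS,
            params=params,
        )
        resp.raise_for_status()
        body = resp.json()
        total += sum(item["media_length"] for item in body["media"])
        if "next_token" not in body:
            return total
        params["from"] = str(body["next_token"])
```
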
525 | **Response** | |
526 | ||
527 | The following fields are returned in the JSON response body: | |
528 | ||
529 | - `media` - An array of objects, each containing information about a media item. | |
530 | Media objects contain the following fields: | |
531 | ||
532 | - `created_ts` - integer - Timestamp when the content was uploaded in ms. | |
533 | - `last_access_ts` - integer - Timestamp when the content was last accessed in ms. | |
534 | - `media_id` - string - The id used to refer to the media. | |
535 | - `media_length` - integer - Length of the media in bytes. | |
536 | - `media_type` - string - The MIME-type of the media. | |
537 | - `quarantined_by` - string - The user ID that initiated the quarantine request | |
538 | for this media. | |
539 | ||
540 | - `safe_from_quarantine` - bool - Status if this media is safe from quarantining. | |
541 | - `upload_name` - string - The name the media was uploaded with. | |
542 | ||
543 | - `next_token`: integer - Indication for pagination. See above. | |
544 | - `total` - integer - Total number of media. | |
545 | ||
546 | ## Login as a user | |
547 | ||
548 | Get an access token that can be used to authenticate as that user. This is | |
549 | useful when admins wish to perform actions on behalf of a user. | |
550 | ||
551 | The API is: | |
552 | ||
553 | ``` | |
554 | POST /_synapse/admin/v1/users/<user_id>/login | |
555 | {} | |
556 | ``` | |
557 | ||
558 | An optional `valid_until_ms` field can be specified in the request body as an | |
559 | integer timestamp that specifies when the token should expire. By default tokens | |
560 | do not expire. | |
561 | ||
562 | A response body like the following is returned: | |
563 | ||
564 | ```json | |
565 | { | |
566 | "access_token": "<opaque_access_token_string>" | |
567 | } | |
568 | ``` | |
569 | ||
570 | This API does *not* generate a new device for the user, and so will not appear | |
571 | in their `/devices` list; in general, the target user should not be able to | |
572 | tell that their account has been logged in to. | |
573 | ||
574 | To expire the token call the standard `/logout` API with the token. | |
575 | ||
576 | Note: The token will expire if the *admin* user calls `/logout/all` from any | |
577 | of their devices, but the token will *not* expire if the target user does the | |
578 | same. | |
579 | ||
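A sketch of the full cycle (hypothetical placeholders as above): request a token that expires after an hour, act as the user, then expire the token explicitly:

```python
import time

def act_as_user(user_id):
    """Fetch a token that expires in an hour, use it, then log it out."""
    expiry_ms = int(time.time() * 1000) + 60 * 60 * 1000
    resp = requests.post(
        f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/login",
        headers=HEADERS,
        json={"valid_until_ms": expiry_ms},
    )
    resp.raise_for_status()
    user_headers = {"Authorization": f"Bearer {resp.json()['access_token']}"}
    # ... client-server API calls on behalf of the user go here ...
    # Expire the token explicitly via the standard /logout endpoint.
    requests.post(
        f"{BASE_URL}/_matrix/client/r0/logout", headers=user_headers
    ).raise_for_status()
```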
580 | ||
581 | ## User devices | |
582 | ||
583 | ### List all devices | |
584 | Gets information about all devices for a specific `user_id`. | |
585 | ||
586 | The API is: | |
587 | ||
588 | ``` | |
589 | GET /_synapse/admin/v2/users/<user_id>/devices | |
590 | ``` | |
591 | ||
592 | To use it, you will need to authenticate by providing an `access_token` for a | |
593 | server admin: [Admin API](../../usage/administration/admin_api) | |
594 | ||
595 | A response body like the following is returned: | |
596 | ||
597 | ```json | |
598 | { | |
599 | "devices": [ | |
600 | { | |
601 | "device_id": "QBUAZIFURK", | |
602 | "display_name": "android", | |
603 | "last_seen_ip": "1.2.3.4", | |
604 | "last_seen_ts": 1474491775024, | |
605 | "user_id": "<user_id>" | |
606 | }, | |
607 | { | |
608 | "device_id": "AUIECTSRND", | |
609 | "display_name": "ios", | |
610 | "last_seen_ip": "1.2.3.5", | |
611 | "last_seen_ts": 1474491775025, | |
612 | "user_id": "<user_id>" | |
613 | } | |
614 | ], | |
615 | "total": 2 | |
616 | } | |
617 | ``` | |
618 | ||
619 | **Parameters** | |
620 | ||
621 | The following parameters should be set in the URL: | |
622 | ||
623 | - `user_id` - fully qualified: for example, `@user:server.com`. | |
624 | ||
625 | **Response** | |
626 | ||
627 | The following fields are returned in the JSON response body: | |
628 | ||
629 | - `devices` - An array of objects, each containing information about a device. | |
630 | Device objects contain the following fields: | |
631 | ||
632 | - `device_id` - Identifier of device. | |
633 | - `display_name` - Display name set by the user for this device. | |
634 | Absent if no name has been set. | |
635 | - `last_seen_ip` - The IP address where this device was last seen. | |
636 | (May be a few minutes out of date, for efficiency reasons). | |
637 | - `last_seen_ts` - The timestamp (in milliseconds since the unix epoch) when this | |
638 | device was last seen. (May be a few minutes out of date, for efficiency reasons). | |
639 | - `user_id` - Owner of device. | |
640 | ||
641 | - `total` - Total number of user's devices. | |
642 | ||
643 | ### Delete multiple devices | |
644 | Deletes the given devices for a specific `user_id`, and invalidates | |
645 | any access token associated with them. | |
646 | ||
647 | The API is: | |
648 | ||
649 | ``` | |
650 | POST /_synapse/admin/v2/users/<user_id>/delete_devices | |
651 | ||
652 | { | |
653 | "devices": [ | |
654 | "QBUAZIFURK", | |
655 | "AUIECTSRND" | |
656 | ] | |
657 | } | |
658 | ``` | |
659 | ||
660 | To use it, you will need to authenticate by providing an `access_token` for a | |
661 | server admin: [Admin API](../../usage/administration/admin_api) | |
662 | ||
663 | An empty JSON dict is returned. | |
664 | ||
665 | **Parameters** | |
666 | ||
667 | The following parameters should be set in the URL: | |
668 | ||
669 | - `user_id` - fully qualified: for example, `@user:server.com`. | |
670 | ||
671 | The following fields are required in the JSON request body: | |
672 | ||
673 | - `devices` - The list of device IDs to delete. | |
674 | ||
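Combined with the device list above, this allows bulk clean-up. The following sketch (hypothetical placeholders as before, and a hypothetical 90-day policy) deletes devices that have not been seen recently:

```python
import time

def prune_stale_devices(user_id, max_age_days=90):
    """Delete devices not seen within max_age_days (a hypothetical policy)."""
    resp = requests.get(
        f"{BASE_URL}/_synapse/admin/v2/users/{user_id}/devices", headers=HEADERS
    )
    resp.raise_for_status()
    cutoff_ms = (time.time() - max_age_days * 86400) * 1000
    stale = [
        d["device_id"]
        for d in resp.json()["devices"]
        if d.get("last_seen_ts") is not None and d["last_seen_ts"] < cutoff_ms
    ]
    if stale:
        requests.post(
            f"{BASE_URL}/_synapse/admin/v2/users/{user_id}/delete_devices",
            headers=HEADERS,
            json={"devices": stale},
        ).raise_for_status()
    return stale
```
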
675 | ### Show a device | |
676 | Gets information on a single device, by `device_id` for a specific `user_id`. | |
677 | ||
678 | The API is: | |
679 | ||
680 | ``` | |
681 | GET /_synapse/admin/v2/users/<user_id>/devices/<device_id> | |
682 | ``` | |
683 | ||
684 | To use it, you will need to authenticate by providing an `access_token` for a | |
685 | server admin: [Admin API](../../usage/administration/admin_api) | |
686 | ||
687 | A response body like the following is returned: | |
688 | ||
689 | ```json | |
690 | { | |
691 | "device_id": "<device_id>", | |
692 | "display_name": "android", | |
693 | "last_seen_ip": "1.2.3.4", | |
694 | "last_seen_ts": 1474491775024, | |
695 | "user_id": "<user_id>" | |
696 | } | |
697 | ``` | |
698 | ||
699 | **Parameters** | |
700 | ||
701 | The following parameters should be set in the URL: | |
702 | ||
703 | - `user_id` - fully qualified: for example, `@user:server.com`. | |
704 | - `device_id` - The device to retrieve. | |
705 | ||
706 | **Response** | |
707 | ||
708 | The following fields are returned in the JSON response body: | |
709 | ||
710 | - `device_id` - Identifier of device. | |
711 | - `display_name` - Display name set by the user for this device. | |
712 | Absent if no name has been set. | |
713 | - `last_seen_ip` - The IP address where this device was last seen. | |
714 | (May be a few minutes out of date, for efficiency reasons). | |
715 | - `last_seen_ts` - The timestamp (in milliseconds since the unix epoch) when this | |
716 | device was last seen. (May be a few minutes out of date, for efficiency reasons). | |
717 | - `user_id` - Owner of device. | |
718 | ||
719 | ### Update a device | |
720 | Updates the metadata on the given `device_id` for a specific `user_id`. | |
721 | ||
722 | The API is: | |
723 | ||
724 | ``` | |
725 | PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id> | |
726 | ||
727 | { | |
728 | "display_name": "My other phone" | |
729 | } | |
730 | ``` | |
731 | ||
732 | To use it, you will need to authenticate by providing an `access_token` for a | |
733 | server admin: [Admin API](../../usage/administration/admin_api) | |
734 | ||
735 | An empty JSON dict is returned. | |
736 | ||
737 | **Parameters** | |
738 | ||
739 | The following parameters should be set in the URL: | |
740 | ||
741 | - `user_id` - fully qualified: for example, `@user:server.com`. | |
742 | - `device_id` - The device to update. | |
743 | ||
744 | The following fields can be set in the JSON request body: | |
745 | ||
746 | - `display_name` - The new display name for this device. If not given, | |
747 | the display name is unchanged. | |
748 | ||
749 | ### Delete a device | |
750 | Deletes the given `device_id` for a specific `user_id`, | |
751 | and invalidates any access token associated with it. | |
752 | ||
753 | The API is: | |
754 | ||
755 | ``` | |
756 | DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id> | |
757 | ||
758 | {} | |
759 | ``` | |
760 | ||
761 | To use it, you will need to authenticate by providing an `access_token` for a | |
762 | server admin: [Admin API](../../usage/administration/admin_api) | |
763 | ||
764 | An empty JSON dict is returned. | |
765 | ||
766 | **Parameters** | |
767 | ||
768 | The following parameters should be set in the URL: | |
769 | ||
770 | - `user_id` - fully qualified: for example, `@user:server.com`. | |
771 | - `device_id` - The device to delete. | |
772 | ||
773 | ## List all pushers | |
774 | Gets information about all pushers for a specific `user_id`. | |
775 | ||
776 | The API is: | |
777 | ||
778 | ``` | |
779 | GET /_synapse/admin/v1/users/<user_id>/pushers | |
780 | ``` | |
781 | ||
782 | To use it, you will need to authenticate by providing an `access_token` for a | |
783 | server admin: [Admin API](../../usage/administration/admin_api) | |
784 | ||
785 | A response body like the following is returned: | |
786 | ||
787 | ```json | |
788 | { | |
789 | "pushers": [ | |
790 | { | |
791 | "app_display_name":"HTTP Push Notifications", | |
792 | "app_id":"m.http", | |
793 | "data": { | |
794 | "url":"example.com" | |
795 | }, | |
796 | "device_display_name":"pushy push", | |
797 | "kind":"http", | |
798 | "lang":"None", | |
799 | "profile_tag":"", | |
800 | "pushkey":"a@example.com" | |
801 | } | |
802 | ], | |
803 | "total": 1 | |
804 | } | |
805 | ``` | |
806 | ||
807 | **Parameters** | |
808 | ||
809 | The following parameters should be set in the URL: | |
810 | ||
811 | - `user_id` - fully qualified: for example, `@user:server.com`. | |
812 | ||
813 | **Response** | |
814 | ||
815 | The following fields are returned in the JSON response body: | |
816 | ||
817 | - `pushers` - An array containing the current pushers for the user | |
818 | ||
819 | - `app_display_name` - string - A string that will allow the user to identify | |
820 | what application owns this pusher. | |
821 | ||
822 | - `app_id` - string - This is a reverse-DNS style identifier for the application. | |
823 | Max length, 64 chars. | |
824 | ||
825 | - `data` - A dictionary of information for the pusher implementation itself. | |
826 | ||
827 | - `url` - string - Required if `kind` is `http`. The URL to use to send | |
828 | notifications to. | |
829 | ||
830 | - `format` - string - The format to use when sending notifications to the | |
831 | Push Gateway. | |
832 | ||
833 | - `device_display_name` - string - A string that will allow the user to identify | |
834 | what device owns this pusher. | |
835 | ||
836 | - `profile_tag` - string - This string determines which set of device specific rules | |
837 | this pusher executes. | |
838 | ||
839 | - `kind` - string - The kind of pusher. "http" is a pusher that sends HTTP pokes. | |
840 | - `lang` - string - The preferred language for receiving notifications | |
841 | (e.g. 'en' or 'en-US') | |
842 | ||
846 | - `pushkey` - string - This is a unique identifier for this pusher. | |
847 | Max length, 512 bytes. | |
848 | ||
849 | - `total` - integer - Number of pushers. | |
850 | ||
851 | See also the | |
852 | [Client-Server API Spec on pushers](https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers). | |
853 | ||
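A small sketch (hypothetical placeholders as in the earlier examples) that summarises a user's pushers:

```python
def summarise_pushers(user_id):
    """Print one line per pusher: kind, app_id and (for HTTP pushers) URL."""
    resp = requests.get(
        f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/pushers", headers=HEADERS
    )
    resp.raise_for_status()
    for pusher in resp.json()["pushers"]:
        url = pusher["data"].get("url", "-")
        print(f"{pusher['kind']:>6}  {pusher['app_id']:<20}  {url}")
```
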
854 | ## Shadow-banning users | |
855 | ||
856 | Shadow-banning is a useful tool for moderating malicious or egregiously abusive users. | |
857 | A shadow-banned user receives successful responses to their client-server API requests, | |
858 | but the events are not propagated into rooms. This can be an effective tool as it | |
859 | (hopefully) takes longer for the user to realise they are being moderated before | |
860 | pivoting to another account. | |
861 | ||
862 | Shadow-banning a user should be used as a tool of last resort and may lead to confusing | |
863 | or broken behaviour for the client. A shadow-banned user will not receive any | |
864 | notification and it is generally more appropriate to ban or kick abusive users. | |
865 | A shadow-banned user will be unable to contact anyone on the server. | |
866 | ||
867 | The API is: | |
868 | ||
869 | ``` | |
870 | POST /_synapse/admin/v1/users/<user_id>/shadow_ban | |
871 | ``` | |
872 | ||
873 | To use it, you will need to authenticate by providing an `access_token` for a | |
874 | server admin: [Admin API](../../usage/administration/admin_api) | |
875 | ||
876 | An empty JSON dict is returned. | |
877 | ||
878 | **Parameters** | |
879 | ||
880 | The following parameters should be set in the URL: | |
881 | ||
882 | - `user_id` - The fully qualified MXID: for example, `@user:server.com`. The user must | |
883 | be local. | |
884 | ||
885 | ## Override ratelimiting for users | |
886 | ||
887 | This API allows you to override or disable ratelimiting for a specific user. | |
888 | There are specific APIs to set, get and delete a ratelimit. | |
889 | ||
890 | ### Get status of ratelimit | |
891 | ||
892 | The API is: | |
893 | ||
894 | ``` | |
895 | GET /_synapse/admin/v1/users/<user_id>/override_ratelimit | |
896 | ``` | |
897 | ||
898 | To use it, you will need to authenticate by providing an `access_token` for a | |
899 | server admin: [Admin API](../../usage/administration/admin_api) | |
900 | ||
901 | A response body like the following is returned: | |
902 | ||
903 | ```json | |
904 | { | |
905 | "messages_per_second": 0, | |
906 | "burst_count": 0 | |
907 | } | |
908 | ``` | |
909 | ||
910 | **Parameters** | |
911 | ||
912 | The following parameters should be set in the URL: | |
913 | ||
914 | - `user_id` - The fully qualified MXID: for example, `@user:server.com`. The user must | |
915 | be local. | |
916 | ||
917 | **Response** | |
918 | ||
919 | The following fields are returned in the JSON response body: | |
920 | ||
921 | - `messages_per_second` - integer - The number of actions that can | |
922 | be performed in a second. `0` means that ratelimiting is disabled for this user. | |
923 | - `burst_count` - integer - How many actions can be performed before | |
924 | being limited. | |
925 | ||
926 | If **no** custom ratelimit is set, an empty JSON dict is returned. | |
927 | ||
928 | ```json | |
929 | {} | |
930 | ``` | |
931 | ||
932 | ### Set ratelimit | |
933 | ||
934 | The API is: | |
935 | ||
936 | ``` | |
937 | POST /_synapse/admin/v1/users/<user_id>/override_ratelimit | |
938 | ``` | |
939 | ||
940 | To use it, you will need to authenticate by providing an `access_token` for a | |
941 | server admin: [Admin API](../../usage/administration/admin_api) | |
942 | ||
943 | A response body like the following is returned: | |
944 | ||
945 | ```json | |
946 | { | |
947 | "messages_per_second": 0, | |
948 | "burst_count": 0 | |
949 | } | |
950 | ``` | |
951 | ||
952 | **Parameters** | |
953 | ||
954 | The following parameters should be set in the URL: | |
955 | ||
956 | - `user_id` - The fully qualified MXID: for example, `@user:server.com`. The user must | |
957 | be local. | |
958 | ||
959 | Body parameters: | |
960 | ||
961 | - `messages_per_second` - positive integer, optional. The number of actions that can | |
962 | be performed in a second. Defaults to `0`. | |
963 | - `burst_count` - positive integer, optional. How many actions can be performed | |
964 | before being limited. Defaults to `0`. | |
965 | ||
966 | To disable ratelimiting for a user, set both values to `0`. | |
967 | ||
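As a sketch (hypothetical placeholders as in the earlier examples), disabling ratelimiting and later restoring the defaults might look like:

```python
def disable_ratelimit(user_id):
    """Set both override values to 0, which turns ratelimiting off."""
    resp = requests.post(
        f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/override_ratelimit",
        headers=HEADERS,
        json={"messages_per_second": 0, "burst_count": 0},
    )
    resp.raise_for_status()

def restore_default_ratelimit(user_id):
    """Delete the override so the server-wide defaults apply again."""
    requests.delete(
        f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/override_ratelimit",
        headers=HEADERS,
    ).raise_for_status()
```
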
968 | **Response** | |
969 | ||
970 | The following fields are returned in the JSON response body: | |
971 | ||
972 | - `messages_per_second` - integer - The number of actions that can | |
973 | be performed in a second. | |
974 | - `burst_count` - integer - How many actions can be performed before | |
975 | being limited. | |
976 | ||
977 | ### Delete ratelimit | |
978 | ||
979 | The API is: | |
980 | ||
981 | ``` | |
982 | DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit | |
983 | ``` | |
984 | ||
985 | To use it, you will need to authenticate by providing an `access_token` for a | |
986 | server admin: [Admin API](../../usage/administration/admin_api) | |
987 | ||
988 | An empty JSON dict is returned. | |
989 | ||
990 | ```json | |
991 | {} | |
992 | ``` | |
993 | ||
994 | **Parameters** | |
995 | ||
996 | The following parameters should be set in the URL: | |
997 | ||
998 | - `user_id` - The fully qualified MXID: for example, `@user:server.com`. The user must | |
999 | be local. | |
1000 |
0 | .. contents:: | |
1 | ||
2 | Query User Account | |
3 | ================== | |
4 | ||
5 | This API returns information about a specific user account. | |
6 | ||
7 | The API is:: | |
8 | ||
9 | GET /_synapse/admin/v2/users/<user_id> | |
10 | ||
11 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
12 | server admin: see `README.rst <README.rst>`_. | |
13 | ||
14 | It returns a JSON body like the following: | |
15 | ||
16 | .. code:: json | |
17 | ||
18 | { | |
19 | "displayname": "User", | |
20 | "threepids": [ | |
21 | { | |
22 | "medium": "email", | |
23 | "address": "<user_mail_1>" | |
24 | }, | |
25 | { | |
26 | "medium": "email", | |
27 | "address": "<user_mail_2>" | |
28 | } | |
29 | ], | |
30 | "avatar_url": "<avatar_url>", | |
31 | "admin": 0, | |
32 | "deactivated": 0, | |
33 | "shadow_banned": 0, | |
34 | "password_hash": "$2b$12$p9B4GkqYdRTPGD", | |
35 | "creation_ts": 1560432506, | |
36 | "appservice_id": null, | |
37 | "consent_server_notice_sent": null, | |
38 | "consent_version": null | |
39 | } | |
40 | ||
41 | URL parameters: | |
42 | ||
43 | - ``user_id``: fully-qualified user id: for example, ``@user:server.com``. | |
44 | ||
45 | Create or modify Account | |
46 | ======================== | |
47 | ||
48 | This API allows an administrator to create or modify a user account with a | |
49 | specific ``user_id``. | |
50 | ||
51 | The API is:: | |
52 | ||
53 | PUT /_synapse/admin/v2/users/<user_id> | |
54 | ||
55 | with a body of: | |
56 | ||
57 | .. code:: json | |
58 | ||
59 | { | |
60 | "password": "user_password", | |
61 | "displayname": "User", | |
62 | "threepids": [ | |
63 | { | |
64 | "medium": "email", | |
65 | "address": "<user_mail_1>" | |
66 | }, | |
67 | { | |
68 | "medium": "email", | |
69 | "address": "<user_mail_2>" | |
70 | } | |
71 | ], | |
72 | "avatar_url": "<avatar_url>", | |
73 | "admin": false, | |
74 | "deactivated": false | |
75 | } | |
76 | ||
77 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
78 | server admin: see `README.rst <README.rst>`_. | |
79 | ||
80 | URL parameters: | |
81 | ||
82 | - ``user_id``: fully-qualified user id: for example, ``@user:server.com``. | |
83 | ||
84 | Body parameters: | |
85 | ||
86 | - ``password``, optional. If provided, the user's password is updated and all | |
87 | devices are logged out. | |
88 | ||
89 | - ``displayname``, optional, defaults to the value of ``user_id``. | |
90 | ||
91 | - ``threepids``, optional, allows setting the third-party IDs (email, msisdn) | |
92 | belonging to a user. | |
93 | ||
94 | - ``avatar_url``, optional, must be a | |
95 | `MXC URI <https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris>`_. | |
96 | ||
97 | - ``admin``, optional, defaults to ``false``. | |
98 | ||
99 | - ``deactivated``, optional. If unspecified, deactivation state will be left | |
100 | unchanged on existing accounts and set to ``false`` for new accounts. | |
101 | A user cannot be erased by deactivating with this API. For details on deactivating users see | |
102 | `Deactivate Account <#deactivate-account>`_. | |
103 | ||
104 | If the user already exists then optional parameters default to the current value. | |
105 | ||
106 | In order to re-activate an account, ``deactivated`` must be set to ``false``. If | |
107 | users do not log in via single-sign-on, a new ``password`` must be provided. | |
108 | ||
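As an illustrative sketch only (the homeserver URL, the localpart and the use of the
``requests`` library are assumptions, not part of this API), creating or updating
an account might look like:

.. code:: python

    import requests

    BASE_URL = "https://homeserver.example"  # hypothetical homeserver
    HEADERS = {"Authorization": "Bearer <admin_access_token>"}

    def create_or_modify_user(localpart, password=None, displayname=None, admin=False):
        """PUT the account: created if missing, modified otherwise."""
        user_id = "@%s:homeserver.example" % localpart
        body = {"admin": admin}
        # Optional fields are only sent when provided, so existing values
        # on an existing account are left unchanged.
        if password is not None:
            body["password"] = password
        if displayname is not None:
            body["displayname"] = displayname
        resp = requests.put(
            "%s/_synapse/admin/v2/users/%s" % (BASE_URL, user_id),
            headers=HEADERS,
            json=body,
        )
        resp.raise_for_status()
        return resp.json()
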
109 | List Accounts | |
110 | ============= | |
111 | ||
112 | This API returns all local user accounts. | |
113 | By default, the response is ordered by ascending user ID. | |
114 | ||
115 | The API is:: | |
116 | ||
117 | GET /_synapse/admin/v2/users?from=0&limit=10&guests=false | |
118 | ||
119 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
120 | server admin: see `README.rst <README.rst>`_. | |
121 | ||
122 | A response body like the following is returned: | |
123 | ||
124 | .. code:: json | |
125 | ||
126 | { | |
127 | "users": [ | |
128 | { | |
129 | "name": "<user_id1>", | |
130 | "is_guest": 0, | |
131 | "admin": 0, | |
132 | "user_type": null, | |
133 | "deactivated": 0, | |
134 | "shadow_banned": 0, | |
135 | "displayname": "<User One>", | |
136 | "avatar_url": null | |
137 | }, { | |
138 | "name": "<user_id2>", | |
139 | "is_guest": 0, | |
140 | "admin": 1, | |
141 | "user_type": null, | |
142 | "deactivated": 0, | |
143 | "shadow_banned": 0, | |
144 | "displayname": "<User Two>", | |
145 | "avatar_url": "<avatar_url>" | |
146 | } | |
147 | ], | |
148 | "next_token": "100", | |
149 | "total": 200 | |
150 | } | |
151 | ||
152 | To paginate, check for ``next_token`` and if present, call the endpoint again | |
153 | with ``from`` set to the value of ``next_token``. This will return a new page. | |
154 | ||
155 | If the endpoint does not return a ``next_token`` then there are no more users | |
156 | to paginate through. | |
157 | ||
158 | **Parameters** | |
159 | ||
160 | The following parameters should be set in the URL: | |
161 | ||
162 | - ``user_id`` - Is optional and filters to only return users with user IDs | |
163 | that contain this value. This parameter is ignored when using the ``name`` parameter. | |
164 | - ``name`` - Is optional and filters to only return users with user ID localparts | |
165 | **or** displaynames that contain this value. | |
166 | - ``guests`` - string representing a bool - Is optional and if ``false`` will **exclude** guest users. | |
167 | Defaults to ``true`` to include guest users. | |
168 | - ``deactivated`` - string representing a bool - Is optional and if ``true`` will **include** deactivated users. | |
169 | Defaults to ``false`` to exclude deactivated users. | |
170 | - ``limit`` - string representing a positive integer - Is optional but is used for pagination, | |
171 | denoting the maximum number of items to return in this call. Defaults to ``100``. | |
172 | - ``from`` - string representing a positive integer - Is optional but used for pagination, | |
173 | denoting the offset in the returned results. This should be treated as an opaque value and | |
174 | not explicitly set to anything other than the return value of ``next_token`` from a previous call. | |
175 | Defaults to ``0``. | |
176 | - ``order_by`` - The method by which to sort the returned list of users. | |
177 | If the ordered field has duplicates, the second order is always by ascending ``name``, | |
178 | which guarantees a stable ordering. Valid values are: | |
179 | ||
180 | - ``name`` - Users are ordered alphabetically by ``name``. This is the default. | |
181 | - ``is_guest`` - Users are ordered by ``is_guest`` status. | |
182 | - ``admin`` - Users are ordered by ``admin`` status. | |
183 | - ``user_type`` - Users are ordered alphabetically by ``user_type``. | |
184 | - ``deactivated`` - Users are ordered by ``deactivated`` status. | |
185 | - ``shadow_banned`` - Users are ordered by ``shadow_banned`` status. | |
186 | - ``displayname`` - Users are ordered alphabetically by ``displayname``. | |
187 | - ``avatar_url`` - Users are ordered alphabetically by avatar URL. | |
188 | ||
189 | - ``dir`` - Direction of user order. Either ``f`` for forwards or ``b`` for backwards. | |
190 | Setting this value to ``b`` will reverse the above sort order. Defaults to ``f``. | |
191 | ||
192 | Caution. The database only has indexes on the columns ``name`` and ``created_ts``. | |
193 | This means that if a different sort order is used (``is_guest``, ``admin``, | |
194 | ``user_type``, ``deactivated``, ``shadow_banned``, ``avatar_url`` or ``displayname``), | |
195 | this can cause a large load on the database, especially for large environments. | |
196 | ||
197 | **Response** | |
198 | ||
199 | The following fields are returned in the JSON response body: | |
200 | ||
201 | - ``users`` - An array of objects, each containing information about a user. | |
202 | User objects contain the following fields: | |
203 | ||
204 | - ``name`` - string - Fully-qualified user ID (ex. ``@user:server.com``). | |
205 | - ``is_guest`` - bool - Status if that user is a guest account. | |
206 | - ``admin`` - bool - Status if that user is a server administrator. | |
207 | - ``user_type`` - string - Type of the user. Normal users are type ``None``. | |
208 | This allows user type specific behaviour. There are also types ``support`` and ``bot``. | |
209 | - ``deactivated`` - bool - Status if that user has been marked as deactivated. | |
210 | - ``shadow_banned`` - bool - Status if that user has been marked as shadow banned. | |
211 | - ``displayname`` - string - The user's display name if they have set one. | |
212 | - ``avatar_url`` - string - The user's avatar URL if they have set one. | |
213 | ||
214 | - ``next_token``: string representing a positive integer - Indication for pagination. See above. | |
215 | - ``total`` - integer - Total number of users. | |
216 | ||
217 | ||
218 | Query current sessions for a user | |
219 | ================================= | |
220 | ||
221 | This API returns information about the active sessions for a specific user. | |
222 | ||
223 | The API is:: | |
224 | ||
225 | GET /_synapse/admin/v1/whois/<user_id> | |
226 | ||
227 | and:: | |
228 | ||
229 | GET /_matrix/client/r0/admin/whois/<userId> | |
230 | ||
231 | See also: `Client Server API Whois | |
232 | <https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid>`_ | |
233 | ||
234 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
235 | server admin: see `README.rst <README.rst>`_. | |
236 | ||
237 | It returns a JSON body like the following: | |
238 | ||
239 | .. code:: json | |
240 | ||
241 | { | |
242 | "user_id": "<user_id>", | |
243 | "devices": { | |
244 | "": { | |
245 | "sessions": [ | |
246 | { | |
247 | "connections": [ | |
248 | { | |
249 | "ip": "1.2.3.4", | |
250 | "last_seen": 1417222374433, | |
251 | "user_agent": "Mozilla/5.0 ..." | |
252 | }, | |
253 | { | |
254 | "ip": "1.2.3.10", | |
255 | "last_seen": 1417222374500, | |
256 | "user_agent": "Dalvik/2.1.0 ..." | |
257 | } | |
258 | ] | |
259 | } | |
260 | ] | |
261 | } | |
262 | } | |
263 | } | |
264 | ||
265 | ``last_seen`` is measured in milliseconds since the Unix epoch. | |
266 | ||
267 | Deactivate Account | |
268 | ================== | |
269 | ||
270 | This API deactivates an account. It removes active access tokens, resets the | |
271 | password, and deletes third-party IDs (to prevent the user requesting a | |
272 | password reset). | |
273 | ||
274 | It can also mark the user as GDPR-erased. This means messages sent by the | |
275 | user will still be visible by anyone that was in the room when these messages | |
276 | were sent, but hidden from users joining the room afterwards. | |
277 | ||
278 | The API is:: | |
279 | ||
280 | POST /_synapse/admin/v1/deactivate/<user_id> | |
281 | ||
282 | with a body of: | |
283 | ||
284 | .. code:: json | |
285 | ||
286 | { | |
287 | "erase": true | |
288 | } | |
289 | ||
290 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
291 | server admin: see `README.rst <README.rst>`_. | |
292 | ||
293 | The ``erase`` parameter is optional and defaults to ``false``. | |
294 | An empty body may be passed for backwards compatibility. | |
295 | ||
296 | The following actions are performed when deactivating a user: | |
297 | ||
298 | - Try to unbind 3PIDs from the identity server | |
299 | - Remove all 3PIDs from the homeserver | |
300 | - Delete all devices and E2EE keys | |
301 | - Delete all access tokens | |
302 | - Delete the password hash | |
303 | - Remove the user from all rooms they are a member of | |
304 | - Remove the user from the user directory | |
305 | - Reject all pending invites | |
306 | - Remove all account validity information related to the user | |
307 | ||
308 | The following additional actions are performed during deactivation if ``erase`` | |
309 | is set to ``true``: | |
310 | ||
311 | - Remove the user's display name | |
312 | - Remove the user's avatar URL | |
313 | - Mark the user as erased | |
314 | ||
315 | ||
316 | Reset password | |
317 | ============== | |
318 | ||
319 | Changes the password of another user. This will automatically log the user out of all their devices. | |
320 | ||
321 | The API is:: | |
322 | ||
323 | POST /_synapse/admin/v1/reset_password/<user_id> | |
324 | ||
325 | with a body of: | |
326 | ||
327 | .. code:: json | |
328 | ||
329 | { | |
330 | "new_password": "<secret>", | |
331 | "logout_devices": true | |
332 | } | |
333 | ||
334 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
335 | server admin: see `README.rst <README.rst>`_. | |
336 | ||
337 | The parameter ``new_password`` is required. | |
338 | The parameter ``logout_devices`` is optional and defaults to ``true``. | |
339 | ||
340 | Get whether a user is a server administrator or not | |
341 | =================================================== | |
342 | ||
343 | ||
344 | The API is:: | |
345 | ||
346 | GET /_synapse/admin/v1/users/<user_id>/admin | |
347 | ||
348 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
349 | server admin: see `README.rst <README.rst>`_. | |
350 | ||
351 | A response body like the following is returned: | |
352 | ||
353 | .. code:: json | |
354 | ||
355 | { | |
356 | "admin": true | |
357 | } | |
358 | ||
359 | ||
360 | Change whether a user is a server administrator or not | |
361 | ====================================================== | |
362 | ||
363 | Note that you cannot demote yourself. | |
364 | ||
365 | The API is:: | |
366 | ||
367 | PUT /_synapse/admin/v1/users/<user_id>/admin | |
368 | ||
369 | with a body of: | |
370 | ||
371 | .. code:: json | |
372 | ||
373 | { | |
374 | "admin": true | |
375 | } | |
376 | ||
377 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
378 | server admin: see `README.rst <README.rst>`_. | |
379 | ||
380 | ||
381 | List room memberships of a user | |
382 | ================================ | |
383 | Gets a list of all ``room_id`` that a specific ``user_id`` is a member of. | |
384 | ||
385 | The API is:: | |
386 | ||
387 | GET /_synapse/admin/v1/users/<user_id>/joined_rooms | |
388 | ||
389 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
390 | server admin: see `README.rst <README.rst>`_. | |
391 | ||
392 | A response body like the following is returned: | |
393 | ||
394 | .. code:: json | |
395 | ||
396 | { | |
397 | "joined_rooms": [ | |
398 | "!DuGcnbhHGaSZQoNQR:matrix.org", | |
399 | "!ZtSaPCawyWtxfWiIy:matrix.org" | |
400 | ], | |
401 | "total": 2 | |
402 | } | |
403 | ||
404 | The server returns the list of rooms of which both the user and the | |
405 | server are members. If the user is local, all of the rooms of which the | |
406 | user is a member are returned. | |
407 | ||
408 | **Parameters** | |
409 | ||
410 | The following parameters should be set in the URL: | |
411 | ||
412 | - ``user_id`` - fully qualified: for example, ``@user:server.com``. | |
413 | ||
414 | **Response** | |
415 | ||
416 | The following fields are returned in the JSON response body: | |
417 | ||
418 | - ``joined_rooms`` - An array of ``room_id``. | |
419 | - ``total`` - Number of rooms. | |
420 | ||
421 | ||
422 | List media of a user | |
423 | ==================== | |
424 | Gets a list of all local media that a specific ``user_id`` has created. | |
425 | By default, the response is ordered by descending creation date and ascending media ID. | |
426 | The newest media is on top. You can change the order with parameters | |
427 | ``order_by`` and ``dir``. | |
428 | ||
429 | The API is:: | |
430 | ||
431 | GET /_synapse/admin/v1/users/<user_id>/media | |
432 | ||
433 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
434 | server admin: see `README.rst <README.rst>`_. | |
435 | ||
436 | A response body like the following is returned: | |
437 | ||
438 | .. code:: json | |
439 | ||
440 | { | |
441 | "media": [ | |
442 | { | |
443 | "created_ts": 100400, | |
444 | "last_access_ts": null, | |
445 | "media_id": "qXhyRzulkwLsNHTbpHreuEgo", | |
446 | "media_length": 67, | |
447 | "media_type": "image/png", | |
448 | "quarantined_by": null, | |
449 | "safe_from_quarantine": false, | |
450 | "upload_name": "test1.png" | |
451 | }, | |
452 | { | |
453 | "created_ts": 200400, | |
454 | "last_access_ts": null, | |
455 | "media_id": "FHfiSnzoINDatrXHQIXBtahw", | |
456 | "media_length": 67, | |
457 | "media_type": "image/png", | |
458 | "quarantined_by": null, | |
459 | "safe_from_quarantine": false, | |
460 | "upload_name": "test2.png" | |
461 | } | |
462 | ], | |
463 | "next_token": 3, | |
464 | "total": 2 | |
465 | } | |
466 | ||
467 | To paginate, check for ``next_token`` and if present, call the endpoint again | |
468 | with ``from`` set to the value of ``next_token``. This will return a new page. | |
469 | ||
470 | If the endpoint does not return a ``next_token`` then there are no more | |
471 | media to paginate through. | |
472 | ||
473 | **Parameters** | |
474 | ||
475 | The following parameters should be set in the URL: | |
476 | ||
477 | - ``user_id`` - string - fully qualified: for example, ``@user:server.com``. | |
478 | - ``limit``: string representing a positive integer - Is optional but is used for pagination, | |
479 | denoting the maximum number of items to return in this call. Defaults to ``100``. | |
480 | - ``from``: string representing a positive integer - Is optional but used for pagination, | |
481 | denoting the offset in the returned results. This should be treated as an opaque value and | |
482 | not explicitly set to anything other than the return value of ``next_token`` from a previous call. | |
483 | Defaults to ``0``. | |
484 | - ``order_by`` - The method by which to sort the returned list of media. | |
485 | If the ordered field has duplicates, the second order is always by ascending ``media_id``, | |
486 | which guarantees a stable ordering. Valid values are: | |
487 | ||
488 | - ``media_id`` - Media are ordered alphabetically by ``media_id``. | |
489 | - ``upload_name`` - Media are ordered alphabetically by the name the media was uploaded with. | |
490 | - ``created_ts`` - Media are ordered by when the content was uploaded in ms. | |
491 | Smallest to largest. This is the default. | |
492 | - ``last_access_ts`` - Media are ordered by when the content was last accessed in ms. | |
493 | Smallest to largest. | |
494 | - ``media_length`` - Media are ordered by length of the media in bytes. | |
495 | Smallest to largest. | |
496 | - ``media_type`` - Media are ordered alphabetically by MIME-type. | |
497 | - ``quarantined_by`` - Media are ordered alphabetically by the user ID that | |
498 | initiated the quarantine request for this media. | |
499 | - ``safe_from_quarantine`` - Media are ordered by whether the media is marked | |
500 | as safe from quarantining. | |
501 | ||
502 | - ``dir`` - Direction of media order. Either ``f`` for forwards or ``b`` for backwards. | |
503 | Setting this value to ``b`` will reverse the above sort order. Defaults to ``f``. | |
504 | ||
505 | If neither ``order_by`` nor ``dir`` is set, the default order is newest media on top | |
506 | (corresponds to ``order_by`` = ``created_ts`` and ``dir`` = ``b``). | |
507 | ||
508 | Caution. The database only has indexes on the columns ``media_id``, | |
509 | ``user_id`` and ``created_ts``. This means that if a different sort order is used | |
510 | (``upload_name``, ``last_access_ts``, ``media_length``, ``media_type``, | |
511 | ``quarantined_by`` or ``safe_from_quarantine``), this can cause a large load on the | |
512 | database, especially for large environments. | |
513 | ||
514 | **Response** | |
515 | ||
516 | The following fields are returned in the JSON response body: | |
517 | ||
518 | - ``media`` - An array of objects, each containing information about a media item. | |
519 | Media objects contain the following fields: | |
520 | ||
521 | - ``created_ts`` - integer - Timestamp when the content was uploaded in ms. | |
522 | - ``last_access_ts`` - integer - Timestamp when the content was last accessed in ms. | |
523 | - ``media_id`` - string - The id used to refer to the media. | |
524 | - ``media_length`` - integer - Length of the media in bytes. | |
525 | - ``media_type`` - string - The MIME-type of the media. | |
526 | - ``quarantined_by`` - string - The user ID that initiated the quarantine request | |
527 | for this media. | |
528 | ||
529 | - ``safe_from_quarantine`` - bool - Status if this media is safe from quarantining. | |
530 | - ``upload_name`` - string - The name the media was uploaded with. | |
531 | ||
532 | - ``next_token``: integer - Indication for pagination. See above. | |
533 | - ``total`` - integer - Total number of media. | |
534 | ||
535 | Login as a user | |
536 | =============== | |
537 | ||
538 | Get an access token that can be used to authenticate as that user. This is | |
539 | useful when admins wish to perform actions on behalf of a user. | |
540 | ||
541 | The API is:: | |
542 | ||
543 | POST /_synapse/admin/v1/users/<user_id>/login | |
544 | {} | |
545 | ||
546 | An optional ``valid_until_ms`` field can be specified in the request body as an | |
547 | integer timestamp that specifies when the token should expire. By default tokens | |
548 | do not expire. | |
549 | ||
550 | A response body like the following is returned: | |
551 | ||
552 | .. code:: json | |
553 | ||
554 | { | |
555 | "access_token": "<opaque_access_token_string>" | |
556 | } | |
557 | ||
558 | ||
559 | This API does *not* generate a new device for the user, and so will not appear | |
560 | in their ``/devices`` list; in general, the target user should not be able to | |
561 | tell that their account has been logged in to. | |
562 | ||
563 | To expire the token call the standard ``/logout`` API with the token. | |
564 | ||
565 | Note: The token will expire if the *admin* user calls ``/logout/all`` from any | |
566 | of their devices, but the token will *not* expire if the target user does the | |
567 | same. | |
568 | ||
569 | ||
570 | User devices | |
571 | ============ | |
572 | ||
573 | List all devices | |
574 | ---------------- | |
575 | Gets information about all devices for a specific ``user_id``. | |
576 | ||
577 | The API is:: | |
578 | ||
579 | GET /_synapse/admin/v2/users/<user_id>/devices | |
580 | ||
581 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
582 | server admin: see `README.rst <README.rst>`_. | |
583 | ||
584 | A response body like the following is returned: | |
585 | ||
586 | .. code:: json | |
587 | ||
588 | { | |
589 | "devices": [ | |
590 | { | |
591 | "device_id": "QBUAZIFURK", | |
592 | "display_name": "android", | |
593 | "last_seen_ip": "1.2.3.4", | |
594 | "last_seen_ts": 1474491775024, | |
595 | "user_id": "<user_id>" | |
596 | }, | |
597 | { | |
598 | "device_id": "AUIECTSRND", | |
599 | "display_name": "ios", | |
600 | "last_seen_ip": "1.2.3.5", | |
601 | "last_seen_ts": 1474491775025, | |
602 | "user_id": "<user_id>" | |
603 | } | |
604 | ], | |
605 | "total": 2 | |
606 | } | |
607 | ||
608 | **Parameters** | |
609 | ||
610 | The following parameters should be set in the URL: | |
611 | ||
612 | - ``user_id`` - fully qualified: for example, ``@user:server.com``. | |
613 | ||
614 | **Response** | |
615 | ||
616 | The following fields are returned in the JSON response body: | |
617 | ||
618 | - ``devices`` - An array of objects, each containing information about a device. | |
619 | Device objects contain the following fields: | |
620 | ||
621 | - ``device_id`` - Identifier of device. | |
622 | - ``display_name`` - Display name set by the user for this device. | |
623 | Absent if no name has been set. | |
624 | - ``last_seen_ip`` - The IP address where this device was last seen. | |
625 | (May be a few minutes out of date, for efficiency reasons). | |
626 | - ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this | |
627 | device was last seen. (May be a few minutes out of date, for efficiency reasons). | |
628 | - ``user_id`` - Owner of device. | |
629 | ||
630 | - ``total`` - Total number of user's devices. | |
631 | ||
632 | Delete multiple devices | |
633 | ----------------------- | |
634 | Deletes the given devices for a specific ``user_id``, and invalidates | |
635 | any access token associated with them. | |
636 | ||
637 | The API is:: | |
638 | ||
639 | POST /_synapse/admin/v2/users/<user_id>/delete_devices | |
640 | ||
641 | { | |
642 | "devices": [ | |
643 | "QBUAZIFURK", | |
644 | "AUIECTSRND" | |
645 | ] | |
646 | } | |
647 | ||
648 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
649 | server admin: see `README.rst <README.rst>`_. | |
650 | ||
651 | An empty JSON dict is returned. | |
652 | ||
653 | **Parameters** | |
654 | ||
655 | The following parameters should be set in the URL: | |
656 | ||
657 | - ``user_id`` - fully qualified: for example, ``@user:server.com``. | |
658 | ||
659 | The following fields are required in the JSON request body: | |
660 | ||
661 | - ``devices`` - The list of device IDs to delete. | |
662 | ||
663 | Show a device | |
664 | --------------- | |
665 | Gets information on a single device, by ``device_id`` for a specific ``user_id``. | |
666 | ||
667 | The API is:: | |
668 | ||
669 | GET /_synapse/admin/v2/users/<user_id>/devices/<device_id> | |
670 | ||
671 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
672 | server admin: see `README.rst <README.rst>`_. | |
673 | ||
674 | A response body like the following is returned: | |
675 | ||
676 | .. code:: json | |
677 | ||
678 | { | |
679 | "device_id": "<device_id>", | |
680 | "display_name": "android", | |
681 | "last_seen_ip": "1.2.3.4", | |
682 | "last_seen_ts": 1474491775024, | |
683 | "user_id": "<user_id>" | |
684 | } | |
685 | ||
686 | **Parameters** | |
687 | ||
688 | The following parameters should be set in the URL: | |
689 | ||
690 | - ``user_id`` - fully qualified: for example, ``@user:server.com``. | |
691 | - ``device_id`` - The device to retrieve. | |
692 | ||
693 | **Response** | |
694 | ||
695 | The following fields are returned in the JSON response body: | |
696 | ||
697 | - ``device_id`` - Identifier of device. | |
698 | - ``display_name`` - Display name set by the user for this device. | |
699 | Absent if no name has been set. | |
700 | - ``last_seen_ip`` - The IP address where this device was last seen. | |
701 | (May be a few minutes out of date, for efficiency reasons). | |
702 | - ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this | |
703 | device was last seen. (May be a few minutes out of date, for efficiency reasons). | |
704 | - ``user_id`` - Owner of device. | |
705 | ||
706 | Update a device | |
707 | --------------- | |
708 | Updates the metadata on the given ``device_id`` for a specific ``user_id``. | |
709 | ||
710 | The API is:: | |
711 | ||
712 | PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id> | |
713 | ||
714 | { | |
715 | "display_name": "My other phone" | |
716 | } | |
717 | ||
718 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
719 | server admin: see `README.rst <README.rst>`_. | |
720 | ||
721 | An empty JSON dict is returned. | |
722 | ||
723 | **Parameters** | |
724 | ||
725 | The following parameters should be set in the URL: | |
726 | ||
727 | - ``user_id`` - fully qualified: for example, ``@user:server.com``. | |
728 | - ``device_id`` - The device to update. | |
729 | ||
730 | The following fields can be set in the JSON request body: | |
731 | ||
732 | - ``display_name`` - The new display name for this device. If not given, | |
733 | the display name is unchanged. | |
734 | ||
735 | Delete a device | |
736 | --------------- | |
737 | Deletes the given ``device_id`` for a specific ``user_id``, | |
738 | and invalidates any access token associated with it. | |
739 | ||
740 | The API is:: | |
741 | ||
742 | DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id> | |
743 | ||
744 | {} | |
745 | ||
746 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
747 | server admin: see `README.rst <README.rst>`_. | |
748 | ||
749 | An empty JSON dict is returned. | |
750 | ||
751 | **Parameters** | |
752 | ||
753 | The following parameters should be set in the URL: | |
754 | ||
755 | - ``user_id`` - fully qualified: for example, ``@user:server.com``. | |
756 | - ``device_id`` - The device to delete. | |
757 | ||
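A minimal Python sketch of the deletion (placeholder values):

.. code:: python

    import requests

    # Placeholders: substitute your homeserver, admin token, user and device.
    resp = requests.delete(
        "https://homeserver.example/_synapse/admin/v2/users/@user:server.com/devices/QBUAZIFURK",
        headers={"Authorization": "Bearer <admin_access_token>"},
        json={},  # the documented request carries an empty JSON body
    )
    resp.raise_for_status()  # an empty JSON dict is returned on success
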
758 | List all pushers | |
759 | ================ | |
760 | Gets information about all pushers for a specific ``user_id``. | |
761 | ||
762 | The API is:: | |
763 | ||
764 | GET /_synapse/admin/v1/users/<user_id>/pushers | |
765 | ||
766 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
767 | server admin: see `README.rst <README.rst>`_. | |
768 | ||
769 | A response body like the following is returned: | |
770 | ||
771 | .. code:: json | |
772 | ||
773 | { | |
774 | "pushers": [ | |
775 | { | |
776 | "app_display_name":"HTTP Push Notifications", | |
777 | "app_id":"m.http", | |
778 | "data": { | |
779 | "url":"example.com" | |
780 | }, | |
781 | "device_display_name":"pushy push", | |
782 | "kind":"http", | |
783 | "lang":"None", | |
784 | "profile_tag":"", | |
785 | "pushkey":"a@example.com" | |
786 | } | |
787 | ], | |
788 | "total": 1 | |
789 | } | |
790 | ||
791 | **Parameters** | |
792 | ||
793 | The following parameters should be set in the URL: | |
794 | ||
795 | - ``user_id`` - fully qualified: for example, ``@user:server.com``. | |
796 | ||
797 | **Response** | |
798 | ||
799 | The following fields are returned in the JSON response body: | |
800 | ||
801 | - ``pushers`` - An array containing the current pushers for the user. | |
802 | ||
803 | - ``app_display_name`` - string - A string that will allow the user to identify | |
804 | what application owns this pusher. | |
805 | ||
806 | - ``app_id`` - string - This is a reverse-DNS style identifier for the application. | |
807 | Max length, 64 chars. | |
808 | ||
809 | - ``data`` - A dictionary of information for the pusher implementation itself. | |
810 | ||
811 | - ``url`` - string - Required if ``kind`` is ``http``. The URL to use to send | |
812 | notifications to. | |
813 | ||
814 | - ``format`` - string - The format to use when sending notifications to the | |
815 | Push Gateway. | |
816 | ||
817 | - ``device_display_name`` - string - A string that will allow the user to identify | |
818 | what device owns this pusher. | |
819 | ||
820 | - ``profile_tag`` - string - This string determines which set of device specific rules | |
821 | this pusher executes. | |
822 | ||
823 | - ``kind`` - string - The kind of pusher. "http" is a pusher that sends HTTP pokes. | |
824 | ||
825 | - ``lang`` - string - The preferred language for receiving notifications | |
826 | (e.g. 'en' or 'en-US'). | |
827 | ||
830 | - ``pushkey`` - string - This is a unique identifier for this pusher. | |
831 | Max length, 512 bytes. | |
832 | ||
833 | - ``total`` - integer - Number of pushers. | |
834 | ||
835 | See also `Client-Server API Spec <https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers>`_ | |
836 | ||
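To make the response shape above concrete, here is a hedged Python sketch that
lists a user's pushers (homeserver URL and token are placeholders):

.. code:: python

    import requests

    # Placeholders: substitute your homeserver, admin token and user.
    resp = requests.get(
        "https://homeserver.example/_synapse/admin/v1/users/@user:server.com/pushers",
        headers={"Authorization": "Bearer <admin_access_token>"},
    )
    body = resp.json()
    print(f"{body['total']} pusher(s)")
    for pusher in body["pushers"]:
        print(pusher["kind"], pusher["app_id"], pusher["pushkey"])
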
837 | Shadow-banning users | |
838 | ==================== | |
839 | ||
840 | Shadow-banning is a useful tool for moderating malicious or egregiously abusive users. | |
841 | A shadow-banned user receives successful responses to their client-server API requests, | |
842 | but the events are not propagated into rooms. This can be an effective tool as it | |
843 | (hopefully) takes the user longer to realise they are being moderated before they | |
844 | pivot to another account. | |
845 | ||
846 | Shadow-banning a user should be used as a tool of last resort and may lead to confusing | |
847 | or broken behaviour for the client. A shadow-banned user will not receive any | |
848 | notification, and it is generally more appropriate to ban or kick abusive users. | |
849 | A shadow-banned user will be unable to contact anyone on the server. | |
850 | ||
851 | The API is:: | |
852 | ||
853 | POST /_synapse/admin/v1/users/<user_id>/shadow_ban | |
854 | ||
855 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
856 | server admin: see `README.rst <README.rst>`_. | |
857 | ||
858 | An empty JSON dict is returned. | |
859 | ||
860 | **Parameters** | |
861 | ||
862 | The following parameters should be set in the URL: | |
863 | ||
864 | - ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must | |
865 | be local. | |
866 | ||
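For illustration, a shadow-ban could be issued with a small Python sketch
(placeholder homeserver, token and user):

.. code:: python

    import requests

    # Placeholders: the target user must be local to this homeserver.
    resp = requests.post(
        "https://homeserver.example/_synapse/admin/v1/users/@user:server.com/shadow_ban",
        headers={"Authorization": "Bearer <admin_access_token>"},
    )
    assert resp.json() == {}  # an empty JSON dict is returned on success
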
867 | Override ratelimiting for users | |
868 | =============================== | |
869 | ||
870 | This API allows you to override or disable ratelimiting for a specific user. | |
871 | There are specific APIs to set, get and delete a ratelimit. | |
872 | ||
873 | Get status of ratelimit | |
874 | ----------------------- | |
875 | ||
876 | The API is:: | |
877 | ||
878 | GET /_synapse/admin/v1/users/<user_id>/override_ratelimit | |
879 | ||
880 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
881 | server admin: see `README.rst <README.rst>`_. | |
882 | ||
883 | A response body like the following is returned: | |
884 | ||
885 | .. code:: json | |
886 | ||
887 | { | |
888 | "messages_per_second": 0, | |
889 | "burst_count": 0 | |
890 | } | |
891 | ||
892 | **Parameters** | |
893 | ||
894 | The following parameters should be set in the URL: | |
895 | ||
896 | - ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must | |
897 | be local. | |
898 | ||
899 | **Response** | |
900 | ||
901 | The following fields are returned in the JSON response body: | |
902 | ||
903 | - ``messages_per_second`` - integer - The number of actions that can | |
904 | be performed in a second. ``0`` means that ratelimiting is disabled for this user. | |
905 | - ``burst_count`` - integer - How many actions can be performed before | |
906 | being limited. | |
907 | ||
908 | If **no** custom ratelimit is set, an empty JSON dict is returned. | |
909 | ||
910 | .. code:: json | |
911 | ||
912 | {} | |
913 | ||
914 | Set ratelimit | |
915 | ------------- | |
916 | ||
917 | The API is:: | |
918 | ||
919 | POST /_synapse/admin/v1/users/<user_id>/override_ratelimit | |
920 | ||
921 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
922 | server admin: see `README.rst <README.rst>`_. | |
923 | ||
924 | A response body like the following is returned: | |
925 | ||
926 | .. code:: json | |
927 | ||
928 | { | |
929 | "messages_per_second": 0, | |
930 | "burst_count": 0 | |
931 | } | |
932 | ||
933 | **Parameters** | |
934 | ||
935 | The following parameters should be set in the URL: | |
936 | ||
937 | - ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must | |
938 | be local. | |
939 | ||
940 | Body parameters: | |
941 | ||
942 | - ``messages_per_second`` - positive integer, optional. The number of actions that can | |
943 | be performed in a second. Defaults to ``0``. | |
944 | - ``burst_count`` - positive integer, optional. How many actions can be performed | |
945 | before being limited. Defaults to ``0``. | |
946 | ||
947 | To disable ratelimiting for a user, set both values to ``0``. | |
948 | ||
949 | **Response** | |
950 | ||
951 | The following fields are returned in the JSON response body: | |
952 | ||
953 | - ``messages_per_second`` - integer - The number of actions that can | |
954 | be performed in a second. | |
955 | - ``burst_count`` - integer - How many actions can be performed before | |
956 | being limited. | |
957 | ||
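Putting the set and get calls together, a hedged Python sketch (placeholder
homeserver, token and user) that disables ratelimiting for a user and then
reads the override back:

.. code:: python

    import requests

    # Placeholders: substitute your homeserver, admin token and (local) user.
    base = "https://homeserver.example/_synapse/admin/v1/users/@user:server.com"
    headers = {"Authorization": "Bearer <admin_access_token>"}

    # Setting both values to 0 disables ratelimiting for this user.
    requests.post(
        f"{base}/override_ratelimit",
        headers=headers,
        json={"messages_per_second": 0, "burst_count": 0},
    )

    # Read the override back to confirm it took effect.
    print(requests.get(f"{base}/override_ratelimit", headers=headers).json())
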
958 | Delete ratelimit | |
959 | ---------------- | |
960 | ||
961 | The API is:: | |
962 | ||
963 | DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit | |
964 | ||
965 | To use it, you will need to authenticate by providing an ``access_token`` for a | |
966 | server admin: see `README.rst <README.rst>`_. | |
967 | ||
968 | An empty JSON dict is returned. | |
969 | ||
970 | .. code:: json | |
971 | ||
972 | {} | |
973 | ||
974 | **Parameters** | |
975 | ||
976 | The following parameters should be set in the URL: | |
977 | ||
978 | - ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must | |
979 | be local. | |
980 |
0 | # Version API | |
1 | ||
2 | This API returns the running Synapse version and the Python version | |
3 | on which Synapse is being run. This is useful when a Synapse instance | |
4 | is behind a proxy that does not forward the 'Server' header (which also | |
5 | contains Synapse version information). | |
6 | ||
7 | The API is: | |
8 | ||
9 | ``` | |
10 | GET /_synapse/admin/v1/server_version | |
11 | ``` | |
12 | ||
13 | It returns a JSON body like the following: | |
14 | ||
15 | ```json | |
16 | { | |
17 | "server_version": "0.99.2rc1 (b=develop, abcdef123)", | |
18 | "python_version": "3.6.8" | |
19 | } | |
20 | ``` |
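As a quick illustration, the endpoint can be queried from Python with the
`requests` library; the homeserver URL is a placeholder. Note that, unlike the
other admin APIs, the section above does not call for an `access_token`:

```python
import requests

# Placeholder homeserver URL.
resp = requests.get("https://homeserver.example/_synapse/admin/v1/server_version")
body = resp.json()
print("Synapse:", body["server_version"])
print("Python: ", body["python_version"])
```
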
0 | Version API | |
1 | =========== | |
2 | ||
3 | This API returns the running Synapse version and the Python version | |
4 | on which Synapse is being run. This is useful when a Synapse instance | |
5 | is behind a proxy that does not forward the 'Server' header (which also | |
6 | contains Synapse version information). | |
7 | ||
8 | The API is:: | |
9 | ||
10 | GET /_synapse/admin/v1/server_version | |
11 | ||
12 | It returns a JSON body like the following: | |
13 | ||
14 | .. code:: json | |
15 | ||
16 | { | |
17 | "server_version": "0.99.2rc1 (b=develop, abcdef123)", | |
18 | "python_version": "3.6.8" | |
19 | } |
121 | 121 | that our active branches are ordered thus, from more-stable to less-stable: |
122 | 122 | |
123 | 123 | * `master` (tracks our last release). |
124 | * `release-vX.Y.Z` (the branch where we prepare the next release)<sup | |
124 | * `release-vX.Y` (the branch where we prepare the next release)<sup | |
125 | 125 | id="a3">[3](#f3)</sup>. |
126 | 126 | * PR branches which are targeting the release. |
127 | 127 | * `develop` (our "mainline" branch containing our bleeding-edge). |
128 | 128 | * regular PR branches. |
129 | 129 | |
130 | 130 | The corollary is: if you have a bugfix that needs to land in both |
131 | `release-vX.Y.Z` *and* `develop`, then you should base your PR on | |
132 | `release-vX.Y.Z`, get it merged there, and then merge from `release-vX.Y.Z` to | |
131 | `release-vX.Y` *and* `develop`, then you should base your PR on | |
132 | `release-vX.Y`, get it merged there, and then merge from `release-vX.Y` to | |
133 | 133 | `develop`. (If a fix lands in `develop` and we later need it in a |
134 | 134 | release-branch, we can of course cherry-pick it, but landing it in the release |
135 | 135 | branch first helps reduce the chance of annoying conflicts.) |
144 | 144 | |
145 | 145 | <b id="f3">[3]</b>: Very, very occasionally (I think this has happened once in |
146 | 146 | the history of Synapse), we've had two releases in flight at once. Obviously, |
147 | `release-v1.2.3` is more-stable than `release-v1.3.0`. [^](#a3) | |
147 | `release-v1.2` is more-stable than `release-v1.3`. [^](#a3) |
0 | <!-- | |
1 | Include the contents of CONTRIBUTING.md from the project root (where GitHub likes it | |
2 | to be) | |
3 | --> | |
4 | # Contributing | |
5 | ||
6 | {{#include ../../CONTRIBUTING.md}} |
0 | # Internal Documentation | |
1 | ||
2 | This section covers implementation documentation for various parts of Synapse. | |
3 | ||
4 | If a developer is planning to make a change to a feature of Synapse, it can be useful for | |
5 | general documentation of how that feature is implemented to be available. This saves the | |
6 | developer time, as they won't need to work out how the feature works purely by | |
7 | reading the code. | |
8 | ||
9 | Documentation that would be more useful from the perspective of a system administrator, | |
10 | rather than a developer who's intending to change the code, should instead be placed | |
11 | under the Usage section of the documentation.⏎
Binary diff not shown
0 | <?xml version="1.0" encoding="UTF-8" standalone="no"?> | |
1 | <svg | |
2 | xmlns:dc="http://purl.org/dc/elements/1.1/" | |
3 | xmlns:cc="http://creativecommons.org/ns#" | |
4 | xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" | |
5 | xmlns:svg="http://www.w3.org/2000/svg" | |
6 | xmlns="http://www.w3.org/2000/svg" | |
7 | xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" | |
8 | xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" | |
9 | viewBox="0 0 199.7 184.2" | |
10 | version="1.1" | |
11 | id="svg62" | |
12 | sodipodi:docname="mdbook-favicon.svg" | |
13 | inkscape:version="1.0.2 (e86c870879, 2021-01-15, custom)"> | |
14 | <metadata | |
15 | id="metadata68"> | |
16 | <rdf:RDF> | |
17 | <cc:Work | |
18 | rdf:about=""> | |
19 | <dc:format>image/svg+xml</dc:format> | |
20 | <dc:type | |
21 | rdf:resource="http://purl.org/dc/dcmitype/StillImage" /> | |
22 | </cc:Work> | |
23 | </rdf:RDF> | |
24 | </metadata> | |
25 | <defs | |
26 | id="defs66" /> | |
27 | <sodipodi:namedview | |
28 | pagecolor="#ffffff" | |
29 | bordercolor="#666666" | |
30 | borderopacity="1" | |
31 | objecttolerance="10" | |
32 | gridtolerance="10" | |
33 | guidetolerance="10" | |
34 | inkscape:pageopacity="0" | |
35 | inkscape:pageshadow="2" | |
36 | inkscape:window-width="1920" | |
37 | inkscape:window-height="1026" | |
38 | id="namedview64" | |
39 | showgrid="false" | |
40 | inkscape:zoom="3.2245912" | |
41 | inkscape:cx="84.790185" | |
42 | inkscape:cy="117.96478" | |
43 | inkscape:window-x="0" | |
44 | inkscape:window-y="0" | |
45 | inkscape:window-maximized="1" | |
46 | inkscape:current-layer="svg62" /> | |
47 | <style | |
48 | id="style58"> | |
49 | @media (prefers-color-scheme: dark) { | |
50 | svg { fill: white; } | |
51 | } | |
52 | </style> | |
53 | <path | |
54 | d="m 189.5,36.8 c 0.2,2.8 0,5.1 -0.6,6.8 L 153,162 c -0.6,2.1 -2,3.7 -4.2,5 -2.2,1.2 -4.4,1.9 -6.7,1.9 H 31.4 c -9.6,0 -15.3,-2.8 -17.3,-8.4 -0.8,-2.2 -0.8,-3.9 0.1,-5.2 0.9,-1.2 2.4,-1.8 4.6,-1.8 H 123 c 7.4,0 12.6,-1.4 15.4,-4.1 2.8,-2.7 5.7,-8.9 8.6,-18.4 L 179.9,22.4 c 1.8,-5.9 1,-11.1 -2.2,-15.6 C 174.5,2.3 169.9,0 164,0 H 72.7 c -1,0 -3.1,0.4 -6.1,1.1 L 66.7,0.7 C 64.5,0.2 62.6,0 61,0.1 c -1.6,0.1 -3,0.5 -4.3,1.4 -1.3,0.9 -2.4,1.8 -3.2,2.8 -0.8,1 -1.5,2.2 -2.3,3.8 -0.8,1.6 -1.4,3 -1.9,4.3 -0.5,1.3 -1.1,2.7 -1.8,4.2 -0.7,1.5 -1.3,2.7 -2,3.7 -0.5,0.6 -1.2,1.5 -2,2.5 -0.8,1 -1.6,2 -2.2,2.8 -0.6,0.8 -0.9,1.5 -1.1,2.2 -0.2,0.7 -0.1,1.8 0.2,3.2 0.3,1.4 0.4,2.4 0.4,3.1 -0.3,3 -1.4,6.9 -3.3,11.6 -1.9,4.7 -3.6,8.1 -5.1,10.1 -0.3,0.4 -1.2,1.3 -2.6,2.7 -1.4,1.4 -2.3,2.6 -2.6,3.7 -0.3,0.4 -0.3,1.5 -0.1,3.4 0.3,1.8 0.4,3.1 0.3,3.8 -0.3,2.7 -1.3,6.3 -3,10.8 -2.406801,6.370944 -3.4,8.2 -5,11 -0.2,0.5 -0.9,1.4 -2,2.8 -1.1,1.4 -1.8,2.5 -2,3.4 -0.2,0.6 -0.1,1.8 0.1,3.4 0.2,1.6 0.2,2.8 -0.1,3.6 -0.6,3 -1.8,6.7 -3.6,11 -1.8,4.3 -3.6,7.9 -5.4,11 -0.5,0.8 -1.1,1.7 -2,2.8 -0.8,1.1 -1.5,2 -2,2.8 -0.5,0.8 -0.8,1.6 -1,2.5 -0.1,0.5 0,1.3 0.4,2.3 0.3,1.1 0.4,1.9 0.4,2.6 -0.1,1.1 -0.2,2.6 -0.5,4.4 -0.2,1.8 -0.4,2.9 -0.4,3.2 -1.8,4.8 -1.7,9.9 0.2,15.2 2.2,6.2 6.2,11.5 11.9,15.8 5.7,4.3 11.7,6.4 17.8,6.4 h 110.7 c 5.2,0 10.1,-1.7 14.7,-5.2 4.6,-3.5 7.7,-7.8 9.2,-12.9 l 33,-108.6 c 1.8,-5.8 1,-10.9 -2.2,-15.5 -1.7,-2.5 -4,-4.2 -7.1,-5.4 z M 38.14858,105.59813 60.882735,41.992545 h 10.8 c 6.340631,0 33.351895,0.778957 70.804135,0.970479 -18.18245,63.254766 0,0 -18.18245,63.254766 -23.00947,-0.10382 -63.362955,-0.6218 -72.55584,-0.51966 -18,0.2 -13.6,-0.1 -13.6,-0.1 z m 80.621,-5.891206 c 15.19043,-50.034423 0,1e-5 15.19043,-50.034423 l -11.90624,-0.13228 2.73304,-9.302941 -44.32863,0.07339 -2.532953,8.036036 -11.321128,-0.18864 -17.955519,51.440073 c 0.02698,0.027 4.954586,0.0514 12.187488,0.0717 l -2.997994,9.804886 c 11.36463,0.0271 1.219679,-0.0736 46.117666,-0.31499 l 2.65246,-9.571696 c 7.08021,0.14819 11.59705,0.13117 12.16138,0.1189 z m -56.149615,-3.855606 13.7,-42.5 h 9.8 l 1.194896,32.99936 23.205109,-32.99936 h 9.9 l -13.6,42.5 h -7.099996 l 12.499996,-35.4 -24.50001,35.4 h -6.799995 l -0.8,-35 -10.8,35 z" | |
55 | id="path60" | |
56 | sodipodi:nodetypes="ccccssccsssccsssccsssssscsssscssscccscscscsccsccccccssssccccccsccsccccccccccccccccccccccccccccc" /> | |
57 | </svg> |
2915 | 2915 | # Optional password if configured on the Redis instance |
2916 | 2916 | # |
2917 | 2917 | #password: <secret_password> |
2918 | ||
2919 | ||
2920 | # Enable experimental features in Synapse. | |
2921 | # | |
2922 | # Experimental features might break or be removed without a deprecation | |
2923 | # period. | |
2924 | # | |
2925 | experimental_features: | |
2926 | # Support for Spaces (MSC1772), it enables the following: | |
2927 | # | |
2928 | # * The Spaces Summary API (MSC2946). | |
2929 | # * Restricting room membership based on space membership (MSC3083). | |
2930 | # | |
2931 | # Uncomment to disable support for Spaces. | |
2932 | #spaces_enabled: false |
0 | <!-- | |
1 | Include the contents of INSTALL.md from the project root without moving it, which may | |
2 | break links around the internet. Additionally, note that SUMMARY.md is unable to | |
3 | directly link to content outside of the docs/ directory. So we use this file as a | |
4 | redirection. | |
5 | --> | |
6 | {{#include ../../INSTALL.md}}⏎ |
3 | 3 | TURN. |
4 | 4 | |
5 | 5 | The synapse Matrix Home Server supports integration with TURN server via the |
6 | [TURN server REST API](<http://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This | |
6 | [TURN server REST API](<https://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This | |
7 | 7 | allows the Home Server to generate credentials that are valid for use on the |
8 | 8 | TURN server through the use of a secret shared between the Home Server and the |
9 | 9 | TURN server. |
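To make the shared-secret scheme concrete, here is a hedged Python sketch of
the credential derivation described in the linked draft (the function name,
secret and user below are illustrative, not Synapse's own code): the username
embeds an expiry timestamp, and the password is the base64-encoded HMAC-SHA1 of
that username under the shared secret.

```python
import base64
import hashlib
import hmac
import time

def turn_credentials(shared_secret: str, user: str, ttl: int = 86400):
    """Derive ephemeral TURN credentials per draft-uberti-behave-turn-rest-00."""
    expiry = int(time.time()) + ttl
    username = f"{expiry}:{user}"
    digest = hmac.new(
        shared_secret.encode("utf-8"), username.encode("utf-8"), hashlib.sha1
    ).digest()
    return username, base64.b64encode(digest).decode("ascii")

username, password = turn_credentials("<turn_shared_secret>", "@user:server.com")
```
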
0 | <!-- | |
1 | Include the contents of UPGRADE.rst from the project root without moving it, which may | |
2 | break links around the internet. Additionally, note that SUMMARY.md is unable to | |
3 | directly link to content outside of the docs/ directory. So we use this file as a | |
4 | redirection. | |
5 | --> | |
6 | {{#include ../../UPGRADE.rst}}⏎ |
0 | # Administration | |
1 | ||
2 | This section contains information on managing your Synapse homeserver. This includes: | |
3 | ||
4 | * Managing users, rooms and media via the Admin API. | |
5 | * Setting up metrics and monitoring to give you insight into your homeserver's health. | |
6 | * Configuring structured logging.⏎ |
0 | # The Admin API | |
1 | ||
2 | ## Authenticate as a server admin | |
3 | ||
4 | Many of the API calls in the admin API will require an `access_token` for a | |
5 | server admin. (Note that a server admin is distinct from a room admin.) | |
6 | ||
7 | A user can be marked as a server admin by updating the database directly, e.g.: | |
8 | ||
9 | ```sql | |
10 | UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'; | |
11 | ``` | |
12 | ||
13 | A new server admin user can also be created using the `register_new_matrix_user` | |
14 | command. This is a script that is located in the `scripts/` directory, or possibly | |
15 | already on your `$PATH` depending on how Synapse was installed. | |
16 | ||
17 | Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings. | |
18 | ||
19 | ## Making an Admin API request | |
20 | Once you have your `access_token`, you will need to authenticate each request to an Admin API endpoint by | |
21 | providing the token as either a query parameter or a request header. To add it as a request header in cURL: | |
22 | ||
23 | ```sh | |
24 | curl --header "Authorization: Bearer <access_token>" <the_rest_of_your_API_request> | |
25 | ``` | |
26 | ||
27 | For more details on access tokens in Matrix, please refer to the complete | |
28 | [matrix spec documentation](https://matrix.org/docs/spec/client_server/r0.6.1#using-access-tokens). |
0 | # Configuration | |
1 | ||
2 | This section contains information on tweaking Synapse via the various options in the configuration file. A configuration | |
3 | file should have been generated when you [installed Synapse](../../setup/installation.html). |
0 | # Homeserver Sample Configuration File | |
1 | ||
2 | Below is a sample homeserver configuration file. The homeserver configuration file | |
3 | can be tweaked to change the behaviour of your homeserver. A restart of the server is | |
4 | generally required to apply any changes made to this file. | |
5 | ||
6 | Note that the contents below are *not* intended to be copied and used as the basis for | |
7 | a real homeserver.yaml. Instead, if you are starting from scratch, please generate | |
8 | a fresh config using Synapse by following the instructions in | |
9 | [Installation](../../setup/installation.md). | |
10 | ||
11 | ```yaml | |
12 | {{#include ../../sample_config.yaml}} | |
13 | ``` |
0 | # Logging Sample Configuration File | |
1 | ||
2 | Below is a sample logging configuration file. This file can be tweaked to control how your | |
3 | homeserver will output logs. A restart of the server is generally required to apply any | |
4 | changes made to this file. | |
5 | ||
6 | Note that the contents below are *not* intended to be copied and used as the basis for | |
7 | a real logging configuration file. Instead, if you are starting from scratch, please | |
8 | generate a fresh config using Synapse by following the instructions in | |
9 | [Installation](../../setup/installation.md). | |
10 | ||
11 | ```yaml | |
12 | {{#include ../../sample_log_config.yaml}} | |
13 | ```⏎
0 | # User Authentication | |
1 | ||
2 | Synapse supports multiple methods of authenticating users, either out-of-the-box or through custom pluggable | |
3 | authentication modules. | |
4 | ||
5 | Included in Synapse is support for authenticating users via: | |
6 | ||
7 | * A username and password. | |
8 | * An email address and password. | |
9 | * Single Sign-On through the SAML, Open ID Connect or CAS protocols. | |
10 | * JSON Web Tokens. | |
11 | * An administrator's shared secret. | |
12 | ||
13 | Synapse can additionally be extended to support custom authentication schemes through optional "password auth provider" | |
14 | modules.⏎ |
0 | # Documentation Website Files and Assets | |
1 | ||
2 | This directory contains extra files for modifying the look and functionality of | |
3 | [mdbook](https://github.com/rust-lang/mdBook), the documentation software that's | |
4 | used to generate Synapse's documentation website. | |
5 | ||
6 | The configuration options in the `output.html` section of [book.toml](../../book.toml) | |
7 | point to additional JS/CSS in this directory that are added on each page load. In | |
8 | addition, the `theme` directory contains files that overwrite their counterparts in | |
9 | each of the default themes included with mdbook. | |
10 | ||
11 | Currently we use these files to generate a floating Table of Contents panel, the code | |
12 | for which was partially taken from | |
13 | [JorelAli/mdBook-pagetoc](https://github.com/JorelAli/mdBook-pagetoc/) | |
14 | before being modified such that it scrolls with the content of the page. This is handled | |
15 | by the `table-of-contents.js/css` files. The table of contents panel only appears on pages | |
16 | that have more than one header, and only on desktop-sized monitors. | |
17 | ||
18 | We remove the navigation arrows which typically appear on the left and right side of the | |
19 | screen on desktop as they interfere with the table of contents. This is handled by | |
20 | the `remove-nav-buttons.css` file. | |
21 | ||
22 | Finally, we also stylise the chapter titles in the left sidebar by indenting them | |
23 | slightly so that they are more visually distinguishable from the section headers | |
24 | (the bold titles). This is done through the `indent-section-headers.css` file. | |
25 | ||
26 | More information can be found in mdbook's official documentation for | |
27 | [injecting page JS/CSS](https://rust-lang.github.io/mdBook/format/config.html) | |
28 | and | |
29 | [customising the default themes](https://rust-lang.github.io/mdBook/format/theme/index.html).⏎ |
0 | /* | |
1 | * Indents each chapter title in the left sidebar so that they aren't | |
2 | * at the same level as the section headers. | |
3 | */ | |
4 | .chapter-item { | |
5 | margin-left: 1em; | |
6 | }⏎ |
0 | /* Remove the prev, next chapter buttons as they interfere with the | |
1 | * table of contents. | |
2 | * Note that the table of contents only appears on desktop, thus we | |
3 | * only remove the desktop (wide) chapter buttons. | |
4 | */ | |
5 | .nav-wide-wrapper { | |
6 | display: none | |
7 | }⏎ |
0 | @media only screen and (max-width:1439px) { | |
1 | .sidetoc { | |
2 | display: none; | |
3 | } | |
4 | } | |
5 | ||
6 | @media only screen and (min-width:1440px) { | |
7 | main { | |
8 | position: relative; | |
9 | margin-left: 100px !important; | |
10 | } | |
11 | .sidetoc { | |
12 | margin-left: auto; | |
13 | margin-right: auto; | |
14 | left: calc(100% + (var(--content-max-width))/4 - 140px); | |
15 | position: absolute; | |
16 | text-align: right; | |
17 | } | |
18 | .pagetoc { | |
19 | position: fixed; | |
20 | width: 250px; | |
21 | overflow: auto; | |
22 | right: 20px; | |
23 | height: calc(100% - var(--menu-bar-height)); | |
24 | } | |
25 | .pagetoc a { | |
26 | color: var(--fg) !important; | |
27 | display: block; | |
28 | padding: 5px 15px 5px 10px; | |
29 | text-align: left; | |
30 | text-decoration: none; | |
31 | } | |
32 | .pagetoc a:hover, | |
33 | .pagetoc a.active { | |
34 | background: var(--sidebar-bg) !important; | |
35 | color: var(--sidebar-fg) !important; | |
36 | } | |
37 | .pagetoc .active { | |
38 | background: var(--sidebar-bg); | |
39 | color: var(--sidebar-fg); | |
40 | } | |
41 | } |
0 | const getPageToc = () => document.getElementsByClassName('pagetoc')[0]; | |
1 | ||
2 | const pageToc = getPageToc(); | |
3 | const pageTocChildren = [...pageToc.children]; | |
4 | const headers = [...document.getElementsByClassName('header')]; | |
5 | ||
6 | ||
7 | // Select highlighted item in ToC when clicking an item | |
8 | pageTocChildren.forEach(child => { | |
9 | child.addEventListener('click', () => { | |
10 | pageTocChildren.forEach(child => { | |
11 | child.classList.remove('active'); | |
12 | }); | |
13 | child.classList.add('active'); | |
14 | }); | |
15 | }); | |
16 | ||
17 | ||
18 | /** | |
19 | * Test whether a node is in the viewport | |
20 | */ | |
21 | function isInViewport(node) { | |
22 | const rect = node.getBoundingClientRect(); | |
23 | return rect.top >= 0 && rect.left >= 0 && rect.bottom <= (window.innerHeight || document.documentElement.clientHeight) && rect.right <= (window.innerWidth || document.documentElement.clientWidth); | |
24 | } | |
25 | ||
26 | ||
27 | /** | |
28 | * Set a new ToC entry. | |
29 | * Clear any previously highlighted ToC items, set the new one, | |
30 | * and adjust the ToC scroll position. | |
31 | */ | |
32 | function setTocEntry() { | |
33 | let activeEntry; | |
34 | const pageTocChildren = [...getPageToc().children]; | |
35 | ||
36 | // Calculate which header is the current one at the top of screen | |
37 | headers.forEach(header => { | |
38 | if (window.pageYOffset >= header.offsetTop) { | |
39 | activeEntry = header; | |
40 | } | |
41 | }); | |
42 | ||
43 | // If we haven't scrolled past any header yet, there is nothing to highlight | |
44 | if (!activeEntry) { | |
45 | return; | |
46 | } | |
47 | ||
48 | // Update selected item in ToC when scrolling | |
44 | pageTocChildren.forEach(child => { | |
45 | if (activeEntry.href.localeCompare(child.href) === 0) { | |
46 | child.classList.add('active'); | |
47 | } else { | |
48 | child.classList.remove('active'); | |
49 | } | |
50 | }); | |
51 | ||
52 | let tocEntryForLocation = document.querySelector(`nav a[href="${activeEntry.href}"]`); | |
53 | if (tocEntryForLocation) { | |
54 | const headingForLocation = document.querySelector(activeEntry.hash); | |
55 | if (headingForLocation && isInViewport(headingForLocation)) { | |
56 | // Update ToC scroll | |
57 | const nav = getPageToc(); | |
58 | const content = document.querySelector('html'); | |
59 | if (content.scrollTop !== 0) { | |
60 | nav.scrollTo({ | |
61 | top: tocEntryForLocation.offsetTop - 100, | |
62 | left: 0, | |
63 | behavior: 'smooth', | |
64 | }); | |
65 | } else { | |
66 | nav.scrollTop = 0; | |
67 | } | |
68 | } | |
69 | } | |
70 | } | |
71 | ||
72 | ||
73 | /** | |
74 | * Populate sidebar on load | |
75 | */ | |
76 | window.addEventListener('load', () => { | |
77 | // Only create table of contents if there is more than one header on the page | |
78 | if (headers.length <= 1) { | |
79 | return; | |
80 | } | |
81 | ||
82 | // Create an entry in the page table of contents for each header in the document | |
83 | headers.forEach((header, index) => { | |
84 | const link = document.createElement('a'); | |
85 | ||
86 | // Indent shows hierarchy | |
87 | let indent = '0px'; | |
88 | switch (header.parentElement.tagName) { | |
89 | case 'H1': | |
90 | indent = '5px'; | |
91 | break; | |
92 | case 'H2': | |
93 | indent = '20px'; | |
94 | break; | |
95 | case 'H3': | |
96 | indent = '30px'; | |
97 | break; | |
98 | case 'H4': | |
99 | indent = '40px'; | |
100 | break; | |
101 | case 'H5': | |
102 | indent = '50px'; | |
103 | break; | |
104 | case 'H6': | |
105 | indent = '60px'; | |
106 | break; | |
107 | default: | |
108 | break; | |
109 | } | |
110 | ||
111 | let tocEntry; | |
112 | if (index == 0) { | |
113 | // Create a bolded title for the first element | |
114 | tocEntry = document.createElement("strong"); | |
115 | tocEntry.innerHTML = header.text; | |
116 | } else { | |
117 | // All other elements are non-bold | |
118 | tocEntry = document.createTextNode(header.text); | |
119 | } | |
120 | link.appendChild(tocEntry); | |
121 | ||
122 | link.style.paddingLeft = indent; | |
123 | link.href = header.href; | |
124 | pageToc.appendChild(link); | |
125 | }); | |
126 | setTocEntry(); | |
127 | }); | |
128 | ||
129 | ||
130 | // Handle active headers on scroll, if there is more than one header on the page | |
131 | if (headers.length > 1) { | |
132 | window.addEventListener('scroll', setTocEntry); | |
133 | } |
0 | <!DOCTYPE HTML> | |
1 | <html lang="{{ language }}" class="sidebar-visible no-js {{ default_theme }}"> | |
2 | <head> | |
3 | <!-- Book generated using mdBook --> | |
4 | <meta charset="UTF-8"> | |
5 | <title>{{ title }}</title> | |
6 | {{#if is_print }} | |
7 | <meta name="robots" content="noindex" /> | |
8 | {{/if}} | |
9 | {{#if base_url}} | |
10 | <base href="{{ base_url }}"> | |
11 | {{/if}} | |
12 | ||
13 | ||
14 | <!-- Custom HTML head --> | |
15 | {{> head}} | |
16 | ||
17 | <meta content="text/html; charset=utf-8" http-equiv="Content-Type"> | |
18 | <meta name="description" content="{{ description }}"> | |
19 | <meta name="viewport" content="width=device-width, initial-scale=1"> | |
20 | <meta name="theme-color" content="#ffffff" /> | |
21 | ||
22 | {{#if favicon_svg}} | |
23 | <link rel="icon" href="{{ path_to_root }}favicon.svg"> | |
24 | {{/if}} | |
25 | {{#if favicon_png}} | |
26 | <link rel="shortcut icon" href="{{ path_to_root }}favicon.png"> | |
27 | {{/if}} | |
28 | <link rel="stylesheet" href="{{ path_to_root }}css/variables.css"> | |
29 | <link rel="stylesheet" href="{{ path_to_root }}css/general.css"> | |
30 | <link rel="stylesheet" href="{{ path_to_root }}css/chrome.css"> | |
31 | {{#if print_enable}} | |
32 | <link rel="stylesheet" href="{{ path_to_root }}css/print.css" media="print"> | |
33 | {{/if}} | |
34 | ||
35 | <!-- Fonts --> | |
36 | <link rel="stylesheet" href="{{ path_to_root }}FontAwesome/css/font-awesome.css"> | |
37 | {{#if copy_fonts}} | |
38 | <link rel="stylesheet" href="{{ path_to_root }}fonts/fonts.css"> | |
39 | {{/if}} | |
40 | ||
41 | <!-- Highlight.js Stylesheets --> | |
42 | <link rel="stylesheet" href="{{ path_to_root }}highlight.css"> | |
43 | <link rel="stylesheet" href="{{ path_to_root }}tomorrow-night.css"> | |
44 | <link rel="stylesheet" href="{{ path_to_root }}ayu-highlight.css"> | |
45 | ||
46 | <!-- Custom theme stylesheets --> | |
47 | {{#each additional_css}} | |
48 | <link rel="stylesheet" href="{{ ../path_to_root }}{{ this }}"> | |
49 | {{/each}} | |
50 | ||
51 | {{#if mathjax_support}} | |
52 | <!-- MathJax --> | |
53 | <script async type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> | |
54 | {{/if}} | |
55 | </head> | |
56 | <body> | |
57 | <!-- Provide site root to javascript --> | |
58 | <script type="text/javascript"> | |
59 | var path_to_root = "{{ path_to_root }}"; | |
60 | var default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "{{ preferred_dark_theme }}" : "{{ default_theme }}"; | |
61 | </script> | |
62 | ||
63 | <!-- Work around some values being stored in localStorage wrapped in quotes --> | |
64 | <script type="text/javascript"> | |
65 | try { | |
66 | var theme = localStorage.getItem('mdbook-theme'); | |
67 | var sidebar = localStorage.getItem('mdbook-sidebar'); | |
68 | if (theme.startsWith('"') && theme.endsWith('"')) { | |
69 | localStorage.setItem('mdbook-theme', theme.slice(1, theme.length - 1)); | |
70 | } | |
71 | if (sidebar.startsWith('"') && sidebar.endsWith('"')) { | |
72 | localStorage.setItem('mdbook-sidebar', sidebar.slice(1, sidebar.length - 1)); | |
73 | } | |
74 | } catch (e) { } | |
75 | </script> | |
76 | ||
77 | <!-- Set the theme before any content is loaded, prevents flash --> | |
78 | <script type="text/javascript"> | |
79 | var theme; | |
80 | try { theme = localStorage.getItem('mdbook-theme'); } catch(e) { } | |
81 | if (theme === null || theme === undefined) { theme = default_theme; } | |
82 | var html = document.querySelector('html'); | |
83 | html.classList.remove('no-js') | |
84 | html.classList.remove('{{ default_theme }}') | |
85 | html.classList.add(theme); | |
86 | html.classList.add('js'); | |
87 | </script> | |
88 | ||
89 | <!-- Hide / unhide sidebar before it is displayed --> | |
90 | <script type="text/javascript"> | |
91 | var html = document.querySelector('html'); | |
92 | var sidebar = 'hidden'; | |
93 | if (document.body.clientWidth >= 1080) { | |
94 | try { sidebar = localStorage.getItem('mdbook-sidebar'); } catch(e) { } | |
95 | sidebar = sidebar || 'visible'; | |
96 | } | |
97 | html.classList.remove('sidebar-visible'); | |
98 | html.classList.add("sidebar-" + sidebar); | |
99 | </script> | |
100 | ||
101 | <nav id="sidebar" class="sidebar" aria-label="Table of contents"> | |
102 | <div class="sidebar-scrollbox"> | |
103 | {{#toc}}{{/toc}} | |
104 | </div> | |
105 | <div id="sidebar-resize-handle" class="sidebar-resize-handle"></div> | |
106 | </nav> | |
107 | ||
108 | <div id="page-wrapper" class="page-wrapper"> | |
109 | ||
110 | <div class="page"> | |
111 | {{> header}} | |
112 | <div id="menu-bar-hover-placeholder"></div> | |
113 | <div id="menu-bar" class="menu-bar sticky bordered"> | |
114 | <div class="left-buttons"> | |
115 | <button id="sidebar-toggle" class="icon-button" type="button" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="sidebar"> | |
116 | <i class="fa fa-bars"></i> | |
117 | </button> | |
118 | <button id="theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="theme-list"> | |
119 | <i class="fa fa-paint-brush"></i> | |
120 | </button> | |
121 | <ul id="theme-list" class="theme-popup" aria-label="Themes" role="menu"> | |
122 | <li role="none"><button role="menuitem" class="theme" id="light">{{ theme_option "Light" }}</button></li> | |
123 | <li role="none"><button role="menuitem" class="theme" id="rust">{{ theme_option "Rust" }}</button></li> | |
124 | <li role="none"><button role="menuitem" class="theme" id="coal">{{ theme_option "Coal" }}</button></li> | |
125 | <li role="none"><button role="menuitem" class="theme" id="navy">{{ theme_option "Navy" }}</button></li> | |
126 | <li role="none"><button role="menuitem" class="theme" id="ayu">{{ theme_option "Ayu" }}</button></li> | |
127 | </ul> | |
128 | {{#if search_enabled}} | |
129 | <button id="search-toggle" class="icon-button" type="button" title="Search. (Shortkey: s)" aria-label="Toggle Searchbar" aria-expanded="false" aria-keyshortcuts="S" aria-controls="searchbar"> | |
130 | <i class="fa fa-search"></i> | |
131 | </button> | |
132 | {{/if}} | |
133 | </div> | |
134 | ||
135 | <h1 class="menu-title">{{ book_title }}</h1> | |
136 | ||
137 | <div class="right-buttons"> | |
138 | {{#if print_enable}} | |
139 | <a href="{{ path_to_root }}print.html" title="Print this book" aria-label="Print this book"> | |
140 | <i id="print-button" class="fa fa-print"></i> | |
141 | </a> | |
142 | {{/if}} | |
143 | {{#if git_repository_url}} | |
144 | <a href="{{git_repository_url}}" title="Git repository" aria-label="Git repository"> | |
145 | <i id="git-repository-button" class="fa {{git_repository_icon}}"></i> | |
146 | </a> | |
147 | {{/if}} | |
148 | {{#if git_repository_edit_url}} | |
149 | <a href="{{git_repository_edit_url}}" title="Suggest an edit" aria-label="Suggest an edit"> | |
150 | <i id="git-edit-button" class="fa fa-edit"></i> | |
151 | </a> | |
152 | {{/if}} | |
153 | ||
154 | </div> | |
155 | </div> | |
156 | ||
157 | {{#if search_enabled}} | |
158 | <div id="search-wrapper" class="hidden"> | |
159 | <form id="searchbar-outer" class="searchbar-outer"> | |
160 | <input type="search" id="searchbar" name="searchbar" placeholder="Search this book ..." aria-controls="searchresults-outer" aria-describedby="searchresults-header"> | |
161 | </form> | |
162 | <div id="searchresults-outer" class="searchresults-outer hidden"> | |
163 | <div id="searchresults-header" class="searchresults-header"></div> | |
164 | <ul id="searchresults"> | |
165 | </ul> | |
166 | </div> | |
167 | </div> | |
168 | {{/if}} | |
169 | ||
170 | <!-- Apply ARIA attributes after the sidebar and the sidebar toggle button are added to the DOM --> | |
171 | <script type="text/javascript"> | |
172 | document.getElementById('sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible'); | |
173 | document.getElementById('sidebar').setAttribute('aria-hidden', sidebar !== 'visible'); | |
174 | Array.from(document.querySelectorAll('#sidebar a')).forEach(function(link) { | |
175 | link.setAttribute('tabIndex', sidebar === 'visible' ? 0 : -1); | |
176 | }); | |
177 | </script> | |
178 | ||
179 | <div id="content" class="content"> | |
180 | <main> | |
181 | <!-- Page table of contents --> | |
182 | <div class="sidetoc"> | |
183 | <nav class="pagetoc"></nav> | |
184 | </div> | |
185 | ||
186 | {{{ content }}} | |
187 | </main> | |
188 | ||
189 | <nav class="nav-wrapper" aria-label="Page navigation"> | |
190 | <!-- Mobile navigation buttons --> | |
191 | {{#previous}} | |
192 | <a rel="prev" href="{{ path_to_root }}{{link}}" class="mobile-nav-chapters previous" title="Previous chapter" aria-label="Previous chapter" aria-keyshortcuts="Left"> | |
193 | <i class="fa fa-angle-left"></i> | |
194 | </a> | |
195 | {{/previous}} | |
196 | ||
197 | {{#next}} | |
198 | <a rel="next" href="{{ path_to_root }}{{link}}" class="mobile-nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right"> | |
199 | <i class="fa fa-angle-right"></i> | |
200 | </a> | |
201 | {{/next}} | |
202 | ||
203 | <div style="clear: both"></div> | |
204 | </nav> | |
205 | </div> | |
206 | </div> | |
207 | ||
208 | <nav class="nav-wide-wrapper" aria-label="Page navigation"> | |
209 | {{#previous}} | |
210 | <a rel="prev" href="{{ path_to_root }}{{link}}" class="nav-chapters previous" title="Previous chapter" aria-label="Previous chapter" aria-keyshortcuts="Left"> | |
211 | <i class="fa fa-angle-left"></i> | |
212 | </a> | |
213 | {{/previous}} | |
214 | ||
215 | {{#next}} | |
216 | <a rel="next" href="{{ path_to_root }}{{link}}" class="nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right"> | |
217 | <i class="fa fa-angle-right"></i> | |
218 | </a> | |
219 | {{/next}} | |
220 | </nav> | |
221 | ||
222 | </div> | |
223 | ||
224 | {{#if livereload}} | |
225 | <!-- Livereload script (if served using the cli tool) --> | |
226 | <script type="text/javascript"> | |
227 | var socket = new WebSocket("{{{livereload}}}"); | |
228 | socket.onmessage = function (event) { | |
229 | if (event.data === "reload") { | |
230 | socket.close(); | |
231 | location.reload(); | |
232 | } | |
233 | }; | |
234 | window.onbeforeunload = function() { | |
235 | socket.close(); | |
236 | } | |
237 | </script> | |
238 | {{/if}} | |
239 | ||
240 | {{#if google_analytics}} | |
241 | <!-- Google Analytics Tag --> | |
242 | <script type="text/javascript"> | |
243 | var localAddrs = ["localhost", "127.0.0.1", ""]; | |
244 | // make sure we don't activate google analytics if the developer is | |
245 | // inspecting the book locally... | |
246 | if (localAddrs.indexOf(document.location.hostname) === -1) { | |
247 | (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ | |
248 | (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), | |
249 | m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) | |
250 | })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); | |
251 | ga('create', '{{google_analytics}}', 'auto'); | |
252 | ga('send', 'pageview'); | |
253 | } | |
254 | </script> | |
255 | {{/if}} | |
256 | ||
257 | {{#if playground_line_numbers}} | |
258 | <script type="text/javascript"> | |
259 | window.playground_line_numbers = true; | |
260 | </script> | |
261 | {{/if}} | |
262 | ||
263 | {{#if playground_copyable}} | |
264 | <script type="text/javascript"> | |
265 | window.playground_copyable = true; | |
266 | </script> | |
267 | {{/if}} | |
268 | ||
269 | {{#if playground_js}} | |
270 | <script src="{{ path_to_root }}ace.js" type="text/javascript" charset="utf-8"></script> | |
271 | <script src="{{ path_to_root }}editor.js" type="text/javascript" charset="utf-8"></script> | |
272 | <script src="{{ path_to_root }}mode-rust.js" type="text/javascript" charset="utf-8"></script> | |
273 | <script src="{{ path_to_root }}theme-dawn.js" type="text/javascript" charset="utf-8"></script> | |
274 | <script src="{{ path_to_root }}theme-tomorrow_night.js" type="text/javascript" charset="utf-8"></script> | |
275 | {{/if}} | |
276 | ||
277 | {{#if search_js}} | |
278 | <script src="{{ path_to_root }}elasticlunr.min.js" type="text/javascript" charset="utf-8"></script> | |
279 | <script src="{{ path_to_root }}mark.min.js" type="text/javascript" charset="utf-8"></script> | |
280 | <script src="{{ path_to_root }}searcher.js" type="text/javascript" charset="utf-8"></script> | |
281 | {{/if}} | |
282 | ||
283 | <script src="{{ path_to_root }}clipboard.min.js" type="text/javascript" charset="utf-8"></script> | |
284 | <script src="{{ path_to_root }}highlight.js" type="text/javascript" charset="utf-8"></script> | |
285 | <script src="{{ path_to_root }}book.js" type="text/javascript" charset="utf-8"></script> | |
286 | ||
287 | <!-- Custom JS scripts --> | |
288 | {{#each additional_js}} | |
289 | <script type="text/javascript" src="{{ ../path_to_root }}{{this}}"></script> | |
290 | {{/each}} | |
291 | ||
292 | {{#if is_print}} | |
293 | {{#if mathjax_support}} | |
294 | <script type="text/javascript"> | |
295 | window.addEventListener('load', function() { | |
296 | MathJax.Hub.Register.StartupHook('End', function() { | |
297 | window.setTimeout(window.print, 100); | |
298 | }); | |
299 | }); | |
300 | </script> | |
301 | {{else}} | |
302 | <script type="text/javascript"> | |
303 | window.addEventListener('load', function() { | |
304 | window.setTimeout(window.print, 100); | |
305 | }); | |
306 | </script> | |
307 | {{/if}} | |
308 | {{/if}} | |
309 | ||
310 | </body> | |
311 | </html>⏎ |
0 | # Introduction | |
1 | ||
2 | Welcome to the documentation repository for Synapse, the reference | |
3 | [Matrix](https://matrix.org) homeserver implementation.⏎ |
227 | 227 | ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$ |
228 | 228 | ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$ |
229 | 229 | ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/ |
230 | ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/ | |
231 | ^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$ | |
232 | ^/_matrix/client/(api/v1|r0|unstable)/search$ | |
230 | 233 | |
231 | 234 | # Registration/login requests |
232 | 235 | ^/_matrix/client/(api/v1|r0|unstable)/login$ |
31 | 31 | synapse/http/federation/matrix_federation_agent.py, |
32 | 32 | synapse/http/federation/well_known_resolver.py, |
33 | 33 | synapse/http/matrixfederationclient.py, |
34 | synapse/http/servlet.py, | |
34 | 35 | synapse/http/server.py, |
35 | 36 | synapse/http/site.py, |
36 | 37 | synapse/logging, |
129 | 130 | [mypy-canonicaljson] |
130 | 131 | ignore_missing_imports = True |
131 | 132 | |
132 | [mypy-jaeger_client] | |
133 | [mypy-jaeger_client.*] | |
133 | 134 | ignore_missing_imports = True |
134 | 135 | |
135 | 136 | [mypy-jsonschema] |
0 | import json | |
1 | import sys | |
2 | import time | |
3 | ||
4 | import psycopg2 | |
5 | import yaml | |
6 | from canonicaljson import encode_canonical_json | |
7 | from signedjson.key import read_signing_keys | |
8 | from signedjson.sign import sign_json | |
9 | from unpaddedbase64 import encode_base64 | |
10 | ||
11 | db_binary_type = memoryview | |
12 | ||
13 | ||
14 | def select_v1_keys(connection): | |
15 | cursor = connection.cursor() | |
16 | cursor.execute("SELECT server_name, key_id, verify_key FROM server_signature_keys") | |
17 | rows = cursor.fetchall() | |
18 | cursor.close() | |
19 | results = {} | |
20 | for server_name, key_id, verify_key in rows: | |
21 | results.setdefault(server_name, {})[key_id] = encode_base64(verify_key) | |
22 | return results | |
23 | ||
24 | ||
25 | def select_v1_certs(connection): | |
26 | cursor = connection.cursor() | |
27 | cursor.execute("SELECT server_name, tls_certificate FROM server_tls_certificates") | |
28 | rows = cursor.fetchall() | |
29 | cursor.close() | |
30 | results = {} | |
31 | for server_name, tls_certificate in rows: | |
32 | results[server_name] = tls_certificate | |
33 | return results | |
34 | ||
35 | ||
36 | def select_v2_json(connection): | |
37 | cursor = connection.cursor() | |
38 | cursor.execute("SELECT server_name, key_id, key_json FROM server_keys_json") | |
39 | rows = cursor.fetchall() | |
40 | cursor.close() | |
41 | results = {} | |
42 | for server_name, key_id, key_json in rows: | |
43 | results.setdefault(server_name, {})[key_id] = json.loads( | |
44 | # key_json comes back as a buffer/memoryview; str() has no .decode() on Python 3 | |
45 | bytes(key_json).decode("utf-8") | |
45 | ) | |
46 | return results | |
47 | ||
48 | ||
49 | def convert_v1_to_v2(server_name, valid_until, keys, certificate): | |
50 | return { | |
51 | "old_verify_keys": {}, | |
52 | "server_name": server_name, | |
53 | "verify_keys": {key_id: {"key": key} for key_id, key in keys.items()}, | |
54 | "valid_until_ts": valid_until, | |
55 | } | |
56 | ||
57 | ||
58 | def rows_v2(server, json): | |
59 | valid_until = json["valid_until_ts"] | |
60 | key_json = encode_canonical_json(json) | |
61 | for key_id in json["verify_keys"]: | |
62 | yield (server, key_id, "-", valid_until, valid_until, db_binary_type(key_json)) | |
63 | ||
64 | ||
65 | def main(): | |
66 | config = yaml.safe_load(open(sys.argv[1])) | |
67 | valid_until = int(time.time() / (3600 * 24)) * 1000 * 3600 * 24 | |
68 | ||
69 | server_name = config["server_name"] | |
70 | signing_key = read_signing_keys(open(config["signing_key_path"]))[0] | |
71 | ||
72 | database = config["database"] | |
73 | assert database["name"] == "psycopg2", "Can only convert for postgresql" | |
74 | args = database["args"] | |
75 | args.pop("cp_max") | |
76 | args.pop("cp_min") | |
77 | connection = psycopg2.connect(**args) | |
78 | keys = select_v1_keys(connection) | |
79 | certificates = select_v1_certs(connection) | |
80 | json = select_v2_json(connection) | |
81 | ||
82 | result = {} | |
83 | for server in keys: | |
84 | if server not in json: | |
85 | v2_json = convert_v1_to_v2( | |
86 | server, valid_until, keys[server], certificates[server] | |
87 | ) | |
88 | v2_json = sign_json(v2_json, server_name, signing_key) | |
89 | result[server] = v2_json | |
90 | ||
91 | yaml.safe_dump(result, sys.stdout, default_flow_style=False) | |
92 | ||
93 | rows = [row for server, json in result.items() for row in rows_v2(server, json)] | |
94 | ||
95 | cursor = connection.cursor() | |
96 | cursor.executemany( | |
97 | "INSERT INTO server_keys_json (" | |
98 | " server_name, key_id, from_server," | |
99 | " ts_added_ms, ts_valid_until_ms, key_json" | |
100 | ") VALUES (%s, %s, %s, %s, %s, %s)", | |
101 | rows, | |
102 | ) | |
103 | connection.commit() | |
104 | ||
105 | ||
106 | if __name__ == "__main__": | |
107 | main() |
138 | 138 | click.get_current_context().abort() |
139 | 139 | |
140 | 140 | # Switch to the release branch. |
141 | release_branch_name = f"release-v{base_version}" | |
141 | release_branch_name = f"release-v{current_version.major}.{current_version.minor}" | |
142 | 142 | release_branch = find_ref(repo, release_branch_name) |
143 | 143 | if release_branch: |
144 | 144 | if release_branch.is_remote(): |
46 | 46 | except ImportError: |
47 | 47 | pass |
48 | 48 | |
49 | __version__ = "1.35.1" | |
49 | __version__ = "1.36.0" | |
50 | 50 | |
51 | 51 | if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): |
52 | 52 | # We import here so that we don't have to install a bunch of deps when |
205 | 205 | requester = create_requester(user_id, app_service=app_service) |
206 | 206 | |
207 | 207 | request.requester = user_id |
208 | if user_id in self._force_tracing_for_users: | |
209 | opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1) | |
208 | 210 | opentracing.set_tag("authenticated_entity", user_id) |
209 | 211 | opentracing.set_tag("user_id", user_id) |
210 | 212 | opentracing.set_tag("appservice_id", app_service.id) |
211 | if user_id in self._force_tracing_for_users: | |
212 | opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1) | |
213 | 213 | |
214 | 214 | return requester |
215 | 215 | |
258 | 258 | ) |
259 | 259 | |
260 | 260 | request.requester = requester |
261 | if user_info.token_owner in self._force_tracing_for_users: | |
262 | opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1) | |
261 | 263 | opentracing.set_tag("authenticated_entity", user_info.token_owner) |
262 | 264 | opentracing.set_tag("user_id", user_info.user_id) |
263 | 265 | if device_id: |
264 | 266 | opentracing.set_tag("device_id", device_id) |
265 | if user_info.token_owner in self._force_tracing_for_users: | |
266 | opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1) | |
267 | 267 | |
268 | 268 | return requester |
269 | 269 | except KeyError: |
180 | 180 | RoomVersions.V5, |
181 | 181 | RoomVersions.V6, |
182 | 182 | RoomVersions.MSC2176, |
183 | RoomVersions.MSC3083, | |
183 | 184 | ) |
184 | # Note that we do not include MSC3083 here unless it is enabled in the config. | |
185 | 185 | } # type: Dict[str, RoomVersion] |
260 | 260 | Refresh the TLS certificates that Synapse is using by re-reading them from |
261 | 261 | disk and updating the TLS context factories to use them. |
262 | 262 | """ |
263 | ||
264 | 263 | if not hs.config.has_tls_listener(): |
265 | # attempt to reload the certs for the good of the tls_fingerprints | |
266 | hs.config.read_certificate_from_disk(require_cert_and_key=False) | |
267 | 264 | return |
268 | 265 | |
269 | hs.config.read_certificate_from_disk(require_cert_and_key=True) | |
266 | hs.config.read_certificate_from_disk() | |
270 | 267 | hs.tls_server_context_factory = context_factory.ServerContextFactory(hs.config) |
271 | 268 | |
272 | 269 | if hs._listening_services: |
108 | 108 | MonthlyActiveUsersWorkerStore, |
109 | 109 | ) |
110 | 110 | from synapse.storage.databases.main.presence import PresenceStore |
111 | from synapse.storage.databases.main.search import SearchWorkerStore | |
111 | from synapse.storage.databases.main.search import SearchStore | |
112 | 112 | from synapse.storage.databases.main.stats import StatsStore |
113 | 113 | from synapse.storage.databases.main.transactions import TransactionWorkerStore |
114 | 114 | from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore |
241 | 241 | MonthlyActiveUsersWorkerStore, |
242 | 242 | MediaRepositoryStore, |
243 | 243 | ServerMetricsStore, |
244 | SearchWorkerStore, | |
244 | SearchStore, | |
245 | 245 | TransactionWorkerStore, |
246 | 246 | BaseSlavedStore, |
247 | 247 | ): |
11 | 11 | # See the License for the specific language governing permissions and |
12 | 12 | # limitations under the License. |
13 | 13 | |
14 | from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions | |
15 | 14 | from synapse.config._base import Config |
16 | 15 | from synapse.types import JsonDict |
17 | 16 | |
27 | 26 | # MSC2858 (multiple SSO identity providers) |
28 | 27 | self.msc2858_enabled = experimental.get("msc2858_enabled", False) # type: bool |
29 | 28 | |
30 | # Spaces (MSC1772, MSC2946, MSC3083, etc) | |
31 | self.spaces_enabled = experimental.get("spaces_enabled", True) # type: bool | |
32 | if self.spaces_enabled: | |
33 | KNOWN_ROOM_VERSIONS[RoomVersions.MSC3083.identifier] = RoomVersions.MSC3083 | |
34 | ||
35 | 29 | # MSC3026 (busy presence state) |
36 | 30 | self.msc3026_enabled = experimental.get("msc3026_enabled", False) # type: bool |
37 | ||
38 | def generate_config_section(self, **kwargs): | |
39 | return """\ | |
40 | # Enable experimental features in Synapse. | |
41 | # | |
42 | # Experimental features might break or be removed without a deprecation | |
43 | # period. | |
44 | # | |
45 | experimental_features: | |
46 | # Support for Spaces (MSC1772), it enables the following: | |
47 | # | |
48 | # * The Spaces Summary API (MSC2946). | |
49 | # * Restricting room membership based on space membership (MSC3083). | |
50 | # | |
51 | # Uncomment to disable support for Spaces. | |
52 | #spaces_enabled: false | |
53 | """ |
214 | 214 | days_remaining = (expires_on - now).days |
215 | 215 | return days_remaining |
216 | 216 | |
217 | def read_certificate_from_disk(self, require_cert_and_key: bool): | |
217 | def read_certificate_from_disk(self): | |
218 | 218 | """ |
219 | 219 | Read the certificates and private key from disk. |
220 | ||
221 | Args: | |
222 | require_cert_and_key: set to True to throw an error if the certificate | |
223 | and key file are not given | |
224 | """ | |
225 | if require_cert_and_key: | |
226 | self.tls_private_key = self.read_tls_private_key() | |
227 | self.tls_certificate = self.read_tls_certificate() | |
228 | elif self.tls_certificate_file: | |
229 | # we only need the certificate for the tls_fingerprints. Reload it if we | |
230 | # can, but it's not a fatal error if we can't. | |
231 | try: | |
232 | self.tls_certificate = self.read_tls_certificate() | |
233 | except Exception as e: | |
234 | logger.info( | |
235 | "Unable to read TLS certificate (%s). Ignoring as no " | |
236 | "tls listeners enabled.", | |
237 | e, | |
238 | ) | |
220 | """ | |
221 | self.tls_private_key = self.read_tls_private_key() | |
222 | self.tls_certificate = self.read_tls_certificate() | |
239 | 223 | |
240 | 224 | def generate_config_section( |
241 | 225 | self, |
15 | 15 | import abc |
16 | 16 | import logging |
17 | 17 | import urllib |
18 | from collections import defaultdict | |
19 | from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Set, Tuple | |
18 | from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Tuple | |
20 | 19 | |
21 | 20 | import attr |
22 | 21 | from signedjson.key import ( |
43 | 42 | from synapse.config.key import TrustedKeyServer |
44 | 43 | from synapse.events import EventBase |
45 | 44 | from synapse.events.utils import prune_event_dict |
46 | from synapse.logging.context import ( | |
47 | PreserveLoggingContext, | |
48 | make_deferred_yieldable, | |
49 | preserve_fn, | |
50 | run_in_background, | |
51 | ) | |
45 | from synapse.logging.context import make_deferred_yieldable, run_in_background | |
52 | 46 | from synapse.storage.keys import FetchKeyResult |
53 | 47 | from synapse.types import JsonDict |
54 | 48 | from synapse.util import unwrapFirstError |
55 | 49 | from synapse.util.async_helpers import yieldable_gather_results |
56 | from synapse.util.metrics import Measure | |
50 | from synapse.util.batching_queue import BatchingQueue | |
57 | 51 | from synapse.util.retryutils import NotRetryingDestination |
58 | 52 | |
59 | 53 | if TYPE_CHECKING: |
79 | 73 | minimum_valid_until_ts: time at which we require the signing key to |
80 | 74 | be valid. (0 implies we don't care) |
81 | 75 | |
82 | request_name: The name of the request. | |
83 | ||
84 | 76 | key_ids: The set of key_ids to that could be used to verify the JSON object |
85 | ||
86 | key_ready (Deferred[str, str, nacl.signing.VerifyKey]): | |
87 | A deferred (server_name, key_id, verify_key) tuple that resolves when | |
88 | a verify key has been fetched. The deferreds' callbacks are run with no | |
89 | logcontext. | |
90 | ||
91 | If we are unable to find a key which satisfies the request, the deferred | |
92 | errbacks with an M_UNAUTHORIZED SynapseError. | |
93 | 77 | """ |
94 | 78 | |
95 | 79 | server_name = attr.ib(type=str) |
96 | 80 | get_json_object = attr.ib(type=Callable[[], JsonDict]) |
97 | 81 | minimum_valid_until_ts = attr.ib(type=int) |
98 | request_name = attr.ib(type=str) | |
99 | 82 | key_ids = attr.ib(type=List[str]) |
100 | key_ready = attr.ib(default=attr.Factory(defer.Deferred), type=defer.Deferred) | |
101 | 83 | |
102 | 84 | @staticmethod |
103 | 85 | def from_json_object( |
104 | 86 | server_name: str, |
105 | 87 | json_object: JsonDict, |
106 | 88 | minimum_valid_until_ms: int, |
107 | request_name: str, | |
108 | 89 | ): |
109 | 90 | """Create a VerifyJsonRequest to verify all signatures on a signed JSON |
110 | 91 | object for the given server. |
114 | 95 | server_name, |
115 | 96 | lambda: json_object, |
116 | 97 | minimum_valid_until_ms, |
117 | request_name=request_name, | |
118 | 98 | key_ids=key_ids, |
119 | 99 | ) |
120 | 100 | |
134 | 114 | # memory than the Event object itself. |
135 | 115 | lambda: prune_event_dict(event.room_version, event.get_pdu_json()), |
136 | 116 | minimum_valid_until_ms, |
137 | request_name=event.event_id, | |
138 | 117 | key_ids=key_ids, |
118 | ) | |
119 | ||
120 | def to_fetch_key_request(self) -> "_FetchKeyRequest": | |
121 | """Create a key fetch request for all keys needed to satisfy the | |
122 | verification request. | |
123 | """ | |
124 | return _FetchKeyRequest( | |
125 | server_name=self.server_name, | |
126 | minimum_valid_until_ts=self.minimum_valid_until_ts, | |
127 | key_ids=self.key_ids, | |
139 | 128 | ) |
140 | 129 | |
141 | 130 | |
143 | 132 | pass |
144 | 133 | |
145 | 134 | |
135 | @attr.s(slots=True) | |
136 | class _FetchKeyRequest: | |
137 | """A request for keys for a given server. | |
138 | ||
139 | We will continue to try and fetch until we have all the keys listed under | |
140 | `key_ids` (with an appropriate `valid_until_ts` property) or we run out of | |
141 | places to fetch keys from. | |
142 | ||
143 | Attributes: | |
144 | server_name: The name of the server that owns the keys. | |
145 | minimum_valid_until_ts: The timestamp which the keys must be valid until. | |
146 | key_ids: The IDs of the keys to attempt to fetch | |
147 | """ | |
148 | ||
149 | server_name = attr.ib(type=str) | |
150 | minimum_valid_until_ts = attr.ib(type=int) | |
151 | key_ids = attr.ib(type=List[str]) | |
152 | ||
153 | ||
146 | 154 | class Keyring: |
155 | """Handles verifying signed JSON objects and fetching the keys needed to do | |
156 | so. | |
157 | """ | |
158 | ||
147 | 159 | def __init__( |
148 | 160 | self, hs: "HomeServer", key_fetchers: "Optional[Iterable[KeyFetcher]]" = None |
149 | 161 | ): |
157 | 169 | ) |
158 | 170 | self._key_fetchers = key_fetchers |
159 | 171 | |
160 | # map from server name to Deferred. Has an entry for each server with | |
161 | # an ongoing key download; the Deferred completes once the download | |
162 | # completes. | |
163 | # | |
164 | # These are regular, logcontext-agnostic Deferreds. | |
165 | self.key_downloads = {} # type: Dict[str, defer.Deferred] | |
166 | ||
167 | def verify_json_for_server( | |
172 | self._server_queue = BatchingQueue( | |
173 | "keyring_server", | |
174 | clock=hs.get_clock(), | |
175 | process_batch_callback=self._inner_fetch_key_requests, | |
176 | ) # type: BatchingQueue[_FetchKeyRequest, Dict[str, Dict[str, FetchKeyResult]]] | |
177 | ||
178 | async def verify_json_for_server( | |
168 | 179 | self, |
169 | 180 | server_name: str, |
170 | 181 | json_object: JsonDict, |
171 | 182 | validity_time: int, |
172 | request_name: str, | |
173 | ) -> defer.Deferred: | |
183 | ) -> None: | |
174 | 184 | """Verify that a JSON object has been signed by a given server |
185 | ||
186 | Completes if the object was correctly signed, otherwise raises. | |
175 | 187 | |
176 | 188 | Args: |
177 | 189 | server_name: name of the server which must have signed this object |
180 | 192 | |
181 | 193 | validity_time: timestamp at which we require the signing key to |
182 | 194 | be valid. (0 implies we don't care) |
183 | ||
184 | request_name: an identifier for this json object (eg, an event id) | |
185 | for logging. | |
186 | ||
187 | Returns: | |
188 | Deferred[None]: completes if the object was correctly signed, otherwise | |
189 | errbacks with an error | |
190 | 195 | """ |
191 | 196 | request = VerifyJsonRequest.from_json_object( |
192 | 197 | server_name, |
193 | 198 | json_object, |
194 | 199 | validity_time, |
195 | request_name, | |
196 | ) | |
197 | requests = (request,) | |
198 | return make_deferred_yieldable(self._verify_objects(requests)[0]) | |
200 | ) | |
201 | return await self.process_request(request) | |
199 | 202 | |
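For illustration: with this change a caller simply awaits the verification
instead of wrapping Deferreds (compare the federation servlet hunk further
down, which drops the `request_name` argument). A minimal hedged sketch,
assuming a `keyring` object as above:

    async def authenticate(keyring, origin, json_request, now_ms):
        # Raises SynapseError (UNAUTHORIZED) if verification fails;
        # returns None on success.
        await keyring.verify_json_for_server(origin, json_request, now_ms)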
200 | 203 | def verify_json_objects_for_server( |
201 | self, server_and_json: Iterable[Tuple[str, dict, int, str]] | |
204 | self, server_and_json: Iterable[Tuple[str, dict, int]] | |
202 | 205 | ) -> List[defer.Deferred]: |
203 | 206 | """Bulk verifies signatures of json objects, bulk fetching keys as |
204 | 207 | necessary. |
205 | 208 | |
206 | 209 | Args: |
207 | 210 | server_and_json: |
208 | Iterable of (server_name, json_object, validity_time, request_name) | |
211 | Iterable of (server_name, json_object, validity_time) | |
209 | 212 | tuples. |
210 | 213 | |
211 | 214 | validity_time is a timestamp at which the signing key must be |
212 | 215 | valid. |
213 | ||
214 | request_name is an identifier for this json object (eg, an event id) | |
215 | for logging. | |
216 | 216 | |
217 | 217 | Returns: |
218 | 218 | List<Deferred[None]>: for each input triplet, a deferred indicating success |
220 | 220 | server_name. The deferreds run their callbacks in the sentinel |
221 | 221 | logcontext. |
222 | 222 | """ |
223 | return self._verify_objects( | |
224 | VerifyJsonRequest.from_json_object( | |
225 | server_name, json_object, validity_time, request_name | |
226 | ) | |
227 | for server_name, json_object, validity_time, request_name in server_and_json | |
228 | ) | |
229 | ||
230 | def verify_events_for_server( | |
231 | self, server_and_events: Iterable[Tuple[str, EventBase, int]] | |
232 | ) -> List[defer.Deferred]: | |
233 | """Bulk verification of signatures on events. | |
234 | ||
235 | Args: | |
236 | server_and_events: | |
237 | Iterable of `(server_name, event, validity_time)` tuples. | |
238 | ||
239 | `server_name` is which server we are verifying the signature for | |
240 | on the event. | |
241 | ||
242 | `event` is the event that we'll verify the signatures of for | |
243 | the given `server_name`. | |
244 | ||
245 | `validity_time` is a timestamp at which the signing key must be | |
246 | valid. | |
247 | ||
248 | Returns: | |
249 | List<Deferred[None]>: for each input triplet, a deferred indicating success | |
250 | or failure to verify each event's signature for the given | |
251 | server_name. The deferreds run their callbacks in the sentinel | |
252 | logcontext. | |
253 | """ | |
254 | return self._verify_objects( | |
255 | VerifyJsonRequest.from_event(server_name, event, validity_time) | |
256 | for server_name, event, validity_time in server_and_events | |
257 | ) | |
258 | ||
259 | def _verify_objects( | |
260 | self, verify_requests: Iterable[VerifyJsonRequest] | |
261 | ) -> List[defer.Deferred]: | |
262 | """Does the work of verify_json_[objects_]for_server | |
263 | ||
264 | ||
265 | Args: | |
266 | verify_requests: Iterable of verification requests. | |
267 | ||
268 | Returns: | |
269 | List<Deferred[None]>: for each input item, a deferred indicating success | |
270 | or failure to verify each json object's signature for the given | |
271 | server_name. The deferreds run their callbacks in the sentinel | |
272 | logcontext. | |
273 | """ | |
274 | # a list of VerifyJsonRequests which are awaiting a key lookup | |
275 | key_lookups = [] | |
276 | handle = preserve_fn(_handle_key_deferred) | |
277 | ||
278 | def process(verify_request: VerifyJsonRequest) -> defer.Deferred: | |
279 | """Process an entry in the request list | |
280 | ||
281 | Adds a key request to key_lookups, and returns a deferred which | |
282 | will complete or fail (in the sentinel context) when verification completes. | |
283 | """ | |
284 | if not verify_request.key_ids: | |
285 | return defer.fail( | |
286 | SynapseError( | |
287 | 400, | |
288 | "Not signed by %s" % (verify_request.server_name,), | |
289 | Codes.UNAUTHORIZED, | |
290 | ) | |
291 | ) | |
292 | ||
293 | logger.debug( | |
294 | "Verifying %s for %s with key_ids %s, min_validity %i", | |
295 | verify_request.request_name, | |
223 | return [ | |
224 | run_in_background( | |
225 | self.process_request, | |
226 | VerifyJsonRequest.from_json_object( | |
227 | server_name, | |
228 | json_object, | |
229 | validity_time, | |
230 | ), | |
231 | ) | |
232 | for server_name, json_object, validity_time in server_and_json | |
233 | ] | |
234 | ||
235 | async def verify_event_for_server( | |
236 | self, | |
237 | server_name: str, | |
238 | event: EventBase, | |
239 | validity_time: int, | |
240 | ) -> None: | |
241 | await self.process_request( | |
242 | VerifyJsonRequest.from_event( | |
243 | server_name, | |
244 | event, | |
245 | validity_time, | |
246 | ) | |
247 | ) | |
248 | ||
249 | async def process_request(self, verify_request: VerifyJsonRequest) -> None: | |
250 | """Processes the `VerifyJsonRequest`. Raises if the object is not signed | |
251 | by the server, the signatures don't match, or we failed to fetch the | |
252 | necessary keys. | |
253 | """ | |
254 | ||
255 | if not verify_request.key_ids: | |
256 | raise SynapseError( | |
257 | 400, | |
258 | f"Not signed by {verify_request.server_name}", | |
259 | Codes.UNAUTHORIZED, | |
260 | ) | |
261 | ||
262 | # Add the keys we need to verify to the queue for retrieval. We queue | |
263 | # up requests for the same server so we don't end up with many in flight | |
264 | # requests for the same keys. | |
265 | key_request = verify_request.to_fetch_key_request() | |
266 | found_keys_by_server = await self._server_queue.add_to_queue( | |
267 | key_request, key=verify_request.server_name | |
268 | ) | |
269 | ||
270 | # Since we batch up requests the returned set of keys may contain keys | |
271 | # from other servers, so we pull out only the ones we care about. | |
272 | found_keys = found_keys_by_server.get(verify_request.server_name, {}) | |
273 | ||
274 | # Verify each signature we got valid keys for, raising if we can't | |
275 | # verify any of them. | |
276 | verified = False | |
277 | for key_id in verify_request.key_ids: | |
278 | key_result = found_keys.get(key_id) | |
279 | if not key_result: | |
280 | continue | |
281 | ||
282 | if key_result.valid_until_ts < verify_request.minimum_valid_until_ts: | |
283 | continue | |
284 | ||
285 | verify_key = key_result.verify_key | |
286 | json_object = verify_request.get_json_object() | |
287 | try: | |
288 | verify_signed_json( | |
289 | json_object, | |
290 | verify_request.server_name, | |
291 | verify_key, | |
292 | ) | |
293 | verified = True | |
294 | except SignatureVerifyException as e: | |
295 | logger.debug( | |
296 | "Error verifying signature for %s:%s:%s with key %s: %s", | |
297 | verify_request.server_name, | |
298 | verify_key.alg, | |
299 | verify_key.version, | |
300 | encode_verify_key_base64(verify_key), | |
301 | str(e), | |
302 | ) | |
303 | raise SynapseError( | |
304 | 401, | |
305 | "Invalid signature for server %s with key %s:%s: %s" | |
306 | % ( | |
307 | verify_request.server_name, | |
308 | verify_key.alg, | |
309 | verify_key.version, | |
310 | str(e), | |
311 | ), | |
312 | Codes.UNAUTHORIZED, | |
313 | ) | |
314 | ||
315 | if not verified: | |
316 | raise SynapseError( | |
317 | 401, | |
318 | f"Failed to find any key to satisfy: {key_request}", | |
319 | Codes.UNAUTHORIZED, | |
320 | ) | |
321 | ||
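The comment above is the heart of the refactor: verification requests for
the same server are funnelled through a queue so that concurrent requests
share one key fetch. A self-contained asyncio sketch of that batching idea
(a toy stand-in, not Synapse's `synapse.util.batching_queue.BatchingQueue`):

    import asyncio

    class TinyBatchingQueue:
        # Calls that arrive while a batch for the same key is still being
        # collected are all answered by one shared processing call.
        def __init__(self, process_batch):
            self._process_batch = process_batch  # async callable: list -> result
            self._pending = {}                   # key -> (values, shared future)

        async def add_to_queue(self, value, key):
            if key not in self._pending:
                self._pending[key] = ([], asyncio.get_running_loop().create_future())
                asyncio.create_task(self._run(key))
            values, fut = self._pending[key]
            values.append(value)
            return await fut

        async def _run(self, key):
            await asyncio.sleep(0)  # let concurrent callers join the batch
            values, fut = self._pending.pop(key)
            try:
                fut.set_result(await self._process_batch(values))
            except Exception as exc:
                fut.set_exception(exc)

    async def demo():
        async def process(batch):
            return len(batch)  # both callers see the same batched result
        q = TinyBatchingQueue(process)
        print(await asyncio.gather(q.add_to_queue("a", key="srv"),
                                   q.add_to_queue("b", key="srv")))  # [2, 2]

    asyncio.run(demo())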
322 | async def _inner_fetch_key_requests( | |
323 | self, requests: List[_FetchKeyRequest] | |
324 | ) -> Dict[str, Dict[str, FetchKeyResult]]: | |
325 | """Processing function for the queue of `_FetchKeyRequest`.""" | |
326 | ||
327 | logger.debug("Starting fetch for %s", requests) | |
328 | ||
329 | # First we need to deduplicate requests for the same key. We do this by | |
330 | # taking the *maximum* requested `minimum_valid_until_ts` for each pair | |
331 | # of server name/key ID. | |
332 | server_to_key_to_ts = {} # type: Dict[str, Dict[str, int]] | |
333 | for request in requests: | |
334 | by_server = server_to_key_to_ts.setdefault(request.server_name, {}) | |
335 | for key_id in request.key_ids: | |
336 | existing_ts = by_server.get(key_id, 0) | |
337 | by_server[key_id] = max(request.minimum_valid_until_ts, existing_ts) | |
338 | ||
339 | deduped_requests = [ | |
340 | _FetchKeyRequest(server_name, minimum_valid_ts, [key_id]) | |
341 | for server_name, by_server in server_to_key_to_ts.items() | |
342 | for key_id, minimum_valid_ts in by_server.items() | |
343 | ] | |
344 | ||
345 | logger.debug("Deduplicated key requests to %s", deduped_requests) | |
346 | ||
347 | # For each key we call `_inner_fetch_key_request` which will handle | |
348 | # fetching each key. Note these shouldn't throw if we fail to contact | |
349 | # other servers etc. | |
350 | results_per_request = await yieldable_gather_results( | |
351 | self._inner_fetch_key_request, | |
352 | deduped_requests, | |
353 | ) | |
354 | ||
355 | # We now convert the returned list of results into a map from server | |
356 | # name to key ID to FetchKeyResult, to return. | |
357 | to_return = {} # type: Dict[str, Dict[str, FetchKeyResult]] | |
358 | for (request, results) in zip(deduped_requests, results_per_request): | |
359 | to_return_by_server = to_return.setdefault(request.server_name, {}) | |
360 | for key_id, key_result in results.items(): | |
361 | existing = to_return_by_server.get(key_id) | |
362 | if not existing or existing.valid_until_ts < key_result.valid_until_ts: | |
363 | to_return_by_server[key_id] = key_result | |
364 | ||
365 | return to_return | |
366 | ||
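To make the deduplication step above concrete, here is the same computation
run standalone (plain tuples stand in for `_FetchKeyRequest`): for each
(server, key ID) pair we keep the *maximum* requested validity timestamp,
since a key valid until the latest requested time satisfies every earlier
request as well.

    requests = [
        ("matrix.org", ["ed25519:a"], 1000),   # (server_name, key_ids, min_valid_until_ts)
        ("matrix.org", ["ed25519:a"], 2000),
        ("example.com", ["ed25519:x"], 500),
    ]
    server_to_key_to_ts = {}
    for server_name, key_ids, min_ts in requests:
        by_server = server_to_key_to_ts.setdefault(server_name, {})
        for key_id in key_ids:
            by_server[key_id] = max(by_server.get(key_id, 0), min_ts)

    assert server_to_key_to_ts == {
        "matrix.org": {"ed25519:a": 2000},
        "example.com": {"ed25519:x": 500},
    }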
367 | async def _inner_fetch_key_request( | |
368 | self, verify_request: _FetchKeyRequest | |
369 | ) -> Dict[str, FetchKeyResult]: | |
370 | """Attempt to fetch the given key by calling each key fetcher one by | |
371 | one. | |
372 | """ | |
373 | logger.debug("Starting fetch for %s", verify_request) | |
374 | ||
375 | found_keys: Dict[str, FetchKeyResult] = {} | |
376 | missing_key_ids = set(verify_request.key_ids) | |
377 | ||
378 | for fetcher in self._key_fetchers: | |
379 | if not missing_key_ids: | |
380 | break | |
381 | ||
382 | logger.debug("Getting keys from %s for %s", fetcher, verify_request) | |
383 | keys = await fetcher.get_keys( | |
296 | 384 | verify_request.server_name, |
297 | verify_request.key_ids, | |
385 | list(missing_key_ids), | |
298 | 386 | verify_request.minimum_valid_until_ts, |
299 | 387 | ) |
300 | 388 | |
301 | # add the key request to the queue, but don't start it off yet. | |
302 | key_lookups.append(verify_request) | |
303 | ||
304 | # now run _handle_key_deferred, which will wait for the key request | |
305 | # to complete and then do the verification. | |
306 | # | |
307 | # We want _handle_key_request to log to the right context, so we | |
308 | # wrap it with preserve_fn (aka run_in_background) | |
309 | return handle(verify_request) | |
310 | ||
311 | results = [process(r) for r in verify_requests] | |
312 | ||
313 | if key_lookups: | |
314 | run_in_background(self._start_key_lookups, key_lookups) | |
315 | ||
316 | return results | |
317 | ||
318 | async def _start_key_lookups( | |
319 | self, verify_requests: List[VerifyJsonRequest] | |
320 | ) -> None: | |
321 | """Sets off the key fetches for each verify request | |
322 | ||
323 | Once each fetch completes, verify_request.key_ready will be resolved. | |
324 | ||
325 | Args: | |
326 | verify_requests: | |
327 | """ | |
328 | ||
329 | try: | |
330 | # map from server name to a set of outstanding request ids | |
331 | server_to_request_ids = {} # type: Dict[str, Set[int]] | |
332 | ||
333 | for verify_request in verify_requests: | |
334 | server_name = verify_request.server_name | |
335 | request_id = id(verify_request) | |
336 | server_to_request_ids.setdefault(server_name, set()).add(request_id) | |
337 | ||
338 | # Wait for any previous lookups to complete before proceeding. | |
339 | await self.wait_for_previous_lookups(server_to_request_ids.keys()) | |
340 | ||
341 | # take out a lock on each of the servers by sticking a Deferred in | |
342 | # key_downloads | |
343 | for server_name in server_to_request_ids.keys(): | |
344 | self.key_downloads[server_name] = defer.Deferred() | |
345 | logger.debug("Got key lookup lock on %s", server_name) | |
346 | ||
347 | # When we've finished fetching all the keys for a given server_name, | |
348 | # drop the lock by resolving the deferred in key_downloads. | |
349 | def drop_server_lock(server_name): | |
350 | d = self.key_downloads.pop(server_name) | |
351 | d.callback(None) | |
352 | ||
353 | def lookup_done(res, verify_request): | |
354 | server_name = verify_request.server_name | |
355 | server_requests = server_to_request_ids[server_name] | |
356 | server_requests.remove(id(verify_request)) | |
357 | ||
358 | # if there are no more requests for this server, we can drop the lock. | |
359 | if not server_requests: | |
360 | logger.debug("Releasing key lookup lock on %s", server_name) | |
361 | drop_server_lock(server_name) | |
362 | ||
363 | return res | |
364 | ||
365 | for verify_request in verify_requests: | |
366 | verify_request.key_ready.addBoth(lookup_done, verify_request) | |
367 | ||
368 | # Actually start fetching keys. | |
369 | self._get_server_verify_keys(verify_requests) | |
370 | except Exception: | |
371 | logger.exception("Error starting key lookups") | |
372 | ||
373 | async def wait_for_previous_lookups(self, server_names: Iterable[str]) -> None: | |
374 | """Waits for any previous key lookups for the given servers to finish. | |
375 | ||
376 | Args: | |
377 | server_names: list of servers which we want to look up | |
378 | ||
379 | Returns: | |
380 | Resolves once all key lookups for the given servers have | |
381 | completed. Follows the synapse rules of logcontext preservation. | |
382 | """ | |
383 | loop_count = 1 | |
384 | while True: | |
385 | wait_on = [ | |
386 | (server_name, self.key_downloads[server_name]) | |
387 | for server_name in server_names | |
388 | if server_name in self.key_downloads | |
389 | ] | |
390 | if not wait_on: | |
391 | break | |
392 | logger.info( | |
393 | "Waiting for existing lookups for %s to complete [loop %i]", | |
394 | [w[0] for w in wait_on], | |
395 | loop_count, | |
396 | ) | |
397 | with PreserveLoggingContext(): | |
398 | await defer.DeferredList((w[1] for w in wait_on)) | |
399 | ||
400 | loop_count += 1 | |
401 | ||
402 | def _get_server_verify_keys(self, verify_requests: List[VerifyJsonRequest]) -> None: | |
403 | """Tries to find at least one key for each verify request | |
404 | ||
405 | For each verify_request, verify_request.key_ready is called back with | |
406 | params (server_name, key_id, VerifyKey) if a key is found, or errbacked | |
407 | with a SynapseError if none of the keys are found. | |
408 | ||
409 | Args: | |
410 | verify_requests: list of verify requests | |
411 | """ | |
412 | ||
413 | remaining_requests = {rq for rq in verify_requests if not rq.key_ready.called} | |
414 | ||
415 | async def do_iterations(): | |
416 | try: | |
417 | with Measure(self.clock, "get_server_verify_keys"): | |
418 | for f in self._key_fetchers: | |
419 | if not remaining_requests: | |
420 | return | |
421 | await self._attempt_key_fetches_with_fetcher( | |
422 | f, remaining_requests | |
423 | ) | |
424 | ||
425 | # look for any requests which weren't satisfied | |
426 | while remaining_requests: | |
427 | verify_request = remaining_requests.pop() | |
428 | rq_str = ( | |
429 | "VerifyJsonRequest(server=%s, key_ids=%s, min_valid=%i)" | |
430 | % ( | |
431 | verify_request.server_name, | |
432 | verify_request.key_ids, | |
433 | verify_request.minimum_valid_until_ts, | |
434 | ) | |
435 | ) | |
436 | ||
437 | # If we run the errback immediately, it may cancel our | |
438 | # loggingcontext while we are still in it, so instead we | |
439 | # schedule it for the next time round the reactor. | |
440 | # | |
441 | # (this also ensures that we don't get a stack overflow if we | |
442 | # had a massive queue of lookups waiting for this server). | |
443 | self.clock.call_later( | |
444 | 0, | |
445 | verify_request.key_ready.errback, | |
446 | SynapseError( | |
447 | 401, | |
448 | "Failed to find any key to satisfy %s" % (rq_str,), | |
449 | Codes.UNAUTHORIZED, | |
450 | ), | |
451 | ) | |
452 | except Exception as err: | |
453 | # we don't really expect to get here, because any errors should already | |
454 | # have been caught and logged. But if we do, let's log the error and make | |
455 | # sure that all of the deferreds are resolved. | |
456 | logger.error("Unexpected error in _get_server_verify_keys: %s", err) | |
457 | with PreserveLoggingContext(): | |
458 | for verify_request in remaining_requests: | |
459 | if not verify_request.key_ready.called: | |
460 | verify_request.key_ready.errback(err) | |
461 | ||
462 | run_in_background(do_iterations) | |
463 | ||
464 | async def _attempt_key_fetches_with_fetcher( | |
465 | self, fetcher: "KeyFetcher", remaining_requests: Set[VerifyJsonRequest] | |
466 | ): | |
467 | """Use a key fetcher to attempt to satisfy some key requests | |
468 | ||
469 | Args: | |
470 | fetcher: fetcher to use to fetch the keys | |
471 | remaining_requests: outstanding key requests. | |
472 | Any successfully-completed requests will be removed from the list. | |
473 | """ | |
474 | # The keys to fetch. | |
475 | # server_name -> key_id -> min_valid_ts | |
476 | missing_keys = defaultdict(dict) # type: Dict[str, Dict[str, int]] | |
477 | ||
478 | for verify_request in remaining_requests: | |
479 | # any completed requests should already have been removed | |
480 | assert not verify_request.key_ready.called | |
481 | keys_for_server = missing_keys[verify_request.server_name] | |
482 | ||
483 | for key_id in verify_request.key_ids: | |
484 | # If we have several requests for the same key, then we only need to | |
485 | # request that key once, but we should do so with the greatest | |
486 | # min_valid_until_ts of the requests, so that we can satisfy all of | |
487 | # the requests. | |
488 | keys_for_server[key_id] = max( | |
489 | keys_for_server.get(key_id, -1), | |
490 | verify_request.minimum_valid_until_ts, | |
491 | ) | |
492 | ||
493 | results = await fetcher.get_keys(missing_keys) | |
494 | ||
495 | completed = [] | |
496 | for verify_request in remaining_requests: | |
497 | server_name = verify_request.server_name | |
498 | ||
499 | # see if any of the keys we got this time are sufficient to | |
500 | # complete this VerifyJsonRequest. | |
501 | result_keys = results.get(server_name, {}) | |
502 | for key_id in verify_request.key_ids: | |
503 | fetch_key_result = result_keys.get(key_id) | |
504 | if not fetch_key_result: | |
505 | # we didn't get a result for this key | |
389 | for key_id, key in keys.items(): | |
390 | if not key: | |
506 | 391 | continue |
507 | 392 | |
508 | if ( | |
509 | fetch_key_result.valid_until_ts | |
510 | < verify_request.minimum_valid_until_ts | |
511 | ): | |
512 | # key was not valid at this point | |
393 | # If we already have a result for the given key ID we keep the | |
394 | # one with the highest `valid_until_ts`. | |
395 | existing_key = found_keys.get(key_id) | |
396 | if existing_key: | |
397 | if key.valid_until_ts <= existing_key.valid_until_ts: | |
398 | continue | |
399 | ||
400 | # We always store the returned key even if it doesn't meet the | |
401 | # `minimum_valid_until_ts` requirement, as some verification | |
402 | # requests may still be able to be satisfied by it. | |
403 | # | |
404 | # We still keep looking for the key from other fetchers in that | |
405 | # case though. | |
406 | found_keys[key_id] = key | |
407 | ||
408 | if key.valid_until_ts < verify_request.minimum_valid_until_ts: | |
513 | 409 | continue |
514 | 410 | |
515 | # we have a valid key for this request. If we run the callback | |
516 | # immediately, it may cancel our loggingcontext while we are still in | |
517 | # it, so instead we schedule it for the next time round the reactor. | |
518 | # | |
519 | # (this also ensures that we don't get a stack overflow if we had | |
520 | # a massive queue of lookups waiting for this server). | |
521 | logger.debug( | |
522 | "Found key %s:%s for %s", | |
523 | server_name, | |
524 | key_id, | |
525 | verify_request.request_name, | |
526 | ) | |
527 | self.clock.call_later( | |
528 | 0, | |
529 | verify_request.key_ready.callback, | |
530 | (server_name, key_id, fetch_key_result.verify_key), | |
531 | ) | |
532 | completed.append(verify_request) | |
533 | break | |
534 | ||
535 | remaining_requests.difference_update(completed) | |
411 | missing_key_ids.discard(key_id) | |
412 | ||
413 | return found_keys | |
536 | 414 | |
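The comments in `_inner_fetch_key_request` describe a subtle merge rule; a
standalone re-run of it (integers stand in for `FetchKeyResult`, and this
sketch scans all returned keys rather than only the ones still missing):
across fetchers we keep the copy of each key with the highest
`valid_until_ts`, but a key only stops further fetching once it meets the
minimum.

    minimum_valid_until_ts = 1500
    fetcher_results = [{"ed25519:a": 1000}, {"ed25519:a": 2000, "ed25519:b": 900}]

    found, missing = {}, {"ed25519:a", "ed25519:b"}
    for keys in fetcher_results:
        if not missing:
            break
        for key_id, valid_until_ts in keys.items():
            if key_id in found and valid_until_ts <= found[key_id]:
                continue
            found[key_id] = valid_until_ts  # stored even if not yet valid enough
            if valid_until_ts >= minimum_valid_until_ts:
                missing.discard(key_id)

    assert found == {"ed25519:a": 2000, "ed25519:b": 900}
    assert missing == {"ed25519:b"}  # still unsatisfied after all fetchers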
537 | 415 | |
538 | 416 | class KeyFetcher(metaclass=abc.ABCMeta): |
417 | def __init__(self, hs: "HomeServer"): | |
418 | self._queue = BatchingQueue( | |
419 | self.__class__.__name__, hs.get_clock(), self._fetch_keys | |
420 | ) | |
421 | ||
422 | async def get_keys( | |
423 | self, server_name: str, key_ids: List[str], minimum_valid_until_ts: int | |
424 | ) -> Dict[str, FetchKeyResult]: | |
425 | results = await self._queue.add_to_queue( | |
426 | _FetchKeyRequest( | |
427 | server_name=server_name, | |
428 | key_ids=key_ids, | |
429 | minimum_valid_until_ts=minimum_valid_until_ts, | |
430 | ) | |
431 | ) | |
432 | return results.get(server_name, {}) | |
433 | ||
539 | 434 | @abc.abstractmethod |
540 | async def get_keys( | |
541 | self, keys_to_fetch: Dict[str, Dict[str, int]] | |
435 | async def _fetch_keys( | |
436 | self, keys_to_fetch: List[_FetchKeyRequest] | |
542 | 437 | ) -> Dict[str, Dict[str, FetchKeyResult]]: |
543 | """ | |
544 | Args: | |
545 | keys_to_fetch: | |
546 | the keys to be fetched. server_name -> key_id -> min_valid_ts | |
547 | ||
548 | Returns: | |
549 | Map from server_name -> key_id -> FetchKeyResult | |
550 | """ | |
551 | raise NotImplementedError | |
438 | pass | |
552 | 439 | |
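The base class now owns the queueing while subclasses only implement the
batch hook. A hedged, self-contained sketch of that template pattern (plain
asyncio, no Synapse clock or queue; the one-item batch stands in for the
real batching):

    import asyncio
    from typing import Dict, List

    class SketchKeyFetcher:
        async def get_keys(self, server_name: str, key_ids: List[str],
                           minimum_valid_until_ts: int) -> Dict[str, int]:
            # The real base class funnels this through a BatchingQueue;
            # here we call the batch hook directly with a one-item batch.
            results = await self._fetch_keys(
                [(server_name, key_ids, minimum_valid_until_ts)]
            )
            return results.get(server_name, {})

        async def _fetch_keys(self, keys_to_fetch) -> Dict[str, Dict[str, int]]:
            raise NotImplementedError

    class StaticKeyFetcher(SketchKeyFetcher):
        # Toy subclass: serves keys from an in-memory table.
        def __init__(self, table):
            self._table = table  # server_name -> key_id -> valid_until_ts

        async def _fetch_keys(self, keys_to_fetch):
            out = {}
            for server_name, key_ids, _min_ts in keys_to_fetch:
                hits = {k: ts for k, ts in self._table.get(server_name, {}).items()
                        if k in key_ids}
                if hits:
                    out[server_name] = hits
            return out

    print(asyncio.run(StaticKeyFetcher({"s": {"ed25519:k": 9}})
                      .get_keys("s", ["ed25519:k"], 0)))  # {'ed25519:k': 9}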
553 | 440 | |
554 | 441 | class StoreKeyFetcher(KeyFetcher): |
555 | 442 | """KeyFetcher impl which fetches keys from our data store""" |
556 | 443 | |
557 | 444 | def __init__(self, hs: "HomeServer"): |
445 | super().__init__(hs) | |
446 | ||
558 | 447 | self.store = hs.get_datastore() |
559 | 448 | |
560 | async def get_keys( | |
561 | self, keys_to_fetch: Dict[str, Dict[str, int]] | |
562 | ) -> Dict[str, Dict[str, FetchKeyResult]]: | |
563 | """see KeyFetcher.get_keys""" | |
564 | ||
449 | async def _fetch_keys(self, keys_to_fetch: List[_FetchKeyRequest]): | |
565 | 450 | key_ids_to_fetch = ( |
566 | (server_name, key_id) | |
567 | for server_name, keys_for_server in keys_to_fetch.items() | |
568 | for key_id in keys_for_server.keys() | |
451 | (queue_value.server_name, key_id) | |
452 | for queue_value in keys_to_fetch | |
453 | for key_id in queue_value.key_ids | |
569 | 454 | ) |
570 | 455 | |
571 | 456 | res = await self.store.get_server_verify_keys(key_ids_to_fetch) |
577 | 462 | |
578 | 463 | class BaseV2KeyFetcher(KeyFetcher): |
579 | 464 | def __init__(self, hs: "HomeServer"): |
465 | super().__init__(hs) | |
466 | ||
580 | 467 | self.store = hs.get_datastore() |
581 | 468 | self.config = hs.config |
582 | 469 | |
684 | 571 | self.client = hs.get_federation_http_client() |
685 | 572 | self.key_servers = self.config.key_servers |
686 | 573 | |
687 | async def get_keys( | |
688 | self, keys_to_fetch: Dict[str, Dict[str, int]] | |
574 | async def _fetch_keys( | |
575 | self, keys_to_fetch: List[_FetchKeyRequest] | |
689 | 576 | ) -> Dict[str, Dict[str, FetchKeyResult]]: |
690 | """see KeyFetcher.get_keys""" | |
577 | """see KeyFetcher._fetch_keys""" | |
691 | 578 | |
692 | 579 | async def get_key(key_server: TrustedKeyServer) -> Dict: |
693 | 580 | try: |
723 | 610 | return union_of_keys |
724 | 611 | |
725 | 612 | async def get_server_verify_key_v2_indirect( |
726 | self, keys_to_fetch: Dict[str, Dict[str, int]], key_server: TrustedKeyServer | |
613 | self, keys_to_fetch: List[_FetchKeyRequest], key_server: TrustedKeyServer | |
727 | 614 | ) -> Dict[str, Dict[str, FetchKeyResult]]: |
728 | 615 | """ |
729 | 616 | Args: |
730 | 617 | keys_to_fetch: |
731 | the keys to be fetched. server_name -> key_id -> min_valid_ts | |
618 | the keys to be fetched. | |
732 | 619 | |
733 | 620 | key_server: notary server to query for the keys |
734 | 621 | |
742 | 629 | perspective_name = key_server.server_name |
743 | 630 | logger.info( |
744 | 631 | "Requesting keys %s from notary server %s", |
745 | keys_to_fetch.items(), | |
632 | keys_to_fetch, | |
746 | 633 | perspective_name, |
747 | 634 | ) |
748 | 635 | |
752 | 639 | path="/_matrix/key/v2/query", |
753 | 640 | data={ |
754 | 641 | "server_keys": { |
755 | server_name: { | |
756 | key_id: {"minimum_valid_until_ts": min_valid_ts} | |
757 | for key_id, min_valid_ts in server_keys.items() | |
642 | queue_value.server_name: { | |
643 | key_id: { | |
644 | "minimum_valid_until_ts": queue_value.minimum_valid_until_ts, | |
645 | } | |
646 | for key_id in queue_value.key_ids | |
758 | 647 | } |
759 | for server_name, server_keys in keys_to_fetch.items() | |
648 | for queue_value in keys_to_fetch | |
760 | 649 | } |
761 | 650 | }, |
762 | 651 | ) |
857 | 746 | self.client = hs.get_federation_http_client() |
858 | 747 | |
859 | 748 | async def get_keys( |
860 | self, keys_to_fetch: Dict[str, Dict[str, int]] | |
749 | self, server_name: str, key_ids: List[str], minimum_valid_until_ts: int | |
750 | ) -> Dict[str, FetchKeyResult]: | |
751 | results = await self._queue.add_to_queue( | |
752 | _FetchKeyRequest( | |
753 | server_name=server_name, | |
754 | key_ids=key_ids, | |
755 | minimum_valid_until_ts=minimum_valid_until_ts, | |
756 | ), | |
757 | key=server_name, | |
758 | ) | |
759 | return results.get(server_name, {}) | |
760 | ||
761 | async def _fetch_keys( | |
762 | self, keys_to_fetch: List[_FetchKeyRequest] | |
861 | 763 | ) -> Dict[str, Dict[str, FetchKeyResult]]: |
862 | 764 | """ |
863 | 765 | Args: |
870 | 772 | |
871 | 773 | results = {} |
872 | 774 | |
873 | async def get_key(key_to_fetch_item: Tuple[str, Dict[str, int]]) -> None: | |
874 | server_name, key_ids = key_to_fetch_item | |
775 | async def get_key(key_to_fetch_item: _FetchKeyRequest) -> None: | |
776 | server_name = key_to_fetch_item.server_name | |
777 | key_ids = key_to_fetch_item.key_ids | |
778 | ||
875 | 779 | try: |
876 | 780 | keys = await self.get_server_verify_key_v2_direct(server_name, key_ids) |
877 | 781 | results[server_name] = keys |
882 | 786 | except Exception: |
883 | 787 | logger.exception("Error getting keys %s from %s", key_ids, server_name) |
884 | 788 | |
885 | await yieldable_gather_results(get_key, keys_to_fetch.items()) | |
789 | await yieldable_gather_results(get_key, keys_to_fetch) | |
886 | 790 | return results |
887 | 791 | |
888 | 792 | async def get_server_verify_key_v2_direct( |
954 | 858 | keys.update(response_keys) |
955 | 859 | |
956 | 860 | return keys |
957 | ||
958 | ||
959 | async def _handle_key_deferred(verify_request: VerifyJsonRequest) -> None: | |
960 | """Waits for the key to become available, and then performs a verification | |
961 | ||
962 | Args: | |
963 | verify_request: | |
964 | ||
965 | Raises: | |
966 | SynapseError if there was a problem performing the verification | |
967 | """ | |
968 | server_name = verify_request.server_name | |
969 | with PreserveLoggingContext(): | |
970 | _, key_id, verify_key = await verify_request.key_ready | |
971 | ||
972 | json_object = verify_request.get_json_object() | |
973 | ||
974 | try: | |
975 | verify_signed_json(json_object, server_name, verify_key) | |
976 | except SignatureVerifyException as e: | |
977 | logger.debug( | |
978 | "Error verifying signature for %s:%s:%s with key %s: %s", | |
979 | server_name, | |
980 | verify_key.alg, | |
981 | verify_key.version, | |
982 | encode_verify_key_base64(verify_key), | |
983 | str(e), | |
984 | ) | |
985 | raise SynapseError( | |
986 | 401, | |
987 | "Invalid signature for server %s with key %s:%s: %s" | |
988 | % (server_name, verify_key.alg, verify_key.version, str(e)), | |
989 | Codes.UNAUTHORIZED, | |
990 | ) |
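Both the removed `_handle_key_deferred` above and the new `process_request`
ultimately lean on the `signedjson` library. For reference, a minimal round
trip with that library (a sketch from its published API; "example.com" is a
placeholder server name):

    from signedjson.key import generate_signing_key, get_verify_key
    from signedjson.sign import sign_json, verify_signed_json

    signing_key = generate_signing_key("v1")  # ed25519 key, version "v1"
    signed = sign_json({"foo": "bar"}, "example.com", signing_key)
    # Raises SignatureVerifyException if the signature doesn't check out.
    verify_signed_json(signed, "example.com", get_verify_key(signing_key))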
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | 15 | from collections import namedtuple |
16 | from typing import Iterable, List | |
17 | ||
18 | from twisted.internet import defer | |
19 | from twisted.internet.defer import Deferred, DeferredList | |
20 | from twisted.python.failure import Failure | |
21 | 16 | |
22 | 17 | from synapse.api.constants import MAX_DEPTH, EventTypes, Membership |
23 | 18 | from synapse.api.errors import Codes, SynapseError |
27 | 22 | from synapse.events import EventBase, make_event_from_dict |
28 | 23 | from synapse.events.utils import prune_event, validate_canonicaljson |
29 | 24 | from synapse.http.servlet import assert_params_in_dict |
30 | from synapse.logging.context import ( | |
31 | PreserveLoggingContext, | |
32 | current_context, | |
33 | make_deferred_yieldable, | |
34 | ) | |
35 | 25 | from synapse.types import JsonDict, get_domain_from_id |
36 | 26 | |
37 | 27 | logger = logging.getLogger(__name__) |
47 | 37 | self.store = hs.get_datastore() |
48 | 38 | self._clock = hs.get_clock() |
49 | 39 | |
50 | def _check_sigs_and_hash( | |
40 | async def _check_sigs_and_hash( | |
51 | 41 | self, room_version: RoomVersion, pdu: EventBase |
52 | ) -> Deferred: | |
53 | return make_deferred_yieldable( | |
54 | self._check_sigs_and_hashes(room_version, [pdu])[0] | |
55 | ) | |
56 | ||
57 | def _check_sigs_and_hashes( | |
58 | self, room_version: RoomVersion, pdus: List[EventBase] | |
59 | ) -> List[Deferred]: | |
60 | """Checks that each of the received events is correctly signed by the | |
61 | sending server. | |
42 | ) -> EventBase: | |
43 | """Checks that event is correctly signed by the sending server. | |
62 | 44 | |
63 | 45 | Args: |
64 | room_version: The room version of the PDUs | |
65 | pdus: the events to be checked | |
46 | room_version: The room version of the PDU | |
47 | pdu: the event to be checked | |
66 | 48 | |
67 | 49 | Returns: |
68 | For each input event, a deferred which: | |
69 | * returns the original event if the checks pass | |
70 | * returns a redacted version of the event (if the signature | |
50 | * the original event if the checks pass | |
51 | * a redacted version of the event (if the signature | |
71 | 52 | matched but the hash did not) |
72 | * throws a SynapseError if the signature check failed. | |
73 | The deferreds run their callbacks in the sentinel | |
74 | """ | |
75 | deferreds = _check_sigs_on_pdus(self.keyring, room_version, pdus) | |
76 | ||
77 | ctx = current_context() | |
78 | ||
79 | @defer.inlineCallbacks | |
80 | def callback(_, pdu: EventBase): | |
81 | with PreserveLoggingContext(ctx): | |
82 | if not check_event_content_hash(pdu): | |
83 | # let's try to distinguish between failures because the event was | |
84 | # redacted (which are somewhat expected) vs actual ball-tampering | |
85 | # incidents. | |
86 | # | |
87 | # This is just a heuristic, so we just assume that if the keys are | |
88 | # about the same between the redacted and received events, then the | |
89 | # received event was probably a redacted copy (but we then use our | |
90 | # *actual* redacted copy to be on the safe side.) | |
91 | redacted_event = prune_event(pdu) | |
92 | if set(redacted_event.keys()) == set(pdu.keys()) and set( | |
93 | redacted_event.content.keys() | |
94 | ) == set(pdu.content.keys()): | |
95 | logger.info( | |
96 | "Event %s seems to have been redacted; using our redacted " | |
97 | "copy", | |
98 | pdu.event_id, | |
99 | ) | |
100 | else: | |
101 | logger.warning( | |
102 | "Event %s content has been tampered, redacting", | |
103 | pdu.event_id, | |
104 | ) | |
105 | return redacted_event | |
106 | ||
107 | result = yield defer.ensureDeferred( | |
108 | self.spam_checker.check_event_for_spam(pdu) | |
53 | * throws a SynapseError if the signature check failed.""" | |
54 | try: | |
55 | await _check_sigs_on_pdu(self.keyring, room_version, pdu) | |
56 | except SynapseError as e: | |
57 | logger.warning( | |
58 | "Signature check failed for %s: %s", | |
59 | pdu.event_id, | |
60 | e, | |
61 | ) | |
62 | raise | |
63 | ||
64 | if not check_event_content_hash(pdu): | |
65 | # let's try to distinguish between failures because the event was | |
66 | # redacted (which are somewhat expected) vs actual ball-tampering | |
67 | # incidents. | |
68 | # | |
69 | # This is just a heuristic, so we just assume that if the keys are | |
70 | # about the same between the redacted and received events, then the | |
71 | # received event was probably a redacted copy (but we then use our | |
72 | # *actual* redacted copy to be on the safe side.) | |
73 | redacted_event = prune_event(pdu) | |
74 | if set(redacted_event.keys()) == set(pdu.keys()) and set( | |
75 | redacted_event.content.keys() | |
76 | ) == set(pdu.content.keys()): | |
77 | logger.info( | |
78 | "Event %s seems to have been redacted; using our redacted copy", | |
79 | pdu.event_id, | |
109 | 80 | ) |
110 | ||
111 | if result: | |
112 | logger.warning( | |
113 | "Event contains spam, redacting %s: %s", | |
114 | pdu.event_id, | |
115 | pdu.get_pdu_json(), | |
116 | ) | |
117 | return prune_event(pdu) | |
118 | ||
119 | return pdu | |
120 | ||
121 | def errback(failure: Failure, pdu: EventBase): | |
122 | failure.trap(SynapseError) | |
123 | with PreserveLoggingContext(ctx): | |
81 | else: | |
124 | 82 | logger.warning( |
125 | "Signature check failed for %s: %s", | |
83 | "Event %s content has been tampered, redacting", | |
126 | 84 | pdu.event_id, |
127 | failure.getErrorMessage(), | |
128 | 85 | ) |
129 | return failure | |
130 | ||
131 | for deferred, pdu in zip(deferreds, pdus): | |
132 | deferred.addCallbacks( | |
133 | callback, errback, callbackArgs=[pdu], errbackArgs=[pdu] | |
134 | ) | |
135 | ||
136 | return deferreds | |
86 | return redacted_event | |
87 | ||
88 | result = await self.spam_checker.check_event_for_spam(pdu) | |
89 | ||
90 | if result: | |
91 | logger.warning( | |
92 | "Event contains spam, redacting %s: %s", | |
93 | pdu.event_id, | |
94 | pdu.get_pdu_json(), | |
95 | ) | |
96 | return prune_event(pdu) | |
97 | ||
98 | return pdu | |
137 | 99 | |
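The heuristic in the rewritten `_check_sigs_and_hash` is easy to miss: when
the content hash fails, we compare key sets between the received event and
our own pruned copy to guess whether the event was legitimately redacted.
A toy illustration, with plain dicts standing in for `EventBase`:

    received = {"type": "m.room.message", "content": {}}  # content already stripped
    redacted = {"type": "m.room.message", "content": {}}  # what prune_event would keep

    probably_redacted = (set(redacted) == set(received)
                         and set(redacted["content"]) == set(received["content"]))
    assert probably_redacted  # so we use our own redacted copy, to be safe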
138 | 100 | |
139 | 101 | class PduToCheckSig(namedtuple("PduToCheckSig", ["pdu", "sender_domain", "deferreds"])): |
140 | 102 | pass |
141 | 103 | |
142 | 104 | |
143 | def _check_sigs_on_pdus( | |
144 | keyring: Keyring, room_version: RoomVersion, pdus: Iterable[EventBase] | |
145 | ) -> List[Deferred]: | |
105 | async def _check_sigs_on_pdu( | |
106 | keyring: Keyring, room_version: RoomVersion, pdu: EventBase | |
107 | ) -> None: | |
146 | 108 | """Check that the given event is correctly signed
109 | ||
110 | Raise a SynapseError if the event wasn't correctly signed. | |
147 | 111 | |
148 | 112 | Args: |
149 | 113 | keyring: keyring object to do the checks |
150 | 114 | room_version: the room version of the PDU
151 | 115 | pdu: the event to be checked
152 | ||
153 | Returns: | |
154 | A Deferred for each event in pdus, which will either succeed if | |
155 | the signatures are valid, or fail (with a SynapseError) if not. | |
156 | 116 | """ |
157 | 117 | |
158 | 118 | # we want to check that the event is signed by: |
176 | 136 | # let's start by getting the domain for each pdu, and flattening the event back |
177 | 137 | # to JSON. |
178 | 138 | |
179 | pdus_to_check = [ | |
180 | PduToCheckSig( | |
181 | pdu=p, | |
182 | sender_domain=get_domain_from_id(p.sender), | |
183 | deferreds=[], | |
184 | ) | |
185 | for p in pdus | |
186 | ] | |
187 | ||
188 | 139 | # First we check that the sender event is signed by the sender's domain |
189 | 140 | # (except if its a 3pid invite, in which case it may be sent by any server) |
190 | pdus_to_check_sender = [p for p in pdus_to_check if not _is_invite_via_3pid(p.pdu)] | |
191 | ||
192 | more_deferreds = keyring.verify_events_for_server( | |
193 | [ | |
194 | ( | |
195 | p.sender_domain, | |
196 | p.pdu, | |
197 | p.pdu.origin_server_ts if room_version.enforce_key_validity else 0, | |
198 | ) | |
199 | for p in pdus_to_check_sender | |
200 | ] | |
201 | ) | |
202 | ||
203 | def sender_err(e, pdu_to_check): | |
204 | errmsg = "event id %s: unable to verify signature for sender %s: %s" % ( | |
205 | pdu_to_check.pdu.event_id, | |
206 | pdu_to_check.sender_domain, | |
207 | e.getErrorMessage(), | |
208 | ) | |
209 | raise SynapseError(403, errmsg, Codes.FORBIDDEN) | |
210 | ||
211 | for p, d in zip(pdus_to_check_sender, more_deferreds): | |
212 | d.addErrback(sender_err, p) | |
213 | p.deferreds.append(d) | |
141 | if not _is_invite_via_3pid(pdu): | |
142 | try: | |
143 | await keyring.verify_event_for_server( | |
144 | get_domain_from_id(pdu.sender), | |
145 | pdu, | |
146 | pdu.origin_server_ts if room_version.enforce_key_validity else 0, | |
147 | ) | |
148 | except Exception as e: | |
149 | errmsg = "event id %s: unable to verify signature for sender %s: %s" % ( | |
150 | pdu.event_id, | |
151 | get_domain_from_id(pdu.sender), | |
152 | e, | |
153 | ) | |
154 | raise SynapseError(403, errmsg, Codes.FORBIDDEN) | |
214 | 155 | |
215 | 156 | # now let's look for events where the sender's domain is different to the |
216 | 157 | # event id's domain (normally only the case for joins/leaves), and add additional |
217 | 158 | # checks. Only do this if the room version has a concept of event ID domain |
218 | 159 | # (ie, the room version uses old-style non-hash event IDs). |
219 | if room_version.event_format == EventFormatVersions.V1: | |
220 | pdus_to_check_event_id = [ | |
221 | p | |
222 | for p in pdus_to_check | |
223 | if p.sender_domain != get_domain_from_id(p.pdu.event_id) | |
224 | ] | |
225 | ||
226 | more_deferreds = keyring.verify_events_for_server( | |
227 | [ | |
228 | ( | |
229 | get_domain_from_id(p.pdu.event_id), | |
230 | p.pdu, | |
231 | p.pdu.origin_server_ts if room_version.enforce_key_validity else 0, | |
160 | if room_version.event_format == EventFormatVersions.V1 and get_domain_from_id( | |
161 | pdu.event_id | |
162 | ) != get_domain_from_id(pdu.sender): | |
163 | try: | |
164 | await keyring.verify_event_for_server( | |
165 | get_domain_from_id(pdu.event_id), | |
166 | pdu, | |
167 | pdu.origin_server_ts if room_version.enforce_key_validity else 0, | |
168 | ) | |
169 | except Exception as e: | |
170 | errmsg = ( | |
171 | "event id %s: unable to verify signature for event id domain %s: %s" | |
172 | % ( | |
173 | pdu.event_id, | |
174 | get_domain_from_id(pdu.event_id), | |
175 | e, | |
232 | 176 | ) |
233 | for p in pdus_to_check_event_id | |
234 | ] | |
235 | ) | |
236 | ||
237 | def event_err(e, pdu_to_check): | |
238 | errmsg = ( | |
239 | "event id %s: unable to verify signature for event id domain: %s" | |
240 | % (pdu_to_check.pdu.event_id, e.getErrorMessage()) | |
241 | 177 | ) |
242 | 178 | raise SynapseError(403, errmsg, Codes.FORBIDDEN) |
243 | ||
244 | for p, d in zip(pdus_to_check_event_id, more_deferreds): | |
245 | d.addErrback(event_err, p) | |
246 | p.deferreds.append(d) | |
247 | ||
248 | # replace lists of deferreds with single Deferreds | |
249 | return [_flatten_deferred_list(p.deferreds) for p in pdus_to_check] | |
250 | ||
251 | ||
252 | def _flatten_deferred_list(deferreds: List[Deferred]) -> Deferred: | |
253 | """Given a list of deferreds, either return the single deferred, | |
254 | combine into a DeferredList, or return an already resolved deferred. | |
255 | """ | |
256 | if len(deferreds) > 1: | |
257 | return DeferredList(deferreds, fireOnOneErrback=True, consumeErrors=True) | |
258 | elif len(deferreds) == 1: | |
259 | return deferreds[0] | |
260 | else: | |
261 | return defer.succeed(None) | |
262 | 179 | |
263 | 180 | |
264 | 181 | def _is_invite_via_3pid(event: EventBase) -> bool: |
20 | 20 | Any, |
21 | 21 | Awaitable, |
22 | 22 | Callable, |
23 | Collection, | |
23 | 24 | Dict, |
24 | 25 | Iterable, |
25 | 26 | List, |
33 | 34 | |
34 | 35 | import attr |
35 | 36 | from prometheus_client import Counter |
36 | ||
37 | from twisted.internet import defer | |
38 | from twisted.internet.defer import Deferred | |
39 | 37 | |
40 | 38 | from synapse.api.constants import EventTypes, Membership |
41 | 39 | from synapse.api.errors import ( |
55 | 53 | from synapse.events import EventBase, builder |
56 | 54 | from synapse.federation.federation_base import FederationBase, event_from_pdu_json |
57 | 55 | from synapse.federation.transport.client import SendJoinResponse |
58 | from synapse.logging.context import make_deferred_yieldable, preserve_fn | |
59 | 56 | from synapse.logging.utils import log_function |
60 | 57 | from synapse.types import JsonDict, get_domain_from_id |
61 | from synapse.util import unwrapFirstError | |
58 | from synapse.util.async_helpers import concurrently_execute | |
62 | 59 | from synapse.util.caches.expiringcache import ExpiringCache |
63 | 60 | from synapse.util.retryutils import NotRetryingDestination |
64 | 61 | |
359 | 356 | async def _check_sigs_and_hash_and_fetch( |
360 | 357 | self, |
361 | 358 | origin: str, |
362 | pdus: List[EventBase], | |
359 | pdus: Collection[EventBase], | |
363 | 360 | room_version: RoomVersion, |
364 | 361 | outlier: bool = False, |
365 | include_none: bool = False, | |
366 | 362 | ) -> List[EventBase]: |
367 | 363 | """Takes a list of PDUs and checks the signatures and hashes of each |
368 | 364 | one. If a PDU fails its signature check then we check if we have it in |
373 | 369 | |
374 | 370 | The given list of PDUs is not modified; instead the function returns
375 | 371 | a new list. |
372 | ||
373 | Args: | |
374 | origin | |
375 | pdus | |
376 | room_version | |
377 | outlier: Whether the events are outliers or not | |
378 | ||
379 | Returns: | |
380 | A list of PDUs that have valid signatures and hashes. | |
381 | """ | |
382 | ||
383 | # We limit how many PDUs we check at once, as if we try to do hundreds | |
384 | # of thousands of PDUs at once we see large memory spikes. | |
385 | ||
386 | valid_pdus = [] | |
387 | ||
388 | async def _execute(pdu: EventBase) -> None: | |
389 | valid_pdu = await self._check_sigs_and_hash_and_fetch_one( | |
390 | pdu=pdu, | |
391 | origin=origin, | |
392 | outlier=outlier, | |
393 | room_version=room_version, | |
394 | ) | |
395 | ||
396 | if valid_pdu: | |
397 | valid_pdus.append(valid_pdu) | |
398 | ||
399 | await concurrently_execute(_execute, pdus, 10000) | |
400 | ||
401 | return valid_pdus | |
402 | ||
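`concurrently_execute(_execute, pdus, 10000)` is what enforces the memory
cap mentioned above. As a rough asyncio analogue (a semaphore-based sketch;
Synapse's actual helper drains a shared iterator with a fixed number of
workers):

    import asyncio

    async def concurrently_execute(func, args, limit):
        sem = asyncio.Semaphore(limit)

        async def run(arg):
            async with sem:  # at most `limit` calls in flight at once
                await func(arg)

        await asyncio.gather(*(run(a) for a in args))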
403 | async def _check_sigs_and_hash_and_fetch_one( | |
404 | self, | |
405 | pdu: EventBase, | |
406 | origin: str, | |
407 | room_version: RoomVersion, | |
408 | outlier: bool = False, | |
409 | ) -> Optional[EventBase]: | |
410 | """Takes a PDU and checks its signatures and hashes. If the PDU fails | |
411 | its signature check then we check if we have it in the database and if | |
412 | not then request it from the originating server of that PDU. | |
413 | ||
414 | If the PDU fails its content hash check then it is redacted. | |
376 | 415 | |
377 | 416 | Args: |
378 | 417 | origin |
383 | 422 | for events that have failed their checks |
384 | 423 | |
385 | 424 | Returns: |
386 | A list of PDUs that have valid signatures and hashes. | |
387 | """ | |
388 | deferreds = self._check_sigs_and_hashes(room_version, pdus) | |
389 | ||
390 | async def handle_check_result(pdu: EventBase, deferred: Deferred): | |
425 | The PDU (possibly redacted) if it has valid signatures and hashes. | |
426 | """ | |
427 | ||
428 | res = None | |
429 | try: | |
430 | res = await self._check_sigs_and_hash(room_version, pdu) | |
431 | except SynapseError: | |
432 | pass | |
433 | ||
434 | if not res: | |
435 | # Check local db. | |
436 | res = await self.store.get_event( | |
437 | pdu.event_id, allow_rejected=True, allow_none=True | |
438 | ) | |
439 | ||
440 | pdu_origin = get_domain_from_id(pdu.sender) | |
441 | if not res and pdu_origin != origin: | |
391 | 442 | try: |
392 | res = await make_deferred_yieldable(deferred) | |
443 | res = await self.get_pdu( | |
444 | destinations=[pdu_origin], | |
445 | event_id=pdu.event_id, | |
446 | room_version=room_version, | |
447 | outlier=outlier, | |
448 | timeout=10000, | |
449 | ) | |
393 | 450 | except SynapseError: |
394 | res = None | |
395 | ||
396 | if not res: | |
397 | # Check local db. | |
398 | res = await self.store.get_event( | |
399 | pdu.event_id, allow_rejected=True, allow_none=True | |
400 | ) | |
401 | ||
402 | pdu_origin = get_domain_from_id(pdu.sender) | |
403 | if not res and pdu_origin != origin: | |
404 | try: | |
405 | res = await self.get_pdu( | |
406 | destinations=[pdu_origin], | |
407 | event_id=pdu.event_id, | |
408 | room_version=room_version, | |
409 | outlier=outlier, | |
410 | timeout=10000, | |
411 | ) | |
412 | except SynapseError: | |
413 | pass | |
414 | ||
415 | if not res: | |
416 | logger.warning( | |
417 | "Failed to find copy of %s with valid signature", pdu.event_id | |
418 | ) | |
419 | ||
420 | return res | |
421 | ||
422 | handle = preserve_fn(handle_check_result) | |
423 | deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)] | |
424 | ||
425 | valid_pdus = await make_deferred_yieldable( | |
426 | defer.gatherResults(deferreds2, consumeErrors=True) | |
427 | ).addErrback(unwrapFirstError) | |
428 | ||
429 | if include_none: | |
430 | return valid_pdus | |
431 | else: | |
432 | return [p for p in valid_pdus if p] | |
451 | pass | |
452 | ||
453 | if not res: | |
454 | logger.warning( | |
455 | "Failed to find copy of %s with valid signature", pdu.event_id | |
456 | ) | |
457 | ||
458 | return res | |
433 | 459 | |
434 | 460 | async def get_event_auth( |
435 | 461 | self, destination: str, room_id: str, event_id: str |
670 | 696 | state = response.state |
671 | 697 | auth_chain = response.auth_events |
672 | 698 | |
673 | pdus = {p.event_id: p for p in itertools.chain(state, auth_chain)} | |
674 | ||
675 | 699 | create_event = None |
676 | 700 | for e in state: |
677 | 701 | if (e.type, e.state_key) == (EventTypes.Create, ""): |
695 | 719 | % (create_room_version,) |
696 | 720 | ) |
697 | 721 | |
698 | valid_pdus = await self._check_sigs_and_hash_and_fetch( | |
699 | destination, | |
700 | list(pdus.values()), | |
701 | outlier=True, | |
702 | room_version=room_version, | |
703 | ) | |
704 | ||
705 | valid_pdus_map = {p.event_id: p for p in valid_pdus} | |
722 | logger.info( | |
723 | "Processing from send_join %d events", len(state) + len(auth_chain) | |
724 | ) | |
725 | ||
726 | # We now go and check the signatures and hashes for each event. Note | |
727 | # that we limit how many events we process at a time to keep the | |
728 | # memory overhead from exploding. | |
729 | valid_pdus_map: Dict[str, EventBase] = {} | |
730 | ||
731 | async def _execute(pdu: EventBase) -> None: | |
732 | valid_pdu = await self._check_sigs_and_hash_and_fetch_one( | |
733 | pdu=pdu, | |
734 | origin=destination, | |
735 | outlier=True, | |
736 | room_version=room_version, | |
737 | ) | |
738 | ||
739 | if valid_pdu: | |
740 | valid_pdus_map[valid_pdu.event_id] = valid_pdu | |
741 | ||
742 | await concurrently_execute( | |
743 | _execute, itertools.chain(state, auth_chain), 10000 | |
744 | ) | |
706 | 745 | |
707 | 746 | # NB: We *need* to copy to ensure that we don't have multiple |
708 | 747 | # references being passed on, as that causes... issues. |
36 | 36 | ) |
37 | 37 | from synapse.logging.context import run_in_background |
38 | 38 | from synapse.logging.opentracing import ( |
39 | SynapseTags, | |
39 | 40 | start_active_span, |
40 | 41 | start_active_span_from_request, |
41 | 42 | tags, |
150 | 151 | ) |
151 | 152 | |
152 | 153 | await self.keyring.verify_json_for_server( |
153 | origin, json_request, now, "Incoming request" | |
154 | origin, | |
155 | json_request, | |
156 | now, | |
154 | 157 | ) |
155 | 158 | |
156 | 159 | logger.debug("Request from %s", origin) |
313 | 316 | raise |
314 | 317 | |
315 | 318 | request_tags = { |
316 | "request_id": request.get_request_id(), | |
319 | SynapseTags.REQUEST_ID: request.get_request_id(), | |
317 | 320 | tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER, |
318 | 321 | tags.HTTP_METHOD: request.get_method(), |
319 | 322 | tags.HTTP_URL: request.get_redacted_uri(), |
1561 | 1564 | server_name=hs.hostname, |
1562 | 1565 | ).register(resource) |
1563 | 1566 | |
1564 | if hs.config.experimental.spaces_enabled: | |
1565 | FederationSpaceSummaryServlet( | |
1566 | handler=hs.get_space_summary_handler(), | |
1567 | authenticator=authenticator, | |
1568 | ratelimiter=ratelimiter, | |
1569 | server_name=hs.hostname, | |
1570 | ).register(resource) | |
1567 | FederationSpaceSummaryServlet( | |
1568 | handler=hs.get_space_summary_handler(), | |
1569 | authenticator=authenticator, | |
1570 | ratelimiter=ratelimiter, | |
1571 | server_name=hs.hostname, | |
1572 | ).register(resource) | |
1571 | 1573 | |
1572 | 1574 | if "openid" in servlet_groups: |
1573 | 1575 | for servletclass in OPENID_SERVLET_CLASSES: |
107 | 107 | |
108 | 108 | assert server_name is not None |
109 | 109 | await self.keyring.verify_json_for_server( |
110 | server_name, attestation, now, "Group attestation" | |
110 | server_name, | |
111 | attestation, | |
112 | now, | |
111 | 113 | ) |
112 | 114 | |
113 | 115 | def create_attestation(self, group_id: str, user_id: str) -> JsonDict: |
86 | 86 | self.is_processing = True |
87 | 87 | try: |
88 | 88 | limit = 100 |
89 | while True: | |
89 | upper_bound = -1 | |
90 | while upper_bound < self.current_max: | |
90 | 91 | ( |
91 | 92 | upper_bound, |
92 | 93 | events, |
93 | 94 | ) = await self.store.get_new_events_for_appservice( |
94 | 95 | self.current_max, limit |
95 | 96 | ) |
96 | ||
97 | if not events: | |
98 | break | |
99 | 97 | |
100 | 98 | events_by_room = {} # type: Dict[str, List[EventBase]] |
101 | 99 | for event in events: |
152 | 150 | |
153 | 151 | await self.store.set_appservice_last_pos(upper_bound) |
154 | 152 | |
155 | now = self.clock.time_msec() | |
156 | ts = await self.store.get_received_ts(events[-1].event_id) | |
157 | ||
158 | 153 | synapse.metrics.event_processing_positions.labels( |
159 | 154 | "appservice_sender" |
160 | 155 | ).set(upper_bound) |
167 | 162 | |
168 | 163 | event_processing_loop_counter.labels("appservice_sender").inc() |
169 | 164 | |
170 | synapse.metrics.event_processing_lag.labels( | |
171 | "appservice_sender" | |
172 | ).set(now - ts) | |
173 | synapse.metrics.event_processing_last_ts.labels( | |
174 | "appservice_sender" | |
175 | ).set(ts) | |
165 | if events: | |
166 | now = self.clock.time_msec() | |
167 | ts = await self.store.get_received_ts(events[-1].event_id) | |
168 | ||
169 | synapse.metrics.event_processing_lag.labels( | |
170 | "appservice_sender" | |
171 | ).set(now - ts) | |
172 | synapse.metrics.event_processing_last_ts.labels( | |
173 | "appservice_sender" | |
174 | ).set(ts) | |
176 | 175 | finally: |
177 | 176 | self.is_processing = False |
178 | 177 |
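The rewritten loop terminates by position rather than by an empty page, so
`set_appservice_last_pos` still advances even when a page contains no
events. A standalone sketch of the loop shape (the stub store is an
assumption: it returns a new upper bound that reaches `current_max` once
everything has been paged through):

    class StubStore:
        def __init__(self):
            self.pos = 0

        def get_new_events_for_appservice(self, current_max, limit):
            new_pos = min(self.pos + limit, current_max)
            events = list(range(self.pos + 1, new_pos + 1))
            self.pos = new_pos
            return new_pos, events

    store = StubStore()
    current_max, limit = 250, 100
    upper_bound = -1
    page_sizes = []
    while upper_bound < current_max:
        upper_bound, events = store.get_new_events_for_appservice(current_max, limit)
        page_sizes.append(len(events))

    assert page_sizes == [100, 100, 50]
    assert upper_bound == 250  # exit by position, not by an empty page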
21 | 21 | from http import HTTPStatus |
22 | 22 | from typing import ( |
23 | 23 | TYPE_CHECKING, |
24 | Collection, | |
24 | 25 | Dict, |
25 | 26 | Iterable, |
26 | 27 | List, |
177 | 178 | self.room_queues = {} # type: Dict[str, List[Tuple[EventBase, str]]] |
178 | 179 | self._room_pdu_linearizer = Linearizer("fed_room_pdu") |
179 | 180 | |
181 | self._room_backfill = Linearizer("room_backfill") | |
182 | ||
180 | 183 | self.third_party_event_rules = hs.get_third_party_event_rules() |
181 | 184 | |
182 | 185 | self._ephemeral_messages_enabled = hs.config.enable_ephemeral_messages |
576 | 579 | |
577 | 580 | # Fetch the state events from the DB, and check we have the auth events. |
578 | 581 | event_map = await self.store.get_events(state_event_ids, allow_rejected=True) |
579 | auth_events_in_store = await self.store.have_seen_events(auth_event_ids) | |
582 | auth_events_in_store = await self.store.have_seen_events( | |
583 | room_id, auth_event_ids | |
584 | ) | |
580 | 585 | |
581 | 586 | # Check for missing events. We handle state and auth events separately,
582 | 587 | # as we want to pull the state from the DB, but we don't for the auth |
609 | 614 | |
610 | 615 | if missing_auth_events: |
611 | 616 | auth_events_in_store = await self.store.have_seen_events( |
612 | missing_auth_events | |
617 | room_id, missing_auth_events | |
613 | 618 | ) |
614 | 619 | missing_auth_events.difference_update(auth_events_in_store) |
615 | 620 | |
709 | 714 | |
710 | 715 | missing_auth_events = set(auth_event_ids) - fetched_events.keys() |
711 | 716 | missing_auth_events.difference_update( |
712 | await self.store.have_seen_events(missing_auth_events) | |
717 | await self.store.have_seen_events(room_id, missing_auth_events) | |
713 | 718 | ) |
714 | 719 | logger.debug("We are also missing %i auth events", len(missing_auth_events)) |
715 | 720 | |
1038 | 1043 | return. This is used as part of the heuristic to decide if we |
1039 | 1044 | should back paginate. |
1040 | 1045 | """ |
1046 | with (await self._room_backfill.queue(room_id)): | |
1047 | return await self._maybe_backfill_inner(room_id, current_depth, limit) | |
1048 | ||
1049 | async def _maybe_backfill_inner( | |
1050 | self, room_id: str, current_depth: int, limit: int | |
1051 | ) -> bool: | |
1041 | 1052 | extremities = await self.store.get_oldest_events_with_depth_in_room(room_id) |
1042 | 1053 | |
1043 | 1054 | if not extremities: |
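This hunk splits `maybe_backfill` into a thin outer method that takes a per-room `Linearizer` lock and an inner method holding the real heuristics, so concurrent backfill attempts for the same room queue up instead of racing. A rough sketch of the split, using `asyncio.Lock` per key as a simplified stand-in for Synapse's Twisted-based `Linearizer`:

```python
import asyncio
from collections import defaultdict
from typing import DefaultDict


class KeyedLocks:
    """Very rough stand-in for Synapse's Linearizer: one asyncio.Lock per key."""

    def __init__(self) -> None:
        self._locks: DefaultDict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

    def queue(self, key: str) -> asyncio.Lock:
        return self._locks[key]


_room_backfill = KeyedLocks()


async def maybe_backfill(room_id: str, current_depth: int, limit: int) -> bool:
    # The public entry point only takes the per-room lock; the actual
    # work lives in the inner method, so callers serialise per room.
    async with _room_backfill.queue(room_id):
        return await _maybe_backfill_inner(room_id, current_depth, limit)


async def _maybe_backfill_inner(room_id: str, current_depth: int, limit: int) -> bool:
    # ... fetch extremities, decide whether to backfill, etc. ...
    return False


print(asyncio.run(maybe_backfill("!room:example.org", current_depth=10, limit=100)))
```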
1353 | 1364 | |
1354 | 1365 | event_infos.append(_NewEventInfo(event, None, auth)) |
1355 | 1366 | |
1356 | await self._auth_and_persist_events( | |
1357 | destination, | |
1358 | room_id, | |
1359 | event_infos, | |
1360 | ) | |
1367 | if event_infos: | |
1368 | await self._auth_and_persist_events( | |
1369 | destination, | |
1370 | room_id, | |
1371 | event_infos, | |
1372 | ) | |
1361 | 1373 | |
1362 | 1374 | def _sanity_check_event(self, ev: EventBase) -> None: |
1363 | 1375 | """ |
2066 | 2078 | self, |
2067 | 2079 | origin: str, |
2068 | 2080 | room_id: str, |
2069 | event_infos: Iterable[_NewEventInfo], | |
2081 | event_infos: Collection[_NewEventInfo], | |
2070 | 2082 | backfilled: bool = False, |
2071 | 2083 | ) -> None: |
2072 | 2084 | """Creates the appropriate contexts and persists events. The events |
2076 | 2088 | |
2077 | 2089 | Notifies about the events where appropriate. |
2078 | 2090 | """ |
2091 | ||
2092 | if not event_infos: | |
2093 | return | |
2079 | 2094 | |
2080 | 2095 | async def prep(ev_info: _NewEventInfo): |
2081 | 2096 | event = ev_info.event |
2205 | 2220 | raise |
2206 | 2221 | events_to_context[e.event_id].rejected = RejectedReason.AUTH_ERROR |
2207 | 2222 | |
2208 | await self.persist_events_and_notify( | |
2209 | room_id, | |
2210 | [ | |
2211 | (e, events_to_context[e.event_id]) | |
2212 | for e in itertools.chain(auth_events, state) | |
2213 | ], | |
2214 | ) | |
2223 | if auth_events or state: | |
2224 | await self.persist_events_and_notify( | |
2225 | room_id, | |
2226 | [ | |
2227 | (e, events_to_context[e.event_id]) | |
2228 | for e in itertools.chain(auth_events, state) | |
2229 | ], | |
2230 | ) | |
2215 | 2231 | |
2216 | 2232 | new_event_context = await self.state_handler.compute_event_context( |
2217 | 2233 | event, old_state=state |
2474 | 2490 | # |
2475 | 2491 | # we start by checking if they are in the store, and then try calling /event_auth/. |
2476 | 2492 | if missing_auth: |
2477 | have_events = await self.store.have_seen_events(missing_auth) | |
2493 | have_events = await self.store.have_seen_events(event.room_id, missing_auth) | |
2478 | 2494 | logger.debug("Events %s are in the store", have_events) |
2479 | 2495 | missing_auth.difference_update(have_events) |
2480 | 2496 | |
2493 | 2509 | return context |
2494 | 2510 | |
2495 | 2511 | seen_remotes = await self.store.have_seen_events( |
2496 | [e.event_id for e in remote_auth_chain] | |
2512 | event.room_id, [e.event_id for e in remote_auth_chain] | |
2497 | 2513 | ) |
2498 | 2514 | |
2499 | 2515 | for e in remote_auth_chain: |
3050 | 3066 | the same room. |
3051 | 3067 | backfilled: Whether these events are a result of |
3052 | 3068 | backfilling or not |
3053 | """ | |
3069 | ||
3070 | Returns: | |
3071 | The stream ID after which all events have been persisted. | |
3072 | """ | |
3073 | if not event_and_contexts: | |
3074 | return self.store.get_current_events_token() | |
3075 | ||
3054 | 3076 | instance = self.config.worker.events_shard_config.get_instance(room_id) |
3055 | 3077 | if instance != self._instance_name: |
3056 | # Limit the number of events sent over federation. | |
3057 | for batch in batch_iter(event_and_contexts, 1000): | |
3078 | # Limit the number of events sent over replication. We choose 200 | |
3079 | # here as that is what we default to in `max_request_body_size(..)` | |
3080 | for batch in batch_iter(event_and_contexts, 200): | |
3058 | 3081 | result = await self._send_events( |
3059 | 3082 | instance_name=instance, |
3060 | 3083 | store=self.store, |
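The batch size for events forwarded to the event-persister worker drops from 1000 to 200 here, to match the default `max_request_body_size` of replication requests. A generic sketch of the `batch_iter` helper in play (Synapse keeps its own version; this is an equivalent illustration):

```python
from itertools import islice
from typing import Iterable, Iterator, Tuple, TypeVar

T = TypeVar("T")


def batch_iter(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:
    """Yield tuples of at most `size` items from `iterable`, in order."""
    it = iter(iterable)
    while True:
        batch = tuple(islice(it, size))
        if not batch:
            return
        yield batch


# 450 events split into replication requests of at most 200 events each:
for batch in batch_iter(range(450), 200):
    print(len(batch))  # -> 200, 200, 50
```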
298 | 298 | if not states: |
299 | 299 | return |
300 | 300 | |
301 | hosts_and_states = await get_interested_remotes( | |
301 | hosts_to_states = await get_interested_remotes( | |
302 | 302 | self.store, |
303 | 303 | self.presence_router, |
304 | 304 | states, |
305 | 305 | ) |
306 | 306 | |
307 | for destinations, states in hosts_and_states: | |
308 | self._federation.send_presence_to_destinations(states, destinations) | |
307 | for destination, host_states in hosts_to_states.items(): | |
308 | self._federation.send_presence_to_destinations(host_states, [destination]) | |
309 | 309 | |
310 | 310 | async def send_full_presence_to_users(self, user_ids: Collection[str]): |
311 | 311 | """ |
493 | 493 | rooms=room_ids_to_states.keys(), |
494 | 494 | users=users_to_states.keys(), |
495 | 495 | ) |
496 | ||
497 | # If this is a federation sender, notify about presence updates. | |
498 | await self.maybe_send_presence_to_interested_destinations(states) | |
499 | 496 | |
500 | 497 | async def process_replication_rows( |
501 | 498 | self, stream_name: str, instance_name: str, token: int, rows: list |
518 | 515 | for row in rows |
519 | 516 | ] |
520 | 517 | |
521 | for state in states: | |
522 | self.user_to_current_state[state.user_id] = state | |
518 | # The list of states to notify sync streams and remote servers about. | |
519 | # This is calculated by comparing the old and new states for each user | |
520 | # using `should_notify(..)`. | |
521 | # | |
522 | # Note that this is necessary as the presence writer will periodically | 
523 | # flush presence state changes to the DB that should not be notified | 
524 | # about, and those changes will still be sent over the replication stream. | 
525 | state_to_notify = [] | |
526 | ||
527 | for new_state in states: | |
528 | old_state = self.user_to_current_state.get(new_state.user_id) | |
529 | self.user_to_current_state[new_state.user_id] = new_state | |
530 | ||
531 | if not old_state or should_notify(old_state, new_state): | |
532 | state_to_notify.append(new_state) | |
523 | 533 | |
524 | 534 | stream_id = token |
525 | await self.notify_from_replication(states, stream_id) | |
535 | await self.notify_from_replication(state_to_notify, stream_id) | |
536 | ||
537 | # If this is a federation sender, notify about presence updates. | |
538 | await self.maybe_send_presence_to_interested_destinations(state_to_notify) | |
526 | 539 | |
527 | 540 | def get_currently_syncing_users_for_replication(self) -> Iterable[str]: |
528 | 541 | return [ |
828 | 841 | if to_federation_ping: |
829 | 842 | federation_presence_out_counter.inc(len(to_federation_ping)) |
830 | 843 | |
831 | hosts_and_states = await get_interested_remotes( | |
844 | hosts_to_states = await get_interested_remotes( | |
832 | 845 | self.store, |
833 | 846 | self.presence_router, |
834 | 847 | list(to_federation_ping.values()), |
835 | 848 | ) |
836 | 849 | |
837 | for destinations, states in hosts_and_states: | |
850 | for destination, states in hosts_to_states.items(): | |
838 | 851 | self._federation_queue.send_presence_to_destinations( |
839 | states, destinations | |
852 | states, [destination] | |
840 | 853 | ) |
841 | 854 | |
842 | 855 | async def _handle_timeouts(self) -> None: |
1961 | 1974 | store: DataStore, |
1962 | 1975 | presence_router: PresenceRouter, |
1963 | 1976 | states: List[UserPresenceState], |
1964 | ) -> List[Tuple[Collection[str], List[UserPresenceState]]]: | |
1977 | ) -> Dict[str, Set[UserPresenceState]]: | |
1965 | 1978 | """Given a list of presence states figure out which remote servers |
1966 | 1979 | should be sent which. |
1967 | 1980 | |
1973 | 1986 | states: A list of incoming user presence updates. |
1974 | 1987 | |
1975 | 1988 | Returns: |
1976 | A list of 2-tuples of destinations and states, where for | |
1977 | each tuple the list of UserPresenceState should be sent to each | |
1978 | destination | |
1989 | A map from destinations to presence states to send to that destination. | |
1979 | 1990 | """ |
1980 | hosts_and_states = [] # type: List[Tuple[Collection[str], List[UserPresenceState]]] | |
1991 | hosts_and_states: Dict[str, Set[UserPresenceState]] = {} | |
1981 | 1992 | |
1982 | 1993 | # First we look up the rooms each user is in (as well as any explicit |
1983 | 1994 | # subscriptions), then for each distinct room we look up the remote |
1989 | 2000 | for room_id, states in room_ids_to_states.items(): |
1990 | 2001 | user_ids = await store.get_users_in_room(room_id) |
1991 | 2002 | hosts = {get_domain_from_id(user_id) for user_id in user_ids} |
1992 | hosts_and_states.append((hosts, states)) | |
2003 | for host in hosts: | |
2004 | hosts_and_states.setdefault(host, set()).update(states) | |
1993 | 2005 | |
1994 | 2006 | for user_id, states in users_to_states.items(): |
1995 | 2007 | host = get_domain_from_id(user_id) |
1996 | hosts_and_states.append(([host], states)) | |
2008 | hosts_and_states.setdefault(host, set()).update(states) | |
1997 | 2009 | |
1998 | 2010 | return hosts_and_states |
1999 | 2011 |
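`get_interested_remotes` now returns a dict of destination to a de-duplicated set of states, rather than a list of `(hosts, states)` tuples; this is what stops the same presence update being sent to a remote twice (issue #10165 in the changelog). A small sketch of the aggregation, with plain strings standing in for `UserPresenceState` objects:

```python
from typing import Dict, Iterable, List, Set, Tuple

# Plain strings stand in for UserPresenceState objects here.
PerRoom = List[Tuple[Iterable[str], Set[str]]]


def merge_by_host(per_room: PerRoom) -> Dict[str, Set[str]]:
    hosts_to_states: Dict[str, Set[str]] = {}
    for hosts, states in per_room:
        for host in hosts:
            # A host present in several rooms gets one merged, de-duplicated
            # set, so the same update is never queued for it twice.
            hosts_to_states.setdefault(host, set()).update(states)
    return hosts_to_states


merged = merge_by_host(
    [
        ({"a.example", "b.example"}, {"@u1:x online"}),
        ({"a.example"}, {"@u1:x online", "@u2:x away"}),
    ]
)
print(sorted(merged["a.example"]))  # both updates, each exactly once
```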
314 | 314 | if context: |
315 | 315 | context.tag = sync_type |
316 | 316 | |
317 | # if we have a since token, delete any to-device messages before that token | |
318 | # (since we now know that the device has received them) | |
319 | if since_token is not None: | |
320 | since_stream_id = since_token.to_device_key | |
321 | deleted = await self.store.delete_messages_for_device( | |
322 | sync_config.user.to_string(), sync_config.device_id, since_stream_id | |
323 | ) | |
324 | logger.debug( | |
325 | "Deleted %d to-device messages up to %d", deleted, since_stream_id | |
326 | ) | |
327 | ||
317 | 328 | if timeout == 0 or since_token is None or full_state: |
318 | 329 | # we are going to return immediately, so don't bother calling |
319 | 330 | # notifier.wait_for_events. |
462 | 473 | # ensure that we always include current state in the timeline |
463 | 474 | current_state_ids = frozenset() # type: FrozenSet[str] |
464 | 475 | if any(e.is_state() for e in recents): |
465 | current_state_ids_map = await self.state.get_current_state_ids( | |
476 | current_state_ids_map = await self.store.get_current_state_ids( | |
466 | 477 | room_id |
467 | 478 | ) |
468 | 479 | current_state_ids = frozenset(current_state_ids_map.values()) |
522 | 533 | # ensure that we always include current state in the timeline |
523 | 534 | current_state_ids = frozenset() |
524 | 535 | if any(e.is_state() for e in loaded_recents): |
525 | current_state_ids_map = await self.state.get_current_state_ids( | |
536 | current_state_ids_map = await self.store.get_current_state_ids( | |
526 | 537 | room_id |
527 | 538 | ) |
528 | 539 | current_state_ids = frozenset(current_state_ids_map.values()) |
1229 | 1240 | since_stream_id = int(sync_result_builder.since_token.to_device_key) |
1230 | 1241 | |
1231 | 1242 | if since_stream_id != int(now_token.to_device_key): |
1232 | # We only delete messages when a new message comes in, but that's | |
1233 | # fine so long as we delete them at some point. | |
1234 | ||
1235 | deleted = await self.store.delete_messages_for_device( | |
1236 | user_id, device_id, since_stream_id | |
1237 | ) | |
1238 | logger.debug( | |
1239 | "Deleted %d to-device messages up to %d", deleted, since_stream_id | |
1240 | ) | |
1241 | ||
1242 | 1243 | messages, stream_id = await self.store.get_new_messages_for_device( |
1243 | 1244 | user_id, device_id, since_stream_id, now_token.to_device_key |
1244 | 1245 | ) |
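The sync hunks above move to-device message deletion to the start of the sync request: a `since` token is proof the client received everything up to it, so those messages can be deleted even when the sync returns nothing new. A toy sketch of the delete-on-acknowledge idea (class and method names are illustrative):

```python
from typing import List, Optional, Tuple


class ToDeviceInbox:
    """Toy inbox showing delete-on-acknowledge (names are illustrative)."""

    def __init__(self) -> None:
        self._messages: List[Tuple[int, str]] = []  # (stream_id, body)

    def add(self, stream_id: int, body: str) -> None:
        self._messages.append((stream_id, body))

    def delete_up_to(self, stream_id: int) -> int:
        before = len(self._messages)
        self._messages = [m for m in self._messages if m[0] > stream_id]
        return before - len(self._messages)

    def sync(self, since: Optional[int]) -> List[str]:
        # A since token proves the client already received everything up
        # to that stream id, so those messages can be deleted right away,
        # even if this sync ends up returning nothing new.
        if since is not None:
            deleted = self.delete_up_to(since)
            print("deleted %d acknowledged messages" % deleted)
        return [body for _, body in self._messages]


inbox = ToDeviceInbox()
inbox.add(1, "hello")
inbox.add(2, "world")
print(inbox.sync(since=1))  # deletes message 1, returns ['world']
```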
14 | 14 | """ This module contains base REST classes for constructing REST servlets. """ |
15 | 15 | |
16 | 16 | import logging |
17 | from typing import Dict, Iterable, List, Optional, overload | |
18 | ||
19 | from typing_extensions import Literal | |
20 | ||
21 | from twisted.web.server import Request | |
17 | 22 | |
18 | 23 | from synapse.api.errors import Codes, SynapseError |
19 | 24 | from synapse.util import json_decoder |
104 | 109 | return default |
105 | 110 | |
106 | 111 | |
112 | @overload | |
113 | def parse_bytes_from_args( | |
114 | args: Dict[bytes, List[bytes]], | |
115 | name: str, | |
116 | default: Literal[None] = None, | |
117 | required: Literal[True] = True, | |
118 | ) -> bytes: | |
119 | ... | |
120 | ||
121 | ||
122 | @overload | |
123 | def parse_bytes_from_args( | |
124 | args: Dict[bytes, List[bytes]], | |
125 | name: str, | |
126 | default: Optional[bytes] = None, | |
127 | required: bool = False, | |
128 | ) -> Optional[bytes]: | |
129 | ... | |
130 | ||
131 | ||
132 | def parse_bytes_from_args( | |
133 | args: Dict[bytes, List[bytes]], | |
134 | name: str, | |
135 | default: Optional[bytes] = None, | |
136 | required: bool = False, | |
137 | ) -> Optional[bytes]: | |
138 | """ | |
139 | Parse a string parameter as bytes from the request query string. | |
140 | ||
141 | Args: | |
142 | args: A mapping of request args as bytes to a list of bytes (e.g. request.args). | |
143 | name: the name of the query parameter. | |
144 | default: value to use if the parameter is absent, | |
145 | defaults to None. | 
146 | required: whether to raise a 400 SynapseError if the | |
147 | parameter is absent, defaults to False. | |
148 | Returns: | |
149 | Bytes or the default value. | |
150 | ||
151 | Raises: | |
152 | SynapseError if the parameter is absent and required. | |
153 | """ | |
154 | name_bytes = name.encode("ascii") | |
155 | ||
156 | if name_bytes in args: | |
157 | return args[name_bytes][0] | |
158 | elif required: | |
159 | message = "Missing string query parameter %s" % (name,) | |
160 | raise SynapseError(400, message, errcode=Codes.MISSING_PARAM) | |
161 | ||
162 | return default | |
163 | ||
164 | ||
107 | 165 | def parse_string( |
108 | request, | |
109 | name, | |
110 | default=None, | |
111 | required=False, | |
112 | allowed_values=None, | |
113 | param_type="string", | |
114 | encoding="ascii", | |
166 | request: Request, | |
167 | name: str, | |
168 | default: Optional[str] = None, | |
169 | required: bool = False, | |
170 | allowed_values: Optional[Iterable[str]] = None, | |
171 | encoding: str = "ascii", | |
115 | 172 | ): |
116 | 173 | """ |
117 | 174 | Parse a string parameter from the request query string. |
121 | 178 | |
122 | 179 | Args: |
123 | 180 | request: the twisted HTTP request. |
124 | name (bytes|unicode): the name of the query parameter. | |
125 | default (bytes|unicode|None): value to use if the parameter is absent, | |
126 | defaults to None. Must be bytes if encoding is None. | |
127 | required (bool): whether to raise a 400 SynapseError if the | |
128 | parameter is absent, defaults to False. | |
129 | allowed_values (list[bytes|unicode]): List of allowed values for the | |
181 | name: the name of the query parameter. | |
182 | default: value to use if the parameter is absent, defaults to None. | |
183 | required: whether to raise a 400 SynapseError if the | |
184 | parameter is absent, defaults to False. | |
185 | allowed_values: List of allowed values for the | |
130 | 186 | string, or None if any value is allowed, defaults to None. Must be |
131 | 187 | the same type as name, if given. |
132 | encoding (str|None): The encoding to decode the string content with. | |
133 | ||
134 | Returns: | |
135 | bytes/unicode|None: A string value or the default. Unicode if encoding | |
136 | was given, bytes otherwise. | |
188 | encoding: The encoding to decode the string content with. | |
189 | ||
190 | Returns: | |
191 | A string value or the default. | |
137 | 192 | |
138 | 193 | Raises: |
139 | 194 | SynapseError if the parameter is absent and required, or if the |
140 | 195 | parameter is present, must be one of a list of allowed values and |
141 | 196 | is not one of those allowed values. |
142 | 197 | """ |
198 | args = request.args # type: Dict[bytes, List[bytes]] # type: ignore | |
143 | 199 | return parse_string_from_args( |
144 | request.args, name, default, required, allowed_values, param_type, encoding | |
200 | args, name, default, required, allowed_values, encoding | |
145 | 201 | ) |
146 | 202 | |
147 | 203 | |
148 | def parse_string_from_args( | |
149 | args, | |
150 | name, | |
151 | default=None, | |
152 | required=False, | |
153 | allowed_values=None, | |
154 | param_type="string", | |
155 | encoding="ascii", | |
156 | ): | |
157 | ||
158 | if not isinstance(name, bytes): | |
159 | name = name.encode("ascii") | |
160 | ||
161 | if name in args: | |
162 | value = args[name][0] | |
163 | ||
164 | if encoding: | |
165 | try: | |
166 | value = value.decode(encoding) | |
167 | except ValueError: | |
168 | raise SynapseError( | |
169 | 400, "Query parameter %r must be %s" % (name, encoding) | |
170 | ) | |
171 | ||
172 | if allowed_values is not None and value not in allowed_values: | |
173 | message = "Query parameter %r must be one of [%s]" % ( | |
174 | name, | |
175 | ", ".join(repr(v) for v in allowed_values), | |
176 | ) | |
177 | raise SynapseError(400, message) | |
178 | else: | |
179 | return value | |
204 | def _parse_string_value( | |
205 | value: bytes, | |
206 | allowed_values: Optional[Iterable[str]], | |
207 | name: str, | |
208 | encoding: str, | |
209 | ) -> str: | |
210 | try: | |
211 | value_str = value.decode(encoding) | |
212 | except ValueError: | |
213 | raise SynapseError(400, "Query parameter %r must be %s" % (name, encoding)) | |
214 | ||
215 | if allowed_values is not None and value_str not in allowed_values: | |
216 | message = "Query parameter %r must be one of [%s]" % ( | |
217 | name, | |
218 | ", ".join(repr(v) for v in allowed_values), | |
219 | ) | |
220 | raise SynapseError(400, message) | |
221 | else: | |
222 | return value_str | |
223 | ||
224 | ||
225 | @overload | |
226 | def parse_strings_from_args( | |
227 | args: Dict[bytes, List[bytes]], | |
228 | name: str, | |
229 | default: Optional[List[str]] = None, | |
230 | required: Literal[True] = True, | |
231 | allowed_values: Optional[Iterable[str]] = None, | |
232 | encoding: str = "ascii", | |
233 | ) -> List[str]: | |
234 | ... | |
235 | ||
236 | ||
237 | @overload | |
238 | def parse_strings_from_args( | |
239 | args: Dict[bytes, List[bytes]], | |
240 | name: str, | |
241 | default: Optional[List[str]] = None, | |
242 | required: bool = False, | |
243 | allowed_values: Optional[Iterable[str]] = None, | |
244 | encoding: str = "ascii", | |
245 | ) -> Optional[List[str]]: | |
246 | ... | |
247 | ||
248 | ||
249 | def parse_strings_from_args( | |
250 | args: Dict[bytes, List[bytes]], | |
251 | name: str, | |
252 | default: Optional[List[str]] = None, | |
253 | required: bool = False, | |
254 | allowed_values: Optional[Iterable[str]] = None, | |
255 | encoding: str = "ascii", | |
256 | ) -> Optional[List[str]]: | |
257 | """ | |
258 | Parse a string parameter from the request query string list. | |
259 | ||
260 | The content of the query param will be decoded to Unicode using the encoding. | |
261 | ||
262 | Args: | |
263 | args: A mapping of request args as bytes to a list of bytes (e.g. request.args). | |
264 | name: the name of the query parameter. | |
265 | default: value to use if the parameter is absent, defaults to None. | |
266 | required: whether to raise a 400 SynapseError if the | |
267 | parameter is absent, defaults to False. | |
268 | allowed_values: List of allowed values for the | |
269 | string, or None if any value is allowed, defaults to None. | |
270 | encoding: The encoding to decode the string content with. | |
271 | ||
272 | Returns: | |
273 | A string value or the default. | |
274 | ||
275 | Raises: | |
276 | SynapseError if the parameter is absent and required, or if the | |
277 | parameter is present, must be one of a list of allowed values and | |
278 | is not one of those allowed values. | |
279 | """ | |
280 | name_bytes = name.encode("ascii") | |
281 | ||
282 | if name_bytes in args: | |
283 | values = args[name_bytes] | |
284 | ||
285 | return [ | |
286 | _parse_string_value(value, allowed_values, name=name, encoding=encoding) | |
287 | for value in values | |
288 | ] | |
180 | 289 | else: |
181 | 290 | if required: |
182 | message = "Missing %s query parameter %r" % (param_type, name) | |
291 | message = "Missing string query parameter %r" % (name,) | |
183 | 292 | raise SynapseError(400, message, errcode=Codes.MISSING_PARAM) |
184 | else: | |
185 | ||
186 | if encoding and isinstance(default, bytes): | |
187 | return default.decode(encoding) | |
188 | ||
189 | return default | |
293 | ||
294 | return default | |
295 | ||
296 | ||
297 | def parse_string_from_args( | |
298 | args: Dict[bytes, List[bytes]], | |
299 | name: str, | |
300 | default: Optional[str] = None, | |
301 | required: bool = False, | |
302 | allowed_values: Optional[Iterable[str]] = None, | |
303 | encoding: str = "ascii", | |
304 | ) -> Optional[str]: | |
305 | """ | |
306 | Parse the string parameter from the request query string list | |
307 | and return the first result. | |
308 | ||
309 | The content of the query param will be decoded to Unicode using the encoding. | |
310 | ||
311 | Args: | |
312 | args: A mapping of request args as bytes to a list of bytes (e.g. request.args). | |
313 | name: the name of the query parameter. | |
314 | default: value to use if the parameter is absent, defaults to None. | |
315 | required: whether to raise a 400 SynapseError if the | |
316 | parameter is absent, defaults to False. | |
317 | allowed_values: List of allowed values for the | |
318 | string, or None if any value is allowed, defaults to None. Must be | |
319 | the same type as name, if given. | |
320 | encoding: The encoding to decode the string content with. | |
321 | ||
322 | Returns: | |
323 | A string value or the default. | |
324 | ||
325 | Raises: | |
326 | SynapseError if the parameter is absent and required, or if the | |
327 | parameter is present, must be one of a list of allowed values and | |
328 | is not one of those allowed values. | |
329 | """ | |
330 | ||
331 | strings = parse_strings_from_args( | |
332 | args, | |
333 | name, | |
334 | default=[default] if default is not None else None, | |
335 | required=required, | |
336 | allowed_values=allowed_values, | |
337 | encoding=encoding, | |
338 | ) | |
339 | ||
340 | if strings is None: | |
341 | return None | |
342 | ||
343 | return strings[0] | |
190 | 344 | |
191 | 345 | |
192 | 346 | def parse_json_value_from_request(request, allow_empty_body=False): |
214 | 368 | try: |
215 | 369 | content = json_decoder.decode(content_bytes.decode("utf-8")) |
216 | 370 | except Exception as e: |
217 | logger.warning("Unable to parse JSON: %s", e) | |
371 | logger.warning("Unable to parse JSON: %s (%s)", e, content_bytes) | |
218 | 372 | raise SynapseError(400, "Content not JSON.", errcode=Codes.NOT_JSON) |
219 | 373 | |
220 | 374 | return content |
277 | 431 | |
278 | 432 | def register(self, http_server): |
279 | 433 | """ Register this servlet with the given HTTP server. """ |
280 | if hasattr(self, "PATTERNS"): | |
281 | patterns = self.PATTERNS | |
282 | ||
434 | patterns = getattr(self, "PATTERNS", None) | |
435 | if patterns: | |
283 | 436 | for method in ("GET", "PUT", "POST", "DELETE"): |
284 | 437 | if hasattr(self, "on_%s" % (method,)): |
285 | 438 | servlet_classname = self.__class__.__name__ |
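The `servlet.py` rewrite above leans on `@overload` with `Literal[True]` so the type checker knows that `required=True` can never return `None`. A self-contained sketch of that typing trick, using the same `typing_extensions.Literal` import the diff itself adds (the `first` helper is hypothetical):

```python
from typing import List, Optional, overload

from typing_extensions import Literal


@overload
def first(items: List[str], required: Literal[True]) -> str: ...


@overload
def first(items: List[str], required: bool = False) -> Optional[str]: ...


def first(items: List[str], required: bool = False) -> Optional[str]:
    if items:
        return items[0]
    if required:
        raise ValueError("missing required value")
    return None


x: str = first(["a"], required=True)  # mypy narrows this to str
y: Optional[str] = first([])          # may be None
print(x, y)
```

Callers of `parse_string(..., required=True)` and friends get the same benefit: no spurious `Optional` to assert away.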
264 | 264 | # Whether the sync response has new data to be returned to the client. |
265 | 265 | SYNC_RESULT = "sync.new_data" |
266 | 266 | |
267 | # incoming HTTP request ID (as written in the logs) | |
268 | REQUEST_ID = "request_id" | |
269 | ||
270 | # HTTP request tag (used to distinguish full vs incremental syncs, etc) | |
271 | REQUEST_TAG = "request_tag" | |
272 | ||
273 | # Text description of a database transaction | |
274 | DB_TXN_DESC = "db.txn_desc" | |
275 | ||
276 | # Uniqueish ID of a database transaction | |
277 | DB_TXN_ID = "db.txn_id" | |
278 | ||
267 | 279 | |
268 | 280 | # Block everything by default |
269 | 281 | # A regex which matches the server_names to expose traces for. |
324 | 336 | @contextlib.contextmanager |
325 | 337 | def noop_context_manager(*args, **kwargs): |
326 | 338 | """Does exactly what it says on the tin""" |
339 | # TODO: replace with contextlib.nullcontext once we drop support for Python 3.6 | |
327 | 340 | yield |
328 | 341 | |
329 | 342 | |
349 | 362 | |
350 | 363 | set_homeserver_whitelist(hs.config.opentracer_whitelist) |
351 | 364 | |
365 | from jaeger_client.metrics.prometheus import PrometheusMetricsFactory | |
366 | ||
352 | 367 | config = JaegerConfig( |
353 | 368 | config=hs.config.jaeger_config, |
354 | 369 | service_name="{} {}".format(hs.config.server_name, hs.get_instance_name()), |
355 | 370 | scope_manager=LogContextScopeManager(hs.config), |
371 | metrics_factory=PrometheusMetricsFactory(), | |
356 | 372 | ) |
357 | 373 | |
358 | 374 | # If we have the rust jaeger reporter available let's use that. |
587 | 603 | |
588 | 604 | span = opentracing.tracer.active_span |
589 | 605 | carrier = {} # type: Dict[str, str] |
590 | opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier) | |
606 | opentracing.tracer.inject(span.context, opentracing.Format.HTTP_HEADERS, carrier) | |
591 | 607 | |
592 | 608 | for key, value in carrier.items(): |
593 | 609 | headers.addRawHeaders(key, value) |
624 | 640 | span = opentracing.tracer.active_span |
625 | 641 | |
626 | 642 | carrier = {} # type: Dict[str, str] |
627 | opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier) | |
643 | opentracing.tracer.inject(span.context, opentracing.Format.HTTP_HEADERS, carrier) | |
628 | 644 | |
629 | 645 | for key, value in carrier.items(): |
630 | 646 | headers[key.encode()] = [value.encode()] |
658 | 674 | return |
659 | 675 | |
660 | 676 | opentracing.tracer.inject( |
661 | opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier | |
677 | opentracing.tracer.active_span.context, opentracing.Format.TEXT_MAP, carrier | |
662 | 678 | ) |
663 | 679 | |
664 | 680 | |
680 | 696 | |
681 | 697 | carrier = {} # type: Dict[str, str] |
682 | 698 | opentracing.tracer.inject( |
683 | opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier | |
699 | opentracing.tracer.active_span.context, opentracing.Format.TEXT_MAP, carrier | |
684 | 700 | ) |
685 | 701 | |
686 | 702 | return carrier |
695 | 711 | carrier = {} # type: Dict[str, str] |
696 | 712 | if opentracing: |
697 | 713 | opentracing.tracer.inject( |
698 | opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier | |
714 | opentracing.tracer.active_span.context, opentracing.Format.TEXT_MAP, carrier | |
699 | 715 | ) |
700 | 716 | return json_encoder.encode(carrier) |
701 | 717 | |
823 | 839 | return |
824 | 840 | |
825 | 841 | request_tags = { |
826 | "request_id": request.get_request_id(), | |
842 | SynapseTags.REQUEST_ID: request.get_request_id(), | |
827 | 843 | tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER, |
828 | 844 | tags.HTTP_METHOD: request.get_method(), |
829 | 845 | tags.HTTP_URL: request.get_redacted_uri(), |
832 | 848 | |
833 | 849 | request_name = request.request_metrics.name |
834 | 850 | if extract_context: |
835 | scope = start_active_span_from_request(request, request_name, tags=request_tags) | |
851 | scope = start_active_span_from_request(request, request_name) | |
836 | 852 | else: |
837 | scope = start_active_span(request_name, tags=request_tags) | |
853 | scope = start_active_span(request_name) | |
838 | 854 | |
839 | 855 | with scope: |
840 | 856 | try: |
844 | 860 | # with JsonResource). |
845 | 861 | scope.span.set_operation_name(request.request_metrics.name) |
846 | 862 | |
847 | scope.span.set_tag("request_tag", request.request_metrics.start_context.tag) | |
863 | # set the tags *after* the servlet completes, in case it decided to | |
864 | # prioritise the span (tags will get dropped on unprioritised spans) | |
865 | request_tags[ | |
866 | SynapseTags.REQUEST_TAG | |
867 | ] = request.request_metrics.start_context.tag | |
868 | ||
869 | for k, v in request_tags.items(): | |
870 | scope.span.set_tag(k, v) |
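The repeated `span` to `span.context` change above matters because `Tracer.inject()` takes a `SpanContext`; passing the `Span` itself is deprecated and rejected by newer OpenTracing clients. A minimal sketch of propagating the active span into an HTTP header carrier:

```python
from typing import Dict

import opentracing


def inject_active_span(headers: Dict[str, str]) -> None:
    # inject() expects a SpanContext; passing the Span itself is
    # deprecated and rejected by newer client libraries.
    span = opentracing.tracer.active_span
    if span is None:
        return
    opentracing.tracer.inject(span.context, opentracing.Format.HTTP_HEADERS, headers)


carrier: Dict[str, str] = {}
with opentracing.tracer.start_active_span("example"):
    inject_active_span(carrier)
print(carrier)  # empty under the default no-op tracer, populated under Jaeger
```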
21 | 21 | from twisted.internet import defer |
22 | 22 | |
23 | 23 | from synapse.logging.context import LoggingContext, PreserveLoggingContext |
24 | from synapse.logging.opentracing import noop_context_manager, start_active_span | |
24 | from synapse.logging.opentracing import ( | |
25 | SynapseTags, | |
26 | noop_context_manager, | |
27 | start_active_span, | |
28 | ) | |
25 | 29 | from synapse.util.async_helpers import maybe_awaitable |
26 | 30 | |
27 | 31 | if TYPE_CHECKING: |
199 | 203 | |
200 | 204 | with BackgroundProcessLoggingContext(desc, count) as context: |
201 | 205 | try: |
202 | ctx = noop_context_manager() | |
203 | 206 | if bg_start_span: |
204 | ctx = start_active_span(desc, tags={"request_id": str(context)}) | |
207 | ctx = start_active_span( | |
208 | f"bgproc.{desc}", tags={SynapseTags.REQUEST_ID: str(context)} | |
209 | ) | |
210 | else: | |
211 | ctx = noop_context_manager() | |
205 | 212 | with ctx: |
206 | 213 | return await maybe_awaitable(func(*args, **kwargs)) |
207 | 214 | except Exception: |
484 | 484 | end_time = self.clock.time_msec() + timeout |
485 | 485 | |
486 | 486 | while not result: |
487 | try: | |
488 | now = self.clock.time_msec() | |
489 | if end_time <= now: | |
490 | break | |
491 | ||
492 | # Now we wait for the _NotifierUserStream to be told there | |
493 | # is a new token. | |
494 | listener = user_stream.new_listener(prev_token) | |
495 | listener.deferred = timeout_deferred( | |
496 | listener.deferred, | |
497 | (end_time - now) / 1000.0, | |
498 | self.hs.get_reactor(), | |
499 | ) | |
500 | ||
501 | with start_active_span("wait_for_events.deferred"): | |
487 | with start_active_span("wait_for_events"): | |
488 | try: | |
489 | now = self.clock.time_msec() | |
490 | if end_time <= now: | |
491 | break | |
492 | ||
493 | # Now we wait for the _NotifierUserStream to be told there | |
494 | # is a new token. | |
495 | listener = user_stream.new_listener(prev_token) | |
496 | listener.deferred = timeout_deferred( | |
497 | listener.deferred, | |
498 | (end_time - now) / 1000.0, | |
499 | self.hs.get_reactor(), | |
500 | ) | |
501 | ||
502 | 502 | log_kv( |
503 | 503 | { |
504 | 504 | "wait_for_events": "sleep", |
516 | 516 | } |
517 | 517 | ) |
518 | 518 | |
519 | current_token = user_stream.current_token | |
520 | ||
521 | result = await callback(prev_token, current_token) | |
522 | log_kv( | |
523 | { | |
524 | "wait_for_events": "result", | |
525 | "result": bool(result), | |
526 | } | |
527 | ) | |
528 | if result: | |
519 | current_token = user_stream.current_token | |
520 | ||
521 | result = await callback(prev_token, current_token) | |
522 | log_kv( | |
523 | { | |
524 | "wait_for_events": "result", | |
525 | "result": bool(result), | |
526 | } | |
527 | ) | |
528 | if result: | |
529 | break | |
530 | ||
531 | # Update the prev_token to the current_token since nothing | |
532 | # has happened between the old prev_token and the current_token | |
533 | prev_token = current_token | |
534 | except defer.TimeoutError: | |
535 | log_kv({"wait_for_events": "timeout"}) | |
529 | 536 | break |
530 | ||
531 | # Update the prev_token to the current_token since nothing | |
532 | # has happened between the old prev_token and the current_token | |
533 | prev_token = current_token | |
534 | except defer.TimeoutError: | |
535 | log_kv({"wait_for_events": "timeout"}) | |
536 | break | |
537 | except defer.CancelledError: | |
538 | log_kv({"wait_for_events": "cancelled"}) | |
539 | break | |
537 | except defer.CancelledError: | |
538 | log_kv({"wait_for_events": "cancelled"}) | |
539 | break | |
540 | 540 | |
541 | 541 | if result is None: |
542 | 542 | # This happened if there was no timeout or if the timeout had |
67 | 67 | if row.entity.startswith("@"): |
68 | 68 | self._device_list_stream_cache.entity_has_changed(row.entity, token) |
69 | 69 | self.get_cached_devices_for_user.invalidate((row.entity,)) |
70 | self._get_cached_user_device.invalidate_many((row.entity,)) | |
70 | self._get_cached_user_device.invalidate((row.entity,)) | |
71 | 71 | self.get_device_list_last_stream_id_for_remote.invalidate((row.entity,)) |
72 | 72 | |
73 | 73 | else: |
16 | 16 | |
17 | 17 | import logging |
18 | 18 | import platform |
19 | from typing import TYPE_CHECKING, Optional, Tuple | |
19 | 20 | |
20 | 21 | import synapse |
21 | 22 | from synapse.api.errors import Codes, NotFoundError, SynapseError |
22 | from synapse.http.server import JsonResource | |
23 | from synapse.http.server import HttpServer, JsonResource | |
23 | 24 | from synapse.http.servlet import RestServlet, parse_json_object_from_request |
25 | from synapse.http.site import SynapseRequest | |
24 | 26 | from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin |
25 | 27 | from synapse.rest.admin.devices import ( |
26 | 28 | DeleteDevicesRestServlet, |
65 | 67 | UserTokenRestServlet, |
66 | 68 | WhoisRestServlet, |
67 | 69 | ) |
68 | from synapse.types import RoomStreamToken | |
70 | from synapse.types import JsonDict, RoomStreamToken | |
69 | 71 | from synapse.util.versionstring import get_version_string |
72 | ||
73 | if TYPE_CHECKING: | |
74 | from synapse.server import HomeServer | |
70 | 75 | |
71 | 76 | logger = logging.getLogger(__name__) |
72 | 77 | |
74 | 79 | class VersionServlet(RestServlet): |
75 | 80 | PATTERNS = admin_patterns("/server_version$") |
76 | 81 | |
77 | def __init__(self, hs): | |
82 | def __init__(self, hs: "HomeServer"): | |
78 | 83 | self.res = { |
79 | 84 | "server_version": get_version_string(synapse), |
80 | 85 | "python_version": platform.python_version(), |
81 | 86 | } |
82 | 87 | |
83 | def on_GET(self, request): | |
88 | def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]: | |
84 | 89 | return 200, self.res |
85 | 90 | |
86 | 91 | |
89 | 94 | "/purge_history/(?P<room_id>[^/]*)(/(?P<event_id>[^/]+))?" |
90 | 95 | ) |
91 | 96 | |
92 | def __init__(self, hs): | |
93 | """ | |
94 | ||
95 | Args: | |
96 | hs (synapse.server.HomeServer) | |
97 | """ | |
97 | def __init__(self, hs: "HomeServer"): | |
98 | 98 | self.pagination_handler = hs.get_pagination_handler() |
99 | 99 | self.store = hs.get_datastore() |
100 | 100 | self.auth = hs.get_auth() |
101 | 101 | |
102 | async def on_POST(self, request, room_id, event_id): | |
102 | async def on_POST( | |
103 | self, request: SynapseRequest, room_id: str, event_id: Optional[str] | |
104 | ) -> Tuple[int, JsonDict]: | |
103 | 105 | await assert_requester_is_admin(self.auth, request) |
104 | 106 | |
105 | 107 | body = parse_json_object_from_request(request, allow_empty_body=True) |
118 | 120 | if event.room_id != room_id: |
119 | 121 | raise SynapseError(400, "Event is for wrong room.") |
120 | 122 | |
123 | # RoomStreamToken expects [int] not Optional[int] | |
124 | assert event.internal_metadata.stream_ordering is not None | |
121 | 125 | room_token = RoomStreamToken( |
122 | 126 | event.depth, event.internal_metadata.stream_ordering |
123 | 127 | ) |
172 | 176 | class PurgeHistoryStatusRestServlet(RestServlet): |
173 | 177 | PATTERNS = admin_patterns("/purge_history_status/(?P<purge_id>[^/]+)") |
174 | 178 | |
175 | def __init__(self, hs): | |
176 | """ | |
177 | ||
178 | Args: | |
179 | hs (synapse.server.HomeServer) | |
180 | """ | |
179 | def __init__(self, hs: "HomeServer"): | |
181 | 180 | self.pagination_handler = hs.get_pagination_handler() |
182 | 181 | self.auth = hs.get_auth() |
183 | 182 | |
184 | async def on_GET(self, request, purge_id): | |
183 | async def on_GET( | |
184 | self, request: SynapseRequest, purge_id: str | |
185 | ) -> Tuple[int, JsonDict]: | |
185 | 186 | await assert_requester_is_admin(self.auth, request) |
186 | 187 | |
187 | 188 | purge_status = self.pagination_handler.get_purge_status(purge_id) |
202 | 203 | class AdminRestResource(JsonResource): |
203 | 204 | """The REST resource which gets mounted at /_synapse/admin""" |
204 | 205 | |
205 | def __init__(self, hs): | |
206 | def __init__(self, hs: "HomeServer"): | |
206 | 207 | JsonResource.__init__(self, hs, canonical_json=False) |
207 | 208 | register_servlets(hs, self) |
208 | 209 | |
209 | 210 | |
210 | def register_servlets(hs, http_server): | |
211 | def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None: | |
211 | 212 | """ |
212 | 213 | Register all the admin servlets. |
213 | 214 | """ |
241 | 242 | RateLimitRestServlet(hs).register(http_server) |
242 | 243 | |
243 | 244 | |
244 | def register_servlets_for_client_rest_resource(hs, http_server): | |
245 | def register_servlets_for_client_rest_resource( | |
246 | hs: "HomeServer", http_server: HttpServer | |
247 | ) -> None: | |
245 | 248 | """Register only the servlets which need to be exposed on /_matrix/client/xxx""" |
246 | 249 | WhoisRestServlet(hs).register(http_server) |
247 | 250 | PurgeHistoryStatusRestServlet(hs).register(http_server) |
12 | 12 | # limitations under the License. |
13 | 13 | |
14 | 14 | import re |
15 | from typing import Iterable, Pattern | |
15 | 16 | |
16 | 17 | from synapse.api.auth import Auth |
17 | 18 | from synapse.api.errors import AuthError |
19 | 20 | from synapse.types import UserID |
20 | 21 | |
21 | 22 | |
22 | def admin_patterns(path_regex: str, version: str = "v1"): | |
23 | def admin_patterns(path_regex: str, version: str = "v1") -> Iterable[Pattern]: | |
23 | 24 | """Returns the list of patterns for an admin endpoint |
24 | 25 | |
25 | 26 | Args: |
11 | 11 | # See the License for the specific language governing permissions and |
12 | 12 | # limitations under the License. |
13 | 13 | import logging |
14 | from typing import TYPE_CHECKING, Tuple | |
14 | 15 | |
15 | 16 | from synapse.api.errors import SynapseError |
16 | 17 | from synapse.http.servlet import RestServlet |
18 | from synapse.http.site import SynapseRequest | |
17 | 19 | from synapse.rest.admin._base import admin_patterns, assert_user_is_admin |
20 | from synapse.types import JsonDict | |
21 | ||
22 | if TYPE_CHECKING: | |
23 | from synapse.server import HomeServer | |
18 | 24 | |
19 | 25 | logger = logging.getLogger(__name__) |
20 | 26 | |
24 | 30 | |
25 | 31 | PATTERNS = admin_patterns("/delete_group/(?P<group_id>[^/]*)") |
26 | 32 | |
27 | def __init__(self, hs): | |
33 | def __init__(self, hs: "HomeServer"): | |
28 | 34 | self.group_server = hs.get_groups_server_handler() |
29 | 35 | self.is_mine_id = hs.is_mine_id |
30 | 36 | self.auth = hs.get_auth() |
31 | 37 | |
32 | async def on_POST(self, request, group_id): | |
38 | async def on_POST( | |
39 | self, request: SynapseRequest, group_id: str | |
40 | ) -> Tuple[int, JsonDict]: | |
33 | 41 | requester = await self.auth.get_user_by_req(request) |
34 | 42 | await assert_user_is_admin(self.auth, requester.user) |
35 | 43 |
16 | 16 | from typing import TYPE_CHECKING, Tuple |
17 | 17 | |
18 | 18 | from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError |
19 | from synapse.http.server import HttpServer | |
19 | 20 | from synapse.http.servlet import RestServlet, parse_boolean, parse_integer |
20 | 21 | from synapse.http.site import SynapseRequest |
21 | 22 | from synapse.rest.admin._base import ( |
36 | 37 | this server. |
37 | 38 | """ |
38 | 39 | |
39 | PATTERNS = ( | |
40 | admin_patterns("/room/(?P<room_id>[^/]+)/media/quarantine") | |
41 | + | |
40 | PATTERNS = [ | |
41 | *admin_patterns("/room/(?P<room_id>[^/]+)/media/quarantine"), | |
42 | 42 | # This path kept around for legacy reasons |
43 | admin_patterns("/quarantine_media/(?P<room_id>[^/]+)") | |
44 | ) | |
43 | *admin_patterns("/quarantine_media/(?P<room_id>[^/]+)"), | |
44 | ] | |
45 | 45 | |
46 | 46 | def __init__(self, hs: "HomeServer"): |
47 | 47 | self.store = hs.get_datastore() |
119 | 119 | return 200, {} |
120 | 120 | |
121 | 121 | |
122 | class UnquarantineMediaByID(RestServlet): | |
123 | """Quarantines local or remote media by a given ID so that no one can download | |
124 | it via this server. | |
125 | """ | |
126 | ||
127 | PATTERNS = admin_patterns( | |
128 | "/media/unquarantine/(?P<server_name>[^/]+)/(?P<media_id>[^/]+)" | |
129 | ) | |
130 | ||
131 | def __init__(self, hs: "HomeServer"): | |
132 | self.store = hs.get_datastore() | |
133 | self.auth = hs.get_auth() | |
134 | ||
135 | async def on_POST( | |
136 | self, request: SynapseRequest, server_name: str, media_id: str | |
137 | ) -> Tuple[int, JsonDict]: | |
138 | requester = await self.auth.get_user_by_req(request) | |
139 | await assert_user_is_admin(self.auth, requester.user) | |
140 | ||
141 | logging.info( | |
142 | "Remove from quarantine local media by ID: %s/%s", server_name, media_id | |
143 | ) | |
144 | ||
145 | # Remove this media id from quarantine | 
146 | await self.store.quarantine_media_by_id(server_name, media_id, None) | |
147 | ||
148 | return 200, {} | |
149 | ||
150 | ||
122 | 151 | class ProtectMediaByID(RestServlet): |
123 | 152 | """Protect local media from being quarantined.""" |
124 | 153 | |
136 | 165 | |
137 | 166 | logging.info("Protecting local media by ID: %s", media_id) |
138 | 167 | |
139 | # Quarantine this media id | |
140 | await self.store.mark_local_media_as_safe(media_id) | |
168 | # Protect this media id | |
169 | await self.store.mark_local_media_as_safe(media_id, safe=True) | |
170 | ||
171 | return 200, {} | |
172 | ||
173 | ||
174 | class UnprotectMediaByID(RestServlet): | |
175 | """Unprotect local media from being quarantined.""" | |
176 | ||
177 | PATTERNS = admin_patterns("/media/unprotect/(?P<media_id>[^/]+)") | |
178 | ||
179 | def __init__(self, hs: "HomeServer"): | |
180 | self.store = hs.get_datastore() | |
181 | self.auth = hs.get_auth() | |
182 | ||
183 | async def on_POST( | |
184 | self, request: SynapseRequest, media_id: str | |
185 | ) -> Tuple[int, JsonDict]: | |
186 | requester = await self.auth.get_user_by_req(request) | |
187 | await assert_user_is_admin(self.auth, requester.user) | |
188 | ||
189 | logging.info("Unprotecting local media by ID: %s", media_id) | |
190 | ||
191 | # Unprotect this media id | |
192 | await self.store.mark_local_media_as_safe(media_id, safe=False) | |
141 | 193 | |
142 | 194 | return 200, {} |
143 | 195 | |
259 | 311 | return 200, {"deleted_media": deleted_media, "total": total} |
260 | 312 | |
261 | 313 | |
262 | def register_servlets_for_media_repo(hs: "HomeServer", http_server): | |
314 | def register_servlets_for_media_repo(hs: "HomeServer", http_server: HttpServer) -> None: | |
263 | 315 | """ |
264 | 316 | Media repo specific APIs. |
265 | 317 | """ |
266 | 318 | PurgeMediaCacheRestServlet(hs).register(http_server) |
267 | 319 | QuarantineMediaInRoom(hs).register(http_server) |
268 | 320 | QuarantineMediaByID(hs).register(http_server) |
321 | UnquarantineMediaByID(hs).register(http_server) | |
269 | 322 | QuarantineMediaByUser(hs).register(http_server) |
270 | 323 | ProtectMediaByID(hs).register(http_server) |
324 | UnprotectMediaByID(hs).register(http_server) | |
271 | 325 | ListMediaInRoom(hs).register(http_server) |
272 | 326 | DeleteMediaByID(hs).register(http_server) |
273 | 327 | DeleteMediaByDateSize(hs).register(http_server) |
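These media hunks give quarantine and protect each a symmetric inverse by parameterising the storage method with a boolean (`mark_local_media_as_safe(media_id, safe=...)`) instead of a one-way call. A sketch of that design, assuming a simplified schema; the table and column names below are illustrative, not Synapse's actual ones:

```python
import sqlite3


def mark_media_safe(conn: sqlite3.Connection, media_id: str, safe: bool) -> None:
    # One parameterised write serves both the protect and unprotect
    # endpoints, rather than a separate one-way method for each.
    conn.execute(
        "UPDATE local_media SET safe_from_quarantine = ? WHERE media_id = ?",
        (1 if safe else 0, media_id),
    )
    conn.commit()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE local_media (media_id TEXT, safe_from_quarantine INTEGER)")
conn.execute("INSERT INTO local_media VALUES ('abc', 0)")
mark_media_safe(conn, "abc", safe=True)   # protect
mark_media_safe(conn, "abc", safe=False)  # unprotect
print(conn.execute("SELECT * FROM local_media").fetchall())
```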
648 | 648 | limit = parse_integer(request, "limit", default=10) |
649 | 649 | |
650 | 650 | # picking the API shape for symmetry with /messages |
651 | filter_str = parse_string(request, b"filter", encoding="utf-8") | |
651 | filter_str = parse_string(request, "filter", encoding="utf-8") | |
652 | 652 | if filter_str: |
653 | 653 | filter_json = urlparse.unquote(filter_str) |
654 | 654 | event_filter = Filter( |
477 | 477 | |
478 | 478 | class WhoisRestServlet(RestServlet): |
479 | 479 | path_regex = "/whois/(?P<user_id>[^/]*)$" |
480 | PATTERNS = ( | |
481 | admin_patterns(path_regex) | |
482 | + | |
480 | PATTERNS = [ | |
481 | *admin_patterns(path_regex), | |
483 | 482 | # URL for spec reason |
484 | 483 | # https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid |
485 | client_patterns("/admin" + path_regex, v1=True) | |
486 | ) | |
484 | *client_patterns("/admin" + path_regex, v1=True), | |
485 | ] | |
487 | 486 | |
488 | 487 | def __init__(self, hs: "HomeServer"): |
489 | 488 | self.hs = hs |
552 | 551 | class AccountValidityRenewServlet(RestServlet): |
553 | 552 | PATTERNS = admin_patterns("/account_validity/validity$") |
554 | 553 | |
555 | def __init__(self, hs): | |
556 | """ | |
557 | Args: | |
558 | hs (synapse.server.HomeServer): server | |
559 | """ | |
554 | def __init__(self, hs: "HomeServer"): | |
560 | 555 | self.hs = hs |
561 | 556 | self.account_activity_handler = hs.get_account_validity_handler() |
562 | 557 | self.auth = hs.get_auth() |
13 | 13 | |
14 | 14 | import logging |
15 | 15 | import re |
16 | from typing import TYPE_CHECKING, Awaitable, Callable, Dict, Optional | |
16 | from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional | |
17 | 17 | |
18 | 18 | from synapse.api.errors import Codes, LoginError, SynapseError |
19 | 19 | from synapse.api.ratelimiting import Ratelimiter |
24 | 24 | from synapse.http.server import HttpServer, finish_request |
25 | 25 | from synapse.http.servlet import ( |
26 | 26 | RestServlet, |
27 | parse_bytes_from_args, | |
27 | 28 | parse_json_object_from_request, |
28 | 29 | parse_string, |
29 | 30 | ) |
436 | 437 | finish_request(request) |
437 | 438 | return |
438 | 439 | |
439 | client_redirect_url = parse_string( | |
440 | request, "redirectUrl", required=True, encoding=None | |
441 | ) | |
440 | args = request.args # type: Dict[bytes, List[bytes]] # type: ignore | |
441 | client_redirect_url = parse_bytes_from_args(args, "redirectUrl", required=True) | |
442 | 442 | sso_url = await self._sso_handler.handle_redirect_request( |
443 | 443 | request, |
444 | 444 | client_redirect_url, |
536 | 536 | self.store, request, default_limit=10 |
537 | 537 | ) |
538 | 538 | as_client_event = b"raw" not in request.args |
539 | filter_str = parse_string(request, b"filter", encoding="utf-8") | |
539 | filter_str = parse_string(request, "filter", encoding="utf-8") | |
540 | 540 | if filter_str: |
541 | 541 | filter_json = urlparse.unquote(filter_str) |
542 | 542 | event_filter = Filter( |
651 | 651 | limit = parse_integer(request, "limit", default=10) |
652 | 652 | |
653 | 653 | # picking the API shape for symmetry with /messages |
654 | filter_str = parse_string(request, b"filter", encoding="utf-8") | |
654 | filter_str = parse_string(request, "filter", encoding="utf-8") | |
655 | 655 | if filter_str: |
656 | 656 | filter_json = urlparse.unquote(filter_str) |
657 | 657 | event_filter = Filter( |
909 | 909 | r"^/_matrix/client/unstable/org\.matrix\.msc2432" |
910 | 910 | r"/rooms/(?P<room_id>[^/]*)/aliases" |
911 | 911 | ), |
912 | ] | |
912 | ] + list(client_patterns("/rooms/(?P<room_id>[^/]*)/aliases$", unstable=False)) | |
913 | 913 | |
914 | 914 | def __init__(self, hs: "HomeServer"): |
915 | 915 | super().__init__() |
1059 | 1059 | RoomRedactEventRestServlet(hs).register(http_server) |
1060 | 1060 | RoomTypingRestServlet(hs).register(http_server) |
1061 | 1061 | RoomEventContextServlet(hs).register(http_server) |
1062 | ||
1063 | if hs.config.experimental.spaces_enabled: | |
1064 | RoomSpaceSummaryRestServlet(hs).register(http_server) | |
1062 | RoomSpaceSummaryRestServlet(hs).register(http_server) | |
1063 | RoomEventServlet(hs).register(http_server) | |
1064 | JoinedRoomsRestServlet(hs).register(http_server) | |
1065 | RoomAliasListServlet(hs).register(http_server) | |
1066 | SearchRestServlet(hs).register(http_server) | |
1065 | 1067 | |
1066 | 1068 | # Some servlets only get registered for the main process. |
1067 | 1069 | if not is_worker: |
1068 | 1070 | RoomCreateRestServlet(hs).register(http_server) |
1069 | 1071 | RoomForgetRestServlet(hs).register(http_server) |
1070 | SearchRestServlet(hs).register(http_server) | |
1071 | JoinedRoomsRestServlet(hs).register(http_server) | |
1072 | RoomEventServlet(hs).register(http_server) | |
1073 | RoomAliasListServlet(hs).register(http_server) | |
1074 | 1072 | |
1075 | 1073 | |
1076 | 1074 | def register_deprecated_servlets(hs, http_server): |
15 | 15 | from http import HTTPStatus |
16 | 16 | |
17 | 17 | from synapse.api.errors import Codes, SynapseError |
18 | from synapse.http.servlet import ( | |
19 | RestServlet, | |
20 | assert_params_in_dict, | |
21 | parse_json_object_from_request, | |
22 | ) | |
18 | from synapse.http.servlet import RestServlet, parse_json_object_from_request | |
23 | 19 | |
24 | 20 | from ._base import client_patterns |
25 | 21 | |
41 | 37 | user_id = requester.user.to_string() |
42 | 38 | |
43 | 39 | body = parse_json_object_from_request(request) |
44 | assert_params_in_dict(body, ("reason", "score")) | |
45 | 40 | |
46 | if not isinstance(body["reason"], str): | |
41 | if not isinstance(body.get("reason", ""), str): | |
47 | 42 | raise SynapseError( |
48 | 43 | HTTPStatus.BAD_REQUEST, |
49 | 44 | "Param 'reason' must be a string", |
50 | 45 | Codes.BAD_JSON, |
51 | 46 | ) |
52 | if not isinstance(body["score"], int): | |
47 | if not isinstance(body.get("score", 0), int): | |
53 | 48 | raise SynapseError( |
54 | 49 | HTTPStatus.BAD_REQUEST, |
55 | 50 | "Param 'score' must be an integer", |
60 | 55 | room_id=room_id, |
61 | 56 | event_id=event_id, |
62 | 57 | user_id=user_id, |
63 | reason=body["reason"], | |
58 | reason=body.get("reason"), | |
64 | 59 | content=body, |
65 | 60 | received_ts=self.clock.time_msec(), |
66 | 61 | ) |
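The report-event change above makes `reason` and `score` optional: `assert_params_in_dict` is dropped, and each field is type-checked only if present, by passing a correctly-typed default to `body.get(...)`. A short sketch of that validation idiom:

```python
from typing import Any, Dict


def validate_report(body: Dict[str, Any]) -> None:
    # Using get() with a correctly-typed default means a missing key
    # sails through the isinstance check, while a present-but-wrong
    # value still fails it.
    if not isinstance(body.get("reason", ""), str):
        raise ValueError("Param 'reason' must be a string")
    if not isinstance(body.get("score", 0), int):
        raise ValueError("Param 'score' must be an integer")


validate_report({})                      # ok: both absent
validate_report({"reason": "spam"})      # ok: score absent
try:
    validate_report({"score": "high"})   # rejected: wrong type
except ValueError as e:
    print(e)
```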
16 | 16 | from hashlib import sha256 |
17 | 17 | from http import HTTPStatus |
18 | 18 | from os import path |
19 | from typing import Dict, List | |
19 | 20 | |
20 | 21 | import jinja2 |
21 | 22 | from jinja2 import TemplateNotFound |
23 | 24 | from synapse.api.errors import NotFoundError, StoreError, SynapseError |
24 | 25 | from synapse.config import ConfigError |
25 | 26 | from synapse.http.server import DirectServeHtmlResource, respond_with_html |
26 | from synapse.http.servlet import parse_string | |
27 | from synapse.http.servlet import parse_bytes_from_args, parse_string | |
27 | 28 | from synapse.types import UserID |
28 | 29 | |
29 | 30 | # language to use for the templates. TODO: figure this out from Accept-Language |
115 | 116 | has_consented = False |
116 | 117 | public_version = username == "" |
117 | 118 | if not public_version: |
118 | userhmac_bytes = parse_string(request, "h", required=True, encoding=None) | |
119 | args = request.args # type: Dict[bytes, List[bytes]] | |
120 | userhmac_bytes = parse_bytes_from_args(args, "h", required=True) | |
119 | 121 | |
120 | 122 | self._check_hash(username, userhmac_bytes) |
121 | 123 | |
151 | 153 | """ |
152 | 154 | version = parse_string(request, "v", required=True) |
153 | 155 | username = parse_string(request, "u", required=True) |
154 | userhmac = parse_string(request, "h", required=True, encoding=None) | |
156 | args = request.args # type: Dict[bytes, List[bytes]] | |
157 | userhmac = parse_bytes_from_args(args, "h", required=True) | |
155 | 158 | |
156 | 159 | self._check_hash(username, userhmac) |
157 | 160 |
21 | 21 | from synapse.http.server import DirectServeJsonResource, respond_with_json |
22 | 22 | from synapse.http.servlet import parse_integer, parse_json_object_from_request |
23 | 23 | from synapse.util import json_decoder |
24 | from synapse.util.async_helpers import yieldable_gather_results | |
24 | 25 | |
25 | 26 | logger = logging.getLogger(__name__) |
26 | 27 | |
209 | 210 | # If there is a cache miss, request the missing keys, then recurse (and |
210 | 211 | # ensure the result is sent). |
211 | 212 | if cache_misses and query_remote_on_cache_miss: |
212 | await self.fetcher.get_keys(cache_misses) | |
213 | await yieldable_gather_results( | |
214 | lambda t: self.fetcher.get_keys(*t), | |
215 | ( | |
216 | (server_name, list(keys), 0) | |
217 | for server_name, keys in cache_misses.items() | |
218 | ), | |
219 | ) | |
213 | 220 | await self.query_keys(request, query, query_remote_on_cache_miss=False) |
214 | 221 | else: |
215 | 222 | signed_keys = [] |
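On a cache miss the key resource now fans out one fetch per server concurrently, mapping the fetcher over `(server_name, key_ids, minimum_valid_until_ts)` tuples. A sketch of the fan-out shape, with `asyncio.gather` standing in for Synapse's Twisted-aware `yieldable_gather_results` and a dummy `fetch_keys` in place of the real fetcher:

```python
import asyncio
from typing import Dict, List


async def fetch_keys(server_name: str, key_ids: List[str], min_valid_until: int) -> str:
    await asyncio.sleep(0)  # pretend to hit the network
    return "%s -> %s" % (server_name, key_ids)


async def fetch_all(cache_misses: Dict[str, List[str]]) -> List[str]:
    # Fan the per-server fetches out concurrently; asyncio.gather plays
    # the role of Synapse's yieldable_gather_results here.
    return await asyncio.gather(
        *(fetch_keys(server, keys, 0) for server, keys in cache_misses.items())
    )


print(asyncio.run(fetch_all({"a.example": ["ed25519:k1"], "b.example": ["ed25519:k2"]})))
```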
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | import logging |
16 | from typing import IO, TYPE_CHECKING | |
16 | from typing import IO, TYPE_CHECKING, Dict, List, Optional | |
17 | 17 | |
18 | 18 | from twisted.web.server import Request |
19 | 19 | |
20 | 20 | from synapse.api.errors import Codes, SynapseError |
21 | 21 | from synapse.http.server import DirectServeJsonResource, respond_with_json |
22 | from synapse.http.servlet import parse_string | |
22 | from synapse.http.servlet import parse_bytes_from_args | |
23 | 23 | from synapse.http.site import SynapseRequest |
24 | 24 | from synapse.rest.media.v1.media_storage import SpamMediaException |
25 | 25 | |
60 | 60 | errcode=Codes.TOO_LARGE, |
61 | 61 | ) |
62 | 62 | |
63 | upload_name = parse_string(request, b"filename", encoding=None) | |
64 | if upload_name: | |
63 | args = request.args # type: Dict[bytes, List[bytes]] # type: ignore | |
64 | upload_name_bytes = parse_bytes_from_args(args, "filename") | |
65 | if upload_name_bytes: | |
65 | 66 | try: |
66 | upload_name = upload_name.decode("utf8") | |
67 | upload_name = upload_name_bytes.decode("utf8") # type: Optional[str] | |
67 | 68 | except UnicodeDecodeError: |
68 | 69 | raise SynapseError( |
69 | 70 | msg="Invalid UTF-8 filename parameter: %r" % (upload_name), code=400 |
39 | 39 | |
40 | 40 | from synapse.api.errors import StoreError |
41 | 41 | from synapse.config.database import DatabaseConnectionConfig |
42 | from synapse.logging import opentracing | |
42 | 43 | from synapse.logging.context import ( |
43 | 44 | LoggingContext, |
44 | 45 | current_context, |
89 | 90 | db_args = dict(db_config.config.get("args", {})) |
90 | 91 | db_args.setdefault("cp_reconnect", True) |
91 | 92 | |
93 | def _on_new_connection(conn): | |
94 | # Ensure we have a logging context so we can correctly track queries, | |
95 | # etc. | |
96 | with LoggingContext("db.on_new_connection"): | |
97 | engine.on_new_connection( | |
98 | LoggingDatabaseConnection(conn, engine, "on_new_connection") | |
99 | ) | |
100 | ||
92 | 101 | return adbapi.ConnectionPool( |
93 | 102 | db_config.config["name"], |
94 | 103 | cp_reactor=reactor, |
95 | cp_openfun=lambda conn: engine.on_new_connection( | |
96 | LoggingDatabaseConnection(conn, engine, "on_new_connection") | |
97 | ), | |
104 | cp_openfun=_on_new_connection, | |
98 | 105 | **db_args, |
99 | 106 | ) |
100 | 107 | |
312 | 319 | start = time.time() |
313 | 320 | |
314 | 321 | try: |
315 | return func(sql, *args) | |
322 | with opentracing.start_active_span( | |
323 | "db.query", | |
324 | tags={ | |
325 | opentracing.tags.DATABASE_TYPE: "sql", | |
326 | opentracing.tags.DATABASE_STATEMENT: sql, | |
327 | }, | |
328 | ): | |
329 | return func(sql, *args) | |
316 | 330 | except Exception as e: |
317 | 331 | sql_logger.debug("[SQL FAIL] {%s} %s", self.name, e) |
318 | 332 | raise |
524 | 538 | exception_callbacks=exception_callbacks, |
525 | 539 | ) |
526 | 540 | try: |
527 | r = func(cursor, *args, **kwargs) | |
528 | conn.commit() | |
529 | return r | |
541 | with opentracing.start_active_span( | |
542 | "db.txn", | |
543 | tags={ | |
544 | opentracing.SynapseTags.DB_TXN_DESC: desc, | |
545 | opentracing.SynapseTags.DB_TXN_ID: name, | |
546 | }, | |
547 | ): | |
548 | r = func(cursor, *args, **kwargs) | |
549 | opentracing.log_kv({"message": "commit"}) | |
550 | conn.commit() | |
551 | return r | |
530 | 552 | except self.engine.module.OperationalError as e: |
531 | 553 | # This can happen if the database disappears mid |
532 | 554 | # transaction. |
540 | 562 | if i < N: |
541 | 563 | i += 1 |
542 | 564 | try: |
543 | conn.rollback() | |
565 | with opentracing.start_active_span("db.rollback"): | |
566 | conn.rollback() | |
544 | 567 | except self.engine.module.Error as e1: |
545 | 568 | transaction_logger.warning("[TXN EROLL] {%s} %s", name, e1) |
546 | 569 | continue |
553 | 576 | if i < N: |
554 | 577 | i += 1 |
555 | 578 | try: |
556 | conn.rollback() | |
579 | with opentracing.start_active_span("db.rollback"): | |
580 | conn.rollback() | |
557 | 581 | except self.engine.module.Error as e1: |
558 | 582 | transaction_logger.warning( |
559 | 583 | "[TXN EROLL] {%s} %s", |
652 | 676 | logger.warning("Starting db txn '%s' from sentinel context", desc) |
653 | 677 | |
654 | 678 | try: |
655 | result = await self.runWithConnection( | |
656 | self.new_transaction, | |
657 | desc, | |
658 | after_callbacks, | |
659 | exception_callbacks, | |
660 | func, | |
661 | *args, | |
662 | db_autocommit=db_autocommit, | |
663 | **kwargs, | |
664 | ) | |
679 | with opentracing.start_active_span(f"db.{desc}"): | |
680 | result = await self.runWithConnection( | |
681 | self.new_transaction, | |
682 | desc, | |
683 | after_callbacks, | |
684 | exception_callbacks, | |
685 | func, | |
686 | *args, | |
687 | db_autocommit=db_autocommit, | |
688 | **kwargs, | |
689 | ) | |
665 | 690 | |
666 | 691 | for after_callback, after_args, after_kwargs in after_callbacks: |
667 | 692 | after_callback(*after_args, **after_kwargs) |
717 | 742 | with LoggingContext( |
718 | 743 | str(curr_context), parent_context=parent_context |
719 | 744 | ) as context: |
720 | sched_duration_sec = monotonic_time() - start_time | |
721 | sql_scheduling_timer.observe(sched_duration_sec) | |
722 | context.add_database_scheduled(sched_duration_sec) | |
723 | ||
724 | if self.engine.is_connection_closed(conn): | |
725 | logger.debug("Reconnecting closed database connection") | |
726 | conn.reconnect() | |
727 | ||
728 | try: | |
729 | if db_autocommit: | |
730 | self.engine.attempt_to_set_autocommit(conn, True) | |
731 | ||
732 | db_conn = LoggingDatabaseConnection( | |
733 | conn, self.engine, "runWithConnection" | |
734 | ) | |
735 | return func(db_conn, *args, **kwargs) | |
736 | finally: | |
737 | if db_autocommit: | |
738 | self.engine.attempt_to_set_autocommit(conn, False) | |
745 | with opentracing.start_active_span( | |
746 | operation_name="db.connection", | |
747 | ): | |
748 | sched_duration_sec = monotonic_time() - start_time | |
749 | sql_scheduling_timer.observe(sched_duration_sec) | |
750 | context.add_database_scheduled(sched_duration_sec) | |
751 | ||
752 | if self.engine.is_connection_closed(conn): | |
753 | logger.debug("Reconnecting closed database connection") | |
754 | conn.reconnect() | |
755 | opentracing.log_kv({"message": "reconnected"}) | |
756 | ||
757 | try: | |
758 | if db_autocommit: | |
759 | self.engine.attempt_to_set_autocommit(conn, True) | |
760 | ||
761 | db_conn = LoggingDatabaseConnection( | |
762 | conn, self.engine, "runWithConnection" | |
763 | ) | |
764 | return func(db_conn, *args, **kwargs) | |
765 | finally: | |
766 | if db_autocommit: | |
767 | self.engine.attempt_to_set_autocommit(conn, False) | |
739 | 768 | |
740 | 769 | return await make_deferred_yieldable( |
741 | 770 | self._db_pool.runWithConnection(inner_func, *args, **kwargs) |
167 | 167 | backfilled, |
168 | 168 | ): |
169 | 169 | self._invalidate_get_event_cache(event_id) |
170 | self.have_seen_event.invalidate((room_id, event_id)) | |
170 | 171 | |
171 | 172 | self.get_latest_event_ids_in_room.invalidate((room_id,)) |
172 | 173 | |
173 | self.get_unread_event_push_actions_by_room_for_user.invalidate_many((room_id,)) | |
174 | self.get_unread_event_push_actions_by_room_for_user.invalidate((room_id,)) | |
174 | 175 | |
175 | 176 | if not backfilled: |
176 | 177 | self._events_stream_cache.entity_has_changed(room_id, stream_ordering) |
183 | 184 | self.get_invited_rooms_for_local_user.invalidate((state_key,)) |
184 | 185 | |
185 | 186 | if relates_to: |
186 | self.get_relations_for_event.invalidate_many((relates_to,)) | |
187 | self.get_aggregation_groups_for_event.invalidate_many((relates_to,)) | |
187 | self.get_relations_for_event.invalidate((relates_to,)) | |
188 | self.get_aggregation_groups_for_event.invalidate((relates_to,)) | |
188 | 189 | self.get_applicable_edit.invalidate((relates_to,)) |
189 | 190 | |
190 | 191 | async def invalidate_cache_and_stream(self, cache_name: str, keys: Tuple[Any, ...]): |
1281 | 1281 | ) |
1282 | 1282 | |
1283 | 1283 | txn.call_after(self.get_cached_devices_for_user.invalidate, (user_id,)) |
1284 | txn.call_after(self._get_cached_user_device.invalidate_many, (user_id,)) | |
1284 | txn.call_after(self._get_cached_user_device.invalidate, (user_id,)) | |
1285 | 1285 | txn.call_after( |
1286 | 1286 | self.get_device_list_last_stream_id_for_remote.invalidate, (user_id,) |
1287 | 1287 | ) |
859 | 859 | not be deleted. |
860 | 860 | """ |
861 | 861 | txn.call_after( |
862 | self.get_unread_event_push_actions_by_room_for_user.invalidate_many, | |
862 | self.get_unread_event_push_actions_by_room_for_user.invalidate, | |
863 | 863 | (room_id, user_id), |
864 | 864 | ) |
865 | 865 |
1747 | 1747 | }, |
1748 | 1748 | ) |
1749 | 1749 | |
1750 | txn.call_after(self.store.get_relations_for_event.invalidate_many, (parent_id,)) | |
1750 | txn.call_after(self.store.get_relations_for_event.invalidate, (parent_id,)) | |
1751 | 1751 | txn.call_after( |
1752 | self.store.get_aggregation_groups_for_event.invalidate_many, (parent_id,) | |
1752 | self.store.get_aggregation_groups_for_event.invalidate, (parent_id,) | |
1753 | 1753 | ) |
1754 | 1754 | |
1755 | 1755 | if rel_type == RelationTypes.REPLACE: |
1902 | 1902 | |
1903 | 1903 | for user_id in user_ids: |
1904 | 1904 | txn.call_after( |
1905 | self.store.get_unread_event_push_actions_by_room_for_user.invalidate_many, | |
1905 | self.store.get_unread_event_push_actions_by_room_for_user.invalidate, | |
1906 | 1906 | (room_id, user_id), |
1907 | 1907 | ) |
1908 | 1908 | |
1916 | 1916 | def _remove_push_actions_for_event_id_txn(self, txn, room_id, event_id): |
1917 | 1917 | # Sad that we have to blow away the cache for the whole room here |
1918 | 1918 | txn.call_after( |
1919 | self.store.get_unread_event_push_actions_by_room_for_user.invalidate_many, | |
1919 | self.store.get_unread_event_push_actions_by_room_for_user.invalidate, | |
1920 | 1920 | (room_id,), |
1921 | 1921 | ) |
1922 | 1922 | txn.execute( |
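The recurring `invalidate_many` -> `invalidate` substitution in these hunks relies on tree-backed caches accepting a partial tuple key and pruning the whole subtree (see the cache hunks later in this diff). A toy illustration, not Synapse's `TreeCache`:

    class ToyTreeCache:
        def __init__(self):
            self._root = {}

        def set(self, key: tuple, value) -> None:
            node = self._root
            for part in key[:-1]:
                node = node.setdefault(part, {})
            node[key[-1]] = value

        def invalidate(self, key: tuple) -> None:
            # A full key removes one entry; a shorter (partial) key removes
            # the whole subtree under that prefix.
            node = self._root
            for part in key[:-1]:
                node = node.get(part)
                if node is None:
                    return
            node.pop(key[-1], None)

    cache = ToyTreeCache()
    cache.set(("room1", "event_a"), 1)
    cache.set(("room1", "event_b"), 2)
    cache.invalidate(("room1",))  # drops both entries under "room1"
    assert cache._root == {}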
21 | 21 | Iterable, |
22 | 22 | List, |
23 | 23 | Optional, |
24 | Set, | |
24 | 25 | Tuple, |
25 | 26 | overload, |
26 | 27 | ) |
54 | 55 | from synapse.storage.util.id_generators import MultiWriterIdGenerator, StreamIdGenerator |
55 | 56 | from synapse.storage.util.sequence import build_sequence_generator |
56 | 57 | from synapse.types import JsonDict, get_domain_from_id |
57 | from synapse.util.caches.descriptors import cached | |
58 | from synapse.util.caches.descriptors import cached, cachedList | |
58 | 59 | from synapse.util.caches.lrucache import LruCache |
59 | 60 | from synapse.util.iterutils import batch_iter |
60 | 61 | from synapse.util.metrics import Measure |
1044 | 1045 | |
1045 | 1046 | return {r["event_id"] for r in rows} |
1046 | 1047 | |
1047 | async def have_seen_events(self, event_ids): | |
1048 | async def have_seen_events( | |
1049 | self, room_id: str, event_ids: Iterable[str] | |
1050 | ) -> Set[str]: | |
1048 | 1051 | """Given a list of event ids, check if we have already processed them. |
1049 | 1052 | |
1050 | Args: | |
1051 | event_ids (iterable[str]): | |
1053 | The room_id is only used to structure the cache (so that it can later be | |
1054 | invalidated by room_id) - there is no guarantee that the events are actually | |
1055 | in the room in question. | |
1056 | ||
1057 | Args: | |
1058 | room_id: Room we are polling | |
1059 | event_ids: events we are looking for | |
1052 | 1060 | |
1053 | 1061 | Returns: |
1054 | 1062 | set[str]: The events we have already seen. |
1055 | 1063 | """ |
1064 | res = await self._have_seen_events_dict( | |
1065 | (room_id, event_id) for event_id in event_ids | |
1066 | ) | |
1067 | return {eid for ((_rid, eid), have_event) in res.items() if have_event} | |
1068 | ||
1069 | @cachedList("have_seen_event", "keys") | |
1070 | async def _have_seen_events_dict( | |
1071 | self, keys: Iterable[Tuple[str, str]] | |
1072 | ) -> Dict[Tuple[str, str], bool]: | |
1073 | """Helper for have_seen_events | |
1074 | ||
1075 | Returns: | |
1076 | a dict {(room_id, event_id)-> bool} | |
1077 | """ | |
1056 | 1078 | # if the event cache contains the event, obviously we've seen it. |
1057 | results = {x for x in event_ids if self._get_event_cache.contains(x)} | |
1058 | ||
1059 | def have_seen_events_txn(txn, chunk): | |
1060 | sql = "SELECT event_id FROM events as e WHERE " | |
1079 | ||
1080 | cache_results = { | |
1081 | (rid, eid) for (rid, eid) in keys if self._get_event_cache.contains((eid,)) | |
1082 | } | |
1083 | results = {x: True for x in cache_results} | |
1084 | ||
1085 | def have_seen_events_txn(txn, chunk: Tuple[Tuple[str, str], ...]): | |
1086 | # we deliberately do *not* query the database for room_id, to make the | |
1087 | # query an index-only lookup on `events_event_id_key`. | |
1088 | # | |
1089 | # We therefore pull the events from the database into a set... | |
1090 | ||
1091 | sql = "SELECT event_id FROM events AS e WHERE " | |
1061 | 1092 | clause, args = make_in_list_sql_clause( |
1062 | txn.database_engine, "e.event_id", chunk | |
1093 | txn.database_engine, "e.event_id", [eid for (_rid, eid) in chunk] | |
1063 | 1094 | ) |
1064 | 1095 | txn.execute(sql + clause, args) |
1065 | results.update(row[0] for row in txn) | |
1066 | ||
1067 | for chunk in batch_iter((x for x in event_ids if x not in results), 100): | |
1096 | found_events = {eid for eid, in txn} | |
1097 | ||
1098 | # ... and then we can update the results for each row in the batch | |
1099 | results.update({(rid, eid): (eid in found_events) for (rid, eid) in chunk}) | |
1100 | ||
1101 | # each batch requires its own index scan, so we make the batches as big as | |
1102 | # possible. | |
1103 | for chunk in batch_iter((k for k in keys if k not in cache_results), 500): | |
1068 | 1104 | await self.db_pool.runInteraction( |
1069 | 1105 | "have_seen_events", have_seen_events_txn, chunk |
1070 | 1106 | ) |
1107 | ||
1071 | 1108 | return results |
1109 | ||
1110 | @cached(max_entries=100000, tree=True) | |
1111 | async def have_seen_event(self, room_id: str, event_id: str): | |
1112 | # this only exists for the benefit of the @cachedList descriptor on | |
1113 | # _have_seen_events_dict | |
1114 | raise NotImplementedError() | |
1072 | 1115 | |
1073 | 1116 | def _get_current_state_event_counts_txn(self, txn, room_id): |
1074 | 1117 | """ |
142 | 142 | "created_ts", |
143 | 143 | "quarantined_by", |
144 | 144 | "url_cache", |
145 | "safe_from_quarantine", | |
145 | 146 | ), |
146 | 147 | allow_none=True, |
147 | 148 | desc="get_local_media", |
295 | 296 | desc="store_local_media", |
296 | 297 | ) |
297 | 298 | |
298 | async def mark_local_media_as_safe(self, media_id: str) -> None: | |
299 | """Mark a local media as safe from quarantining.""" | |
299 | async def mark_local_media_as_safe(self, media_id: str, safe: bool = True) -> None: | |
300 | """Mark a local media as safe or unsafe from quarantining.""" | |
300 | 301 | await self.db_pool.simple_update_one( |
301 | 302 | table="local_media_repository", |
302 | 303 | keyvalues={"media_id": media_id}, |
303 | updatevalues={"safe_from_quarantine": True}, | |
304 | updatevalues={"safe_from_quarantine": safe}, | |
304 | 305 | desc="mark_local_media_as_safe", |
305 | 306 | ) |
306 | 307 |
49 | 49 | instance_name=self._instance_name, |
50 | 50 | tables=[("presence_stream", "instance_name", "stream_id")], |
51 | 51 | sequence_name="presence_stream_sequence", |
52 | writers=hs.config.worker.writers.to_device, | |
52 | writers=hs.config.worker.writers.presence, | |
53 | 53 | ) |
54 | 54 | else: |
55 | 55 | self._presence_id_gen = StreamIdGenerator( |
15 | 15 | from typing import Any, List, Set, Tuple |
16 | 16 | |
17 | 17 | from synapse.api.errors import SynapseError |
18 | from synapse.storage._base import SQLBaseStore | |
18 | from synapse.storage.databases.main import CacheInvalidationWorkerStore | |
19 | 19 | from synapse.storage.databases.main.state import StateGroupWorkerStore |
20 | 20 | from synapse.types import RoomStreamToken |
21 | 21 | |
22 | 22 | logger = logging.getLogger(__name__) |
23 | 23 | |
24 | 24 | |
25 | class PurgeEventsStore(StateGroupWorkerStore, SQLBaseStore): | |
25 | class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore): | |
26 | 26 | async def purge_history( |
27 | 27 | self, room_id: str, token: str, delete_local_events: bool |
28 | 28 | ) -> Set[int]: |
202 | 202 | "DELETE FROM event_to_state_groups " |
203 | 203 | "WHERE event_id IN (SELECT event_id from events_to_purge)" |
204 | 204 | ) |
205 | for event_id, _ in event_rows: | |
206 | txn.call_after(self._get_state_group_for_event.invalidate, (event_id,)) | |
207 | 205 | |
208 | 206 | # Delete all remote non-state events |
209 | 207 | for table in ( |
281 | 279 | # finally, drop the temp table. this will commit the txn in sqlite, |
282 | 280 | # so make sure to keep this actually last. |
283 | 281 | txn.execute("DROP TABLE events_to_purge") |
282 | ||
283 | for event_id, should_delete in event_rows: | |
284 | self._invalidate_cache_and_stream( | |
285 | txn, self._get_state_group_for_event, (event_id,) | |
286 | ) | |
287 | ||
288 | # XXX: This is racy, since have_seen_events could be called between the | |
289 | # transaction completing and the invalidation running. On the other hand, | |
290 | # that's no different to calling `have_seen_events` just before the | |
291 | # event is deleted from the database. | |
292 | if should_delete: | |
293 | self._invalidate_cache_and_stream( | |
294 | txn, self.have_seen_event, (room_id, event_id) | |
295 | ) | |
284 | 296 | |
285 | 297 | logger.info("[purge] done") |
286 | 298 | |
421 | 433 | # index on them. In any case we should be clearing out 'stream' tables |
422 | 434 | # periodically anyway (#5888) |
423 | 435 | |
424 | # TODO: we could probably usefully do a bunch of cache invalidation here | |
436 | # TODO: we could probably usefully do a bunch more cache invalidation here | |
437 | ||
438 | # XXX: as with purge_history, this is racy, but no worse than other races | |
439 | # that already exist. | |
440 | self._invalidate_cache_and_stream(txn, self.have_seen_event, (room_id,)) | |
425 | 441 | |
426 | 442 | logger.info("[purge] done") |
427 | 443 |
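A conceptual sketch of the "invalidate and stream" pattern these purge hunks switch to: drop the local cache entry now, and record a row that other workers replay to invalidate their own caches. Names are illustrative, not Synapse's internals:

    from typing import List, Tuple

    local_cache = {("!room:a", "$ev"): True}
    replication_log: List[Tuple[str, tuple]] = []

    def invalidate_cache_and_stream(cache_name: str, keys: tuple) -> None:
        # Drop the local entry immediately...
        local_cache.pop(keys, None)
        # ...and log a row for other workers to consume.
        replication_log.append((cache_name, keys))

    invalidate_cache_and_stream("have_seen_event", ("!room:a", "$ev"))
    assert not local_cache and replication_log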
459 | 459 | |
460 | 460 | def invalidate_caches_for_receipt(self, room_id, receipt_type, user_id): |
461 | 461 | self.get_receipts_for_user.invalidate((user_id, receipt_type)) |
462 | self._get_linearized_receipts_for_room.invalidate_many((room_id,)) | |
462 | self._get_linearized_receipts_for_room.invalidate((room_id,)) | |
463 | 463 | self.get_last_receipt_event_id_for_user.invalidate( |
464 | 464 | (user_id, room_id, receipt_type) |
465 | 465 | ) |
658 | 658 | ) |
659 | 659 | txn.call_after(self.get_receipts_for_user.invalidate, (user_id, receipt_type)) |
660 | 660 | # FIXME: This shouldn't invalidate the whole cache |
661 | txn.call_after( | |
662 | self._get_linearized_receipts_for_room.invalidate_many, (room_id,) | |
663 | ) | |
661 | txn.call_after(self._get_linearized_receipts_for_room.invalidate, (room_id,)) | |
664 | 662 | |
665 | 663 | self.db_pool.simple_delete_txn( |
666 | 664 | txn, |
763 | 763 | self, |
764 | 764 | server_name: str, |
765 | 765 | media_id: str, |
766 | quarantined_by: str, | |
766 | quarantined_by: Optional[str], | |
767 | 767 | ) -> int: |
768 | """quarantines a single local or remote media id | |
768 | """quarantines or unquarantines a single local or remote media id | |
769 | 769 | |
770 | 770 | Args: |
771 | 771 | server_name: The name of the server that holds this media |
772 | 772 | media_id: The ID of the media to be quarantined |
773 | 773 | quarantined_by: The user ID that initiated the quarantine request |
774 | If it is `None`, the media will be removed from quarantine | 
774 | 775 | """ |
775 | 776 | logger.info("Quarantining media: %s/%s", server_name, media_id) |
776 | 777 | is_local = server_name == self.config.server_name |
837 | 838 | txn, |
838 | 839 | local_mxcs: List[str], |
839 | 840 | remote_mxcs: List[Tuple[str, str]], |
840 | quarantined_by: str, | |
841 | quarantined_by: Optional[str], | |
841 | 842 | ) -> int: |
842 | """Quarantine local and remote media items | |
843 | """Quarantine and unquarantine local and remote media items | |
843 | 844 | |
844 | 845 | Args: |
845 | 846 | txn (cursor) |
847 | 848 | remote_mxcs: A list of (remote server, media id) tuples representing |
848 | 849 | remote mxc URLs |
849 | 850 | quarantined_by: The ID of the user who initiated the quarantine request |
851 | If it is `None`, the media will be removed from quarantine | 
850 | 852 | Returns: |
851 | 853 | The total number of media items quarantined |
852 | 854 | """ |
855 | ||
853 | 856 | # Update all the tables to set the quarantined_by flag |
854 | txn.executemany( | |
855 | """ | |
857 | sql = """ | |
856 | 858 | UPDATE local_media_repository |
857 | 859 | SET quarantined_by = ? |
858 | WHERE media_id = ? AND safe_from_quarantine = ? | |
859 | """, | |
860 | ((quarantined_by, media_id, False) for media_id in local_mxcs), | |
861 | ) | |
860 | WHERE media_id = ? | |
861 | """ | |
862 | ||
863 | # set quarantine | |
864 | if quarantined_by is not None: | |
865 | sql += "AND safe_from_quarantine = ?" | |
866 | rows = [(quarantined_by, media_id, False) for media_id in local_mxcs] | |
867 | # remove from quarantine | |
868 | else: | |
869 | rows = [(quarantined_by, media_id) for media_id in local_mxcs] | |
870 | ||
871 | txn.executemany(sql, rows) | |
862 | 872 | # Note that a rowcount of -1 can be used to indicate no rows were affected. |
863 | 873 | total_media_quarantined = txn.rowcount if txn.rowcount > 0 else 0 |
864 | 874 | |
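A runnable sqlite sketch of the conditional `executemany` built above: quarantining respects the `safe_from_quarantine` flag, while unquarantining (a `None` quarantined_by) updates unconditionally:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE local_media_repository "
        "(media_id TEXT, quarantined_by TEXT, safe_from_quarantine BOOLEAN)"
    )
    conn.execute("INSERT INTO local_media_repository VALUES ('m1', NULL, 0)")

    def set_quarantine(conn, media_ids, quarantined_by):
        sql = "UPDATE local_media_repository SET quarantined_by = ? WHERE media_id = ? "
        if quarantined_by is not None:
            # Quarantining skips media marked safe_from_quarantine.
            sql += "AND safe_from_quarantine = ?"
            rows = [(quarantined_by, mid, False) for mid in media_ids]
        else:
            # Unquarantining clears the flag for every listed id.
            rows = [(quarantined_by, mid) for mid in media_ids]
        conn.executemany(sql, rows)

    set_quarantine(conn, ["m1"], "@admin:example.com")
    set_quarantine(conn, ["m1"], None)  # remove from quarantine
    row = conn.execute("SELECT quarantined_by FROM local_media_repository").fetchone()
    assert row[0] is None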
1497 | 1507 | room_id: str, |
1498 | 1508 | event_id: str, |
1499 | 1509 | user_id: str, |
1500 | reason: str, | |
1510 | reason: Optional[str], | |
1501 | 1511 | content: JsonDict, |
1502 | 1512 | received_ts: int, |
1503 | 1513 | ) -> None: |
396 | 396 | # ... persist event ... |
397 | 397 | """ |
398 | 398 | |
399 | # If we have a list of instances that are allowed to write to this | |
400 | # stream, make sure we're in it. | |
401 | if self._writers and self._instance_name not in self._writers: | |
402 | raise Exception("Tried to allocate stream ID on non-writer") | |
403 | ||
399 | 404 | return _MultiWriterCtxManager(self) |
400 | 405 | |
401 | 406 | def get_next_mult(self, n: int): |
405 | 410 | # ... persist events ... |
406 | 411 | """ |
407 | 412 | |
413 | # If we have a list of instances that are allowed to write to this | |
414 | # stream, make sure we're in it. | |
415 | if self._writers and self._instance_name not in self._writers: | |
416 | raise Exception("Tried to allocate stream ID on non-writer") | |
417 | ||
408 | 418 | return _MultiWriterCtxManager(self, n) |
409 | 419 | |
410 | 420 | def get_next_txn(self, txn: LoggingTransaction): |
414 | 424 | stream_id = stream_id_gen.get_next(txn) |
415 | 425 | # ... persist event ... |
416 | 426 | """ |
427 | ||
428 | # If we have a list of instances that are allowed to write to this | |
429 | # stream, make sure we're in it. | |
430 | if self._writers and self._instance_name not in self._writers: | |
431 | raise Exception("Tried to allocate stream ID on non-writer") | |
417 | 432 | |
418 | 433 | next_id = self._load_next_id_txn(txn) |
419 | 434 |
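The guard added to all three `get_next*` entry points reduces to one check; a minimal illustration with assumed names, not Synapse's `MultiWriterIdGenerator`:

    class StreamIdAllocator:
        """Illustrative only; not Synapse's MultiWriterIdGenerator."""

        def __init__(self, instance_name: str, writers: list):
            self._instance_name = instance_name
            self._writers = writers
            self._next = 0

        def get_next(self) -> int:
            # If a writer allow-list is configured, only listed instances
            # may allocate stream IDs; anything else is a misconfiguration.
            if self._writers and self._instance_name not in self._writers:
                raise Exception("Tried to allocate stream ID on non-writer")
            self._next += 1
            return self._next

    gen = StreamIdAllocator("worker1", writers=["worker1"])
    assert gen.get_next() == 1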
14 | 14 | |
15 | 15 | import collections |
16 | 16 | import inspect |
17 | import itertools | |
17 | 18 | import logging |
18 | 19 | from contextlib import contextmanager |
19 | 20 | from typing import ( |
159 | 160 | ) |
160 | 161 | |
161 | 162 | |
163 | T = TypeVar("T") | |
164 | ||
165 | ||
162 | 166 | def concurrently_execute( |
163 | func: Callable, args: Iterable[Any], limit: int | |
167 | func: Callable[[T], Any], args: Iterable[T], limit: int | |
164 | 168 | ) -> defer.Deferred: |
165 | 169 | """Executes the function with each argument concurrently while limiting |
166 | 170 | the number of concurrent executions. |
172 | 176 | limit: Maximum number of concurrent executions.
173 | 177 | |
174 | 178 | Returns: |
175 | Deferred[list]: Resolved when all function invocations have finished. | |
179 | Deferred: Resolved when all function invocations have finished. | |
176 | 180 | """ |
177 | 181 | it = iter(args) |
178 | 182 | |
179 | async def _concurrently_execute_inner(): | |
183 | async def _concurrently_execute_inner(value: T) -> None: | |
180 | 184 | try: |
181 | 185 | while True: |
182 | await maybe_awaitable(func(next(it))) | |
186 | await maybe_awaitable(func(value)) | |
187 | value = next(it) | |
183 | 188 | except StopIteration: |
184 | 189 | pass |
185 | 190 | |
191 | # We use `itertools.islice` to handle the case where the number of args is | |
192 | # less than the limit, avoiding spawning unnecessary background | 
193 | # tasks. | |
186 | 194 | return make_deferred_yieldable( |
187 | 195 | defer.gatherResults( |
188 | [run_in_background(_concurrently_execute_inner) for _ in range(limit)], | |
196 | [ | |
197 | run_in_background(_concurrently_execute_inner, value) | |
198 | for value in itertools.islice(it, limit) | |
199 | ], | |
189 | 200 | consumeErrors=True, |
190 | 201 | ) |
191 | 202 | ).addErrback(unwrapFirstError) |
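A self-contained asyncio analogue of the reworked worker-loop pattern: seed at most `limit` workers via `itertools.islice`, and let each worker pull further items from the shared iterator until it is exhausted:

    import asyncio
    import itertools
    from typing import Awaitable, Callable, Iterable, TypeVar

    T = TypeVar("T")

    async def concurrently_execute(
        func: Callable[[T], Awaitable[None]], args: Iterable[T], limit: int
    ) -> None:
        it = iter(args)

        async def worker(value: T) -> None:
            try:
                while True:
                    await func(value)
                    value = next(it)  # StopIteration is caught below
            except StopIteration:
                pass

        # islice avoids spawning workers that would find the iterator empty.
        await asyncio.gather(*(worker(v) for v in itertools.islice(it, limit)))

    asyncio.run(concurrently_execute(lambda x: asyncio.sleep(0), range(10), 3))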
24 | 24 | TypeVar, |
25 | 25 | ) |
26 | 26 | |
27 | from prometheus_client import Gauge | |
28 | ||
27 | 29 | from twisted.internet import defer |
28 | 30 | |
29 | 31 | from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable |
30 | from synapse.metrics import LaterGauge | |
31 | 32 | from synapse.metrics.background_process_metrics import run_as_background_process |
32 | 33 | from synapse.util import Clock |
33 | 34 | |
36 | 37 | |
37 | 38 | V = TypeVar("V") |
38 | 39 | R = TypeVar("R") |
40 | ||
41 | number_queued = Gauge( | |
42 | "synapse_util_batching_queue_number_queued", | |
43 | "The number of items waiting in the queue across all keys", | |
44 | labelnames=("name",), | |
45 | ) | |
46 | ||
47 | number_in_flight = Gauge( | |
48 | "synapse_util_batching_queue_number_pending", | |
49 | "The number of items across all keys either being processed or waiting in a queue", | |
50 | labelnames=("name",), | |
51 | ) | |
52 | ||
53 | number_of_keys = Gauge( | |
54 | "synapse_util_batching_queue_number_of_keys", | |
55 | "The number of distinct keys that have items queued", | |
56 | labelnames=("name",), | |
57 | ) | |
39 | 58 | |
40 | 59 | |
41 | 60 | class BatchingQueue(Generic[V, R]): |
47 | 66 | called, and will keep being called until the queue has been drained (for the |
48 | 67 | given key). |
49 | 68 | |
69 | If the processing function raises an exception then the exception is proxied | |
70 | through to the callers waiting on that batch of work. | |
71 | ||
50 | 72 | Note that the return value of `add_to_queue` will be the return value of the |
51 | 73 | processing function that processed the given item. This means that the |
52 | 74 | returned value will likely include data for other items that were in the |
53 | 75 | batch. |
76 | ||
77 | Args: | |
78 | name: A name for the queue, used for logging contexts and metrics. | |
79 | This must be unique, otherwise the metrics will be wrong. | |
80 | clock: The clock to use to schedule work. | |
81 | process_batch_callback: The callback to be run to process a batch of | 
82 | work. | |
54 | 83 | """ |
55 | 84 | |
56 | 85 | def __init__( |
72 | 101 | # The function to call with batches of values. |
73 | 102 | self._process_batch_callback = process_batch_callback |
74 | 103 | |
75 | LaterGauge( | |
76 | "synapse_util_batching_queue_number_queued", | |
77 | "The number of items waiting in the queue across all keys", | |
78 | labels=("name",), | |
79 | caller=lambda: sum(len(v) for v in self._next_values.values()), | |
104 | number_queued.labels(self._name).set_function( | |
105 | lambda: sum(len(q) for q in self._next_values.values()) | |
80 | 106 | ) |
81 | 107 | |
82 | LaterGauge( | |
83 | "synapse_util_batching_queue_number_of_keys", | |
84 | "The number of distinct keys that have items queued", | |
85 | labels=("name",), | |
86 | caller=lambda: len(self._next_values), | |
87 | ) | |
108 | number_of_keys.labels(self._name).set_function(lambda: len(self._next_values)) | |
109 | ||
110 | self._number_in_flight_metric = number_in_flight.labels( | |
111 | self._name | |
112 | ) # type: Gauge | |
88 | 113 | |
89 | 114 | async def add_to_queue(self, value: V, key: Hashable = ()) -> R: |
90 | 115 | """Adds the value to the queue with the given key, returning the result |
106 | 131 | if key not in self._processing_keys: |
107 | 132 | run_as_background_process(self._name, self._process_queue, key) |
108 | 133 | |
109 | return await make_deferred_yieldable(d) | |
134 | with self._number_in_flight_metric.track_inprogress(): | |
135 | return await make_deferred_yieldable(d) | |
110 | 136 | |
111 | 137 | async def _process_queue(self, key: Hashable) -> None: |
112 | 138 | """A background task to repeatedly pull things off the queue for the |
113 | 139 | given key and call the `self._process_batch_callback` with the values. |
114 | 140 | """ |
115 | 141 | |
142 | if key in self._processing_keys: | |
143 | return | |
144 | ||
116 | 145 | try: |
117 | if key in self._processing_keys: | |
118 | return | |
119 | ||
120 | 146 | self._processing_keys.add(key) |
121 | 147 | |
122 | 148 | while True: |
136 | 162 | values = [value for value, _ in next_values] |
137 | 163 | results = await self._process_batch_callback(values) |
138 | 164 | |
139 | for _, deferred in next_values: | |
140 | with PreserveLoggingContext(): | |
165 | with PreserveLoggingContext(): | |
166 | for _, deferred in next_values: | |
141 | 167 | deferred.callback(results) |
142 | 168 | |
143 | 169 | except Exception as e: |
144 | for _, deferred in next_values: | |
145 | if deferred.called: | |
146 | continue | |
170 | with PreserveLoggingContext(): | |
171 | for _, deferred in next_values: | |
172 | if deferred.called: | |
173 | continue | |
147 | 174 | |
148 | with PreserveLoggingContext(): | |
149 | 175 | deferred.errback(e) |
150 | 176 | |
151 | 177 | finally: |
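A sketch of the `prometheus_client` idioms the queue metrics move to: labelled gauges evaluated lazily at scrape time via `set_function`, and an in-flight gauge managed by the `track_inprogress` context manager:

    from prometheus_client import Gauge

    number_queued = Gauge(
        "example_queue_number_queued",
        "Items waiting in the queue across all keys",
        labelnames=("name",),
    )
    number_in_flight = Gauge(
        "example_queue_in_flight",
        "Items being processed or waiting",
        labelnames=("name",),
    )

    _next_values = {"key1": [1, 2], "key2": [3]}

    # set_function makes the gauge value be computed at scrape time.
    number_queued.labels("my_queue").set_function(
        lambda: sum(len(v) for v in _next_values.values())
    )

    # track_inprogress increments the gauge for the duration of the block.
    with number_in_flight.labels("my_queue").track_inprogress():
        pass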
15 | 15 | |
16 | 16 | import enum |
17 | 17 | import threading |
18 | from typing import ( | |
19 | Callable, | |
20 | Generic, | |
21 | Iterable, | |
22 | MutableMapping, | |
23 | Optional, | |
24 | TypeVar, | |
25 | Union, | |
26 | cast, | |
27 | ) | |
18 | from typing import Callable, Generic, Iterable, MutableMapping, Optional, TypeVar, Union | |
28 | 19 | |
29 | 20 | from prometheus_client import Gauge |
30 | 21 | |
90 | 81 | # _pending_deferred_cache maps from the key value to a `CacheEntry` object. |
91 | 82 | self._pending_deferred_cache = ( |
92 | 83 | cache_type() |
93 | ) # type: MutableMapping[KT, CacheEntry] | |
84 | ) # type: Union[TreeCache, MutableMapping[KT, CacheEntry]] | |
94 | 85 | |
95 | 86 | def metrics_cb(): |
96 | 87 | cache_pending_metric.labels(name).set(len(self._pending_deferred_cache)) |
286 | 277 | self.cache.set(key, value, callbacks=callbacks) |
287 | 278 | |
288 | 279 | def invalidate(self, key): |
280 | """Delete a key, or tree of entries | |
281 | ||
282 | If the cache is backed by a regular dict, then "key" must be of | |
283 | the right type for this cache | |
284 | ||
285 | If the cache is backed by a TreeCache, then "key" must be a tuple, but | |
286 | may be of lower cardinality than the TreeCache - in which case the whole | |
287 | subtree is deleted. | |
288 | """ | |
289 | 289 | self.check_thread() |
290 | self.cache.pop(key, None) | |
290 | self.cache.del_multi(key) | |
291 | 291 | |
292 | 292 | # if we have a pending lookup for this key, remove it from the |
293 | 293 | # _pending_deferred_cache, which will (a) stop it being returned |
298 | 298 | # run the invalidation callbacks now, rather than waiting for the |
299 | 299 | # deferred to resolve. |
300 | 300 | if entry: |
301 | entry.invalidate() | |
302 | ||
303 | def invalidate_many(self, key: KT): | |
304 | self.check_thread() | |
305 | if not isinstance(key, tuple): | |
306 | raise TypeError("The cache key must be a tuple not %r" % (type(key),)) | |
307 | key = cast(KT, key) | |
308 | self.cache.del_multi(key) | |
309 | ||
310 | # if we have a pending lookup for this key, remove it from the | |
311 | # _pending_deferred_cache, as above | |
312 | entry_dict = self._pending_deferred_cache.pop(key, None) | |
313 | if entry_dict is not None: | |
314 | for entry in iterate_tree_cache_entry(entry_dict): | |
301 | # _pending_deferred_cache.pop should either return a CacheEntry, or, in the | |
302 | # case of a TreeCache, a dict of keys to cache entries. Either way calling | |
303 | # iterate_tree_cache_entry on it will do the right thing. | |
304 | for entry in iterate_tree_cache_entry(entry): | |
315 | 305 | entry.invalidate() |
316 | 306 | |
317 | 307 | def invalidate_all(self): |
47 | 47 | class _CachedFunction(Generic[F]): |
48 | 48 | invalidate = None # type: Any |
49 | 49 | invalidate_all = None # type: Any |
50 | invalidate_many = None # type: Any | |
51 | 50 | prefill = None # type: Any |
52 | 51 | cache = None # type: Any |
53 | 52 | num_args = None # type: Any |
261 | 260 | ): |
262 | 261 | super().__init__(orig, num_args=num_args, cache_context=cache_context) |
263 | 262 | |
263 | if tree and self.num_args < 2: | |
264 | raise RuntimeError( | |
265 | "tree=True is nonsensical for cached functions with a single parameter" | |
266 | ) | |
267 | ||
264 | 268 | self.max_entries = max_entries |
265 | 269 | self.tree = tree |
266 | 270 | self.iterable = iterable |
301 | 305 | wrapped = cast(_CachedFunction, _wrapped) |
302 | 306 | |
303 | 307 | if self.num_args == 1: |
308 | assert not self.tree | |
304 | 309 | wrapped.invalidate = lambda key: cache.invalidate(key[0]) |
305 | 310 | wrapped.prefill = lambda key, val: cache.prefill(key[0], val) |
306 | 311 | else: |
307 | 312 | wrapped.invalidate = cache.invalidate |
308 | wrapped.invalidate_many = cache.invalidate_many | |
309 | 313 | wrapped.prefill = cache.prefill |
310 | 314 | |
311 | 315 | wrapped.invalidate_all = cache.invalidate_all |
151 | 151 | """ |
152 | 152 | Least-recently-used cache, supporting prometheus metrics and invalidation callbacks. |
153 | 153 | |
154 | Supports del_multi only if cache_type=TreeCache | |
155 | 154 | If cache_type=TreeCache, all keys must be tuples. |
156 | 155 | """ |
157 | 156 | |
392 | 391 | |
393 | 392 | @synchronized |
394 | 393 | def cache_del_multi(key: KT) -> None: |
394 | """Delete an entry, or tree of entries | |
395 | ||
396 | If the LruCache is backed by a regular dict, then "key" must be of | |
397 | the right type for this cache | |
398 | ||
399 | If the LruCache is backed by a TreeCache, then "key" must be a tuple, but | |
400 | may be of lower cardinality than the TreeCache - in which case the whole | |
401 | subtree is deleted. | |
395 | 402 | """ |
396 | This will only work if constructed with cache_type=TreeCache | |
397 | """ | |
398 | popped = cache.pop(key) | |
403 | popped = cache.pop(key, None) | |
399 | 404 | if popped is None: |
400 | 405 | return |
401 | 406 | # for each deleted node, we now need to remove it from the linked list |
429 | 434 | self.set = cache_set |
430 | 435 | self.setdefault = cache_set_default |
431 | 436 | self.pop = cache_pop |
437 | self.del_multi = cache_del_multi | |
432 | 438 | # `invalidate` is exposed for consistency with DeferredCache, so that it can be |
433 | 439 | # invalidated by the cache invalidation replication stream. |
434 | self.invalidate = cache_pop | |
435 | if cache_type is TreeCache: | |
436 | self.del_multi = cache_del_multi | |
440 | self.invalidate = cache_del_multi | |
437 | 441 | self.len = synchronized(cache_len) |
438 | 442 | self.contains = cache_contains |
439 | 443 | self.clear = cache_clear |
88 | 88 | value. If the key is partial, the TreeCacheNode corresponding to the part |
89 | 89 | of the tree that was removed. |
90 | 90 | """ |
91 | if not isinstance(key, tuple): | |
92 | raise TypeError("The cache key must be a tuple not %r" % (type(key),)) | |
93 | ||
91 | 94 | # a list of the nodes we have touched on the way down the tree |
92 | 95 | nodes = [] |
93 | 96 |
96 | 96 | write("started %s(%s)" % (app, ",".join(config_files)), colour=GREEN) |
97 | 97 | return True |
98 | 98 | except subprocess.CalledProcessError as e: |
99 | write( | |
100 | "error starting %s(%s) (exit code: %d); see above for logs" | |
101 | % (app, ",".join(config_files), e.returncode), | |
102 | colour=RED, | |
103 | ) | |
99 | err = "%s(%s) failed to start (exit code: %d). Check the Synapse logfile" % ( | |
100 | app, | |
101 | ",".join(config_files), | |
102 | e.returncode, | |
103 | ) | |
104 | if daemonize: | |
105 | err += ", or run synctl with --no-daemonize" | |
106 | err += "." | |
107 | write(err, colour=RED, stream=sys.stderr) | |
104 | 108 | return False |
105 | 109 | |
106 | 110 |
73 | 73 | |
74 | 74 | config = { |
75 | 75 | "tls_certificate_path": os.path.join(config_dir, "cert.pem"), |
76 | "tls_fingerprints": [], | |
77 | 76 | } |
78 | 77 | |
79 | 78 | t = TestConfig() |
80 | 79 | t.read_config(config, config_dir_path="", data_dir_path="") |
81 | t.read_certificate_from_disk(require_cert_and_key=False) | |
80 | t.read_tls_certificate() | |
82 | 81 | |
83 | 82 | warnings = self.flushWarnings() |
84 | 83 | self.assertEqual(len(warnings), 1) |
11 | 11 | # See the License for the specific language governing permissions and |
12 | 12 | # limitations under the License. |
13 | 13 | import time |
14 | from typing import Dict, List | |
14 | 15 | from unittest.mock import Mock |
15 | 16 | |
16 | 17 | import attr |
20 | 21 | from nacl.signing import SigningKey |
21 | 22 | from signedjson.key import encode_verify_key_base64, get_verify_key |
22 | 23 | |
23 | from twisted.internet import defer | |
24 | 24 | from twisted.internet.defer import Deferred, ensureDeferred |
25 | 25 | |
26 | 26 | from synapse.api.errors import SynapseError |
91 | 91 | # deferred completes. |
92 | 92 | first_lookup_deferred = Deferred() |
93 | 93 | |
94 | async def first_lookup_fetch(keys_to_fetch): | |
95 | self.assertEquals(current_context().request.id, "context_11") | |
96 | self.assertEqual(keys_to_fetch, {"server10": {get_key_id(key1): 0}}) | |
94 | async def first_lookup_fetch( | |
95 | server_name: str, key_ids: List[str], minimum_valid_until_ts: int | |
96 | ) -> Dict[str, FetchKeyResult]: | |
97 | # self.assertEquals(current_context().request.id, "context_11") | |
98 | self.assertEqual(server_name, "server10") | |
99 | self.assertEqual(key_ids, [get_key_id(key1)]) | |
100 | self.assertEqual(minimum_valid_until_ts, 0) | |
97 | 101 | |
98 | 102 | await make_deferred_yieldable(first_lookup_deferred) |
99 | return { | |
100 | "server10": { | |
101 | get_key_id(key1): FetchKeyResult(get_verify_key(key1), 100) | |
102 | } | |
103 | } | |
103 | return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 100)} | |
104 | 104 | |
105 | 105 | mock_fetcher.get_keys.side_effect = first_lookup_fetch |
106 | 106 | |
107 | 107 | async def first_lookup(): |
108 | 108 | with LoggingContext("context_11", request=FakeRequest("context_11")): |
109 | 109 | res_deferreds = kr.verify_json_objects_for_server( |
110 | [("server10", json1, 0, "test10"), ("server11", {}, 0, "test11")] | |
110 | [("server10", json1, 0), ("server11", {}, 0)] | |
111 | 111 | ) |
112 | 112 | |
113 | 113 | # the unsigned json should be rejected pretty quickly |
125 | 125 | |
126 | 126 | d0 = ensureDeferred(first_lookup()) |
127 | 127 | |
128 | self.pump() | |
129 | ||
128 | 130 | mock_fetcher.get_keys.assert_called_once() |
129 | 131 | |
130 | 132 | # a second request for a server with outstanding requests |
131 | 133 | # should block rather than start a second call |
132 | 134 | |
133 | async def second_lookup_fetch(keys_to_fetch): | |
134 | self.assertEquals(current_context().request.id, "context_12") | |
135 | return { | |
136 | "server10": { | |
137 | get_key_id(key1): FetchKeyResult(get_verify_key(key1), 100) | |
138 | } | |
139 | } | |
135 | async def second_lookup_fetch( | |
136 | server_name: str, key_ids: List[str], minimum_valid_until_ts: int | |
137 | ) -> Dict[str, FetchKeyResult]: | |
138 | # self.assertEquals(current_context().request.id, "context_12") | |
139 | return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 100)} | |
140 | 140 | |
141 | 141 | mock_fetcher.get_keys.reset_mock() |
142 | 142 | mock_fetcher.get_keys.side_effect = second_lookup_fetch |
145 | 145 | async def second_lookup(): |
146 | 146 | with LoggingContext("context_12", request=FakeRequest("context_12")): |
147 | 147 | res_deferreds_2 = kr.verify_json_objects_for_server( |
148 | [("server10", json1, 0, "test")] | |
148 | [ | |
149 | ( | |
150 | "server10", | |
151 | json1, | |
152 | 0, | |
153 | ) | |
154 | ] | |
149 | 155 | ) |
150 | 156 | res_deferreds_2[0].addBoth(self.check_context, None) |
151 | 157 | second_lookup_state[0] = 1 |
182 | 188 | signedjson.sign.sign_json(json1, "server9", key1) |
183 | 189 | |
184 | 190 | # should fail immediately on an unsigned object |
185 | d = _verify_json_for_server(kr, "server9", {}, 0, "test unsigned") | |
191 | d = kr.verify_json_for_server("server9", {}, 0) | |
186 | 192 | self.get_failure(d, SynapseError) |
187 | 193 | |
188 | 194 | # should succeed on a signed object |
189 | d = _verify_json_for_server(kr, "server9", json1, 500, "test signed") | |
195 | d = kr.verify_json_for_server("server9", json1, 500) | |
190 | 196 | # self.assertFalse(d.called) |
191 | 197 | self.get_success(d) |
192 | 198 | |
213 | 219 | signedjson.sign.sign_json(json1, "server9", key1) |
214 | 220 | |
215 | 221 | # should fail immediately on an unsigned object |
216 | d = _verify_json_for_server(kr, "server9", {}, 0, "test unsigned") | |
222 | d = kr.verify_json_for_server("server9", {}, 0) | |
217 | 223 | self.get_failure(d, SynapseError) |
218 | 224 | |
219 | 225 | # should fail on a signed object with a non-zero minimum_valid_until_ms, |
220 | 226 | # as it tries to refetch the keys and fails. |
221 | d = _verify_json_for_server( | |
222 | kr, "server9", json1, 500, "test signed non-zero min" | |
223 | ) | |
227 | d = kr.verify_json_for_server("server9", json1, 500) | |
224 | 228 | self.get_failure(d, SynapseError) |
225 | 229 | |
226 | 230 | # We expect the keyring tried to refetch the key once. |
227 | 231 | mock_fetcher.get_keys.assert_called_once_with( |
228 | {"server9": {get_key_id(key1): 500}} | |
232 | "server9", [get_key_id(key1)], 500 | |
229 | 233 | ) |
230 | 234 | |
231 | 235 | # should succeed on a signed object with a 0 minimum_valid_until_ms |
232 | d = _verify_json_for_server( | |
233 | kr, "server9", json1, 0, "test signed with zero min" | |
236 | d = kr.verify_json_for_server( | |
237 | "server9", | |
238 | json1, | |
239 | 0, | |
234 | 240 | ) |
235 | 241 | self.get_success(d) |
236 | 242 | |
238 | 244 | """Two requests for the same key should be deduped.""" |
239 | 245 | key1 = signedjson.key.generate_signing_key(1) |
240 | 246 | |
241 | async def get_keys(keys_to_fetch): | |
247 | async def get_keys( | |
248 | server_name: str, key_ids: List[str], minimum_valid_until_ts: int | |
249 | ) -> Dict[str, FetchKeyResult]: | |
242 | 250 | # there should only be one request object (with the max validity) |
243 | self.assertEqual(keys_to_fetch, {"server1": {get_key_id(key1): 1500}}) | |
244 | ||
245 | return { | |
246 | "server1": { | |
247 | get_key_id(key1): FetchKeyResult(get_verify_key(key1), 1200) | |
248 | } | |
249 | } | |
251 | self.assertEqual(server_name, "server1") | |
252 | self.assertEqual(key_ids, [get_key_id(key1)]) | |
253 | self.assertEqual(minimum_valid_until_ts, 1500) | |
254 | ||
255 | return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 1200)} | |
250 | 256 | |
251 | 257 | mock_fetcher = Mock() |
252 | 258 | mock_fetcher.get_keys = Mock(side_effect=get_keys) |
258 | 264 | # the first request should succeed; the second should fail because the key |
259 | 265 | # has expired |
260 | 266 | results = kr.verify_json_objects_for_server( |
261 | [("server1", json1, 500, "test1"), ("server1", json1, 1500, "test2")] | |
267 | [ | |
268 | ( | |
269 | "server1", | |
270 | json1, | |
271 | 500, | |
272 | ), | |
273 | ("server1", json1, 1500), | |
274 | ] | |
262 | 275 | ) |
263 | 276 | self.assertEqual(len(results), 2) |
264 | 277 | self.get_success(results[0]) |
273 | 286 | """If the first fetcher cannot provide a recent enough key, we fall back""" |
274 | 287 | key1 = signedjson.key.generate_signing_key(1) |
275 | 288 | |
276 | async def get_keys1(keys_to_fetch): | |
277 | self.assertEqual(keys_to_fetch, {"server1": {get_key_id(key1): 1500}}) | |
278 | return { | |
279 | "server1": {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 800)} | |
280 | } | |
281 | ||
282 | async def get_keys2(keys_to_fetch): | |
283 | self.assertEqual(keys_to_fetch, {"server1": {get_key_id(key1): 1500}}) | |
284 | return { | |
285 | "server1": { | |
286 | get_key_id(key1): FetchKeyResult(get_verify_key(key1), 1200) | |
287 | } | |
288 | } | |
289 | async def get_keys1( | |
290 | server_name: str, key_ids: List[str], minimum_valid_until_ts: int | |
291 | ) -> Dict[str, FetchKeyResult]: | |
292 | self.assertEqual(server_name, "server1") | |
293 | self.assertEqual(key_ids, [get_key_id(key1)]) | |
294 | self.assertEqual(minimum_valid_until_ts, 1500) | |
295 | return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 800)} | |
296 | ||
297 | async def get_keys2( | |
298 | server_name: str, key_ids: List[str], minimum_valid_until_ts: int | |
299 | ) -> Dict[str, FetchKeyResult]: | |
300 | self.assertEqual(server_name, "server1") | |
301 | self.assertEqual(key_ids, [get_key_id(key1)]) | |
302 | self.assertEqual(minimum_valid_until_ts, 1500) | |
303 | return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 1200)} | |
289 | 304 | |
290 | 305 | mock_fetcher1 = Mock() |
291 | 306 | mock_fetcher1.get_keys = Mock(side_effect=get_keys1) |
297 | 312 | signedjson.sign.sign_json(json1, "server1", key1) |
298 | 313 | |
299 | 314 | results = kr.verify_json_objects_for_server( |
300 | [("server1", json1, 1200, "test1"), ("server1", json1, 1500, "test2")] | |
315 | [ | |
316 | ( | |
317 | "server1", | |
318 | json1, | |
319 | 1200, | |
320 | ), | |
321 | ( | |
322 | "server1", | |
323 | json1, | |
324 | 1500, | |
325 | ), | |
326 | ] | |
301 | 327 | ) |
302 | 328 | self.assertEqual(len(results), 2) |
303 | 329 | self.get_success(results[0]) |
348 | 374 | |
349 | 375 | self.http_client.get_json.side_effect = get_json |
350 | 376 | |
351 | keys_to_fetch = {SERVER_NAME: {"key1": 0}} | |
352 | keys = self.get_success(fetcher.get_keys(keys_to_fetch)) | |
353 | k = keys[SERVER_NAME][testverifykey_id] | |
377 | keys = self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0)) | |
378 | k = keys[testverifykey_id] | |
354 | 379 | self.assertEqual(k.valid_until_ts, VALID_UNTIL_TS) |
355 | 380 | self.assertEqual(k.verify_key, testverifykey) |
356 | 381 | self.assertEqual(k.verify_key.alg, "ed25519") |
377 | 402 | # change the server name: the result should be ignored |
378 | 403 | response["server_name"] = "OTHER_SERVER" |
379 | 404 | |
380 | keys = self.get_success(fetcher.get_keys(keys_to_fetch)) | |
405 | keys = self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0)) | |
381 | 406 | self.assertEqual(keys, {}) |
382 | 407 | |
383 | 408 | |
464 | 489 | |
465 | 490 | self.expect_outgoing_key_query(SERVER_NAME, "key1", response) |
466 | 491 | |
467 | keys_to_fetch = {SERVER_NAME: {"key1": 0}} | |
468 | keys = self.get_success(fetcher.get_keys(keys_to_fetch)) | |
469 | self.assertIn(SERVER_NAME, keys) | |
470 | k = keys[SERVER_NAME][testverifykey_id] | |
492 | keys = self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0)) | |
493 | self.assertIn(testverifykey_id, keys) | |
494 | k = keys[testverifykey_id] | |
471 | 495 | self.assertEqual(k.valid_until_ts, VALID_UNTIL_TS) |
472 | 496 | self.assertEqual(k.verify_key, testverifykey) |
473 | 497 | self.assertEqual(k.verify_key.alg, "ed25519") |
514 | 538 | |
515 | 539 | self.expect_outgoing_key_query(SERVER_NAME, "key1", response) |
516 | 540 | |
517 | keys_to_fetch = {SERVER_NAME: {"key1": 0}} | |
518 | keys = self.get_success(fetcher.get_keys(keys_to_fetch)) | |
519 | self.assertIn(SERVER_NAME, keys) | |
520 | k = keys[SERVER_NAME][testverifykey_id] | |
541 | keys = self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0)) | |
542 | self.assertIn(testverifykey_id, keys) | |
543 | k = keys[testverifykey_id] | |
521 | 544 | self.assertEqual(k.valid_until_ts, VALID_UNTIL_TS) |
522 | 545 | self.assertEqual(k.verify_key, testverifykey) |
523 | 546 | self.assertEqual(k.verify_key.alg, "ed25519") |
558 | 581 | |
559 | 582 | def get_key_from_perspectives(response): |
560 | 583 | fetcher = PerspectivesKeyFetcher(self.hs) |
561 | keys_to_fetch = {SERVER_NAME: {"key1": 0}} | |
562 | 584 | self.expect_outgoing_key_query(SERVER_NAME, "key1", response) |
563 | return self.get_success(fetcher.get_keys(keys_to_fetch)) | |
585 | return self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0)) | |
564 | 586 | |
565 | 587 | # start with a valid response so we can check we are testing the right thing |
566 | 588 | response = build_response() |
567 | 589 | keys = get_key_from_perspectives(response) |
568 | k = keys[SERVER_NAME][testverifykey_id] | |
590 | k = keys[testverifykey_id] | |
569 | 591 | self.assertEqual(k.verify_key, testverifykey) |
570 | 592 | |
571 | 593 | # remove the perspectives server's signature |
584 | 606 | def get_key_id(key): |
585 | 607 | """Get the matrix ID tag for a given SigningKey or VerifyKey""" |
586 | 608 | return "%s:%s" % (key.alg, key.version) |
587 | ||
588 | ||
589 | @defer.inlineCallbacks | |
590 | def run_in_context(f, *args, **kwargs): | |
591 | with LoggingContext("testctx"): | |
592 | rv = yield f(*args, **kwargs) | |
593 | return rv | |
594 | ||
595 | ||
596 | def _verify_json_for_server(kr, *args): | |
597 | """thin wrapper around verify_json_for_server which makes sure it is wrapped | |
598 | with the patched defer.inlineCallbacks. | |
599 | """ | |
600 | ||
601 | @defer.inlineCallbacks | |
602 | def v(): | |
603 | rv1 = yield kr.verify_json_for_server(*args) | |
604 | return rv1 | |
605 | ||
606 | return run_in_context(v) |
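The test churn above tracks a reshaped fetcher interface: one server per `get_keys` call, returning a flat `key_id -> FetchKeyResult` mapping rather than a nested per-server dict. An illustrative stub, not Synapse's real classes:

    import asyncio
    from typing import Dict, List, NamedTuple

    class FetchKeyResult(NamedTuple):
        verify_key: str  # stand-in for a real VerifyKey object
        valid_until_ts: int

    async def get_keys(
        server_name: str, key_ids: List[str], minimum_valid_until_ts: int
    ) -> Dict[str, FetchKeyResult]:
        # Old shape: {server_name: {key_id: result}}; new shape is flat.
        return {key_id: FetchKeyResult("key-bytes", 1200) for key_id in key_ids}

    keys = asyncio.run(get_keys("server1", ["ed25519:1"], 0))
    assert keys["ed25519:1"].valid_until_ts == 1200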
56 | 56 | sender="@someone:anywhere", type="m.room.message", room_id="!foo:bar" |
57 | 57 | ) |
58 | 58 | self.mock_store.get_new_events_for_appservice.side_effect = [ |
59 | make_awaitable((0, [event])), | |
60 | 59 | make_awaitable((0, [])), |
60 | make_awaitable((1, [event])), | |
61 | 61 | ] |
62 | self.handler.notify_interested_services(RoomStreamToken(None, 0)) | |
62 | self.handler.notify_interested_services(RoomStreamToken(None, 1)) | |
63 | 63 | |
64 | 64 | self.mock_scheduler.submit_event_for_as.assert_called_once_with( |
65 | 65 | interested_service, event |
76 | 76 | self.mock_as_api.query_user.return_value = make_awaitable(True) |
77 | 77 | self.mock_store.get_new_events_for_appservice.side_effect = [ |
78 | 78 | make_awaitable((0, [event])), |
79 | make_awaitable((0, [])), | |
80 | 79 | ] |
81 | 80 | |
82 | 81 | self.handler.notify_interested_services(RoomStreamToken(None, 0)) |
94 | 93 | self.mock_as_api.query_user.return_value = make_awaitable(True) |
95 | 94 | self.mock_store.get_new_events_for_appservice.side_effect = [ |
96 | 95 | make_awaitable((0, [event])), |
97 | make_awaitable((0, [])), | |
98 | 96 | ] |
99 | 97 | |
100 | 98 | self.handler.notify_interested_services(RoomStreamToken(None, 0)) |
63 | 63 | user_tok=self.admin_user_tok, |
64 | 64 | ) |
65 | 65 | for _ in range(5): |
66 | self._create_event_and_report( | |
66 | self._create_event_and_report_without_parameters( | |
67 | 67 | room_id=self.room_id2, |
68 | 68 | user_tok=self.admin_user_tok, |
69 | 69 | ) |
373 | 373 | "POST", |
374 | 374 | "rooms/%s/report/%s" % (room_id, event_id), |
375 | 375 | json.dumps({"score": -100, "reason": "this makes me sad"}), |
376 | access_token=user_tok, | |
377 | ) | |
378 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
379 | ||
380 | def _create_event_and_report_without_parameters(self, room_id, user_tok): | |
381 | """Create and report an event, but omit reason and score""" | |
382 | resp = self.helper.send(room_id, tok=user_tok) | |
383 | event_id = resp["event_id"] | |
384 | ||
385 | channel = self.make_request( | |
386 | "POST", | |
387 | "rooms/%s/report/%s" % (room_id, event_id), | |
388 | json.dumps({}), | |
376 | 389 | access_token=user_tok, |
377 | 390 | ) |
378 | 391 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
14 | 14 | import json |
15 | 15 | import os |
16 | 16 | from binascii import unhexlify |
17 | ||
18 | from parameterized import parameterized | |
17 | 19 | |
18 | 20 | import synapse.rest.admin |
19 | 21 | from synapse.api.errors import Codes |
561 | 563 | ) |
562 | 564 | # Test that the file is deleted |
563 | 565 | self.assertFalse(os.path.exists(local_path)) |
566 | ||
567 | ||
568 | class QuarantineMediaByIDTestCase(unittest.HomeserverTestCase): | |
569 | ||
570 | servlets = [ | |
571 | synapse.rest.admin.register_servlets, | |
572 | synapse.rest.admin.register_servlets_for_media_repo, | |
573 | login.register_servlets, | |
574 | ] | |
575 | ||
576 | def prepare(self, reactor, clock, hs): | |
577 | media_repo = hs.get_media_repository_resource() | |
578 | self.store = hs.get_datastore() | |
579 | self.server_name = hs.hostname | |
580 | ||
581 | self.admin_user = self.register_user("admin", "pass", admin=True) | |
582 | self.admin_user_tok = self.login("admin", "pass") | |
583 | ||
584 | # Create media | |
585 | upload_resource = media_repo.children[b"upload"] | |
586 | # file size is 67 bytes | 
587 | image_data = unhexlify( | |
588 | b"89504e470d0a1a0a0000000d4948445200000001000000010806" | |
589 | b"0000001f15c4890000000a49444154789c63000100000500010d" | |
590 | b"0a2db40000000049454e44ae426082" | |
591 | ) | |
592 | ||
593 | # Upload some media into the room | |
594 | response = self.helper.upload_media( | |
595 | upload_resource, image_data, tok=self.admin_user_tok, expect_code=200 | |
596 | ) | |
597 | # Extract media ID from the response | |
598 | server_and_media_id = response["content_uri"][6:] # Cut off 'mxc://' | |
599 | self.media_id = server_and_media_id.split("/")[1] | |
600 | ||
601 | self.url = "/_synapse/admin/v1/media/%s/%s/%s" | |
602 | ||
603 | @parameterized.expand(["quarantine", "unquarantine"]) | |
604 | def test_no_auth(self, action: str): | |
605 | """ | |
606 | Try to protect media without authentication. | |
607 | """ | |
608 | ||
609 | channel = self.make_request( | |
610 | "POST", | |
611 | self.url % (action, self.server_name, self.media_id), | |
612 | b"{}", | |
613 | ) | |
614 | ||
615 | self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"]) | |
616 | self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"]) | |
617 | ||
618 | @parameterized.expand(["quarantine", "unquarantine"]) | |
619 | def test_requester_is_no_admin(self, action: str): | |
620 | """ | |
621 | If the user is not a server admin, an error is returned. | |
622 | """ | |
623 | self.other_user = self.register_user("user", "pass") | |
624 | self.other_user_token = self.login("user", "pass") | |
625 | ||
626 | channel = self.make_request( | |
627 | "POST", | |
628 | self.url % (action, self.server_name, self.media_id), | |
629 | access_token=self.other_user_token, | |
630 | ) | |
631 | ||
632 | self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"]) | |
633 | self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"]) | |
634 | ||
635 | def test_quarantine_media(self): | |
636 | """ | |
637 | Tests that quarantining media and removing it from quarantine succeeds | 
638 | """ | |
639 | ||
640 | media_info = self.get_success(self.store.get_local_media(self.media_id)) | |
641 | self.assertFalse(media_info["quarantined_by"]) | |
642 | ||
643 | # quarantining | |
644 | channel = self.make_request( | |
645 | "POST", | |
646 | self.url % ("quarantine", self.server_name, self.media_id), | |
647 | access_token=self.admin_user_tok, | |
648 | ) | |
649 | ||
650 | self.assertEqual(200, channel.code, msg=channel.json_body) | |
651 | self.assertFalse(channel.json_body) | |
652 | ||
653 | media_info = self.get_success(self.store.get_local_media(self.media_id)) | |
654 | self.assertTrue(media_info["quarantined_by"]) | |
655 | ||
656 | # remove from quarantine | |
657 | channel = self.make_request( | |
658 | "POST", | |
659 | self.url % ("unquarantine", self.server_name, self.media_id), | |
660 | access_token=self.admin_user_tok, | |
661 | ) | |
662 | ||
663 | self.assertEqual(200, channel.code, msg=channel.json_body) | |
664 | self.assertFalse(channel.json_body) | |
665 | ||
666 | media_info = self.get_success(self.store.get_local_media(self.media_id)) | |
667 | self.assertFalse(media_info["quarantined_by"]) | |
668 | ||
669 | def test_quarantine_protected_media(self): | |
670 | """ | |
671 | Tests that quarantining protected media fails | 
672 | """ | |
673 | ||
674 | # protect | |
675 | self.get_success(self.store.mark_local_media_as_safe(self.media_id, safe=True)) | |
676 | ||
677 | # verify protection | |
678 | media_info = self.get_success(self.store.get_local_media(self.media_id)) | |
679 | self.assertTrue(media_info["safe_from_quarantine"]) | |
680 | ||
681 | # quarantining | |
682 | channel = self.make_request( | |
683 | "POST", | |
684 | self.url % ("quarantine", self.server_name, self.media_id), | |
685 | access_token=self.admin_user_tok, | |
686 | ) | |
687 | ||
688 | self.assertEqual(200, channel.code, msg=channel.json_body) | |
689 | self.assertFalse(channel.json_body) | |
690 | ||
691 | # verify that it is not in quarantine | 
692 | media_info = self.get_success(self.store.get_local_media(self.media_id)) | |
693 | self.assertFalse(media_info["quarantined_by"]) | |
694 | ||
695 | ||
696 | class ProtectMediaByIDTestCase(unittest.HomeserverTestCase): | |
697 | ||
698 | servlets = [ | |
699 | synapse.rest.admin.register_servlets, | |
700 | synapse.rest.admin.register_servlets_for_media_repo, | |
701 | login.register_servlets, | |
702 | ] | |
703 | ||
704 | def prepare(self, reactor, clock, hs): | |
705 | media_repo = hs.get_media_repository_resource() | |
706 | self.store = hs.get_datastore() | |
707 | ||
708 | self.admin_user = self.register_user("admin", "pass", admin=True) | |
709 | self.admin_user_tok = self.login("admin", "pass") | |
710 | ||
711 | # Create media | |
712 | upload_resource = media_repo.children[b"upload"] | |
713 | # file size is 67 bytes | 
714 | image_data = unhexlify( | |
715 | b"89504e470d0a1a0a0000000d4948445200000001000000010806" | |
716 | b"0000001f15c4890000000a49444154789c63000100000500010d" | |
717 | b"0a2db40000000049454e44ae426082" | |
718 | ) | |
719 | ||
720 | # Upload some media into the room | |
721 | response = self.helper.upload_media( | |
722 | upload_resource, image_data, tok=self.admin_user_tok, expect_code=200 | |
723 | ) | |
724 | # Extract media ID from the response | |
725 | server_and_media_id = response["content_uri"][6:] # Cut off 'mxc://' | |
726 | self.media_id = server_and_media_id.split("/")[1] | |
727 | ||
728 | self.url = "/_synapse/admin/v1/media/%s/%s" | |
729 | ||
730 | @parameterized.expand(["protect", "unprotect"]) | |
731 | def test_no_auth(self, action: str): | |
732 | """ | |
733 | Try to protect media without authentication. | |
734 | """ | |
735 | ||
736 | channel = self.make_request("POST", self.url % (action, self.media_id), b"{}") | |
737 | ||
738 | self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"]) | |
739 | self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"]) | |
740 | ||
741 | @parameterized.expand(["protect", "unprotect"]) | |
742 | def test_requester_is_no_admin(self, action: str): | |
743 | """ | |
744 | If the user is not a server admin, an error is returned. | |
745 | """ | |
746 | self.other_user = self.register_user("user", "pass") | |
747 | self.other_user_token = self.login("user", "pass") | |
748 | ||
749 | channel = self.make_request( | |
750 | "POST", | |
751 | self.url % (action, self.media_id), | |
752 | access_token=self.other_user_token, | |
753 | ) | |
754 | ||
755 | self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"]) | |
756 | self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"]) | |
757 | ||
758 | def test_protect_media(self): | |
759 | """ | |
760 | Tests that protecting and unprotecting media succeeds | 
761 | """ | |
762 | ||
763 | media_info = self.get_success(self.store.get_local_media(self.media_id)) | |
764 | self.assertFalse(media_info["safe_from_quarantine"]) | |
765 | ||
766 | # protect | |
767 | channel = self.make_request( | |
768 | "POST", | |
769 | self.url % ("protect", self.media_id), | |
770 | access_token=self.admin_user_tok, | |
771 | ) | |
772 | ||
773 | self.assertEqual(200, channel.code, msg=channel.json_body) | |
774 | self.assertFalse(channel.json_body) | |
775 | ||
776 | media_info = self.get_success(self.store.get_local_media(self.media_id)) | |
777 | self.assertTrue(media_info["safe_from_quarantine"]) | |
778 | ||
779 | # unprotect | |
780 | channel = self.make_request( | |
781 | "POST", | |
782 | self.url % ("unprotect", self.media_id), | |
783 | access_token=self.admin_user_tok, | |
784 | ) | |
785 | ||
786 | self.assertEqual(200, channel.code, msg=channel.json_body) | |
787 | self.assertFalse(channel.json_body) | |
788 | ||
789 | media_info = self.get_success(self.store.get_local_media(self.media_id)) | |
790 | self.assertFalse(media_info["safe_from_quarantine"]) |
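
The protect/unprotect and quarantine endpoints exercised above are plain authenticated POSTs. A minimal sketch of driving them from a script, assuming the `requests` library and a homeserver at `https://localhost:8008` (both illustrative, not part of the patch):

    import requests  # assumed HTTP client, not used by the patch itself

    BASE = "https://localhost:8008"  # hypothetical homeserver URL
    ADMIN_TOKEN = "<admin access token>"  # placeholder

    def set_protection(media_id: str, protect: bool) -> None:
        # POST /_synapse/admin/v1/media/{protect,unprotect}/<media_id>,
        # mirroring self.url in ProtectMediaByIDTestCase
        action = "protect" if protect else "unprotect"
        resp = requests.post(
            f"{BASE}/_synapse/admin/v1/media/{action}/{media_id}",
            headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
        )
        resp.raise_for_status()  # the tests expect a 200 with an empty JSON body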
1879 | 1879 | """Calls the endpoint under test. returns the json response object.""" |
1880 | 1880 | channel = self.make_request( |
1881 | 1881 | "GET", |
1882 | "/_matrix/client/unstable/org.matrix.msc2432/rooms/%s/aliases" | |
1883 | % (self.room_id,), | |
1882 | "/_matrix/client/r0/rooms/%s/aliases" % (self.room_id,), | |
1884 | 1883 | access_token=access_token, |
1885 | 1884 | ) |
1886 | 1885 | self.assertEqual(channel.code, expected_code, channel.result) |
0 | # Copyright 2021 Callum Brown | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. | |
13 | ||
14 | import json | |
15 | ||
16 | import synapse.rest.admin | |
17 | from synapse.rest.client.v1 import login, room | |
18 | from synapse.rest.client.v2_alpha import report_event | |
19 | ||
20 | from tests import unittest | |
21 | ||
22 | ||
23 | class ReportEventTestCase(unittest.HomeserverTestCase): | |
24 | servlets = [ | |
25 | synapse.rest.admin.register_servlets, | |
26 | login.register_servlets, | |
27 | room.register_servlets, | |
28 | report_event.register_servlets, | |
29 | ] | |
30 | ||
31 | def prepare(self, reactor, clock, hs): | |
32 | self.admin_user = self.register_user("admin", "pass", admin=True) | |
33 | self.admin_user_tok = self.login("admin", "pass") | |
34 | self.other_user = self.register_user("user", "pass") | |
35 | self.other_user_tok = self.login("user", "pass") | |
36 | ||
37 | self.room_id = self.helper.create_room_as( | |
38 | self.other_user, tok=self.other_user_tok, is_public=True | |
39 | ) | |
40 | self.helper.join(self.room_id, user=self.admin_user, tok=self.admin_user_tok) | |
41 | resp = self.helper.send(self.room_id, tok=self.admin_user_tok) | |
42 | self.event_id = resp["event_id"] | |
43 | self.report_path = "rooms/{}/report/{}".format(self.room_id, self.event_id) | |
44 | ||
45 | def test_reason_str_and_score_int(self): | |
46 | data = {"reason": "this makes me sad", "score": -100} | |
47 | self._assert_status(200, data) | |
48 | ||
49 | def test_no_reason(self): | |
50 | data = {"score": 0} | |
51 | self._assert_status(200, data) | |
52 | ||
53 | def test_no_score(self): | |
54 | data = {"reason": "this makes me sad"} | |
55 | self._assert_status(200, data) | |
56 | ||
57 | def test_no_reason_and_no_score(self): | |
58 | data = {} | |
59 | self._assert_status(200, data) | |
60 | ||
61 | def test_reason_int_and_score_str(self): | |
62 | data = {"reason": 10, "score": "string"} | |
63 | self._assert_status(400, data) | |
64 | ||
65 | def test_reason_zero_and_score_blank(self): | |
66 | data = {"reason": 0, "score": ""} | |
67 | self._assert_status(400, data) | |
68 | ||
69 | def test_reason_and_score_null(self): | |
70 | data = {"reason": None, "score": None} | |
71 | self._assert_status(400, data) | |
72 | ||
73 | def _assert_status(self, response_status, data): | |
74 | channel = self.make_request( | |
75 | "POST", | |
76 | self.report_path, | |
77 | json.dumps(data), | |
78 | access_token=self.other_user_tok, | |
79 | ) | |
80 | self.assertEqual( | |
81 | response_status, int(channel.result["code"]), msg=channel.result["body"] | |
82 | ) |
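
Per the cases above, `reason` (string) and `score` (integer) are both optional, but when present they must have the right types. A client-side sketch, again assuming `requests` and an illustrative server URL:

    import requests

    BASE = "https://localhost:8008"  # hypothetical homeserver URL

    def report_event(token: str, room_id: str, event_id: str) -> None:
        # POST /_matrix/client/r0/rooms/{roomId}/report/{eventId}
        resp = requests.post(
            f"{BASE}/_matrix/client/r0/rooms/{room_id}/report/{event_id}",
            headers={"Authorization": f"Bearer {token}"},
            json={"reason": "this makes me sad", "score": -100},
        )
        # a non-string reason or non-integer score would yield a 400 instead
        resp.raise_for_status()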
207 | 207 | keyid = "ed25519:%s" % (testkey.version,) |
208 | 208 | |
209 | 209 | fetcher = PerspectivesKeyFetcher(self.hs2) |
210 | d = fetcher.get_keys({"targetserver": {keyid: 1000}}) | |
210 | d = fetcher.get_keys("targetserver", [keyid], 1000) | |
211 | 211 | res = self.get_success(d) |
212 | self.assertIn("targetserver", res) | |
213 | keyres = res["targetserver"][keyid] | |
212 | self.assertIn(keyid, res) | |
213 | keyres = res[keyid] | |
214 | 214 | assert isinstance(keyres, FetchKeyResult) |
215 | 215 | self.assertEqual( |
216 | 216 | signedjson.key.encode_verify_key_base64(keyres.verify_key), |
229 | 229 | keyid = "ed25519:%s" % (testkey.version,) |
230 | 230 | |
231 | 231 | fetcher = PerspectivesKeyFetcher(self.hs2) |
232 | d = fetcher.get_keys({self.hs.hostname: {keyid: 1000}}) | |
232 | d = fetcher.get_keys(self.hs.hostname, [keyid], 1000) | |
233 | 233 | res = self.get_success(d) |
234 | self.assertIn(self.hs.hostname, res) | |
235 | keyres = res[self.hs.hostname][keyid] | |
234 | self.assertIn(keyid, res) | |
235 | keyres = res[keyid] | |
236 | 236 | assert isinstance(keyres, FetchKeyResult) |
237 | 237 | self.assertEqual( |
238 | 238 | signedjson.key.encode_verify_key_base64(keyres.verify_key), |
246 | 246 | keyid = "ed25519:%s" % (self.hs_signing_key.version,) |
247 | 247 | |
248 | 248 | fetcher = PerspectivesKeyFetcher(self.hs2) |
249 | d = fetcher.get_keys({self.hs.hostname: {keyid: 1000}}) | |
249 | d = fetcher.get_keys(self.hs.hostname, [keyid], 1000) | |
250 | 250 | res = self.get_success(d) |
251 | self.assertIn(self.hs.hostname, res) | |
252 | keyres = res[self.hs.hostname][keyid] | |
251 | self.assertIn(keyid, res) | |
252 | keyres = res[keyid] | |
253 | 253 | assert isinstance(keyres, FetchKeyResult) |
254 | 254 | self.assertEqual( |
255 | 255 | signedjson.key.encode_verify_key_base64(keyres.verify_key), |
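
The hunks above track a signature change in `PerspectivesKeyFetcher.get_keys`: it now takes a server name, a list of key IDs, and a minimum-valid-until timestamp, and returns a mapping keyed directly by key ID rather than a nested per-server dict. A sketch of the new calling convention, using the same names as the tests:

    # old: fetcher.get_keys({"targetserver": {keyid: 1000}})["targetserver"][keyid]
    # new: flat result keyed by key ID
    fetcher = PerspectivesKeyFetcher(self.hs2)
    res = self.get_success(fetcher.get_keys("targetserver", [keyid], 1000))
    keyres = res[keyid]  # a FetchKeyResult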
0 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. |
0 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. |
0 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. | |
13 | import json | |
14 | ||
15 | from synapse.logging.context import LoggingContext | |
16 | from synapse.storage.databases.main.events_worker import EventsWorkerStore | |
17 | ||
18 | from tests import unittest | |
19 | ||
20 | ||
21 | class HaveSeenEventsTestCase(unittest.HomeserverTestCase): | |
22 | def prepare(self, reactor, clock, hs): | |
23 | self.store: EventsWorkerStore = hs.get_datastore() | |
24 | ||
25 | # insert some test data | |
26 | for rid in ("room1", "room2"): | |
27 | self.get_success( | |
28 | self.store.db_pool.simple_insert( | |
29 | "rooms", | |
30 | {"room_id": rid, "room_version": 4}, | |
31 | ) | |
32 | ) | |
33 | ||
34 | for idx, (rid, eid) in enumerate( | |
35 | ( | |
36 | ("room1", "event10"), | |
37 | ("room1", "event11"), | |
38 | ("room1", "event12"), | |
39 | ("room2", "event20"), | |
40 | ) | |
41 | ): | |
42 | self.get_success( | |
43 | self.store.db_pool.simple_insert( | |
44 | "events", | |
45 | { | |
46 | "event_id": eid, | |
47 | "room_id": rid, | |
48 | "topological_ordering": idx, | |
49 | "stream_ordering": idx, | |
50 | "type": "test", | |
51 | "processed": True, | |
52 | "outlier": False, | |
53 | }, | |
54 | ) | |
55 | ) | |
56 | self.get_success( | |
57 | self.store.db_pool.simple_insert( | |
58 | "event_json", | |
59 | { | |
60 | "event_id": eid, | |
61 | "room_id": rid, | |
62 | "json": json.dumps({"type": "test", "room_id": rid}), | |
63 | "internal_metadata": "{}", | |
64 | "format_version": 3, | |
65 | }, | |
66 | ) | |
67 | ) | |
68 | ||
69 | def test_simple(self): | |
70 | with LoggingContext(name="test") as ctx: | |
71 | res = self.get_success( | |
72 | self.store.have_seen_events("room1", ["event10", "event19"]) | |
73 | ) | |
74 | self.assertEquals(res, {"event10"}) | |
75 | ||
76 | # that should result in a single db query | |
77 | self.assertEquals(ctx.get_resource_usage().db_txn_count, 1) | |
78 | ||
79 | # a second lookup of the same events should cause no queries | |
80 | with LoggingContext(name="test") as ctx: | |
81 | res = self.get_success( | |
82 | self.store.have_seen_events("room1", ["event10", "event19"]) | |
83 | ) | |
84 | self.assertEquals(res, {"event10"}) | |
85 | self.assertEquals(ctx.get_resource_usage().db_txn_count, 0) | |
86 | ||
87 | def test_query_via_event_cache(self): | |
88 | # fetch an event into the event cache | |
89 | self.get_success(self.store.get_event("event10")) | |
90 | ||
91 | # looking it up should now cause no db hits | |
92 | with LoggingContext(name="test") as ctx: | |
93 | res = self.get_success(self.store.have_seen_events("room1", ["event10"])) | |
94 | self.assertEquals(res, {"event10"}) | |
95 | self.assertEquals(ctx.get_resource_usage().db_txn_count, 0) |
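
These tests pin down the contract of `have_seen_events`: given a room ID and a list of event IDs, it returns the subset of those IDs the store knows about, answering from the event cache where possible and touching the database at most once otherwise. A hypothetical extra case in the same style, inside the test case above:

    # LoggingContext tracks per-context resource usage, including db txn count
    with LoggingContext(name="test") as ctx:
        res = self.get_success(self.store.have_seen_events("room1", ["event12"]))
        self.assertEquals(res, {"event12"})  # event12 was inserted in prepare()
        # at most one query for an event not yet cached
        self.assertLessEqual(ctx.get_resource_usage().db_txn_count, 1)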
621 | 621 | self.assertEquals(callcount2[0], 1) |
622 | 622 | |
623 | 623 | a.func2.invalidate(("foo",)) |
624 | self.assertEquals(a.func2.cache.cache.pop.call_count, 1) | |
624 | self.assertEquals(a.func2.cache.cache.del_multi.call_count, 1) | |
625 | 625 | |
626 | 626 | yield a.func2("foo") |
627 | 627 | a.func2.invalidate(("foo",)) |
628 | self.assertEquals(a.func2.cache.cache.pop.call_count, 2) | |
628 | self.assertEquals(a.func2.cache.cache.del_multi.call_count, 2) | |
629 | 629 | |
630 | 630 | self.assertEquals(callcount[0], 1) |
631 | 631 | self.assertEquals(callcount2[0], 2) |
632 | 632 | |
633 | 633 | a.func.invalidate(("foo",)) |
634 | self.assertEquals(a.func2.cache.cache.pop.call_count, 3) | |
634 | self.assertEquals(a.func2.cache.cache.del_multi.call_count, 3) | |
635 | 635 | yield a.func("foo") |
636 | 636 | |
637 | 637 | self.assertEquals(callcount[0], 2) |
13 | 13 | from twisted.internet import defer |
14 | 14 | |
15 | 15 | from synapse.logging.context import make_deferred_yieldable |
16 | from synapse.util.batching_queue import BatchingQueue | |
16 | from synapse.util.batching_queue import ( | |
17 | BatchingQueue, | |
18 | number_in_flight, | |
19 | number_of_keys, | |
20 | number_queued, | |
21 | ) | |
17 | 22 | |
18 | 23 | from tests.server import get_clock |
19 | 24 | from tests.unittest import TestCase |
22 | 27 | class BatchingQueueTestCase(TestCase): |
23 | 28 | def setUp(self): |
24 | 29 | self.clock, hs_clock = get_clock() |
30 | ||
31 | # We ensure that we remove any existing metrics for "test_queue". | |
32 | try: | |
33 | number_queued.remove("test_queue") | |
34 | number_of_keys.remove("test_queue") | |
35 | number_in_flight.remove("test_queue") | |
36 | except KeyError: | |
37 | pass | |
25 | 38 | |
26 | 39 | self._pending_calls = [] |
27 | 40 | self.queue = BatchingQueue("test_queue", hs_clock, self._process_queue) |
31 | 44 | self._pending_calls.append((values, d)) |
32 | 45 | return await make_deferred_yieldable(d) |
33 | 46 | |
47 | def _get_sample_with_name(self, metric, name) -> float: | |
48 | """For a prometheus metric get the value of the sample that has a | |
49 | matching "name" label. | |
50 | """ | |
51 | for sample in metric.collect()[0].samples: | |
52 | if sample.labels.get("name") == name: | |
53 | return sample.value | |
54 | ||
55 | self.fail("Found no matching sample") | |
56 | ||
57 | def _assert_metrics(self, queued, keys, in_flight): | |
58 | """Assert that the metrics are correct""" | |
59 | ||
60 | sample = self._get_sample_with_name(number_queued, self.queue._name) | |
61 | self.assertEqual( | |
62 | sample, | |
63 | queued, | |
64 | "number_queued", | |
65 | ) | |
66 | ||
67 | sample = self._get_sample_with_name(number_of_keys, self.queue._name) | |
68 | self.assertEqual(sample, keys, "number_of_keys") | |
69 | ||
70 | sample = self._get_sample_with_name(number_in_flight, self.queue._name) | |
71 | self.assertEqual( | |
72 | sample, | |
73 | in_flight, | |
74 | "number_in_flight", | |
75 | ) | |
76 | ||
34 | 77 | def test_simple(self): |
35 | 78 | """Tests the basic case of calling `add_to_queue` once and having |
36 | 79 | `_process_queue` return. |
40 | 83 | |
41 | 84 | queue_d = defer.ensureDeferred(self.queue.add_to_queue("foo")) |
42 | 85 | |
86 | self._assert_metrics(queued=1, keys=1, in_flight=1) | |
87 | ||
43 | 88 | # The queue should wait a reactor tick before calling the processing |
44 | 89 | # function. |
45 | 90 | self.assertFalse(self._pending_calls) |
51 | 96 | self.assertEqual(len(self._pending_calls), 1) |
52 | 97 | self.assertEqual(self._pending_calls[0][0], ["foo"]) |
53 | 98 | self.assertFalse(queue_d.called) |
99 | self._assert_metrics(queued=0, keys=0, in_flight=1) | |
54 | 100 | |
55 | 101 | # Return value of the `_process_queue` should be propagated back. |
56 | 102 | self._pending_calls.pop()[1].callback("bar") |
57 | 103 | |
58 | 104 | self.assertEqual(self.successResultOf(queue_d), "bar") |
105 | ||
106 | self._assert_metrics(queued=0, keys=0, in_flight=0) | |
59 | 107 | |
60 | 108 | def test_batching(self): |
61 | 109 | """Test that multiple calls at the same time get batched up into one |
67 | 115 | queue_d1 = defer.ensureDeferred(self.queue.add_to_queue("foo1")) |
68 | 116 | queue_d2 = defer.ensureDeferred(self.queue.add_to_queue("foo2")) |
69 | 117 | |
118 | self._assert_metrics(queued=2, keys=1, in_flight=2) | |
119 | ||
70 | 120 | self.clock.pump([0]) |
71 | 121 | |
72 | 122 | # We should see only *one* call to `_process_queue` |
74 | 124 | self.assertEqual(self._pending_calls[0][0], ["foo1", "foo2"]) |
75 | 125 | self.assertFalse(queue_d1.called) |
76 | 126 | self.assertFalse(queue_d2.called) |
127 | self._assert_metrics(queued=0, keys=0, in_flight=2) | |
77 | 128 | |
78 | 129 | # Return value of the `_process_queue` should be propagated back to both. |
79 | 130 | self._pending_calls.pop()[1].callback("bar") |
80 | 131 | |
81 | 132 | self.assertEqual(self.successResultOf(queue_d1), "bar") |
82 | 133 | self.assertEqual(self.successResultOf(queue_d2), "bar") |
134 | self._assert_metrics(queued=0, keys=0, in_flight=0) | |
83 | 135 | |
84 | 136 | def test_queuing(self): |
85 | 137 | """Test that we queue up requests while a `_process_queue` is being |
91 | 143 | queue_d1 = defer.ensureDeferred(self.queue.add_to_queue("foo1")) |
92 | 144 | self.clock.pump([0]) |
93 | 145 | |
146 | self.assertEqual(len(self._pending_calls), 1) | |
147 | ||
148 | # We queue up more work after the process function has been called, | |
149 | # testing that it gets queued up correctly behind the in-flight call. | |
94 | 150 | queue_d2 = defer.ensureDeferred(self.queue.add_to_queue("foo2")) |
151 | queue_d3 = defer.ensureDeferred(self.queue.add_to_queue("foo3")) | |
95 | 152 | |
96 | 153 | # We should see only *one* call to `_process_queue` |
97 | 154 | self.assertEqual(len(self._pending_calls), 1) |
98 | 155 | self.assertEqual(self._pending_calls[0][0], ["foo1"]) |
99 | 156 | self.assertFalse(queue_d1.called) |
100 | 157 | self.assertFalse(queue_d2.called) |
158 | self.assertFalse(queue_d3.called) | |
159 | self._assert_metrics(queued=2, keys=1, in_flight=3) | |
101 | 160 | |
102 | 161 | # Return value of the `_process_queue` should be propagated back to the |
103 | 162 | # first. |
105 | 164 | |
106 | 165 | self.assertEqual(self.successResultOf(queue_d1), "bar1") |
107 | 166 | self.assertFalse(queue_d2.called) |
167 | self.assertFalse(queue_d3.called) | |
168 | self._assert_metrics(queued=2, keys=1, in_flight=2) | |
108 | 169 | |
109 | 170 | # We should now see a second call to `_process_queue` |
110 | 171 | self.clock.pump([0]) |
111 | 172 | self.assertEqual(len(self._pending_calls), 1) |
112 | self.assertEqual(self._pending_calls[0][0], ["foo2"]) | |
113 | self.assertFalse(queue_d2.called) | |
173 | self.assertEqual(self._pending_calls[0][0], ["foo2", "foo3"]) | |
174 | self.assertFalse(queue_d2.called) | |
175 | self.assertFalse(queue_d3.called) | |
176 | self._assert_metrics(queued=0, keys=0, in_flight=2) | |
114 | 177 | |
115 | 178 | # Return value of the `_process_queue` should be propagated back to the |
116 | 179 | # second. |
117 | 180 | self._pending_calls.pop()[1].callback("bar2") |
118 | 181 | |
119 | 182 | self.assertEqual(self.successResultOf(queue_d2), "bar2") |
183 | self.assertEqual(self.successResultOf(queue_d3), "bar2") | |
184 | self._assert_metrics(queued=0, keys=0, in_flight=0) | |
120 | 185 | |
121 | 186 | def test_different_keys(self): |
122 | 187 | """Test that calls to different keys get processed in parallel.""" |
139 | 204 | self.assertFalse(queue_d1.called) |
140 | 205 | self.assertFalse(queue_d2.called) |
141 | 206 | self.assertFalse(queue_d3.called) |
207 | self._assert_metrics(queued=1, keys=1, in_flight=3) | |
142 | 208 | |
143 | 209 | # Return value of the `_process_queue` should be propagated back to the |
144 | 210 | # first. |
147 | 213 | self.assertEqual(self.successResultOf(queue_d1), "bar1") |
148 | 214 | self.assertFalse(queue_d2.called) |
149 | 215 | self.assertFalse(queue_d3.called) |
216 | self._assert_metrics(queued=1, keys=1, in_flight=2) | |
150 | 217 | |
151 | 218 | # Return value of the `_process_queue` should be propagated back to the |
152 | 219 | # second. |
160 | 227 | self.assertEqual(len(self._pending_calls), 1) |
161 | 228 | self.assertEqual(self._pending_calls[0][0], ["foo3"]) |
162 | 229 | self.assertFalse(queue_d3.called) |
230 | self._assert_metrics(queued=0, keys=0, in_flight=1) | |
163 | 231 | |
164 | 232 | # Return value of the `_process_queue` should be propagated back to the |
165 | 233 | # third deferred. |
166 | 234 | self._pending_calls.pop()[1].callback("bar4") |
167 | 235 | |
168 | 236 | self.assertEqual(self.successResultOf(queue_d3), "bar4") |
237 | self._assert_metrics(queued=0, keys=0, in_flight=0) |
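
Taken together, the new assertions document the queue's metrics contract: `number_queued` counts values waiting for a `_process_queue` call, `number_of_keys` counts keys with pending values, and `number_in_flight` counts callers still awaiting a result. A minimal usage sketch of the queue itself, under the same assumptions as the tests (the clock and processing callback are supplied by the caller):

    from synapse.util.batching_queue import BatchingQueue

    async def process(values):
        # receives the whole pending batch for a key, e.g. ["foo1", "foo2"]
        return "processed %d values" % (len(values),)

    queue = BatchingQueue("example_queue", clock, process)  # clock: a Synapse Clock

    async def caller():
        # concurrent callers for the same key may share one `process` call
        # and therefore receive the same result
        return await queue.add_to_queue("foo1")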