matrix-synapse / 7c66fe3

Update upstream source from tag 'upstream/1.36.0'
Update to upstream version '1.36.0' with Debian dir 8af317fdf161708fb514a7208659a47682d1077f

Andrej Shadura · 2 years ago

121 changed files with 4357 additions and 2622 deletions.
4040 - dockerhubuploadlatest:
4141 filters:
4242 branches:
43 only: master
43 only: [ master, main ]
4444
4545 commands:
4646 docker_prepare:
0 name: Deploy the documentation
1
2 on:
3 push:
4 branches:
5 - develop
6
7 workflow_dispatch:
8
9 jobs:
10 pages:
11 name: GitHub Pages
12 runs-on: ubuntu-latest
13 steps:
14 - uses: actions/checkout@v2
15
16 - name: Setup mdbook
17 uses: peaceiris/actions-mdbook@4b5ef36b314c2599664ca107bb8c02412548d79d # v1.1.14
18 with:
19 mdbook-version: '0.4.9'
20
21 - name: Build the documentation
22 run: mdbook build
23
24 - name: Deploy latest documentation
25 uses: peaceiris/actions-gh-pages@068dc23d9710f1ba62e86896f84735d869951305 # v3.8.0
26 with:
27 github_token: ${{ secrets.GITHUB_TOKEN }}
28 keep_files: true
29 publish_dir: ./book
30 destination_dir: ./develop
3333 if: ${{ github.base_ref == 'develop' || contains(github.base_ref, 'release-') }}
3434 runs-on: ubuntu-latest
3535 steps:
36 - uses: actions/checkout@v2
36 # Note: This and the script can be simplified once we drop Buildkite. See:
37 # https://github.com/actions/checkout/issues/266#issuecomment-638346893
38 # https://github.com/actions/checkout/issues/416
39 - uses: actions/checkout@v2
40 with:
41 ref: ${{ github.event.pull_request.head.sha }}
42 fetch-depth: 0
3743 - uses: actions/setup-python@v2
3844 - run: pip install tox
3945 - name: Patch Buildkite-specific test script
225231 - name: Run SyTest
226232 run: /bootstrap.sh synapse
227233 working-directory: /src
228 - name: Dump results.tap
234 - name: Summarise results.tap
229235 if: ${{ always() }}
230 run: cat /logs/results.tap
236 run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
231237 - name: Upload SyTest logs
232238 uses: actions/upload-artifact@v2
233239 if: ${{ always() }}
4545 /docs/build/
4646 /htmlcov
4747 /pip-wheel-metadata/
48
49 # docs
50 book/
0 Synapse 1.36.0 (2021-06-15)
1 ===========================
2
3 No significant changes.
4
5
6 Synapse 1.36.0rc2 (2021-06-11)
7 ==============================
8
9 Bugfixes
10 --------
11
12 - Fix a bug which caused presence updates to stop working some time after a restart, when using a presence writer worker. Broke in v1.33.0. ([\#10149](https://github.com/matrix-org/synapse/issues/10149))
13 - Fix a bug when using federation sender worker where it would send out more presence updates than necessary, leading to high resource usage. Broke in v1.33.0. ([\#10163](https://github.com/matrix-org/synapse/issues/10163))
14 - Fix a bug where Synapse could send the same presence update to a remote twice. ([\#10165](https://github.com/matrix-org/synapse/issues/10165))
15
16
17 Synapse 1.36.0rc1 (2021-06-08)
18 ==============================
19
20 Features
21 --------
22
23 - Add new endpoint `/_matrix/client/r0/rooms/{roomId}/aliases` from Client-Server API r0.6.1 (previously [MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432)). ([\#9224](https://github.com/matrix-org/synapse/issues/9224))
24 - Improve performance of incoming federation transactions in large rooms. ([\#9953](https://github.com/matrix-org/synapse/issues/9953), [\#9973](https://github.com/matrix-org/synapse/issues/9973))
25 - Rewrite logic around verifying JSON object and fetching server keys to be more performant and use less memory. ([\#10035](https://github.com/matrix-org/synapse/issues/10035))
26 - Add new admin APIs for unprotecting local media from quarantine. Contributed by @dklimpel. ([\#10040](https://github.com/matrix-org/synapse/issues/10040))
27 - Add new admin APIs to remove media by media ID from quarantine. Contributed by @dklimpel. ([\#10044](https://github.com/matrix-org/synapse/issues/10044))
28 - Make reason and score parameters optional for reporting content. Implements [MSC2414](https://github.com/matrix-org/matrix-doc/pull/2414). Contributed by Callum Brown. ([\#10077](https://github.com/matrix-org/synapse/issues/10077))
29 - Add support for routing more requests to workers. ([\#10084](https://github.com/matrix-org/synapse/issues/10084))
30 - Report OpenTracing spans for database activity. ([\#10113](https://github.com/matrix-org/synapse/issues/10113), [\#10136](https://github.com/matrix-org/synapse/issues/10136), [\#10141](https://github.com/matrix-org/synapse/issues/10141))
31 - Significantly reduce memory usage of joining large remote rooms. ([\#10117](https://github.com/matrix-org/synapse/issues/10117))
32
33
34 Bugfixes
35 --------
36
37 - Fixed a bug causing replication requests to fail when receiving a lot of events via federation. ([\#10082](https://github.com/matrix-org/synapse/issues/10082))
38 - Fix a bug in the `force_tracing_for_users` option introduced in Synapse v1.35 which meant that the OpenTracing spans produced were missing most tags. ([\#10092](https://github.com/matrix-org/synapse/issues/10092))
39 - Fixed a bug that could cause Synapse to stop notifying application services. Contributed by Willem Mulder. ([\#10107](https://github.com/matrix-org/synapse/issues/10107))
40 - Fix bug where the server would attempt to fetch the same history in the room from a remote server multiple times in parallel. ([\#10116](https://github.com/matrix-org/synapse/issues/10116))
41 - Fix a bug introduced in Synapse 1.33.0 which caused replication requests to fail when receiving a lot of very large events via federation. ([\#10118](https://github.com/matrix-org/synapse/issues/10118))
42 - Fix bug when using workers where pagination requests failed if a remote server returned zero events from `/backfill`. Introduced in 1.35.0. ([\#10133](https://github.com/matrix-org/synapse/issues/10133))
43
44
45 Improved Documentation
46 ----------------------
47
48 - Clarify security note regarding hosting Synapse on the same domain as other web applications. ([\#9221](https://github.com/matrix-org/synapse/issues/9221))
49 - Update CAPTCHA documentation to mention turning off the verify origin feature. Contributed by @aaronraimist. ([\#10046](https://github.com/matrix-org/synapse/issues/10046))
50 - Tweak wording of database recommendation in `INSTALL.md`. Contributed by @aaronraimist. ([\#10057](https://github.com/matrix-org/synapse/issues/10057))
51 - Add initial infrastructure for rendering Synapse documentation with mdbook. ([\#10086](https://github.com/matrix-org/synapse/issues/10086))
52 - Convert the remaining Admin API documentation files to markdown. ([\#10089](https://github.com/matrix-org/synapse/issues/10089))
53 - Make a link in docs use HTTPS. Contributed by @RhnSharma. ([\#10130](https://github.com/matrix-org/synapse/issues/10130))
54 - Fix broken link in Docker docs. ([\#10132](https://github.com/matrix-org/synapse/issues/10132))
55
56
57 Deprecations and Removals
58 -------------------------
59
60 - Remove the experimental `spaces_enabled` flag. The spaces features are always available now. ([\#10063](https://github.com/matrix-org/synapse/issues/10063))
61
62
63 Internal Changes
64 ----------------
65
66 - Tell CircleCI to build Docker images from `main` branch. ([\#9906](https://github.com/matrix-org/synapse/issues/9906))
67 - Simplify naming convention for release branches to only include the major and minor version numbers. ([\#10013](https://github.com/matrix-org/synapse/issues/10013))
68 - Add `parse_strings_from_args` for parsing an array from query parameters. ([\#10048](https://github.com/matrix-org/synapse/issues/10048), [\#10137](https://github.com/matrix-org/synapse/issues/10137))
69 - Remove some dead code regarding TLS certificate handling. ([\#10054](https://github.com/matrix-org/synapse/issues/10054))
70 - Remove redundant, unmaintained `convert_server_keys` script. ([\#10055](https://github.com/matrix-org/synapse/issues/10055))
71 - Improve the error message printed by synctl when synapse fails to start. ([\#10059](https://github.com/matrix-org/synapse/issues/10059))
72 - Fix GitHub Actions lint for newsfragments. ([\#10069](https://github.com/matrix-org/synapse/issues/10069))
73 - Update opentracing to inject the right context into the carrier. ([\#10074](https://github.com/matrix-org/synapse/issues/10074))
74 - Fix up `BatchingQueue` implementation. ([\#10078](https://github.com/matrix-org/synapse/issues/10078))
75 - Log method and path when dropping request due to size limit. ([\#10091](https://github.com/matrix-org/synapse/issues/10091))
76 - In Github Actions workflows, summarize the Sytest results in an easy-to-read format. ([\#10094](https://github.com/matrix-org/synapse/issues/10094))
77 - Make `/sync` do fewer state resolutions. ([\#10102](https://github.com/matrix-org/synapse/issues/10102))
78 - Add missing type hints to the admin API servlets. ([\#10105](https://github.com/matrix-org/synapse/issues/10105))
79 - Improve opentracing annotations for `Notifier`. ([\#10111](https://github.com/matrix-org/synapse/issues/10111))
80 - Enable Prometheus metrics for the jaeger client library. ([\#10112](https://github.com/matrix-org/synapse/issues/10112))
81 - Work to improve the responsiveness of `/sync` requests. ([\#10124](https://github.com/matrix-org/synapse/issues/10124))
82 - OpenTracing: use a consistent name for background processes. ([\#10135](https://github.com/matrix-org/synapse/issues/10135))
83
84
085 Synapse 1.35.1 (2021-06-03)
186 ===========================
287
398398
399399 ### Using PostgreSQL
400400
401 By default Synapse uses [SQLite](https://sqlite.org/) and in doing so trades performance for convenience.
402 SQLite is only recommended in Synapse for testing purposes or for servers with
403 very light workloads.
404
405 Almost all installations should opt to use [PostgreSQL](https://www.postgresql.org). Advantages include:
401 By default Synapse uses an [SQLite](https://sqlite.org/) database and in doing so trades
402 performance for convenience. Almost all installations should opt to use [PostgreSQL](https://www.postgresql.org)
403 instead. Advantages include:
406404
407405 - significant performance improvements due to the superior threading and
408406 caching model, smarter query optimiser
410408
411409 For information on how to install and use PostgreSQL in Synapse, please see
412410 [docs/postgres.md](docs/postgres.md)
411
412 SQLite is acceptable for testing purposes only and should not be used in
413 a production server: Synapse will perform poorly when using SQLite,
414 especially when participating in large rooms.
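
As a minimal sketch (the role, password and database names are placeholders you
would create yourself, as described in [docs/postgres.md](docs/postgres.md)),
the `database` section of `homeserver.yaml` for PostgreSQL looks something like:

```yaml
database:
  name: psycopg2          # use the PostgreSQL driver instead of sqlite3
  args:
    user: synapse_user
    password: secretpassword
    database: synapse
    host: localhost
    cp_min: 5             # connection-pool bounds
    cp_max: 10
```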
413415
414416 ### TLS certificates
415417
3939 exclude sytest-blacklist
4040 exclude test_postgresql.sh
4141
42 include book.toml
4243 include pyproject.toml
4344 recursive-include changelog.d *
4445
148148 automatically, please see `<docs/ACME.md>`_.
149149
150150
151 Security Note
151 Security note
152152 =============
153153
154 Matrix serves raw user generated data in some APIs - specifically the `content
155 repository endpoints <https://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid>`_.
156
157 Whilst we have tried to mitigate against possible XSS attacks (e.g.
158 https://github.com/matrix-org/synapse/pull/1021) we recommend running
159 matrix homeservers on a dedicated domain name, to limit any malicious user generated
160 content served to web browsers a matrix API from being able to attack webapps hosted
161 on the same domain. This is particularly true of sharing a matrix webclient and
162 server on the same domain.
163
164 See https://github.com/vector-im/riot-web/issues/1977 and
165 https://developer.github.com/changes/2014-04-25-user-content-security for more details.
154 Matrix serves raw, user-supplied data in some APIs -- specifically the `content
155 repository endpoints`_.
156
157 .. _content repository endpoints: https://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid
158
159 Whilst we make a reasonable effort to mitigate against XSS attacks (for
160 instance, by using `CSP`_), a Matrix homeserver should not be hosted on a
161 domain hosting other web applications. This especially applies to sharing
162 the domain with Matrix web clients and other sensitive applications like
163 webmail. See
164 https://developer.github.com/changes/2014-04-25-user-content-security for more
165 information.
166
167 .. _CSP: https://github.com/matrix-org/synapse/pull/1021
168
169 Ideally, the homeserver should not simply be on a different subdomain, but on
170 a completely different `registered domain`_ (also known as top-level site or
171 eTLD+1). This is because `some attacks`_ are still possible as long as the two
172 applications share the same registered domain.
173
174 .. _registered domain: https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-03#section-2.3
175
176 .. _some attacks: https://en.wikipedia.org/wiki/Session_fixation#Attacks_using_cross-subdomain_cookie
177
178 To illustrate this with an example, if your Element Web or other sensitive web
179 application is hosted on ``A.example1.com``, you should ideally host Synapse on
180 ``example2.com``. Some amount of protection is offered by hosting on
181 ``B.example1.com`` instead, so this is also acceptable in some scenarios.
182 However, you should *not* host your Synapse on ``A.example1.com``.
183
184 Note that all of the above refers exclusively to the domain used in Synapse's
185 ``public_baseurl`` setting. In particular, it has no bearing on the domain
186 mentioned in MXIDs hosted on that server.
187
188 Following this advice ensures that even if an XSS is found in Synapse, the
189 impact to other applications will be minimal.
166190
167191
168192 Upgrading an existing Synapse
0 # Documentation for possible options in this file is at
1 # https://rust-lang.github.io/mdBook/format/config.html
2 [book]
3 title = "Synapse"
4 authors = ["The Matrix.org Foundation C.I.C."]
5 language = "en"
6 multilingual = false
7
8 # The directory that documentation files are stored in
9 src = "docs"
10
11 [build]
12 # Prevent markdown pages from being automatically generated when they're
13 # linked to in SUMMARY.md
14 create-missing = false
15
16 [output.html]
17 # The URL visitors will be directed to when they try to edit a page
18 edit-url-template = "https://github.com/matrix-org/synapse/edit/develop/{path}"
19
20 # Remove the numbers that appear before each item in the sidebar, as they can
21 # get quite messy as we nest deeper
22 no-section-label = true
23
24 # The source code URL of the repository
25 git-repository-url = "https://github.com/matrix-org/synapse"
26
27 # The path that the docs are hosted on
28 site-url = "/synapse/"
29
30 # Additional HTML, JS, CSS that's injected into each page of the book.
31 # More information available in docs/website_files/README.md
32 additional-css = [
33 "docs/website_files/table-of-contents.css",
34 "docs/website_files/remove-nav-buttons.css",
35 "docs/website_files/indent-section-headers.css",
36 ]
37 additional-js = ["docs/website_files/table-of-contents.js"]
38 theme = "docs/website_files/theme"
225225 ## Using jemalloc
226226
227227 Jemalloc is embedded in the image and will be used instead of the default allocator.
228 You can read about jemalloc by reading the Synapse [README](../README.md).
228 You can read about jemalloc by reading the Synapse [README](../README.rst).
00 # Overview
1 Captcha can be enabled for this home server. This file explains how to do that.
2 The captcha mechanism used is Google's ReCaptcha. This requires API keys from Google.
1 A captcha can be enabled on your homeserver to help prevent bots from registering
2 accounts. Synapse currently uses Google's reCAPTCHA service which requires API keys
3 from Google.
34
4 ## Getting keys
5 ## Getting API keys
56
6 Requires a site/secret key pair from:
7
8 <https://developers.google.com/recaptcha/>
9
10 Must be a reCAPTCHA v2 key using the "I'm not a robot" Checkbox option
11
12 ## Setting ReCaptcha Keys
13
14 The keys are a config option on the home server config. If they are not
15 visible, you can generate them via `--generate-config`. Set the following value:
16
7 1. Create a new site at <https://www.google.com/recaptcha/admin/create>
8 1. Set the label to anything you want
9 1. Set the type to reCAPTCHA v2 using the "I'm not a robot" Checkbox option.
10 This is the only type of captcha that works with Synapse.
11 1. Add the public hostname for your server, as set in `public_baseurl`
12 in `homeserver.yaml`, to the list of authorized domains. If you have not set
13 `public_baseurl`, use `server_name`.
14 1. Agree to the terms of service and submit.
15 1. Copy your site key and secret key and add them to your `homeserver.yaml`
16 configuration file
17 ```
1718 recaptcha_public_key: YOUR_SITE_KEY
1819 recaptcha_private_key: YOUR_SECRET_KEY
19
20 In addition, you MUST enable captchas via:
21
20 ```
21 1. Enable the CAPTCHA for new registrations
22 ```
2223 enable_registration_captcha: true
24 ```
25 1. Go to the settings page for the CAPTCHA you just created
26 1. Uncheck the "Verify the origin of reCAPTCHA solutions" checkbox so that the
27 captcha can be displayed in any client. If you do not disable this option then you
28 must specify the domains of every client that is allowed to display the CAPTCHA.
2329
2430 ## Configuring IP used for auth
2531
26 The ReCaptcha API requires that the IP address of the user who solved the
27 captcha is sent. If the client is connecting through a proxy or load balancer,
32 The reCAPTCHA API requires that the IP address of the user who solved the
33 CAPTCHA is sent. If the client is connecting through a proxy or load balancer,
2834 it may be required to use the `X-Forwarded-For` (XFF) header instead of the origin
2935 IP address. This can be configured using the `x_forwarded` directive in the
30 listeners section of the homeserver.yaml configuration file.
36 listeners section of the `homeserver.yaml` configuration file.
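
For instance, a minimal sketch of such a listener (the port and resource names
here are illustrative) might look like:

```yaml
listeners:
  - port: 8008
    type: http
    tls: false
    x_forwarded: true   # trust the X-Forwarded-For header set by the proxy
    resources:
      - names: [client, federation]
```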
00 # Synapse Documentation
11
2 This directory contains documentation specific to the `synapse` homeserver.
2 **The documentation is currently hosted [here](https://matrix-org.github.io/synapse).**
3 Please update any links to point to the new website instead.
34
4 All matrix-generic documentation now lives in its own project, located at [matrix-org/matrix-doc](https://github.com/matrix-org/matrix-doc)
5 ## About
56
6 (Note: some items here may be moved to [matrix-org/matrix-doc](https://github.com/matrix-org/matrix-doc) at some point in the future.)
7 This directory currently holds a series of markdown files documenting how to install, use
8 and develop Synapse, the reference Matrix homeserver. The documentation is readable directly
9 from this repository, but it is recommended to instead browse through the
10 [website](https://matrix-org.github.io/synapse) for easier discoverability.
11
12 ## Adding to the documentation
13
14 Most of the documentation currently exists as top-level files; when organising them into
15 a structured website, these files were kept in place so that existing links would not break.
16 The rest of the documentation is stored in folders such as `setup`, `usage`, and
17 `development`. **All new documentation files should be placed in structured folders.** For example:
18
19 To create a new user-facing documentation page about a new Single Sign-On protocol named
20 "MyCoolProtocol", one should create a new file with a relevant name, such as "my_cool_protocol.md".
21 This file might fit into the documentation structure at:
22
23 - Usage
24 - Configuration
25 - User Authentication
26 - Single Sign-On
27 - **My Cool Protocol**
28
29 Given that, one would place the new file under
30 `usage/configuration/user_authentication/single_sign_on/my_cool_protocol.md`.
31
32 Note that the structure of the documentation (and thus the left sidebar on the website) is determined
33 by the list in [SUMMARY.md](SUMMARY.md). The final thing to do when adding a new page is to add a new
34 line linking to the new documentation file:
35
36 ```markdown
37 - [My Cool Protocol](usage/configuration/user_authentication/single_sign_on/my_cool_protocol.md)
38 ```
39
40 ## Building the documentation
41
42 The documentation is built with [mdbook](https://rust-lang.github.io/mdBook/), and the outline of the
43 documentation is determined by the structure of [SUMMARY.md](SUMMARY.md).
44
45 First, [get mdbook](https://github.com/rust-lang/mdBook#installation). Then, **from the root of the repository**,
46 build the documentation with:
47
48 ```sh
49 mdbook build
50 ```
51
52 The rendered contents will be output to a new `book/` directory at the root of the repository. You can
53 browse the book by opening `book/index.html` in a web browser.
54
55 You can also have mdbook host the docs on a local webserver with hot-reload functionality via:
56
57 ```sh
58 mdbook serve
59 ```
60
61 The URL at which the docs can be viewed will be logged.
62
63 ## Configuration and theming
64
65 The look and behaviour of the website is configured by the [book.toml](../book.toml) file
66 at the root of the repository. See
67 [mdbook's documentation on configuration](https://rust-lang.github.io/mdBook/format/config.html)
68 for available options.
69
70 The site can be themed and additionally extended with extra UI and features. See
71 [website_files/README.md](website_files/README.md) for details.
0 # Summary
1
2 # Introduction
3 - [Welcome and Overview](welcome_and_overview.md)
4
5 # Setup
6 - [Installation](setup/installation.md)
7 - [Using Postgres](postgres.md)
8 - [Configuring a Reverse Proxy](reverse_proxy.md)
9 - [Configuring a Turn Server](turn-howto.md)
10 - [Delegation](delegate.md)
11
12 # Upgrading
13 - [Upgrading between Synapse Versions](upgrading/README.md)
14 - [Upgrading from pre-Synapse 1.0](MSC1711_certificates_FAQ.md)
15
16 # Usage
17 - [Federation](federate.md)
18 - [Configuration](usage/configuration/README.md)
19 - [Homeserver Sample Config File](usage/configuration/homeserver_sample_config.md)
20 - [Logging Sample Config File](usage/configuration/logging_sample_config.md)
21 - [Structured Logging](structured_logging.md)
22 - [User Authentication](usage/configuration/user_authentication/README.md)
23 - [Single Sign-On]()
24 - [OpenID Connect](openid.md)
25 - [SAML]()
26 - [CAS]()
27 - [SSO Mapping Providers](sso_mapping_providers.md)
28 - [Password Auth Providers](password_auth_providers.md)
29 - [JSON Web Tokens](jwt.md)
30 - [Registration Captcha](CAPTCHA_SETUP.md)
31 - [Application Services](application_services.md)
32 - [Server Notices](server_notices.md)
33 - [Consent Tracking](consent_tracking.md)
34 - [URL Previews](url_previews.md)
35 - [User Directory](user_directory.md)
36 - [Message Retention Policies](message_retention_policies.md)
37 - [Pluggable Modules]()
38 - [Third Party Rules]()
39 - [Spam Checker](spam_checker.md)
40 - [Presence Router](presence_router_module.md)
41 - [Media Storage Providers]()
42 - [Workers](workers.md)
43 - [Using `synctl` with Workers](synctl_workers.md)
44 - [Systemd](systemd-with-workers/README.md)
45 - [Administration](usage/administration/README.md)
46 - [Admin API](usage/administration/admin_api/README.md)
47 - [Account Validity](admin_api/account_validity.md)
48 - [Delete Group](admin_api/delete_group.md)
49 - [Event Reports](admin_api/event_reports.md)
50 - [Media](admin_api/media_admin_api.md)
51 - [Purge History](admin_api/purge_history_api.md)
52 - [Purge Rooms](admin_api/purge_room.md)
53 - [Register Users](admin_api/register_api.md)
54 - [Manipulate Room Membership](admin_api/room_membership.md)
55 - [Rooms](admin_api/rooms.md)
56 - [Server Notices](admin_api/server_notices.md)
57 - [Shutdown Room](admin_api/shutdown_room.md)
58 - [Statistics](admin_api/statistics.md)
59 - [Users](admin_api/user_admin_api.md)
60 - [Server Version](admin_api/version_api.md)
61 - [Manhole](manhole.md)
62 - [Monitoring](metrics-howto.md)
63 - [Scripts]()
64
65 # Development
66 - [Contributing Guide](development/contributing_guide.md)
67 - [Code Style](code_style.md)
68 - [Git Usage](dev/git.md)
69 - [Testing]()
70 - [OpenTracing](opentracing.md)
71 - [Synapse Architecture]()
72 - [Log Contexts](log_contexts.md)
73 - [Replication](replication.md)
74 - [TCP Replication](tcp_replication.md)
75 - [Internal Documentation](development/internal_documentation/README.md)
76 - [Single Sign-On]()
77 - [SAML](dev/saml.md)
78 - [CAS](dev/cas.md)
79 - [State Resolution]()
80 - [The Auth Chain Difference Algorithm](auth_chain_difference_algorithm.md)
81 - [Media Repository](media_repository.md)
82 - [Room and User Statistics](room_and_user_statistics.md)
83 - [Scripts]()
84
85 # Other
86 - [Dependency Deprecation Policy](deprecation_policy.md)
00 Admin APIs
11 ==========
22
3 **Note**: The latest documentation can be viewed `here <https://matrix-org.github.io/synapse>`_.
4 See `docs/README.md <../docs/README.md>`_ for more information.
5
6 **Please update links to point to the website instead.** Existing files in this directory
7 are preserved to maintain historical links, but may be moved in the future.
8
39 This directory includes documentation for the various synapse specific admin
4 APIs available.
10 APIs available. Updates to the existing Admin API documentation should still
11 be made to these files, but any new documentation files should instead be placed under
12 `docs/usage/administration/admin_api <../docs/usage/administration/admin_api>`_.
513
6 Authenticating as a server admin
7 --------------------------------
8
9 Many of the API calls in the admin api will require an `access_token` for a
10 server admin. (Note that a server admin is distinct from a room admin.)
11
12 A user can be marked as a server admin by updating the database directly, e.g.:
13
14 .. code-block:: sql
15
16 UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
17
18 A new server admin user can also be created using the
19 ``register_new_matrix_user`` script.
20
21 Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
22
23 Once you have your `access_token`, to include it in a request, the best option is to add the token to a request header:
24
25 ``curl --header "Authorization: Bearer <access_token>" <the_rest_of_your_API_request>``
26
27 Fore more details, please refer to the complete `matrix spec documentation <https://matrix.org/docs/spec/client_server/r0.5.0#using-access-tokens>`_.
0 # Account validity API
1
2 This API allows a server administrator to manage the validity of an account. To
3 use it, you must enable the account validity feature (under
4 `account_validity`) in Synapse's configuration.
5
6 ## Renew account
7
8 This API extends the validity of an account by as much time as configured in the
9 `period` parameter from the `account_validity` configuration.
10
11 The API is:
12
13 ```
14 POST /_synapse/admin/v1/account_validity/validity
15 ```
16
17 with the following body:
18
19 ```json
20 {
21 "user_id": "<user ID for the account to renew>",
22 "expiration_ts": 0,
23 "enable_renewal_emails": true
24 }
25 ```
26
27
28 `expiration_ts` is an optional parameter and overrides the expiration date,
29 which otherwise defaults to now + validity period.
30
31 `enable_renewal_emails` is also an optional parameter and enables/disables
32 sending renewal emails to the user. Defaults to true.
33
34 The API returns with the new expiration date for this account, as a timestamp in
35 milliseconds since epoch:
36
37 ```json
38 {
39 "expiration_ts": 0
40 }
41 ```
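
As a usage sketch (the homeserver URL and admin `access_token` below are
placeholders; see the Admin API notes on authentication):

```python
import requests  # third-party HTTP client, used here for illustration

base = "https://matrix.example.com"  # your homeserver's base URL
headers = {"Authorization": "Bearer <access_token>"}

resp = requests.post(
    f"{base}/_synapse/admin/v1/account_validity/validity",
    headers=headers,
    json={"user_id": "@user:example.com", "enable_renewal_emails": True},
)
print(resp.json())  # e.g. {"expiration_ts": 1623754800000}
```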
docs/admin_api/account_validity.rst (+0, −42; file deleted)
0 Account validity API
1 ====================
2
3 This API allows a server administrator to manage the validity of an account. To
4 use it, you must enable the account validity feature (under
5 ``account_validity``) in Synapse's configuration.
6
7 Renew account
8 -------------
9
10 This API extends the validity of an account by as much time as configured in the
11 ``period`` parameter from the ``account_validity`` configuration.
12
13 The API is::
14
15 POST /_synapse/admin/v1/account_validity/validity
16
17 with the following body:
18
19 .. code:: json
20
21 {
22 "user_id": "<user ID for the account to renew>",
23 "expiration_ts": 0,
24 "enable_renewal_emails": true
25 }
26
27
28 ``expiration_ts`` is an optional parameter and overrides the expiration date,
29 which otherwise defaults to now + validity period.
30
31 ``enable_renewal_emails`` is also an optional parameter and enables/disables
32 sending renewal emails to the user. Defaults to true.
33
34 The API returns with the new expiration date for this account, as a timestamp in
35 milliseconds since epoch:
36
37 .. code:: json
38
39 {
40 "expiration_ts": 0
41 }
1010 ```
1111
1212 To use it, you will need to authenticate by providing an `access_token` for a
13 server admin: see [README.rst](README.rst).
13 server admin: see [Admin API](../../usage/administration/admin_api).
66 GET /_synapse/admin/v1/event_reports?from=0&limit=10
77 ```
88 To use it, you will need to authenticate by providing an `access_token` for a
9 server admin: see [README.rst](README.rst).
9 server admin: see [Admin API](../../usage/administration/admin_api).
1010
1111 It returns a JSON body like the following:
1212
7474 * `name`: string - The name of the room.
7575 * `event_id`: string - The ID of the reported event.
7676 * `user_id`: string - This is the user who reported the event and wrote the reason.
77 * `reason`: string - Comment made by the `user_id` in this report. May be blank.
77 * `reason`: string - Comment made by the `user_id` in this report. May be blank or `null`.
7878 * `score`: integer - Content is reported based upon a negative score, where -100 is
79 "most offensive" and 0 is "inoffensive".
79 "most offensive" and 0 is "inoffensive". May be `null`.
8080 * `sender`: string - This is the ID of the user who sent the original message/event that
8181 was reported.
8282 * `canonical_alias`: string - The canonical alias of the room. `null` if the room does not
9494 GET /_synapse/admin/v1/event_reports/<report_id>
9595 ```
9696 To use it, you will need to authenticate by providing an `access_token` for a
97 server admin: see [README.rst](README.rst).
97 server admin: see [Admin API](../../usage/administration/admin_api).
9898
9999 It returns a JSON body like the following:
100100
33 * [List all media uploaded by a user](#list-all-media-uploaded-by-a-user)
44 - [Quarantine media](#quarantine-media)
55 * [Quarantining media by ID](#quarantining-media-by-id)
6 * [Remove media from quarantine by ID](#remove-media-from-quarantine-by-id)
67 * [Quarantining media in a room](#quarantining-media-in-a-room)
78 * [Quarantining all media of a user](#quarantining-all-media-of-a-user)
89 * [Protecting media from being quarantined](#protecting-media-from-being-quarantined)
10 * [Unprotecting media from being quarantined](#unprotecting-media-from-being-quarantined)
911 - [Delete local media](#delete-local-media)
1012 * [Delete a specific local media](#delete-a-specific-local-media)
1113 * [Delete local media by date or size](#delete-local-media-by-date-or-size)
2527 GET /_synapse/admin/v1/room/<room_id>/media
2628 ```
2729 To use it, you will need to authenticate by providing an `access_token` for a
28 server admin: see [README.rst](README.rst).
30 server admin: see [Admin API](../../usage/administration/admin_api).
2931
3032 The API returns a JSON body like the following:
3133 ```json
7577 {}
7678 ```
7779
80 ## Remove media from quarantine by ID
81
82 This API removes a single piece of local or remote media from quarantine.
83
84 Request:
85
86 ```
87 POST /_synapse/admin/v1/media/unquarantine/<server_name>/<media_id>
88
89 {}
90 ```
91
92 Where `server_name` is in the form of `example.org`, and `media_id` is in the
93 form of `abcdefg12345...`.
94
95 Response:
96
97 ```json
98 {}
99 ```
100
78101 ## Quarantining media in a room
79102
80103 This API quarantines all local and remote media in a room.
146169
147170 ```
148171 POST /_synapse/admin/v1/media/protect/<media_id>
172
173 {}
174 ```
175
176 Where `media_id` is in the form of `abcdefg12345...`.
177
178 Response:
179
180 ```json
181 {}
182 ```
183
184 ## Unprotecting media from being quarantined
185
186 This API reverts the protection of a piece of media.
187
188 Request:
189
190 ```
191 POST /_synapse/admin/v1/media/unprotect/<media_id>
149192
150193 {}
151194 ```
267310 * `deleted`: integer - The number of media items successfully deleted
268311
269312 To use it, you will need to authenticate by providing an `access_token` for a
270 server admin: see [README.rst](README.rst).
313 server admin: see [Admin API](../../usage/administration/admin_api).
271314
272315 If the user re-requests purged remote media, synapse will re-request the media
273316 from the originating server.
0 # Purge History API
1
2 The purge history API allows server admins to purge historic events from their
3 database, reclaiming disk space.
4
5 Depending on the amount of history being purged a call to the API may take
6 several minutes or longer. During this period users will not be able to
7 paginate further back in the room from the point being purged from.
8
9 Note that Synapse requires at least one message in each room, so it will never
10 delete the last message in a room.
11
12 The API is:
13
14 ```
15 POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]
16 ```
17
18 To use it, you will need to authenticate by providing an `access_token` for a
19 server admin: [Admin API](../../usage/administration/admin_api)
20
21 By default, events sent by local users are not deleted, as they may represent
22 the only copies of this content in existence. (Events sent by remote users are
23 deleted.)
24
25 Room state data (such as joins, leaves, topic) is always preserved.
26
27 To delete local message events as well, set `delete_local_events` in the body:
28
29 ```json
30 {
31 "delete_local_events": true
32 }
33 ```
34
35 The caller must specify the point in the room to purge up to. This can be
36 specified by including an event_id in the URI, or by setting a
37 `purge_up_to_event_id` or `purge_up_to_ts` in the request body. If an event
38 id is given, that event (and others at the same graph depth) will be retained.
39 If `purge_up_to_ts` is given, it should be a timestamp since the unix epoch,
40 in milliseconds.
41
42 The API starts the purge running, and returns immediately with a JSON body with
43 a purge id:
44
45 ```json
46 {
47 "purge_id": "<opaque id>"
48 }
49 ```
50
51 ## Purge status query
52
53 It is possible to poll for updates on recent purges with a second API:
54
55 ```
56 GET /_synapse/admin/v1/purge_history_status/<purge_id>
57 ```
58
59 Again, you will need to authenticate by providing an `access_token` for a
60 server admin.
61
62 This API returns a JSON body like the following:
63
64 ```json
65 {
66 "status": "active"
67 }
68 ```
69
70 The status will be one of `active`, `complete`, or `failed`.
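
Putting the two calls together, a sketch of starting a purge and polling until
it finishes (the URL, token and room ID are placeholders):

```python
import time
from urllib.parse import quote

import requests  # third-party HTTP client, used here for illustration

base = "https://matrix.example.com"  # your homeserver's base URL
headers = {"Authorization": "Bearer <access_token>"}
room_id = quote("!room:example.com", safe="")  # room IDs must be URL-encoded

# Start the purge, keeping everything from 2021-01-01 onwards.
resp = requests.post(
    f"{base}/_synapse/admin/v1/purge_history/{room_id}",
    headers=headers,
    json={"purge_up_to_ts": 1609459200000},
)
purge_id = resp.json()["purge_id"]

# Poll the status endpoint until the purge is no longer active.
while True:
    status = requests.get(
        f"{base}/_synapse/admin/v1/purge_history_status/{purge_id}",
        headers=headers,
    ).json()["status"]
    if status != "active":
        break
    time.sleep(5)

print(status)  # "complete" or "failed"
```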
71
72 ## Reclaim disk space (Postgres)
73
74 To reclaim the disk space and return it to the operating system, you need to run
75 `VACUUM FULL;` on the database.
76
77 <https://www.postgresql.org/docs/current/sql-vacuum.html>
docs/admin_api/purge_history_api.rst (+0, −77; file deleted)
0 Purge History API
1 =================
2
3 The purge history API allows server admins to purge historic events from their
4 database, reclaiming disk space.
5
6 Depending on the amount of history being purged a call to the API may take
7 several minutes or longer. During this period users will not be able to
8 paginate further back in the room from the point being purged from.
9
10 Note that Synapse requires at least one message in each room, so it will never
11 delete the last message in a room.
12
13 The API is:
14
15 ``POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]``
16
17 To use it, you will need to authenticate by providing an ``access_token`` for a
18 server admin: see `README.rst <README.rst>`_.
19
20 By default, events sent by local users are not deleted, as they may represent
21 the only copies of this content in existence. (Events sent by remote users are
22 deleted.)
23
24 Room state data (such as joins, leaves, topic) is always preserved.
25
26 To delete local message events as well, set ``delete_local_events`` in the body:
27
28 .. code:: json
29
30 {
31 "delete_local_events": true
32 }
33
34 The caller must specify the point in the room to purge up to. This can be
35 specified by including an event_id in the URI, or by setting a
36 ``purge_up_to_event_id`` or ``purge_up_to_ts`` in the request body. If an event
37 id is given, that event (and others at the same graph depth) will be retained.
38 If ``purge_up_to_ts`` is given, it should be a timestamp since the unix epoch,
39 in milliseconds.
40
41 The API starts the purge running, and returns immediately with a JSON body with
42 a purge id:
43
44 .. code:: json
45
46 {
47 "purge_id": "<opaque id>"
48 }
49
50 Purge status query
51 ------------------
52
53 It is possible to poll for updates on recent purges with a second API;
54
55 ``GET /_synapse/admin/v1/purge_history_status/<purge_id>``
56
57 Again, you will need to authenticate by providing an ``access_token`` for a
58 server admin.
59
60 This API returns a JSON body like the following:
61
62 .. code:: json
63
64 {
65 "status": "active"
66 }
67
68 The status will be one of ``active``, ``complete``, or ``failed``.
69
70 Reclaim disk space (Postgres)
71 -----------------------------
72
73 To reclaim the disk space and return it to the operating system, you need to run
74 `VACUUM FULL;` on the database.
75
76 https://www.postgresql.org/docs/current/sql-vacuum.html
0 # Shared-Secret Registration
1
2 This API allows for the creation of users in an administrative and
3 non-interactive way. This is generally used for bootstrapping a Synapse
4 instance with administrator accounts.
5
6 To authenticate yourself to the server, you will need both the shared secret
7 (`registration_shared_secret` in the homeserver configuration), and a
8 one-time nonce. If the registration shared secret is not configured, this API
9 is not enabled.
10
11 To fetch the nonce, you need to request one from the API:
12
13 ```
14 > GET /_synapse/admin/v1/register
15
16 < {"nonce": "thisisanonce"}
17 ```
18
19 Once you have the nonce, you can make a `POST` to the same URL with a JSON
20 body containing the nonce, username, password, whether they are an admin
21 (optional, False by default), and an HMAC digest of the content. You can also
22 set the displayname (optional, `username` by default).
23
24 As an example:
25
26 ```
27 > POST /_synapse/admin/v1/register
28 > {
29 "nonce": "thisisanonce",
30 "username": "pepper_roni",
31 "displayname": "Pepper Roni",
32 "password": "pizza",
33 "admin": true,
34 "mac": "mac_digest_here"
35 }
36
37 < {
38 "access_token": "token_here",
39 "user_id": "@pepper_roni:localhost",
40 "home_server": "test",
41 "device_id": "device_id_here"
42 }
43 ```
44
45 The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being
46 the shared secret and the content being the nonce, user, password, either the
47 string "admin" or "notadmin", and optionally the user_type,
48 each separated by NULs. For an example of generation in Python:
49
50 ```python
51 import hmac, hashlib
52
53 def generate_mac(nonce, user, password, admin=False, user_type=None):
54
55 mac = hmac.new(
56         key=shared_secret.encode('utf8'),  # HMAC keys must be bytes
57 digestmod=hashlib.sha1,
58 )
59
60 mac.update(nonce.encode('utf8'))
61 mac.update(b"\x00")
62 mac.update(user.encode('utf8'))
63 mac.update(b"\x00")
64 mac.update(password.encode('utf8'))
65 mac.update(b"\x00")
66 mac.update(b"admin" if admin else b"notadmin")
67 if user_type:
68 mac.update(b"\x00")
69 mac.update(user_type.encode('utf8'))
70
71 return mac.hexdigest()
72 ```
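
A sketch of the full flow using the `generate_mac` helper above (the homeserver
URL is a placeholder, and `shared_secret` must match
`registration_shared_secret` in your config):

```python
import requests  # third-party HTTP client, used here for illustration

base = "https://matrix.example.com"  # your homeserver's base URL
shared_secret = "<registration_shared_secret>"

nonce = requests.get(f"{base}/_synapse/admin/v1/register").json()["nonce"]
resp = requests.post(f"{base}/_synapse/admin/v1/register", json={
    "nonce": nonce,
    "username": "pepper_roni",
    "displayname": "Pepper Roni",
    "password": "pizza",
    "admin": True,
    "mac": generate_mac(nonce, "pepper_roni", "pizza", admin=True),
})
print(resp.json()["user_id"])
```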
docs/admin_api/register_api.rst (+0, −68; file deleted)
0 Shared-Secret Registration
1 ==========================
2
3 This API allows for the creation of users in an administrative and
4 non-interactive way. This is generally used for bootstrapping a Synapse
5 instance with administrator accounts.
6
7 To authenticate yourself to the server, you will need both the shared secret
8 (``registration_shared_secret`` in the homeserver configuration), and a
9 one-time nonce. If the registration shared secret is not configured, this API
10 is not enabled.
11
12 To fetch the nonce, you need to request one from the API::
13
14 > GET /_synapse/admin/v1/register
15
16 < {"nonce": "thisisanonce"}
17
18 Once you have the nonce, you can make a ``POST`` to the same URL with a JSON
19 body containing the nonce, username, password, whether they are an admin
20 (optional, False by default), and a HMAC digest of the content. Also you can
21 set the displayname (optional, ``username`` by default).
22
23 As an example::
24
25 > POST /_synapse/admin/v1/register
26 > {
27 "nonce": "thisisanonce",
28 "username": "pepper_roni",
29 "displayname": "Pepper Roni",
30 "password": "pizza",
31 "admin": true,
32 "mac": "mac_digest_here"
33 }
34
35 < {
36 "access_token": "token_here",
37 "user_id": "@pepper_roni:localhost",
38 "home_server": "test",
39 "device_id": "device_id_here"
40 }
41
42 The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being
43 the shared secret and the content being the nonce, user, password, either the
44 string "admin" or "notadmin", and optionally the user_type
45 each separated by NULs. For an example of generation in Python::
46
47 import hmac, hashlib
48
49 def generate_mac(nonce, user, password, admin=False, user_type=None):
50
51 mac = hmac.new(
52 key=shared_secret,
53 digestmod=hashlib.sha1,
54 )
55
56 mac.update(nonce.encode('utf8'))
57 mac.update(b"\x00")
58 mac.update(user.encode('utf8'))
59 mac.update(b"\x00")
60 mac.update(password.encode('utf8'))
61 mac.update(b"\x00")
62 mac.update(b"admin" if admin else b"notadmin")
63 if user_type:
64 mac.update(b"\x00")
65 mac.update(user_type.encode('utf8'))
66
67 return mac.hexdigest()
2323 ```
2424
2525 To use it, you will need to authenticate by providing an `access_token` for a
26 server admin: see [README.rst](README.rst).
26 server admin: see [Admin API](../../usage/administration/admin_api).
2727
2828 Response:
2929
442442 ```
443443
444444 To use it, you will need to authenticate by providing an ``access_token`` for a
445 server admin: see [README.rst](README.rst).
445 server admin: see [Admin API](../../usage/administration/admin_api).
446446
447447 A response body like the following is returned:
448448
99 ```
1010
1111 To use it, you will need to authenticate by providing an `access_token`
12 for a server admin: see [README.rst](README.rst).
12 for a server admin: see [Admin API](../../usage/administration/admin_api).
1313
1414 A response body like the following is returned:
1515
0 # User Admin API
1
2 ## Query User Account
3
4 This API returns information about a specific user account.
5
6 The API is:
7
8 ```
9 GET /_synapse/admin/v2/users/<user_id>
10 ```
11
12 To use it, you will need to authenticate by providing an `access_token` for a
13 server admin: [Admin API](../../usage/administration/admin_api)
14
15 It returns a JSON body like the following:
16
17 ```json
18 {
19 "displayname": "User",
20 "threepids": [
21 {
22 "medium": "email",
23 "address": "<user_mail_1>"
24 },
25 {
26 "medium": "email",
27 "address": "<user_mail_2>"
28 }
29 ],
30 "avatar_url": "<avatar_url>",
31 "admin": 0,
32 "deactivated": 0,
33 "shadow_banned": 0,
34 "password_hash": "$2b$12$p9B4GkqYdRTPGD",
35 "creation_ts": 1560432506,
36 "appservice_id": null,
37 "consent_server_notice_sent": null,
38 "consent_version": null
39 }
40 ```
41
42 URL parameters:
43
44 - `user_id`: fully-qualified user id: for example, `@user:server.com`.
45
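As a usage sketch (the URL, token and user ID are placeholders; note that the
user ID must be URL-encoded):

```python
from urllib.parse import quote

import requests  # third-party HTTP client, used here for illustration

base = "https://matrix.example.com"  # your homeserver's base URL
headers = {"Authorization": "Bearer <access_token>"}

user_id = quote("@user:server.com", safe="")
account = requests.get(f"{base}/_synapse/admin/v2/users/{user_id}",
                       headers=headers).json()
print(account["displayname"], account["deactivated"])
```
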
46 ## Create or modify Account
47
48 This API allows an administrator to create or modify a user account with a
49 specific `user_id`.
50
51 This API is:
52
53 ```
54 PUT /_synapse/admin/v2/users/<user_id>
55 ```
56
57 with a body of:
58
59 ```json
60 {
61 "password": "user_password",
62 "displayname": "User",
63 "threepids": [
64 {
65 "medium": "email",
66 "address": "<user_mail_1>"
67 },
68 {
69 "medium": "email",
70 "address": "<user_mail_2>"
71 }
72 ],
73 "avatar_url": "<avatar_url>",
74 "admin": false,
75 "deactivated": false
76 }
77 ```
78
79 To use it, you will need to authenticate by providing an `access_token` for a
80 server admin: [Admin API](../../usage/administration/admin_api)
81
82 URL parameters:
83
84 - `user_id`: fully-qualified user id: for example, `@user:server.com`.
85
86 Body parameters:
87
88 - `password`, optional. If provided, the user's password is updated and all
89 devices are logged out.
90
91 - `displayname`, optional, defaults to the value of `user_id`.
92
93 - `threepids`, optional, allows setting the third-party IDs (email, msisdn)
94 belonging to a user.
95
96 - `avatar_url`, optional, must be a
97 [MXC URI](https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris).
98
99 - `admin`, optional, defaults to `false`.
100
101 - `deactivated`, optional. If unspecified, deactivation state will be left
102 unchanged on existing accounts and set to `false` for new accounts.
103 A user cannot be erased by deactivating with this API. For details on
104 deactivating users see [Deactivate Account](#deactivate-account).
105
106 If the user already exists then optional parameters default to the current value.
107
108 In order to re-activate an account, `deactivated` must be set to `false`. If
109 users do not log in via single sign-on, a new `password` must be provided.
110
111 ## List Accounts
112
113 This API returns all local user accounts.
114 By default, the response is ordered by ascending user ID.
115
116 ```
117 GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
118 ```
119
120 To use it, you will need to authenticate by providing an `access_token` for a
121 server admin: [Admin API](../../usage/administration/admin_api)
122
123 A response body like the following is returned:
124
125 ```json
126 {
127 "users": [
128 {
129 "name": "<user_id1>",
130 "is_guest": 0,
131 "admin": 0,
132 "user_type": null,
133 "deactivated": 0,
134 "shadow_banned": 0,
135 "displayname": "<User One>",
136 "avatar_url": null
137 }, {
138 "name": "<user_id2>",
139 "is_guest": 0,
140 "admin": 1,
141 "user_type": null,
142 "deactivated": 0,
143 "shadow_banned": 0,
144 "displayname": "<User Two>",
145 "avatar_url": "<avatar_url>"
146 }
147 ],
148 "next_token": "100",
149 "total": 200
150 }
151 ```
152
153 To paginate, check for `next_token` and if present, call the endpoint again
154 with `from` set to the value of `next_token`. This will return a new page.
155
156 If the endpoint does not return a `next_token` then there are no more users
157 to paginate through.
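
A sketch of that pagination loop (the URL and token are placeholders):

```python
import requests  # third-party HTTP client, used here for illustration

base = "https://matrix.example.com"  # your homeserver's base URL
headers = {"Authorization": "Bearer <access_token>"}

users, params = [], {"from": 0, "limit": 100, "guests": "false"}
while True:
    page = requests.get(f"{base}/_synapse/admin/v2/users",
                        headers=headers, params=params).json()
    users.extend(page["users"])
    if "next_token" not in page:
        break  # no more users to paginate through
    params["from"] = page["next_token"]

print(f"fetched {len(users)} of {page['total']} users")
```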
158
159 **Parameters**
160
161 The following parameters should be set in the URL:
162
163 - `user_id` - Is optional and filters to only return users with user IDs
164 that contain this value. This parameter is ignored when using the `name` parameter.
165 - `name` - Is optional and filters to only return users with user ID localparts
166 **or** displaynames that contain this value.
167 - `guests` - string representing a bool - Is optional and if `false` will **exclude** guest users.
168 Defaults to `true` to include guest users.
169 - `deactivated` - string representing a bool - Is optional and if `true` will **include** deactivated users.
170 Defaults to `false` to exclude deactivated users.
171 - `limit` - string representing a positive integer - Is optional but is used for pagination,
172 denoting the maximum number of items to return in this call. Defaults to `100`.
173 - `from` - string representing a positive integer - Is optional but used for pagination,
174 denoting the offset in the returned results. This should be treated as an opaque value and
175 not explicitly set to anything other than the return value of `next_token` from a previous call.
176 Defaults to `0`.
177 - `order_by` - The method by which to sort the returned list of users.
178 If the ordered field has duplicates, the second order is always by ascending `name`,
179 which guarantees a stable ordering. Valid values are:
180
181 - `name` - Users are ordered alphabetically by `name`. This is the default.
182 - `is_guest` - Users are ordered by `is_guest` status.
183 - `admin` - Users are ordered by `admin` status.
184 - `user_type` - Users are ordered alphabetically by `user_type`.
185 - `deactivated` - Users are ordered by `deactivated` status.
186 - `shadow_banned` - Users are ordered by `shadow_banned` status.
187 - `displayname` - Users are ordered alphabetically by `displayname`.
188 - `avatar_url` - Users are ordered alphabetically by avatar URL.
189
190 - `dir` - Direction of media order. Either `f` for forwards or `b` for backwards.
191 Setting this value to `b` will reverse the above sort order. Defaults to `f`.
192
193 Caution. The database only has indexes on the columns `name` and `created_ts`.
194 This means that if a different sort order is used (`is_guest`, `admin`,
195 `user_type`, `deactivated`, `shadow_banned`, `avatar_url` or `displayname`),
196 this can cause a large load on the database, especially for large environments.
197
198 **Response**
199
200 The following fields are returned in the JSON response body:
201
202 - `users` - An array of objects, each containing information about a user.
203 User objects contain the following fields:
204
205   - `name` - string - Fully-qualified user ID (for example, `@user:server.com`).
206   - `is_guest` - bool - Whether the user is a guest account.
207   - `admin` - bool - Whether the user is a server administrator.
208   - `user_type` - string - Type of the user. Normal users are type `None`.
209     This allows user type specific behaviour. There are also types `support` and `bot`.
210   - `deactivated` - bool - Whether the user has been marked as deactivated.
211   - `shadow_banned` - bool - Whether the user has been marked as shadow-banned.
212   - `displayname` - string - The user's display name if they have set one.
213   - `avatar_url` - string - The user's avatar URL if they have set one.
214
215 - `next_token`: string representing a positive integer - Indication for pagination. See above.
216 - `total` - integer - Total number of users.
217
218
219 ## Query current sessions for a user
220
221 This API returns information about the active sessions for a specific user.
222
223 The endpoints are:
224
225 ```
226 GET /_synapse/admin/v1/whois/<user_id>
227 ```
228
229 and:
230
231 ```
232 GET /_matrix/client/r0/admin/whois/<userId>
233 ```
234
235 See also: [Client Server
236 API Whois](https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid).
237
238 To use it, you will need to authenticate by providing an `access_token` for a
239 server admin: [Admin API](../../usage/administration/admin_api)
240
241 It returns a JSON body like the following:
242
243 ```json
244 {
245 "user_id": "<user_id>",
246 "devices": {
247 "": {
248 "sessions": [
249 {
250 "connections": [
251 {
252 "ip": "1.2.3.4",
253 "last_seen": 1417222374433,
254 "user_agent": "Mozilla/5.0 ..."
255 },
256 {
257 "ip": "1.2.3.10",
258 "last_seen": 1417222374500,
259 "user_agent": "Dalvik/2.1.0 ..."
260 }
261 ]
262 }
263 ]
264 }
265 }
266 }
267 ```
268
269 `last_seen` is measured in milliseconds since the Unix epoch.
270
271 ## Deactivate Account
272
273 This API deactivates an account. It removes active access tokens, resets the
274 password, and deletes third-party IDs (to prevent the user requesting a
275 password reset).
276
277 It can also mark the user as GDPR-erased. This means messages sent by the
278 user will still be visible to anyone that was in the room when these messages
279 were sent, but hidden from users joining the room afterwards.
280
281 The API is:
282
283 ```
284 POST /_synapse/admin/v1/deactivate/<user_id>
285 ```
286
287 with a body of:
288
289 ```json
290 {
291 "erase": true
292 }
293 ```
294
295 To use it, you will need to authenticate by providing an `access_token` for a
296 server admin: [Admin API](../../usage/administration/admin_api)
297
298 The erase parameter is optional and defaults to `false`.
299 An empty body may be passed for backwards compatibility.
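
As a usage sketch (the URL, token and user ID are placeholders):

```python
from urllib.parse import quote

import requests  # third-party HTTP client, used here for illustration

base = "https://matrix.example.com"  # your homeserver's base URL
headers = {"Authorization": "Bearer <access_token>"}

user_id = quote("@user:server.com", safe="")
requests.post(f"{base}/_synapse/admin/v1/deactivate/{user_id}",
              headers=headers, json={"erase": True}).raise_for_status()
```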
300
301 The following actions are performed when deactivating a user:
302
303 - Try to unbind 3PIDs from the identity server
304 - Remove all 3PIDs from the homeserver
305 - Delete all devices and E2EE keys
306 - Delete all access tokens
307 - Delete the password hash
308 - Remove the user from all rooms of which they are a member
309 - Remove the user from the user directory
310 - Reject all pending invites
311 - Remove all account validity information related to the user
312
313 The following additional actions are performed during deactivation if `erase`
314 is set to `true`:
315
316 - Remove the user's display name
317 - Remove the user's avatar URL
318 - Mark the user as erased
319
320
321 ## Reset password
322
323 Changes the password of another user. This will automatically log the user out of all their devices.
324
325 The API is:
326
327 ```
328 POST /_synapse/admin/v1/reset_password/<user_id>
329 ```
330
331 with a body of:
332
333 ```json
334 {
335 "new_password": "<secret>",
336 "logout_devices": true
337 }
338 ```
339
340 To use it, you will need to authenticate by providing an `access_token` for a
341 server admin: [Admin API](../../usage/administration/admin_api)
342
343 The parameter `new_password` is required.
344 The parameter `logout_devices` is optional and defaults to `true`.
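
As a usage sketch (the URL, token, user ID and new password are placeholders):

```python
from urllib.parse import quote

import requests  # third-party HTTP client, used here for illustration

base = "https://matrix.example.com"  # your homeserver's base URL
headers = {"Authorization": "Bearer <access_token>"}

user_id = quote("@user:server.com", safe="")
requests.post(
    f"{base}/_synapse/admin/v1/reset_password/{user_id}",
    headers=headers,
    json={"new_password": "<secret>", "logout_devices": True},
).raise_for_status()
```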
345
346
347 ## Get whether a user is a server administrator or not
348
349 The API is:
350
351 ```
352 GET /_synapse/admin/v1/users/<user_id>/admin
353 ```
354
355 To use it, you will need to authenticate by providing an `access_token` for a
356 server admin: [Admin API](../../usage/administration/admin_api)
357
358 A response body like the following is returned:
359
360 ```json
361 {
362 "admin": true
363 }
364 ```
365
366
367 ## Change whether a user is a server administrator or not
368
369 Note that you cannot demote yourself.
370
371 The API is:
372
373 ```
374 PUT /_synapse/admin/v1/users/<user_id>/admin
375 ```
376
377 with a body of:
378
379 ```json
380 {
381 "admin": true
382 }
383 ```
384
385 To use it, you will need to authenticate by providing an `access_token` for a
386 server admin: [Admin API](../../usage/administration/admin_api)
387
388
389 ## List room memberships of a user
390
391 Gets a list of all `room_id`s of which a specific `user_id` is a member.
392
393 The API is:
394
395 ```
396 GET /_synapse/admin/v1/users/<user_id>/joined_rooms
397 ```
398
399 To use it, you will need to authenticate by providing an `access_token` for a
400 server admin: [Admin API](../../usage/administration/admin_api)
401
402 A response body like the following is returned:
403
404 ```json
405 {
406 "joined_rooms": [
407 "!DuGcnbhHGaSZQoNQR:matrix.org",
408 "!ZtSaPCawyWtxfWiIy:matrix.org"
409 ],
410 "total": 2
411 }
412 ```
413
414 The server returns the list of rooms of which the user and the server
415 are members. If the user is local, all of the rooms of which the user is
416 a member are returned.
417
418 **Parameters**
419
420 The following parameters should be set in the URL:
421
422 - `user_id` - fully qualified: for example, `@user:server.com`.
423
424 **Response**
425
426 The following fields are returned in the JSON response body:
427
428 - `joined_rooms` - An array of `room_id`.
429 - `total` - Number of rooms.
430
431
432 ## List media of a user
433 Gets a list of all local media that a specific `user_id` has created.
434 By default, the response is ordered by descending creation date and ascending media ID.
435 The newest media is on top. You can change the order with parameters
436 `order_by` and `dir`.
437
438 The API is:
439
440 ```
441 GET /_synapse/admin/v1/users/<user_id>/media
442 ```
443
444 To use it, you will need to authenticate by providing an `access_token` for a
445 server admin: [Admin API](../../usage/administration/admin_api)
446
447 A response body like the following is returned:
448
449 ```json
450 {
451 "media": [
452 {
453 "created_ts": 100400,
454 "last_access_ts": null,
455 "media_id": "qXhyRzulkwLsNHTbpHreuEgo",
456 "media_length": 67,
457 "media_type": "image/png",
458 "quarantined_by": null,
459 "safe_from_quarantine": false,
460 "upload_name": "test1.png"
461 },
462 {
463 "created_ts": 200400,
464 "last_access_ts": null,
465 "media_id": "FHfiSnzoINDatrXHQIXBtahw",
466 "media_length": 67,
467 "media_type": "image/png",
468 "quarantined_by": null,
469 "safe_from_quarantine": false,
470 "upload_name": "test2.png"
471 }
472 ],
473 "next_token": 3,
474 "total": 2
475 }
476 ```
477
478 To paginate, check for `next_token` and if present, call the endpoint again
479 with `from` set to the value of `next_token`. This will return a new page.
480
481 If the endpoint does not return a `next_token` then there are no more
482 media to paginate through.
483
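As a sketch of this pagination loop with `curl` (the host, user ID, token and `next_token` value are illustrative):

```sh
# First page; up to 100 items are returned by default.
curl --header "Authorization: Bearer <access_token>" \
    "http://localhost:8008/_synapse/admin/v1/users/@user:server.com/media"

# If the response contained "next_token": 3, request the next page with from=3.
curl --header "Authorization: Bearer <access_token>" \
    "http://localhost:8008/_synapse/admin/v1/users/@user:server.com/media?from=3"
```
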
484 **Parameters**
485
486 The following parameters should be set in the URL:
487
488 - `user_id` - string - fully qualified: for example, `@user:server.com`.
489 - `limit` - string representing a positive integer - Is optional but is used for pagination,
490 denoting the maximum number of items to return in this call. Defaults to `100`.
491 - `from` - string representing a positive integer - Is optional but used for pagination,
492 denoting the offset in the returned results. This should be treated as an opaque value and
493 not explicitly set to anything other than the return value of `next_token` from a previous call.
494 Defaults to `0`.
495 - `order_by` - The method by which to sort the returned list of media.
496 If the ordered field has duplicates, the second order is always by ascending `media_id`,
497 which guarantees a stable ordering. Valid values are:
498
499 - `media_id` - Media are ordered alphabetically by `media_id`.
500 - `upload_name` - Media are ordered alphabetically by the name the media was uploaded with.
501 - `created_ts` - Media are ordered by when the content was uploaded in ms.
502 Smallest to largest. This is the default.
503 - `last_access_ts` - Media are ordered by when the content was last accessed in ms.
504 Smallest to largest.
505 - `media_length` - Media are ordered by length of the media in bytes.
506 Smallest to largest.
507 - `media_type` - Media are ordered alphabetically by MIME-type.
508 - `quarantined_by` - Media are ordered alphabetically by the user ID that
509 initiated the quarantine request for this media.
510 - `safe_from_quarantine` - Media are ordered by whether they are marked as
511 safe from quarantine.
512
513 - `dir` - Direction of media order. Either `f` for forwards or `b` for backwards.
514 Setting this value to `b` will reverse the above sort order. Defaults to `f`.
515
516 If neither `order_by` nor `dir` is set, the default order is newest media on top
517 (corresponds to `order_by` = `created_ts` and `dir` = `b`).
518
519 Caution. The database only has indexes on the columns `media_id`,
520 `user_id` and `created_ts`. This means that if a different sort order is used
521 (`upload_name`, `last_access_ts`, `media_length`, `media_type`,
522 `quarantined_by` or `safe_from_quarantine`), this can cause a large load on the
523 database, especially for large environments.
524
525 **Response**
526
527 The following fields are returned in the JSON response body:
528
529 - `media` - An array of objects, each containing information about a media.
530 Media objects contain the following fields:
531
532 - `created_ts` - integer - Timestamp when the content was uploaded in ms.
533 - `last_access_ts` - integer - Timestamp when the content was last accessed in ms.
534 - `media_id` - string - The id used to refer to the media.
535 - `media_length` - integer - Length of the media in bytes.
536 - `media_type` - string - The MIME-type of the media.
537 - `quarantined_by` - string - The user ID that initiated the quarantine request
538 for this media.
539
540 - `safe_from_quarantine` - bool - Whether this media is marked as safe from quarantine.
541 - `upload_name` - string - The name the media was uploaded with.
542
543 - `next_token` - integer - Indication for pagination. See above.
544 - `total` - integer - Total number of media.
545
546 ## Login as a user
547
548 Get an access token that can be used to authenticate as that user. Useful for
549 when admins wish to do actions on behalf of a user.
550
551 The API is:
552
553 ```
554 POST /_synapse/admin/v1/users/<user_id>/login
555 {}
556 ```
557
558 An optional `valid_until_ms` field can be specified in the request body as an
559 integer timestamp that specifies when the token should expire. By default tokens
560 do not expire.
561
562 A response body like the following is returned:
563
564 ```json
565 {
566 "access_token": "<opaque_access_token_string>"
567 }
568 ```
569
570 This API does *not* generate a new device for the user, and so will not appear
571 in their `/devices` list. In general, the target user should not be able to
572 tell that an admin has logged in as them.
573
574 To expire the token call the standard `/logout` API with the token.
575
576 Note: The token will expire if the *admin* user calls `/logout/all` from any
577 of their devices, but the token will *not* expire if the target user does the
578 same.
579
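A minimal `curl` example, assuming a homeserver at `localhost:8008` and an illustrative expiry timestamp:

```sh
curl --header "Authorization: Bearer <admin_access_token>" \
    --data '{"valid_until_ms": 1640995200000}' \
    http://localhost:8008/_synapse/admin/v1/users/@user:server.com/login
```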
580
581 ## User devices
582
583 ### List all devices
584 Gets information about all devices for a specific `user_id`.
585
586 The API is:
587
588 ```
589 GET /_synapse/admin/v2/users/<user_id>/devices
590 ```
591
592 To use it, you will need to authenticate by providing an `access_token` for a
593 server admin: [Admin API](../../usage/administration/admin_api)
594
595 A response body like the following is returned:
596
597 ```json
598 {
599 "devices": [
600 {
601 "device_id": "QBUAZIFURK",
602 "display_name": "android",
603 "last_seen_ip": "1.2.3.4",
604 "last_seen_ts": 1474491775024,
605 "user_id": "<user_id>"
606 },
607 {
608 "device_id": "AUIECTSRND",
609 "display_name": "ios",
610 "last_seen_ip": "1.2.3.5",
611 "last_seen_ts": 1474491775025,
612 "user_id": "<user_id>"
613 }
614 ],
615 "total": 2
616 }
617 ```
618
619 **Parameters**
620
621 The following parameters should be set in the URL:
622
623 - `user_id` - fully qualified: for example, `@user:server.com`.
624
625 **Response**
626
627 The following fields are returned in the JSON response body:
628
629 - `devices` - An array of objects, each containing information about a device.
630 Device objects contain the following fields:
631
632 - `device_id` - Identifier of device.
633 - `display_name` - Display name set by the user for this device.
634 Absent if no name has been set.
635 - `last_seen_ip` - The IP address where this device was last seen.
636 (May be a few minutes out of date, for efficiency reasons).
637 - `last_seen_ts` - The timestamp (in milliseconds since the unix epoch) when this
638 device was last seen. (May be a few minutes out of date, for efficiency reasons).
639 - `user_id` - Owner of device.
640
641 - `total` - Total number of user's devices.
642
643 ### Delete multiple devices
644 Deletes the given devices for a specific `user_id`, and invalidates
645 any access token associated with them.
646
647 The API is:
648
649 ```
650 POST /_synapse/admin/v2/users/<user_id>/delete_devices
651
652 {
653 "devices": [
654 "QBUAZIFURK",
655 "AUIECTSRND"
656 ]
657 }
658 ```
659
660 To use it, you will need to authenticate by providing an `access_token` for a
661 server admin: [Admin API](../../usage/administration/admin_api)
662
663 An empty JSON dict is returned.
664
665 **Parameters**
666
667 The following parameters should be set in the URL:
668
669 - `user_id` - fully qualified: for example, `@user:server.com`.
670
671 The following fields are required in the JSON request body:
672
673 - `devices` - The list of device IDs to delete.
674
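For example, to delete the two devices shown in the listing above with `curl` (host and token are placeholders):

```sh
curl --header "Authorization: Bearer <access_token>" \
    --data '{"devices": ["QBUAZIFURK", "AUIECTSRND"]}' \
    http://localhost:8008/_synapse/admin/v2/users/@user:server.com/delete_devices
```
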
675 ### Show a device
676 Gets information on a single device, by `device_id` for a specific `user_id`.
677
678 The API is:
679
680 ```
681 GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
682 ```
683
684 To use it, you will need to authenticate by providing an `access_token` for a
685 server admin: [Admin API](../../usage/administration/admin_api)
686
687 A response body like the following is returned:
688
689 ```json
690 {
691 "device_id": "<device_id>",
692 "display_name": "android",
693 "last_seen_ip": "1.2.3.4",
694 "last_seen_ts": 1474491775024,
695 "user_id": "<user_id>"
696 }
697 ```
698
699 **Parameters**
700
701 The following parameters should be set in the URL:
702
703 - `user_id` - fully qualified: for example, `@user:server.com`.
704 - `device_id` - The device to retrieve.
705
706 **Response**
707
708 The following fields are returned in the JSON response body:
709
710 - `device_id` - Identifier of device.
711 - `display_name` - Display name set by the user for this device.
712 Absent if no name has been set.
713 - `last_seen_ip` - The IP address where this device was last seen.
714 (May be a few minutes out of date, for efficiency reasons).
715 - `last_seen_ts` - The timestamp (in milliseconds since the unix epoch) when this
716 device was last seen. (May be a few minutes out of date, for efficiency reasons).
717 - `user_id` - Owner of device.
718
719 ### Update a device
720 Updates the metadata on the given `device_id` for a specific `user_id`.
721
722 The API is:
723
724 ```
725 PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
726
727 {
728 "display_name": "My other phone"
729 }
730 ```
731
732 To use it, you will need to authenticate by providing an `access_token` for a
733 server admin: [Admin API](../../usage/administration/admin_api)
734
735 An empty JSON dict is returned.
736
737 **Parameters**
738
739 The following parameters should be set in the URL:
740
741 - `user_id` - fully qualified: for example, `@user:server.com`.
742 - `device_id` - The device to update.
743
744 The following fields can be set in the JSON request body:
745
746 - `display_name` - The new display name for this device. If not given,
747 the display name is unchanged.
748
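For example, renaming a device with `curl` (host, token and device ID are placeholders):

```sh
curl --request PUT --header "Authorization: Bearer <access_token>" \
    --data '{"display_name": "My other phone"}' \
    http://localhost:8008/_synapse/admin/v2/users/@user:server.com/devices/QBUAZIFURK
```
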
749 ### Delete a device
750 Deletes the given `device_id` for a specific `user_id`,
751 and invalidates any access token associated with it.
752
753 The API is:
754
755 ```
756 DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
757
758 {}
759 ```
760
761 To use it, you will need to authenticate by providing an `access_token` for a
762 server admin: [Admin API](../../usage/administration/admin_api)
763
764 An empty JSON dict is returned.
765
766 **Parameters**
767
768 The following parameters should be set in the URL:
769
770 - `user_id` - fully qualified: for example, `@user:server.com`.
771 - `device_id` - The device to delete.
772
773 ## List all pushers
774 Gets information about all pushers for a specific `user_id`.
775
776 The API is:
777
778 ```
779 GET /_synapse/admin/v1/users/<user_id>/pushers
780 ```
781
782 To use it, you will need to authenticate by providing an `access_token` for a
783 server admin: [Admin API](../../usage/administration/admin_api)
784
785 A response body like the following is returned:
786
787 ```json
788 {
789 "pushers": [
790 {
791 "app_display_name":"HTTP Push Notifications",
792 "app_id":"m.http",
793 "data": {
794 "url":"example.com"
795 },
796 "device_display_name":"pushy push",
797 "kind":"http",
798 "lang":"None",
799 "profile_tag":"",
800 "pushkey":"a@example.com"
801 }
802 ],
803 "total": 1
804 }
805 ```
806
807 **Parameters**
808
809 The following parameters should be set in the URL:
810
811 - `user_id` - fully qualified: for example, `@user:server.com`.
812
813 **Response**
814
815 The following fields are returned in the JSON response body:
816
817 - `pushers` - An array containing the current pushers for the user
818
819 - `app_display_name` - string - A string that will allow the user to identify
820 what application owns this pusher.
821
822 - `app_id` - string - This is a reverse-DNS style identifier for the application.
823 Max length, 64 chars.
824
825 - `data` - A dictionary of information for the pusher implementation itself.
826
827 - `url` - string - Required if `kind` is `http`. The URL to use to send
828 notifications to.
829
830 - `format` - string - The format to use when sending notifications to the
831 Push Gateway.
832
833 - `device_display_name` - string - A string that will allow the user to identify
834 what device owns this pusher.
835
836 - `profile_tag` - string - This string determines which set of device specific rules
837 this pusher executes.
838
839 - `kind` - string - The kind of pusher. "http" is a pusher that sends HTTP pokes.
840 - `lang` - string - The preferred language for receiving notifications
841 (e.g. 'en' or 'en-US')
842
846 - `pushkey` - string - This is a unique identifier for this pusher.
847 Max length, 512 bytes.
848
849 - `total` - integer - Number of pushers.
850
851 See also the
852 [Client-Server API Spec on pushers](https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers).
853
854 ## Shadow-banning users
855
856 Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
857 A shadow-banned user receives successful responses to their client-server API requests,
858 but the events are not propagated into rooms. This can be an effective tool as it
859 (hopefully) takes longer for the user to realise they are being moderated before
860 pivoting to another account.
861
862 Shadow-banning a user should be used as a tool of last resort and may lead to confusing
863 or broken behaviour for the client. A shadow-banned user will not receive any
864 notification and it is generally more appropriate to ban or kick abusive users.
865 A shadow-banned user will be unable to contact anyone on the server.
866
867 The API is:
868
869 ```
870 POST /_synapse/admin/v1/users/<user_id>/shadow_ban
871 ```
872
873 To use it, you will need to authenticate by providing an `access_token` for a
874 server admin: [Admin API](../../usage/administration/admin_api)
875
876 An empty JSON dict is returned.
877
878 **Parameters**
879
880 The following parameters should be set in the URL:
881
882 - `user_id` - The fully qualified MXID: for example, `@user:server.com`. The user must
883 be local.
884
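A minimal `curl` sketch (host, user ID and token are placeholders):

```sh
curl --request POST --header "Authorization: Bearer <access_token>" \
    --data '{}' \
    http://localhost:8008/_synapse/admin/v1/users/@user:server.com/shadow_ban
```
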
885 ## Override ratelimiting for users
886
887 This API allows you to override or disable ratelimiting for a specific user.
888 There are specific APIs to set, get and delete a ratelimit.
889
890 ### Get status of ratelimit
891
892 The API is:
893
894 ```
895 GET /_synapse/admin/v1/users/<user_id>/override_ratelimit
896 ```
897
898 To use it, you will need to authenticate by providing an `access_token` for a
899 server admin: [Admin API](../../usage/administration/admin_api)
900
901 A response body like the following is returned:
902
903 ```json
904 {
905 "messages_per_second": 0,
906 "burst_count": 0
907 }
908 ```
909
910 **Parameters**
911
912 The following parameters should be set in the URL:
913
914 - `user_id` - The fully qualified MXID: for example, `@user:server.com`. The user must
915 be local.
916
917 **Response**
918
919 The following fields are returned in the JSON response body:
920
921 - `messages_per_second` - integer - The number of actions that can
922 be performed in a second. `0` means that ratelimiting is disabled for this user.
923 - `burst_count` - integer - How many actions can be performed before
924 being limited.
925
926 If **no** custom ratelimit is set, an empty JSON dict is returned.
927
928 ```json
929 {}
930 ```
931
932 ### Set ratelimit
933
934 The API is:
935
936 ```
937 POST /_synapse/admin/v1/users/<user_id>/override_ratelimit
938 ```
939
940 To use it, you will need to authenticate by providing an `access_token` for a
941 server admin: [Admin API](../../usage/administration/admin_api)
942
943 A response body like the following is returned:
944
945 ```json
946 {
947 "messages_per_second": 0,
948 "burst_count": 0
949 }
950 ```
951
952 **Parameters**
953
954 The following parameters should be set in the URL:
955
956 - `user_id` - The fully qualified MXID: for example, `@user:server.com`. The user must
957 be local.
958
959 Body parameters:
960
961 - `messages_per_second` - positive integer, optional. The number of actions that can
962 be performed in a second. Defaults to `0`.
963 - `burst_count` - positive integer, optional. How many actions can be performed
964 before being limited. Defaults to `0`.
965
966 To disable ratelimiting for a user, set both values to `0`.
967
968 **Response**
969
970 The following fields are returned in the JSON response body:
971
972 - `messages_per_second` - integer - The number of actions that can
973 be performed in a second.
974 - `burst_count` - integer - How many actions can be performed before
975 being limited.
976
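For example, to allow roughly ten actions per second with a burst of twenty (the values, host and token are illustrative):

```sh
curl --request POST --header "Authorization: Bearer <access_token>" \
    --data '{"messages_per_second": 10, "burst_count": 20}' \
    http://localhost:8008/_synapse/admin/v1/users/@user:server.com/override_ratelimit
```
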
977 ### Delete ratelimit
978
979 The API is:
980
981 ```
982 DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit
983 ```
984
985 To use it, you will need to authenticate by providing an `access_token` for a
986 server admin: [Admin API](../../usage/administration/admin_api)
987
988 An empty JSON dict is returned.
989
990 ```json
991 {}
992 ```
993
994 **Parameters**
995
996 The following parameters should be set in the URL:
997
998 - `user_id` - The fully qualified MXID: for example, `@user:server.com`. The user must
999 be local.
1000
+0
-981
docs/admin_api/user_admin_api.rst
0 .. contents::
1
2 Query User Account
3 ==================
4
5 This API returns information about a specific user account.
6
7 The api is::
8
9 GET /_synapse/admin/v2/users/<user_id>
10
11 To use it, you will need to authenticate by providing an ``access_token`` for a
12 server admin: see `README.rst <README.rst>`_.
13
14 It returns a JSON body like the following:
15
16 .. code:: json
17
18 {
19 "displayname": "User",
20 "threepids": [
21 {
22 "medium": "email",
23 "address": "<user_mail_1>"
24 },
25 {
26 "medium": "email",
27 "address": "<user_mail_2>"
28 }
29 ],
30 "avatar_url": "<avatar_url>",
31 "admin": 0,
32 "deactivated": 0,
33 "shadow_banned": 0,
34 "password_hash": "$2b$12$p9B4GkqYdRTPGD",
35 "creation_ts": 1560432506,
36 "appservice_id": null,
37 "consent_server_notice_sent": null,
38 "consent_version": null
39 }
40
41 URL parameters:
42
43 - ``user_id``: fully-qualified user id: for example, ``@user:server.com``.
44
45 Create or modify Account
46 ========================
47
48 This API allows an administrator to create or modify a user account with a
49 specific ``user_id``.
50
51 This api is::
52
53 PUT /_synapse/admin/v2/users/<user_id>
54
55 with a body of:
56
57 .. code:: json
58
59 {
60 "password": "user_password",
61 "displayname": "User",
62 "threepids": [
63 {
64 "medium": "email",
65 "address": "<user_mail_1>"
66 },
67 {
68 "medium": "email",
69 "address": "<user_mail_2>"
70 }
71 ],
72 "avatar_url": "<avatar_url>",
73 "admin": false,
74 "deactivated": false
75 }
76
77 To use it, you will need to authenticate by providing an ``access_token`` for a
78 server admin: see `README.rst <README.rst>`_.
79
80 URL parameters:
81
82 - ``user_id``: fully-qualified user id: for example, ``@user:server.com``.
83
84 Body parameters:
85
86 - ``password``, optional. If provided, the user's password is updated and all
87 devices are logged out.
88
89 - ``displayname``, optional, defaults to the value of ``user_id``.
90
91 - ``threepids``, optional, allows setting the third-party IDs (email, msisdn)
92 belonging to a user.
93
94 - ``avatar_url``, optional, must be a
95 `MXC URI <https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris>`_.
96
97 - ``admin``, optional, defaults to ``false``.
98
99 - ``deactivated``, optional. If unspecified, deactivation state will be left
100 unchanged on existing accounts and set to ``false`` for new accounts.
101 A user cannot be erased by deactivating with this API. For details on deactivating users see
102 `Deactivate Account <#deactivate-account>`_.
103
104 If the user already exists then optional parameters default to the current value.
105
106 In order to re-activate an account ``deactivated`` must be set to ``false``. If
107 users do not login via single-sign-on, a new ``password`` must be provided.
108
109 List Accounts
110 =============
111
112 This API returns all local user accounts.
113 By default, the response is ordered by ascending user ID.
114
115 The API is::
116
117 GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
118
119 To use it, you will need to authenticate by providing an ``access_token`` for a
120 server admin: see `README.rst <README.rst>`_.
121
122 A response body like the following is returned:
123
124 .. code:: json
125
126 {
127 "users": [
128 {
129 "name": "<user_id1>",
130 "is_guest": 0,
131 "admin": 0,
132 "user_type": null,
133 "deactivated": 0,
134 "shadow_banned": 0,
135 "displayname": "<User One>",
136 "avatar_url": null
137 }, {
138 "name": "<user_id2>",
139 "is_guest": 0,
140 "admin": 1,
141 "user_type": null,
142 "deactivated": 0,
143 "shadow_banned": 0,
144 "displayname": "<User Two>",
145 "avatar_url": "<avatar_url>"
146 }
147 ],
148 "next_token": "100",
149 "total": 200
150 }
151
152 To paginate, check for ``next_token`` and if present, call the endpoint again
153 with ``from`` set to the value of ``next_token``. This will return a new page.
154
155 If the endpoint does not return a ``next_token`` then there are no more users
156 to paginate through.
157
158 **Parameters**
159
160 The following parameters should be set in the URL:
161
162 - ``user_id`` - Is optional and filters to only return users with user IDs
163 that contain this value. This parameter is ignored when using the ``name`` parameter.
164 - ``name`` - Is optional and filters to only return users with user ID localparts
165 **or** displaynames that contain this value.
166 - ``guests`` - string representing a bool - Is optional and if ``false`` will **exclude** guest users.
167 Defaults to ``true`` to include guest users.
168 - ``deactivated`` - string representing a bool - Is optional and if ``true`` will **include** deactivated users.
169 Defaults to ``false`` to exclude deactivated users.
170 - ``limit`` - string representing a positive integer - Is optional but is used for pagination,
171 denoting the maximum number of items to return in this call. Defaults to ``100``.
172 - ``from`` - string representing a positive integer - Is optional but used for pagination,
173 denoting the offset in the returned results. This should be treated as an opaque value and
174 not explicitly set to anything other than the return value of ``next_token`` from a previous call.
175 Defaults to ``0``.
176 - ``order_by`` - The method by which to sort the returned list of users.
177 If the ordered field has duplicates, the second order is always by ascending ``name``,
178 which guarantees a stable ordering. Valid values are:
179
180 - ``name`` - Users are ordered alphabetically by ``name``. This is the default.
181 - ``is_guest`` - Users are ordered by ``is_guest`` status.
182 - ``admin`` - Users are ordered by ``admin`` status.
183 - ``user_type`` - Users are ordered alphabetically by ``user_type``.
184 - ``deactivated`` - Users are ordered by ``deactivated`` status.
185 - ``shadow_banned`` - Users are ordered by ``shadow_banned`` status.
186 - ``displayname`` - Users are ordered alphabetically by ``displayname``.
187 - ``avatar_url`` - Users are ordered alphabetically by avatar URL.
188
189 - ``dir`` - Direction of media order. Either ``f`` for forwards or ``b`` for backwards.
190 Setting this value to ``b`` will reverse the above sort order. Defaults to ``f``.
191
192 Caution. The database only has indexes on the columns ``name`` and ``created_ts``.
193 This means that if a different sort order is used (``is_guest``, ``admin``,
194 ``user_type``, ``deactivated``, ``shadow_banned``, ``avatar_url`` or ``displayname``),
195 this can cause a large load on the database, especially for large environments.
196
197 **Response**
198
199 The following fields are returned in the JSON response body:
200
201 - ``users`` - An array of objects, each containing information about a user.
202 User objects contain the following fields:
203
204 - ``name`` - string - Fully-qualified user ID (ex. ``@user:server.com``).
205 - ``is_guest`` - bool - Status if that user is a guest account.
206 - ``admin`` - bool - Status if that user is a server administrator.
207 - ``user_type`` - string - Type of the user. Normal users are type ``None``.
208 This allows user type specific behaviour. There are also types ``support`` and ``bot``.
209 - ``deactivated`` - bool - Status if that user has been marked as deactivated.
210 - ``shadow_banned`` - bool - Status if that user has been marked as shadow banned.
211 - ``displayname`` - string - The user's display name if they have set one.
212 - ``avatar_url`` - string - The user's avatar URL if they have set one.
213
214 - ``next_token``: string representing a positive integer - Indication for pagination. See above.
215 - ``total`` - integer - Total number of users.
216
217
218 Query current sessions for a user
219 =================================
220
221 This API returns information about the active sessions for a specific user.
222
223 The api is::
224
225 GET /_synapse/admin/v1/whois/<user_id>
226
227 and::
228
229 GET /_matrix/client/r0/admin/whois/<userId>
230
231 See also: `Client Server API Whois
232 <https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid>`_
233
234 To use it, you will need to authenticate by providing an ``access_token`` for a
235 server admin: see `README.rst <README.rst>`_.
236
237 It returns a JSON body like the following:
238
239 .. code:: json
240
241 {
242 "user_id": "<user_id>",
243 "devices": {
244 "": {
245 "sessions": [
246 {
247 "connections": [
248 {
249 "ip": "1.2.3.4",
250 "last_seen": 1417222374433,
251 "user_agent": "Mozilla/5.0 ..."
252 },
253 {
254 "ip": "1.2.3.10",
255 "last_seen": 1417222374500,
256 "user_agent": "Dalvik/2.1.0 ..."
257 }
258 ]
259 }
260 ]
261 }
262 }
263 }
264
265 ``last_seen`` is measured in milliseconds since the Unix epoch.
266
267 Deactivate Account
268 ==================
269
270 This API deactivates an account. It removes active access tokens, resets the
271 password, and deletes third-party IDs (to prevent the user requesting a
272 password reset).
273
274 It can also mark the user as GDPR-erased. This means messages sent by the
275 user will still be visible by anyone that was in the room when these messages
276 were sent, but hidden from users joining the room afterwards.
277
278 The api is::
279
280 POST /_synapse/admin/v1/deactivate/<user_id>
281
282 with a body of:
283
284 .. code:: json
285
286 {
287 "erase": true
288 }
289
290 To use it, you will need to authenticate by providing an ``access_token`` for a
291 server admin: see `README.rst <README.rst>`_.
292
293 The erase parameter is optional and defaults to ``false``.
294 An empty body may be passed for backwards compatibility.
295
296 The following actions are performed when deactivating a user:
297
298 - Try to unbind 3PIDs from the identity server
299 - Remove all 3PIDs from the homeserver
300 - Delete all devices and E2EE keys
301 - Delete all access tokens
302 - Delete the password hash
303 - Removal from all rooms the user is a member of
304 - Remove the user from the user directory
305 - Reject all pending invites
306 - Remove all account validity information related to the user
307
308 The following additional actions are performed during deactivation if ``erase``
309 is set to ``true``:
310
311 - Remove the user's display name
312 - Remove the user's avatar URL
313 - Mark the user as erased
314
315
316 Reset password
317 ==============
318
319 Changes the password of another user. By default, this will log the user out of all their devices.
320
321 The api is::
322
323 POST /_synapse/admin/v1/reset_password/<user_id>
324
325 with a body of:
326
327 .. code:: json
328
329 {
330 "new_password": "<secret>",
331 "logout_devices": true
332 }
333
334 To use it, you will need to authenticate by providing an ``access_token`` for a
335 server admin: see `README.rst <README.rst>`_.
336
337 The parameter ``new_password`` is required.
338 The parameter ``logout_devices`` is optional and defaults to ``true``.
339
340 Get whether a user is a server administrator or not
341 ===================================================
342
343
344 The api is::
345
346 GET /_synapse/admin/v1/users/<user_id>/admin
347
348 To use it, you will need to authenticate by providing an ``access_token`` for a
349 server admin: see `README.rst <README.rst>`_.
350
351 A response body like the following is returned:
352
353 .. code:: json
354
355 {
356 "admin": true
357 }
358
359
360 Change whether a user is a server administrator or not
361 ======================================================
362
363 Note that you cannot demote yourself.
364
365 The api is::
366
367 PUT /_synapse/admin/v1/users/<user_id>/admin
368
369 with a body of:
370
371 .. code:: json
372
373 {
374 "admin": true
375 }
376
377 To use it, you will need to authenticate by providing an ``access_token`` for a
378 server admin: see `README.rst <README.rst>`_.
379
380
381 List room memberships of a user
382 ================================
383 Gets a list of all ``room_id`` that a specific ``user_id`` is a member of.
384
385 The API is::
386
387 GET /_synapse/admin/v1/users/<user_id>/joined_rooms
388
389 To use it, you will need to authenticate by providing an ``access_token`` for a
390 server admin: see `README.rst <README.rst>`_.
391
392 A response body like the following is returned:
393
394 .. code:: json
395
396 {
397 "joined_rooms": [
398 "!DuGcnbhHGaSZQoNQR:matrix.org",
399 "!ZtSaPCawyWtxfWiIy:matrix.org"
400 ],
401 "total": 2
402 }
403
404 The server returns the list of rooms of which both the user and the
405 server are members. If the user is local, all the rooms of which the
406 user is a member are returned.
407
408 **Parameters**
409
410 The following parameters should be set in the URL:
411
412 - ``user_id`` - fully qualified: for example, ``@user:server.com``.
413
414 **Response**
415
416 The following fields are returned in the JSON response body:
417
418 - ``joined_rooms`` - An array of ``room_id``.
419 - ``total`` - Number of rooms.
420
421
422 List media of a user
423 ====================
424 Gets a list of all local media that a specific ``user_id`` has created.
425 By default, the response is ordered by descending creation date and ascending media ID.
426 The newest media is on top. You can change the order with parameters
427 ``order_by`` and ``dir``.
428
429 The API is::
430
431 GET /_synapse/admin/v1/users/<user_id>/media
432
433 To use it, you will need to authenticate by providing an ``access_token`` for a
434 server admin: see `README.rst <README.rst>`_.
435
436 A response body like the following is returned:
437
438 .. code:: json
439
440 {
441 "media": [
442 {
443 "created_ts": 100400,
444 "last_access_ts": null,
445 "media_id": "qXhyRzulkwLsNHTbpHreuEgo",
446 "media_length": 67,
447 "media_type": "image/png",
448 "quarantined_by": null,
449 "safe_from_quarantine": false,
450 "upload_name": "test1.png"
451 },
452 {
453 "created_ts": 200400,
454 "last_access_ts": null,
455 "media_id": "FHfiSnzoINDatrXHQIXBtahw",
456 "media_length": 67,
457 "media_type": "image/png",
458 "quarantined_by": null,
459 "safe_from_quarantine": false,
460 "upload_name": "test2.png"
461 }
462 ],
463 "next_token": 3,
464 "total": 2
465 }
466
467 To paginate, check for ``next_token`` and if present, call the endpoint again
468 with ``from`` set to the value of ``next_token``. This will return a new page.
469
470 If the endpoint does not return a ``next_token`` then there are no more
471 media to paginate through.
472
473 **Parameters**
474
475 The following parameters should be set in the URL:
476
477 - ``user_id`` - string - fully qualified: for example, ``@user:server.com``.
478 - ``limit``: string representing a positive integer - Is optional but is used for pagination,
479 denoting the maximum number of items to return in this call. Defaults to ``100``.
480 - ``from``: string representing a positive integer - Is optional but used for pagination,
481 denoting the offset in the returned results. This should be treated as an opaque value and
482 not explicitly set to anything other than the return value of ``next_token`` from a previous call.
483 Defaults to ``0``.
484 - ``order_by`` - The method by which to sort the returned list of media.
485 If the ordered field has duplicates, the second order is always by ascending ``media_id``,
486 which guarantees a stable ordering. Valid values are:
487
488 - ``media_id`` - Media are ordered alphabetically by ``media_id``.
489 - ``upload_name`` - Media are ordered alphabetically by the name the media was uploaded with.
490 - ``created_ts`` - Media are ordered by when the content was uploaded in ms.
491 Smallest to largest. This is the default.
492 - ``last_access_ts`` - Media are ordered by when the content was last accessed in ms.
493 Smallest to largest.
494 - ``media_length`` - Media are ordered by length of the media in bytes.
495 Smallest to largest.
496 - ``media_type`` - Media are ordered alphabetically by MIME-type.
497 - ``quarantined_by`` - Media are ordered alphabetically by the user ID that
498 initiated the quarantine request for this media.
499 - ``safe_from_quarantine`` - Media are ordered by whether they are marked as
500 safe from quarantine.
501
502 - ``dir`` - Direction of media order. Either ``f`` for forwards or ``b`` for backwards.
503 Setting this value to ``b`` will reverse the above sort order. Defaults to ``f``.
504
505 If neither ``order_by`` nor ``dir`` is set, the default order is newest media on top
506 (corresponds to ``order_by`` = ``created_ts`` and ``dir`` = ``b``).
507
508 Caution. The database only has indexes on the columns ``media_id``,
509 ``user_id`` and ``created_ts``. This means that if a different sort order is used
510 (``upload_name``, ``last_access_ts``, ``media_length``, ``media_type``,
511 ``quarantined_by`` or ``safe_from_quarantine``), this can cause a large load on the
512 database, especially for large environments.
513
514 **Response**
515
516 The following fields are returned in the JSON response body:
517
518 - ``media`` - An array of objects, each containing information about a media.
519 Media objects contain the following fields:
520
521 - ``created_ts`` - integer - Timestamp when the content was uploaded in ms.
522 - ``last_access_ts`` - integer - Timestamp when the content was last accessed in ms.
523 - ``media_id`` - string - The id used to refer to the media.
524 - ``media_length`` - integer - Length of the media in bytes.
525 - ``media_type`` - string - The MIME-type of the media.
526 - ``quarantined_by`` - string - The user ID that initiated the quarantine request
527 for this media.
528
529 - ``safe_from_quarantine`` - bool - Whether this media is marked as safe from quarantine.
530 - ``upload_name`` - string - The name the media was uploaded with.
531
532 - ``next_token``: integer - Indication for pagination. See above.
533 - ``total`` - integer - Total number of media.
534
535 Login as a user
536 ===============
537
538 Get an access token that can be used to authenticate as that user. Useful for
539 when admins wish to do actions on behalf of a user.
540
541 The API is::
542
543 POST /_synapse/admin/v1/users/<user_id>/login
544 {}
545
546 An optional ``valid_until_ms`` field can be specified in the request body as an
547 integer timestamp that specifies when the token should expire. By default tokens
548 do not expire.
549
550 A response body like the following is returned:
551
552 .. code:: json
553
554 {
555 "access_token": "<opaque_access_token_string>"
556 }
557
558
559 This API does *not* generate a new device for the user, and so will not appear
560 in their ``/devices`` list. In general, the target user should not be able to
561 tell that an admin has logged in as them.
562
563 To expire the token call the standard ``/logout`` API with the token.
564
565 Note: The token will expire if the *admin* user calls ``/logout/all`` from any
566 of their devices, but the token will *not* expire if the target user does the
567 same.
568
569
570 User devices
571 ============
572
573 List all devices
574 ----------------
575 Gets information about all devices for a specific ``user_id``.
576
577 The API is::
578
579 GET /_synapse/admin/v2/users/<user_id>/devices
580
581 To use it, you will need to authenticate by providing an ``access_token`` for a
582 server admin: see `README.rst <README.rst>`_.
583
584 A response body like the following is returned:
585
586 .. code:: json
587
588 {
589 "devices": [
590 {
591 "device_id": "QBUAZIFURK",
592 "display_name": "android",
593 "last_seen_ip": "1.2.3.4",
594 "last_seen_ts": 1474491775024,
595 "user_id": "<user_id>"
596 },
597 {
598 "device_id": "AUIECTSRND",
599 "display_name": "ios",
600 "last_seen_ip": "1.2.3.5",
601 "last_seen_ts": 1474491775025,
602 "user_id": "<user_id>"
603 }
604 ],
605 "total": 2
606 }
607
608 **Parameters**
609
610 The following parameters should be set in the URL:
611
612 - ``user_id`` - fully qualified: for example, ``@user:server.com``.
613
614 **Response**
615
616 The following fields are returned in the JSON response body:
617
618 - ``devices`` - An array of objects, each containing information about a device.
619 Device objects contain the following fields:
620
621 - ``device_id`` - Identifier of device.
622 - ``display_name`` - Display name set by the user for this device.
623 Absent if no name has been set.
624 - ``last_seen_ip`` - The IP address where this device was last seen.
625 (May be a few minutes out of date, for efficiency reasons).
626 - ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this
627 device was last seen. (May be a few minutes out of date, for efficiency reasons).
628 - ``user_id`` - Owner of device.
629
630 - ``total`` - Total number of user's devices.
631
632 Delete multiple devices
633 -----------------------
634 Deletes the given devices for a specific ``user_id``, and invalidates
635 any access token associated with them.
636
637 The API is::
638
639 POST /_synapse/admin/v2/users/<user_id>/delete_devices
640
641 {
642 "devices": [
643 "QBUAZIFURK",
644 "AUIECTSRND"
645 ],
646 }
647
648 To use it, you will need to authenticate by providing an ``access_token`` for a
649 server admin: see `README.rst <README.rst>`_.
650
651 An empty JSON dict is returned.
652
653 **Parameters**
654
655 The following parameters should be set in the URL:
656
657 - ``user_id`` - fully qualified: for example, ``@user:server.com``.
658
659 The following fields are required in the JSON request body:
660
661 - ``devices`` - The list of device IDs to delete.
662
663 Show a device
664 ---------------
665 Gets information on a single device, by ``device_id`` for a specific ``user_id``.
666
667 The API is::
668
669 GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
670
671 To use it, you will need to authenticate by providing an ``access_token`` for a
672 server admin: see `README.rst <README.rst>`_.
673
674 A response body like the following is returned:
675
676 .. code:: json
677
678 {
679 "device_id": "<device_id>",
680 "display_name": "android",
681 "last_seen_ip": "1.2.3.4",
682 "last_seen_ts": 1474491775024,
683 "user_id": "<user_id>"
684 }
685
686 **Parameters**
687
688 The following parameters should be set in the URL:
689
690 - ``user_id`` - fully qualified: for example, ``@user:server.com``.
691 - ``device_id`` - The device to retrieve.
692
693 **Response**
694
695 The following fields are returned in the JSON response body:
696
697 - ``device_id`` - Identifier of device.
698 - ``display_name`` - Display name set by the user for this device.
699 Absent if no name has been set.
700 - ``last_seen_ip`` - The IP address where this device was last seen.
701 (May be a few minutes out of date, for efficiency reasons).
702 - ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this
703 device was last seen. (May be a few minutes out of date, for efficiency reasons).
704 - ``user_id`` - Owner of device.
705
706 Update a device
707 ---------------
708 Updates the metadata on the given ``device_id`` for a specific ``user_id``.
709
710 The API is::
711
712 PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
713
714 {
715 "display_name": "My other phone"
716 }
717
718 To use it, you will need to authenticate by providing an ``access_token`` for a
719 server admin: see `README.rst <README.rst>`_.
720
721 An empty JSON dict is returned.
722
723 **Parameters**
724
725 The following parameters should be set in the URL:
726
727 - ``user_id`` - fully qualified: for example, ``@user:server.com``.
728 - ``device_id`` - The device to update.
729
730 The following fields can be set in the JSON request body:
731
732 - ``display_name`` - The new display name for this device. If not given,
733 the display name is unchanged.
734
735 Delete a device
736 ---------------
737 Deletes the given ``device_id`` for a specific ``user_id``,
738 and invalidates any access token associated with it.
739
740 The API is::
741
742 DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
743
744 {}
745
746 To use it, you will need to authenticate by providing an ``access_token`` for a
747 server admin: see `README.rst <README.rst>`_.
748
749 An empty JSON dict is returned.
750
751 **Parameters**
752
753 The following parameters should be set in the URL:
754
755 - ``user_id`` - fully qualified: for example, ``@user:server.com``.
756 - ``device_id`` - The device to delete.
757
758 List all pushers
759 ================
760 Gets information about all pushers for a specific ``user_id``.
761
762 The API is::
763
764 GET /_synapse/admin/v1/users/<user_id>/pushers
765
766 To use it, you will need to authenticate by providing an ``access_token`` for a
767 server admin: see `README.rst <README.rst>`_.
768
769 A response body like the following is returned:
770
771 .. code:: json
772
773 {
774 "pushers": [
775 {
776 "app_display_name":"HTTP Push Notifications",
777 "app_id":"m.http",
778 "data": {
779 "url":"example.com"
780 },
781 "device_display_name":"pushy push",
782 "kind":"http",
783 "lang":"None",
784 "profile_tag":"",
785 "pushkey":"a@example.com"
786 }
787 ],
788 "total": 1
789 }
790
791 **Parameters**
792
793 The following parameters should be set in the URL:
794
795 - ``user_id`` - fully qualified: for example, ``@user:server.com``.
796
797 **Response**
798
799 The following fields are returned in the JSON response body:
800
801 - ``pushers`` - An array containing the current pushers for the user
802
803 - ``app_display_name`` - string - A string that will allow the user to identify
804 what application owns this pusher.
805
806 - ``app_id`` - string - This is a reverse-DNS style identifier for the application.
807 Max length, 64 chars.
808
809 - ``data`` - A dictionary of information for the pusher implementation itself.
810
811 - ``url`` - string - Required if ``kind`` is ``http``. The URL to use to send
812 notifications to.
813
814 - ``format`` - string - The format to use when sending notifications to the
815 Push Gateway.
816
817 - ``device_display_name`` - string - A string that will allow the user to identify
818 what device owns this pusher.
819
820 - ``profile_tag`` - string - This string determines which set of device specific rules
821 this pusher executes.
822
823 - ``kind`` - string - The kind of pusher. "http" is a pusher that sends HTTP pokes.
824 - ``lang`` - string - The preferred language for receiving notifications
825 (e.g. 'en' or 'en-US')
826
830 - ``pushkey`` - string - This is a unique identifier for this pusher.
831 Max length, 512 bytes.
832
833 - ``total`` - integer - Number of pushers.
834
835 See also `Client-Server API Spec <https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers>`_
836
837 Shadow-banning users
838 ====================
839
840 Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
841 A shadow-banned user receives successful responses to their client-server API requests,
842 but the events are not propagated into rooms. This can be an effective tool as it
843 (hopefully) takes longer for the user to realise they are being moderated before
844 pivoting to another account.
845
846 Shadow-banning a user should be used as a tool of last resort and may lead to confusing
847 or broken behaviour for the client. A shadow-banned user will not receive any
848 notification and it is generally more appropriate to ban or kick abusive users.
849 A shadow-banned user will be unable to contact anyone on the server.
850
851 The API is::
852
853 POST /_synapse/admin/v1/users/<user_id>/shadow_ban
854
855 To use it, you will need to authenticate by providing an ``access_token`` for a
856 server admin: see `README.rst <README.rst>`_.
857
858 An empty JSON dict is returned.
859
860 **Parameters**
861
862 The following parameters should be set in the URL:
863
864 - ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must
865 be local.
866
867 Override ratelimiting for users
868 ===============================
869
870 This API allows you to override or disable ratelimiting for a specific user.
871 There are specific APIs to set, get and delete a ratelimit.
872
873 Get status of ratelimit
874 -----------------------
875
876 The API is::
877
878 GET /_synapse/admin/v1/users/<user_id>/override_ratelimit
879
880 To use it, you will need to authenticate by providing an ``access_token`` for a
881 server admin: see `README.rst <README.rst>`_.
882
883 A response body like the following is returned:
884
885 .. code:: json
886
887 {
888 "messages_per_second": 0,
889 "burst_count": 0
890 }
891
892 **Parameters**
893
894 The following parameters should be set in the URL:
895
896 - ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must
897 be local.
898
899 **Response**
900
901 The following fields are returned in the JSON response body:
902
903 - ``messages_per_second`` - integer - The number of actions that can
904 be performed in a second. ``0`` means that ratelimiting is disabled for this user.
905 - ``burst_count`` - integer - How many actions can be performed before
906 being limited.
907
908 If **no** custom ratelimit is set, an empty JSON dict is returned.
909
910 .. code:: json
911
912 {}
913
914 Set ratelimit
915 -------------
916
917 The API is::
918
919 POST /_synapse/admin/v1/users/<user_id>/override_ratelimit
920
921 To use it, you will need to authenticate by providing an ``access_token`` for a
922 server admin: see `README.rst <README.rst>`_.
923
924 A response body like the following is returned:
925
926 .. code:: json
927
928 {
929 "messages_per_second": 0,
930 "burst_count": 0
931 }
932
933 **Parameters**
934
935 The following parameters should be set in the URL:
936
937 - ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must
938 be local.
939
940 Body parameters:
941
942 - ``messages_per_second`` - positive integer, optional. The number of actions that can
943 be performed in a second. Defaults to ``0``.
944 - ``burst_count`` - positive integer, optional. How many actions can be performed
945 before being limited. Defaults to ``0``.
946
947 To disable ratelimiting for a user, set both values to ``0``.
948
949 **Response**
950
951 The following fields are returned in the JSON response body:
952
953 - ``messages_per_second`` - integer - The number of actions that can
954 be performed in a second.
955 - ``burst_count`` - integer - How many actions can be performed before
956 being limited.
957
958 Delete ratelimit
959 ----------------
960
961 The API is::
962
963 DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit
964
965 To use it, you will need to authenticate by providing an ``access_token`` for a
966 server admin: see `README.rst <README.rst>`_.
967
968 An empty JSON dict is returned.
969
970 .. code:: json
971
972 {}
973
974 **Parameters**
975
976 The following parameters should be set in the URL:
977
978 - ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must
979 be local.
980
0 # Version API
1
2 This API returns the running Synapse version and the Python version
3 on which Synapse is being run. This is useful when a Synapse instance
4 is behind a proxy that does not forward the 'Server' header (which also
5 contains Synapse version information).
6
7 The api is:
8
9 ```
10 GET /_synapse/admin/v1/server_version
11 ```
12
13 It returns a JSON body like the following:
14
15 ```json
16 {
17 "server_version": "0.99.2rc1 (b=develop, abcdef123)",
18 "python_version": "3.6.8"
19 }
20 ```
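
A quick way to try it, assuming a homeserver listening on `localhost:8008`:

```sh
curl http://localhost:8008/_synapse/admin/v1/server_version
```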
+0
-20
docs/admin_api/version_api.rst
0 Version API
1 ===========
2
3 This API returns the running Synapse version and the Python version
4 on which Synapse is being run. This is useful when a Synapse instance
5 is behind a proxy that does not forward the 'Server' header (which also
6 contains Synapse version information).
7
8 The api is::
9
10 GET /_synapse/admin/v1/server_version
11
12 It returns a JSON body like the following:
13
14 .. code:: json
15
16 {
17 "server_version": "0.99.2rc1 (b=develop, abcdef123)",
18 "python_version": "3.6.8"
19 }
121121 that our active branches are ordered thus, from more-stable to less-stable:
122122
123123 * `master` (tracks our last release).
124 * `release-vX.Y.Z` (the branch where we prepare the next release)<sup
124 * `release-vX.Y` (the branch where we prepare the next release)<sup
125125 id="a3">[3](#f3)</sup>.
126126 * PR branches which are targeting the release.
127127 * `develop` (our "mainline" branch containing our bleeding-edge).
128128 * regular PR branches.
129129
130130 The corollary is: if you have a bugfix that needs to land in both
131 `release-vX.Y.Z` *and* `develop`, then you should base your PR on
132 `release-vX.Y.Z`, get it merged there, and then merge from `release-vX.Y.Z` to
131 `release-vX.Y` *and* `develop`, then you should base your PR on
132 `release-vX.Y`, get it merged there, and then merge from `release-vX.Y` to
133133 `develop`. (If a fix lands in `develop` and we later need it in a
134134 release-branch, we can of course cherry-pick it, but landing it in the release
135135 branch first helps reduce the chance of annoying conflicts.)
144144
145145 <b id="f3">[3]</b>: Very, very occasionally (I think this has happened once in
146146 the history of Synapse), we've had two releases in flight at once. Obviously,
147 `release-v1.2.3` is more-stable than `release-v1.3.0`. [^](#a3)
147 `release-v1.2` is more-stable than `release-v1.3`. [^](#a3)
0 <!--
1 Include the contents of CONTRIBUTING.md from the project root (where GitHub likes it
2 to be)
3 -->
4 # Contributing
5
6 {{#include ../../CONTRIBUTING.md}}
0 # Internal Documentation
1
2 This section covers implementation documentation for various parts of Synapse.
3
4 If a developer is planning to make a change to a feature of Synapse, it can be useful for
5 general documentation of how that feature is implemented to be available. This saves
6 the developer the time they would otherwise spend understanding how the feature works
7 by reading the code.
8
9 Documentation that would be more useful from the perspective of a system administrator,
10 rather than a developer who's intending to change the code, should instead be placed
11 under the Usage section of the documentation.
Binary diff not shown
0 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
1 <svg
2 xmlns:dc="http://purl.org/dc/elements/1.1/"
3 xmlns:cc="http://creativecommons.org/ns#"
4 xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
5 xmlns:svg="http://www.w3.org/2000/svg"
6 xmlns="http://www.w3.org/2000/svg"
7 xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
8 xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
9 viewBox="0 0 199.7 184.2"
10 version="1.1"
11 id="svg62"
12 sodipodi:docname="mdbook-favicon.svg"
13 inkscape:version="1.0.2 (e86c870879, 2021-01-15, custom)">
14 <metadata
15 id="metadata68">
16 <rdf:RDF>
17 <cc:Work
18 rdf:about="">
19 <dc:format>image/svg+xml</dc:format>
20 <dc:type
21 rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
22 </cc:Work>
23 </rdf:RDF>
24 </metadata>
25 <defs
26 id="defs66" />
27 <sodipodi:namedview
28 pagecolor="#ffffff"
29 bordercolor="#666666"
30 borderopacity="1"
31 objecttolerance="10"
32 gridtolerance="10"
33 guidetolerance="10"
34 inkscape:pageopacity="0"
35 inkscape:pageshadow="2"
36 inkscape:window-width="1920"
37 inkscape:window-height="1026"
38 id="namedview64"
39 showgrid="false"
40 inkscape:zoom="3.2245912"
41 inkscape:cx="84.790185"
42 inkscape:cy="117.96478"
43 inkscape:window-x="0"
44 inkscape:window-y="0"
45 inkscape:window-maximized="1"
46 inkscape:current-layer="svg62" />
47 <style
48 id="style58">
49 @media (prefers-color-scheme: dark) {
50 svg { fill: white; }
51 }
52 </style>
53 <path
54 d="m 189.5,36.8 c 0.2,2.8 0,5.1 -0.6,6.8 L 153,162 c -0.6,2.1 -2,3.7 -4.2,5 -2.2,1.2 -4.4,1.9 -6.7,1.9 H 31.4 c -9.6,0 -15.3,-2.8 -17.3,-8.4 -0.8,-2.2 -0.8,-3.9 0.1,-5.2 0.9,-1.2 2.4,-1.8 4.6,-1.8 H 123 c 7.4,0 12.6,-1.4 15.4,-4.1 2.8,-2.7 5.7,-8.9 8.6,-18.4 L 179.9,22.4 c 1.8,-5.9 1,-11.1 -2.2,-15.6 C 174.5,2.3 169.9,0 164,0 H 72.7 c -1,0 -3.1,0.4 -6.1,1.1 L 66.7,0.7 C 64.5,0.2 62.6,0 61,0.1 c -1.6,0.1 -3,0.5 -4.3,1.4 -1.3,0.9 -2.4,1.8 -3.2,2.8 -0.8,1 -1.5,2.2 -2.3,3.8 -0.8,1.6 -1.4,3 -1.9,4.3 -0.5,1.3 -1.1,2.7 -1.8,4.2 -0.7,1.5 -1.3,2.7 -2,3.7 -0.5,0.6 -1.2,1.5 -2,2.5 -0.8,1 -1.6,2 -2.2,2.8 -0.6,0.8 -0.9,1.5 -1.1,2.2 -0.2,0.7 -0.1,1.8 0.2,3.2 0.3,1.4 0.4,2.4 0.4,3.1 -0.3,3 -1.4,6.9 -3.3,11.6 -1.9,4.7 -3.6,8.1 -5.1,10.1 -0.3,0.4 -1.2,1.3 -2.6,2.7 -1.4,1.4 -2.3,2.6 -2.6,3.7 -0.3,0.4 -0.3,1.5 -0.1,3.4 0.3,1.8 0.4,3.1 0.3,3.8 -0.3,2.7 -1.3,6.3 -3,10.8 -2.406801,6.370944 -3.4,8.2 -5,11 -0.2,0.5 -0.9,1.4 -2,2.8 -1.1,1.4 -1.8,2.5 -2,3.4 -0.2,0.6 -0.1,1.8 0.1,3.4 0.2,1.6 0.2,2.8 -0.1,3.6 -0.6,3 -1.8,6.7 -3.6,11 -1.8,4.3 -3.6,7.9 -5.4,11 -0.5,0.8 -1.1,1.7 -2,2.8 -0.8,1.1 -1.5,2 -2,2.8 -0.5,0.8 -0.8,1.6 -1,2.5 -0.1,0.5 0,1.3 0.4,2.3 0.3,1.1 0.4,1.9 0.4,2.6 -0.1,1.1 -0.2,2.6 -0.5,4.4 -0.2,1.8 -0.4,2.9 -0.4,3.2 -1.8,4.8 -1.7,9.9 0.2,15.2 2.2,6.2 6.2,11.5 11.9,15.8 5.7,4.3 11.7,6.4 17.8,6.4 h 110.7 c 5.2,0 10.1,-1.7 14.7,-5.2 4.6,-3.5 7.7,-7.8 9.2,-12.9 l 33,-108.6 c 1.8,-5.8 1,-10.9 -2.2,-15.5 -1.7,-2.5 -4,-4.2 -7.1,-5.4 z M 38.14858,105.59813 60.882735,41.992545 h 10.8 c 6.340631,0 33.351895,0.778957 70.804135,0.970479 -18.18245,63.254766 0,0 -18.18245,63.254766 -23.00947,-0.10382 -63.362955,-0.6218 -72.55584,-0.51966 -18,0.2 -13.6,-0.1 -13.6,-0.1 z m 80.621,-5.891206 c 15.19043,-50.034423 0,1e-5 15.19043,-50.034423 l -11.90624,-0.13228 2.73304,-9.302941 -44.32863,0.07339 -2.532953,8.036036 -11.321128,-0.18864 -17.955519,51.440073 c 0.02698,0.027 4.954586,0.0514 12.187488,0.0717 l -2.997994,9.804886 c 11.36463,0.0271 1.219679,-0.0736 46.117666,-0.31499 l 2.65246,-9.571696 c 7.08021,0.14819 11.59705,0.13117 12.16138,0.1189 z m -56.149615,-3.855606 13.7,-42.5 h 9.8 l 1.194896,32.99936 23.205109,-32.99936 h 9.9 l -13.6,42.5 h -7.099996 l 12.499996,-35.4 -24.50001,35.4 h -6.799995 l -0.8,-35 -10.8,35 z"
55 id="path60"
56 sodipodi:nodetypes="ccccssccsssccsssccsssssscsssscssscccscscscsccsccccccssssccccccsccsccccccccccccccccccccccccccccc" />
57 </svg>
29152915 # Optional password if configured on the Redis instance
29162916 #
29172917 #password: <secret_password>
2918
2919
2920 # Enable experimental features in Synapse.
2921 #
2922 # Experimental features might break or be removed without a deprecation
2923 # period.
2924 #
2925 experimental_features:
2926 # Support for Spaces (MSC1772), it enables the following:
2927 #
2928 # * The Spaces Summary API (MSC2946).
2929 # * Restricting room membership based on space membership (MSC3083).
2930 #
2931 # Uncomment to disable support for Spaces.
2932 #spaces_enabled: false
0 <!--
1 Include the contents of INSTALL.md from the project root without moving it, which may
2 break links around the internet. Additionally, note that SUMMARY.md is unable to
3 directly link to content outside of the docs/ directory. So we use this file as a
4 redirection.
5 -->
6 {{#include ../../INSTALL.md}}
33 TURN.
44
55 The synapse Matrix Home Server supports integration with TURN server via the
6 [TURN server REST API](<http://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This
6 [TURN server REST API](<https://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This
77 allows the Home Server to generate credentials that are valid for use on the
88 TURN server through the use of a secret shared between the Home Server and the
99 TURN server.
0 <!--
1 Include the contents of UPGRADE.rst from the project root without moving it, since moving
2 it may break links around the internet. Additionally, note that SUMMARY.md is unable to
3 directly link to content outside of the docs/ directory, so we use this file as a
4 redirection.
5 -->
6 {{#include ../../UPGRADE.rst}}
0 # Administration
1
2 This section contains information on managing your Synapse homeserver. This includes:
3
4 * Managing users, rooms and media via the Admin API.
5 * Setting up metrics and monitoring to give you insight into your homeserver's health.
6 * Configuring structured logging.
0 # The Admin API
1
2 ## Authenticate as a server admin
3
4 Many of the calls in the Admin API require an `access_token` for a
5 server admin. (Note that a server admin is distinct from a room admin.)
6
7 A user can be marked as a server admin by updating the database directly, e.g.:
8
9 ```sql
10 UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
11 ```
12
13 A new server admin user can also be created using the `register_new_matrix_user`
14 command. This is a script that is located in the `scripts/` directory, or possibly
15 already on your `$PATH` depending on how Synapse was installed.
16
17 Finding your user's `access_token` is client-dependent; it is usually shown in the client's settings.
18
19 ## Making an Admin API request
20 Once you have your `access_token`, you will need to authenticate each request to an Admin API endpoint by
21 providing the token as either a query parameter or a request header. To add it as a request header in cURL:
22
23 ```sh
24 curl --header "Authorization: Bearer <access_token>" <the_rest_of_your_API_request>
25 ```
26
27 For more details on access tokens in Matrix, please refer to the complete
28 [matrix spec documentation](https://matrix.org/docs/spec/client_server/r0.6.1#using-access-tokens).
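The same request can also be made from Python. Below is a minimal sketch using the third-party `requests` library; the homeserver URL and token are placeholders, and the `/_synapse/admin/v2/users` user-list endpoint is used purely as an example:

```python
import requests

homeserver = "https://matrix.example.com"  # placeholder URL for your homeserver
access_token = "<access_token>"            # a server admin's access token

# Pass the token as a Bearer token in the Authorization header, as above.
response = requests.get(
    f"{homeserver}/_synapse/admin/v2/users",
    headers={"Authorization": f"Bearer {access_token}"},
    params={"from": 0, "limit": 10},
)
response.raise_for_status()
print(response.json())
```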
0 # Configuration
1
2 This section contains information on tweaking Synapse via the various options in the configuration file. A configuration
3 file should have been generated when you [installed Synapse](../../setup/installation.html).
0 # Homeserver Sample Configuration File
1
2 Below is a sample homeserver configuration file. The homeserver configuration file
3 can be tweaked to change the behaviour of your homeserver. A restart of the server is
4 generally required to apply any changes made to this file.
5
6 Note that the contents below are *not* intended to be copied and used as the basis for
7 a real homeserver.yaml. Instead, if you are starting from scratch, please generate
8 a fresh config using Synapse by following the instructions in
9 [Installation](../../setup/installation.md).
10
11 ```yaml
12 {{#include ../../sample_config.yaml}}
13 ```
0 # Logging Sample Configuration File
1
2 Below is a sample logging configuration file. This file can be tweaked to control how your
3 homeserver will output logs. A restart of the server is generally required to apply any
4 changes made to this file.
5
6 Note that the contents below are *not* intended to be copied and used as the basis for
7 a real logging configuration. Instead, if you are starting from scratch, please generate
8 a fresh config using Synapse by following the instructions in
9 [Installation](../../setup/installation.md).
10
11 ```yaml
12 {{#include ../../sample_log_config.yaml}}
13 ```
0 # User Authentication
1
2 Synapse supports multiple methods of authenticating users, either out-of-the-box or through custom pluggable
3 authentication modules.
4
5 Included in Synapse is support for authenticating users via:
6
7 * A username and password.
8 * An email address and password.
9 * Single Sign-On through the SAML, OpenID Connect or CAS protocols.
10 * JSON Web Tokens.
11 * An administrator's shared secret.
12
13 Synapse can additionally be extended to support custom authentication schemes through optional "password auth provider"
14 modules.
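As a rough illustration of what such a module looks like, here is a minimal sketch assuming the legacy password auth provider interface (see `docs/password_auth_providers.md` for the authoritative contract); the class name and `shared_password` config key are invented for the example:

```python
class ExamplePasswordProvider:
    """A minimal sketch of a password auth provider module."""

    def __init__(self, config, account_handler):
        # account_handler is the object Synapse passes in for interacting
        # with user accounts (e.g. checking whether a user exists).
        self.account_handler = account_handler
        self.shared_password = config.get("shared_password")

    @staticmethod
    def parse_config(config):
        # Validate and normalise this module's section of homeserver.yaml.
        return config

    async def check_password(self, user_id: str, password: str) -> bool:
        # Illustrative only: accept one shared password for every user. A
        # real provider would consult an external service or database.
        return bool(self.shared_password) and password == self.shared_password
```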
0 # Documentation Website Files and Assets
1
2 This directory contains extra files for modifying the look and functionality of
3 [mdbook](https://github.com/rust-lang/mdBook), the documentation software that's
4 used to generate Synapse's documentation website.
5
6 The configuration options in the `output.html` section of [book.toml](../../book.toml)
7 point to additional JS/CSS in this directory that are added on each page load. In
8 addition, the `theme` directory contains files that overwrite their counterparts in
9 each of the default themes included with mdbook.
10
11 Currently we use these files to generate a floating Table of Contents panel, which is
12 handled by the `table-of-contents.js/css` files. The code was partially taken from
13 [JorelAli/mdBook-pagetoc](https://github.com/JorelAli/mdBook-pagetoc/)
14 before being modified so that it scrolls with the content of the page.
15 The table of contents panel only appears on pages
16 that have more than one header, and only on desktop-sized monitors.
17
18 We remove the navigation arrows which typically appear on the left and right side of the
19 screen on desktop as they interfere with the table of contents. This is handled by
20 the `remove-nav-buttons.css` file.
21
22 Finally, we also stylise the chapter titles in the left sidebar by indenting them
23 slightly so that they are more visually distinguishable from the section headers
24 (the bold titles). This is done through the `indent-section-headers.css` file.
25
26 More information can be found in mdbook's official documentation for
27 [injecting page JS/CSS](https://rust-lang.github.io/mdBook/format/config.html)
28 and
29 [customising the default themes](https://rust-lang.github.io/mdBook/format/theme/index.html).
0 /*
1 * Indents each chapter title in the left sidebar so that they aren't
2 * at the same level as the section headers.
3 */
4 .chapter-item {
5 margin-left: 1em;
6 }
0 /* Remove the prev, next chapter buttons as they interfere with the
1 * table of contents.
2 * Note that the table of contents only appears on desktop, thus we
3 * only remove the desktop (wide) chapter buttons.
4 */
5 .nav-wide-wrapper {
6 display: none;
7 }
0 @media only screen and (max-width:1439px) {
1 .sidetoc {
2 display: none;
3 }
4 }
5
6 @media only screen and (min-width:1440px) {
7 main {
8 position: relative;
9 margin-left: 100px !important;
10 }
11 .sidetoc {
12 margin-left: auto;
13 margin-right: auto;
14 left: calc(100% + (var(--content-max-width))/4 - 140px);
15 position: absolute;
16 text-align: right;
17 }
18 .pagetoc {
19 position: fixed;
20 width: 250px;
21 overflow: auto;
22 right: 20px;
23 height: calc(100% - var(--menu-bar-height));
24 }
25 .pagetoc a {
26 color: var(--fg) !important;
27 display: block;
28 padding: 5px 15px 5px 10px;
29 text-align: left;
30 text-decoration: none;
31 }
32 .pagetoc a:hover,
33 .pagetoc a.active {
34 background: var(--sidebar-bg) !important;
35 color: var(--sidebar-fg) !important;
36 }
37 .pagetoc .active {
38 background: var(--sidebar-bg);
39 color: var(--sidebar-fg);
40 }
41 }
0 const getPageToc = () => document.getElementsByClassName('pagetoc')[0];
1
2 const pageToc = getPageToc();
3 const pageTocChildren = [...pageToc.children];
4 const headers = [...document.getElementsByClassName('header')];
5
6
7 // Select highlighted item in ToC when clicking an item
8 pageTocChildren.forEach(child => {
9 child.addEventListener('click', () => {
10 pageTocChildren.forEach(child => {
11 child.classList.remove('active');
12 });
13 child.classList.add('active');
14 });
15 });
16
17
18 /**
19 * Test whether a node is in the viewport
20 */
21 function isInViewport(node) {
22 const rect = node.getBoundingClientRect();
23 return rect.top >= 0 && rect.left >= 0 && rect.bottom <= (window.innerHeight || document.documentElement.clientHeight) && rect.right <= (window.innerWidth || document.documentElement.clientWidth);
24 }
25
26
27 /**
28 * Set a new ToC entry.
29 * Clear any previously highlighted ToC items, set the new one,
30 * and adjust the ToC scroll position.
31 */
32 function setTocEntry() {
33 let activeEntry;
34 const pageTocChildren = [...getPageToc().children];
35
36 // Calculate which header is the current one at the top of screen
37 headers.forEach(header => {
38 if (window.pageYOffset >= header.offsetTop) {
39 activeEntry = header;
40 }
41 });
42
43 // Update selected item in ToC when scrolling
44 pageTocChildren.forEach(child => {
45 if (activeEntry.href.localeCompare(child.href) === 0) {
46 child.classList.add('active');
47 } else {
48 child.classList.remove('active');
49 }
50 });
51
52 let tocEntryForLocation = document.querySelector(`nav a[href="${activeEntry.href}"]`);
53 if (tocEntryForLocation) {
54 const headingForLocation = document.querySelector(activeEntry.hash);
55 if (headingForLocation && isInViewport(headingForLocation)) {
56 // Update ToC scroll
57 const nav = getPageToc();
58 const content = document.querySelector('html');
59 if (content.scrollTop !== 0) {
60 nav.scrollTo({
61 top: tocEntryForLocation.offsetTop - 100,
62 left: 0,
63 behavior: 'smooth',
64 });
65 } else {
66 nav.scrollTop = 0;
67 }
68 }
69 }
70 }
71
72
73 /**
74 * Populate sidebar on load
75 */
76 window.addEventListener('load', () => {
77 // Only create table of contents if there is more than one header on the page
78 if (headers.length <= 1) {
79 return;
80 }
81
82 // Create an entry in the page table of contents for each header in the document
83 headers.forEach((header, index) => {
84 const link = document.createElement('a');
85
86 // Indent shows hierarchy
87 let indent = '0px';
88 switch (header.parentElement.tagName) {
89 case 'H1':
90 indent = '5px';
91 break;
92 case 'H2':
93 indent = '20px';
94 break;
95 case 'H3':
96 indent = '30px';
97 break;
98 case 'H4':
99 indent = '40px';
100 break;
101 case 'H5':
102 indent = '50px';
103 break;
104 case 'H6':
105 indent = '60px';
106 break;
107 default:
108 break;
109 }
110
111 let tocEntry;
112 if (index == 0) {
113 // Create a bolded title for the first element
114 tocEntry = document.createElement("strong");
115 tocEntry.innerHTML = header.text;
116 } else {
117 // All other elements are non-bold
118 tocEntry = document.createTextNode(header.text);
119 }
120 link.appendChild(tocEntry);
121
122 link.style.paddingLeft = indent;
123 link.href = header.href;
124 pageToc.appendChild(link);
125 });
126 setTocEntry();
127 });
128
129
130 // Handle active headers on scroll, if there is more than one header on the page
131 if (headers.length > 1) {
132 window.addEventListener('scroll', setTocEntry);
133 }
0 <!DOCTYPE HTML>
1 <html lang="{{ language }}" class="sidebar-visible no-js {{ default_theme }}">
2 <head>
3 <!-- Book generated using mdBook -->
4 <meta charset="UTF-8">
5 <title>{{ title }}</title>
6 {{#if is_print }}
7 <meta name="robots" content="noindex" />
8 {{/if}}
9 {{#if base_url}}
10 <base href="{{ base_url }}">
11 {{/if}}
12
13
14 <!-- Custom HTML head -->
15 {{> head}}
16
17 <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
18 <meta name="description" content="{{ description }}">
19 <meta name="viewport" content="width=device-width, initial-scale=1">
20 <meta name="theme-color" content="#ffffff" />
21
22 {{#if favicon_svg}}
23 <link rel="icon" href="{{ path_to_root }}favicon.svg">
24 {{/if}}
25 {{#if favicon_png}}
26 <link rel="shortcut icon" href="{{ path_to_root }}favicon.png">
27 {{/if}}
28 <link rel="stylesheet" href="{{ path_to_root }}css/variables.css">
29 <link rel="stylesheet" href="{{ path_to_root }}css/general.css">
30 <link rel="stylesheet" href="{{ path_to_root }}css/chrome.css">
31 {{#if print_enable}}
32 <link rel="stylesheet" href="{{ path_to_root }}css/print.css" media="print">
33 {{/if}}
34
35 <!-- Fonts -->
36 <link rel="stylesheet" href="{{ path_to_root }}FontAwesome/css/font-awesome.css">
37 {{#if copy_fonts}}
38 <link rel="stylesheet" href="{{ path_to_root }}fonts/fonts.css">
39 {{/if}}
40
41 <!-- Highlight.js Stylesheets -->
42 <link rel="stylesheet" href="{{ path_to_root }}highlight.css">
43 <link rel="stylesheet" href="{{ path_to_root }}tomorrow-night.css">
44 <link rel="stylesheet" href="{{ path_to_root }}ayu-highlight.css">
45
46 <!-- Custom theme stylesheets -->
47 {{#each additional_css}}
48 <link rel="stylesheet" href="{{ ../path_to_root }}{{ this }}">
49 {{/each}}
50
51 {{#if mathjax_support}}
52 <!-- MathJax -->
53 <script async type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
54 {{/if}}
55 </head>
56 <body>
57 <!-- Provide site root to javascript -->
58 <script type="text/javascript">
59 var path_to_root = "{{ path_to_root }}";
60 var default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "{{ preferred_dark_theme }}" : "{{ default_theme }}";
61 </script>
62
63 <!-- Work around some values being stored in localStorage wrapped in quotes -->
64 <script type="text/javascript">
65 try {
66 var theme = localStorage.getItem('mdbook-theme');
67 var sidebar = localStorage.getItem('mdbook-sidebar');
68 if (theme.startsWith('"') && theme.endsWith('"')) {
69 localStorage.setItem('mdbook-theme', theme.slice(1, theme.length - 1));
70 }
71 if (sidebar.startsWith('"') && sidebar.endsWith('"')) {
72 localStorage.setItem('mdbook-sidebar', sidebar.slice(1, sidebar.length - 1));
73 }
74 } catch (e) { }
75 </script>
76
77 <!-- Set the theme before any content is loaded, prevents flash -->
78 <script type="text/javascript">
79 var theme;
80 try { theme = localStorage.getItem('mdbook-theme'); } catch(e) { }
81 if (theme === null || theme === undefined) { theme = default_theme; }
82 var html = document.querySelector('html');
83 html.classList.remove('no-js')
84 html.classList.remove('{{ default_theme }}')
85 html.classList.add(theme);
86 html.classList.add('js');
87 </script>
88
89 <!-- Hide / unhide sidebar before it is displayed -->
90 <script type="text/javascript">
91 var html = document.querySelector('html');
92 var sidebar = 'hidden';
93 if (document.body.clientWidth >= 1080) {
94 try { sidebar = localStorage.getItem('mdbook-sidebar'); } catch(e) { }
95 sidebar = sidebar || 'visible';
96 }
97 html.classList.remove('sidebar-visible');
98 html.classList.add("sidebar-" + sidebar);
99 </script>
100
101 <nav id="sidebar" class="sidebar" aria-label="Table of contents">
102 <div class="sidebar-scrollbox">
103 {{#toc}}{{/toc}}
104 </div>
105 <div id="sidebar-resize-handle" class="sidebar-resize-handle"></div>
106 </nav>
107
108 <div id="page-wrapper" class="page-wrapper">
109
110 <div class="page">
111 {{> header}}
112 <div id="menu-bar-hover-placeholder"></div>
113 <div id="menu-bar" class="menu-bar sticky bordered">
114 <div class="left-buttons">
115 <button id="sidebar-toggle" class="icon-button" type="button" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="sidebar">
116 <i class="fa fa-bars"></i>
117 </button>
118 <button id="theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="theme-list">
119 <i class="fa fa-paint-brush"></i>
120 </button>
121 <ul id="theme-list" class="theme-popup" aria-label="Themes" role="menu">
122 <li role="none"><button role="menuitem" class="theme" id="light">{{ theme_option "Light" }}</button></li>
123 <li role="none"><button role="menuitem" class="theme" id="rust">{{ theme_option "Rust" }}</button></li>
124 <li role="none"><button role="menuitem" class="theme" id="coal">{{ theme_option "Coal" }}</button></li>
125 <li role="none"><button role="menuitem" class="theme" id="navy">{{ theme_option "Navy" }}</button></li>
126 <li role="none"><button role="menuitem" class="theme" id="ayu">{{ theme_option "Ayu" }}</button></li>
127 </ul>
128 {{#if search_enabled}}
129 <button id="search-toggle" class="icon-button" type="button" title="Search. (Shortkey: s)" aria-label="Toggle Searchbar" aria-expanded="false" aria-keyshortcuts="S" aria-controls="searchbar">
130 <i class="fa fa-search"></i>
131 </button>
132 {{/if}}
133 </div>
134
135 <h1 class="menu-title">{{ book_title }}</h1>
136
137 <div class="right-buttons">
138 {{#if print_enable}}
139 <a href="{{ path_to_root }}print.html" title="Print this book" aria-label="Print this book">
140 <i id="print-button" class="fa fa-print"></i>
141 </a>
142 {{/if}}
143 {{#if git_repository_url}}
144 <a href="{{git_repository_url}}" title="Git repository" aria-label="Git repository">
145 <i id="git-repository-button" class="fa {{git_repository_icon}}"></i>
146 </a>
147 {{/if}}
148 {{#if git_repository_edit_url}}
149 <a href="{{git_repository_edit_url}}" title="Suggest an edit" aria-label="Suggest an edit">
150 <i id="git-edit-button" class="fa fa-edit"></i>
151 </a>
152 {{/if}}
153
154 </div>
155 </div>
156
157 {{#if search_enabled}}
158 <div id="search-wrapper" class="hidden">
159 <form id="searchbar-outer" class="searchbar-outer">
160 <input type="search" id="searchbar" name="searchbar" placeholder="Search this book ..." aria-controls="searchresults-outer" aria-describedby="searchresults-header">
161 </form>
162 <div id="searchresults-outer" class="searchresults-outer hidden">
163 <div id="searchresults-header" class="searchresults-header"></div>
164 <ul id="searchresults">
165 </ul>
166 </div>
167 </div>
168 {{/if}}
169
170 <!-- Apply ARIA attributes after the sidebar and the sidebar toggle button are added to the DOM -->
171 <script type="text/javascript">
172 document.getElementById('sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible');
173 document.getElementById('sidebar').setAttribute('aria-hidden', sidebar !== 'visible');
174 Array.from(document.querySelectorAll('#sidebar a')).forEach(function(link) {
175 link.setAttribute('tabIndex', sidebar === 'visible' ? 0 : -1);
176 });
177 </script>
178
179 <div id="content" class="content">
180 <main>
181 <!-- Page table of contents -->
182 <div class="sidetoc">
183 <nav class="pagetoc"></nav>
184 </div>
185
186 {{{ content }}}
187 </main>
188
189 <nav class="nav-wrapper" aria-label="Page navigation">
190 <!-- Mobile navigation buttons -->
191 {{#previous}}
192 <a rel="prev" href="{{ path_to_root }}{{link}}" class="mobile-nav-chapters previous" title="Previous chapter" aria-label="Previous chapter" aria-keyshortcuts="Left">
193 <i class="fa fa-angle-left"></i>
194 </a>
195 {{/previous}}
196
197 {{#next}}
198 <a rel="next" href="{{ path_to_root }}{{link}}" class="mobile-nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
199 <i class="fa fa-angle-right"></i>
200 </a>
201 {{/next}}
202
203 <div style="clear: both"></div>
204 </nav>
205 </div>
206 </div>
207
208 <nav class="nav-wide-wrapper" aria-label="Page navigation">
209 {{#previous}}
210 <a rel="prev" href="{{ path_to_root }}{{link}}" class="nav-chapters previous" title="Previous chapter" aria-label="Previous chapter" aria-keyshortcuts="Left">
211 <i class="fa fa-angle-left"></i>
212 </a>
213 {{/previous}}
214
215 {{#next}}
216 <a rel="next" href="{{ path_to_root }}{{link}}" class="nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
217 <i class="fa fa-angle-right"></i>
218 </a>
219 {{/next}}
220 </nav>
221
222 </div>
223
224 {{#if livereload}}
225 <!-- Livereload script (if served using the cli tool) -->
226 <script type="text/javascript">
227 var socket = new WebSocket("{{{livereload}}}");
228 socket.onmessage = function (event) {
229 if (event.data === "reload") {
230 socket.close();
231 location.reload();
232 }
233 };
234 window.onbeforeunload = function() {
235 socket.close();
236 }
237 </script>
238 {{/if}}
239
240 {{#if google_analytics}}
241 <!-- Google Analytics Tag -->
242 <script type="text/javascript">
243 var localAddrs = ["localhost", "127.0.0.1", ""];
244 // make sure we don't activate google analytics if the developer is
245 // inspecting the book locally...
246 if (localAddrs.indexOf(document.location.hostname) === -1) {
247 (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
248 (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
249 m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
250 })(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
251 ga('create', '{{google_analytics}}', 'auto');
252 ga('send', 'pageview');
253 }
254 </script>
255 {{/if}}
256
257 {{#if playground_line_numbers}}
258 <script type="text/javascript">
259 window.playground_line_numbers = true;
260 </script>
261 {{/if}}
262
263 {{#if playground_copyable}}
264 <script type="text/javascript">
265 window.playground_copyable = true;
266 </script>
267 {{/if}}
268
269 {{#if playground_js}}
270 <script src="{{ path_to_root }}ace.js" type="text/javascript" charset="utf-8"></script>
271 <script src="{{ path_to_root }}editor.js" type="text/javascript" charset="utf-8"></script>
272 <script src="{{ path_to_root }}mode-rust.js" type="text/javascript" charset="utf-8"></script>
273 <script src="{{ path_to_root }}theme-dawn.js" type="text/javascript" charset="utf-8"></script>
274 <script src="{{ path_to_root }}theme-tomorrow_night.js" type="text/javascript" charset="utf-8"></script>
275 {{/if}}
276
277 {{#if search_js}}
278 <script src="{{ path_to_root }}elasticlunr.min.js" type="text/javascript" charset="utf-8"></script>
279 <script src="{{ path_to_root }}mark.min.js" type="text/javascript" charset="utf-8"></script>
280 <script src="{{ path_to_root }}searcher.js" type="text/javascript" charset="utf-8"></script>
281 {{/if}}
282
283 <script src="{{ path_to_root }}clipboard.min.js" type="text/javascript" charset="utf-8"></script>
284 <script src="{{ path_to_root }}highlight.js" type="text/javascript" charset="utf-8"></script>
285 <script src="{{ path_to_root }}book.js" type="text/javascript" charset="utf-8"></script>
286
287 <!-- Custom JS scripts -->
288 {{#each additional_js}}
289 <script type="text/javascript" src="{{ ../path_to_root }}{{this}}"></script>
290 {{/each}}
291
292 {{#if is_print}}
293 {{#if mathjax_support}}
294 <script type="text/javascript">
295 window.addEventListener('load', function() {
296 MathJax.Hub.Register.StartupHook('End', function() {
297 window.setTimeout(window.print, 100);
298 });
299 });
300 </script>
301 {{else}}
302 <script type="text/javascript">
303 window.addEventListener('load', function() {
304 window.setTimeout(window.print, 100);
305 });
306 </script>
307 {{/if}}
308 {{/if}}
309
310 </body>
311 </html>
0 # Introduction
1
2 Welcome to the documentation repository for Synapse, the reference
3 [Matrix](https://matrix.org) homeserver implementation.
227227 ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
228228 ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
229229 ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
230 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/
231 ^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$
232 ^/_matrix/client/(api/v1|r0|unstable)/search$
230233
231234 # Registration/login requests
232235 ^/_matrix/client/(api/v1|r0|unstable)/login$
3131 synapse/http/federation/matrix_federation_agent.py,
3232 synapse/http/federation/well_known_resolver.py,
3333 synapse/http/matrixfederationclient.py,
34 synapse/http/servlet.py,
3435 synapse/http/server.py,
3536 synapse/http/site.py,
3637 synapse/logging,
129130 [mypy-canonicaljson]
130131 ignore_missing_imports = True
131132
132 [mypy-jaeger_client]
133 [mypy-jaeger_client.*]
133134 ignore_missing_imports = True
134135
135136 [mypy-jsonschema]
+0
-108
scripts-dev/convert_server_keys.py
0 import json
1 import sys
2 import time
3
4 import psycopg2
5 import yaml
6 from canonicaljson import encode_canonical_json
7 from signedjson.key import read_signing_keys
8 from signedjson.sign import sign_json
9 from unpaddedbase64 import encode_base64
10
11 db_binary_type = memoryview
12
13
14 def select_v1_keys(connection):
15 cursor = connection.cursor()
16 cursor.execute("SELECT server_name, key_id, verify_key FROM server_signature_keys")
17 rows = cursor.fetchall()
18 cursor.close()
19 results = {}
20 for server_name, key_id, verify_key in rows:
21 results.setdefault(server_name, {})[key_id] = encode_base64(verify_key)
22 return results
23
24
25 def select_v1_certs(connection):
26 cursor = connection.cursor()
27 cursor.execute("SELECT server_name, tls_certificate FROM server_tls_certificates")
28 rows = cursor.fetchall()
29 cursor.close()
30 results = {}
31 for server_name, tls_certificate in rows:
32 results[server_name] = tls_certificate
33 return results
34
35
36 def select_v2_json(connection):
37 cursor = connection.cursor()
38 cursor.execute("SELECT server_name, key_id, key_json FROM server_keys_json")
39 rows = cursor.fetchall()
40 cursor.close()
41 results = {}
42 for server_name, key_id, key_json in rows:
43 results.setdefault(server_name, {})[key_id] = json.loads(
44 str(key_json).decode("utf-8")
45 )
46 return results
47
48
49 def convert_v1_to_v2(server_name, valid_until, keys, certificate):
50 return {
51 "old_verify_keys": {},
52 "server_name": server_name,
53 "verify_keys": {key_id: {"key": key} for key_id, key in keys.items()},
54 "valid_until_ts": valid_until,
55 }
56
57
58 def rows_v2(server, json):
59 valid_until = json["valid_until_ts"]
60 key_json = encode_canonical_json(json)
61 for key_id in json["verify_keys"]:
62 yield (server, key_id, "-", valid_until, valid_until, db_binary_type(key_json))
63
64
65 def main():
66 config = yaml.safe_load(open(sys.argv[1]))
67 valid_until = int(time.time() / (3600 * 24)) * 1000 * 3600 * 24
68
69 server_name = config["server_name"]
70 signing_key = read_signing_keys(open(config["signing_key_path"]))[0]
71
72 database = config["database"]
73 assert database["name"] == "psycopg2", "Can only convert for postgresql"
74 args = database["args"]
75 args.pop("cp_max")
76 args.pop("cp_min")
77 connection = psycopg2.connect(**args)
78 keys = select_v1_keys(connection)
79 certificates = select_v1_certs(connection)
80 json = select_v2_json(connection)
81
82 result = {}
83 for server in keys:
84 if server not in json:
85 v2_json = convert_v1_to_v2(
86 server, valid_until, keys[server], certificates[server]
87 )
88 v2_json = sign_json(v2_json, server_name, signing_key)
89 result[server] = v2_json
90
91 yaml.safe_dump(result, sys.stdout, default_flow_style=False)
92
93 rows = [row for server, json in result.items() for row in rows_v2(server, json)]
94
95 cursor = connection.cursor()
96 cursor.executemany(
97 "INSERT INTO server_keys_json ("
98 " server_name, key_id, from_server,"
99 " ts_added_ms, ts_valid_until_ms, key_json"
100 ") VALUES (%s, %s, %s, %s, %s, %s)",
101 rows,
102 )
103 connection.commit()
104
105
106 if __name__ == "__main__":
107 main()
138138 click.get_current_context().abort()
139139
140140 # Switch to the release branch.
141 release_branch_name = f"release-v{base_version}"
141 release_branch_name = f"release-v{current_version.major}.{current_version.minor}"
142142 release_branch = find_ref(repo, release_branch_name)
143143 if release_branch:
144144 if release_branch.is_remote():
4646 except ImportError:
4747 pass
4848
49 __version__ = "1.35.1"
49 __version__ = "1.36.0"
5050
5151 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
5252 # We import here so that we don't have to install a bunch of deps when
205205 requester = create_requester(user_id, app_service=app_service)
206206
207207 request.requester = user_id
208 if user_id in self._force_tracing_for_users:
209 opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1)
208210 opentracing.set_tag("authenticated_entity", user_id)
209211 opentracing.set_tag("user_id", user_id)
210212 opentracing.set_tag("appservice_id", app_service.id)
211 if user_id in self._force_tracing_for_users:
212 opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1)
213213
214214 return requester
215215
258258 )
259259
260260 request.requester = requester
261 if user_info.token_owner in self._force_tracing_for_users:
262 opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1)
261263 opentracing.set_tag("authenticated_entity", user_info.token_owner)
262264 opentracing.set_tag("user_id", user_info.user_id)
263265 if device_id:
264266 opentracing.set_tag("device_id", device_id)
265 if user_info.token_owner in self._force_tracing_for_users:
266 opentracing.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1)
267267
268268 return requester
269269 except KeyError:
180180 RoomVersions.V5,
181181 RoomVersions.V6,
182182 RoomVersions.MSC2176,
183 RoomVersions.MSC3083,
183184 )
184 # Note that we do not include MSC3083 here unless it is enabled in the config.
185185 } # type: Dict[str, RoomVersion]
260260 Refresh the TLS certificates that Synapse is using by re-reading them from
261261 disk and updating the TLS context factories to use them.
262262 """
263
264263 if not hs.config.has_tls_listener():
265 # attempt to reload the certs for the good of the tls_fingerprints
266 hs.config.read_certificate_from_disk(require_cert_and_key=False)
267264 return
268265
269 hs.config.read_certificate_from_disk(require_cert_and_key=True)
266 hs.config.read_certificate_from_disk()
270267 hs.tls_server_context_factory = context_factory.ServerContextFactory(hs.config)
271268
272269 if hs._listening_services:
108108 MonthlyActiveUsersWorkerStore,
109109 )
110110 from synapse.storage.databases.main.presence import PresenceStore
111 from synapse.storage.databases.main.search import SearchWorkerStore
111 from synapse.storage.databases.main.search import SearchStore
112112 from synapse.storage.databases.main.stats import StatsStore
113113 from synapse.storage.databases.main.transactions import TransactionWorkerStore
114114 from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
241241 MonthlyActiveUsersWorkerStore,
242242 MediaRepositoryStore,
243243 ServerMetricsStore,
244 SearchWorkerStore,
244 SearchStore,
245245 TransactionWorkerStore,
246246 BaseSlavedStore,
247247 ):
1111 # See the License for the specific language governing permissions and
1212 # limitations under the License.
1313
14 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
1514 from synapse.config._base import Config
1615 from synapse.types import JsonDict
1716
2726 # MSC2858 (multiple SSO identity providers)
2827 self.msc2858_enabled = experimental.get("msc2858_enabled", False) # type: bool
2928
30 # Spaces (MSC1772, MSC2946, MSC3083, etc)
31 self.spaces_enabled = experimental.get("spaces_enabled", True) # type: bool
32 if self.spaces_enabled:
33 KNOWN_ROOM_VERSIONS[RoomVersions.MSC3083.identifier] = RoomVersions.MSC3083
34
3529 # MSC3026 (busy presence state)
3630 self.msc3026_enabled = experimental.get("msc3026_enabled", False) # type: bool
37
38 def generate_config_section(self, **kwargs):
39 return """\
40 # Enable experimental features in Synapse.
41 #
42 # Experimental features might break or be removed without a deprecation
43 # period.
44 #
45 experimental_features:
46 # Support for Spaces (MSC1772). It enables the following:
47 #
48 # * The Spaces Summary API (MSC2946).
49 # * Restricting room membership based on space membership (MSC3083).
50 #
51 # Uncomment to disable support for Spaces.
52 #spaces_enabled: false
53 """
214214 days_remaining = (expires_on - now).days
215215 return days_remaining
216216
217 def read_certificate_from_disk(self, require_cert_and_key: bool):
217 def read_certificate_from_disk(self):
218218 """
219219 Read the certificates and private key from disk.
220
221 Args:
222 require_cert_and_key: set to True to throw an error if the certificate
223 and key file are not given
224 """
225 if require_cert_and_key:
226 self.tls_private_key = self.read_tls_private_key()
227 self.tls_certificate = self.read_tls_certificate()
228 elif self.tls_certificate_file:
229 # we only need the certificate for the tls_fingerprints. Reload it if we
230 # can, but it's not a fatal error if we can't.
231 try:
232 self.tls_certificate = self.read_tls_certificate()
233 except Exception as e:
234 logger.info(
235 "Unable to read TLS certificate (%s). Ignoring as no "
236 "tls listeners enabled.",
237 e,
238 )
220 """
221 self.tls_private_key = self.read_tls_private_key()
222 self.tls_certificate = self.read_tls_certificate()
239223
240224 def generate_config_section(
241225 self,
1515 import abc
1616 import logging
1717 import urllib
18 from collections import defaultdict
19 from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Set, Tuple
18 from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Tuple
2019
2120 import attr
2221 from signedjson.key import (
4342 from synapse.config.key import TrustedKeyServer
4443 from synapse.events import EventBase
4544 from synapse.events.utils import prune_event_dict
46 from synapse.logging.context import (
47 PreserveLoggingContext,
48 make_deferred_yieldable,
49 preserve_fn,
50 run_in_background,
51 )
45 from synapse.logging.context import make_deferred_yieldable, run_in_background
5246 from synapse.storage.keys import FetchKeyResult
5347 from synapse.types import JsonDict
5448 from synapse.util import unwrapFirstError
5549 from synapse.util.async_helpers import yieldable_gather_results
56 from synapse.util.metrics import Measure
50 from synapse.util.batching_queue import BatchingQueue
5751 from synapse.util.retryutils import NotRetryingDestination
5852
5953 if TYPE_CHECKING:
7973 minimum_valid_until_ts: time at which we require the signing key to
8074 be valid. (0 implies we don't care)
8175
82 request_name: The name of the request.
83
8476 key_ids: The set of key_ids to that could be used to verify the JSON object
85
86 key_ready (Deferred[str, str, nacl.signing.VerifyKey]):
87 A deferred (server_name, key_id, verify_key) tuple that resolves when
88 a verify key has been fetched. The deferreds' callbacks are run with no
89 logcontext.
90
91 If we are unable to find a key which satisfies the request, the deferred
92 errbacks with an M_UNAUTHORIZED SynapseError.
9377 """
9478
9579 server_name = attr.ib(type=str)
9680 get_json_object = attr.ib(type=Callable[[], JsonDict])
9781 minimum_valid_until_ts = attr.ib(type=int)
98 request_name = attr.ib(type=str)
9982 key_ids = attr.ib(type=List[str])
100 key_ready = attr.ib(default=attr.Factory(defer.Deferred), type=defer.Deferred)
10183
10284 @staticmethod
10385 def from_json_object(
10486 server_name: str,
10587 json_object: JsonDict,
10688 minimum_valid_until_ms: int,
107 request_name: str,
10889 ):
10990 """Create a VerifyJsonRequest to verify all signatures on a signed JSON
11091 object for the given server.
11495 server_name,
11596 lambda: json_object,
11697 minimum_valid_until_ms,
117 request_name=request_name,
11898 key_ids=key_ids,
11999 )
120100
134114 # memory than the Event object itself.
135115 lambda: prune_event_dict(event.room_version, event.get_pdu_json()),
136116 minimum_valid_until_ms,
137 request_name=event.event_id,
138117 key_ids=key_ids,
118 )
119
120 def to_fetch_key_request(self) -> "_FetchKeyRequest":
121 """Create a key fetch request for all keys needed to satisfy the
122 verification request.
123 """
124 return _FetchKeyRequest(
125 server_name=self.server_name,
126 minimum_valid_until_ts=self.minimum_valid_until_ts,
127 key_ids=self.key_ids,
139128 )
140129
141130
143132 pass
144133
145134
135 @attr.s(slots=True)
136 class _FetchKeyRequest:
137 """A request for keys for a given server.
138
139 We will continue to try and fetch until we have all the keys listed under
140 `key_ids` (with an appropriate `valid_until_ts` property) or we run out of
141 places to fetch keys from.
142
143 Attributes:
144 server_name: The name of the server that owns the keys.
145 minimum_valid_until_ts: The timestamp which the keys must be valid until.
146 key_ids: The IDs of the keys to attempt to fetch
147 """
148
149 server_name = attr.ib(type=str)
150 minimum_valid_until_ts = attr.ib(type=int)
151 key_ids = attr.ib(type=List[str])
152
153
146154 class Keyring:
155 """Handles verifying signed JSON objects and fetching the keys needed to do
156 so.
157 """
158
147159 def __init__(
148160 self, hs: "HomeServer", key_fetchers: "Optional[Iterable[KeyFetcher]]" = None
149161 ):
157169 )
158170 self._key_fetchers = key_fetchers
159171
160 # map from server name to Deferred. Has an entry for each server with
161 # an ongoing key download; the Deferred completes once the download
162 # completes.
163 #
164 # These are regular, logcontext-agnostic Deferreds.
165 self.key_downloads = {} # type: Dict[str, defer.Deferred]
166
167 def verify_json_for_server(
172 self._server_queue = BatchingQueue(
173 "keyring_server",
174 clock=hs.get_clock(),
175 process_batch_callback=self._inner_fetch_key_requests,
176 ) # type: BatchingQueue[_FetchKeyRequest, Dict[str, Dict[str, FetchKeyResult]]]
177
178 async def verify_json_for_server(
168179 self,
169180 server_name: str,
170181 json_object: JsonDict,
171182 validity_time: int,
172 request_name: str,
173 ) -> defer.Deferred:
183 ) -> None:
174184 """Verify that a JSON object has been signed by a given server
185
186 Completes if the object was correctly signed, otherwise raises.
175187
176188 Args:
177189 server_name: name of the server which must have signed this object
180192
181193 validity_time: timestamp at which we require the signing key to
182194 be valid. (0 implies we don't care)
183
184 request_name: an identifier for this json object (eg, an event id)
185 for logging.
186
187 Returns:
188 Deferred[None]: completes if the the object was correctly signed, otherwise
189 errbacks with an error
190195 """
191196 request = VerifyJsonRequest.from_json_object(
192197 server_name,
193198 json_object,
194199 validity_time,
195 request_name,
196 )
197 requests = (request,)
198 return make_deferred_yieldable(self._verify_objects(requests)[0])
200 )
201 return await self.process_request(request)
199202
200203 def verify_json_objects_for_server(
201 self, server_and_json: Iterable[Tuple[str, dict, int, str]]
204 self, server_and_json: Iterable[Tuple[str, dict, int]]
202205 ) -> List[defer.Deferred]:
203206 """Bulk verifies signatures of json objects, bulk fetching keys as
204207 necessary.
205208
206209 Args:
207210 server_and_json:
208 Iterable of (server_name, json_object, validity_time, request_name)
211 Iterable of (server_name, json_object, validity_time)
209212 tuples.
210213
211214 validity_time is a timestamp at which the signing key must be
212215 valid.
213
214 request_name is an identifier for this json object (eg, an event id)
215 for logging.
216216
217217 Returns:
218218 List<Deferred[None]>: for each input triplet, a deferred indicating success
220220 server_name. The deferreds run their callbacks in the sentinel
221221 logcontext.
222222 """
223 return self._verify_objects(
224 VerifyJsonRequest.from_json_object(
225 server_name, json_object, validity_time, request_name
226 )
227 for server_name, json_object, validity_time, request_name in server_and_json
228 )
229
230 def verify_events_for_server(
231 self, server_and_events: Iterable[Tuple[str, EventBase, int]]
232 ) -> List[defer.Deferred]:
233 """Bulk verification of signatures on events.
234
235 Args:
236 server_and_events:
237 Iterable of `(server_name, event, validity_time)` tuples.
238
239 `server_name` is which server we are verifying the signature for
240 on the event.
241
242 `event` is the event that we'll verify the signatures of for
243 the given `server_name`.
244
245 `validity_time` is a timestamp at which the signing key must be
246 valid.
247
248 Returns:
249 List<Deferred[None]>: for each input triplet, a deferred indicating success
250 or failure to verify each event's signature for the given
251 server_name. The deferreds run their callbacks in the sentinel
252 logcontext.
253 """
254 return self._verify_objects(
255 VerifyJsonRequest.from_event(server_name, event, validity_time)
256 for server_name, event, validity_time in server_and_events
257 )
258
259 def _verify_objects(
260 self, verify_requests: Iterable[VerifyJsonRequest]
261 ) -> List[defer.Deferred]:
262 """Does the work of verify_json_[objects_]for_server
263
264
265 Args:
266 verify_requests: Iterable of verification requests.
267
268 Returns:
269 List<Deferred[None]>: for each input item, a deferred indicating success
270 or failure to verify each json object's signature for the given
271 server_name. The deferreds run their callbacks in the sentinel
272 logcontext.
273 """
274 # a list of VerifyJsonRequests which are awaiting a key lookup
275 key_lookups = []
276 handle = preserve_fn(_handle_key_deferred)
277
278 def process(verify_request: VerifyJsonRequest) -> defer.Deferred:
279 """Process an entry in the request list
280
281 Adds a key request to key_lookups, and returns a deferred which
282 will complete or fail (in the sentinel context) when verification completes.
283 """
284 if not verify_request.key_ids:
285 return defer.fail(
286 SynapseError(
287 400,
288 "Not signed by %s" % (verify_request.server_name,),
289 Codes.UNAUTHORIZED,
290 )
291 )
292
293 logger.debug(
294 "Verifying %s for %s with key_ids %s, min_validity %i",
295 verify_request.request_name,
223 return [
224 run_in_background(
225 self.process_request,
226 VerifyJsonRequest.from_json_object(
227 server_name,
228 json_object,
229 validity_time,
230 ),
231 )
232 for server_name, json_object, validity_time in server_and_json
233 ]
234
235 async def verify_event_for_server(
236 self,
237 server_name: str,
238 event: EventBase,
239 validity_time: int,
240 ) -> None:
241 await self.process_request(
242 VerifyJsonRequest.from_event(
243 server_name,
244 event,
245 validity_time,
246 )
247 )
248
249 async def process_request(self, verify_request: VerifyJsonRequest) -> None:
250 """Processes the `VerifyJsonRequest`. Raises if the object is not signed
251 by the server, the signatures don't match or we failed to fetch the
252 necessary keys.
253 """
254
255 if not verify_request.key_ids:
256 raise SynapseError(
257 400,
258 f"Not signed by {verify_request.server_name}",
259 Codes.UNAUTHORIZED,
260 )
261
262 # Add the keys we need to verify to the queue for retrieval. We queue
263 # up requests for the same server so we don't end up with many in flight
264 # requests for the same keys.
265 key_request = verify_request.to_fetch_key_request()
266 found_keys_by_server = await self._server_queue.add_to_queue(
267 key_request, key=verify_request.server_name
268 )
269
270 # Since we batch up requests, the returned set of keys may contain keys
271 # from other servers, so we pull out only the ones we care about.
272 found_keys = found_keys_by_server.get(verify_request.server_name, {})
273
274 # Verify each signature we got valid keys for, raising if we can't
275 # verify any of them.
276 verified = False
277 for key_id in verify_request.key_ids:
278 key_result = found_keys.get(key_id)
279 if not key_result:
280 continue
281
282 if key_result.valid_until_ts < verify_request.minimum_valid_until_ts:
283 continue
284
285 verify_key = key_result.verify_key
286 json_object = verify_request.get_json_object()
287 try:
288 verify_signed_json(
289 json_object,
290 verify_request.server_name,
291 verify_key,
292 )
293 verified = True
294 except SignatureVerifyException as e:
295 logger.debug(
296 "Error verifying signature for %s:%s:%s with key %s: %s",
297 verify_request.server_name,
298 verify_key.alg,
299 verify_key.version,
300 encode_verify_key_base64(verify_key),
301 str(e),
302 )
303 raise SynapseError(
304 401,
305 "Invalid signature for server %s with key %s:%s: %s"
306 % (
307 verify_request.server_name,
308 verify_key.alg,
309 verify_key.version,
310 str(e),
311 ),
312 Codes.UNAUTHORIZED,
313 )
314
315 if not verified:
316 raise SynapseError(
317 401,
318 f"Failed to find any key to satisfy: {key_request}",
319 Codes.UNAUTHORIZED,
320 )
321
322 async def _inner_fetch_key_requests(
323 self, requests: List[_FetchKeyRequest]
324 ) -> Dict[str, Dict[str, FetchKeyResult]]:
325 """Processing function for the queue of `_FetchKeyRequest`."""
326
327 logger.debug("Starting fetch for %s", requests)
328
329 # First we need to deduplicate requests for the same key. We do this by
330 # taking the *maximum* requested `minimum_valid_until_ts` for each pair
331 # of server name/key ID.
332 server_to_key_to_ts = {} # type: Dict[str, Dict[str, int]]
333 for request in requests:
334 by_server = server_to_key_to_ts.setdefault(request.server_name, {})
335 for key_id in request.key_ids:
336 existing_ts = by_server.get(key_id, 0)
337 by_server[key_id] = max(request.minimum_valid_until_ts, existing_ts)
338
339 deduped_requests = [
340 _FetchKeyRequest(server_name, minimum_valid_ts, [key_id])
341 for server_name, by_server in server_to_key_to_ts.items()
342 for key_id, minimum_valid_ts in by_server.items()
343 ]
344
345 logger.debug("Deduplicated key requests to %s", deduped_requests)
346
347 # For each key we call `_inner_fetch_key_request` which will handle
348 # fetching each key. Note these shouldn't throw if we fail to contact
349 # other servers etc.
350 results_per_request = await yieldable_gather_results(
351 self._inner_fetch_key_request,
352 deduped_requests,
353 )
354
355 # We now convert the returned list of results into a map from server
356 # name to key ID to FetchKeyResult, to return.
357 to_return = {} # type: Dict[str, Dict[str, FetchKeyResult]]
358 for (request, results) in zip(deduped_requests, results_per_request):
359 to_return_by_server = to_return.setdefault(request.server_name, {})
360 for key_id, key_result in results.items():
361 existing = to_return_by_server.get(key_id)
362 if not existing or existing.valid_until_ts < key_result.valid_until_ts:
363 to_return_by_server[key_id] = key_result
364
365 return to_return
366
367 async def _inner_fetch_key_request(
368 self, verify_request: _FetchKeyRequest
369 ) -> Dict[str, FetchKeyResult]:
370 """Attempt to fetch the given key by calling each key fetcher one by
371 one.
372 """
373 logger.debug("Starting fetch for %s", verify_request)
374
375 found_keys: Dict[str, FetchKeyResult] = {}
376 missing_key_ids = set(verify_request.key_ids)
377
378 for fetcher in self._key_fetchers:
379 if not missing_key_ids:
380 break
381
382 logger.debug("Getting keys from %s for %s", fetcher, verify_request)
383 keys = await fetcher.get_keys(
296384 verify_request.server_name,
297 verify_request.key_ids,
385 list(missing_key_ids),
298386 verify_request.minimum_valid_until_ts,
299387 )
300388
301 # add the key request to the queue, but don't start it off yet.
302 key_lookups.append(verify_request)
303
304 # now run _handle_key_deferred, which will wait for the key request
305 # to complete and then do the verification.
306 #
307 # We want _handle_key_request to log to the right context, so we
308 # wrap it with preserve_fn (aka run_in_background)
309 return handle(verify_request)
310
311 results = [process(r) for r in verify_requests]
312
313 if key_lookups:
314 run_in_background(self._start_key_lookups, key_lookups)
315
316 return results
317
318 async def _start_key_lookups(
319 self, verify_requests: List[VerifyJsonRequest]
320 ) -> None:
321 """Sets off the key fetches for each verify request
322
323 Once each fetch completes, verify_request.key_ready will be resolved.
324
325 Args:
326 verify_requests:
327 """
328
329 try:
330 # map from server name to a set of outstanding request ids
331 server_to_request_ids = {} # type: Dict[str, Set[int]]
332
333 for verify_request in verify_requests:
334 server_name = verify_request.server_name
335 request_id = id(verify_request)
336 server_to_request_ids.setdefault(server_name, set()).add(request_id)
337
338 # Wait for any previous lookups to complete before proceeding.
339 await self.wait_for_previous_lookups(server_to_request_ids.keys())
340
341 # take out a lock on each of the servers by sticking a Deferred in
342 # key_downloads
343 for server_name in server_to_request_ids.keys():
344 self.key_downloads[server_name] = defer.Deferred()
345 logger.debug("Got key lookup lock on %s", server_name)
346
347 # When we've finished fetching all the keys for a given server_name,
348 # drop the lock by resolving the deferred in key_downloads.
349 def drop_server_lock(server_name):
350 d = self.key_downloads.pop(server_name)
351 d.callback(None)
352
353 def lookup_done(res, verify_request):
354 server_name = verify_request.server_name
355 server_requests = server_to_request_ids[server_name]
356 server_requests.remove(id(verify_request))
357
358 # if there are no more requests for this server, we can drop the lock.
359 if not server_requests:
360 logger.debug("Releasing key lookup lock on %s", server_name)
361 drop_server_lock(server_name)
362
363 return res
364
365 for verify_request in verify_requests:
366 verify_request.key_ready.addBoth(lookup_done, verify_request)
367
368 # Actually start fetching keys.
369 self._get_server_verify_keys(verify_requests)
370 except Exception:
371 logger.exception("Error starting key lookups")
372
373 async def wait_for_previous_lookups(self, server_names: Iterable[str]) -> None:
374 """Waits for any previous key lookups for the given servers to finish.
375
376 Args:
377 server_names: list of servers which we want to look up
378
379 Returns:
380 Resolves once all key lookups for the given servers have
381 completed. Follows the synapse rules of logcontext preservation.
382 """
383 loop_count = 1
384 while True:
385 wait_on = [
386 (server_name, self.key_downloads[server_name])
387 for server_name in server_names
388 if server_name in self.key_downloads
389 ]
390 if not wait_on:
391 break
392 logger.info(
393 "Waiting for existing lookups for %s to complete [loop %i]",
394 [w[0] for w in wait_on],
395 loop_count,
396 )
397 with PreserveLoggingContext():
398 await defer.DeferredList((w[1] for w in wait_on))
399
400 loop_count += 1
401
402 def _get_server_verify_keys(self, verify_requests: List[VerifyJsonRequest]) -> None:
403 """Tries to find at least one key for each verify request
404
405 For each verify_request, verify_request.key_ready is called back with
406 params (server_name, key_id, VerifyKey) if a key is found, or errbacked
407 with a SynapseError if none of the keys are found.
408
409 Args:
410 verify_requests: list of verify requests
411 """
412
413 remaining_requests = {rq for rq in verify_requests if not rq.key_ready.called}
414
415 async def do_iterations():
416 try:
417 with Measure(self.clock, "get_server_verify_keys"):
418 for f in self._key_fetchers:
419 if not remaining_requests:
420 return
421 await self._attempt_key_fetches_with_fetcher(
422 f, remaining_requests
423 )
424
425 # look for any requests which weren't satisfied
426 while remaining_requests:
427 verify_request = remaining_requests.pop()
428 rq_str = (
429 "VerifyJsonRequest(server=%s, key_ids=%s, min_valid=%i)"
430 % (
431 verify_request.server_name,
432 verify_request.key_ids,
433 verify_request.minimum_valid_until_ts,
434 )
435 )
436
437 # If we run the errback immediately, it may cancel our
438 # loggingcontext while we are still in it, so instead we
439 # schedule it for the next time round the reactor.
440 #
441 # (this also ensures that we don't get a stack overflow if we
442 # has a massive queue of lookups waiting for this server).
443 self.clock.call_later(
444 0,
445 verify_request.key_ready.errback,
446 SynapseError(
447 401,
448 "Failed to find any key to satisfy %s" % (rq_str,),
449 Codes.UNAUTHORIZED,
450 ),
451 )
452 except Exception as err:
453 # we don't really expect to get here, because any errors should already
454 # have been caught and logged. But if we do, let's log the error and make
455 # sure that all of the deferreds are resolved.
456 logger.error("Unexpected error in _get_server_verify_keys: %s", err)
457 with PreserveLoggingContext():
458 for verify_request in remaining_requests:
459 if not verify_request.key_ready.called:
460 verify_request.key_ready.errback(err)
461
462 run_in_background(do_iterations)
463
464 async def _attempt_key_fetches_with_fetcher(
465 self, fetcher: "KeyFetcher", remaining_requests: Set[VerifyJsonRequest]
466 ):
467 """Use a key fetcher to attempt to satisfy some key requests
468
469 Args:
470 fetcher: fetcher to use to fetch the keys
471 remaining_requests: outstanding key requests.
472 Any successfully-completed requests will be removed from the list.
473 """
474 # The keys to fetch.
475 # server_name -> key_id -> min_valid_ts
476 missing_keys = defaultdict(dict) # type: Dict[str, Dict[str, int]]
477
478 for verify_request in remaining_requests:
479 # any completed requests should already have been removed
480 assert not verify_request.key_ready.called
481 keys_for_server = missing_keys[verify_request.server_name]
482
483 for key_id in verify_request.key_ids:
484 # If we have several requests for the same key, then we only need to
485 # request that key once, but we should do so with the greatest
486 # min_valid_until_ts of the requests, so that we can satisfy all of
487 # the requests.
488 keys_for_server[key_id] = max(
489 keys_for_server.get(key_id, -1),
490 verify_request.minimum_valid_until_ts,
491 )
492
493 results = await fetcher.get_keys(missing_keys)
494
495 completed = []
496 for verify_request in remaining_requests:
497 server_name = verify_request.server_name
498
499 # see if any of the keys we got this time are sufficient to
500 # complete this VerifyJsonRequest.
501 result_keys = results.get(server_name, {})
502 for key_id in verify_request.key_ids:
503 fetch_key_result = result_keys.get(key_id)
504 if not fetch_key_result:
505 # we didn't get a result for this key
389 for key_id, key in keys.items():
390 if not key:
506391 continue
507392
508 if (
509 fetch_key_result.valid_until_ts
510 < verify_request.minimum_valid_until_ts
511 ):
512 # key was not valid at this point
393 # If we already have a result for the given key ID we keep the
394 # one with the highest `valid_until_ts`.
395 existing_key = found_keys.get(key_id)
396 if existing_key:
397 if key.valid_until_ts <= existing_key.valid_until_ts:
398 continue
399
400 # We always store the returned key even if it doesn't meet the
401 # `minimum_valid_until_ts` requirement, as some verification
402 # requests may still be able to be satisfied by it.
403 #
404 # We still keep looking for the key from other fetchers in that
405 # case though.
406 found_keys[key_id] = key
407
408 if key.valid_until_ts < verify_request.minimum_valid_until_ts:
513409 continue
514410
515 # we have a valid key for this request. If we run the callback
516 # immediately, it may cancel our loggingcontext while we are still in
517 # it, so instead we schedule it for the next time round the reactor.
518 #
519 # (this also ensures that we don't get a stack overflow if we had
520 # a massive queue of lookups waiting for this server).
521 logger.debug(
522 "Found key %s:%s for %s",
523 server_name,
524 key_id,
525 verify_request.request_name,
526 )
527 self.clock.call_later(
528 0,
529 verify_request.key_ready.callback,
530 (server_name, key_id, fetch_key_result.verify_key),
531 )
532 completed.append(verify_request)
533 break
534
535 remaining_requests.difference_update(completed)
411 missing_key_ids.discard(key_id)
412
413 return found_keys
536414
537415
538416 class KeyFetcher(metaclass=abc.ABCMeta):
417 def __init__(self, hs: "HomeServer"):
418 self._queue = BatchingQueue(
419 self.__class__.__name__, hs.get_clock(), self._fetch_keys
420 )
421
422 async def get_keys(
423 self, server_name: str, key_ids: List[str], minimum_valid_until_ts: int
424 ) -> Dict[str, FetchKeyResult]:
425 results = await self._queue.add_to_queue(
426 _FetchKeyRequest(
427 server_name=server_name,
428 key_ids=key_ids,
429 minimum_valid_until_ts=minimum_valid_until_ts,
430 )
431 )
432 return results.get(server_name, {})
433
539434 @abc.abstractmethod
540 async def get_keys(
541 self, keys_to_fetch: Dict[str, Dict[str, int]]
435 async def _fetch_keys(
436 self, keys_to_fetch: List[_FetchKeyRequest]
542437 ) -> Dict[str, Dict[str, FetchKeyResult]]:
543 """
544 Args:
545 keys_to_fetch:
546 the keys to be fetched. server_name -> key_id -> min_valid_ts
547
548 Returns:
549 Map from server_name -> key_id -> FetchKeyResult
550 """
551 raise NotImplementedError
438 pass
552439
553440
554441 class StoreKeyFetcher(KeyFetcher):
555442 """KeyFetcher impl which fetches keys from our data store"""
556443
557444 def __init__(self, hs: "HomeServer"):
445 super().__init__(hs)
446
558447 self.store = hs.get_datastore()
559448
560 async def get_keys(
561 self, keys_to_fetch: Dict[str, Dict[str, int]]
562 ) -> Dict[str, Dict[str, FetchKeyResult]]:
563 """see KeyFetcher.get_keys"""
564
449 async def _fetch_keys(self, keys_to_fetch: List[_FetchKeyRequest]):
565450 key_ids_to_fetch = (
566 (server_name, key_id)
567 for server_name, keys_for_server in keys_to_fetch.items()
568 for key_id in keys_for_server.keys()
451 (queue_value.server_name, key_id)
452 for queue_value in keys_to_fetch
453 for key_id in queue_value.key_ids
569454 )
570455
571456 res = await self.store.get_server_verify_keys(key_ids_to_fetch)
577462
578463 class BaseV2KeyFetcher(KeyFetcher):
579464 def __init__(self, hs: "HomeServer"):
465 super().__init__(hs)
466
580467 self.store = hs.get_datastore()
581468 self.config = hs.config
582469
684571 self.client = hs.get_federation_http_client()
685572 self.key_servers = self.config.key_servers
686573
687 async def get_keys(
688 self, keys_to_fetch: Dict[str, Dict[str, int]]
574 async def _fetch_keys(
575 self, keys_to_fetch: List[_FetchKeyRequest]
689576 ) -> Dict[str, Dict[str, FetchKeyResult]]:
690 """see KeyFetcher.get_keys"""
577 """see KeyFetcher._fetch_keys"""
691578
692579 async def get_key(key_server: TrustedKeyServer) -> Dict:
693580 try:
723610 return union_of_keys
724611
725612 async def get_server_verify_key_v2_indirect(
726 self, keys_to_fetch: Dict[str, Dict[str, int]], key_server: TrustedKeyServer
613 self, keys_to_fetch: List[_FetchKeyRequest], key_server: TrustedKeyServer
727614 ) -> Dict[str, Dict[str, FetchKeyResult]]:
728615 """
729616 Args:
730617 keys_to_fetch:
731 the keys to be fetched. server_name -> key_id -> min_valid_ts
618 the keys to be fetched.
732619
733620 key_server: notary server to query for the keys
734621
742629 perspective_name = key_server.server_name
743630 logger.info(
744631 "Requesting keys %s from notary server %s",
745 keys_to_fetch.items(),
632 keys_to_fetch,
746633 perspective_name,
747634 )
748635
752639 path="/_matrix/key/v2/query",
753640 data={
754641 "server_keys": {
755 server_name: {
756 key_id: {"minimum_valid_until_ts": min_valid_ts}
757 for key_id, min_valid_ts in server_keys.items()
642 queue_value.server_name: {
643 key_id: {
644 "minimum_valid_until_ts": queue_value.minimum_valid_until_ts,
645 }
646 for key_id in queue_value.key_ids
758647 }
759 for server_name, server_keys in keys_to_fetch.items()
648 for queue_value in keys_to_fetch
760649 }
761650 },
762651 )
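For illustration, a single queued request for two key IDs serialises to a notary query body shaped like the following (a sketch; the server name, key IDs and timestamp are hypothetical):

    {
        "server_keys": {
            "remote.example.com": {
                "ed25519:key1": {"minimum_valid_until_ts": 1623000000000},
                "ed25519:key2": {"minimum_valid_until_ts": 1623000000000},
            }
        }
    }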
857746 self.client = hs.get_federation_http_client()
858747
859748 async def get_keys(
860 self, keys_to_fetch: Dict[str, Dict[str, int]]
749 self, server_name: str, key_ids: List[str], minimum_valid_until_ts: int
750 ) -> Dict[str, FetchKeyResult]:
751 results = await self._queue.add_to_queue(
752 _FetchKeyRequest(
753 server_name=server_name,
754 key_ids=key_ids,
755 minimum_valid_until_ts=minimum_valid_until_ts,
756 ),
757 key=server_name,
758 )
759 return results.get(server_name, {})
760
761 async def _fetch_keys(
762 self, keys_to_fetch: List[_FetchKeyRequest]
861763 ) -> Dict[str, Dict[str, FetchKeyResult]]:
862764 """
863765 Args:
870772
871773 results = {}
872774
873 async def get_key(key_to_fetch_item: Tuple[str, Dict[str, int]]) -> None:
874 server_name, key_ids = key_to_fetch_item
775 async def get_key(key_to_fetch_item: _FetchKeyRequest) -> None:
776 server_name = key_to_fetch_item.server_name
777 key_ids = key_to_fetch_item.key_ids
778
875779 try:
876780 keys = await self.get_server_verify_key_v2_direct(server_name, key_ids)
877781 results[server_name] = keys
882786 except Exception:
883787 logger.exception("Error getting keys %s from %s", key_ids, server_name)
884788
885 await yieldable_gather_results(get_key, keys_to_fetch.items())
789 await yieldable_gather_results(get_key, keys_to_fetch)
886790 return results
887791
888792 async def get_server_verify_key_v2_direct(
954858 keys.update(response_keys)
955859
956860 return keys
957
958
959 async def _handle_key_deferred(verify_request: VerifyJsonRequest) -> None:
960 """Waits for the key to become available, and then performs a verification
961
962 Args:
963 verify_request:
964
965 Raises:
966 SynapseError if there was a problem performing the verification
967 """
968 server_name = verify_request.server_name
969 with PreserveLoggingContext():
970 _, key_id, verify_key = await verify_request.key_ready
971
972 json_object = verify_request.get_json_object()
973
974 try:
975 verify_signed_json(json_object, server_name, verify_key)
976 except SignatureVerifyException as e:
977 logger.debug(
978 "Error verifying signature for %s:%s:%s with key %s: %s",
979 server_name,
980 verify_key.alg,
981 verify_key.version,
982 encode_verify_key_base64(verify_key),
983 str(e),
984 )
985 raise SynapseError(
986 401,
987 "Invalid signature for server %s with key %s:%s: %s"
988 % (server_name, verify_key.alg, verify_key.version, str(e)),
989 Codes.UNAUTHORIZED,
990 )
1313 # limitations under the License.
1414 import logging
1515 from collections import namedtuple
16 from typing import Iterable, List
17
18 from twisted.internet import defer
19 from twisted.internet.defer import Deferred, DeferredList
20 from twisted.python.failure import Failure
2116
2217 from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
2318 from synapse.api.errors import Codes, SynapseError
2722 from synapse.events import EventBase, make_event_from_dict
2823 from synapse.events.utils import prune_event, validate_canonicaljson
2924 from synapse.http.servlet import assert_params_in_dict
30 from synapse.logging.context import (
31 PreserveLoggingContext,
32 current_context,
33 make_deferred_yieldable,
34 )
3525 from synapse.types import JsonDict, get_domain_from_id
3626
3727 logger = logging.getLogger(__name__)
4737 self.store = hs.get_datastore()
4838 self._clock = hs.get_clock()
4939
50 def _check_sigs_and_hash(
40 async def _check_sigs_and_hash(
5141 self, room_version: RoomVersion, pdu: EventBase
52 ) -> Deferred:
53 return make_deferred_yieldable(
54 self._check_sigs_and_hashes(room_version, [pdu])[0]
55 )
56
57 def _check_sigs_and_hashes(
58 self, room_version: RoomVersion, pdus: List[EventBase]
59 ) -> List[Deferred]:
60 """Checks that each of the received events is correctly signed by the
61 sending server.
42 ) -> EventBase:
43 """Checks that event is correctly signed by the sending server.
6244
6345 Args:
64 room_version: The room version of the PDUs
65 pdus: the events to be checked
46 room_version: The room version of the PDU
47 pdu: the event to be checked
6648
6749 Returns:
68 For each input event, a deferred which:
69 * returns the original event if the checks pass
70 * returns a redacted version of the event (if the signature
50 * the original event if the checks pass
51 * a redacted version of the event (if the signature
7152 matched but the hash did not)
72 * throws a SynapseError if the signature check failed.
73 The deferreds run their callbacks in the sentinel
74 """
75 deferreds = _check_sigs_on_pdus(self.keyring, room_version, pdus)
76
77 ctx = current_context()
78
79 @defer.inlineCallbacks
80 def callback(_, pdu: EventBase):
81 with PreserveLoggingContext(ctx):
82 if not check_event_content_hash(pdu):
83 # let's try to distinguish between failures because the event was
84 # redacted (which are somewhat expected) vs actual ball-tampering
85 # incidents.
86 #
87 # This is just a heuristic, so we just assume that if the keys are
88 # about the same between the redacted and received events, then the
89 # received event was probably a redacted copy (but we then use our
90 # *actual* redacted copy to be on the safe side.)
91 redacted_event = prune_event(pdu)
92 if set(redacted_event.keys()) == set(pdu.keys()) and set(
93 redacted_event.content.keys()
94 ) == set(pdu.content.keys()):
95 logger.info(
96 "Event %s seems to have been redacted; using our redacted "
97 "copy",
98 pdu.event_id,
99 )
100 else:
101 logger.warning(
102 "Event %s content has been tampered, redacting",
103 pdu.event_id,
104 )
105 return redacted_event
106
107 result = yield defer.ensureDeferred(
108 self.spam_checker.check_event_for_spam(pdu)
53 * throws a SynapseError if the signature check failed."""
54 try:
55 await _check_sigs_on_pdu(self.keyring, room_version, pdu)
56 except SynapseError as e:
57 logger.warning(
58 "Signature check failed for %s: %s",
59 pdu.event_id,
60 e,
61 )
62 raise
63
64 if not check_event_content_hash(pdu):
65 # let's try to distinguish between failures because the event was
66 # redacted (which are somewhat expected) vs actual ball-tampering
67 # incidents.
68 #
69 # This is just a heuristic, so we just assume that if the keys are
70 # about the same between the redacted and received events, then the
71 # received event was probably a redacted copy (but we then use our
72 # *actual* redacted copy to be on the safe side.)
73 redacted_event = prune_event(pdu)
74 if set(redacted_event.keys()) == set(pdu.keys()) and set(
75 redacted_event.content.keys()
76 ) == set(pdu.content.keys()):
77 logger.info(
78 "Event %s seems to have been redacted; using our redacted copy",
79 pdu.event_id,
10980 )
110
111 if result:
112 logger.warning(
113 "Event contains spam, redacting %s: %s",
114 pdu.event_id,
115 pdu.get_pdu_json(),
116 )
117 return prune_event(pdu)
118
119 return pdu
120
121 def errback(failure: Failure, pdu: EventBase):
122 failure.trap(SynapseError)
123 with PreserveLoggingContext(ctx):
81 else:
12482 logger.warning(
125 "Signature check failed for %s: %s",
83 "Event %s content has been tampered, redacting",
12684 pdu.event_id,
127 failure.getErrorMessage(),
12885 )
129 return failure
130
131 for deferred, pdu in zip(deferreds, pdus):
132 deferred.addCallbacks(
133 callback, errback, callbackArgs=[pdu], errbackArgs=[pdu]
134 )
135
136 return deferreds
86 return redacted_event
87
88 result = await self.spam_checker.check_event_for_spam(pdu)
89
90 if result:
91 logger.warning(
92 "Event contains spam, redacting %s: %s",
93 pdu.event_id,
94 pdu.get_pdu_json(),
95 )
96 return prune_event(pdu)
97
98 return pdu
13799
138100
139101 class PduToCheckSig(namedtuple("PduToCheckSig", ["pdu", "sender_domain", "deferreds"])):
140102 pass
141103
142104
143 def _check_sigs_on_pdus(
144 keyring: Keyring, room_version: RoomVersion, pdus: Iterable[EventBase]
145 ) -> List[Deferred]:
105 async def _check_sigs_on_pdu(
106 keyring: Keyring, room_version: RoomVersion, pdu: EventBase
107 ) -> None:
146108 """Check that the given events are correctly signed
109
110 Raise a SynapseError if the event wasn't correctly signed.
147111
148112 Args:
149113 keyring: keyring object to do the checks
150114 room_version: the room version of the PDU
151115 pdu: the event to be checked
152
153 Returns:
154 A Deferred for each event in pdus, which will either succeed if
155 the signatures are valid, or fail (with a SynapseError) if not.
156116 """
157117
158118 # we want to check that the event is signed by:
176136 # let's start by getting the domain for each pdu, and flattening the event back
177137 # to JSON.
178138
179 pdus_to_check = [
180 PduToCheckSig(
181 pdu=p,
182 sender_domain=get_domain_from_id(p.sender),
183 deferreds=[],
184 )
185 for p in pdus
186 ]
187
188139 # First we check that the sender event is signed by the sender's domain
189140 # (except if its a 3pid invite, in which case it may be sent by any server)
190 pdus_to_check_sender = [p for p in pdus_to_check if not _is_invite_via_3pid(p.pdu)]
191
192 more_deferreds = keyring.verify_events_for_server(
193 [
194 (
195 p.sender_domain,
196 p.pdu,
197 p.pdu.origin_server_ts if room_version.enforce_key_validity else 0,
198 )
199 for p in pdus_to_check_sender
200 ]
201 )
202
203 def sender_err(e, pdu_to_check):
204 errmsg = "event id %s: unable to verify signature for sender %s: %s" % (
205 pdu_to_check.pdu.event_id,
206 pdu_to_check.sender_domain,
207 e.getErrorMessage(),
208 )
209 raise SynapseError(403, errmsg, Codes.FORBIDDEN)
210
211 for p, d in zip(pdus_to_check_sender, more_deferreds):
212 d.addErrback(sender_err, p)
213 p.deferreds.append(d)
141 if not _is_invite_via_3pid(pdu):
142 try:
143 await keyring.verify_event_for_server(
144 get_domain_from_id(pdu.sender),
145 pdu,
146 pdu.origin_server_ts if room_version.enforce_key_validity else 0,
147 )
148 except Exception as e:
149 errmsg = "event id %s: unable to verify signature for sender %s: %s" % (
150 pdu.event_id,
151 get_domain_from_id(pdu.sender),
152 e,
153 )
154 raise SynapseError(403, errmsg, Codes.FORBIDDEN)
214155
215156 # now let's look for events where the sender's domain is different to the
216157 # event id's domain (normally only the case for joins/leaves), and add additional
217158 # checks. Only do this if the room version has a concept of event ID domain
218159 # (ie, the room version uses old-style non-hash event IDs).
219 if room_version.event_format == EventFormatVersions.V1:
220 pdus_to_check_event_id = [
221 p
222 for p in pdus_to_check
223 if p.sender_domain != get_domain_from_id(p.pdu.event_id)
224 ]
225
226 more_deferreds = keyring.verify_events_for_server(
227 [
228 (
229 get_domain_from_id(p.pdu.event_id),
230 p.pdu,
231 p.pdu.origin_server_ts if room_version.enforce_key_validity else 0,
160 if room_version.event_format == EventFormatVersions.V1 and get_domain_from_id(
161 pdu.event_id
162 ) != get_domain_from_id(pdu.sender):
163 try:
164 await keyring.verify_event_for_server(
165 get_domain_from_id(pdu.event_id),
166 pdu,
167 pdu.origin_server_ts if room_version.enforce_key_validity else 0,
168 )
169 except Exception as e:
170 errmsg = (
171 "event id %s: unable to verify signature for event id domain %s: %s"
172 % (
173 pdu.event_id,
174 get_domain_from_id(pdu.event_id),
175 e,
232176 )
233 for p in pdus_to_check_event_id
234 ]
235 )
236
237 def event_err(e, pdu_to_check):
238 errmsg = (
239 "event id %s: unable to verify signature for event id domain: %s"
240 % (pdu_to_check.pdu.event_id, e.getErrorMessage())
241177 )
242178 raise SynapseError(403, errmsg, Codes.FORBIDDEN)
243
244 for p, d in zip(pdus_to_check_event_id, more_deferreds):
245 d.addErrback(event_err, p)
246 p.deferreds.append(d)
247
248 # replace lists of deferreds with single Deferreds
249 return [_flatten_deferred_list(p.deferreds) for p in pdus_to_check]
250
251
252 def _flatten_deferred_list(deferreds: List[Deferred]) -> Deferred:
253 """Given a list of deferreds, either return the single deferred,
254 combine into a DeferredList, or return an already resolved deferred.
255 """
256 if len(deferreds) > 1:
257 return DeferredList(deferreds, fireOnOneErrback=True, consumeErrors=True)
258 elif len(deferreds) == 1:
259 return deferreds[0]
260 else:
261 return defer.succeed(None)
262179
263180
264181 def _is_invite_via_3pid(event: EventBase) -> bool:
2020 Any,
2121 Awaitable,
2222 Callable,
23 Collection,
2324 Dict,
2425 Iterable,
2526 List,
3334
3435 import attr
3536 from prometheus_client import Counter
36
37 from twisted.internet import defer
38 from twisted.internet.defer import Deferred
3937
4038 from synapse.api.constants import EventTypes, Membership
4139 from synapse.api.errors import (
5553 from synapse.events import EventBase, builder
5654 from synapse.federation.federation_base import FederationBase, event_from_pdu_json
5755 from synapse.federation.transport.client import SendJoinResponse
58 from synapse.logging.context import make_deferred_yieldable, preserve_fn
5956 from synapse.logging.utils import log_function
6057 from synapse.types import JsonDict, get_domain_from_id
61 from synapse.util import unwrapFirstError
58 from synapse.util.async_helpers import concurrently_execute
6259 from synapse.util.caches.expiringcache import ExpiringCache
6360 from synapse.util.retryutils import NotRetryingDestination
6461
359356 async def _check_sigs_and_hash_and_fetch(
360357 self,
361358 origin: str,
362 pdus: List[EventBase],
359 pdus: Collection[EventBase],
363360 room_version: RoomVersion,
364361 outlier: bool = False,
365 include_none: bool = False,
366362 ) -> List[EventBase]:
367363 """Takes a list of PDUs and checks the signatures and hashes of each
368364 one. If a PDU fails its signature check then we check if we have it in
373369
374370 The given list of PDUs are not modified, instead the function returns
375371 a new list.
372
373 Args:
374 origin
375 pdus
376 room_version
377 outlier: Whether the events are outliers or not
378
379 Returns:
380 A list of PDUs that have valid signatures and hashes.
381 """
382
383 # We limit how many PDUs we check at once: if we try to do hundreds
384 # of thousands of PDUs at once we see large memory spikes.
385
386 valid_pdus = []
387
388 async def _execute(pdu: EventBase) -> None:
389 valid_pdu = await self._check_sigs_and_hash_and_fetch_one(
390 pdu=pdu,
391 origin=origin,
392 outlier=outlier,
393 room_version=room_version,
394 )
395
396 if valid_pdu:
397 valid_pdus.append(valid_pdu)
398
399 await concurrently_execute(_execute, pdus, 10000)
400
401 return valid_pdus
402
403 async def _check_sigs_and_hash_and_fetch_one(
404 self,
405 pdu: EventBase,
406 origin: str,
407 room_version: RoomVersion,
408 outlier: bool = False,
409 ) -> Optional[EventBase]:
410 """Takes a PDU and checks its signatures and hashes. If the PDU fails
411 its signature check then we check if we have it in the database and if
412 not then request it from the originating server of that PDU.
413
414 If the PDU fails its content hash check then it is redacted.
376415
377416 Args:
378417 origin
383422 for events that have failed their checks
384423
385424 Returns:
386 A list of PDUs that have valid signatures and hashes.
387 """
388 deferreds = self._check_sigs_and_hashes(room_version, pdus)
389
390 async def handle_check_result(pdu: EventBase, deferred: Deferred):
425 The PDU (possibly redacted) if it has valid signatures and hashes.
426 """
427
428 res = None
429 try:
430 res = await self._check_sigs_and_hash(room_version, pdu)
431 except SynapseError:
432 pass
433
434 if not res:
435 # Check local db.
436 res = await self.store.get_event(
437 pdu.event_id, allow_rejected=True, allow_none=True
438 )
439
440 pdu_origin = get_domain_from_id(pdu.sender)
441 if not res and pdu_origin != origin:
391442 try:
392 res = await make_deferred_yieldable(deferred)
443 res = await self.get_pdu(
444 destinations=[pdu_origin],
445 event_id=pdu.event_id,
446 room_version=room_version,
447 outlier=outlier,
448 timeout=10000,
449 )
393450 except SynapseError:
394 res = None
395
396 if not res:
397 # Check local db.
398 res = await self.store.get_event(
399 pdu.event_id, allow_rejected=True, allow_none=True
400 )
401
402 pdu_origin = get_domain_from_id(pdu.sender)
403 if not res and pdu_origin != origin:
404 try:
405 res = await self.get_pdu(
406 destinations=[pdu_origin],
407 event_id=pdu.event_id,
408 room_version=room_version,
409 outlier=outlier,
410 timeout=10000,
411 )
412 except SynapseError:
413 pass
414
415 if not res:
416 logger.warning(
417 "Failed to find copy of %s with valid signature", pdu.event_id
418 )
419
420 return res
421
422 handle = preserve_fn(handle_check_result)
423 deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)]
424
425 valid_pdus = await make_deferred_yieldable(
426 defer.gatherResults(deferreds2, consumeErrors=True)
427 ).addErrback(unwrapFirstError)
428
429 if include_none:
430 return valid_pdus
431 else:
432 return [p for p in valid_pdus if p]
451 pass
452
453 if not res:
454 logger.warning(
455 "Failed to find copy of %s with valid signature", pdu.event_id
456 )
457
458 return res
433459
434460 async def get_event_auth(
435461 self, destination: str, room_id: str, event_id: str
670696 state = response.state
671697 auth_chain = response.auth_events
672698
673 pdus = {p.event_id: p for p in itertools.chain(state, auth_chain)}
674
675699 create_event = None
676700 for e in state:
677701 if (e.type, e.state_key) == (EventTypes.Create, ""):
695719 % (create_room_version,)
696720 )
697721
698 valid_pdus = await self._check_sigs_and_hash_and_fetch(
699 destination,
700 list(pdus.values()),
701 outlier=True,
702 room_version=room_version,
703 )
704
705 valid_pdus_map = {p.event_id: p for p in valid_pdus}
722 logger.info(
723 "Processing from send_join %d events", len(state) + len(auth_chain)
724 )
725
726 # We now go and check the signatures and hashes for each event. Note
727 # that we limit how many events we process at a time to keep the
728 # memory overhead from exploding.
729 valid_pdus_map: Dict[str, EventBase] = {}
730
731 async def _execute(pdu: EventBase) -> None:
732 valid_pdu = await self._check_sigs_and_hash_and_fetch_one(
733 pdu=pdu,
734 origin=destination,
735 outlier=True,
736 room_version=room_version,
737 )
738
739 if valid_pdu:
740 valid_pdus_map[valid_pdu.event_id] = valid_pdu
741
742 await concurrently_execute(
743 _execute, itertools.chain(state, auth_chain), 10000
744 )
706745
707746 # NB: We *need* to copy to ensure that we don't have multiple
708747 # references being passed on, as that causes... issues.
3636 )
3737 from synapse.logging.context import run_in_background
3838 from synapse.logging.opentracing import (
39 SynapseTags,
3940 start_active_span,
4041 start_active_span_from_request,
4142 tags,
150151 )
151152
152153 await self.keyring.verify_json_for_server(
153 origin, json_request, now, "Incoming request"
154 origin,
155 json_request,
156 now,
154157 )
155158
156159 logger.debug("Request from %s", origin)
313316 raise
314317
315318 request_tags = {
316 "request_id": request.get_request_id(),
319 SynapseTags.REQUEST_ID: request.get_request_id(),
317320 tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER,
318321 tags.HTTP_METHOD: request.get_method(),
319322 tags.HTTP_URL: request.get_redacted_uri(),
15611564 server_name=hs.hostname,
15621565 ).register(resource)
15631566
1564 if hs.config.experimental.spaces_enabled:
1565 FederationSpaceSummaryServlet(
1566 handler=hs.get_space_summary_handler(),
1567 authenticator=authenticator,
1568 ratelimiter=ratelimiter,
1569 server_name=hs.hostname,
1570 ).register(resource)
1567 FederationSpaceSummaryServlet(
1568 handler=hs.get_space_summary_handler(),
1569 authenticator=authenticator,
1570 ratelimiter=ratelimiter,
1571 server_name=hs.hostname,
1572 ).register(resource)
15711573
15721574 if "openid" in servlet_groups:
15731575 for servletclass in OPENID_SERVLET_CLASSES:
107107
108108 assert server_name is not None
109109 await self.keyring.verify_json_for_server(
110 server_name, attestation, now, "Group attestation"
110 server_name,
111 attestation,
112 now,
111113 )
112114
113115 def create_attestation(self, group_id: str, user_id: str) -> JsonDict:
8686 self.is_processing = True
8787 try:
8888 limit = 100
89 while True:
89 upper_bound = -1
90 while upper_bound < self.current_max:
9091 (
9192 upper_bound,
9293 events,
9394 ) = await self.store.get_new_events_for_appservice(
9495 self.current_max, limit
9596 )
96
97 if not events:
98 break
9997
10098 events_by_room = {} # type: Dict[str, List[EventBase]]
10199 for event in events:
152150
153151 await self.store.set_appservice_last_pos(upper_bound)
154152
155 now = self.clock.time_msec()
156 ts = await self.store.get_received_ts(events[-1].event_id)
157
158153 synapse.metrics.event_processing_positions.labels(
159154 "appservice_sender"
160155 ).set(upper_bound)
167162
168163 event_processing_loop_counter.labels("appservice_sender").inc()
169164
170 synapse.metrics.event_processing_lag.labels(
171 "appservice_sender"
172 ).set(now - ts)
173 synapse.metrics.event_processing_last_ts.labels(
174 "appservice_sender"
175 ).set(ts)
165 if events:
166 now = self.clock.time_msec()
167 ts = await self.store.get_received_ts(events[-1].event_id)
168
169 synapse.metrics.event_processing_lag.labels(
170 "appservice_sender"
171 ).set(now - ts)
172 synapse.metrics.event_processing_last_ts.labels(
173 "appservice_sender"
174 ).set(ts)
176175 finally:
177176 self.is_processing = False
178177
2121 from http import HTTPStatus
2222 from typing import (
2323 TYPE_CHECKING,
24 Collection,
2425 Dict,
2526 Iterable,
2627 List,
177178 self.room_queues = {} # type: Dict[str, List[Tuple[EventBase, str]]]
178179 self._room_pdu_linearizer = Linearizer("fed_room_pdu")
179180
181 self._room_backfill = Linearizer("room_backfill")
182
180183 self.third_party_event_rules = hs.get_third_party_event_rules()
181184
182185 self._ephemeral_messages_enabled = hs.config.enable_ephemeral_messages
576579
577580 # Fetch the state events from the DB, and check we have the auth events.
578581 event_map = await self.store.get_events(state_event_ids, allow_rejected=True)
579 auth_events_in_store = await self.store.have_seen_events(auth_event_ids)
582 auth_events_in_store = await self.store.have_seen_events(
583 room_id, auth_event_ids
584 )
580585
581586 # Check for missing events. We handle state and auth events separately,
582587 # as we want to pull the state from the DB, but we don't for the auth
609614
610615 if missing_auth_events:
611616 auth_events_in_store = await self.store.have_seen_events(
612 missing_auth_events
617 room_id, missing_auth_events
613618 )
614619 missing_auth_events.difference_update(auth_events_in_store)
615620
709714
710715 missing_auth_events = set(auth_event_ids) - fetched_events.keys()
711716 missing_auth_events.difference_update(
712 await self.store.have_seen_events(missing_auth_events)
717 await self.store.have_seen_events(room_id, missing_auth_events)
713718 )
714719 logger.debug("We are also missing %i auth events", len(missing_auth_events))
715720
10381043 return. This is used as part of the heuristic to decide if we
10391044 should back paginate.
10401045 """
1046 with (await self._room_backfill.queue(room_id)):
1047 return await self._maybe_backfill_inner(room_id, current_depth, limit)
1048
1049 async def _maybe_backfill_inner(
1050 self, room_id: str, current_depth: int, limit: int
1051 ) -> bool:
10411052 extremities = await self.store.get_oldest_events_with_depth_in_room(room_id)
10421053
10431054 if not extremities:
13531364
13541365 event_infos.append(_NewEventInfo(event, None, auth))
13551366
1356 await self._auth_and_persist_events(
1357 destination,
1358 room_id,
1359 event_infos,
1360 )
1367 if event_infos:
1368 await self._auth_and_persist_events(
1369 destination,
1370 room_id,
1371 event_infos,
1372 )
13611373
13621374 def _sanity_check_event(self, ev: EventBase) -> None:
13631375 """
20662078 self,
20672079 origin: str,
20682080 room_id: str,
2069 event_infos: Iterable[_NewEventInfo],
2081 event_infos: Collection[_NewEventInfo],
20702082 backfilled: bool = False,
20712083 ) -> None:
20722084 """Creates the appropriate contexts and persists events. The events
20762088
20772089 Notifies about the events where appropriate.
20782090 """
2091
2092 if not event_infos:
2093 return
20792094
20802095 async def prep(ev_info: _NewEventInfo):
20812096 event = ev_info.event
22052220 raise
22062221 events_to_context[e.event_id].rejected = RejectedReason.AUTH_ERROR
22072222
2208 await self.persist_events_and_notify(
2209 room_id,
2210 [
2211 (e, events_to_context[e.event_id])
2212 for e in itertools.chain(auth_events, state)
2213 ],
2214 )
2223 if auth_events or state:
2224 await self.persist_events_and_notify(
2225 room_id,
2226 [
2227 (e, events_to_context[e.event_id])
2228 for e in itertools.chain(auth_events, state)
2229 ],
2230 )
22152231
22162232 new_event_context = await self.state_handler.compute_event_context(
22172233 event, old_state=state
24742490 #
24752491 # we start by checking if they are in the store, and then try calling /event_auth/.
24762492 if missing_auth:
2477 have_events = await self.store.have_seen_events(missing_auth)
2493 have_events = await self.store.have_seen_events(event.room_id, missing_auth)
24782494 logger.debug("Events %s are in the store", have_events)
24792495 missing_auth.difference_update(have_events)
24802496
24932509 return context
24942510
24952511 seen_remotes = await self.store.have_seen_events(
2496 [e.event_id for e in remote_auth_chain]
2512 event.room_id, [e.event_id for e in remote_auth_chain]
24972513 )
24982514
24992515 for e in remote_auth_chain:
30503066 the same room.
30513067 backfilled: Whether these events are a result of
30523068 backfilling or not
3053 """
3069
3070 Returns:
3071 The stream ID after which all events have been persisted.
3072 """
3073 if not event_and_contexts:
3074 return self.store.get_current_events_token()
3075
30543076 instance = self.config.worker.events_shard_config.get_instance(room_id)
30553077 if instance != self._instance_name:
3056 # Limit the number of events sent over federation.
3057 for batch in batch_iter(event_and_contexts, 1000):
3078 # Limit the number of events sent over replication. We choose 200
3079 # here as that is what we default to in `max_request_body_size(..)`
3080 for batch in batch_iter(event_and_contexts, 200):
30583081 result = await self._send_events(
30593082 instance_name=instance,
30603083 store=self.store,
298298 if not states:
299299 return
300300
301 hosts_and_states = await get_interested_remotes(
301 hosts_to_states = await get_interested_remotes(
302302 self.store,
303303 self.presence_router,
304304 states,
305305 )
306306
307 for destinations, states in hosts_and_states:
308 self._federation.send_presence_to_destinations(states, destinations)
307 for destination, host_states in hosts_to_states.items():
308 self._federation.send_presence_to_destinations(host_states, [destination])
309309
310310 async def send_full_presence_to_users(self, user_ids: Collection[str]):
311311 """
493493 rooms=room_ids_to_states.keys(),
494494 users=users_to_states.keys(),
495495 )
496
497 # If this is a federation sender, notify about presence updates.
498 await self.maybe_send_presence_to_interested_destinations(states)
499496
500497 async def process_replication_rows(
501498 self, stream_name: str, instance_name: str, token: int, rows: list
518515 for row in rows
519516 ]
520517
521 for state in states:
522 self.user_to_current_state[state.user_id] = state
518 # The list of states to notify sync streams and remote servers about.
519 # This is calculated by comparing the old and new states for each user
520 # using `should_notify(..)`.
521 #
522 # Note that this is necessary as the presence writer will periodically
523 # flush to the DB presence state changes that should not be notified
524 # about, and those changes will still be sent over the replication stream.
525 state_to_notify = []
526
527 for new_state in states:
528 old_state = self.user_to_current_state.get(new_state.user_id)
529 self.user_to_current_state[new_state.user_id] = new_state
530
531 if not old_state or should_notify(old_state, new_state):
532 state_to_notify.append(new_state)
523533
524534 stream_id = token
525 await self.notify_from_replication(states, stream_id)
535 await self.notify_from_replication(state_to_notify, stream_id)
536
537 # If this is a federation sender, notify about presence updates.
538 await self.maybe_send_presence_to_interested_destinations(state_to_notify)
526539
527540 def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
528541 return [
828841 if to_federation_ping:
829842 federation_presence_out_counter.inc(len(to_federation_ping))
830843
831 hosts_and_states = await get_interested_remotes(
844 hosts_to_states = await get_interested_remotes(
832845 self.store,
833846 self.presence_router,
834847 list(to_federation_ping.values()),
835848 )
836849
837 for destinations, states in hosts_and_states:
850 for destination, states in hosts_to_states.items():
838851 self._federation_queue.send_presence_to_destinations(
839 states, destinations
852 states, [destination]
840853 )
841854
842855 async def _handle_timeouts(self) -> None:
19611974 store: DataStore,
19621975 presence_router: PresenceRouter,
19631976 states: List[UserPresenceState],
1964 ) -> List[Tuple[Collection[str], List[UserPresenceState]]]:
1977 ) -> Dict[str, Set[UserPresenceState]]:
19651978 """Given a list of presence states figure out which remote servers
19661979 should be sent which.
19671980
19731986 states: A list of incoming user presence updates.
19741987
19751988 Returns:
1976 A list of 2-tuples of destinations and states, where for
1977 each tuple the list of UserPresenceState should be sent to each
1978 destination
1989 A map from destinations to presence states to send to that destination.
19791990 """
1980 hosts_and_states = [] # type: List[Tuple[Collection[str], List[UserPresenceState]]]
1991 hosts_and_states: Dict[str, Set[UserPresenceState]] = {}
19811992
19821993 # First we look up the rooms each user is in (as well as any explicit
19831994 # subscriptions), then for each distinct room we look up the remote
19892000 for room_id, states in room_ids_to_states.items():
19902001 user_ids = await store.get_users_in_room(room_id)
19912002 hosts = {get_domain_from_id(user_id) for user_id in user_ids}
1992 hosts_and_states.append((hosts, states))
2003 for host in hosts:
2004 hosts_and_states.setdefault(host, set()).update(states)
19932005
19942006 for user_id, states in users_to_states.items():
19952007 host = get_domain_from_id(user_id)
1996 hosts_and_states.append(([host], states))
2008 hosts_and_states.setdefault(host, set()).update(states)
19972009
19982010 return hosts_and_states
19992011
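For illustration, the reshaped return value groups presence states per destination, so a hypothetical pair of users whose rooms federate with two servers yields something like:

    {
        "one.example.com": {alice_state, bob_state},
        "two.example.com": {alice_state},
    }

and, as in the call sites above, each entry is flushed with a single send_presence_to_destinations(states, [destination]) call.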
314314 if context:
315315 context.tag = sync_type
316316
317 # if we have a since token, delete any to-device messages before that token
318 # (since we now know that the device has received them)
319 if since_token is not None:
320 since_stream_id = since_token.to_device_key
321 deleted = await self.store.delete_messages_for_device(
322 sync_config.user.to_string(), sync_config.device_id, since_stream_id
323 )
324 logger.debug(
325 "Deleted %d to-device messages up to %d", deleted, since_stream_id
326 )
327
317328 if timeout == 0 or since_token is None or full_state:
318329 # we are going to return immediately, so don't bother calling
319330 # notifier.wait_for_events.
462473 # ensure that we always include current state in the timeline
463474 current_state_ids = frozenset() # type: FrozenSet[str]
464475 if any(e.is_state() for e in recents):
465 current_state_ids_map = await self.state.get_current_state_ids(
476 current_state_ids_map = await self.store.get_current_state_ids(
466477 room_id
467478 )
468479 current_state_ids = frozenset(current_state_ids_map.values())
522533 # ensure that we always include current state in the timeline
523534 current_state_ids = frozenset()
524535 if any(e.is_state() for e in loaded_recents):
525 current_state_ids_map = await self.state.get_current_state_ids(
536 current_state_ids_map = await self.store.get_current_state_ids(
526537 room_id
527538 )
528539 current_state_ids = frozenset(current_state_ids_map.values())
12291240 since_stream_id = int(sync_result_builder.since_token.to_device_key)
12301241
12311242 if since_stream_id != int(now_token.to_device_key):
1232 # We only delete messages when a new message comes in, but that's
1233 # fine so long as we delete them at some point.
1234
1235 deleted = await self.store.delete_messages_for_device(
1236 user_id, device_id, since_stream_id
1237 )
1238 logger.debug(
1239 "Deleted %d to-device messages up to %d", deleted, since_stream_id
1240 )
1241
12421243 messages, stream_id = await self.store.get_new_messages_for_device(
12431244 user_id, device_id, since_stream_id, now_token.to_device_key
12441245 )
1414 """ This module contains base REST classes for constructing REST servlets. """
1515
1616 import logging
17 from typing import Dict, Iterable, List, Optional, overload
18
19 from typing_extensions import Literal
20
21 from twisted.web.server import Request
1722
1823 from synapse.api.errors import Codes, SynapseError
1924 from synapse.util import json_decoder
104109 return default
105110
106111
112 @overload
113 def parse_bytes_from_args(
114 args: Dict[bytes, List[bytes]],
115 name: str,
116 default: Literal[None] = None,
117 required: Literal[True] = True,
118 ) -> bytes:
119 ...
120
121
122 @overload
123 def parse_bytes_from_args(
124 args: Dict[bytes, List[bytes]],
125 name: str,
126 default: Optional[bytes] = None,
127 required: bool = False,
128 ) -> Optional[bytes]:
129 ...
130
131
132 def parse_bytes_from_args(
133 args: Dict[bytes, List[bytes]],
134 name: str,
135 default: Optional[bytes] = None,
136 required: bool = False,
137 ) -> Optional[bytes]:
138 """
139 Parse a string parameter as bytes from the request query string.
140
141 Args:
142 args: A mapping of request args as bytes to a list of bytes (e.g. request.args).
143 name: the name of the query parameter.
144 default: value to use if the parameter is absent,
145 defaults to None.
146 required: whether to raise a 400 SynapseError if the
147 parameter is absent, defaults to False.
148 Returns:
149 Bytes or the default value.
150
151 Raises:
152 SynapseError if the parameter is absent and required.
153 """
154 name_bytes = name.encode("ascii")
155
156 if name_bytes in args:
157 return args[name_bytes][0]
158 elif required:
159 message = "Missing string query parameter %s" % (name,)
160 raise SynapseError(400, message, errcode=Codes.MISSING_PARAM)
161
162 return default
163
164
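The paired @overload stubs above exist purely for the type checker: passing a literal required=True selects the first overload, so mypy narrows the return type to bytes. A minimal sketch of the effect at hypothetical call sites, assuming `args` is a request's Dict[bytes, List[bytes]]:

    token = parse_bytes_from_args(args, "access_token", required=True)
    # mypy infers `bytes` (first overload, required: Literal[True])

    maybe_token = parse_bytes_from_args(args, "access_token")
    # mypy infers `Optional[bytes]` (second overload)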
107165 def parse_string(
108 request,
109 name,
110 default=None,
111 required=False,
112 allowed_values=None,
113 param_type="string",
114 encoding="ascii",
166 request: Request,
167 name: str,
168 default: Optional[str] = None,
169 required: bool = False,
170 allowed_values: Optional[Iterable[str]] = None,
171 encoding: str = "ascii",
115172 ):
116173 """
117174 Parse a string parameter from the request query string.
121178
122179 Args:
123180 request: the twisted HTTP request.
124 name (bytes|unicode): the name of the query parameter.
125 default (bytes|unicode|None): value to use if the parameter is absent,
126 defaults to None. Must be bytes if encoding is None.
127 required (bool): whether to raise a 400 SynapseError if the
128 parameter is absent, defaults to False.
129 allowed_values (list[bytes|unicode]): List of allowed values for the
181 name: the name of the query parameter.
182 default: value to use if the parameter is absent, defaults to None.
183 required: whether to raise a 400 SynapseError if the
184 parameter is absent, defaults to False.
185 allowed_values: List of allowed values for the
130186 string, or None if any value is allowed, defaults to None. Must be
131187 the same type as name, if given.
132 encoding (str|None): The encoding to decode the string content with.
133
134 Returns:
135 bytes/unicode|None: A string value or the default. Unicode if encoding
136 was given, bytes otherwise.
188 encoding: The encoding to decode the string content with.
189
190 Returns:
191 A string value or the default.
137192
138193 Raises:
139194 SynapseError if the parameter is absent and required, or if the
140195 parameter is present, must be one of a list of allowed values and
141196 is not one of those allowed values.
142197 """
198 args = request.args # type: Dict[bytes, List[bytes]] # type: ignore
143199 return parse_string_from_args(
144 request.args, name, default, required, allowed_values, param_type, encoding
200 args, name, default, required, allowed_values, encoding
145201 )
146202
147203
148 def parse_string_from_args(
149 args,
150 name,
151 default=None,
152 required=False,
153 allowed_values=None,
154 param_type="string",
155 encoding="ascii",
156 ):
157
158 if not isinstance(name, bytes):
159 name = name.encode("ascii")
160
161 if name in args:
162 value = args[name][0]
163
164 if encoding:
165 try:
166 value = value.decode(encoding)
167 except ValueError:
168 raise SynapseError(
169 400, "Query parameter %r must be %s" % (name, encoding)
170 )
171
172 if allowed_values is not None and value not in allowed_values:
173 message = "Query parameter %r must be one of [%s]" % (
174 name,
175 ", ".join(repr(v) for v in allowed_values),
176 )
177 raise SynapseError(400, message)
178 else:
179 return value
204 def _parse_string_value(
205 value: bytes,
206 allowed_values: Optional[Iterable[str]],
207 name: str,
208 encoding: str,
209 ) -> str:
210 try:
211 value_str = value.decode(encoding)
212 except ValueError:
213 raise SynapseError(400, "Query parameter %r must be %s" % (name, encoding))
214
215 if allowed_values is not None and value_str not in allowed_values:
216 message = "Query parameter %r must be one of [%s]" % (
217 name,
218 ", ".join(repr(v) for v in allowed_values),
219 )
220 raise SynapseError(400, message)
221 else:
222 return value_str
223
224
225 @overload
226 def parse_strings_from_args(
227 args: Dict[bytes, List[bytes]],
228 name: str,
229 default: Optional[List[str]] = None,
230 required: Literal[True] = True,
231 allowed_values: Optional[Iterable[str]] = None,
232 encoding: str = "ascii",
233 ) -> List[str]:
234 ...
235
236
237 @overload
238 def parse_strings_from_args(
239 args: Dict[bytes, List[bytes]],
240 name: str,
241 default: Optional[List[str]] = None,
242 required: bool = False,
243 allowed_values: Optional[Iterable[str]] = None,
244 encoding: str = "ascii",
245 ) -> Optional[List[str]]:
246 ...
247
248
249 def parse_strings_from_args(
250 args: Dict[bytes, List[bytes]],
251 name: str,
252 default: Optional[List[str]] = None,
253 required: bool = False,
254 allowed_values: Optional[Iterable[str]] = None,
255 encoding: str = "ascii",
256 ) -> Optional[List[str]]:
257 """
258 Parse a string parameter from the request query string list and return all of its values.
259
260 The content of the query param will be decoded to Unicode using the encoding.
261
262 Args:
263 args: A mapping of request args as bytes to a list of bytes (e.g. request.args).
264 name: the name of the query parameter.
265 default: value to use if the parameter is absent, defaults to None.
266 required: whether to raise a 400 SynapseError if the
267 parameter is absent, defaults to False.
268 allowed_values: List of allowed values for the
269 string, or None if any value is allowed, defaults to None.
270 encoding: The encoding to decode the string content with.
271
272 Returns:
273 A list of string values, or the default.
274
275 Raises:
276 SynapseError if the parameter is absent and required, or if the
277 parameter is present, must be one of a list of allowed values and
278 is not one of those allowed values.
279 """
280 name_bytes = name.encode("ascii")
281
282 if name_bytes in args:
283 values = args[name_bytes]
284
285 return [
286 _parse_string_value(value, allowed_values, name=name, encoding=encoding)
287 for value in values
288 ]
180289 else:
181290 if required:
182 message = "Missing %s query parameter %r" % (param_type, name)
291 message = "Missing string query parameter %r" % (name,)
183292 raise SynapseError(400, message, errcode=Codes.MISSING_PARAM)
184 else:
185
186 if encoding and isinstance(default, bytes):
187 return default.decode(encoding)
188
189 return default
293
294 return default
295
296
297 def parse_string_from_args(
298 args: Dict[bytes, List[bytes]],
299 name: str,
300 default: Optional[str] = None,
301 required: bool = False,
302 allowed_values: Optional[Iterable[str]] = None,
303 encoding: str = "ascii",
304 ) -> Optional[str]:
305 """
306 Parse the string parameter from the request query string list
307 and return the first result.
308
309 The content of the query param will be decoded to Unicode using the encoding.
310
311 Args:
312 args: A mapping of request args as bytes to a list of bytes (e.g. request.args).
313 name: the name of the query parameter.
314 default: value to use if the parameter is absent, defaults to None.
315 required: whether to raise a 400 SynapseError if the
316 parameter is absent, defaults to False.
317 allowed_values: List of allowed values for the
318 string, or None if any value is allowed, defaults to None.
320 encoding: The encoding to decode the string content with.
321
322 Returns:
323 A string value or the default.
324
325 Raises:
326 SynapseError if the parameter is absent and required, or if the
327 parameter is present, must be one of a list of allowed values and
328 is not one of those allowed values.
329 """
330
331 strings = parse_strings_from_args(
332 args,
333 name,
334 default=[default] if default is not None else None,
335 required=required,
336 allowed_values=allowed_values,
337 encoding=encoding,
338 )
339
340 if strings is None:
341 return None
342
343 return strings[0]
190344
191345
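For illustration, a minimal sketch of the refactored string helpers in a hypothetical handler, assuming the request's query string is `?dir=b`:

    args: Dict[bytes, List[bytes]] = request.args  # type: ignore
    direction = parse_string_from_args(
        args, "dir", default="f", allowed_values=("f", "b")
    )
    # direction == "b"; a value outside allowed_values raises a 400
    # SynapseError, as does a missing parameter when required=True.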
192346 def parse_json_value_from_request(request, allow_empty_body=False):
214368 try:
215369 content = json_decoder.decode(content_bytes.decode("utf-8"))
216370 except Exception as e:
217 logger.warning("Unable to parse JSON: %s", e)
371 logger.warning("Unable to parse JSON: %s (%s)", e, content_bytes)
218372 raise SynapseError(400, "Content not JSON.", errcode=Codes.NOT_JSON)
219373
220374 return content
277431
278432 def register(self, http_server):
279433 """ Register this servlet with the given HTTP server. """
280 if hasattr(self, "PATTERNS"):
281 patterns = self.PATTERNS
282
434 patterns = getattr(self, "PATTERNS", None)
435 if patterns:
283436 for method in ("GET", "PUT", "POST", "DELETE"):
284437 if hasattr(self, "on_%s" % (method,)):
285438 servlet_classname = self.__class__.__name__
264264 # Whether the sync response has new data to be returned to the client.
265265 SYNC_RESULT = "sync.new_data"
266266
267 # incoming HTTP request ID (as written in the logs)
268 REQUEST_ID = "request_id"
269
270 # HTTP request tag (used to distinguish full vs incremental syncs, etc)
271 REQUEST_TAG = "request_tag"
272
273 # Text description of a database transaction
274 DB_TXN_DESC = "db.txn_desc"
275
276 # Uniqueish ID of a database transaction
277 DB_TXN_ID = "db.txn_id"
278
267279
268280 # Block everything by default
269281 # A regex which matches the server_names to expose traces for.
324336 @contextlib.contextmanager
325337 def noop_context_manager(*args, **kwargs):
326338 """Does exactly what it says on the tin"""
339 # TODO: replace with contextlib.nullcontext once we drop support for Python 3.6
327340 yield
328341
329342
349362
350363 set_homeserver_whitelist(hs.config.opentracer_whitelist)
351364
365 from jaeger_client.metrics.prometheus import PrometheusMetricsFactory
366
352367 config = JaegerConfig(
353368 config=hs.config.jaeger_config,
354369 service_name="{} {}".format(hs.config.server_name, hs.get_instance_name()),
355370 scope_manager=LogContextScopeManager(hs.config),
371 metrics_factory=PrometheusMetricsFactory(),
356372 )
357373
358374 # If we have the rust jaeger reporter available let's use that.
587603
588604 span = opentracing.tracer.active_span
589605 carrier = {} # type: Dict[str, str]
590 opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
606 opentracing.tracer.inject(span.context, opentracing.Format.HTTP_HEADERS, carrier)
591607
592608 for key, value in carrier.items():
593609 headers.addRawHeaders(key, value)
624640 span = opentracing.tracer.active_span
625641
626642 carrier = {} # type: Dict[str, str]
627 opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
643 opentracing.tracer.inject(span.context, opentracing.Format.HTTP_HEADERS, carrier)
628644
629645 for key, value in carrier.items():
630646 headers[key.encode()] = [value.encode()]
658674 return
659675
660676 opentracing.tracer.inject(
661 opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier
677 opentracing.tracer.active_span.context, opentracing.Format.TEXT_MAP, carrier
662678 )
663679
664680
680696
681697 carrier = {} # type: Dict[str, str]
682698 opentracing.tracer.inject(
683 opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier
699 opentracing.tracer.active_span.context, opentracing.Format.TEXT_MAP, carrier
684700 )
685701
686702 return carrier
695711 carrier = {} # type: Dict[str, str]
696712 if opentracing:
697713 opentracing.tracer.inject(
698 opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier
714 opentracing.tracer.active_span.context, opentracing.Format.TEXT_MAP, carrier
699715 )
700716 return json_encoder.encode(carrier)
701717
823839 return
824840
825841 request_tags = {
826 "request_id": request.get_request_id(),
842 SynapseTags.REQUEST_ID: request.get_request_id(),
827843 tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER,
828844 tags.HTTP_METHOD: request.get_method(),
829845 tags.HTTP_URL: request.get_redacted_uri(),
832848
833849 request_name = request.request_metrics.name
834850 if extract_context:
835 scope = start_active_span_from_request(request, request_name, tags=request_tags)
851 scope = start_active_span_from_request(request, request_name)
836852 else:
837 scope = start_active_span(request_name, tags=request_tags)
853 scope = start_active_span(request_name)
838854
839855 with scope:
840856 try:
844860 # with JsonResource).
845861 scope.span.set_operation_name(request.request_metrics.name)
846862
847 scope.span.set_tag("request_tag", request.request_metrics.start_context.tag)
863 # set the tags *after* the servlet completes, in case it decided to
864 # prioritise the span (tags will get dropped on unprioritised spans)
865 request_tags[
866 SynapseTags.REQUEST_TAG
867 ] = request.request_metrics.start_context.tag
868
869 for k, v in request_tags.items():
870 scope.span.set_tag(k, v)
2121 from twisted.internet import defer
2222
2323 from synapse.logging.context import LoggingContext, PreserveLoggingContext
24 from synapse.logging.opentracing import noop_context_manager, start_active_span
24 from synapse.logging.opentracing import (
25 SynapseTags,
26 noop_context_manager,
27 start_active_span,
28 )
2529 from synapse.util.async_helpers import maybe_awaitable
2630
2731 if TYPE_CHECKING:
199203
200204 with BackgroundProcessLoggingContext(desc, count) as context:
201205 try:
202 ctx = noop_context_manager()
203206 if bg_start_span:
204 ctx = start_active_span(desc, tags={"request_id": str(context)})
207 ctx = start_active_span(
208 f"bgproc.{desc}", tags={SynapseTags.REQUEST_ID: str(context)}
209 )
210 else:
211 ctx = noop_context_manager()
205212 with ctx:
206213 return await maybe_awaitable(func(*args, **kwargs))
207214 except Exception:
484484 end_time = self.clock.time_msec() + timeout
485485
486486 while not result:
487 try:
488 now = self.clock.time_msec()
489 if end_time <= now:
490 break
491
492 # Now we wait for the _NotifierUserStream to be told there
493 # is a new token.
494 listener = user_stream.new_listener(prev_token)
495 listener.deferred = timeout_deferred(
496 listener.deferred,
497 (end_time - now) / 1000.0,
498 self.hs.get_reactor(),
499 )
500
501 with start_active_span("wait_for_events.deferred"):
487 with start_active_span("wait_for_events"):
488 try:
489 now = self.clock.time_msec()
490 if end_time <= now:
491 break
492
493 # Now we wait for the _NotifierUserStream to be told there
494 # is a new token.
495 listener = user_stream.new_listener(prev_token)
496 listener.deferred = timeout_deferred(
497 listener.deferred,
498 (end_time - now) / 1000.0,
499 self.hs.get_reactor(),
500 )
501
502502 log_kv(
503503 {
504504 "wait_for_events": "sleep",
516516 }
517517 )
518518
519 current_token = user_stream.current_token
520
521 result = await callback(prev_token, current_token)
522 log_kv(
523 {
524 "wait_for_events": "result",
525 "result": bool(result),
526 }
527 )
528 if result:
519 current_token = user_stream.current_token
520
521 result = await callback(prev_token, current_token)
522 log_kv(
523 {
524 "wait_for_events": "result",
525 "result": bool(result),
526 }
527 )
528 if result:
529 break
530
531 # Update the prev_token to the current_token since nothing
532 # has happened between the old prev_token and the current_token
533 prev_token = current_token
534 except defer.TimeoutError:
535 log_kv({"wait_for_events": "timeout"})
529536 break
530
531 # Update the prev_token to the current_token since nothing
532 # has happened between the old prev_token and the current_token
533 prev_token = current_token
534 except defer.TimeoutError:
535 log_kv({"wait_for_events": "timeout"})
536 break
537 except defer.CancelledError:
538 log_kv({"wait_for_events": "cancelled"})
539 break
537 except defer.CancelledError:
538 log_kv({"wait_for_events": "cancelled"})
539 break
540540
541541 if result is None:
542542 # This happened if there was no timeout or if the timeout had
6767 if row.entity.startswith("@"):
6868 self._device_list_stream_cache.entity_has_changed(row.entity, token)
6969 self.get_cached_devices_for_user.invalidate((row.entity,))
70 self._get_cached_user_device.invalidate_many((row.entity,))
70 self._get_cached_user_device.invalidate((row.entity,))
7171 self.get_device_list_last_stream_id_for_remote.invalidate((row.entity,))
7272
7373 else:
1616
1717 import logging
1818 import platform
19 from typing import TYPE_CHECKING, Optional, Tuple
1920
2021 import synapse
2122 from synapse.api.errors import Codes, NotFoundError, SynapseError
22 from synapse.http.server import JsonResource
23 from synapse.http.server import HttpServer, JsonResource
2324 from synapse.http.servlet import RestServlet, parse_json_object_from_request
25 from synapse.http.site import SynapseRequest
2426 from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
2527 from synapse.rest.admin.devices import (
2628 DeleteDevicesRestServlet,
6567 UserTokenRestServlet,
6668 WhoisRestServlet,
6769 )
68 from synapse.types import RoomStreamToken
70 from synapse.types import JsonDict, RoomStreamToken
6971 from synapse.util.versionstring import get_version_string
72
73 if TYPE_CHECKING:
74 from synapse.server import HomeServer
7075
7176 logger = logging.getLogger(__name__)
7277
7479 class VersionServlet(RestServlet):
7580 PATTERNS = admin_patterns("/server_version$")
7681
77 def __init__(self, hs):
82 def __init__(self, hs: "HomeServer"):
7883 self.res = {
7984 "server_version": get_version_string(synapse),
8085 "python_version": platform.python_version(),
8186 }
8287
83 def on_GET(self, request):
88 def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
8489 return 200, self.res
8590
8691
8994 "/purge_history/(?P<room_id>[^/]*)(/(?P<event_id>[^/]+))?"
9095 )
9196
92 def __init__(self, hs):
93 """
94
95 Args:
96 hs (synapse.server.HomeServer)
97 """
97 def __init__(self, hs: "HomeServer"):
9898 self.pagination_handler = hs.get_pagination_handler()
9999 self.store = hs.get_datastore()
100100 self.auth = hs.get_auth()
101101
102 async def on_POST(self, request, room_id, event_id):
102 async def on_POST(
103 self, request: SynapseRequest, room_id: str, event_id: Optional[str]
104 ) -> Tuple[int, JsonDict]:
103105 await assert_requester_is_admin(self.auth, request)
104106
105107 body = parse_json_object_from_request(request, allow_empty_body=True)
118120 if event.room_id != room_id:
119121 raise SynapseError(400, "Event is for wrong room.")
120122
123 # RoomStreamToken expects an int, not Optional[int]
124 assert event.internal_metadata.stream_ordering is not None
121125 room_token = RoomStreamToken(
122126 event.depth, event.internal_metadata.stream_ordering
123127 )
172176 class PurgeHistoryStatusRestServlet(RestServlet):
173177 PATTERNS = admin_patterns("/purge_history_status/(?P<purge_id>[^/]+)")
174178
175 def __init__(self, hs):
176 """
177
178 Args:
179 hs (synapse.server.HomeServer)
180 """
179 def __init__(self, hs: "HomeServer"):
181180 self.pagination_handler = hs.get_pagination_handler()
182181 self.auth = hs.get_auth()
183182
184 async def on_GET(self, request, purge_id):
183 async def on_GET(
184 self, request: SynapseRequest, purge_id: str
185 ) -> Tuple[int, JsonDict]:
185186 await assert_requester_is_admin(self.auth, request)
186187
187188 purge_status = self.pagination_handler.get_purge_status(purge_id)
202203 class AdminRestResource(JsonResource):
203204 """The REST resource which gets mounted at /_synapse/admin"""
204205
205 def __init__(self, hs):
206 def __init__(self, hs: "HomeServer"):
206207 JsonResource.__init__(self, hs, canonical_json=False)
207208 register_servlets(hs, self)
208209
209210
210 def register_servlets(hs, http_server):
211 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
211212 """
212213 Register all the admin servlets.
213214 """
241242 RateLimitRestServlet(hs).register(http_server)
242243
243244
244 def register_servlets_for_client_rest_resource(hs, http_server):
245 def register_servlets_for_client_rest_resource(
246 hs: "HomeServer", http_server: HttpServer
247 ) -> None:
245248 """Register only the servlets which need to be exposed on /_matrix/client/xxx"""
246249 WhoisRestServlet(hs).register(http_server)
247250 PurgeHistoryStatusRestServlet(hs).register(http_server)
1212 # limitations under the License.
1313
1414 import re
15 from typing import Iterable, Pattern
1516
1617 from synapse.api.auth import Auth
1718 from synapse.api.errors import AuthError
1920 from synapse.types import UserID
2021
2122
22 def admin_patterns(path_regex: str, version: str = "v1"):
23 def admin_patterns(path_regex: str, version: str = "v1") -> Iterable[Pattern]:
2324 """Returns the list of patterns for an admin endpoint
2425
2526 Args:
1111 # See the License for the specific language governing permissions and
1212 # limitations under the License.
1313 import logging
14 from typing import TYPE_CHECKING, Tuple
1415
1516 from synapse.api.errors import SynapseError
1617 from synapse.http.servlet import RestServlet
18 from synapse.http.site import SynapseRequest
1719 from synapse.rest.admin._base import admin_patterns, assert_user_is_admin
20 from synapse.types import JsonDict
21
22 if TYPE_CHECKING:
23 from synapse.server import HomeServer
1824
1925 logger = logging.getLogger(__name__)
2026
2430
2531 PATTERNS = admin_patterns("/delete_group/(?P<group_id>[^/]*)")
2632
27 def __init__(self, hs):
33 def __init__(self, hs: "HomeServer"):
2834 self.group_server = hs.get_groups_server_handler()
2935 self.is_mine_id = hs.is_mine_id
3036 self.auth = hs.get_auth()
3137
32 async def on_POST(self, request, group_id):
38 async def on_POST(
39 self, request: SynapseRequest, group_id: str
40 ) -> Tuple[int, JsonDict]:
3341 requester = await self.auth.get_user_by_req(request)
3442 await assert_user_is_admin(self.auth, requester.user)
3543
1616 from typing import TYPE_CHECKING, Tuple
1717
1818 from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
19 from synapse.http.server import HttpServer
1920 from synapse.http.servlet import RestServlet, parse_boolean, parse_integer
2021 from synapse.http.site import SynapseRequest
2122 from synapse.rest.admin._base import (
3637 this server.
3738 """
3839
39 PATTERNS = (
40 admin_patterns("/room/(?P<room_id>[^/]+)/media/quarantine")
41 +
40 PATTERNS = [
41 *admin_patterns("/room/(?P<room_id>[^/]+)/media/quarantine"),
4242 # This path kept around for legacy reasons
43 admin_patterns("/quarantine_media/(?P<room_id>[^/]+)")
44 )
43 *admin_patterns("/quarantine_media/(?P<room_id>[^/]+)"),
44 ]
4545
4646 def __init__(self, hs: "HomeServer"):
4747 self.store = hs.get_datastore()
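
The PATTERNS change above replaces tuple concatenation with a single flattening list literal. A minimal sketch of the idiom, using a hypothetical patterns() helper in place of admin_patterns:

import re

def patterns(path: str):
    # hypothetical stand-in for admin_patterns(): returns compiled patterns
    return [re.compile("^/_synapse/admin/v1" + path + "$")]

# Old style: concatenate the two iterables as tuples.
OLD = tuple(patterns("/room/(?P<room_id>[^/]+)/media/quarantine")) + tuple(
    patterns("/quarantine_media/(?P<room_id>[^/]+)")
)

# New style: unpack both iterables into one list with the * splat.
NEW = [
    *patterns("/room/(?P<room_id>[^/]+)/media/quarantine"),
    *patterns("/quarantine_media/(?P<room_id>[^/]+)"),
]

assert [p.pattern for p in OLD] == [p.pattern for p in NEW]
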
119119 return 200, {}
120120
121121
122 class UnquarantineMediaByID(RestServlet):
123 """Quarantines local or remote media by a given ID so that no one can download
124 it via this server.
125 """
126
127 PATTERNS = admin_patterns(
128 "/media/unquarantine/(?P<server_name>[^/]+)/(?P<media_id>[^/]+)"
129 )
130
131 def __init__(self, hs: "HomeServer"):
132 self.store = hs.get_datastore()
133 self.auth = hs.get_auth()
134
135 async def on_POST(
136 self, request: SynapseRequest, server_name: str, media_id: str
137 ) -> Tuple[int, JsonDict]:
138 requester = await self.auth.get_user_by_req(request)
139 await assert_user_is_admin(self.auth, requester.user)
140
141 logging.info(
142 "Remove from quarantine local media by ID: %s/%s", server_name, media_id
143 )
144
145 # Remove from quarantine this media id
146 await self.store.quarantine_media_by_id(server_name, media_id, None)
147
148 return 200, {}
149
150
122151 class ProtectMediaByID(RestServlet):
123152 """Protect local media from being quarantined."""
124153
136165
137166 logging.info("Protecting local media by ID: %s", media_id)
138167
139 # Quarantine this media id
140 await self.store.mark_local_media_as_safe(media_id)
168 # Protect this media id
169 await self.store.mark_local_media_as_safe(media_id, safe=True)
170
171 return 200, {}
172
173
174 class UnprotectMediaByID(RestServlet):
175 """Unprotect local media from being quarantined."""
176
177 PATTERNS = admin_patterns("/media/unprotect/(?P<media_id>[^/]+)")
178
179 def __init__(self, hs: "HomeServer"):
180 self.store = hs.get_datastore()
181 self.auth = hs.get_auth()
182
183 async def on_POST(
184 self, request: SynapseRequest, media_id: str
185 ) -> Tuple[int, JsonDict]:
186 requester = await self.auth.get_user_by_req(request)
187 await assert_user_is_admin(self.auth, requester.user)
188
189 logging.info("Unprotecting local media by ID: %s", media_id)
190
191 # Unprotect this media id
192 await self.store.mark_local_media_as_safe(media_id, safe=False)
141193
142194 return 200, {}
143195
259311 return 200, {"deleted_media": deleted_media, "total": total}
260312
261313
262 def register_servlets_for_media_repo(hs: "HomeServer", http_server):
314 def register_servlets_for_media_repo(hs: "HomeServer", http_server: HttpServer) -> None:
263315 """
264316 Media repo specific APIs.
265317 """
266318 PurgeMediaCacheRestServlet(hs).register(http_server)
267319 QuarantineMediaInRoom(hs).register(http_server)
268320 QuarantineMediaByID(hs).register(http_server)
321 UnquarantineMediaByID(hs).register(http_server)
269322 QuarantineMediaByUser(hs).register(http_server)
270323 ProtectMediaByID(hs).register(http_server)
324 UnprotectMediaByID(hs).register(http_server)
271325 ListMediaInRoom(hs).register(http_server)
272326 DeleteMediaByID(hs).register(http_server)
273327 DeleteMediaByDateSize(hs).register(http_server)
648648 limit = parse_integer(request, "limit", default=10)
649649
650650 # picking the API shape for symmetry with /messages
651 filter_str = parse_string(request, b"filter", encoding="utf-8")
651 filter_str = parse_string(request, "filter", encoding="utf-8")
652652 if filter_str:
653653 filter_json = urlparse.unquote(filter_str)
654654 event_filter = Filter(
477477
478478 class WhoisRestServlet(RestServlet):
479479 path_regex = "/whois/(?P<user_id>[^/]*)$"
480 PATTERNS = (
481 admin_patterns(path_regex)
482 +
480 PATTERNS = [
481 *admin_patterns(path_regex),
483482 # URL for spec reason
484483 # https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid
485 client_patterns("/admin" + path_regex, v1=True)
486 )
484 *client_patterns("/admin" + path_regex, v1=True),
485 ]
487486
488487 def __init__(self, hs: "HomeServer"):
489488 self.hs = hs
552551 class AccountValidityRenewServlet(RestServlet):
553552 PATTERNS = admin_patterns("/account_validity/validity$")
554553
555 def __init__(self, hs):
556 """
557 Args:
558 hs (synapse.server.HomeServer): server
559 """
554 def __init__(self, hs: "HomeServer"):
560555 self.hs = hs
561556 self.account_activity_handler = hs.get_account_validity_handler()
562557 self.auth = hs.get_auth()
1313
1414 import logging
1515 import re
16 from typing import TYPE_CHECKING, Awaitable, Callable, Dict, Optional
16 from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional
1717
1818 from synapse.api.errors import Codes, LoginError, SynapseError
1919 from synapse.api.ratelimiting import Ratelimiter
2424 from synapse.http.server import HttpServer, finish_request
2525 from synapse.http.servlet import (
2626 RestServlet,
27 parse_bytes_from_args,
2728 parse_json_object_from_request,
2829 parse_string,
2930 )
436437 finish_request(request)
437438 return
438439
439 client_redirect_url = parse_string(
440 request, "redirectUrl", required=True, encoding=None
441 )
440 args = request.args # type: Dict[bytes, List[bytes]] # type: ignore
441 client_redirect_url = parse_bytes_from_args(args, "redirectUrl", required=True)
442442 sso_url = await self._sso_handler.handle_redirect_request(
443443 request,
444444 client_redirect_url,
536536 self.store, request, default_limit=10
537537 )
538538 as_client_event = b"raw" not in request.args
539 filter_str = parse_string(request, b"filter", encoding="utf-8")
539 filter_str = parse_string(request, "filter", encoding="utf-8")
540540 if filter_str:
541541 filter_json = urlparse.unquote(filter_str)
542542 event_filter = Filter(
651651 limit = parse_integer(request, "limit", default=10)
652652
653653 # picking the API shape for symmetry with /messages
654 filter_str = parse_string(request, b"filter", encoding="utf-8")
654 filter_str = parse_string(request, "filter", encoding="utf-8")
655655 if filter_str:
656656 filter_json = urlparse.unquote(filter_str)
657657 event_filter = Filter(
909909 r"^/_matrix/client/unstable/org\.matrix\.msc2432"
910910 r"/rooms/(?P<room_id>[^/]*)/aliases"
911911 ),
912 ]
912 ] + list(client_patterns("/rooms/(?P<room_id>[^/]*)/aliases$", unstable=False))
913913
914914 def __init__(self, hs: "HomeServer"):
915915 super().__init__()
10591059 RoomRedactEventRestServlet(hs).register(http_server)
10601060 RoomTypingRestServlet(hs).register(http_server)
10611061 RoomEventContextServlet(hs).register(http_server)
1062
1063 if hs.config.experimental.spaces_enabled:
1064 RoomSpaceSummaryRestServlet(hs).register(http_server)
1062 RoomSpaceSummaryRestServlet(hs).register(http_server)
1063 RoomEventServlet(hs).register(http_server)
1064 JoinedRoomsRestServlet(hs).register(http_server)
1065 RoomAliasListServlet(hs).register(http_server)
1066 SearchRestServlet(hs).register(http_server)
10651067
10661068 # Some servlets only get registered for the main process.
10671069 if not is_worker:
10681070 RoomCreateRestServlet(hs).register(http_server)
10691071 RoomForgetRestServlet(hs).register(http_server)
1070 SearchRestServlet(hs).register(http_server)
1071 JoinedRoomsRestServlet(hs).register(http_server)
1072 RoomEventServlet(hs).register(http_server)
1073 RoomAliasListServlet(hs).register(http_server)
10741072
10751073
10761074 def register_deprecated_servlets(hs, http_server):
1515 from http import HTTPStatus
1616
1717 from synapse.api.errors import Codes, SynapseError
18 from synapse.http.servlet import (
19 RestServlet,
20 assert_params_in_dict,
21 parse_json_object_from_request,
22 )
18 from synapse.http.servlet import RestServlet, parse_json_object_from_request
2319
2420 from ._base import client_patterns
2521
4137 user_id = requester.user.to_string()
4238
4339 body = parse_json_object_from_request(request)
44 assert_params_in_dict(body, ("reason", "score"))
4540
46 if not isinstance(body["reason"], str):
41 if not isinstance(body.get("reason", ""), str):
4742 raise SynapseError(
4843 HTTPStatus.BAD_REQUEST,
4944 "Param 'reason' must be a string",
5045 Codes.BAD_JSON,
5146 )
52 if not isinstance(body["score"], int):
47 if not isinstance(body.get("score", 0), int):
5348 raise SynapseError(
5449 HTTPStatus.BAD_REQUEST,
5550 "Param 'score' must be an integer",
6055 room_id=room_id,
6156 event_id=event_id,
6257 user_id=user_id,
63 reason=body["reason"],
58 reason=body.get("reason"),
6459 content=body,
6560 received_ts=self.clock.time_msec(),
6661 )
1616 from hashlib import sha256
1717 from http import HTTPStatus
1818 from os import path
19 from typing import Dict, List
1920
2021 import jinja2
2122 from jinja2 import TemplateNotFound
2324 from synapse.api.errors import NotFoundError, StoreError, SynapseError
2425 from synapse.config import ConfigError
2526 from synapse.http.server import DirectServeHtmlResource, respond_with_html
26 from synapse.http.servlet import parse_string
27 from synapse.http.servlet import parse_bytes_from_args, parse_string
2728 from synapse.types import UserID
2829
2930 # language to use for the templates. TODO: figure this out from Accept-Language
115116 has_consented = False
116117 public_version = username == ""
117118 if not public_version:
118 userhmac_bytes = parse_string(request, "h", required=True, encoding=None)
119 args = request.args # type: Dict[bytes, List[bytes]]
120 userhmac_bytes = parse_bytes_from_args(args, "h", required=True)
119121
120122 self._check_hash(username, userhmac_bytes)
121123
151153 """
152154 version = parse_string(request, "v", required=True)
153155 username = parse_string(request, "u", required=True)
154 userhmac = parse_string(request, "h", required=True, encoding=None)
156 args = request.args # type: Dict[bytes, List[bytes]]
157 userhmac = parse_bytes_from_args(args, "h", required=True)
155158
156159 self._check_hash(username, userhmac)
157160
2121 from synapse.http.server import DirectServeJsonResource, respond_with_json
2222 from synapse.http.servlet import parse_integer, parse_json_object_from_request
2323 from synapse.util import json_decoder
24 from synapse.util.async_helpers import yieldable_gather_results
2425
2526 logger = logging.getLogger(__name__)
2627
209210 # If there is a cache miss, request the missing keys, then recurse (and
210211 # ensure the result is sent).
211212 if cache_misses and query_remote_on_cache_miss:
212 await self.fetcher.get_keys(cache_misses)
213 await yieldable_gather_results(
214 lambda t: self.fetcher.get_keys(*t),
215 (
216 (server_name, list(keys), 0)
217 for server_name, keys in cache_misses.items()
218 ),
219 )
213220 await self.query_keys(request, query, query_remote_on_cache_miss=False)
214221 else:
215222 signed_keys = []
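
For context on the hunk above: instead of one get_keys() call taking a mapping of every server's keys, the fetcher is now called once per server, with the calls run concurrently via yieldable_gather_results. A minimal sketch of the fan-out, with asyncio.gather standing in for Synapse's Twisted helper and a hypothetical get_keys:

import asyncio
from typing import Dict, List, Set

async def get_keys(
    server_name: str, key_ids: List[str], minimum_valid_until_ts: int
) -> Dict[str, str]:
    # hypothetical fetcher: pretend every requested key was found
    return {key_id: "verify key for " + server_name for key_id in key_ids}

async def fetch_all(cache_misses: Dict[str, Set[str]]) -> None:
    results = await asyncio.gather(
        *(
            get_keys(server_name, list(keys), 0)
            for server_name, keys in cache_misses.items()
        )
    )
    print(results)

asyncio.run(fetch_all({"server1": {"ed25519:a"}, "server2": {"ed25519:b"}}))
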
1313 # limitations under the License.
1414
1515 import logging
16 from typing import IO, TYPE_CHECKING
16 from typing import IO, TYPE_CHECKING, Dict, List, Optional
1717
1818 from twisted.web.server import Request
1919
2020 from synapse.api.errors import Codes, SynapseError
2121 from synapse.http.server import DirectServeJsonResource, respond_with_json
22 from synapse.http.servlet import parse_string
22 from synapse.http.servlet import parse_bytes_from_args
2323 from synapse.http.site import SynapseRequest
2424 from synapse.rest.media.v1.media_storage import SpamMediaException
2525
6060 errcode=Codes.TOO_LARGE,
6161 )
6262
63 upload_name = parse_string(request, b"filename", encoding=None)
64 if upload_name:
63 args = request.args # type: Dict[bytes, List[bytes]] # type: ignore
64 upload_name_bytes = parse_bytes_from_args(args, "filename")
65 if upload_name_bytes:
6566 try:
66 upload_name = upload_name.decode("utf8")
67 upload_name = upload_name_bytes.decode("utf8") # type: Optional[str]
6768 except UnicodeDecodeError:
6869 raise SynapseError(
6970 msg="Invalid UTF-8 filename parameter: %r" % (upload_name), code=400
3939
4040 from synapse.api.errors import StoreError
4141 from synapse.config.database import DatabaseConnectionConfig
42 from synapse.logging import opentracing
4243 from synapse.logging.context import (
4344 LoggingContext,
4445 current_context,
8990 db_args = dict(db_config.config.get("args", {}))
9091 db_args.setdefault("cp_reconnect", True)
9192
93 def _on_new_connection(conn):
94 # Ensure we have a logging context so we can correctly track queries,
95 # etc.
96 with LoggingContext("db.on_new_connection"):
97 engine.on_new_connection(
98 LoggingDatabaseConnection(conn, engine, "on_new_connection")
99 )
100
92101 return adbapi.ConnectionPool(
93102 db_config.config["name"],
94103 cp_reactor=reactor,
95 cp_openfun=lambda conn: engine.on_new_connection(
96 LoggingDatabaseConnection(conn, engine, "on_new_connection")
97 ),
104 cp_openfun=_on_new_connection,
98105 **db_args,
99106 )
100107
312319 start = time.time()
313320
314321 try:
315 return func(sql, *args)
322 with opentracing.start_active_span(
323 "db.query",
324 tags={
325 opentracing.tags.DATABASE_TYPE: "sql",
326 opentracing.tags.DATABASE_STATEMENT: sql,
327 },
328 ):
329 return func(sql, *args)
316330 except Exception as e:
317331 sql_logger.debug("[SQL FAIL] {%s} %s", self.name, e)
318332 raise
524538 exception_callbacks=exception_callbacks,
525539 )
526540 try:
527 r = func(cursor, *args, **kwargs)
528 conn.commit()
529 return r
541 with opentracing.start_active_span(
542 "db.txn",
543 tags={
544 opentracing.SynapseTags.DB_TXN_DESC: desc,
545 opentracing.SynapseTags.DB_TXN_ID: name,
546 },
547 ):
548 r = func(cursor, *args, **kwargs)
549 opentracing.log_kv({"message": "commit"})
550 conn.commit()
551 return r
530552 except self.engine.module.OperationalError as e:
531553 # This can happen if the database disappears mid
532554 # transaction.
540562 if i < N:
541563 i += 1
542564 try:
543 conn.rollback()
565 with opentracing.start_active_span("db.rollback"):
566 conn.rollback()
544567 except self.engine.module.Error as e1:
545568 transaction_logger.warning("[TXN EROLL] {%s} %s", name, e1)
546569 continue
553576 if i < N:
554577 i += 1
555578 try:
556 conn.rollback()
579 with opentracing.start_active_span("db.rollback"):
580 conn.rollback()
557581 except self.engine.module.Error as e1:
558582 transaction_logger.warning(
559583 "[TXN EROLL] {%s} %s",
652676 logger.warning("Starting db txn '%s' from sentinel context", desc)
653677
654678 try:
655 result = await self.runWithConnection(
656 self.new_transaction,
657 desc,
658 after_callbacks,
659 exception_callbacks,
660 func,
661 *args,
662 db_autocommit=db_autocommit,
663 **kwargs,
664 )
679 with opentracing.start_active_span(f"db.{desc}"):
680 result = await self.runWithConnection(
681 self.new_transaction,
682 desc,
683 after_callbacks,
684 exception_callbacks,
685 func,
686 *args,
687 db_autocommit=db_autocommit,
688 **kwargs,
689 )
665690
666691 for after_callback, after_args, after_kwargs in after_callbacks:
667692 after_callback(*after_args, **after_kwargs)
717742 with LoggingContext(
718743 str(curr_context), parent_context=parent_context
719744 ) as context:
720 sched_duration_sec = monotonic_time() - start_time
721 sql_scheduling_timer.observe(sched_duration_sec)
722 context.add_database_scheduled(sched_duration_sec)
723
724 if self.engine.is_connection_closed(conn):
725 logger.debug("Reconnecting closed database connection")
726 conn.reconnect()
727
728 try:
729 if db_autocommit:
730 self.engine.attempt_to_set_autocommit(conn, True)
731
732 db_conn = LoggingDatabaseConnection(
733 conn, self.engine, "runWithConnection"
734 )
735 return func(db_conn, *args, **kwargs)
736 finally:
737 if db_autocommit:
738 self.engine.attempt_to_set_autocommit(conn, False)
745 with opentracing.start_active_span(
746 operation_name="db.connection",
747 ):
748 sched_duration_sec = monotonic_time() - start_time
749 sql_scheduling_timer.observe(sched_duration_sec)
750 context.add_database_scheduled(sched_duration_sec)
751
752 if self.engine.is_connection_closed(conn):
753 logger.debug("Reconnecting closed database connection")
754 conn.reconnect()
755 opentracing.log_kv({"message": "reconnected"})
756
757 try:
758 if db_autocommit:
759 self.engine.attempt_to_set_autocommit(conn, True)
760
761 db_conn = LoggingDatabaseConnection(
762 conn, self.engine, "runWithConnection"
763 )
764 return func(db_conn, *args, **kwargs)
765 finally:
766 if db_autocommit:
767 self.engine.attempt_to_set_autocommit(conn, False)
739768
740769 return await make_deferred_yieldable(
741770 self._db_pool.runWithConnection(inner_func, *args, **kwargs)
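
The database hunks above wrap each transaction, statement, rollback and connection in an opentracing span. A minimal sketch of the same nesting using the plain opentracing package (Synapse's synapse.logging.opentracing wrapper no-ops when tracing is disabled); the span names follow the patch:

import sqlite3

import opentracing

tracer = opentracing.global_tracer()  # a no-op tracer unless one is installed

def run_txn(conn: sqlite3.Connection, sql: str, args: tuple):
    with tracer.start_active_span("db.txn"):        # one span per transaction
        with tracer.start_active_span("db.query"):  # one span per statement
            cur = conn.cursor()
            cur.execute(sql, args)
            rows = cur.fetchall()
        conn.commit()
        return rows

conn = sqlite3.connect(":memory:")
print(run_txn(conn, "SELECT ?", (1,)))  # [(1,)]
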
167167 backfilled,
168168 ):
169169 self._invalidate_get_event_cache(event_id)
170 self.have_seen_event.invalidate((room_id, event_id))
170171
171172 self.get_latest_event_ids_in_room.invalidate((room_id,))
172173
173 self.get_unread_event_push_actions_by_room_for_user.invalidate_many((room_id,))
174 self.get_unread_event_push_actions_by_room_for_user.invalidate((room_id,))
174175
175176 if not backfilled:
176177 self._events_stream_cache.entity_has_changed(room_id, stream_ordering)
183184 self.get_invited_rooms_for_local_user.invalidate((state_key,))
184185
185186 if relates_to:
186 self.get_relations_for_event.invalidate_many((relates_to,))
187 self.get_aggregation_groups_for_event.invalidate_many((relates_to,))
187 self.get_relations_for_event.invalidate((relates_to,))
188 self.get_aggregation_groups_for_event.invalidate((relates_to,))
188189 self.get_applicable_edit.invalidate((relates_to,))
189190
190191 async def invalidate_cache_and_stream(self, cache_name: str, keys: Tuple[Any, ...]):
12811281 )
12821282
12831283 txn.call_after(self.get_cached_devices_for_user.invalidate, (user_id,))
1284 txn.call_after(self._get_cached_user_device.invalidate_many, (user_id,))
1284 txn.call_after(self._get_cached_user_device.invalidate, (user_id,))
12851285 txn.call_after(
12861286 self.get_device_list_last_stream_id_for_remote.invalidate, (user_id,)
12871287 )
859859 not be deleted.
860860 """
861861 txn.call_after(
862 self.get_unread_event_push_actions_by_room_for_user.invalidate_many,
862 self.get_unread_event_push_actions_by_room_for_user.invalidate,
863863 (room_id, user_id),
864864 )
865865
17471747 },
17481748 )
17491749
1750 txn.call_after(self.store.get_relations_for_event.invalidate_many, (parent_id,))
1750 txn.call_after(self.store.get_relations_for_event.invalidate, (parent_id,))
17511751 txn.call_after(
1752 self.store.get_aggregation_groups_for_event.invalidate_many, (parent_id,)
1752 self.store.get_aggregation_groups_for_event.invalidate, (parent_id,)
17531753 )
17541754
17551755 if rel_type == RelationTypes.REPLACE:
19021902
19031903 for user_id in user_ids:
19041904 txn.call_after(
1905 self.store.get_unread_event_push_actions_by_room_for_user.invalidate_many,
1905 self.store.get_unread_event_push_actions_by_room_for_user.invalidate,
19061906 (room_id, user_id),
19071907 )
19081908
19161916 def _remove_push_actions_for_event_id_txn(self, txn, room_id, event_id):
19171917 # Sad that we have to blow away the cache for the whole room here
19181918 txn.call_after(
1919 self.store.get_unread_event_push_actions_by_room_for_user.invalidate_many,
1919 self.store.get_unread_event_push_actions_by_room_for_user.invalidate,
19201920 (room_id,),
19211921 )
19221922 txn.execute(
2121 Iterable,
2222 List,
2323 Optional,
24 Set,
2425 Tuple,
2526 overload,
2627 )
5455 from synapse.storage.util.id_generators import MultiWriterIdGenerator, StreamIdGenerator
5556 from synapse.storage.util.sequence import build_sequence_generator
5657 from synapse.types import JsonDict, get_domain_from_id
57 from synapse.util.caches.descriptors import cached
58 from synapse.util.caches.descriptors import cached, cachedList
5859 from synapse.util.caches.lrucache import LruCache
5960 from synapse.util.iterutils import batch_iter
6061 from synapse.util.metrics import Measure
10441045
10451046 return {r["event_id"] for r in rows}
10461047
1047 async def have_seen_events(self, event_ids):
1048 async def have_seen_events(
1049 self, room_id: str, event_ids: Iterable[str]
1050 ) -> Set[str]:
10481051 """Given a list of event ids, check if we have already processed them.
10491052
1050 Args:
1051 event_ids (iterable[str]):
1053 The room_id is only used to structure the cache (so that it can later be
1054 invalidated by room_id) - there is no guarantee that the events are actually
1055 in the room in question.
1056
1057 Args:
1058 room_id: Room we are polling
1059 event_ids: events we are looking for
10521060
10531061 Returns:
10541062 set[str]: The events we have already seen.
10551063 """
1064 res = await self._have_seen_events_dict(
1065 (room_id, event_id) for event_id in event_ids
1066 )
1067 return {eid for ((_rid, eid), have_event) in res.items() if have_event}
1068
1069 @cachedList("have_seen_event", "keys")
1070 async def _have_seen_events_dict(
1071 self, keys: Iterable[Tuple[str, str]]
1072 ) -> Dict[Tuple[str, str], bool]:
1073 """Helper for have_seen_events
1074
1075 Returns:
1076 a dict {(room_id, event_id) -> bool}
1077 """
10561078 # if the event cache contains the event, obviously we've seen it.
1057 results = {x for x in event_ids if self._get_event_cache.contains(x)}
1058
1059 def have_seen_events_txn(txn, chunk):
1060 sql = "SELECT event_id FROM events as e WHERE "
1079
1080 cache_results = {
1081 (rid, eid) for (rid, eid) in keys if self._get_event_cache.contains((eid,))
1082 }
1083 results = {x: True for x in cache_results}
1084
1085 def have_seen_events_txn(txn, chunk: Tuple[Tuple[str, str], ...]):
1086 # we deliberately do *not* query the database for room_id, to make the
1087 # query an index-only lookup on `events_event_id_key`.
1088 #
1089 # We therefore pull the events from the database into a set...
1090
1091 sql = "SELECT event_id FROM events AS e WHERE "
10611092 clause, args = make_in_list_sql_clause(
1062 txn.database_engine, "e.event_id", chunk
1093 txn.database_engine, "e.event_id", [eid for (_rid, eid) in chunk]
10631094 )
10641095 txn.execute(sql + clause, args)
1065 results.update(row[0] for row in txn)
1066
1067 for chunk in batch_iter((x for x in event_ids if x not in results), 100):
1096 found_events = {eid for eid, in txn}
1097
1098 # ... and then we can update the results for each row in the batch
1099 results.update({(rid, eid): (eid in found_events) for (rid, eid) in chunk})
1100
1101 # each batch requires its own index scan, so we make the batches as big as
1102 # possible.
1103 for chunk in batch_iter((k for k in keys if k not in cache_results), 500):
10681104 await self.db_pool.runInteraction(
10691105 "have_seen_events", have_seen_events_txn, chunk
10701106 )
1107
10711108 return results
1109
1110 @cached(max_entries=100000, tree=True)
1111 async def have_seen_event(self, room_id: str, event_id: str):
1112 # this only exists for the benefit of the @cachedList descriptor on
1113 # _have_seen_events_dict
1114 raise NotImplementedError()
10721115
10731116 def _get_current_state_event_counts_txn(self, txn, room_id):
10741117 """
142142 "created_ts",
143143 "quarantined_by",
144144 "url_cache",
145 "safe_from_quarantine",
145146 ),
146147 allow_none=True,
147148 desc="get_local_media",
295296 desc="store_local_media",
296297 )
297298
298 async def mark_local_media_as_safe(self, media_id: str) -> None:
299 """Mark a local media as safe from quarantining."""
299 async def mark_local_media_as_safe(self, media_id: str, safe: bool = True) -> None:
300 """Mark a local media as safe or unsafe from quarantining."""
300301 await self.db_pool.simple_update_one(
301302 table="local_media_repository",
302303 keyvalues={"media_id": media_id},
303 updatevalues={"safe_from_quarantine": True},
304 updatevalues={"safe_from_quarantine": safe},
304305 desc="mark_local_media_as_safe",
305306 )
306307
4949 instance_name=self._instance_name,
5050 tables=[("presence_stream", "instance_name", "stream_id")],
5151 sequence_name="presence_stream_sequence",
52 writers=hs.config.worker.writers.to_device,
52 writers=hs.config.worker.writers.presence,
5353 )
5454 else:
5555 self._presence_id_gen = StreamIdGenerator(
1515 from typing import Any, List, Set, Tuple
1616
1717 from synapse.api.errors import SynapseError
18 from synapse.storage._base import SQLBaseStore
18 from synapse.storage.databases.main import CacheInvalidationWorkerStore
1919 from synapse.storage.databases.main.state import StateGroupWorkerStore
2020 from synapse.types import RoomStreamToken
2121
2222 logger = logging.getLogger(__name__)
2323
2424
25 class PurgeEventsStore(StateGroupWorkerStore, SQLBaseStore):
25 class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore):
2626 async def purge_history(
2727 self, room_id: str, token: str, delete_local_events: bool
2828 ) -> Set[int]:
202202 "DELETE FROM event_to_state_groups "
203203 "WHERE event_id IN (SELECT event_id from events_to_purge)"
204204 )
205 for event_id, _ in event_rows:
206 txn.call_after(self._get_state_group_for_event.invalidate, (event_id,))
207205
208206 # Delete all remote non-state events
209207 for table in (
281279 # finally, drop the temp table. this will commit the txn in sqlite,
282280 # so make sure to keep this actually last.
283281 txn.execute("DROP TABLE events_to_purge")
282
283 for event_id, should_delete in event_rows:
284 self._invalidate_cache_and_stream(
285 txn, self._get_state_group_for_event, (event_id,)
286 )
287
288 # XXX: This is racy, since have_seen_events could be called between the
289 # transaction completing and the invalidation running. On the other hand,
290 # that's no different to calling `have_seen_events` just before the
291 # event is deleted from the database.
292 if should_delete:
293 self._invalidate_cache_and_stream(
294 txn, self.have_seen_event, (room_id, event_id)
295 )
284296
285297 logger.info("[purge] done")
286298
421433 # index on them. In any case we should be clearing out 'stream' tables
422434 # periodically anyway (#5888)
423435
424 # TODO: we could probably usefully do a bunch of cache invalidation here
436 # TODO: we could probably usefully do a bunch more cache invalidation here
437
438 # XXX: as with purge_history, this is racy, but no worse than other races
439 # that already exist.
440 self._invalidate_cache_and_stream(txn, self.have_seen_event, (room_id,))
425441
426442 logger.info("[purge] done")
427443
459459
460460 def invalidate_caches_for_receipt(self, room_id, receipt_type, user_id):
461461 self.get_receipts_for_user.invalidate((user_id, receipt_type))
462 self._get_linearized_receipts_for_room.invalidate_many((room_id,))
462 self._get_linearized_receipts_for_room.invalidate((room_id,))
463463 self.get_last_receipt_event_id_for_user.invalidate(
464464 (user_id, room_id, receipt_type)
465465 )
658658 )
659659 txn.call_after(self.get_receipts_for_user.invalidate, (user_id, receipt_type))
660660 # FIXME: This shouldn't invalidate the whole cache
661 txn.call_after(
662 self._get_linearized_receipts_for_room.invalidate_many, (room_id,)
663 )
661 txn.call_after(self._get_linearized_receipts_for_room.invalidate, (room_id,))
664662
665663 self.db_pool.simple_delete_txn(
666664 txn,
763763 self,
764764 server_name: str,
765765 media_id: str,
766 quarantined_by: str,
766 quarantined_by: Optional[str],
767767 ) -> int:
768 """quarantines a single local or remote media id
768 """quarantines or unquarantines a single local or remote media id
769769
770770 Args:
771771 server_name: The name of the server that holds this media
772772 media_id: The ID of the media to be quarantined
773773 quarantined_by: The user ID that initiated the quarantine request
774 If it is `None`, the media will be removed from quarantine.
774775 """
775776 logger.info("Quarantining media: %s/%s", server_name, media_id)
776777 is_local = server_name == self.config.server_name
837838 txn,
838839 local_mxcs: List[str],
839840 remote_mxcs: List[Tuple[str, str]],
840 quarantined_by: str,
841 quarantined_by: Optional[str],
841842 ) -> int:
842 """Quarantine local and remote media items
843 """Quarantine and unquarantine local and remote media items
843844
844845 Args:
845846 txn (cursor)
847848 remote_mxcs: A list of (remote server, media id) tuples representing
848849 remote mxc URLs
849850 quarantined_by: The ID of the user who initiated the quarantine request
851 If it is `None`, the media will be removed from quarantine.
850852 Returns:
851853 The total number of media items quarantined
852854 """
855
853856 # Update all the tables to set the quarantined_by flag
854 txn.executemany(
855 """
857 sql = """
856858 UPDATE local_media_repository
857859 SET quarantined_by = ?
858 WHERE media_id = ? AND safe_from_quarantine = ?
859 """,
860 ((quarantined_by, media_id, False) for media_id in local_mxcs),
861 )
860 WHERE media_id = ?
861 """
862
863 # set quarantine
864 if quarantined_by is not None:
865 sql += "AND safe_from_quarantine = ?"
866 rows = [(quarantined_by, media_id, False) for media_id in local_mxcs]
867 # remove from quarantine
868 else:
869 rows = [(quarantined_by, media_id) for media_id in local_mxcs]
870
871 txn.executemany(sql, rows)
862872 # Note that a rowcount of -1 can be used to indicate no rows were affected.
863873 total_media_quarantined = txn.rowcount if txn.rowcount > 0 else 0
864874
14971507 room_id: str,
14981508 event_id: str,
14991509 user_id: str,
1500 reason: str,
1510 reason: Optional[str],
15011511 content: JsonDict,
15021512 received_ts: int,
15031513 ) -> None:
396396 # ... persist event ...
397397 """
398398
399 # If we have a list of instances that are allowed to write to this
400 # stream, make sure we're in it.
401 if self._writers and self._instance_name not in self._writers:
402 raise Exception("Tried to allocate stream ID on non-writer")
403
399404 return _MultiWriterCtxManager(self)
400405
401406 def get_next_mult(self, n: int):
405410 # ... persist events ...
406411 """
407412
413 # If we have a list of instances that are allowed to write to this
414 # stream, make sure we're in it.
415 if self._writers and self._instance_name not in self._writers:
416 raise Exception("Tried to allocate stream ID on non-writer")
417
408418 return _MultiWriterCtxManager(self, n)
409419
410420 def get_next_txn(self, txn: LoggingTransaction):
414424 stream_id = stream_id_gen.get_next(txn)
415425 # ... persist event ...
416426 """
427
428 # If we have a list of instances that are allowed to write to this
429 # stream, make sure we're in it.
430 if self._writers and self._instance_name not in self._writers:
431 raise Exception("Tried to allocate stream ID on non-writer")
417432
418433 next_id = self._load_next_id_txn(txn)
419434
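
The three hunks above add the same guard to get_next, get_next_mult and get_next_txn. A small self-contained sketch of the behaviour, under the assumption that an empty writer list means any instance may write:

from typing import List

class StreamIdAllocator:
    def __init__(self, instance_name: str, writers: List[str]):
        self._instance_name = instance_name
        self._writers = writers
        self._current = 0

    def get_next(self) -> int:
        # If we have a list of instances that are allowed to write to this
        # stream, make sure we're in it.
        if self._writers and self._instance_name not in self._writers:
            raise Exception("Tried to allocate stream ID on non-writer")
        self._current += 1
        return self._current

assert StreamIdAllocator("worker1", writers=["worker1"]).get_next() == 1
try:
    StreamIdAllocator("worker2", writers=["worker1"]).get_next()
except Exception as e:
    print(e)  # Tried to allocate stream ID on non-writer
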
1414
1515 import collections
1616 import inspect
17 import itertools
1718 import logging
1819 from contextlib import contextmanager
1920 from typing import (
159160 )
160161
161162
163 T = TypeVar("T")
164
165
162166 def concurrently_execute(
163 func: Callable, args: Iterable[Any], limit: int
167 func: Callable[[T], Any], args: Iterable[T], limit: int
164168 ) -> defer.Deferred:
165169 """Executes the function with each argument concurrently while limiting
166170 the number of concurrent executions.
172176 limit: Maximum number of concurrent executions.
173177
174178 Returns:
175 Deferred[list]: Resolved when all function invocations have finished.
179 Deferred: Resolved when all function invocations have finished.
176180 """
177181 it = iter(args)
178182
179 async def _concurrently_execute_inner():
183 async def _concurrently_execute_inner(value: T) -> None:
180184 try:
181185 while True:
182 await maybe_awaitable(func(next(it)))
186 await maybe_awaitable(func(value))
187 value = next(it)
183188 except StopIteration:
184189 pass
185190
191 # We use `itertools.islice` to handle the case where the number of args is
192 # less than the limit, avoiding spawning unnecessary background
193 # tasks.
186194 return make_deferred_yieldable(
187195 defer.gatherResults(
188 [run_in_background(_concurrently_execute_inner) for _ in range(limit)],
196 [
197 run_in_background(_concurrently_execute_inner, value)
198 for value in itertools.islice(it, limit)
199 ],
189200 consumeErrors=True,
190201 )
191202 ).addErrback(unwrapFirstError)
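
To restate the semantics of the patched concurrently_execute: at most `limit` workers run, each seeded with one value from the iterator, and each keeps pulling further values until the iterator is exhausted; islice simply avoids spawning workers that would have nothing to do. An asyncio re-statement (the real helper is Twisted-based):

import asyncio
import itertools

async def concurrently_execute(func, args, limit: int) -> None:
    it = iter(args)

    async def worker(value):
        try:
            while True:
                await func(value)
                value = next(it)
        except StopIteration:
            pass

    # islice: with fewer args than `limit`, fewer workers are spawned
    await asyncio.gather(*(worker(v) for v in itertools.islice(it, limit)))

async def show(x):
    print("processing", x)

asyncio.run(concurrently_execute(show, range(5), limit=3))
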
2424 TypeVar,
2525 )
2626
27 from prometheus_client import Gauge
28
2729 from twisted.internet import defer
2830
2931 from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
30 from synapse.metrics import LaterGauge
3132 from synapse.metrics.background_process_metrics import run_as_background_process
3233 from synapse.util import Clock
3334
3637
3738 V = TypeVar("V")
3839 R = TypeVar("R")
40
41 number_queued = Gauge(
42 "synapse_util_batching_queue_number_queued",
43 "The number of items waiting in the queue across all keys",
44 labelnames=("name",),
45 )
46
47 number_in_flight = Gauge(
48 "synapse_util_batching_queue_number_pending",
49 "The number of items across all keys either being processed or waiting in a queue",
50 labelnames=("name",),
51 )
52
53 number_of_keys = Gauge(
54 "synapse_util_batching_queue_number_of_keys",
55 "The number of distinct keys that have items queued",
56 labelnames=("name",),
57 )
3958
4059
4160 class BatchingQueue(Generic[V, R]):
4766 called, and will keep being called until the queue has been drained (for the
4867 given key).
4968
69 If the processing function raises an exception then the exception is proxied
70 through to the callers waiting on that batch of work.
71
5072 Note that the return value of `add_to_queue` will be the return value of the
5173 processing function that processed the given item. This means that the
5274 returned value will likely include data for other items that were in the
5375 batch.
76
77 Args:
78 name: A name for the queue, used for logging contexts and metrics.
79 This must be unique, otherwise the metrics will be wrong.
80 clock: The clock to use to schedule work.
81 process_batch_callback: The callback to be run to process a batch of
82 work.
5483 """
5584
5685 def __init__(
72101 # The function to call with batches of values.
73102 self._process_batch_callback = process_batch_callback
74103
75 LaterGauge(
76 "synapse_util_batching_queue_number_queued",
77 "The number of items waiting in the queue across all keys",
78 labels=("name",),
79 caller=lambda: sum(len(v) for v in self._next_values.values()),
104 number_queued.labels(self._name).set_function(
105 lambda: sum(len(q) for q in self._next_values.values())
80106 )
81107
82 LaterGauge(
83 "synapse_util_batching_queue_number_of_keys",
84 "The number of distinct keys that have items queued",
85 labels=("name",),
86 caller=lambda: len(self._next_values),
87 )
108 number_of_keys.labels(self._name).set_function(lambda: len(self._next_values))
109
110 self._number_in_flight_metric = number_in_flight.labels(
111 self._name
112 ) # type: Gauge
88113
89114 async def add_to_queue(self, value: V, key: Hashable = ()) -> R:
90115 """Adds the value to the queue with the given key, returning the result
106131 if key not in self._processing_keys:
107132 run_as_background_process(self._name, self._process_queue, key)
108133
109 return await make_deferred_yieldable(d)
134 with self._number_in_flight_metric.track_inprogress():
135 return await make_deferred_yieldable(d)
110136
111137 async def _process_queue(self, key: Hashable) -> None:
112138 """A background task to repeatedly pull things off the queue for the
113139 given key and call the `self._process_batch_callback` with the values.
114140 """
115141
142 if key in self._processing_keys:
143 return
144
116145 try:
117 if key in self._processing_keys:
118 return
119
120146 self._processing_keys.add(key)
121147
122148 while True:
136162 values = [value for value, _ in next_values]
137163 results = await self._process_batch_callback(values)
138164
139 for _, deferred in next_values:
140 with PreserveLoggingContext():
165 with PreserveLoggingContext():
166 for _, deferred in next_values:
141167 deferred.callback(results)
142168
143169 except Exception as e:
144 for _, deferred in next_values:
145 if deferred.called:
146 continue
170 with PreserveLoggingContext():
171 for _, deferred in next_values:
172 if deferred.called:
173 continue
147174
148 with PreserveLoggingContext():
149175 deferred.errback(e)
150176
151177 finally:
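
A self-contained asyncio re-statement of the BatchingQueue contract described in the docstring above: values queue per key, one worker per key drains batches, and every waiter in a batch receives the batch's result or, per the new docstring, its exception. This sketches the contract, not Synapse's Twisted implementation:

import asyncio
from collections import defaultdict

class MiniBatchingQueue:
    def __init__(self, process_batch):
        self._process_batch = process_batch
        self._pending = defaultdict(list)  # key -> [(value, future), ...]
        self._processing = set()

    async def add_to_queue(self, value, key=()):
        fut = asyncio.get_running_loop().create_future()
        self._pending[key].append((value, fut))
        if key not in self._processing:
            self._processing.add(key)
            asyncio.create_task(self._process_queue(key))
        return await fut

    async def _process_queue(self, key):
        try:
            while self._pending[key]:
                batch = self._pending.pop(key)
                try:
                    result = await self._process_batch([v for v, _ in batch])
                    for _, fut in batch:
                        fut.set_result(result)
                except Exception as e:
                    # proxy the failure to every caller in the batch
                    for _, fut in batch:
                        fut.set_exception(e)
        finally:
            self._processing.discard(key)

async def main():
    q = MiniBatchingQueue(lambda vals: asyncio.sleep(0, result=sum(vals)))
    print(await asyncio.gather(q.add_to_queue(1), q.add_to_queue(2)))  # [3, 3]

asyncio.run(main())
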
1515
1616 import enum
1717 import threading
18 from typing import (
19 Callable,
20 Generic,
21 Iterable,
22 MutableMapping,
23 Optional,
24 TypeVar,
25 Union,
26 cast,
27 )
18 from typing import Callable, Generic, Iterable, MutableMapping, Optional, TypeVar, Union
2819
2920 from prometheus_client import Gauge
3021
9081 # _pending_deferred_cache maps from the key value to a `CacheEntry` object.
9182 self._pending_deferred_cache = (
9283 cache_type()
93 ) # type: MutableMapping[KT, CacheEntry]
84 ) # type: Union[TreeCache, MutableMapping[KT, CacheEntry]]
9485
9586 def metrics_cb():
9687 cache_pending_metric.labels(name).set(len(self._pending_deferred_cache))
286277 self.cache.set(key, value, callbacks=callbacks)
287278
288279 def invalidate(self, key):
280 """Delete a key, or tree of entries
281
282 If the cache is backed by a regular dict, then "key" must be of
283 the right type for this cache
284
285 If the cache is backed by a TreeCache, then "key" must be a tuple, but
286 may be of lower cardinality than the TreeCache - in which case the whole
287 subtree is deleted.
288 """
289289 self.check_thread()
290 self.cache.pop(key, None)
290 self.cache.del_multi(key)
291291
292292 # if we have a pending lookup for this key, remove it from the
293293 # _pending_deferred_cache, which will (a) stop it being returned
298298 # run the invalidation callbacks now, rather than waiting for the
299299 # deferred to resolve.
300300 if entry:
301 entry.invalidate()
302
303 def invalidate_many(self, key: KT):
304 self.check_thread()
305 if not isinstance(key, tuple):
306 raise TypeError("The cache key must be a tuple not %r" % (type(key),))
307 key = cast(KT, key)
308 self.cache.del_multi(key)
309
310 # if we have a pending lookup for this key, remove it from the
311 # _pending_deferred_cache, as above
312 entry_dict = self._pending_deferred_cache.pop(key, None)
313 if entry_dict is not None:
314 for entry in iterate_tree_cache_entry(entry_dict):
301 # _pending_deferred_cache.pop should either return a CacheEntry, or, in the
302 # case of a TreeCache, a dict of keys to cache entries. Either way calling
303 # iterate_tree_cache_entry on it will do the right thing.
304 for entry in iterate_tree_cache_entry(entry):
315305 entry.invalidate()
316306
317307 def invalidate_all(self):
4747 class _CachedFunction(Generic[F]):
4848 invalidate = None # type: Any
4949 invalidate_all = None # type: Any
50 invalidate_many = None # type: Any
5150 prefill = None # type: Any
5251 cache = None # type: Any
5352 num_args = None # type: Any
261260 ):
262261 super().__init__(orig, num_args=num_args, cache_context=cache_context)
263262
263 if tree and self.num_args < 2:
264 raise RuntimeError(
265 "tree=True is nonsensical for cached functions with a single parameter"
266 )
267
264268 self.max_entries = max_entries
265269 self.tree = tree
266270 self.iterable = iterable
301305 wrapped = cast(_CachedFunction, _wrapped)
302306
303307 if self.num_args == 1:
308 assert not self.tree
304309 wrapped.invalidate = lambda key: cache.invalidate(key[0])
305310 wrapped.prefill = lambda key, val: cache.prefill(key[0], val)
306311 else:
307312 wrapped.invalidate = cache.invalidate
308 wrapped.invalidate_many = cache.invalidate_many
309313 wrapped.prefill = cache.prefill
310314
311315 wrapped.invalidate_all = cache.invalidate_all
151151 """
152152 Least-recently-used cache, supporting prometheus metrics and invalidation callbacks.
153153
154 Supports del_multi only if cache_type=TreeCache
155154 If cache_type=TreeCache, all keys must be tuples.
156155 """
157156
392391
393392 @synchronized
394393 def cache_del_multi(key: KT) -> None:
394 """Delete an entry, or tree of entries
395
396 If the LruCache is backed by a regular dict, then "key" must be of
397 the right type for this cache
398
399 If the LruCache is backed by a TreeCache, then "key" must be a tuple, but
400 may be of lower cardinality than the TreeCache - in which case the whole
401 subtree is deleted.
395402 """
396 This will only work if constructed with cache_type=TreeCache
397 """
398 popped = cache.pop(key)
403 popped = cache.pop(key, None)
399404 if popped is None:
400405 return
401406 # for each deleted node, we now need to remove it from the linked list
429434 self.set = cache_set
430435 self.setdefault = cache_set_default
431436 self.pop = cache_pop
437 self.del_multi = cache_del_multi
432438 # `invalidate` is exposed for consistency with DeferredCache, so that it can be
433439 # invalidated by the cache invalidation replication stream.
434 self.invalidate = cache_pop
435 if cache_type is TreeCache:
436 self.del_multi = cache_del_multi
440 self.invalidate = cache_del_multi
437441 self.len = synchronized(cache_len)
438442 self.contains = cache_contains
439443 self.clear = cache_clear
8888 value. If the key is partial, the TreeCacheNode corresponding to the part
8989 of the tree that was removed.
9090 """
91 if not isinstance(key, tuple):
92 raise TypeError("The cache key must be a tuple not %r" % (type(key),))
93
9194 # a list of the nodes we have touched on the way down the tree
9295 nodes = []
9396
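
The cache hunks above unify invalidate() and invalidate_many() into a single tree-aware invalidate: with tuple keys, a shorter (partial) key deletes the whole subtree beneath it. A plain-dict illustration of those semantics, not Synapse's TreeCache:

from typing import Dict, Tuple

cache: Dict[Tuple[str, str], bool] = {
    ("!room:a", "$ev1"): True,
    ("!room:a", "$ev2"): True,
    ("!room:b", "$ev3"): True,
}

def del_multi(partial_key: tuple) -> None:
    # delete every entry whose key starts with `partial_key`
    for key in [k for k in cache if k[: len(partial_key)] == partial_key]:
        del cache[key]

del_multi(("!room:a",))  # drops the whole subtree for !room:a
assert list(cache) == [("!room:b", "$ev3")]
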
9696 write("started %s(%s)" % (app, ",".join(config_files)), colour=GREEN)
9797 return True
9898 except subprocess.CalledProcessError as e:
99 write(
100 "error starting %s(%s) (exit code: %d); see above for logs"
101 % (app, ",".join(config_files), e.returncode),
102 colour=RED,
103 )
99 err = "%s(%s) failed to start (exit code: %d). Check the Synapse logfile" % (
100 app,
101 ",".join(config_files),
102 e.returncode,
103 )
104 if daemonize:
105 err += ", or run synctl with --no-daemonize"
106 err += "."
107 write(err, colour=RED, stream=sys.stderr)
104108 return False
105109
106110
7373
7474 config = {
7575 "tls_certificate_path": os.path.join(config_dir, "cert.pem"),
76 "tls_fingerprints": [],
7776 }
7877
7978 t = TestConfig()
8079 t.read_config(config, config_dir_path="", data_dir_path="")
81 t.read_certificate_from_disk(require_cert_and_key=False)
80 t.read_tls_certificate()
8281
8382 warnings = self.flushWarnings()
8483 self.assertEqual(len(warnings), 1)
1111 # See the License for the specific language governing permissions and
1212 # limitations under the License.
1313 import time
14 from typing import Dict, List
1415 from unittest.mock import Mock
1516
1617 import attr
2021 from nacl.signing import SigningKey
2122 from signedjson.key import encode_verify_key_base64, get_verify_key
2223
23 from twisted.internet import defer
2424 from twisted.internet.defer import Deferred, ensureDeferred
2525
2626 from synapse.api.errors import SynapseError
9191 # deferred completes.
9292 first_lookup_deferred = Deferred()
9393
94 async def first_lookup_fetch(keys_to_fetch):
95 self.assertEquals(current_context().request.id, "context_11")
96 self.assertEqual(keys_to_fetch, {"server10": {get_key_id(key1): 0}})
94 async def first_lookup_fetch(
95 server_name: str, key_ids: List[str], minimum_valid_until_ts: int
96 ) -> Dict[str, FetchKeyResult]:
97 # self.assertEquals(current_context().request.id, "context_11")
98 self.assertEqual(server_name, "server10")
99 self.assertEqual(key_ids, [get_key_id(key1)])
100 self.assertEqual(minimum_valid_until_ts, 0)
97101
98102 await make_deferred_yieldable(first_lookup_deferred)
99 return {
100 "server10": {
101 get_key_id(key1): FetchKeyResult(get_verify_key(key1), 100)
102 }
103 }
103 return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 100)}
104104
105105 mock_fetcher.get_keys.side_effect = first_lookup_fetch
106106
107107 async def first_lookup():
108108 with LoggingContext("context_11", request=FakeRequest("context_11")):
109109 res_deferreds = kr.verify_json_objects_for_server(
110 [("server10", json1, 0, "test10"), ("server11", {}, 0, "test11")]
110 [("server10", json1, 0), ("server11", {}, 0)]
111111 )
112112
113113 # the unsigned json should be rejected pretty quickly
125125
126126 d0 = ensureDeferred(first_lookup())
127127
128 self.pump()
129
128130 mock_fetcher.get_keys.assert_called_once()
129131
130132 # a second request for a server with outstanding requests
131133 # should block rather than start a second call
132134
133 async def second_lookup_fetch(keys_to_fetch):
134 self.assertEquals(current_context().request.id, "context_12")
135 return {
136 "server10": {
137 get_key_id(key1): FetchKeyResult(get_verify_key(key1), 100)
138 }
139 }
135 async def second_lookup_fetch(
136 server_name: str, key_ids: List[str], minimum_valid_until_ts: int
137 ) -> Dict[str, FetchKeyResult]:
138 # self.assertEquals(current_context().request.id, "context_12")
139 return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 100)}
140140
141141 mock_fetcher.get_keys.reset_mock()
142142 mock_fetcher.get_keys.side_effect = second_lookup_fetch
145145 async def second_lookup():
146146 with LoggingContext("context_12", request=FakeRequest("context_12")):
147147 res_deferreds_2 = kr.verify_json_objects_for_server(
148 [("server10", json1, 0, "test")]
148 [
149 (
150 "server10",
151 json1,
152 0,
153 )
154 ]
149155 )
150156 res_deferreds_2[0].addBoth(self.check_context, None)
151157 second_lookup_state[0] = 1
182188 signedjson.sign.sign_json(json1, "server9", key1)
183189
184190 # should fail immediately on an unsigned object
185 d = _verify_json_for_server(kr, "server9", {}, 0, "test unsigned")
191 d = kr.verify_json_for_server("server9", {}, 0)
186192 self.get_failure(d, SynapseError)
187193
188194 # should succeed on a signed object
189 d = _verify_json_for_server(kr, "server9", json1, 500, "test signed")
195 d = kr.verify_json_for_server("server9", json1, 500)
190196 # self.assertFalse(d.called)
191197 self.get_success(d)
192198
213219 signedjson.sign.sign_json(json1, "server9", key1)
214220
215221 # should fail immediately on an unsigned object
216 d = _verify_json_for_server(kr, "server9", {}, 0, "test unsigned")
222 d = kr.verify_json_for_server("server9", {}, 0)
217223 self.get_failure(d, SynapseError)
218224
219225 # should fail on a signed object with a non-zero minimum_valid_until_ms,
220226 # as it tries to refetch the keys and fails.
221 d = _verify_json_for_server(
222 kr, "server9", json1, 500, "test signed non-zero min"
223 )
227 d = kr.verify_json_for_server("server9", json1, 500)
224228 self.get_failure(d, SynapseError)
225229
226230 # We expect the keyring tried to refetch the key once.
227231 mock_fetcher.get_keys.assert_called_once_with(
228 {"server9": {get_key_id(key1): 500}}
232 "server9", [get_key_id(key1)], 500
229233 )
230234
231235 # should succeed on a signed object with a 0 minimum_valid_until_ms
232 d = _verify_json_for_server(
233 kr, "server9", json1, 0, "test signed with zero min"
236 d = kr.verify_json_for_server(
237 "server9",
238 json1,
239 0,
234240 )
235241 self.get_success(d)
236242
238244 """Two requests for the same key should be deduped."""
239245 key1 = signedjson.key.generate_signing_key(1)
240246
241 async def get_keys(keys_to_fetch):
247 async def get_keys(
248 server_name: str, key_ids: List[str], minimum_valid_until_ts: int
249 ) -> Dict[str, FetchKeyResult]:
242250 # there should only be one request object (with the max validity)
243 self.assertEqual(keys_to_fetch, {"server1": {get_key_id(key1): 1500}})
244
245 return {
246 "server1": {
247 get_key_id(key1): FetchKeyResult(get_verify_key(key1), 1200)
248 }
249 }
251 self.assertEqual(server_name, "server1")
252 self.assertEqual(key_ids, [get_key_id(key1)])
253 self.assertEqual(minimum_valid_until_ts, 1500)
254
255 return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 1200)}
250256
251257 mock_fetcher = Mock()
252258 mock_fetcher.get_keys = Mock(side_effect=get_keys)
258264 # the first request should succeed; the second should fail because the key
259265 # has expired
260266 results = kr.verify_json_objects_for_server(
261 [("server1", json1, 500, "test1"), ("server1", json1, 1500, "test2")]
267 [
268 (
269 "server1",
270 json1,
271 500,
272 ),
273 ("server1", json1, 1500),
274 ]
262275 )
263276 self.assertEqual(len(results), 2)
264277 self.get_success(results[0])
273286 """If the first fetcher cannot provide a recent enough key, we fall back"""
274287 key1 = signedjson.key.generate_signing_key(1)
275288
276 async def get_keys1(keys_to_fetch):
277 self.assertEqual(keys_to_fetch, {"server1": {get_key_id(key1): 1500}})
278 return {
279 "server1": {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 800)}
280 }
281
282 async def get_keys2(keys_to_fetch):
283 self.assertEqual(keys_to_fetch, {"server1": {get_key_id(key1): 1500}})
284 return {
285 "server1": {
286 get_key_id(key1): FetchKeyResult(get_verify_key(key1), 1200)
287 }
288 }
289 async def get_keys1(
290 server_name: str, key_ids: List[str], minimum_valid_until_ts: int
291 ) -> Dict[str, FetchKeyResult]:
292 self.assertEqual(server_name, "server1")
293 self.assertEqual(key_ids, [get_key_id(key1)])
294 self.assertEqual(minimum_valid_until_ts, 1500)
295 return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 800)}
296
297 async def get_keys2(
298 server_name: str, key_ids: List[str], minimum_valid_until_ts: int
299 ) -> Dict[str, FetchKeyResult]:
300 self.assertEqual(server_name, "server1")
301 self.assertEqual(key_ids, [get_key_id(key1)])
302 self.assertEqual(minimum_valid_until_ts, 1500)
303 return {get_key_id(key1): FetchKeyResult(get_verify_key(key1), 1200)}
289304
290305 mock_fetcher1 = Mock()
291306 mock_fetcher1.get_keys = Mock(side_effect=get_keys1)
297312 signedjson.sign.sign_json(json1, "server1", key1)
298313
299314 results = kr.verify_json_objects_for_server(
300 [("server1", json1, 1200, "test1"), ("server1", json1, 1500, "test2")]
315 [
316 (
317 "server1",
318 json1,
319 1200,
320 ),
321 (
322 "server1",
323 json1,
324 1500,
325 ),
326 ]
301327 )
302328 self.assertEqual(len(results), 2)
303329 self.get_success(results[0])
348374
349375 self.http_client.get_json.side_effect = get_json
350376
351 keys_to_fetch = {SERVER_NAME: {"key1": 0}}
352 keys = self.get_success(fetcher.get_keys(keys_to_fetch))
353 k = keys[SERVER_NAME][testverifykey_id]
377 keys = self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0))
378 k = keys[testverifykey_id]
354379 self.assertEqual(k.valid_until_ts, VALID_UNTIL_TS)
355380 self.assertEqual(k.verify_key, testverifykey)
356381 self.assertEqual(k.verify_key.alg, "ed25519")
377402 # change the server name: the result should be ignored
378403 response["server_name"] = "OTHER_SERVER"
379404
380 keys = self.get_success(fetcher.get_keys(keys_to_fetch))
405 keys = self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0))
381406 self.assertEqual(keys, {})
382407
383408
464489
465490 self.expect_outgoing_key_query(SERVER_NAME, "key1", response)
466491
467 keys_to_fetch = {SERVER_NAME: {"key1": 0}}
468 keys = self.get_success(fetcher.get_keys(keys_to_fetch))
469 self.assertIn(SERVER_NAME, keys)
470 k = keys[SERVER_NAME][testverifykey_id]
492 keys = self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0))
493 self.assertIn(testverifykey_id, keys)
494 k = keys[testverifykey_id]
471495 self.assertEqual(k.valid_until_ts, VALID_UNTIL_TS)
472496 self.assertEqual(k.verify_key, testverifykey)
473497 self.assertEqual(k.verify_key.alg, "ed25519")
514538
515539 self.expect_outgoing_key_query(SERVER_NAME, "key1", response)
516540
517 keys_to_fetch = {SERVER_NAME: {"key1": 0}}
518 keys = self.get_success(fetcher.get_keys(keys_to_fetch))
519 self.assertIn(SERVER_NAME, keys)
520 k = keys[SERVER_NAME][testverifykey_id]
541 keys = self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0))
542 self.assertIn(testverifykey_id, keys)
543 k = keys[testverifykey_id]
521544 self.assertEqual(k.valid_until_ts, VALID_UNTIL_TS)
522545 self.assertEqual(k.verify_key, testverifykey)
523546 self.assertEqual(k.verify_key.alg, "ed25519")
558581
559582 def get_key_from_perspectives(response):
560583 fetcher = PerspectivesKeyFetcher(self.hs)
561 keys_to_fetch = {SERVER_NAME: {"key1": 0}}
562584 self.expect_outgoing_key_query(SERVER_NAME, "key1", response)
563 return self.get_success(fetcher.get_keys(keys_to_fetch))
585 return self.get_success(fetcher.get_keys(SERVER_NAME, ["key1"], 0))
564586
565587 # start with a valid response so we can check we are testing the right thing
566588 response = build_response()
567589 keys = get_key_from_perspectives(response)
568 k = keys[SERVER_NAME][testverifykey_id]
590 k = keys[testverifykey_id]
569591 self.assertEqual(k.verify_key, testverifykey)
570592
571593 # remove the perspectives server's signature
584606 def get_key_id(key):
585607 """Get the matrix ID tag for a given SigningKey or VerifyKey"""
586608 return "%s:%s" % (key.alg, key.version)
587
588
589 @defer.inlineCallbacks
590 def run_in_context(f, *args, **kwargs):
591 with LoggingContext("testctx"):
592 rv = yield f(*args, **kwargs)
593 return rv
594
595
596 def _verify_json_for_server(kr, *args):
597 """thin wrapper around verify_json_for_server which makes sure it is wrapped
598 with the patched defer.inlineCallbacks.
599 """
600
601 @defer.inlineCallbacks
602 def v():
603 rv1 = yield kr.verify_json_for_server(*args)
604 return rv1
605
606 return run_in_context(v)
5656 sender="@someone:anywhere", type="m.room.message", room_id="!foo:bar"
5757 )
5858 self.mock_store.get_new_events_for_appservice.side_effect = [
59 make_awaitable((0, [event])),
6059 make_awaitable((0, [])),
60 make_awaitable((1, [event])),
6161 ]
62 self.handler.notify_interested_services(RoomStreamToken(None, 0))
62 self.handler.notify_interested_services(RoomStreamToken(None, 1))
6363
6464 self.mock_scheduler.submit_event_for_as.assert_called_once_with(
6565 interested_service, event
7676 self.mock_as_api.query_user.return_value = make_awaitable(True)
7777 self.mock_store.get_new_events_for_appservice.side_effect = [
7878 make_awaitable((0, [event])),
79 make_awaitable((0, [])),
8079 ]
8180
8281 self.handler.notify_interested_services(RoomStreamToken(None, 0))
9493 self.mock_as_api.query_user.return_value = make_awaitable(True)
9594 self.mock_store.get_new_events_for_appservice.side_effect = [
9695 make_awaitable((0, [event])),
97 make_awaitable((0, [])),
9896 ]
9997
10098 self.handler.notify_interested_services(RoomStreamToken(None, 0))
6363 user_tok=self.admin_user_tok,
6464 )
6565 for _ in range(5):
66 self._create_event_and_report(
66 self._create_event_and_report_without_parameters(
6767 room_id=self.room_id2,
6868 user_tok=self.admin_user_tok,
6969 )
373373 "POST",
374374 "rooms/%s/report/%s" % (room_id, event_id),
375375 json.dumps({"score": -100, "reason": "this makes me sad"}),
376 access_token=user_tok,
377 )
378 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
379
380 def _create_event_and_report_without_parameters(self, room_id, user_tok):
381 """Create and report an event, but omit reason and score"""
382 resp = self.helper.send(room_id, tok=user_tok)
383 event_id = resp["event_id"]
384
385 channel = self.make_request(
386 "POST",
387 "rooms/%s/report/%s" % (room_id, event_id),
388 json.dumps({}),
376389 access_token=user_tok,
377390 )
378391 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1414 import json
1515 import os
1616 from binascii import unhexlify
17
18 from parameterized import parameterized
1719
1820 import synapse.rest.admin
1921 from synapse.api.errors import Codes
561563 )
562564 # Test that the file is deleted
563565 self.assertFalse(os.path.exists(local_path))
566
567
568 class QuarantineMediaByIDTestCase(unittest.HomeserverTestCase):
569
570 servlets = [
571 synapse.rest.admin.register_servlets,
572 synapse.rest.admin.register_servlets_for_media_repo,
573 login.register_servlets,
574 ]
575
576 def prepare(self, reactor, clock, hs):
577 media_repo = hs.get_media_repository_resource()
578 self.store = hs.get_datastore()
579 self.server_name = hs.hostname
580
581 self.admin_user = self.register_user("admin", "pass", admin=True)
582 self.admin_user_tok = self.login("admin", "pass")
583
584 # Create media
585 upload_resource = media_repo.children[b"upload"]
586 # file size is 67 bytes
587 image_data = unhexlify(
588 b"89504e470d0a1a0a0000000d4948445200000001000000010806"
589 b"0000001f15c4890000000a49444154789c63000100000500010d"
590 b"0a2db40000000049454e44ae426082"
591 )
592
593 # Upload some media
594 response = self.helper.upload_media(
595 upload_resource, image_data, tok=self.admin_user_tok, expect_code=200
596 )
597 # Extract media ID from the response
598 server_and_media_id = response["content_uri"][6:] # Cut off 'mxc://'
599 self.media_id = server_and_media_id.split("/")[1]
600
601 self.url = "/_synapse/admin/v1/media/%s/%s/%s"
602
603 @parameterized.expand(["quarantine", "unquarantine"])
604 def test_no_auth(self, action: str):
605 """
606 Try to quarantine/unquarantine media without authentication.
607 """
608
609 channel = self.make_request(
610 "POST",
611 self.url % (action, self.server_name, self.media_id),
612 b"{}",
613 )
614
615 self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
616 self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])
617
618 @parameterized.expand(["quarantine", "unquarantine"])
619 def test_requester_is_no_admin(self, action: str):
620 """
621 If the user is not a server admin, an error is returned.
622 """
623 self.other_user = self.register_user("user", "pass")
624 self.other_user_token = self.login("user", "pass")
625
626 channel = self.make_request(
627 "POST",
628 self.url % (action, self.server_name, self.media_id),
629 access_token=self.other_user_token,
630 )
631
632 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
633 self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
634
635 def test_quarantine_media(self):
636 """
637 Tests that quarantining media and removing it from quarantine succeeds
638 """
639
640 media_info = self.get_success(self.store.get_local_media(self.media_id))
641 self.assertFalse(media_info["quarantined_by"])
642
643 # quarantining
644 channel = self.make_request(
645 "POST",
646 self.url % ("quarantine", self.server_name, self.media_id),
647 access_token=self.admin_user_tok,
648 )
649
650 self.assertEqual(200, channel.code, msg=channel.json_body)
651 self.assertFalse(channel.json_body)
652
653 media_info = self.get_success(self.store.get_local_media(self.media_id))
654 self.assertTrue(media_info["quarantined_by"])
655
656 # remove from quarantine
657 channel = self.make_request(
658 "POST",
659 self.url % ("unquarantine", self.server_name, self.media_id),
660 access_token=self.admin_user_tok,
661 )
662
663 self.assertEqual(200, channel.code, msg=channel.json_body)
664 self.assertFalse(channel.json_body)
665
666 media_info = self.get_success(self.store.get_local_media(self.media_id))
667 self.assertFalse(media_info["quarantined_by"])
668
669 def test_quarantine_protected_media(self):
670 """
671 Tests that quarantining protected media fails
672 """
673
674 # protect
675 self.get_success(self.store.mark_local_media_as_safe(self.media_id, safe=True))
676
677 # verify protection
678 media_info = self.get_success(self.store.get_local_media(self.media_id))
679 self.assertTrue(media_info["safe_from_quarantine"])
680
681 # quarantining
682 channel = self.make_request(
683 "POST",
684 self.url % ("quarantine", self.server_name, self.media_id),
685 access_token=self.admin_user_tok,
686 )
687
688 self.assertEqual(200, channel.code, msg=channel.json_body)
689 self.assertFalse(channel.json_body)
690
691 # verify that it is not in quarantine
692 media_info = self.get_success(self.store.get_local_media(self.media_id))
693 self.assertFalse(media_info["quarantined_by"])
694
695
696 class ProtectMediaByIDTestCase(unittest.HomeserverTestCase):
697
698 servlets = [
699 synapse.rest.admin.register_servlets,
700 synapse.rest.admin.register_servlets_for_media_repo,
701 login.register_servlets,
702 ]
703
704 def prepare(self, reactor, clock, hs):
705 media_repo = hs.get_media_repository_resource()
706 self.store = hs.get_datastore()
707
708 self.admin_user = self.register_user("admin", "pass", admin=True)
709 self.admin_user_tok = self.login("admin", "pass")
710
711 # Create media
712 upload_resource = media_repo.children[b"upload"]
713 # file size is 67 bytes
714 image_data = unhexlify(
715 b"89504e470d0a1a0a0000000d4948445200000001000000010806"
716 b"0000001f15c4890000000a49444154789c63000100000500010d"
717 b"0a2db40000000049454e44ae426082"
718 )
719
720 # Upload some media
721 response = self.helper.upload_media(
722 upload_resource, image_data, tok=self.admin_user_tok, expect_code=200
723 )
724 # Extract media ID from the response
725 server_and_media_id = response["content_uri"][6:] # Cut off 'mxc://'
726 self.media_id = server_and_media_id.split("/")[1]
727
728 self.url = "/_synapse/admin/v1/media/%s/%s"
729
730 @parameterized.expand(["protect", "unprotect"])
731 def test_no_auth(self, action: str):
732 """
733 Try to protect media without authentication.
734 """
735
736 channel = self.make_request("POST", self.url % (action, self.media_id), b"{}")
737
738 self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
739 self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])
740
741 @parameterized.expand(["protect", "unprotect"])
742 def test_requester_is_no_admin(self, action: str):
743 """
744 If the user is not a server admin, an error is returned.
745 """
746 self.other_user = self.register_user("user", "pass")
747 self.other_user_token = self.login("user", "pass")
748
749 channel = self.make_request(
750 "POST",
751 self.url % (action, self.media_id),
752 access_token=self.other_user_token,
753 )
754
755 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
756 self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
757
758 def test_protect_media(self):
759 """
760 Tests that protecting and unprotecting media succeeds
761 """
762
763 media_info = self.get_success(self.store.get_local_media(self.media_id))
764 self.assertFalse(media_info["safe_from_quarantine"])
765
766 # protect
767 channel = self.make_request(
768 "POST",
769 self.url % ("protect", self.media_id),
770 access_token=self.admin_user_tok,
771 )
772
773 self.assertEqual(200, channel.code, msg=channel.json_body)
774 self.assertFalse(channel.json_body)
775
776 media_info = self.get_success(self.store.get_local_media(self.media_id))
777 self.assertTrue(media_info["safe_from_quarantine"])
778
779 # unprotect
780 channel = self.make_request(
781 "POST",
782 self.url % ("unprotect", self.media_id),
783 access_token=self.admin_user_tok,
784 )
785
786 self.assertEqual(200, channel.code, msg=channel.json_body)
787 self.assertFalse(channel.json_body)
788
789 media_info = self.get_success(self.store.get_local_media(self.media_id))
790 self.assertFalse(media_info["safe_from_quarantine"])
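Taken together, these two test cases document a pair of admin APIs: POST /_synapse/admin/v1/media/quarantine/{server}/{media_id} (and unquarantine), and POST /_synapse/admin/v1/media/protect/{media_id} (and unprotect), each returning an empty JSON object, with protected media skipped by quarantine requests. A minimal sketch of driving them over HTTP with an admin access token (the requests usage, host, and token are illustrative, not from the diff):

    import requests

    BASE = "http://localhost:8008"  # illustrative homeserver address
    HEADERS = {"Authorization": "Bearer <admin_access_token>"}  # placeholder token

    def quarantine(server_name: str, media_id: str) -> None:
        # Endpoint shape taken from the tests above; returns {} on success.
        r = requests.post(
            f"{BASE}/_synapse/admin/v1/media/quarantine/{server_name}/{media_id}",
            headers=HEADERS,
        )
        r.raise_for_status()

    def protect(media_id: str) -> None:
        # Protected media is ignored by subsequent quarantine requests.
        r = requests.post(
            f"{BASE}/_synapse/admin/v1/media/protect/{media_id}",
            headers=HEADERS,
        )
        r.raise_for_status()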
18791879 """Calls the endpoint under test. returns the json response object."""
18801880 channel = self.make_request(
18811881 "GET",
1882 "/_matrix/client/unstable/org.matrix.msc2432/rooms/%s/aliases"
1883 % (self.room_id,),
1882 "/_matrix/client/r0/rooms/%s/aliases" % (self.room_id,),
18841883 access_token=access_token,
18851884 )
18861885 self.assertEqual(channel.code, expected_code, channel.result)
0 # Copyright 2021 Callum Brown
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 import json
15
16 import synapse.rest.admin
17 from synapse.rest.client.v1 import login, room
18 from synapse.rest.client.v2_alpha import report_event
19
20 from tests import unittest
21
22
23 class ReportEventTestCase(unittest.HomeserverTestCase):
24 servlets = [
25 synapse.rest.admin.register_servlets,
26 login.register_servlets,
27 room.register_servlets,
28 report_event.register_servlets,
29 ]
30
31 def prepare(self, reactor, clock, hs):
32 self.admin_user = self.register_user("admin", "pass", admin=True)
33 self.admin_user_tok = self.login("admin", "pass")
34 self.other_user = self.register_user("user", "pass")
35 self.other_user_tok = self.login("user", "pass")
36
37 self.room_id = self.helper.create_room_as(
38 self.other_user, tok=self.other_user_tok, is_public=True
39 )
40 self.helper.join(self.room_id, user=self.admin_user, tok=self.admin_user_tok)
41 resp = self.helper.send(self.room_id, tok=self.admin_user_tok)
42 self.event_id = resp["event_id"]
43 self.report_path = "rooms/{}/report/{}".format(self.room_id, self.event_id)
44
45 def test_reason_str_and_score_int(self):
46 data = {"reason": "this makes me sad", "score": -100}
47 self._assert_status(200, data)
48
49 def test_no_reason(self):
50 data = {"score": 0}
51 self._assert_status(200, data)
52
53 def test_no_score(self):
54 data = {"reason": "this makes me sad"}
55 self._assert_status(200, data)
56
57 def test_no_reason_and_no_score(self):
58 data = {}
59 self._assert_status(200, data)
60
61 def test_reason_int_and_score_str(self):
62 data = {"reason": 10, "score": "string"}
63 self._assert_status(400, data)
64
65 def test_reason_zero_and_score_blank(self):
66 data = {"reason": 0, "score": ""}
67 self._assert_status(400, data)
68
69 def test_reason_and_score_null(self):
70 data = {"reason": None, "score": None}
71 self._assert_status(400, data)
72
73 def _assert_status(self, response_status, data):
74 channel = self.make_request(
75 "POST",
76 self.report_path,
77 json.dumps(data),
78 access_token=self.other_user_tok,
79 )
80 self.assertEqual(
81 response_status, int(channel.result["code"]), msg=channel.result["body"]
82 )
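These cases pin down the validation rules for POST /_matrix/client/r0/rooms/{roomId}/report/{eventId}: reason and score are both optional, but when present reason must be a string and score an integer, and an explicit null is rejected rather than treated as absent. An illustrative client call under those rules (host and token are placeholders, not from the diff):

    import requests

    room_id, event_id = "!room:example.org", "$event:example.org"  # placeholders
    resp = requests.post(
        f"http://localhost:8008/_matrix/client/r0/rooms/{room_id}/report/{event_id}",
        headers={"Authorization": "Bearer <access_token>"},
        json={"reason": "this makes me sad", "score": -100},  # both keys optional
    )
    assert resp.status_code == 200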
207207 keyid = "ed25519:%s" % (testkey.version,)
208208
209209 fetcher = PerspectivesKeyFetcher(self.hs2)
210 d = fetcher.get_keys({"targetserver": {keyid: 1000}})
210 d = fetcher.get_keys("targetserver", [keyid], 1000)
211211 res = self.get_success(d)
212 self.assertIn("targetserver", res)
213 keyres = res["targetserver"][keyid]
212 self.assertIn(keyid, res)
213 keyres = res[keyid]
214214 assert isinstance(keyres, FetchKeyResult)
215215 self.assertEqual(
216216 signedjson.key.encode_verify_key_base64(keyres.verify_key),
229229 keyid = "ed25519:%s" % (testkey.version,)
230230
231231 fetcher = PerspectivesKeyFetcher(self.hs2)
232 d = fetcher.get_keys({self.hs.hostname: {keyid: 1000}})
232 d = fetcher.get_keys(self.hs.hostname, [keyid], 1000)
233233 res = self.get_success(d)
234 self.assertIn(self.hs.hostname, res)
235 keyres = res[self.hs.hostname][keyid]
234 self.assertIn(keyid, res)
235 keyres = res[keyid]
236236 assert isinstance(keyres, FetchKeyResult)
237237 self.assertEqual(
238238 signedjson.key.encode_verify_key_base64(keyres.verify_key),
246246 keyid = "ed25519:%s" % (self.hs_signing_key.version,)
247247
248248 fetcher = PerspectivesKeyFetcher(self.hs2)
249 d = fetcher.get_keys({self.hs.hostname: {keyid: 1000}})
249 d = fetcher.get_keys(self.hs.hostname, [keyid], 1000)
250250 res = self.get_success(d)
251 self.assertIn(self.hs.hostname, res)
252 keyres = res[self.hs.hostname][keyid]
251 self.assertIn(keyid, res)
252 keyres = res[keyid]
253253 assert isinstance(keyres, FetchKeyResult)
254254 self.assertEqual(
255255 signedjson.key.encode_verify_key_base64(keyres.verify_key),
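The mechanical change repeated across these hunks is the new PerspectivesKeyFetcher.get_keys shape: instead of one dict covering many servers, a fetch is now per-server and the result is keyed by key ID alone. Roughly (the name of the third parameter is an inference from the old dict values, which held minimum-validity timestamps):

    # Old: one dict covering many servers; result nested by server name.
    keys = await fetcher.get_keys({"targetserver": {"ed25519:ver1": 1000}})
    result = keys["targetserver"]["ed25519:ver1"]

    # New: one server per call; result is a flat {key_id: FetchKeyResult} dict.
    keys = await fetcher.get_keys("targetserver", ["ed25519:ver1"], 1000)
    result = keys["ed25519:ver1"]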
0 # Copyright 2021 The Matrix.org Foundation C.I.C.
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
0 # Copyright 2021 The Matrix.org Foundation C.I.C.
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
0 # Copyright 2021 The Matrix.org Foundation C.I.C.
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13 import json
14
15 from synapse.logging.context import LoggingContext
16 from synapse.storage.databases.main.events_worker import EventsWorkerStore
17
18 from tests import unittest
19
20
21 class HaveSeenEventsTestCase(unittest.HomeserverTestCase):
22 def prepare(self, reactor, clock, hs):
23 self.store: EventsWorkerStore = hs.get_datastore()
24
25 # insert some test data
26 for rid in ("room1", "room2"):
27 self.get_success(
28 self.store.db_pool.simple_insert(
29 "rooms",
30 {"room_id": rid, "room_version": 4},
31 )
32 )
33
34 for idx, (rid, eid) in enumerate(
35 (
36 ("room1", "event10"),
37 ("room1", "event11"),
38 ("room1", "event12"),
39 ("room2", "event20"),
40 )
41 ):
42 self.get_success(
43 self.store.db_pool.simple_insert(
44 "events",
45 {
46 "event_id": eid,
47 "room_id": rid,
48 "topological_ordering": idx,
49 "stream_ordering": idx,
50 "type": "test",
51 "processed": True,
52 "outlier": False,
53 },
54 )
55 )
56 self.get_success(
57 self.store.db_pool.simple_insert(
58 "event_json",
59 {
60 "event_id": eid,
61 "room_id": rid,
62 "json": json.dumps({"type": "test", "room_id": rid}),
63 "internal_metadata": "{}",
64 "format_version": 3,
65 },
66 )
67 )
68
69 def test_simple(self):
70 with LoggingContext(name="test") as ctx:
71 res = self.get_success(
72 self.store.have_seen_events("room1", ["event10", "event19"])
73 )
74 self.assertEquals(res, {"event10"})
75
76 # that should result in a single db query
77 self.assertEquals(ctx.get_resource_usage().db_txn_count, 1)
78
79 # a second lookup of the same events should cause no queries
80 with LoggingContext(name="test") as ctx:
81 res = self.get_success(
82 self.store.have_seen_events("room1", ["event10", "event19"])
83 )
84 self.assertEquals(res, {"event10"})
85 self.assertEquals(ctx.get_resource_usage().db_txn_count, 0)
86
87 def test_query_via_event_cache(self):
88 # fetch an event into the event cache
89 self.get_success(self.store.get_event("event10"))
90
91 # looking it up should now cause no db hits
92 with LoggingContext(name="test") as ctx:
93 res = self.get_success(self.store.have_seen_events("room1", ["event10"]))
94 self.assertEquals(res, {"event10"})
95 self.assertEquals(ctx.get_resource_usage().db_txn_count, 0)
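Together these tests describe the caching behaviour being pinned down: a have_seen_events result appears to be cached per (room, event) pair, so a repeated lookup costs no database transactions; fetching an event body via get_event also seems to warm the cache that have_seen_events consults; and unknown event IDs (such as "event19" above) are simply absent from the returned set rather than raising.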
621621 self.assertEquals(callcount2[0], 1)
622622
623623 a.func2.invalidate(("foo",))
624 self.assertEquals(a.func2.cache.cache.pop.call_count, 1)
624 self.assertEquals(a.func2.cache.cache.del_multi.call_count, 1)
625625
626626 yield a.func2("foo")
627627 a.func2.invalidate(("foo",))
628 self.assertEquals(a.func2.cache.cache.pop.call_count, 2)
628 self.assertEquals(a.func2.cache.cache.del_multi.call_count, 2)
629629
630630 self.assertEquals(callcount[0], 1)
631631 self.assertEquals(callcount2[0], 2)
632632
633633 a.func.invalidate(("foo",))
634 self.assertEquals(a.func2.cache.cache.pop.call_count, 3)
634 self.assertEquals(a.func2.cache.cache.del_multi.call_count, 3)
635635 yield a.func("foo")
636636
637637 self.assertEquals(callcount[0], 2)
1313 from twisted.internet import defer
1414
1515 from synapse.logging.context import make_deferred_yieldable
16 from synapse.util.batching_queue import BatchingQueue
16 from synapse.util.batching_queue import (
17 BatchingQueue,
18 number_in_flight,
19 number_of_keys,
20 number_queued,
21 )
1722
1823 from tests.server import get_clock
1924 from tests.unittest import TestCase
2227 class BatchingQueueTestCase(TestCase):
2328 def setUp(self):
2429 self.clock, hs_clock = get_clock()
30
31 # We ensure that we remove any existing metrics for "test_queue".
32 try:
33 number_queued.remove("test_queue")
34 number_of_keys.remove("test_queue")
35 number_in_flight.remove("test_queue")
36 except KeyError:
37 pass
2538
2639 self._pending_calls = []
2740 self.queue = BatchingQueue("test_queue", hs_clock, self._process_queue)
3144 self._pending_calls.append((values, d))
3245 return await make_deferred_yieldable(d)
3346
47 def _get_sample_with_name(self, metric, name) -> int:
48 """For a prometheus metric get the value of the sample that has a
49 matching "name" label.
50 """
51 for sample in metric.collect()[0].samples:
52 if sample.labels.get("name") == name:
53 return sample.value
54
55 self.fail("Found no matching sample")
56
57 def _assert_metrics(self, queued, keys, in_flight):
58 """Assert that the metrics are correct"""
59
60 sample = self._get_sample_with_name(number_queued, self.queue._name)
61 self.assertEqual(
62 sample,
63 queued,
64 "number_queued",
65 )
66
67 sample = self._get_sample_with_name(number_of_keys, self.queue._name)
68 self.assertEqual(sample, keys, "number_of_keys")
69
70 sample = self._get_sample_with_name(number_in_flight, self.queue._name)
71 self.assertEqual(
72 sample,
73 in_flight,
74 "number_in_flight",
75 )
76
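For context on what _get_sample_with_name walks: with prometheus_client, calling collect() on a metric yields metric families whose samples each carry a labels dict and a value, which is why the helper can match on labels.get("name"); likewise the remove(...) calls in setUp drop the per-label child a previous test may have created, raising KeyError when none exists, hence the try/except. Illustrative only:

    # Shape of the data the helper above iterates over.
    for family in number_queued.collect():
        for sample in family.samples:
            print(sample.labels, sample.value)  # e.g. {'name': 'test_queue'} 1.0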
3477 def test_simple(self):
3578 """Tests the basic case of calling `add_to_queue` once and having
3679 `_process_queue` return.
4083
4184 queue_d = defer.ensureDeferred(self.queue.add_to_queue("foo"))
4285
86 self._assert_metrics(queued=1, keys=1, in_flight=1)
87
4388 # The queue should wait a reactor tick before calling the processing
4489 # function.
4590 self.assertFalse(self._pending_calls)
5196 self.assertEqual(len(self._pending_calls), 1)
5297 self.assertEqual(self._pending_calls[0][0], ["foo"])
5398 self.assertFalse(queue_d.called)
99 self._assert_metrics(queued=0, keys=0, in_flight=1)
54100
55101 # Return value of the `_process_queue` should be propagated back.
56102 self._pending_calls.pop()[1].callback("bar")
57103
58104 self.assertEqual(self.successResultOf(queue_d), "bar")
105
106 self._assert_metrics(queued=0, keys=0, in_flight=0)
59107
60108 def test_batching(self):
61109 """Test that multiple calls at the same time get batched up into one
67115 queue_d1 = defer.ensureDeferred(self.queue.add_to_queue("foo1"))
68116 queue_d2 = defer.ensureDeferred(self.queue.add_to_queue("foo2"))
69117
118 self._assert_metrics(queued=2, keys=1, in_flight=2)
119
70120 self.clock.pump([0])
71121
72122 # We should see only *one* call to `_process_queue`
74124 self.assertEqual(self._pending_calls[0][0], ["foo1", "foo2"])
75125 self.assertFalse(queue_d1.called)
76126 self.assertFalse(queue_d2.called)
127 self._assert_metrics(queued=0, keys=0, in_flight=2)
77128
78129 # Return value of the `_process_queue` should be propagated back to both.
79130 self._pending_calls.pop()[1].callback("bar")
80131
81132 self.assertEqual(self.successResultOf(queue_d1), "bar")
82133 self.assertEqual(self.successResultOf(queue_d2), "bar")
134 self._assert_metrics(queued=0, keys=0, in_flight=0)
83135
84136 def test_queuing(self):
85137 """Test that we queue up requests while a `_process_queue` is being
91143 queue_d1 = defer.ensureDeferred(self.queue.add_to_queue("foo1"))
92144 self.clock.pump([0])
93145
146 self.assertEqual(len(self._pending_calls), 1)
147
148 # Queue up more work after the process function has been called, to
149 # check that it is correctly held for the next batch.
94150 queue_d2 = defer.ensureDeferred(self.queue.add_to_queue("foo2"))
151 queue_d3 = defer.ensureDeferred(self.queue.add_to_queue("foo3"))
95152
96153 # We should see only *one* call to `_process_queue`
97154 self.assertEqual(len(self._pending_calls), 1)
98155 self.assertEqual(self._pending_calls[0][0], ["foo1"])
99156 self.assertFalse(queue_d1.called)
100157 self.assertFalse(queue_d2.called)
158 self.assertFalse(queue_d3.called)
159 self._assert_metrics(queued=2, keys=1, in_flight=3)
101160
102161 # Return value of the `_process_queue` should be propagated back to the
103162 # first.
105164
106165 self.assertEqual(self.successResultOf(queue_d1), "bar1")
107166 self.assertFalse(queue_d2.called)
167 self.assertFalse(queue_d3.called)
168 self._assert_metrics(queued=2, keys=1, in_flight=2)
108169
109170 # We should now see a second call to `_process_queue`
110171 self.clock.pump([0])
111172 self.assertEqual(len(self._pending_calls), 1)
112 self.assertEqual(self._pending_calls[0][0], ["foo2"])
113 self.assertFalse(queue_d2.called)
173 self.assertEqual(self._pending_calls[0][0], ["foo2", "foo3"])
174 self.assertFalse(queue_d2.called)
175 self.assertFalse(queue_d3.called)
176 self._assert_metrics(queued=0, keys=0, in_flight=2)
114177
115178 # Return value of the `_process_queue` should be propagated back to the
116179 # second.
117180 self._pending_calls.pop()[1].callback("bar2")
118181
119182 self.assertEqual(self.successResultOf(queue_d2), "bar2")
183 self.assertEqual(self.successResultOf(queue_d3), "bar2")
184 self._assert_metrics(queued=0, keys=0, in_flight=0)
120185
121186 def test_different_keys(self):
122187 """Test that calls to different keys get processed in parallel."""
139204 self.assertFalse(queue_d1.called)
140205 self.assertFalse(queue_d2.called)
141206 self.assertFalse(queue_d3.called)
207 self._assert_metrics(queued=1, keys=1, in_flight=3)
142208
143209 # Return value of the `_process_queue` should be propagated back to the
144210 # first.
147213 self.assertEqual(self.successResultOf(queue_d1), "bar1")
148214 self.assertFalse(queue_d2.called)
149215 self.assertFalse(queue_d3.called)
216 self._assert_metrics(queued=1, keys=1, in_flight=2)
150217
151218 # Return value of the `_process_queue` should be propagated back to the
152219 # second.
160227 self.assertEqual(len(self._pending_calls), 1)
161228 self.assertEqual(self._pending_calls[0][0], ["foo3"])
162229 self.assertFalse(queue_d3.called)
230 self._assert_metrics(queued=0, keys=0, in_flight=1)
163231
164232 # Return value of the `_process_queue` should be propagated back to the
165233 # third deferred.
166234 self._pending_calls.pop()[1].callback("bar4")
167235
168236 self.assertEqual(self.successResultOf(queue_d3), "bar4")
237 self._assert_metrics(queued=0, keys=0, in_flight=0)
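Read together, these tests pin down the BatchingQueue contract: values added while a batch is in flight are coalesced into the next _process_queue call, every caller in a batch receives that batch's return value, and distinct keys proceed in parallel. A minimal usage sketch, assuming the constructor arguments shown in setUp and inferring the key keyword from test_different_keys:

    from synapse.util.batching_queue import BatchingQueue

    async def process_batch(values):
        # Called once per batch with every value queued since the previous
        # call; its return value is handed back to every waiting caller.
        return sorted(values)

    # `clock` would be the homeserver clock (e.g. hs.get_clock()) in real use.
    queue = BatchingQueue("example_queue", clock, process_batch)

    async def lookup(value):
        # Concurrent calls sharing a key are batched into one process_batch
        # call; calls with different keys are processed independently.
        return await queue.add_to_queue(value, key="shared")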