Codebase list matrix-synapse / df31d37
New upstream version 1.3.0 Andrej Shadura 4 years ago
237 changed file(s) with 4556 addition(s) and 2985 deletion(s).
4848
4949
5050 - command:
51 - "python -m pip install tox"
51 - "apt-get update && apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev"
52 - "python3.5 -m pip install tox"
5253 - "tox -e py35-old,codecov"
5354 label: ":python: 3.5 / SQLite / Old Deps"
5455 env:
5556 TRIAL_FLAGS: "-j 2"
5657 plugins:
5758 - docker#v3.0.1:
58 image: "python:3.5"
59 image: "ubuntu:xenial" # We use xenial to get an old sqlite and python
5960 propagate-environment: true
6061 retry:
6162 automatic:
116117 limit: 2
117118
118119 - label: ":python: 3.5 / :postgres: 9.5"
119 env:
120 TRIAL_FLAGS: "-j 4"
120 agents:
121 queue: "medium"
122 env:
123 TRIAL_FLAGS: "-j 8"
121124 command:
122125 - "bash -c 'python -m pip install tox && python -m tox -e py35-postgres,codecov'"
123126 plugins:
133136 limit: 2
134137
135138 - label: ":python: 3.7 / :postgres: 9.5"
136 env:
137 TRIAL_FLAGS: "-j 4"
139 agents:
140 queue: "medium"
141 env:
142 TRIAL_FLAGS: "-j 8"
138143 command:
139144 - "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
140145 plugins:
150155 limit: 2
151156
152157 - label: ":python: 3.7 / :postgres: 11"
153 env:
154 TRIAL_FLAGS: "-j 4"
158 agents:
159 queue: "medium"
160 env:
161 TRIAL_FLAGS: "-j 8"
155162 command:
156163 - "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
157164 plugins:
213220 env:
214221 POSTGRES: "1"
215222 WORKERS: "1"
223 BLACKLIST: "synapse-blacklist-with-workers"
216224 command:
217225 - "bash .buildkite/merge_base_branch.sh"
226 - "bash -c 'cat /src/sytest-blacklist /src/.buildkite/worker-blacklist > /src/synapse-blacklist-with-workers'"
218227 - "bash /synapse_sytest.sh"
219228 plugins:
220229 - docker#v3.0.1:
222231 propagate-environment: true
223232 always-pull: true
224233 workdir: "/src"
225 soft_fail: true
226 retry:
227 automatic:
228 - exit_status: -1
229 limit: 2
230 - exit_status: 2
231 limit: 2
234 retry:
235 automatic:
236 - exit_status: -1
237 limit: 2
238 - exit_status: 2
239 limit: 2
0 # This file serves as a blacklist for SyTest tests that we expect will fail in
1 # Synapse when run under worker mode. For more details, see sytest-blacklist.
2
3 Message history can be paginated
4
5 Can re-join room if re-invited
6
7 /upgrade creates a new room
8
9 The only membership state included in an initial sync is for all the senders in the timeline
10
11 Local device key changes get to remote servers
12
13 If remote user leaves room we no longer receive device updates
14
15 Forgotten room messages cannot be paginated
16
17 Inbound federation can get public room list
18
19 Members from the gap are included in gappy incr LL sync
20
21 Leaves are present in non-gapped incremental syncs
22
23 Old leaves are present in gapped incremental syncs
24
25 User sees updates to presence from other users in the incremental sync.
26
27 Gapped incremental syncs include all state changes
28
29 Old members are included in gappy incr LL sync if they start speaking
0 comment:
1 layout: "diff"
0 comment: off
21
32 coverage:
43 status:
1515 /*.log
1616 /*.log.config
1717 /*.pid
18 /.python-version
1819 /*.signing.key
1920 /env/
2021 /homeserver*.yaml
0 Synapse 1.3.0 (2019-08-15)
1 ==========================
2
3 Bugfixes
4 --------
5
6 - Fix 500 Internal Server Error on `publicRooms` when the public room list was
7 cached. ([\#5851](https://github.com/matrix-org/synapse/issues/5851))
8
9
10 Synapse 1.3.0rc1 (2019-08-13)
11 =============================
12
13 Features
14 --------
15
16 - Use `M_USER_DEACTIVATED` instead of `M_UNKNOWN` for errcode when a deactivated user attempts to login. ([\#5686](https://github.com/matrix-org/synapse/issues/5686))
17 - Add sd_notify hooks to ease systemd integration and allow usage of Type=Notify. ([\#5732](https://github.com/matrix-org/synapse/issues/5732))
18 - Synapse will no longer serve any media repo admin endpoints when `enable_media_repo` is set to False in the configuration. If a media repo worker is used, the admin APIs relating to the media repo will be served from it instead. ([\#5754](https://github.com/matrix-org/synapse/issues/5754), [\#5848](https://github.com/matrix-org/synapse/issues/5848))
19 - Synapse can now be configured to not join remote rooms of a given "complexity" (currently, state events) over federation. This option can be used to prevent adverse performance on resource-constrained homeservers. ([\#5783](https://github.com/matrix-org/synapse/issues/5783))
20 - Allow defining HTML templates to serve the user on account renewal attempt when using the account validity feature. ([\#5807](https://github.com/matrix-org/synapse/issues/5807))
21
22
23 Bugfixes
24 --------
25
26 - Fix UISIs during homeserver outage. ([\#5693](https://github.com/matrix-org/synapse/issues/5693), [\#5789](https://github.com/matrix-org/synapse/issues/5789))
27 - Fix stack overflow in server key lookup code. ([\#5724](https://github.com/matrix-org/synapse/issues/5724))
28 - start.sh no longer uses deprecated cli option. ([\#5725](https://github.com/matrix-org/synapse/issues/5725))
29 - Log when we receive an event receipt from an unexpected origin. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
30 - Fix debian packaging scripts to correctly build sid packages. ([\#5775](https://github.com/matrix-org/synapse/issues/5775))
31 - Correctly handle redactions of redactions. ([\#5788](https://github.com/matrix-org/synapse/issues/5788))
32 - Return 404 instead of 403 when accessing /rooms/{roomId}/event/{eventId} for an event without the appropriate permissions. ([\#5798](https://github.com/matrix-org/synapse/issues/5798))
33 - Fix check that tombstone is a state event in push rules. ([\#5804](https://github.com/matrix-org/synapse/issues/5804))
34 - Fix error when trying to login as a deactivated user when using a worker to handle login. ([\#5806](https://github.com/matrix-org/synapse/issues/5806))
35 - Fix bug where user `/sync` stream could get wedged in rare circumstances. ([\#5825](https://github.com/matrix-org/synapse/issues/5825))
36 - Fix the purge_remote_media.sh script, which passed curl's `-v` flag where `-X POST` was intended. ([\#5839](https://github.com/matrix-org/synapse/issues/5839))
37
38
39 Deprecations and Removals
40 -------------------------
41
42 - Synapse no longer accepts the `-v`/`--verbose`, `-f`/`--log-file`, or `--log-config` command line flags, and removes the deprecated `verbose` and `log_file` configuration file options. Users of these options should migrate their options into the dedicated log configuration. ([\#5678](https://github.com/matrix-org/synapse/issues/5678), [\#5729](https://github.com/matrix-org/synapse/issues/5729))
43 - Remove non-functional 'expire_access_token' setting. ([\#5782](https://github.com/matrix-org/synapse/issues/5782))
44
45
46 Internal Changes
47 ----------------
48
49 - Make Jaeger fully configurable. ([\#5694](https://github.com/matrix-org/synapse/issues/5694))
50 - Add precautionary measures to prevent future abuse of `window.opener` in default welcome page. ([\#5695](https://github.com/matrix-org/synapse/issues/5695))
51 - Reduce database IO usage by optimising queries for current membership. ([\#5706](https://github.com/matrix-org/synapse/issues/5706), [\#5738](https://github.com/matrix-org/synapse/issues/5738), [\#5746](https://github.com/matrix-org/synapse/issues/5746), [\#5752](https://github.com/matrix-org/synapse/issues/5752), [\#5770](https://github.com/matrix-org/synapse/issues/5770), [\#5774](https://github.com/matrix-org/synapse/issues/5774), [\#5792](https://github.com/matrix-org/synapse/issues/5792), [\#5793](https://github.com/matrix-org/synapse/issues/5793))
52 - Improve caching when fetching `get_filtered_current_state_ids`. ([\#5713](https://github.com/matrix-org/synapse/issues/5713))
53 - Don't accept opentracing data from clients. ([\#5715](https://github.com/matrix-org/synapse/issues/5715))
54 - Speed up PostgreSQL unit tests in CI. ([\#5717](https://github.com/matrix-org/synapse/issues/5717))
55 - Update the coding style document. ([\#5719](https://github.com/matrix-org/synapse/issues/5719))
56 - Improve database query performance when recording retry intervals for remote hosts. ([\#5720](https://github.com/matrix-org/synapse/issues/5720))
57 - Add a set of opentracing utils. ([\#5722](https://github.com/matrix-org/synapse/issues/5722))
58 - Cache result of get_version_string to reduce overhead of `/version` federation requests. ([\#5730](https://github.com/matrix-org/synapse/issues/5730))
59 - Return 'user_type' in admin API user endpoints results. ([\#5731](https://github.com/matrix-org/synapse/issues/5731))
60 - Don't package the sytest test blacklist file. ([\#5733](https://github.com/matrix-org/synapse/issues/5733))
61 - Replace uses of returnValue with plain return, as returnValue is not needed on Python 3. ([\#5736](https://github.com/matrix-org/synapse/issues/5736))
62 - Blacklist some flakey tests in worker mode. ([\#5740](https://github.com/matrix-org/synapse/issues/5740))
63 - Fix some error cases in the caching layer. ([\#5749](https://github.com/matrix-org/synapse/issues/5749))
64 - Add a prometheus metric for pending cache lookups. ([\#5750](https://github.com/matrix-org/synapse/issues/5750))
65 - Stop trying to fetch events with event_id=None. ([\#5753](https://github.com/matrix-org/synapse/issues/5753))
66 - Convert RedactionTestCase to modern test style. ([\#5768](https://github.com/matrix-org/synapse/issues/5768))
67 - Allow looping calls to be given arguments. ([\#5780](https://github.com/matrix-org/synapse/issues/5780))
68 - Set the logs emitted when checking typing and presence timeouts to DEBUG level, not INFO. ([\#5785](https://github.com/matrix-org/synapse/issues/5785))
69 - Remove DelayedCall debugging from the test suite, as it is no longer required in the vast majority of Synapse's tests. ([\#5787](https://github.com/matrix-org/synapse/issues/5787))
70 - Remove some spurious exceptions from the logs where we failed to talk to a remote server. ([\#5790](https://github.com/matrix-org/synapse/issues/5790))
71 - Improve performance when making `.well-known` requests by sharing the SSL options between requests. ([\#5794](https://github.com/matrix-org/synapse/issues/5794))
72 - Disable codecov GitHub comments on PRs. ([\#5796](https://github.com/matrix-org/synapse/issues/5796))
73 - Don't allow clients to send tombstone events that reference the room it's sent in. ([\#5801](https://github.com/matrix-org/synapse/issues/5801))
74 - Deny redactions of events sent in a different room. ([\#5802](https://github.com/matrix-org/synapse/issues/5802))
75 - Deny sending well known state types as non-state events. ([\#5805](https://github.com/matrix-org/synapse/issues/5805))
76 - Handle incorrectly encoded query params correctly by returning a 400. ([\#5808](https://github.com/matrix-org/synapse/issues/5808))
77 - Handle pusher being deleted during processing rather than logging an exception. ([\#5809](https://github.com/matrix-org/synapse/issues/5809))
78 - Return 502 not 500 when failing to reach any remote server. ([\#5810](https://github.com/matrix-org/synapse/issues/5810))
79 - Reduce global pauses in the events stream caused by expensive state resolution during persistence. ([\#5826](https://github.com/matrix-org/synapse/issues/5826))
80 - Add a lower bound to well-known lookup cache time to avoid repeated lookups. ([\#5836](https://github.com/matrix-org/synapse/issues/5836))
81 - Whitelist history visibility sytests in worker mode tests. ([\#5843](https://github.com/matrix-org/synapse/issues/5843))
82
83
084 Synapse 1.2.1 (2019-07-26)
185 ==========================
286
66 include demo/demo.tls.dh
77 include demo/*.py
88 include demo/*.sh
9 include sytest-blacklist
109
1110 recursive-include synapse/storage/schema *.sql
1211 recursive-include synapse/storage/schema *.sql.postgres
3332 exclude .dockerignore
3433 exclude test_postgresql.sh
3534 exclude .editorconfig
35 exclude sytest-blacklist
3636
3737 include pyproject.toml
3838 recursive-include changelog.d *
5050 # finally start pruning media:
5151 ###############################################################################
5252 set -x # for debugging the generated string
53 curl --header "Authorization: Bearer $TOKEN" -v POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"
53 curl --header "Authorization: Bearer $TOKEN" -X POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"
1313 Description=Synapse Matrix homeserver
1414
1515 [Service]
16 Type=simple
16 Type=notify
17 NotifyAccess=main
18 ExecReload=/bin/kill -HUP $MAINPID
1719 Restart=on-abort
1820
1921 User=synapse
33 BindsTo=matrix-synapse.service
44
55 [Service]
6 Type=simple
6 Type=notify
7 NotifyAccess=main
78 User=matrix-synapse
89 WorkingDirectory=/var/lib/matrix-synapse
910 EnvironmentFile=/etc/default/matrix-synapse
11 Description=Synapse Matrix Homeserver
22
33 [Service]
4 Type=simple
4 Type=notify
5 NotifyAccess=main
56 User=matrix-synapse
67 WorkingDirectory=/var/lib/matrix-synapse
78 EnvironmentFile=/etc/default/matrix-synapse
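These units switch from `Type=simple` to `Type=notify`, matching the sd_notify hooks added in [\#5732]. A minimal sketch of the notification side, using only the standard library and the sd_notify(3) datagram protocol (not Synapse's actual implementation):

```python
import os
import socket


def sd_notify(message=b"READY=1"):
    """Send a state notification to systemd over $NOTIFY_SOCKET.

    Returns False when not running under a Type=notify unit (no socket
    in the environment), True once the datagram has been sent.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    # A leading '@' denotes a Linux abstract-namespace socket.
    if addr.startswith("@"):
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, addr)
    return True
```

With `NotifyAccess=main`, systemd only accepts the datagram from the unit's main process, and `Type=notify` delays the unit's `active` state until `READY=1` arrives.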
0 matrix-synapse-py3 (1.2.1) stable; urgency=medium
1
2 * New synapse release 1.2.1.
3
4 -- Synapse Packaging team <packages@matrix.org> Fri, 26 Jul 2019 11:32:47 +0100
0 matrix-synapse-py3 (1.3.0) stable; urgency=medium
1
2 [ Andrew Morgan ]
3 * Remove libsqlite3-dev from required build dependencies.
54
65 matrix-synapse-py3 (1.2.0) stable; urgency=medium
76
1312
1413 [ Synapse Packaging team ]
1514 * New synapse release 1.2.0.
16
17 -- Synapse Packaging team <packages@matrix.org> Thu, 25 Jul 2019 14:10:07 +0100
15 * New synapse release 1.3.0.
16
17 -- Synapse Packaging team <packages@matrix.org> Thu, 15 Aug 2019 12:04:23 +0100
1818
1919 matrix-synapse-py3 (1.1.0) stable; urgency=medium
2020
1414 python3-setuptools,
1515 python3-pip,
1616 python3-venv,
17 libsqlite3-dev,
1817 tar,
1918 Standards-Version: 3.9.8
2019 Homepage: https://github.com/matrix-org/synapse
2828
2929 if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
3030 printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
31
31
3232 echo 'enable_registration: true' >> $DIR/etc/$port.config
3333
3434 # Warning, this heredoc depends on the interaction of tabs and spaces. Please don't
4242 tls: true
4343 resources:
4444 - names: [client, federation]
45
45
4646 - port: $port
4747 tls: false
4848 bind_addresses: ['::1', '127.0.0.1']
6767
6868 # Generate tls keys
6969 openssl req -x509 -newkey rsa:4096 -keyout $DIR/etc/localhost\:$https_port.tls.key -out $DIR/etc/localhost\:$https_port.tls.crt -days 365 -nodes -subj "/O=matrix"
70
70
7171 # Ignore keys from the trusted keys server
7272 echo '# Ignore keys from the trusted keys server' >> $DIR/etc/$port.config
7373 echo 'trusted_key_servers:' >> $DIR/etc/$port.config
119119 python3 -m synapse.app.homeserver \
120120 --config-path "$DIR/etc/$port.config" \
121121 -D \
122 -vv \
123122
124123 popd
125124 done
4141 ###
4242 FROM ${distro}
4343
44 # Get the distro we want to pull from as a dynamic build variable
45 # (We need to define it in each build stage)
46 ARG distro=""
47 ENV distro ${distro}
48
4449 # Install the build dependencies
4550 #
4651 # NB: keep this list in sync with the list of build-deps in debian/control
33
44 set -ex
55
6 DIST=`lsb_release -c -s`
6 # Get the codename from distro env
7 DIST=`cut -d ':' -f2 <<< $distro`
78
89 # we get a read-only copy of the source: make a writeable copy
910 cp -aT /synapse/source /synapse/build
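The new `DIST` derivation splits the codename out of the `${distro}` build argument instead of calling `lsb_release`. As a quick illustration (with a hypothetical image reference), the same extraction in POSIX form:

```shell
# The build passes an image reference such as "debian:buster" in $distro;
# the field after the colon is the suite codename used for packaging.
distro="debian:buster"  # hypothetical example value
# POSIX pipe form; the script itself uses a bash here-string: cut -d ':' -f2 <<< $distro
DIST=$(echo "$distro" | cut -d ':' -f2)
echo "$DIST"  # prints "buster"
```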
0 # Code Style
0 Code Style
1 ==========
2
3 Formatting tools
4 ----------------
15
26 The Synapse codebase uses a number of code formatting tools in order to
37 quickly and automatically check for formatting (and sometimes logical) errors
59
610 The necessary tools are detailed below.
711
8 ## Formatting tools
12 - **black**
913
10 The Synapse codebase uses [black](https://pypi.org/project/black/) as an
11 opinionated code formatter, ensuring all committed code is properly
12 formatted.
14 The Synapse codebase uses `black <https://pypi.org/project/black/>`_ as an
15 opinionated code formatter, ensuring all committed code is properly
16 formatted.
1317
14 First install ``black`` with::
18 First install ``black`` with::
1519
16 pip install --upgrade black
20 pip install --upgrade black
1721
18 Have ``black`` auto-format your code (it shouldn't change any
19 functionality) with::
22 Have ``black`` auto-format your code (it shouldn't change any functionality)
23 with::
2024
21 black . --exclude="\.tox|build|env"
25 black . --exclude="\.tox|build|env"
2226
2327 - **flake8**
2428
5357 workflow. It is not, however, recommended to run ``flake8`` on save as it
5458 takes a while and is very resource intensive.
5559
56 ## General rules
60 General rules
61 -------------
5762
5863 - **Naming**:
5964
6065 - Use camel case for class and type names
6166 - Use underscores for functions and variables.
6267
63 - Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
64
65 - **Comments**: should follow the `google code style
66 <http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
68 - **Docstrings**: should follow the `google code style
69 <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
6770 This is so that we can generate documentation with `sphinx
6871 <http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
6972 `examples
7174 in the sphinx documentation.
7275
7376 - **Imports**:
77
78 - Imports should be sorted by ``isort`` as described above.
7479
7580 - Prefer to import classes and functions rather than packages or modules.
7681
9196 This goes against the advice in the Google style guide, but it means that
9297 errors in the name are caught early (at import time).
9398
94 - Multiple imports from the same package can be combined onto one line::
95
96 from synapse.types import GroupID, RoomID, UserID
97
98 An effort should be made to keep the individual imports in alphabetical
99 order.
100
101 If the list becomes long, wrap it with parentheses and split it over
102 multiple lines.
103
104 - As per `PEP-8 <https://www.python.org/dev/peps/pep-0008/#imports>`_,
105 imports should be grouped in the following order, with a blank line between
106 each group:
107
108 1. standard library imports
109 2. related third party imports
110 3. local application/library specific imports
111
112 - Imports within each group should be sorted alphabetically by module name.
113
11499 - Avoid wildcard imports (``from synapse.types import *``) and relative
115100 imports (``from .types import UserID``).
101
102 Configuration file format
103 -------------------------
104
105 The `sample configuration file <./sample_config.yaml>`_ acts as a reference to
106 Synapse's configuration options for server administrators. Remember that many
107 readers will be unfamiliar with YAML and server administration in general, so
108 it is important that the file be as easy to understand as possible, which
109 includes following a consistent format.
110
111 Some guidelines follow:
112
113 * Sections should be separated with a heading consisting of a single line
114 prefixed and suffixed with ``##``. There should be **two** blank lines
115 before the section header, and **one** after.
116
117 * Each option should be listed in the file with the following format:
118
119 * A comment describing the setting. Each line of this comment should be
120 prefixed with a hash (``#``) and a space.
121
122 The comment should describe the default behaviour (ie, what happens if
123 the setting is omitted), as well as what the effect will be if the
124 setting is changed.
125
126 Often, the comment will end with something like "uncomment the
127 following to \<do action>".
128
129 * A line consisting of only ``#``.
130
131 * A commented-out example setting, prefixed with only ``#``.
132
133 For boolean (on/off) options, convention is that this example should be
134 the *opposite* to the default (so the comment will end with "Uncomment
135 the following to enable [or disable] \<feature\>.") For other options,
136 the example should give some non-default value which is likely to be
137 useful to the reader.
138
139 * There should be a blank line between each option.
140
141 * Where several settings are grouped into a single dict, *avoid* the
142 convention where the whole block is commented out, resulting in comment
143 lines starting ``# #``, as this is hard to read and confusing to
144 edit. Instead, leave the top-level config option uncommented, and follow
145 the conventions above for sub-options. Ensure that your code correctly
146 handles the top-level option being set to ``None`` (as it will be if no
147 sub-options are enabled).
148
149 * Lines should be wrapped at 80 characters.
150
151 Example::
152
153 ## Frobnication ##
154
155 # The frobnicator will ensure that all requests are fully frobnicated.
156 # To enable it, uncomment the following.
157 #
158 #frobnicator_enabled: true
159
160 # By default, the frobnicator will frobnicate with the default frobber.
161 # The following will make it use an alternative frobber.
162 #
163 #frobincator_frobber: special_frobber
164
165 # Settings for the frobber
166 #
167 frobber:
168 # frobbing speed. Defaults to 1.
169 #
170 #speed: 10
171
172 # frobbing distance. Defaults to 1000.
173 #
174 #distance: 100
175
176 Note that the sample configuration is generated from the synapse code and is
177 maintained by a script, ``scripts-dev/generate_sample_config``. Making sure
178 that the output from this script matches the desired format is left as an
179 exercise for the reader!
147147 d = more_stuff()
148148 result = yield d # also fine, of course
149149
150 defer.returnValue(result)
150 return result
151151
152152 def nonInlineCallbacksFun():
153153 logger.debug("just a wrapper really")
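The `defer.returnValue(result)` → `return result` change shown here (and applied across the codebase in [\#5736]) relies on a Python 3 feature: `return` with a value inside a generator raises `StopIteration` carrying that value, which is exactly what `@defer.inlineCallbacks` unwraps. A Twisted-free illustration:

```python
def layered_call():
    # Under Python 2's Twisted this needed defer.returnValue(result);
    # on Python 3 a plain return inside a generator works.
    yield "intermediate value"
    return 42  # raises StopIteration(42) under the hood


g = layered_call()
assert next(g) == "intermediate value"
try:
    next(g)
except StopIteration as stop:
    result = stop.value

print(result)  # 42
```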
277277 # Used by phonehome stats to group together related servers.
278278 #server_context: context
279279
280 # Resource-constrained Homeserver Settings
281 #
282 # If limit_remote_rooms.enabled is True, the room complexity will be
283 # checked before a user joins a new remote room. If it is above
284 # limit_remote_rooms.complexity, it will disallow joining or
285 # instantly leave.
286 #
287 # limit_remote_rooms.complexity_error can be set to customise the text
288 # displayed to the user when a room above the complexity threshold has
289 # its join cancelled.
290 #
291 # Uncomment the below lines to enable:
292 #limit_remote_rooms:
293 # enabled: True
294 # complexity: 1.0
295 # complexity_error: "This room is too complex."
296
280297 # Whether to require a user to be in the room to add an alias to it.
281298 # Defaults to 'true'.
282299 #
546563 #federation_rr_transactions_per_room_per_second: 50
547564
548565
566
567 ## Media Store ##
568
569 # Enable the media store service in the Synapse master. Uncomment the
570 # following if you are using a separate media store worker.
571 #
572 #enable_media_repo: false
549573
550574 # Directory where uploaded images and attachments are stored.
551575 #
784808 # period: 6w
785809 # renew_at: 1w
786810 # renew_email_subject: "Renew your %(app)s account"
811 # # Directory in which Synapse will try to find the HTML files to serve to the
812 # # user when trying to renew an account. Optional, defaults to
813 # # synapse/res/templates.
814 # template_dir: "res/templates"
815 # # HTML to be displayed to the user after they successfully renewed their
816 # # account. Optional.
817 # account_renewed_html_path: "account_renewed.html"
818 # # HTML to be displayed when the user tries to renew an account with an invalid
819 # # renewal token. Optional.
820 # invalid_token_html_path: "invalid_token.html"
787821
788822 # Time that a user's session remains valid for, after they log in.
789823 #
923957 # a secret key is derived from the signing key.
924958 #
925959 # macaroon_secret_key: <PRIVATE STRING>
926
927 # Used to enable access token expiration.
928 #
929 #expire_access_token: False
930960
931961 # a secret which is used to calculate HMACs for form values, to stop
932962 # falsification of values. Must be specified for the User Consent
14291459 #
14301460 #homeserver_whitelist:
14311461 # - ".*"
1462
1463 # Jaeger can be configured to sample traces at different rates.
1464 # All configuration options provided by Jaeger can be set here.
1465 # Jaeger's configuration is mostly related to trace sampling, which
1466 # is documented here:
1467 # https://www.jaegertracing.io/docs/1.13/sampling/.
1468 #
1469 #jaeger_config:
1470 # sampler:
1471 # type: const
1472 # param: 1
1473
1474 # Logging whether spans were started and reported
1475 #
1476 # logging:
1477 # false
205205
206206 /_matrix/media/
207207
208 And the following regular expressions matching media-specific administration
209 APIs::
210
211 ^/_synapse/admin/v1/purge_media_cache$
212 ^/_synapse/admin/v1/room/.*/media$
213 ^/_synapse/admin/v1/quarantine_media/.*$
214
208215 You should also set ``enable_media_repo: False`` in the shared configuration
209216 file to stop the main synapse running background jobs related to managing the
210217 media repository.
3434 except ImportError:
3535 pass
3636
37 __version__ = "1.2.1"
37 __version__ = "1.3.0"
127127 )
128128
129129 self._check_joined_room(member, user_id, room_id)
130 defer.returnValue(member)
130 return member
131131
132132 @defer.inlineCallbacks
133133 def check_user_was_in_room(self, room_id, user_id):
155155 if forgot:
156156 raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
157157
158 defer.returnValue(member)
158 return member
159159
160160 @defer.inlineCallbacks
161161 def check_host_in_room(self, room_id, host):
162162 with Measure(self.clock, "check_host_in_room"):
163163 latest_event_ids = yield self.store.is_host_joined(room_id, host)
164 defer.returnValue(latest_event_ids)
164 return latest_event_ids
165165
166166 def _check_joined_room(self, member, user_id, room_id):
167167 if not member or member.membership != Membership.JOIN:
218218 device_id="dummy-device", # stubbed
219219 )
220220
221 defer.returnValue(
222 synapse.types.create_requester(user_id, app_service=app_service)
223 )
221 return synapse.types.create_requester(user_id, app_service=app_service)
224222
225223 user_info = yield self.get_user_by_access_token(access_token, rights)
226224 user = user_info["user"]
261259
262260 request.authenticated_entity = user.to_string()
263261
264 defer.returnValue(
265 synapse.types.create_requester(
266 user, token_id, is_guest, device_id, app_service=app_service
267 )
262 return synapse.types.create_requester(
263 user, token_id, is_guest, device_id, app_service=app_service
268264 )
269265 except KeyError:
270266 raise MissingClientTokenError()
275271 self.get_access_token_from_request(request)
276272 )
277273 if app_service is None:
278 defer.returnValue((None, None))
274 return (None, None)
279275
280276 if app_service.ip_range_whitelist:
281277 ip_address = IPAddress(self.hs.get_ip_from_request(request))
282278 if ip_address not in app_service.ip_range_whitelist:
283 defer.returnValue((None, None))
279 return (None, None)
284280
285281 if b"user_id" not in request.args:
286 defer.returnValue((app_service.sender, app_service))
282 return (app_service.sender, app_service)
287283
288284 user_id = request.args[b"user_id"][0].decode("utf8")
289285 if app_service.sender == user_id:
290 defer.returnValue((app_service.sender, app_service))
286 return (app_service.sender, app_service)
291287
292288 if not app_service.is_interested_in_user(user_id):
293289 raise AuthError(403, "Application service cannot masquerade as this user.")
294290 if not (yield self.store.get_user_by_id(user_id)):
295291 raise AuthError(403, "Application service has not registered this user")
296 defer.returnValue((user_id, app_service))
292 return (user_id, app_service)
297293
298294 @defer.inlineCallbacks
299295 def get_user_by_access_token(self, token, rights="access"):
329325 msg="Access token has expired", soft_logout=True
330326 )
331327
332 defer.returnValue(r)
328 return r
333329
334330 # otherwise it needs to be a valid macaroon
335331 try:
377373 }
378374 else:
379375 raise RuntimeError("Unknown rights setting %s", rights)
380 defer.returnValue(ret)
376 return ret
381377 except (
382378 _InvalidMacaroonException,
383379 pymacaroons.exceptions.MacaroonException,
413409 try:
414410 user_id = self.get_user_id_from_macaroon(macaroon)
415411
416 has_expiry = False
417412 guest = False
418413 for caveat in macaroon.caveats:
419 if caveat.caveat_id.startswith("time "):
420 has_expiry = True
421 elif caveat.caveat_id == "guest = true":
414 if caveat.caveat_id == "guest = true":
422415 guest = True
423416
424 self.validate_macaroon(
425 macaroon, rights, self.hs.config.expire_access_token, user_id=user_id
426 )
417 self.validate_macaroon(macaroon, rights, user_id=user_id)
427418 except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
428419 raise InvalidClientTokenError("Invalid macaroon passed.")
429420
430 if not has_expiry and rights == "access":
421 if rights == "access":
431422 self.token_cache[token] = (user_id, guest)
432423
433424 return user_id, guest
453444 return caveat.caveat_id[len(user_prefix) :]
454445 raise InvalidClientTokenError("No user caveat in macaroon")
455446
456 def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
447 def validate_macaroon(self, macaroon, type_string, user_id):
457448 """
458449 validate that a Macaroon is understood by and was signed by this server.
459450
461452 macaroon(pymacaroons.Macaroon): The macaroon to validate
462453 type_string(str): The kind of token required (e.g. "access",
463454 "delete_pusher")
464 verify_expiry(bool): Whether to verify whether the macaroon has expired.
465455 user_id (str): The user_id required
466456 """
467457 v = pymacaroons.Verifier()
474464 v.satisfy_exact("type = " + type_string)
475465 v.satisfy_exact("user_id = %s" % user_id)
476466 v.satisfy_exact("guest = true")
477
478 # verify_expiry should really always be True, but there exist access
479 # tokens in the wild which expire when they should not, so we can't
480 # enforce expiry yet (so we have to allow any caveat starting with
481 # 'time < ' in access tokens).
482 #
483 # On the other hand, short-term login tokens (as used by CAS login, for
484 # example) have an expiry time which we do want to enforce.
485
486 if verify_expiry:
487 v.satisfy_general(self._verify_expiry)
488 else:
489 v.satisfy_general(lambda c: c.startswith("time < "))
467 v.satisfy_general(self._verify_expiry)
490468
491469 # access_tokens include a nonce for uniqueness: any value is acceptable
492470 v.satisfy_general(lambda c: c.startswith("nonce = "))
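With `expire_access_token` removed, `_verify_expiry` is now registered unconditionally via `satisfy_general`. As a rough sketch (a hypothetical stand-in, not Synapse's actual `_verify_expiry`), a predicate for a `time < <ms>` caveat looks like:

```python
import time


def verify_expiry(caveat, now_ms=None):
    """satisfy_general-style predicate for a 'time < <ms-since-epoch>' caveat.

    A hypothetical stand-in for Auth._verify_expiry: returns True only when
    the caveat has the expected shape and its deadline is still in the future.
    """
    prefix = "time < "
    if not caveat.startswith(prefix):
        return False
    try:
        expiry_ms = int(caveat[len(prefix):])
    except ValueError:
        return False
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return now_ms < expiry_ms
```

`pymacaroons` calls such a predicate for every caveat the exact matchers don't cover; returning `False` for unrelated caveats (like `guest = true` above) lets the other `satisfy_*` rules handle them.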
505483 def _look_up_user_by_access_token(self, token):
506484 ret = yield self.store.get_user_by_access_token(token)
507485 if not ret:
508 defer.returnValue(None)
486 return None
509487
510488 # we use ret.get() below because *lots* of unit tests stub out
511489 # get_user_by_access_token in a way where it only returns a couple of
517495 "device_id": ret.get("device_id"),
518496 "valid_until_ms": ret.get("valid_until_ms"),
519497 }
520 defer.returnValue(user_info)
498 return user_info
521499
522500 def get_appservice_by_req(self, request):
523501 token = self.get_access_token_from_request(request)
542520 @defer.inlineCallbacks
543521 def compute_auth_events(self, event, current_state_ids, for_verification=False):
544522 if event.type == EventTypes.Create:
545 defer.returnValue([])
523 return []
546524
547525 auth_ids = []
548526
603581 if member_event.content["membership"] == Membership.JOIN:
604582 auth_ids.append(member_event.event_id)
605583
606 defer.returnValue(auth_ids)
584 return auth_ids
607585
608586 @defer.inlineCallbacks
609587 def check_can_change_room_list(self, room_id, user):
617595
618596 is_admin = yield self.is_server_admin(user)
619597 if is_admin:
620 defer.returnValue(True)
598 return True
621599
622600 user_id = user.to_string()
623601 yield self.check_joined_room(room_id, user_id)
711689 # * The user is a guest user, and has joined the room
712690 # else it will throw.
713691 member_event = yield self.check_user_was_in_room(room_id, user_id)
714 defer.returnValue((member_event.membership, member_event.event_id))
692 return (member_event.membership, member_event.event_id)
715693 except AuthError:
716694 visibility = yield self.state.get_current_state(
717695 room_id, EventTypes.RoomHistoryVisibility, ""
720698 visibility
721699 and visibility.content["history_visibility"] == "world_readable"
722700 ):
723 defer.returnValue((Membership.JOIN, None))
701 return (Membership.JOIN, None)
724702 return
725703 raise AuthError(
726704 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
6060 INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
6161 WRONG_ROOM_KEYS_VERSION = "M_WRONG_ROOM_KEYS_VERSION"
6262 EXPIRED_ACCOUNT = "ORG_MATRIX_EXPIRED_ACCOUNT"
63 USER_DEACTIVATED = "M_USER_DEACTIVATED"
6364
6465
6566 class CodeMessageException(RuntimeError):
150151 msg (str): The human-readable error message
151152 """
152153 super(UserDeactivatedError, self).__init__(
153 code=http_client.FORBIDDEN, msg=msg, errcode=Codes.UNKNOWN
154 code=http_client.FORBIDDEN, msg=msg, errcode=Codes.USER_DEACTIVATED
154155 )
155156
156157
131131 @defer.inlineCallbacks
132132 def get_user_filter(self, user_localpart, filter_id):
133133 result = yield self.store.get_user_filter(user_localpart, filter_id)
134 defer.returnValue(FilterCollection(result))
134 return FilterCollection(result)
135135
136136 def add_user_filter(self, user_localpart, user_filter):
137137 self.check_valid_filter(user_filter)
1414
1515 import gc
1616 import logging
17 import os
1718 import signal
1819 import sys
1920 import traceback
2021
22 import sdnotify
2123 from daemonize import Daemonize
2224
2325 from twisted.internet import defer, error, reactor
241243 if hasattr(signal, "SIGHUP"):
242244
243245 def handle_sighup(*args, **kwargs):
246 # Tell systemd our state, if we're using it. This will silently fail if
247 # we're not using systemd.
248 sd_channel = sdnotify.SystemdNotifier()
249 sd_channel.notify("RELOADING=1")
250
244251 for i in _sighup_callbacks:
245252 i(hs)
253
254 sd_channel.notify("READY=1")
246255
247256 signal.signal(signal.SIGHUP, handle_sighup)
248257
259268 hs.get_datastore().start_profiling()
260269
261270 setup_sentry(hs)
271 setup_sdnotify(hs)
262272 except Exception:
263273 traceback.print_exc(file=sys.stderr)
264274 reactor = hs.get_reactor()
289299 name = hs.config.worker_name if hs.config.worker_name else "master"
290300 scope.set_tag("worker_app", app)
291301 scope.set_tag("worker_name", name)
302
303
304 def setup_sdnotify(hs):
305 """Adds process state hooks to tell systemd what we are up to.
306 """
307
308 # Tell systemd our state, if we're using it. This will silently fail if
309 # we're not using systemd.
310 sd_channel = sdnotify.SystemdNotifier()
311
312 hs.get_reactor().addSystemEventTrigger(
313 "after",
314 "startup",
315 lambda: sd_channel.notify("READY=1\nMAINPID=%s" % (os.getpid())),
316 )
317
318 hs.get_reactor().addSystemEventTrigger(
319 "before", "shutdown", lambda: sd_channel.notify("STOPPING=1")
320 )
292321
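The sd_notify protocol used by `setup_sdnotify` above is just a datagram written to the Unix socket named by `$NOTIFY_SOCKET`; the real code uses the `sdnotify` package, but a dependency-free sketch of the same idea looks like this:

```python
import os
import socket

def sd_notify(state):
    """Best-effort systemd notification; silently a no-op outside systemd.

    `state` is a string like "READY=1" or "STOPPING=1".
    Returns True if a notification was sent, False otherwise.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):
        # Leading "@" denotes Linux's abstract socket namespace
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode("utf-8"), addr)
    return True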
293322
294323 def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
167167 )
168168
169169 ps.setup()
170 reactor.callWhenRunning(_base.start, ps, config.worker_listeners)
170 reactor.addSystemEventTrigger(
171 "before", "startup", _base.start, ps, config.worker_listeners
172 )
171173
172174 _base.start_worker_reactor("synapse-appservice", config)
173175
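The repeated `reactor.callWhenRunning(...)` → `reactor.addSystemEventTrigger("before", "startup", ...)` change matters for ordering: `callWhenRunning` fires only after the reactor is already running, while a "before startup" trigger runs earlier, so listeners are bound before anything else proceeds. A toy model of Twisted's phased startup event (method names mirror the Twisted API, but this is an illustrative stand-in, not Twisted itself):

```python
class ToyReactor:
    """Tiny model of reactor startup phases: before -> during -> after."""

    def __init__(self):
        self.triggers = {"before": [], "during": [], "after": []}
        self.log = []

    def addSystemEventTrigger(self, phase, event, f, *args):
        assert event == "startup"
        self.triggers[phase].append((f, args))

    def callWhenRunning(self, f, *args):
        # equivalent to an "after startup" trigger
        self.addSystemEventTrigger("after", "startup", f, *args)

    def run(self):
        for phase in ("before", "during", "after"):
            for f, args in self.triggers[phase]:
                f(*args)

reactor = ToyReactor()
reactor.callWhenRunning(reactor.log.append, "start listeners (old)")
reactor.addSystemEventTrigger(
    "before", "startup", reactor.log.append, "start listeners (new)"
)
reactor.run()
```

Running this shows the "before startup" registration fires ahead of the `callWhenRunning` one, regardless of registration order.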
193193 )
194194
195195 ss.setup()
196 reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
196 reactor.addSystemEventTrigger(
197 "before", "startup", _base.start, ss, config.worker_listeners
198 )
197199
198200 _base.start_worker_reactor("synapse-client-reader", config)
199201
192192 )
193193
194194 ss.setup()
195 reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
195 reactor.addSystemEventTrigger(
196 "before", "startup", _base.start, ss, config.worker_listeners
197 )
196198
197199 _base.start_worker_reactor("synapse-event-creator", config)
198200
174174 )
175175
176176 ss.setup()
177 reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
177 reactor.addSystemEventTrigger(
178 "before", "startup", _base.start, ss, config.worker_listeners
179 )
178180
179181 _base.start_worker_reactor("synapse-federation-reader", config)
180182
197197 )
198198
199199 ss.setup()
200 reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
200 reactor.addSystemEventTrigger(
201 "before", "startup", _base.start, ss, config.worker_listeners
202 )
201203
202204 _base.start_worker_reactor("synapse-federation-sender", config)
203205
6969 except HttpResponseException as e:
7070 raise e.to_synapse_error()
7171
72 defer.returnValue((200, result))
72 return (200, result)
7373
7474 @defer.inlineCallbacks
7575 def on_PUT(self, request, user_id):
7676 yield self.auth.get_user_by_req(request)
77 defer.returnValue((200, {}))
77 return (200, {})
7878
7979
8080 class KeyUploadServlet(RestServlet):
125125 self.main_uri + request.uri.decode("ascii"), body, headers=headers
126126 )
127127
128 defer.returnValue((200, result))
128 return (200, result)
129129 else:
130130 # Just interested in counts.
131131 result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
132 defer.returnValue((200, {"one_time_key_counts": result}))
132 return (200, {"one_time_key_counts": result})
133133
134134
135135 class FrontendProxySlavedStore(
246246 )
247247
248248 ss.setup()
249 reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
249 reactor.addSystemEventTrigger(
250 "before", "startup", _base.start, ss, config.worker_listeners
251 )
250252
251253 _base.start_worker_reactor("synapse-frontend-proxy", config)
252254
405405 if provision:
406406 yield acme.provision_certificate()
407407
408 defer.returnValue(provision)
408 return provision
409409
410410 @defer.inlineCallbacks
411411 def reprovision_acme():
446446 reactor.stop()
447447 sys.exit(1)
448448
449 reactor.callWhenRunning(start)
449 reactor.addSystemEventTrigger("before", "startup", start)
450450
451451 return hs
452452
2525 from synapse.config._base import ConfigError
2626 from synapse.config.homeserver import HomeServerConfig
2727 from synapse.config.logger import setup_logging
28 from synapse.http.server import JsonResource
2829 from synapse.http.site import SynapseSite
2930 from synapse.logging.context import LoggingContext
3031 from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
3435 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
3536 from synapse.replication.slave.storage.transactions import SlavedTransactionStore
3637 from synapse.replication.tcp.client import ReplicationClientHandler
38 from synapse.rest.admin import register_servlets_for_media_repo
3739 from synapse.rest.media.v0.content_repository import ContentRepoResource
3840 from synapse.server import HomeServer
3941 from synapse.storage.engines import create_engine
7072 resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
7173 elif name == "media":
7274 media_repo = self.get_media_repository_resource()
75
76 # We need to serve the admin servlets for media on the
77 # worker.
78 admin_resource = JsonResource(self, canonical_json=False)
79 register_servlets_for_media_repo(self, admin_resource)
80
7381 resources.update(
7482 {
7583 MEDIA_PREFIX: media_repo,
7785 CONTENT_REPO_PREFIX: ContentRepoResource(
7886 self, self.config.uploads_path
7987 ),
88 "/_synapse/admin": admin_resource,
8089 }
8190 )
8291
160169 )
161170
162171 ss.setup()
163 reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
172 reactor.addSystemEventTrigger(
173 "before", "startup", _base.start, ss, config.worker_listeners
174 )
164175
165176 _base.start_worker_reactor("synapse-media-repository", config)
166177
215215 _base.start(ps, config.worker_listeners)
216216 ps.get_pusherpool().start()
217217
218 reactor.callWhenRunning(start)
218 reactor.addSystemEventTrigger("before", "startup", start)
219219
220220 _base.start_worker_reactor("synapse-pusher", config)
221221
450450 )
451451
452452 ss.setup()
453 reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
453 reactor.addSystemEventTrigger(
454 "before", "startup", _base.start, ss, config.worker_listeners
455 )
454456
455457 _base.start_worker_reactor("synapse-synchrotron", config)
456458
223223 )
224224
225225 ss.setup()
226 reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
226 reactor.addSystemEventTrigger(
227 "before", "startup", _base.start, ss, config.worker_listeners
228 )
227229
228230 _base.start_worker_reactor("synapse-user-dir", config)
229231
174174 @defer.inlineCallbacks
175175 def _matches_user(self, event, store):
176176 if not event:
177 defer.returnValue(False)
177 return False
178178
179179 if self.is_interested_in_user(event.sender):
180 defer.returnValue(True)
180 return True
181181 # also check m.room.member state key
182182 if event.type == EventTypes.Member and self.is_interested_in_user(
183183 event.state_key
184184 ):
185 defer.returnValue(True)
185 return True
186186
187187 if not store:
188 defer.returnValue(False)
188 return False
189189
190190 does_match = yield self._matches_user_in_member_list(event.room_id, store)
191 defer.returnValue(does_match)
191 return does_match
192192
193193 @cachedInlineCallbacks(num_args=1, cache_context=True)
194194 def _matches_user_in_member_list(self, room_id, store, cache_context):
199199 # check joined member events
200200 for user_id in member_list:
201201 if self.is_interested_in_user(user_id):
202 defer.returnValue(True)
203 defer.returnValue(False)
202 return True
203 return False
204204
205205 def _matches_room_id(self, event):
206206 if hasattr(event, "room_id"):
210210 @defer.inlineCallbacks
211211 def _matches_aliases(self, event, store):
212212 if not store or not event:
213 defer.returnValue(False)
213 return False
214214
215215 alias_list = yield store.get_aliases_for_room(event.room_id)
216216 for alias in alias_list:
217217 if self.is_interested_in_alias(alias):
218 defer.returnValue(True)
219 defer.returnValue(False)
218 return True
219 return False
220220
221221 @defer.inlineCallbacks
222222 def is_interested(self, event, store=None):
230230 """
231231 # Do cheap checks first
232232 if self._matches_room_id(event):
233 defer.returnValue(True)
233 return True
234234
235235 if (yield self._matches_aliases(event, store)):
236 defer.returnValue(True)
236 return True
237237
238238 if (yield self._matches_user(event, store)):
239 defer.returnValue(True)
240
241 defer.returnValue(False)
239 return True
240
241 return False
242242
243243 def is_interested_in_user(self, user_id):
244244 return (
9696 @defer.inlineCallbacks
9797 def query_user(self, service, user_id):
9898 if service.url is None:
99 defer.returnValue(False)
99 return False
100100 uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
101101 response = None
102102 try:
103103 response = yield self.get_json(uri, {"access_token": service.hs_token})
104104 if response is not None: # just an empty json object
105 defer.returnValue(True)
105 return True
106106 except CodeMessageException as e:
107107 if e.code == 404:
108 defer.returnValue(False)
108 return False
109109 return
110110 logger.warning("query_user to %s received %s", uri, e.code)
111111 except Exception as ex:
112112 logger.warning("query_user to %s threw exception %s", uri, ex)
113 defer.returnValue(False)
113 return False
114114
115115 @defer.inlineCallbacks
116116 def query_alias(self, service, alias):
117117 if service.url is None:
118 defer.returnValue(False)
118 return False
119119 uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
120120 response = None
121121 try:
122122 response = yield self.get_json(uri, {"access_token": service.hs_token})
123123 if response is not None: # just an empty json object
124 defer.returnValue(True)
124 return True
125125 except CodeMessageException as e:
126126 logger.warning("query_alias to %s received %s", uri, e.code)
127127 if e.code == 404:
128 defer.returnValue(False)
128 return False
129129 return
130130 except Exception as ex:
131131 logger.warning("query_alias to %s threw exception %s", uri, ex)
132 defer.returnValue(False)
132 return False
133133
134134 @defer.inlineCallbacks
135135 def query_3pe(self, service, kind, protocol, fields):
140140 else:
141141 raise ValueError("Unrecognised 'kind' argument %r to query_3pe()" % (kind,))
142142 if service.url is None:
143 defer.returnValue([])
143 return []
144144
145145 uri = "%s%s/thirdparty/%s/%s" % (
146146 service.url,
154154 logger.warning(
155155 "query_3pe to %s returned an invalid response %r", uri, response
156156 )
157 defer.returnValue([])
157 return []
158158
159159 ret = []
160160 for r in response:
165165 "query_3pe to %s returned an invalid result %r", uri, r
166166 )
167167
168 defer.returnValue(ret)
168 return ret
169169 except Exception as ex:
170170 logger.warning("query_3pe to %s threw exception %s", uri, ex)
171 defer.returnValue([])
171 return []
172172
173173 def get_3pe_protocol(self, service, protocol):
174174 if service.url is None:
175 defer.returnValue({})
175 return {}
176176
177177 @defer.inlineCallbacks
178178 def _get():
188188 logger.warning(
189189 "query_3pe_protocol to %s did not return a" " valid result", uri
190190 )
191 defer.returnValue(None)
191 return None
192192
193193 for instance in info.get("instances", []):
194194 network_id = instance.get("network_id", None)
197197 service.id, network_id
198198 ).to_string()
199199
200 defer.returnValue(info)
200 return info
201201 except Exception as ex:
202202 logger.warning("query_3pe_protocol to %s threw exception %s", uri, ex)
203 defer.returnValue(None)
203 return None
204204
205205 key = (service.id, protocol)
206206 return self.protocol_meta_cache.wrap(key, _get)
208208 @defer.inlineCallbacks
209209 def push_bulk(self, service, events, txn_id=None):
210210 if service.url is None:
211 defer.returnValue(True)
211 return True
212212
213213 events = self._serialize(events)
214214
228228 )
229229 sent_transactions_counter.labels(service.id).inc()
230230 sent_events_counter.labels(service.id).inc(len(events))
231 defer.returnValue(True)
231 return True
232232 return
233233 except CodeMessageException as e:
234234 logger.warning("push_bulk to %s received %s", uri, e.code)
235235 except Exception as ex:
236236 logger.warning("push_bulk to %s threw exception %s", uri, ex)
237237 failed_transactions_counter.labels(service.id).inc()
238 defer.returnValue(False)
238 return False
239239
240240 def _serialize(self, events):
241241 time_now = self.clock.time_msec()
192192 @defer.inlineCallbacks
193193 def _is_service_up(self, service):
194194 state = yield self.store.get_appservice_state(service)
195 defer.returnValue(state == ApplicationServiceState.UP or state is None)
195 return state == ApplicationServiceState.UP or state is None
196196
197197
198198 class _Recoverer(object):
207207 r.service.id,
208208 )
209209 r.recover()
210 defer.returnValue(recoverers)
210 return recoverers
211211
212212 def __init__(self, clock, store, as_api, service, callback):
213213 self.clock = clock
115115 seed = bytes(self.signing_key[0])
116116 self.macaroon_secret_key = hashlib.sha256(seed).digest()
117117
118 self.expire_access_token = config.get("expire_access_token", False)
119
120118 # a secret which is used to calculate HMACs for form values, to stop
121119 # falsification of values
122120 self.form_secret = config.get("form_secret", None)
142140 # a secret key is derived from the signing key.
143141 #
144142 %(macaroon_secret_key)s
145
146 # Used to enable access token expiration.
147 #
148 #expire_access_token: False
149143
150144 # a secret which is used to calculate HMACs for form values, to stop
151145 # falsification of values. Must be specified for the User Consent
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
1415 import logging
1516 import logging.config
1617 import os
7475
7576 class LoggingConfig(Config):
7677 def read_config(self, config, **kwargs):
77 self.verbosity = config.get("verbose", 0)
78 self.log_config = self.abspath(config.get("log_config"))
7879 self.no_redirect_stdio = config.get("no_redirect_stdio", False)
79 self.log_config = self.abspath(config.get("log_config"))
80 self.log_file = self.abspath(config.get("log_file"))
8180
8281 def generate_config_section(self, config_dir_path, server_name, **kwargs):
8382 log_config = os.path.join(config_dir_path, server_name + ".log.config")
9392 )
9493
9594 def read_arguments(self, args):
96 if args.verbose is not None:
97 self.verbosity = args.verbose
9895 if args.no_redirect_stdio is not None:
9996 self.no_redirect_stdio = args.no_redirect_stdio
100 if args.log_config is not None:
101 self.log_config = args.log_config
102 if args.log_file is not None:
103 self.log_file = args.log_file
10497
10598 @staticmethod
10699 def add_arguments(parser):
107100 logging_group = parser.add_argument_group("logging")
108 logging_group.add_argument(
109 "-v",
110 "--verbose",
111 dest="verbose",
112 action="count",
113 help="The verbosity level. Specify multiple times to increase "
114 "verbosity. (Ignored if --log-config is specified.)",
115 )
116 logging_group.add_argument(
117 "-f",
118 "--log-file",
119 dest="log_file",
120 help="File to log to. (Ignored if --log-config is specified.)",
121 )
122 logging_group.add_argument(
123 "--log-config",
124 dest="log_config",
125 default=None,
126 help="Python logging config file",
127 )
128101 logging_group.add_argument(
129102 "-n",
130103 "--no-redirect-stdio",
152125 config (LoggingConfig | synapse.config.workers.WorkerConfig):
153126 configuration data
154127
155 use_worker_options (bool): True to use 'worker_log_config' and
156 'worker_log_file' options instead of 'log_config' and 'log_file'.
128 use_worker_options (bool): True to use the 'worker_log_config' option
129 instead of 'log_config'.
157130
158131 register_sighup (func | None): Function to call to register a
159132 sighup handler.
160133 """
161134 log_config = config.worker_log_config if use_worker_options else config.log_config
162 log_file = config.worker_log_file if use_worker_options else config.log_file
163
164 log_format = (
165 "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
166 " - %(message)s"
167 )
168135
169136 if log_config is None:
170 # We don't have a logfile, so fall back to the 'verbosity' param from
171 # the config or cmdline. (Note that we generate a log config for new
172 # installs, so this will be an unusual case)
173 level = logging.INFO
174 level_for_storage = logging.INFO
175 if config.verbosity:
176 level = logging.DEBUG
177 if config.verbosity > 1:
178 level_for_storage = logging.DEBUG
137 log_format = (
138 "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
139 " - %(message)s"
140 )
179141
180142 logger = logging.getLogger("")
181 logger.setLevel(level)
182
183 logging.getLogger("synapse.storage.SQL").setLevel(level_for_storage)
143 logger.setLevel(logging.INFO)
144 logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)
184145
185146 formatter = logging.Formatter(log_format)
186 if log_file:
187 # TODO: Customisable file size / backup count
188 handler = logging.handlers.RotatingFileHandler(
189 log_file, maxBytes=(1000 * 1000 * 100), backupCount=3, encoding="utf8"
190 )
191
192 def sighup(signum, stack):
193 logger.info("Closing log file due to SIGHUP")
194 handler.doRollover()
195 logger.info("Opened new log file due to SIGHUP")
196
197 else:
198 handler = logging.StreamHandler()
199
200 def sighup(*args):
201 pass
202
147
148 handler = logging.StreamHandler()
203149 handler.setFormatter(formatter)
204
205150 handler.addFilter(LoggingContextFilter(request=""))
206
207151 logger.addHandler(handler)
208152 else:
209153
217161 logging.info("Reloaded log config from %s due to SIGHUP", log_config)
218162
219163 load_log_config()
220
221 appbase.register_sighup(sighup)
164 appbase.register_sighup(sighup)
222165
223166 # make sure that the first thing we log is a thing we can grep backwards
224167 # for
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 import os
1516 from distutils.util import strtobool
17
18 import pkg_resources
1619
1720 from synapse.config._base import Config, ConfigError
1821 from synapse.types import RoomAlias
4043
4144 self.startup_job_max_delta = self.period * 10.0 / 100.0
4245
43 if self.renew_by_email_enabled and "public_baseurl" not in synapse_config:
44 raise ConfigError("Can't send renewal emails without 'public_baseurl'")
46 if self.renew_by_email_enabled:
47 if "public_baseurl" not in synapse_config:
48 raise ConfigError("Can't send renewal emails without 'public_baseurl'")
49
50 template_dir = config.get("template_dir")
51
52 if not template_dir:
53 template_dir = pkg_resources.resource_filename("synapse", "res/templates")
54
55 if "account_renewed_html_path" in config:
56 file_path = os.path.join(template_dir, config["account_renewed_html_path"])
57
58 self.account_renewed_html_content = self.read_file(
59 file_path, "account_validity.account_renewed_html_path"
60 )
61 else:
62 self.account_renewed_html_content = (
63 "<html><body>Your account has been successfully renewed.</body></html>"
64 )
65
66 if "invalid_token_html_path" in config:
67 file_path = os.path.join(template_dir, config["invalid_token_html_path"])
68
69 self.invalid_token_html_content = self.read_file(
70 file_path, "account_validity.invalid_token_html_path"
71 )
72 else:
73 self.invalid_token_html_content = (
74 "<html><body>Invalid renewal token.</body></html>"
75 )
4576
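The template handling above implements a common fallback: if the config names a file, read it from `template_dir` (which itself defaults to the packaged `synapse/res/templates` via `pkg_resources`); otherwise use a built-in default string. A sketch of that pattern without the pkg_resources dependency (`resolve_template` and the paths are illustrative stand-ins):

```python
import os
import tempfile

def resolve_template(config, key, template_dir, default_html):
    """If config[key] names a file, read it from template_dir;
    otherwise fall back to the built-in default HTML."""
    if key in config:
        with open(os.path.join(template_dir, config[key])) as f:
            return f.read()
    return default_html

# Key absent from config: the built-in default is used.
html = resolve_template(
    {},
    "account_renewed_html_path",
    "/nonexistent",
    "<html><body>Your account has been successfully renewed.</body></html>",
)
```

Note that the default path is never touched when the key is absent, so a missing `template_dir` only matters once an override is actually configured.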
4677
4778 class RegistrationConfig(Config):
144175 # period: 6w
145176 # renew_at: 1w
146177 # renew_email_subject: "Renew your %%(app)s account"
178 # # Directory in which Synapse will try to find the HTML files to serve to the
179 # # user when trying to renew an account. Optional, defaults to
180 # # synapse/res/templates.
181 # template_dir: "res/templates"
182 # # HTML to be displayed to the user after they successfully renewed their
183 # # account. Optional.
184 # account_renewed_html_path: "account_renewed.html"
185 # # HTML to be displayed when the user tries to renew an account with an invalid
186 # # renewal token. Optional.
187 # invalid_token_html_path: "invalid_token.html"
147188
148189 # Time that a user's session remains valid for, after they log in.
149190 #
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
1415 import os
1516 from collections import namedtuple
1617
8687
8788 class ContentRepositoryConfig(Config):
8889 def read_config(self, config, **kwargs):
90
91 # Only load the media repo if it is enabled in the config, or if the
92 # current worker app is the media repository worker.
93 if (
94 self.enable_media_repo is False
95 and config.get("worker_app") != "synapse.app.media_repository"
96 ):
97 self.can_load_media_repo = False
98 return
99 else:
100 self.can_load_media_repo = True
101
89102 self.max_upload_size = self.parse_size(config.get("max_upload_size", "10M"))
90103 self.max_image_pixels = self.parse_size(config.get("max_image_pixels", "32M"))
91104 self.max_spider_size = self.parse_size(config.get("max_spider_size", "10M"))
201214
202215 return (
203216 r"""
217 ## Media Store ##
218
219 # Enable the media store service in the Synapse master. Uncomment the
220 # following if you are using a separate media store worker.
221 #
222 #enable_media_repo: false
223
204224 # Directory where uploaded images and attachments are stored.
205225 #
206226 media_store_path: "%(media_store)s"
1717 import logging
1818 import os.path
1919
20 import attr
2021 from netaddr import IPSet
2122
2223 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
3637 DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
3738
3839 DEFAULT_ROOM_VERSION = "4"
40
41 ROOM_COMPLEXITY_TOO_GREAT = (
42 "Your homeserver is unable to join rooms this large or complex. "
43 "Please speak to your server administrator, or upgrade your instance "
44 "to join this room."
45 )
3946
4047
4148 class ServerConfig(Config):
245252 _warn_if_webclient_configured(self.listeners)
246253
247254 self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
255
256 @attr.s
257 class LimitRemoteRoomsConfig(object):
258 enabled = attr.ib(
259 validator=attr.validators.instance_of(bool), default=False
260 )
261 complexity = attr.ib(
262 validator=attr.validators.instance_of((int, float)), default=1.0
263 )
264 complexity_error = attr.ib(
265 validator=attr.validators.instance_of(str),
266 default=ROOM_COMPLEXITY_TOO_GREAT,
267 )
268
269 self.limit_remote_rooms = LimitRemoteRoomsConfig(
270 **config.get("limit_remote_rooms", {})
271 )
248272
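The attrs-based `LimitRemoteRoomsConfig` above uses validators to reject mistyped YAML values at config-parse time rather than failing later. A stdlib-only sketch of the same guard using `dataclasses` (the real code uses the `attr` package; field names mirror the diff):

```python
from dataclasses import dataclass

@dataclass
class LimitRemoteRoomsConfig:
    enabled: bool = False
    complexity: float = 1.0
    complexity_error: str = (
        "Your homeserver is unable to join rooms this large or complex."
    )

    def __post_init__(self):
        # Mirror the attrs validators: reject wrong-typed YAML values early.
        if not isinstance(self.enabled, bool):
            raise TypeError("enabled must be a bool")
        if isinstance(self.complexity, bool) or not isinstance(
            self.complexity, (int, float)
        ):
            raise TypeError("complexity must be a number")

# As in the diff, the YAML mapping is splatted into the constructor.
cfg = LimitRemoteRoomsConfig(**{"enabled": True, "complexity": 0.5})
```

Splatting `config.get("limit_remote_rooms", {})` into the constructor means unknown keys raise `TypeError` too, so typos in the YAML surface immediately at startup.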
249273 bind_port = config.get("bind_port")
250274 if bind_port:
616640 # Used by phonehome stats to group together related servers.
617641 #server_context: context
618642
643 # Resource-constrained Homeserver Settings
644 #
645 # If limit_remote_rooms.enabled is True, the room complexity will be
646 # checked before a user joins a new remote room. If it is above
647 # limit_remote_rooms.complexity, the join will be refused, or the
648 # user will be made to leave the room immediately.
649 #
650 # limit_remote_rooms.complexity_error can be set to customise the text
651 # displayed to the user when a room above the complexity threshold has
652 # its join cancelled.
653 #
654 # Uncomment the below lines to enable:
655 #limit_remote_rooms:
656 # enabled: True
657 # complexity: 1.0
658 # complexity_error: "This room is too complex."
659
619660 # Whether to require a user to be in the room to add an alias to it.
620661 # Defaults to 'true'.
621662 #
2222 opentracing_config = {}
2323
2424 self.opentracer_enabled = opentracing_config.get("enabled", False)
25
26 self.jaeger_config = opentracing_config.get(
27 "jaeger_config",
28 {"sampler": {"type": "const", "param": 1}, "logging": False},
29 )
30
2531 if not self.opentracer_enabled:
2632 return
2733
5561 #
5662 #homeserver_whitelist:
5763 # - ".*"
64
65 # Jaeger can be configured to sample traces at different rates.
66 # All configuration options provided by Jaeger can be set here.
67 # Jaeger's configuration is mostly related to trace sampling, which
68 # is documented here:
69 # https://www.jaegertracing.io/docs/1.13/sampling/.
70 #
71 #jaeger_config:
72 # sampler:
73 # type: const
74 # param: 1
75
76 # Whether to log when spans are started and reported.
77 #
78 #logging: false
5880 """
3030 self.worker_listeners = config.get("worker_listeners", [])
3131 self.worker_daemonize = config.get("worker_daemonize")
3232 self.worker_pid_file = config.get("worker_pid_file")
33 self.worker_log_file = config.get("worker_log_file")
3433 self.worker_log_config = config.get("worker_log_config")
3534
3635 # The host used to connect to the main synapse
7776
7877 if args.daemonize is not None:
7978 self.worker_daemonize = args.daemonize
80 if args.log_config is not None:
81 self.worker_log_config = args.log_config
82 if args.log_file is not None:
83 self.worker_log_file = args.log_file
8479 if args.manhole is not None:
8580 self.worker_manhole = args.worker_manhole
3030 platformTrust,
3131 )
3232 from twisted.python.failure import Failure
33 from twisted.web.iweb import IPolicyForHTTPS
3334
3435 logger = logging.getLogger(__name__)
3536
7374 return self._context
7475
7576
77 @implementer(IPolicyForHTTPS)
7678 class ClientTLSOptionsFactory(object):
7779 """Factory for Twisted SSLClientConnectionCreators that are used to make connections
7880 to remote servers for federation.
145147 f = Failure()
146148 tls_protocol.failVerification(f)
147149
150 def creatorForNetloc(self, hostname, port):
151 """Implements the IPolicyForHTTPS interface so that this can be passed
152 directly to agents.
153 """
154 return self.get_options(hostname)
155
148156
149157 @implementer(IOpenSSLClientConnectionCreator)
150158 class SSLClientConnectionCreator(object):
237237 """
238238
239239 try:
240 # create a deferred for each server we're going to look up the keys
241 # for; we'll resolve them once we have completed our lookups.
242 # These will be passed into wait_for_previous_lookups to block
243 # any other lookups until we have finished.
244 # The deferreds are called with no logcontext.
245 server_to_deferred = {
246 rq.server_name: defer.Deferred() for rq in verify_requests
247 }
248
249 # We want to wait for any previous lookups to complete before
250 # proceeding.
251 yield self.wait_for_previous_lookups(server_to_deferred)
252
253 # Actually start fetching keys.
254 self._get_server_verify_keys(verify_requests)
255
256 # When we've finished fetching all the keys for a given server_name,
257 # resolve the deferred passed to `wait_for_previous_lookups` so that
258 # any lookups waiting will proceed.
259 #
260 # map from server name to a set of request ids
240 ctx = LoggingContext.current_context()
241
242 # map from server name to a set of outstanding request ids
261243 server_to_request_ids = {}
262244
263245 for verify_request in verify_requests:
265247 request_id = id(verify_request)
266248 server_to_request_ids.setdefault(server_name, set()).add(request_id)
267249
268 def remove_deferreds(res, verify_request):
250 # Wait for any previous lookups to complete before proceeding.
251 yield self.wait_for_previous_lookups(server_to_request_ids.keys())
252
253 # take out a lock on each of the servers by sticking a Deferred in
254 # key_downloads
255 for server_name in server_to_request_ids.keys():
256 self.key_downloads[server_name] = defer.Deferred()
257 logger.debug("Got key lookup lock on %s", server_name)
258
259 # When we've finished fetching all the keys for a given server_name,
260 # drop the lock by resolving the deferred in key_downloads.
261 def drop_server_lock(server_name):
262 d = self.key_downloads.pop(server_name)
263 d.callback(None)
264
265 def lookup_done(res, verify_request):
269266 server_name = verify_request.server_name
270 request_id = id(verify_request)
271 server_to_request_ids[server_name].discard(request_id)
272 if not server_to_request_ids[server_name]:
273 d = server_to_deferred.pop(server_name, None)
274 if d:
275 d.callback(None)
267 server_requests = server_to_request_ids[server_name]
268 server_requests.remove(id(verify_request))
269
270 # if there are no more requests for this server, we can drop the lock.
271 if not server_requests:
272 with PreserveLoggingContext(ctx):
273 logger.debug("Releasing key lookup lock on %s", server_name)
274
275 # ... but not immediately, as that can cause stack explosions if
276 # we get a long queue of lookups.
277 self.clock.call_later(0, drop_server_lock, server_name)
278
276279 return res
277280
278281 for verify_request in verify_requests:
279 verify_request.key_ready.addBoth(remove_deferreds, verify_request)
282 verify_request.key_ready.addBoth(lookup_done, verify_request)
283
284 # Actually start fetching keys.
285 self._get_server_verify_keys(verify_requests)
280286 except Exception:
281287 logger.exception("Error starting key lookups")
282288
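The reworked key-lookup locking above amounts to refcounting outstanding requests per server and dropping the `key_downloads` lock when the last one finishes (with the actual release deferred via `clock.call_later(0, ...)` to avoid deep callback chains). A minimal synchronous sketch of that bookkeeping, with plain objects standing in for the Deferreds:

```python
key_downloads = {}          # server_name -> sentinel "lock" object
server_to_request_ids = {}  # server_name -> set of outstanding request ids

def start_lookup(server_name, request_id):
    """Record a request and take out the per-server lock if not held."""
    server_to_request_ids.setdefault(server_name, set()).add(request_id)
    key_downloads.setdefault(server_name, object())

def lookup_done(server_name, request_id):
    """Discharge a request; the last one drops the server lock."""
    pending = server_to_request_ids[server_name]
    pending.remove(request_id)
    if not pending:
        # Real code resolves the Deferred here, via call_later(0, ...)
        key_downloads.pop(server_name)

start_lookup("example.com", 1)
start_lookup("example.com", 2)
lookup_done("example.com", 1)
lock_held_mid = "example.com" in key_downloads   # second request pending
lookup_done("example.com", 2)
lock_held_end = "example.com" in key_downloads   # all requests done
```

The lock stays held while any request for the server is outstanding and is released only once the set empties, which is exactly the invariant `wait_for_previous_lookups` relies on.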
283289 @defer.inlineCallbacks
284 def wait_for_previous_lookups(self, server_to_deferred):
290 def wait_for_previous_lookups(self, server_names):
285291 """Waits for any previous key lookups for the given servers to finish.
286292
287293 Args:
288 server_to_deferred (dict[str, Deferred]): server_name to deferred which gets
289 resolved once we've finished looking up keys for that server.
290 The Deferreds should be regular twisted ones which call their
291 callbacks with no logcontext.
292
293 Returns: a Deferred which resolves once all key lookups for the given
294 servers have completed. Follows the synapse rules of logcontext
295 preservation.
294 server_names (Iterable[str]): list of servers which we want to look up
295
296 Returns:
297 Deferred[None]: resolves once all key lookups for the given servers have
298 completed. Follows the synapse rules of logcontext preservation.
296299 """
297300 loop_count = 1
298301 while True:
299302 wait_on = [
300303 (server_name, self.key_downloads[server_name])
301 for server_name in server_to_deferred.keys()
304 for server_name in server_names
302305 if server_name in self.key_downloads
303306 ]
304307 if not wait_on:
312315 yield defer.DeferredList((w[1] for w in wait_on))
313316
314317 loop_count += 1
315
316 ctx = LoggingContext.current_context()
317
318 def rm(r, server_name_):
319 with PreserveLoggingContext(ctx):
320 logger.debug("Releasing key lookup lock on %s", server_name_)
321 self.key_downloads.pop(server_name_, None)
322 return r
323
324 for server_name, deferred in server_to_deferred.items():
325 logger.debug("Got key lookup lock on %s", server_name)
326 self.key_downloads[server_name] = deferred
327 deferred.addBoth(rm, server_name)
328318
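The reworked `wait_for_previous_lookups` loops until none of the requested servers has an in-flight download, re-checking after each wait because new entries may have appeared. A rough asyncio analogue of that loop (Futures stand in for the Deferreds in `key_downloads`; the signature is illustrative):

```python
import asyncio

async def wait_for_previous_lookups(key_downloads, server_names):
    """Wait until no requested server has an in-flight key download.

    key_downloads maps server_name -> Future resolved when that server's
    lookup finishes; re-check after each wait, as in the code above.
    """
    while True:
        wait_on = [key_downloads[s] for s in server_names if s in key_downloads]
        if not wait_on:
            return
        await asyncio.gather(*wait_on, return_exceptions=True)


async def demo():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    downloads = {"hs1": fut}
    # Simulate the in-flight lookup completing: drop the lock, resolve it.
    loop.call_soon(lambda: (downloads.pop("hs1"), fut.set_result(None)))
    await wait_for_previous_lookups(downloads, ["hs1", "hs2"])
    return downloads
```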
329319 def _get_server_verify_keys(self, verify_requests):
330320 """Tries to find at least one key for each verify request
471461 keys = {}
472462 for (server_name, key_id), key in res.items():
473463 keys.setdefault(server_name, {})[key_id] = key
474 defer.returnValue(keys)
464 return keys
475465
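The pervasive `defer.returnValue(x)` → `return x` change in this diff works because Python 3 generators may return a value, delivered via `StopIteration.value`; once Python 2 support is dropped, `@defer.inlineCallbacks` code no longer needs the `returnValue` workaround. A tiny stand-alone illustration of the underlying mechanism (no Twisted required; `lookup` and `drive` are illustrative names):

```python
def lookup():
    yield "step 1"           # stands in for a `yield some_deferred`
    return {"key": "value"}  # equivalent of defer.returnValue({...})

def drive(gen):
    """Minimal driver mimicking what inlineCallbacks does at the end:
    advance the generator to completion and capture its return value."""
    try:
        while True:
            next(gen)
    except StopIteration as stop:
        return stop.value
```

On Python 2, `return x` inside a generator was a syntax error, which is why `defer.returnValue` (which raises a special exception carrying the value) existed at all.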
476466
477467 class BaseV2KeyFetcher(object):
575565 ).addErrback(unwrapFirstError)
576566 )
577567
578 defer.returnValue(verify_keys)
568 return verify_keys
579569
580570
581571 class PerspectivesKeyFetcher(BaseV2KeyFetcher):
597587 result = yield self.get_server_verify_key_v2_indirect(
598588 keys_to_fetch, key_server
599589 )
600 defer.returnValue(result)
590 return result
601591 except KeyLookupError as e:
602592 logger.warning(
603593 "Key lookup failed from %r: %s", key_server.server_name, e
610600 str(e),
611601 )
612602
613 defer.returnValue({})
603 return {}
614604
615605 results = yield make_deferred_yieldable(
616606 defer.gatherResults(
624614 for server_name, keys in result.items():
625615 union_of_keys.setdefault(server_name, {}).update(keys)
626616
627 defer.returnValue(union_of_keys)
617 return union_of_keys
628618
629619 @defer.inlineCallbacks
630620 def get_server_verify_key_v2_indirect(self, keys_to_fetch, key_server):
710700 perspective_name, time_now_ms, added_keys
711701 )
712702
713 defer.returnValue(keys)
703 return keys
714704
715705 def _validate_perspectives_response(self, key_server, response):
716706 """Optionally check the signature on the result of a /key/query request
852842 )
853843 keys.update(response_keys)
854844
855 defer.returnValue(keys)
845 return keys
856846
857847
858848 @defer.inlineCallbacks
143143 if self._origin_server_ts is not None:
144144 event_dict["origin_server_ts"] = self._origin_server_ts
145145
146 defer.returnValue(
147 create_local_event_from_event_dict(
148 clock=self._clock,
149 hostname=self._hostname,
150 signing_key=self._signing_key,
151 format_version=self.format_version,
152 event_dict=event_dict,
153 internal_metadata_dict=self.internal_metadata.get_dict(),
154 )
146 return create_local_event_from_event_dict(
147 clock=self._clock,
148 hostname=self._hostname,
149 signing_key=self._signing_key,
150 format_version=self.format_version,
151 event_dict=event_dict,
152 internal_metadata_dict=self.internal_metadata.get_dict(),
155153 )
156154
157155
132132 else:
133133 prev_state_id = None
134134
135 defer.returnValue(
136 {
137 "prev_state_id": prev_state_id,
138 "event_type": event.type,
139 "event_state_key": event.state_key if event.is_state() else None,
140 "state_group": self.state_group,
141 "rejected": self.rejected,
142 "prev_group": self.prev_group,
143 "delta_ids": _encode_state_dict(self.delta_ids),
144 "prev_state_events": self.prev_state_events,
145 "app_service_id": self.app_service.id if self.app_service else None,
146 }
147 )
135 return {
136 "prev_state_id": prev_state_id,
137 "event_type": event.type,
138 "event_state_key": event.state_key if event.is_state() else None,
139 "state_group": self.state_group,
140 "rejected": self.rejected,
141 "prev_group": self.prev_group,
142 "delta_ids": _encode_state_dict(self.delta_ids),
143 "prev_state_events": self.prev_state_events,
144 "app_service_id": self.app_service.id if self.app_service else None,
145 }
148146
149147 @staticmethod
150148 def deserialize(store, input):
201199
202200 yield make_deferred_yieldable(self._fetching_state_deferred)
203201
204 defer.returnValue(self._current_state_ids)
202 return self._current_state_ids
205203
206204 @defer.inlineCallbacks
207205 def get_prev_state_ids(self, store):
221219
222220 yield make_deferred_yieldable(self._fetching_state_deferred)
223221
224 defer.returnValue(self._prev_state_ids)
222 return self._prev_state_ids
225223
226224 def get_cached_current_state_ids(self):
227225 """Gets the current state IDs if we have them already cached.
5050 defer.Deferred[bool]: True if the event should be allowed, False if not.
5151 """
5252 if self.third_party_rules is None:
53 defer.returnValue(True)
53 return True
5454
5555 prev_state_ids = yield context.get_prev_state_ids(self.store)
5656
6060 state_events[key] = yield self.store.get_event(event_id, allow_none=True)
6161
6262 ret = yield self.third_party_rules.check_event_allowed(event, state_events)
63 defer.returnValue(ret)
63 return ret
6464
6565 @defer.inlineCallbacks
6666 def on_create_room(self, requester, config, is_requester_admin):
9797 """
9898
9999 if self.third_party_rules is None:
100 defer.returnValue(True)
100 return True
101101
102102 state_ids = yield self.store.get_filtered_current_state_ids(room_id)
103103 room_state_events = yield self.store.get_events(state_ids.values())
109109 ret = yield self.third_party_rules.check_threepid_can_be_invited(
110110 medium, address, state_events
111111 )
112 defer.returnValue(ret)
112 return ret
359359 """
360360 # To handle the case of presence events and the like
361361 if not isinstance(event, EventBase):
362 defer.returnValue(event)
362 return event
363363
364364 event_id = event.event_id
365365 serialized_event = serialize_event(event, time_now, **kwargs)
405405 "sender": edit.sender,
406406 }
407407
408 defer.returnValue(serialized_event)
408 return serialized_event
409409
410410 def serialize_events(self, events, time_now, **kwargs):
411411 """Serializes multiple events.
9494
9595 elif event.type == EventTypes.Topic:
9696 self._ensure_strings(event.content, ["topic"])
97
97 self._ensure_state_event(event)
9898 elif event.type == EventTypes.Name:
9999 self._ensure_strings(event.content, ["name"])
100
100 self._ensure_state_event(event)
101101 elif event.type == EventTypes.Member:
102102 if "membership" not in event.content:
103103 raise SynapseError(400, "Content has no membership key")
105105 if event.content["membership"] not in Membership.LIST:
106106 raise SynapseError(400, "Invalid membership key")
107107
108 self._ensure_state_event(event)
109 elif event.type == EventTypes.Tombstone:
110 if "replacement_room" not in event.content:
111 raise SynapseError(400, "Content has no replacement_room key")
112
113 if event.content["replacement_room"] == event.room_id:
114 raise SynapseError(
115 400, "Tombstone cannot reference the room it was sent in"
116 )
117
118 self._ensure_state_event(event)
119
108120 def _ensure_strings(self, d, keys):
109121 for s in keys:
110122 if s not in d:
111123 raise SynapseError(400, "'%s' not in content" % (s,))
112124 if not isinstance(d[s], string_types):
113125 raise SynapseError(400, "'%s' not a string type" % (s,))
126
127 def _ensure_state_event(self, event):
128 if not event.is_state():
129 raise SynapseError(400, "'%s' must be a state event" % (event.type,))
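The new validation above adds two things: tombstone events must name a `replacement_room` that is not the room they were sent in, and topic/name/member/tombstone events must all be state events. A standalone sketch of the tombstone checks (the dict-shaped event and `ValidationError` are stand-ins for Synapse's `EventBase` and `SynapseError`):

```python
class ValidationError(Exception):
    pass

def validate_tombstone(event):
    content = event["content"]
    if "replacement_room" not in content:
        raise ValidationError("Content has no replacement_room key")
    if content["replacement_room"] == event["room_id"]:
        raise ValidationError("Tombstone cannot reference the room it was sent in")
    # _ensure_state_event equivalent: a state event carries a state_key
    if event.get("state_key") is None:
        raise ValidationError("'m.room.tombstone' must be a state event")
```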
105105 "Failed to find copy of %s with valid signature", pdu.event_id
106106 )
107107
108 defer.returnValue(res)
108 return res
109109
110110 handle = preserve_fn(handle_check_result)
111111 deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)]
115115 ).addErrback(unwrapFirstError)
116116
117117 if include_none:
118 defer.returnValue(valid_pdus)
118 return valid_pdus
119119 else:
120 defer.returnValue([p for p in valid_pdus if p])
120 return [p for p in valid_pdus if p]
121121
122122 def _check_sigs_and_hash(self, room_version, pdu):
123123 return make_deferred_yieldable(
212212 ).addErrback(unwrapFirstError)
213213 )
214214
215 defer.returnValue(pdus)
215 return pdus
216216
217217 @defer.inlineCallbacks
218218 @log_function
244244
245245 ev = self._get_pdu_cache.get(event_id)
246246 if ev:
247 defer.returnValue(ev)
247 return ev
248248
249249 pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
250250
306306 if signed_pdu:
307307 self._get_pdu_cache[event_id] = signed_pdu
308308
309 defer.returnValue(signed_pdu)
309 return signed_pdu
310310
311311 @defer.inlineCallbacks
312312 @log_function
354354
355355 auth_chain.sort(key=lambda e: e.depth)
356356
357 defer.returnValue((pdus, auth_chain))
357 return (pdus, auth_chain)
358358 except HttpResponseException as e:
359359 if e.code == 400 or e.code == 404:
360360 logger.info("Failed to use get_room_state_ids API, falling back")
403403
404404 signed_auth.sort(key=lambda e: e.depth)
405405
406 defer.returnValue((signed_pdus, signed_auth))
406 return (signed_pdus, signed_auth)
407407
408408 @defer.inlineCallbacks
409409 def get_events_from_store_or_dest(self, destination, room_id, event_ids):
428428 missing_events.discard(k)
429429
430430 if not missing_events:
431 defer.returnValue((signed_events, failed_to_fetch))
431 return (signed_events, failed_to_fetch)
432432
433433 logger.debug(
434434 "Fetching unknown state/auth events %s for room %s",
464464 # We removed all events we successfully fetched from `batch`
465465 failed_to_fetch.update(batch)
466466
467 defer.returnValue((signed_events, failed_to_fetch))
467 return (signed_events, failed_to_fetch)
468468
469469 @defer.inlineCallbacks
470470 @log_function
484484
485485 signed_auth.sort(key=lambda e: e.depth)
486486
487 defer.returnValue(signed_auth)
487 return signed_auth
488488
489489 @defer.inlineCallbacks
490490 def _try_destination_list(self, description, destinations, callback):
510510 The [Deferred] result of callback, if it succeeds
511511
512512 Raises:
513 SynapseError if the chosen remote server returns a 300/400 code.
514
515 RuntimeError if no servers were reachable.
513 SynapseError if the chosen remote server returns a 300/400 code, or
514 no servers were reachable.
516515 """
517516 for destination in destinations:
518517 if destination == self.server_name:
520519
521520 try:
522521 res = yield callback(destination)
523 defer.returnValue(res)
522 return res
524523 except InvalidResponseError as e:
525524 logger.warn("Failed to %s via %s: %s", description, destination, e)
526525 except HttpResponseException as e:
537536 except Exception:
538537 logger.warn("Failed to %s via %s", description, destination, exc_info=1)
539538
540 raise RuntimeError("Failed to %s via any server" % (description,))
539 raise SynapseError(502, "Failed to %s via any server" % (description,))
541540
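The change to `_try_destination_list` replaces the final `RuntimeError` with `SynapseError(502, ...)`, so exhausting every destination surfaces to clients as a proper Bad Gateway rather than a generic 500. A simplified synchronous sketch of the pattern (the real code only swallows specific exception types and re-raises 300/400 responses; `UpstreamError` stands in for `SynapseError`):

```python
class UpstreamError(Exception):
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code

def try_destination_list(description, destinations, callback, own_name="local"):
    """Try callback against each destination in turn, returning the first
    success; raise a 502 if every remote fails."""
    for destination in destinations:
        if destination == own_name:
            continue  # never federate with ourselves
        try:
            return callback(destination)
        except Exception:
            continue  # real code logs and distinguishes error types here
    # 502 Bad Gateway: every remote we tried failed
    raise UpstreamError(502, "Failed to %s via any server" % (description,))
```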
542541 def make_membership_event(
543542 self, destinations, room_id, user_id, membership, content, params
614613 event_dict=pdu_dict,
615614 )
616615
617 defer.returnValue((destination, ev, event_format))
616 return (destination, ev, event_format)
618617
619618 return self._try_destination_list(
620619 "make_" + membership, destinations, send_request
727726
728727 check_authchain_validity(signed_auth)
729728
730 defer.returnValue(
731 {
732 "state": signed_state,
733 "auth_chain": signed_auth,
734 "origin": destination,
735 }
736 )
729 return {
730 "state": signed_state,
731 "auth_chain": signed_auth,
732 "origin": destination,
733 }
737734
738735 return self._try_destination_list("send_join", destinations, send_request)
739736
757754
758755 # FIXME: We should handle signature failures more gracefully.
759756
760 defer.returnValue(pdu)
757 return pdu
761758
762759 @defer.inlineCallbacks
763760 def _do_send_invite(self, destination, pdu, room_version):
785782 "invite_room_state": pdu.unsigned.get("invite_room_state", []),
786783 },
787784 )
788 defer.returnValue(content)
785 return content
789786 except HttpResponseException as e:
790787 if e.code in [400, 404]:
791788 err = e.to_synapse_error()
820817 event_id=pdu.event_id,
821818 content=pdu.get_pdu_json(time_now),
822819 )
823 defer.returnValue(content)
820 return content
824821
825822 def send_leave(self, destinations, pdu):
826823 """Sends a leave event to one of a list of homeservers.
855852 )
856853
857854 logger.debug("Got content: %s", content)
858 defer.returnValue(None)
855 return None
859856
860857 return self._try_destination_list("send_leave", destinations, send_request)
861858
916913 "missing": content.get("missing", []),
917914 }
918915
919 defer.returnValue(ret)
916 return ret
920917
921918 @defer.inlineCallbacks
922919 def get_missing_events(
973970 # get_missing_events
974971 signed_events = []
975972
976 defer.returnValue(signed_events)
973 return signed_events
977974
978975 @defer.inlineCallbacks
979976 def forward_third_party_invite(self, destinations, room_id, event_dict):
985982 yield self.transport_layer.exchange_third_party_invite(
986983 destination=destination, room_id=room_id, event_dict=event_dict
987984 )
988 defer.returnValue(None)
985 return None
989986 except CodeMessageException:
990987 raise
991988 except Exception as e:
994991 )
995992
996993 raise RuntimeError("Failed to send to any server.")
994
995 @defer.inlineCallbacks
996 def get_room_complexity(self, destination, room_id):
997 """
998 Fetch the complexity of a remote room from another server.
999
1000 Args:
1001 destination (str): The remote server
1002 room_id (str): The room ID to ask about.
1003
1004 Returns:
1005 Deferred[dict] or Deferred[None]: Dict contains the complexity
1006 metric versions, while None means we could not fetch the complexity.
1007 """
1008 try:
1009 complexity = yield self.transport_layer.get_room_complexity(
1010 destination=destination, room_id=room_id
1011 )
1012 defer.returnValue(complexity)
1013 except CodeMessageException as e:
1014 # We didn't manage to get it -- probably a 404. We are okay if other
1015 # servers don't give it to us.
1016 logger.debug(
1017 "Failed to fetch room complexity via %s for %s, got a %d",
1018 destination,
1019 room_id,
1020 e.code,
1021 )
1022 except Exception:
1023 logger.exception(
1024 "Failed to fetch room complexity via %s for %s", destination, room_id
1025 )
1026
1027 # If we don't manage to find it, return None. It's not an error if a
1028 # server doesn't give it to us.
1029 defer.returnValue(None)
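As the docstring says, room complexity is an optional metric: any failure (404 from a server that doesn't implement it, or a transport error) is logged and collapsed into `None` rather than raised. A sketch of that "tolerant fetch" shape, with `fetch_json` as an illustrative stand-in for the transport-layer call:

```python
import logging

logger = logging.getLogger(__name__)

def get_room_complexity(fetch_json, destination, room_id):
    """Return the complexity dict, or None if the remote can't provide it."""
    try:
        return fetch_json(destination, "/rooms/%s/complexity" % (room_id,))
    except Exception:
        # Not an error if a server doesn't expose the metric.
        logger.debug("No complexity from %s for %s", destination, room_id)
        return None
```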
9898
9999 res = self._transaction_from_pdus(pdus).get_dict()
100100
101 defer.returnValue((200, res))
101 return (200, res)
102102
103103 @defer.inlineCallbacks
104104 @log_function
125125 origin, transaction, request_time
126126 )
127127
128 defer.returnValue(result)
128 return result
129129
130130 @defer.inlineCallbacks
131131 def _handle_incoming_transaction(self, origin, transaction, request_time):
146146 "[%s] We've already responded to this request",
147147 transaction.transaction_id,
148148 )
149 defer.returnValue(response)
150 return
149 return response
151150
152151 logger.debug("[%s] Transaction is new", transaction.transaction_id)
153152
162161 yield self.transaction_actions.set_response(
163162 origin, transaction, 400, response
164163 )
165 defer.returnValue((400, response))
164 return (400, response)
166165
167166 received_pdus_counter.inc(len(transaction.pdus))
168167
264263 logger.debug("Returning: %s", str(response))
265264
266265 yield self.transaction_actions.set_response(origin, transaction, 200, response)
267 defer.returnValue((200, response))
266 return (200, response)
268267
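Note the bug fixed in `_handle_incoming_transaction` above: when a transaction is replayed, the old code logged "We've already responded" and then fell through with a bare `return` (i.e. `None`) instead of the cached response. A minimal synchronous sketch of the dedup contract (class and method names are illustrative):

```python
class TransactionHandler:
    """Replaying a (origin, txn_id) pair must return the cached response
    and must not reprocess the transaction."""

    def __init__(self):
        self._responses = {}  # (origin, txn_id) -> (code, body)

    def handle(self, origin, txn_id, process):
        cached = self._responses.get((origin, txn_id))
        if cached is not None:
            return cached  # fixed: previously returned None here
        response = process()
        self._responses[(origin, txn_id)] = response
        return response
```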
269268 @defer.inlineCallbacks
270269 def received_edu(self, origin, edu_type, content):
297296 event_id,
298297 )
299298
300 defer.returnValue((200, resp))
299 return (200, resp)
301300
302301 @defer.inlineCallbacks
303302 def on_state_ids_request(self, origin, room_id, event_id):
314313 state_ids = yield self.handler.get_state_ids_for_pdu(room_id, event_id)
315314 auth_chain_ids = yield self.store.get_auth_chain_ids(state_ids)
316315
317 defer.returnValue(
318 (200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids})
319 )
316 return (200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids})
320317
321318 @defer.inlineCallbacks
322319 def _on_context_state_request_compute(self, room_id, event_id):
335332 )
336333 )
337334
338 defer.returnValue(
339 {
340 "pdus": [pdu.get_pdu_json() for pdu in pdus],
341 "auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
342 }
343 )
335 return {
336 "pdus": [pdu.get_pdu_json() for pdu in pdus],
337 "auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
338 }
344339
345340 @defer.inlineCallbacks
346341 @log_function
348343 pdu = yield self.handler.get_persisted_pdu(origin, event_id)
349344
350345 if pdu:
351 defer.returnValue((200, self._transaction_from_pdus([pdu]).get_dict()))
346 return (200, self._transaction_from_pdus([pdu]).get_dict())
352347 else:
353 defer.returnValue((404, ""))
348 return (404, "")
354349
355350 @defer.inlineCallbacks
356351 def on_query_request(self, query_type, args):
357352 received_queries_counter.labels(query_type).inc()
358353 resp = yield self.registry.on_query(query_type, args)
359 defer.returnValue((200, resp))
354 return (200, resp)
360355
361356 @defer.inlineCallbacks
362357 def on_make_join_request(self, origin, room_id, user_id, supported_versions):
370365
371366 pdu = yield self.handler.on_make_join_request(origin, room_id, user_id)
372367 time_now = self._clock.time_msec()
373 defer.returnValue(
374 {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
375 )
368 return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
376369
377370 @defer.inlineCallbacks
378371 def on_invite_request(self, origin, content, room_version):
390383 yield self.check_server_matches_acl(origin_host, pdu.room_id)
391384 ret_pdu = yield self.handler.on_invite_request(origin, pdu)
392385 time_now = self._clock.time_msec()
393 defer.returnValue({"event": ret_pdu.get_pdu_json(time_now)})
386 return {"event": ret_pdu.get_pdu_json(time_now)}
394387
395388 @defer.inlineCallbacks
396389 def on_send_join_request(self, origin, content, room_id):
406399 logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
407400 res_pdus = yield self.handler.on_send_join_request(origin, pdu)
408401 time_now = self._clock.time_msec()
409 defer.returnValue(
410 (
411 200,
412 {
413 "state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
414 "auth_chain": [
415 p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]
416 ],
417 },
418 )
402 return (
403 200,
404 {
405 "state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
406 "auth_chain": [
407 p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]
408 ],
409 },
419410 )
420411
421412 @defer.inlineCallbacks
427418 room_version = yield self.store.get_room_version(room_id)
428419
429420 time_now = self._clock.time_msec()
430 defer.returnValue(
431 {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
432 )
421 return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
433422
434423 @defer.inlineCallbacks
435424 def on_send_leave_request(self, origin, content, room_id):
444433
445434 logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
446435 yield self.handler.on_send_leave_request(origin, pdu)
447 defer.returnValue((200, {}))
436 return (200, {})
448437
449438 @defer.inlineCallbacks
450439 def on_event_auth(self, origin, room_id, event_id):
455444 time_now = self._clock.time_msec()
456445 auth_pdus = yield self.handler.on_event_auth(event_id)
457446 res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
458 defer.returnValue((200, res))
447 return (200, res)
459448
460449 @defer.inlineCallbacks
461450 def on_query_auth_request(self, origin, content, room_id, event_id):
508497 "missing": ret.get("missing", []),
509498 }
510499
511 defer.returnValue((200, send_content))
500 return (200, send_content)
512501
513502 @log_function
514503 def on_query_client_keys(self, origin, content):
547536 ),
548537 )
549538
550 defer.returnValue({"one_time_keys": json_result})
539 return {"one_time_keys": json_result}
551540
552541 @defer.inlineCallbacks
553542 @log_function
579568
580569 time_now = self._clock.time_msec()
581570
582 defer.returnValue(
583 {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
584 )
571 return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
585572
586573 @log_function
587574 def on_openid_userinfo(self, token):
675662 ret = yield self.handler.exchange_third_party_invite(
676663 sender_user_id, target_user_id, room_id, signed
677664 )
678 defer.returnValue(ret)
665 return ret
679666
680667 @defer.inlineCallbacks
681668 def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
682669 ret = yield self.handler.on_exchange_third_party_invite_request(
683670 origin, room_id, event_dict
684671 )
685 defer.returnValue(ret)
672 return ret
686673
687674 @defer.inlineCallbacks
688675 def check_server_matches_acl(self, server_name, room_id):
373373
374374 assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
375375
376 defer.returnValue((edus, now_stream_id))
376 return (edus, now_stream_id)
377377
378378 @defer.inlineCallbacks
379379 def _get_to_device_message_edus(self, limit):
392392 for content in contents
393393 ]
394394
395 defer.returnValue((edus, stream_id))
395 return (edus, stream_id)
132132 )
133133 success = False
134134
135 defer.returnValue(success)
135 return success
2020 from twisted.internet import defer
2121
2222 from synapse.api.constants import Membership
23 from synapse.api.urls import FEDERATION_V1_PREFIX, FEDERATION_V2_PREFIX
23 from synapse.api.urls import (
24 FEDERATION_UNSTABLE_PREFIX,
25 FEDERATION_V1_PREFIX,
26 FEDERATION_V2_PREFIX,
27 )
2428 from synapse.logging.utils import log_function
2529
2630 logger = logging.getLogger(__name__)
182186 try_trailing_slash_on_400=True,
183187 )
184188
185 defer.returnValue(response)
189 return response
186190
187191 @defer.inlineCallbacks
188192 @log_function
200204 ignore_backoff=ignore_backoff,
201205 )
202206
203 defer.returnValue(content)
207 return content
204208
205209 @defer.inlineCallbacks
206210 @log_function
258262 ignore_backoff=ignore_backoff,
259263 )
260264
261 defer.returnValue(content)
265 return content
262266
263267 @defer.inlineCallbacks
264268 @log_function
269273 destination=destination, path=path, data=content
270274 )
271275
272 defer.returnValue(response)
276 return response
273277
274278 @defer.inlineCallbacks
275279 @log_function
287291 ignore_backoff=True,
288292 )
289293
290 defer.returnValue(response)
294 return response
291295
292296 @defer.inlineCallbacks
293297 @log_function
298302 destination=destination, path=path, data=content, ignore_backoff=True
299303 )
300304
301 defer.returnValue(response)
305 return response
302306
303307 @defer.inlineCallbacks
304308 @log_function
309313 destination=destination, path=path, data=content, ignore_backoff=True
310314 )
311315
312 defer.returnValue(response)
316 return response
313317
314318 @defer.inlineCallbacks
315319 @log_function
338342 destination=remote_server, path=path, args=args, ignore_backoff=True
339343 )
340344
341 defer.returnValue(response)
345 return response
342346
343347 @defer.inlineCallbacks
344348 @log_function
349353 destination=destination, path=path, data=event_dict
350354 )
351355
352 defer.returnValue(response)
356 return response
353357
354358 @defer.inlineCallbacks
355359 @log_function
358362
359363 content = yield self.client.get_json(destination=destination, path=path)
360364
361 defer.returnValue(content)
365 return content
362366
363367 @defer.inlineCallbacks
364368 @log_function
369373 destination=destination, path=path, data=content
370374 )
371375
372 defer.returnValue(content)
376 return content
373377
374378 @defer.inlineCallbacks
375379 @log_function
401405 content = yield self.client.post_json(
402406 destination=destination, path=path, data=query_content, timeout=timeout
403407 )
404 defer.returnValue(content)
408 return content
405409
406410 @defer.inlineCallbacks
407411 @log_function
425429 content = yield self.client.get_json(
426430 destination=destination, path=path, timeout=timeout
427431 )
428 defer.returnValue(content)
432 return content
429433
430434 @defer.inlineCallbacks
431435 @log_function
459463 content = yield self.client.post_json(
460464 destination=destination, path=path, data=query_content, timeout=timeout
461465 )
462 defer.returnValue(content)
466 return content
463467
464468 @defer.inlineCallbacks
465469 @log_function
487491 timeout=timeout,
488492 )
489493
490 defer.returnValue(content)
494 return content
491495
492496 @log_function
493497 def get_group_profile(self, destination, group_id, requester_user_id):
934938 destination=destination, path=path, data=content, ignore_backoff=True
935939 )
936940
941 def get_room_complexity(self, destination, room_id):
942 """
943 Args:
944 destination (str): The remote server
945 room_id (str): The room ID to ask about.
946 """
947 path = _create_path(FEDERATION_UNSTABLE_PREFIX, "/rooms/%s/complexity", room_id)
948
949 return self.client.get_json(destination=destination, path=path)
950
951
952 def _create_path(federation_prefix, path, *args):
953 """
954 Ensures that all args are url encoded.
955 """
956 return federation_prefix + path % tuple(urllib.parse.quote(arg, "") for arg in args)
957
937958
938959 def _create_v1_path(path, *args):
939960 """Creates a path against V1 federation API from the path template and
950971 Returns:
951972 str
952973 """
953 return FEDERATION_V1_PREFIX + path % tuple(
954 urllib.parse.quote(arg, "") for arg in args
955 )
974 return _create_path(FEDERATION_V1_PREFIX, path, *args)
956975
957976
958977 def _create_v2_path(path, *args):
970989 Returns:
971990 str
972991 """
973 return FEDERATION_V2_PREFIX + path % tuple(
974 urllib.parse.quote(arg, "") for arg in args
975 )
992 return _create_path(FEDERATION_V2_PREFIX, path, *args)
1717 import functools
1818 import logging
1919 import re
20
21 from twisted.internet.defer import maybeDeferred
2022
2123 import synapse
2224 import synapse.logging.opentracing as opentracing
744746 else:
745747 network_tuple = ThirdPartyInstanceID(None, None)
746748
747 data = await self.handler.get_local_public_room_list(
748 limit, since_token, network_tuple=network_tuple, from_federation=True
749 data = await maybeDeferred(
750 self.handler.get_local_public_room_list,
751 limit,
752 since_token,
753 network_tuple=network_tuple,
754 from_federation=True,
749755 )
750756 return 200, data
751757
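Wrapping the handler call in `maybeDeferred` lets this `async` method `await` the result whether `get_local_public_room_list` returns a plain value, a Deferred, or raises synchronously. A rough asyncio analogue of that adapter (`maybe_async` is an illustrative name, not a Twisted or Synapse API):

```python
import asyncio
import inspect

async def maybe_async(func, *args, **kwargs):
    """Call func and await the result only if it is awaitable, so callers
    don't care whether the handler is sync or async — the same role
    maybeDeferred plays for Deferred-returning handlers."""
    result = func(*args, **kwargs)
    if inspect.isawaitable(result):
        result = await result
    return result
```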
156156
157157 yield self.store.update_remote_attestion(group_id, user_id, attestation)
158158
159 defer.returnValue({})
159 return {}
160160
161161 def _start_renew_attestations(self):
162162 return run_as_background_process("renew_attestations", self._renew_attestations)
8484 if not is_admin:
8585 raise SynapseError(403, "User is not admin in group")
8686
87 defer.returnValue(group)
87 return group
8888
8989 @defer.inlineCallbacks
9090 def get_group_summary(self, group_id, requester_user_id):
150150 group_id, requester_user_id
151151 )
152152
153 defer.returnValue(
154 {
155 "profile": profile,
156 "users_section": {
157 "users": users,
158 "roles": roles,
159 "total_user_count_estimate": 0, # TODO
160 },
161 "rooms_section": {
162 "rooms": rooms,
163 "categories": categories,
164 "total_room_count_estimate": 0, # TODO
165 },
166 "user": membership_info,
167 }
168 )
153 return {
154 "profile": profile,
155 "users_section": {
156 "users": users,
157 "roles": roles,
158 "total_user_count_estimate": 0, # TODO
159 },
160 "rooms_section": {
161 "rooms": rooms,
162 "categories": categories,
163 "total_room_count_estimate": 0, # TODO
164 },
165 "user": membership_info,
166 }
169167
170168 @defer.inlineCallbacks
171169 def update_group_summary_room(
191189 is_public=is_public,
192190 )
193191
194 defer.returnValue({})
192 return {}
195193
196194 @defer.inlineCallbacks
197195 def delete_group_summary_room(
207205 group_id=group_id, room_id=room_id, category_id=category_id
208206 )
209207
210 defer.returnValue({})
208 return {}
211209
212210 @defer.inlineCallbacks
213211 def set_group_join_policy(self, group_id, requester_user_id, content):
227225
228226 yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
229227
230 defer.returnValue({})
228 return {}
231229
232230 @defer.inlineCallbacks
233231 def get_group_categories(self, group_id, requester_user_id):
236234 yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
237235
238236 categories = yield self.store.get_group_categories(group_id=group_id)
239 defer.returnValue({"categories": categories})
237 return {"categories": categories}
240238
241239 @defer.inlineCallbacks
242240 def get_group_category(self, group_id, requester_user_id, category_id):
248246 group_id=group_id, category_id=category_id
249247 )
250248
251 defer.returnValue(res)
249 return res
252250
253251 @defer.inlineCallbacks
254252 def update_group_category(self, group_id, requester_user_id, category_id, content):
268266 profile=profile,
269267 )
270268
271 defer.returnValue({})
269 return {}
272270
273271 @defer.inlineCallbacks
274272 def delete_group_category(self, group_id, requester_user_id, category_id):
282280 group_id=group_id, category_id=category_id
283281 )
284282
285 defer.returnValue({})
283 return {}
286284
287285 @defer.inlineCallbacks
288286 def get_group_roles(self, group_id, requester_user_id):
291289 yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
292290
293291 roles = yield self.store.get_group_roles(group_id=group_id)
294 defer.returnValue({"roles": roles})
292 return {"roles": roles}
295293
296294 @defer.inlineCallbacks
297295 def get_group_role(self, group_id, requester_user_id, role_id):
300298 yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
301299
302300 res = yield self.store.get_group_role(group_id=group_id, role_id=role_id)
303 defer.returnValue(res)
301 return res
304302
305303 @defer.inlineCallbacks
306304 def update_group_role(self, group_id, requester_user_id, role_id, content):
318316 group_id=group_id, role_id=role_id, is_public=is_public, profile=profile
319317 )
320318
321 defer.returnValue({})
319 return {}
322320
323321 @defer.inlineCallbacks
324322 def delete_group_role(self, group_id, requester_user_id, role_id):
330328
331329 yield self.store.remove_group_role(group_id=group_id, role_id=role_id)
332330
333 defer.returnValue({})
331 return {}
334332
335333 @defer.inlineCallbacks
336334 def update_group_summary_user(
354352 is_public=is_public,
355353 )
356354
357 defer.returnValue({})
355 return {}
358356
359357 @defer.inlineCallbacks
360358 def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id):
368366 group_id=group_id, user_id=user_id, role_id=role_id
369367 )
370368
371 defer.returnValue({})
369 return {}
372370
373371 @defer.inlineCallbacks
374372 def get_group_profile(self, group_id, requester_user_id):
390388 group_description = {key: group[key] for key in cols}
391389 group_description["is_openly_joinable"] = group["join_policy"] == "open"
392390
393 defer.returnValue(group_description)
391 return group_description
394392 else:
395393 raise SynapseError(404, "Unknown group")
396394
460458
461459 # TODO: If admin add lists of users whose attestations have timed out
462460
463 defer.returnValue(
464 {"chunk": chunk, "total_user_count_estimate": len(user_results)}
465 )
461 return {"chunk": chunk, "total_user_count_estimate": len(user_results)}
466462
467463 @defer.inlineCallbacks
468464 def get_invited_users_in_group(self, group_id, requester_user_id):
493489 logger.warn("Error getting profile for %s: %s", user_id, e)
494490 user_profiles.append(user_profile)
495491
496 defer.returnValue(
497 {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}
498 )
492 return {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}
499493
500494 @defer.inlineCallbacks
501495 def get_rooms_in_group(self, group_id, requester_user_id):
532526
533527 chunk.sort(key=lambda e: -e["num_joined_members"])
534528
535 defer.returnValue(
536 {"chunk": chunk, "total_room_count_estimate": len(room_results)}
537 )
529 return {"chunk": chunk, "total_room_count_estimate": len(room_results)}
538530
539531 @defer.inlineCallbacks
540532 def add_room_to_group(self, group_id, requester_user_id, room_id, content):
550542
551543 yield self.store.add_room_to_group(group_id, room_id, is_public=is_public)
552544
553 defer.returnValue({})
545 return {}
554546
555547 @defer.inlineCallbacks
556548 def update_room_in_group(
573565 else:
574566 raise SynapseError(400, "Unknown config option")
575567
576 defer.returnValue({})
568 return {}
577569
578570 @defer.inlineCallbacks
579571 def remove_room_from_group(self, group_id, requester_user_id, room_id):
585577
586578 yield self.store.remove_room_from_group(group_id, room_id)
587579
588 defer.returnValue({})
580 return {}
589581
590582 @defer.inlineCallbacks
591583 def invite_to_group(self, group_id, user_id, requester_user_id, content):
643635 )
644636 elif res["state"] == "invite":
645637 yield self.store.add_group_invite(group_id, user_id)
646 defer.returnValue({"state": "invite"})
638 return {"state": "invite"}
647639 elif res["state"] == "reject":
648 defer.returnValue({"state": "reject"})
640 return {"state": "reject"}
649641 else:
650642 raise SynapseError(502, "Unknown state returned by HS")
651643
678670 remote_attestation=remote_attestation,
679671 )
680672
681 defer.returnValue(local_attestation)
673 return local_attestation
682674
683675 @defer.inlineCallbacks
684676 def accept_invite(self, group_id, requester_user_id, content):
698690
699691 local_attestation = yield self._add_user(group_id, requester_user_id, content)
700692
701 defer.returnValue({"state": "join", "attestation": local_attestation})
693 return {"state": "join", "attestation": local_attestation}
702694
703695 @defer.inlineCallbacks
704696 def join_group(self, group_id, requester_user_id, content):
715707
716708 local_attestation = yield self._add_user(group_id, requester_user_id, content)
717709
718 defer.returnValue({"state": "join", "attestation": local_attestation})
710 return {"state": "join", "attestation": local_attestation}
719711
720712 @defer.inlineCallbacks
721713 def knock(self, group_id, requester_user_id, content):
768760 if not self.hs.is_mine_id(user_id):
769761 yield self.store.maybe_delete_remote_profile_cache(user_id)
770762
771 defer.returnValue({})
763 return {}
772764
773765 @defer.inlineCallbacks
774766 def create_group(self, group_id, requester_user_id, content):
844836 avatar_url=user_profile.get("avatar_url"),
845837 )
846838
847 defer.returnValue({"group_id": group_id})
839 return {"group_id": group_id}
848840
849841 @defer.inlineCallbacks
850842 def delete_group(self, group_id, requester_user_id):
5050 {"type": account_data_type, "content": content, "room_id": room_id}
5151 )
5252
53 defer.returnValue((results, current_stream_id))
53 return (results, current_stream_id)
5454
5555 @defer.inlineCallbacks
5656 def get_pagination_rows(self, user, config, key):
57 defer.returnValue(([], config.to_id))
57 return ([], config.to_id)
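The recurring change through these hunks replaces Twisted's `defer.returnValue(value)` with a bare `return value` inside `@defer.inlineCallbacks` generators. Since Python 3.3, `return value` in a generator raises `StopIteration(value)`, which is exactly the channel `inlineCallbacks` reads its result from; `defer.returnValue` existed only because Python 2 generators could not return a value. A stdlib-only sketch of the mechanism (the `drive` helper is illustrative, not part of Twisted or Synapse):

```python
def modern_style():
    # Post-diff Synapse style: since Python 3.3, "return value" inside a
    # generator raises StopIteration(value). Twisted's @inlineCallbacks
    # unpacks that value as the Deferred's result, so defer.returnValue()
    # is no longer needed.
    yield "some deferred step"
    return {"state": "join"}

def drive(gen):
    # Illustrative stand-in for the inlineCallbacks trampoline: run the
    # generator to exhaustion and capture the StopIteration value.
    try:
        while True:
            next(gen)
    except StopIteration as stop:
        return stop.value
```

Running `drive(modern_style())` yields the dict that `defer.returnValue` previously had to smuggle out via a special exception.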
192192 if threepid["medium"] == "email":
193193 addresses.append(threepid["address"])
194194
195 defer.returnValue(addresses)
195 return addresses
196196
197197 @defer.inlineCallbacks
198198 def _get_renewal_token(self, user_id):
213213 try:
214214 renewal_token = stringutils.random_string(32)
215215 yield self.store.set_renewal_token_for_user(user_id, renewal_token)
216 defer.returnValue(renewal_token)
216 return renewal_token
217217 except StoreError:
218218 attempts += 1
219219 raise StoreError(500, "Couldn't generate a unique string as refresh string.")
225225
226226 Args:
227227 renewal_token (str): Token sent with the renewal request.
228 """
229 user_id = yield self.store.get_user_from_renewal_token(renewal_token)
228 Returns:
229 bool: Whether the provided token is valid.
230 """
231 try:
232 user_id = yield self.store.get_user_from_renewal_token(renewal_token)
233 except StoreError:
234 defer.returnValue(False)
235
230236 logger.debug("Renewing an account for user %s", user_id)
231237 yield self.renew_account_for_user(user_id)
238
239 defer.returnValue(True)
232240
233241 @defer.inlineCallbacks
234242 def renew_account_for_user(self, user_id, expiration_ts=None, email_sent=False):
253261 user_id=user_id, expiration_ts=expiration_ts, email_sent=email_sent
254262 )
255263
256 defer.returnValue(expiration_ts)
264 return expiration_ts
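The account-validity hunk above changes the control flow so that an unknown renewal token makes `renew_account` report `False` instead of letting the `StoreError` escape. A minimal sketch of that shape, with `lookup` and `renew` as illustrative stand-ins for `store.get_user_from_renewal_token` and `renew_account_for_user`:

```python
class StoreError(Exception):
    """Stand-in for synapse.api.errors.StoreError."""

def renew_account(renewal_token, lookup, renew):
    # An invalid token now yields False rather than an unhandled
    # StoreError; a valid one renews the account and yields True.
    try:
        user_id = lookup(renewal_token)
    except StoreError:
        return False
    renew(user_id)
    return True
```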
9999 logger.exception("Failed saving!")
100100 raise
101101
102 defer.returnValue(True)
102 return True
4848 "devices": {"": {"sessions": [{"connections": connections}]}},
4949 }
5050
51 defer.returnValue(ret)
51 return ret
5252
5353 @defer.inlineCallbacks
5454 def get_users(self):
6060 """
6161 ret = yield self.store.get_users()
6262
63 defer.returnValue(ret)
63 return ret
6464
6565 @defer.inlineCallbacks
6666 def get_users_paginate(self, order, start, limit):
7777 """
7878 ret = yield self.store.get_users_paginate(order, start, limit)
7979
80 defer.returnValue(ret)
80 return ret
8181
8282 @defer.inlineCallbacks
8383 def search_users(self, term):
9191 """
9292 ret = yield self.store.search_users(term)
9393
94 defer.returnValue(ret)
94 return ret
9595
9696 @defer.inlineCallbacks
9797 def export_user_data(self, user_id, writer):
224224 state = yield self.store.get_state_for_event(event_id)
225225 writer.write_state(room_id, event_id, state)
226226
227 defer.returnValue(writer.finished())
227 return writer.finished()
228228
229229
230230 class ExfiltrationWriter(object):
166166 for user_service in user_query_services:
167167 is_known_user = yield self.appservice_api.query_user(user_service, user_id)
168168 if is_known_user:
169 defer.returnValue(True)
170 defer.returnValue(False)
169 return True
170 return False
171171
172172 @defer.inlineCallbacks
173173 def query_room_alias_exists(self, room_alias):
191191 if is_known_alias:
192192 # the alias exists now so don't query more ASes.
193193 result = yield self.store.get_association_from_room_alias(room_alias)
194 defer.returnValue(result)
194 return result
195195
196196 @defer.inlineCallbacks
197197 def query_3pe(self, kind, protocol, fields):
214214 if success:
215215 ret.extend(result)
216216
217 defer.returnValue(ret)
217 return ret
218218
219219 @defer.inlineCallbacks
220220 def get_3pe_protocols(self, only_protocol=None):
253253 for p in protocols.keys():
254254 protocols[p] = _merge_instances(protocols[p])
255255
256 defer.returnValue(protocols)
256 return protocols
257257
258258 @defer.inlineCallbacks
259259 def _get_services_for_event(self, event):
275275 if (yield s.is_interested(event, self.store)):
276276 interested_list.append(s)
277277
278 defer.returnValue(interested_list)
278 return interested_list
279279
280280 def _get_services_for_user(self, user_id):
281281 services = self.store.get_app_services()
292292 if not self.is_mine_id(user_id):
293293 # we don't know if they are unknown or not since it isn't one of our
294294 # users. We can't poke ASes.
295 defer.returnValue(False)
295 return False
296296 return
297297
298298 user_info = yield self.store.get_user_by_id(user_id)
299299 if user_info:
300 defer.returnValue(False)
300 return False
301301 return
302302
303303 # user not found; could be the AS though, so check.
304304 services = self.store.get_app_services()
305305 service_list = [s for s in services if s.sender == user_id]
306 defer.returnValue(len(service_list) == 0)
306 return len(service_list) == 0
307307
308308 @defer.inlineCallbacks
309309 def _check_user_exists(self, user_id):
310310 unknown_user = yield self._is_unknown_user(user_id)
311311 if unknown_user:
312312 exists = yield self.query_user_exists(user_id)
313 defer.returnValue(exists)
314 defer.returnValue(True)
313 return exists
314 return True
154154 if user_id != requester.user.to_string():
155155 raise AuthError(403, "Invalid auth")
156156
157 defer.returnValue(params)
157 return params
158158
159159 @defer.inlineCallbacks
160160 def check_auth(self, flows, clientdict, clientip, password_servlet=False):
279279 creds,
280280 list(clientdict),
281281 )
282 defer.returnValue((creds, clientdict, session["id"]))
282 return (creds, clientdict, session["id"])
283283
284284 ret = self._auth_dict_for_flows(flows, session)
285285 ret["completed"] = list(creds)
306306 if result:
307307 creds[stagetype] = result
308308 self._save_session(sess)
309 defer.returnValue(True)
310 defer.returnValue(False)
309 return True
310 return False
311311
312312 def get_session_id(self, clientdict):
313313 """
378378 res = yield checker(
379379 authdict, clientip=clientip, password_servlet=password_servlet
380380 )
381 defer.returnValue(res)
381 return res
382382
383383 # build a v1-login-style dict out of the authdict and fall back to the
384384 # v1 code
388388 raise SynapseError(400, "", Codes.MISSING_PARAM)
389389
390390 (canonical_id, callback) = yield self.validate_login(user_id, authdict)
391 defer.returnValue(canonical_id)
391 return canonical_id
392392
393393 @defer.inlineCallbacks
394394 def _check_recaptcha(self, authdict, clientip, **kwargs):
432432 resp_body.get("hostname"),
433433 )
434434 if resp_body["success"]:
435 defer.returnValue(True)
435 return True
436436 raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)
437437
438438 def _check_email_identity(self, authdict, **kwargs):
501501
502502 threepid["threepid_creds"] = authdict["threepid_creds"]
503503
504 defer.returnValue(threepid)
504 return threepid
505505
506506 def _get_params_recaptcha(self):
507507 return {"public_key": self.hs.config.recaptcha_public_key}
605605 yield self.store.delete_access_token(access_token)
606606 raise StoreError(400, "Login raced against device deletion")
607607
608 defer.returnValue(access_token)
608 return access_token
609609
610610 @defer.inlineCallbacks
611611 def check_user_exists(self, user_id):
628628 self.ratelimit_login_per_account(user_id)
629629 res = yield self._find_user_id_and_pwd_hash(user_id)
630630 if res is not None:
631 defer.returnValue(res[0])
632 defer.returnValue(None)
631 return res[0]
632 return None
633633
634634 @defer.inlineCallbacks
635635 def _find_user_id_and_pwd_hash(self, user_id):
660660 user_id,
661661 user_infos.keys(),
662662 )
663 defer.returnValue(result)
663 return result
664664
665665 def get_supported_login_types(self):
666666 """Get a the login types supported for the /login API
721721 known_login_type = True
722722 is_valid = yield provider.check_password(qualified_user_id, password)
723723 if is_valid:
724 defer.returnValue((qualified_user_id, None))
724 return (qualified_user_id, None)
725725
726726 if not hasattr(provider, "get_supported_login_types") or not hasattr(
727727 provider, "check_auth"
755755 if result:
756756 if isinstance(result, str):
757757 result = (result, None)
758 defer.returnValue(result)
758 return result
759759
760760 if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
761761 known_login_type = True
765765 )
766766
767767 if canonical_user_id:
768 defer.returnValue((canonical_user_id, None))
768 return (canonical_user_id, None)
769769
770770 if not known_login_type:
771771 raise SynapseError(400, "Unknown login type %s" % login_type)
813813 if isinstance(result, str):
814814 # If it's a str, set callback function to None
815815 result = (result, None)
816 defer.returnValue(result)
817
818 defer.returnValue((None, None))
816 return result
817
818 return (None, None)
819819
820820 @defer.inlineCallbacks
821821 def _check_local_password(self, user_id, password):
837837 """
838838 lookupres = yield self._find_user_id_and_pwd_hash(user_id)
839839 if not lookupres:
840 defer.returnValue(None)
840 return None
841841 (user_id, password_hash) = lookupres
842842
843843 # If the password hash is None, the account has likely been deactivated
849849 result = yield self.validate_hash(password, password_hash)
850850 if not result:
851851 logger.warn("Failed password login for user %s", user_id)
852 defer.returnValue(None)
853 defer.returnValue(user_id)
852 return None
853 return user_id
854854
855855 @defer.inlineCallbacks
856856 def validate_short_term_login_token_and_get_user_id(self, login_token):
859859 try:
860860 macaroon = pymacaroons.Macaroon.deserialize(login_token)
861861 user_id = auth_api.get_user_id_from_macaroon(macaroon)
862 auth_api.validate_macaroon(macaroon, "login", True, user_id)
862 auth_api.validate_macaroon(macaroon, "login", user_id)
863863 except Exception:
864864 raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
865865 self.ratelimit_login_per_account(user_id)
866866 yield self.auth.check_auth_blocking(user_id)
867 defer.returnValue(user_id)
867 return user_id
868868
869869 @defer.inlineCallbacks
870870 def delete_access_token(self, access_token):
975975 )
976976
977977 yield self.store.user_delete_threepid(user_id, medium, address)
978 defer.returnValue(result)
978 return result
979979
980980 def _save_session(self, session):
981981 # TODO: Persistent storage
124124 # Mark the user as deactivated.
125125 yield self.store.set_user_deactivated_status(user_id, True)
126126
127 defer.returnValue(identity_server_supports_unbinding)
127 return identity_server_supports_unbinding
128128
129129 def _start_user_parting(self):
130130 """
6363 for device in devices:
6464 _update_device_from_client_ips(device, ips)
6565
66 defer.returnValue(devices)
66 return devices
6767
6868 @defer.inlineCallbacks
6969 def get_device(self, user_id, device_id):
8484 raise errors.NotFoundError
8585 ips = yield self.store.get_last_client_ip_by_device(user_id, device_id)
8686 _update_device_from_client_ips(device, ips)
87 defer.returnValue(device)
87 return device
8888
8989 @measure_func("device.get_user_ids_changed")
9090 @defer.inlineCallbacks
199199 possibly_joined = []
200200 possibly_left = []
201201
202 defer.returnValue(
203 {"changed": list(possibly_joined), "left": list(possibly_left)}
204 )
202 return {"changed": list(possibly_joined), "left": list(possibly_left)}
205203
206204
207205 class DeviceHandler(DeviceWorkerHandler):
210208
211209 self.federation_sender = hs.get_federation_sender()
212210
213 self._edu_updater = DeviceListEduUpdater(hs, self)
211 self.device_list_updater = DeviceListUpdater(hs, self)
214212
215213 federation_registry = hs.get_federation_registry()
216214
217215 federation_registry.register_edu_handler(
218 "m.device_list_update", self._edu_updater.incoming_device_list_update
216 "m.device_list_update", self.device_list_updater.incoming_device_list_update
219217 )
220218 federation_registry.register_query_handler(
221219 "user_devices", self.on_federation_query_user_devices
249247 )
250248 if new_device:
251249 yield self.notify_device_update(user_id, [device_id])
252 defer.returnValue(device_id)
250 return device_id
253251
254252 # if the device id is not specified, we'll autogen one, but loop a few
255253 # times in case of a clash.
263261 )
264262 if new_device:
265263 yield self.notify_device_update(user_id, [device_id])
266 defer.returnValue(device_id)
264 return device_id
267265 attempts += 1
268266
269267 raise errors.StoreError(500, "Couldn't generate a device ID.")
410408 @defer.inlineCallbacks
411409 def on_federation_query_user_devices(self, user_id):
412410 stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
413 defer.returnValue(
414 {"user_id": user_id, "stream_id": stream_id, "devices": devices}
415 )
411 return {"user_id": user_id, "stream_id": stream_id, "devices": devices}
416412
417413 @defer.inlineCallbacks
418414 def user_left_room(self, user, room_id):
429425 device.update({"last_seen_ts": ip.get("last_seen"), "last_seen_ip": ip.get("ip")})
430426
431427
432 class DeviceListEduUpdater(object):
428 class DeviceListUpdater(object):
433429 "Handles incoming device list updates from federation and updates the DB"
434430
435431 def __init__(self, hs, device_handler):
522518 logger.debug("Need to re-sync devices for %r? %r", user_id, resync)
523519
524520 if resync:
525 # Fetch all devices for the user.
526 origin = get_domain_from_id(user_id)
527 try:
528 result = yield self.federation.query_user_devices(origin, user_id)
529 except (
530 NotRetryingDestination,
531 RequestSendFailed,
532 HttpResponseException,
533 ):
534 # TODO: Remember that we are now out of sync and try again
535 # later
536 logger.warn("Failed to handle device list update for %s", user_id)
537 # We abort on exceptions rather than accepting the update
538 # as otherwise synapse will 'forget' that its device list
539 # is out of date. If we bail then we will retry the resync
540 # next time we get a device list update for this user_id.
541 # This makes it more likely that the device lists will
542 # eventually become consistent.
543 return
544 except FederationDeniedError as e:
545 logger.info(e)
546 return
547 except Exception:
548 # TODO: Remember that we are now out of sync and try again
549 # later
550 logger.exception(
551 "Failed to handle device list update for %s", user_id
552 )
553 return
554
555 stream_id = result["stream_id"]
556 devices = result["devices"]
557
558 # If the remote server has more than ~1000 devices for this user
559 # we assume that something is going horribly wrong (e.g. a bot
560 # that logs in and creates a new device every time it tries to
561 # send a message). Maintaining lots of devices per user in the
562 # cache can cause serious performance issues as if this request
563 # takes more than 60s to complete, internal replication from the
564 # inbound federation worker to the synapse master may time out
565 # causing the inbound federation to fail and causing the remote
566 # server to retry, causing a DoS. So in this scenario we give
567 # up on storing the total list of devices and only handle the
568 # delta instead.
569 if len(devices) > 1000:
570 logger.warn(
571 "Ignoring device list snapshot for %s as it has >1K devs (%d)",
572 user_id,
573 len(devices),
574 )
575 devices = []
576
577 for device in devices:
578 logger.debug(
579 "Handling resync update %r/%r, ID: %r",
580 user_id,
581 device["device_id"],
582 stream_id,
583 )
584
585 yield self.store.update_remote_device_list_cache(
586 user_id, devices, stream_id
587 )
588 device_ids = [device["device_id"] for device in devices]
589 yield self.device_handler.notify_device_update(user_id, device_ids)
590
591 # We clobber the seen updates since we've re-synced from a given
592 # point.
593 self._seen_updates[user_id] = set([stream_id])
521 yield self.user_device_resync(user_id)
594522 else:
595523 # Simply update the single device, since we know that is the only
596524 # change (because of the single prev_id matching the current cache)
622550 for _, stream_id, prev_ids, _ in updates:
623551 if not prev_ids:
624552 # We always do a resync if there are no previous IDs
625 defer.returnValue(True)
553 return True
626554
627555 for prev_id in prev_ids:
628556 if prev_id == extremity:
632560 elif prev_id in stream_id_in_updates:
633561 continue
634562 else:
635 defer.returnValue(True)
563 return True
636564
637565 stream_id_in_updates.add(stream_id)
638566
639 defer.returnValue(False)
567 return False
568
569 @defer.inlineCallbacks
570 def user_device_resync(self, user_id):
571 """Fetches all devices for a user and updates the device cache with them.
572
573 Args:
574 user_id (str): The ID of the user whose device list will be updated.
575 Returns:
576 Deferred[dict]: a dict with device info as under the "devices" in the result of this
577 request:
578 https://matrix.org/docs/spec/server_server/r0.1.2#get-matrix-federation-v1-user-devices-userid
579 """
580 # Fetch all devices for the user.
581 origin = get_domain_from_id(user_id)
582 try:
583 result = yield self.federation.query_user_devices(origin, user_id)
584 except (NotRetryingDestination, RequestSendFailed, HttpResponseException):
585 # TODO: Remember that we are now out of sync and try again
586 # later
587 logger.warn("Failed to handle device list update for %s", user_id)
588 # We abort on exceptions rather than accepting the update
589 # as otherwise synapse will 'forget' that its device list
590 # is out of date. If we bail then we will retry the resync
591 # next time we get a device list update for this user_id.
592 # This makes it more likely that the device lists will
593 # eventually become consistent.
594 return
595 except FederationDeniedError as e:
596 logger.info(e)
597 return
598 except Exception:
599 # TODO: Remember that we are now out of sync and try again
600 # later
601 logger.exception("Failed to handle device list update for %s", user_id)
602 return
603 stream_id = result["stream_id"]
604 devices = result["devices"]
605
606 # If the remote server has more than ~1000 devices for this user
607 # we assume that something is going horribly wrong (e.g. a bot
608 # that logs in and creates a new device every time it tries to
609 # send a message). Maintaining lots of devices per user in the
610 # cache can cause serious performance issues as if this request
611 # takes more than 60s to complete, internal replication from the
612 # inbound federation worker to the synapse master may time out
613 # causing the inbound federation to fail and causing the remote
614 # server to retry, causing a DoS. So in this scenario we give
615 # up on storing the total list of devices and only handle the
616 # delta instead.
617 if len(devices) > 1000:
618 logger.warn(
619 "Ignoring device list snapshot for %s as it has >1K devs (%d)",
620 user_id,
621 len(devices),
622 )
623 devices = []
624
625 for device in devices:
626 logger.debug(
627 "Handling resync update %r/%r, ID: %r",
628 user_id,
629 device["device_id"],
630 stream_id,
631 )
632
633 yield self.store.update_remote_device_list_cache(user_id, devices, stream_id)
634 device_ids = [device["device_id"] for device in devices]
635 yield self.device_handler.notify_device_update(user_id, device_ids)
636
637 # We clobber the seen updates since we've re-synced from a given
638 # point.
639 self._seen_updates[user_id] = set([stream_id])
640
641 defer.returnValue(result)
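The extracted `user_device_resync` keeps the pre-existing safety valve: a remote snapshot with more than ~1000 devices is assumed pathological and dropped wholesale, since caching it could stall internal replication past its timeout and invite a retry DoS. The guard reduces to a small pure function (a sketch; the function name and threshold constant are illustrative, not Synapse's):

```python
MAX_DEVICES = 1000  # illustrative constant matching the ~1K cutoff above

def clamp_device_snapshot(user_id, devices, log=print):
    # Accepting a huge full snapshot would make later cache operations
    # slow enough to time out internal replication, so the whole list is
    # discarded and only future deltas are handled.
    if len(devices) > MAX_DEVICES:
        log("Ignoring device list snapshot for %s as it has >1K devs (%d)"
            % (user_id, len(devices)))
        return []
    return devices
```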
209209 except AuthError as e:
210210 logger.info("Failed to update alias events: %s", e)
211211
212 defer.returnValue(room_id)
212 return room_id
213213
214214 @defer.inlineCallbacks
215215 def delete_appservice_association(self, service, room_alias):
228228
229229 room_id = yield self.store.delete_room_alias(room_alias)
230230
231 defer.returnValue(room_id)
231 return room_id
232232
233233 @defer.inlineCallbacks
234234 def get_association(self, room_alias):
276276 else:
277277 servers = list(servers)
278278
279 defer.returnValue({"room_id": room_id, "servers": servers})
280 return
279 return {"room_id": room_id, "servers": servers}
281280
282281 @defer.inlineCallbacks
283282 def on_directory_query(self, args):
288287 result = yield self.get_association_from_room_alias(room_alias)
289288
290289 if result is not None:
291 defer.returnValue({"room_id": result.room_id, "servers": result.servers})
290 return {"room_id": result.room_id, "servers": result.servers}
292291 else:
293292 raise SynapseError(
294293 404,
341340 # Query AS to see if it exists
342341 as_handler = self.appservice_handler
343342 result = yield as_handler.query_room_alias_exists(room_alias)
344 defer.returnValue(result)
343 return result
345344
346345 def can_modify_alias(self, alias, user_id=None):
347346 # Any application service "interested" in an alias they are regexing on
368367 creator = yield self.store.get_room_alias_creator(alias.to_string())
369368
370369 if creator is not None and creator == user_id:
371 defer.returnValue(True)
370 return True
372371
373372 is_admin = yield self.auth.is_server_admin(UserID.from_string(user_id))
374 defer.returnValue(is_admin)
373 return is_admin
375374
376375 @defer.inlineCallbacks
377376 def edit_published_room_list(self, requester, room_id, visibility):
2424 from synapse.api.errors import CodeMessageException, SynapseError
2525 from synapse.logging.context import make_deferred_yieldable, run_in_background
2626 from synapse.types import UserID, get_domain_from_id
27 from synapse.util import unwrapFirstError
2728 from synapse.util.retryutils import NotRetryingDestination
2829
2930 logger = logging.getLogger(__name__)
6465 }
6566 }
6667 """
68
6769 device_keys_query = query_body.get("device_keys", {})
6870
6971 # separate users by domain.
120122 # Now fetch any devices that we don't have in our cache
121123 @defer.inlineCallbacks
122124 def do_remote_query(destination):
125 """This is called when we are querying the device list of a user on
126 a remote homeserver and their device list is not in the device list
127 cache. If we share a room with this user and we're not querying for
128 a specific user, we will update the cache
129 with their device list."""
130
123131 destination_query = remote_queries_not_in_cache[destination]
132
133 # We first consider whether we wish to update the device list cache with
134 # the users device list. We want to track a user's devices when the
135 # authenticated user shares a room with the queried user and the query
136 # has not specified a particular device.
137 # If we update the cache for the queried user we remove them from further
138 # queries. We use the more efficient batched query_client_keys for all
139 # remaining users
140 user_ids_updated = []
141 for (user_id, device_list) in destination_query.items():
142 if user_id in user_ids_updated:
143 continue
144
145 if device_list:
146 continue
147
148 room_ids = yield self.store.get_rooms_for_user(user_id)
149 if not room_ids:
150 continue
151
152 # We've decided we're sharing a room with this user and should
153 # probably be tracking their device lists. However, we haven't
154 # done an initial sync on the device list so we do it now.
155 try:
156 user_devices = yield self.device_handler.device_list_updater.user_device_resync(
157 user_id
158 )
159 user_devices = user_devices["devices"]
160 for device in user_devices:
161 results[user_id] = {device["device_id"]: device["keys"]}
162 user_ids_updated.append(user_id)
163 except Exception as e:
164 failures[destination] = _exception_to_failure(e)
165
166 if len(destination_query) == len(user_ids_updated):
167 # We've updated all the users in the query and we do not need to
168 # make any further remote calls.
169 return
170
171 # Remove all the users from the query which we have updated
172 for user_id in user_ids_updated:
173 destination_query.pop(user_id)
174
124175 try:
125176 remote_result = yield self.federation.query_client_keys(
126177 destination, {"device_keys": destination_query}, timeout=timeout
131182 results[user_id] = keys
132183
133184 except Exception as e:
134 failures[destination] = _exception_to_failure(e)
185 failure = _exception_to_failure(e)
186 failures[destination] = failure
135187
136188 yield make_deferred_yieldable(
137189 defer.gatherResults(
140192 for destination in remote_queries_not_in_cache
141193 ],
142194 consumeErrors=True,
143 )
144 )
145
146 defer.returnValue({"device_keys": results, "failures": failures})
195 ).addErrback(unwrapFirstError)
196 )
197
198 return {"device_keys": results, "failures": failures}
147199
148200 @defer.inlineCallbacks
149201 def query_local_devices(self, query):
188240 r["unsigned"]["device_display_name"] = display_name
189241 result_dict[user_id][device_id] = r
190242
191 defer.returnValue(result_dict)
243 return result_dict
192244
193245 @defer.inlineCallbacks
194246 def on_federation_query_client_keys(self, query_body):
196248 """
197249 device_keys_query = query_body.get("device_keys", {})
198250 res = yield self.query_local_devices(device_keys_query)
199 defer.returnValue({"device_keys": res})
251 return {"device_keys": res}
200252
201253 @defer.inlineCallbacks
202254 def claim_one_time_keys(self, query, timeout):
233285 for user_id, keys in remote_result["one_time_keys"].items():
234286 if user_id in device_keys:
235287 json_result[user_id] = keys
288
236289 except Exception as e:
237 failures[destination] = _exception_to_failure(e)
290 failure = _exception_to_failure(e)
291 failures[destination] = failure
238292
239293 yield make_deferred_yieldable(
240294 defer.gatherResults(
258312 ),
259313 )
260314
261 defer.returnValue({"one_time_keys": json_result, "failures": failures})
315 return {"one_time_keys": json_result, "failures": failures}
262316
263317 @defer.inlineCallbacks
264318 def upload_keys_for_user(self, user_id, device_id, keys):
319
265320 time_now = self.clock.time_msec()
266321
267322 # TODO: Validate the JSON to make sure it has the right keys.
296351
297352 result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
298353
299 defer.returnValue({"one_time_key_counts": result})
354 return {"one_time_key_counts": result}
300355
301356 @defer.inlineCallbacks
302357 def _upload_one_time_keys_for_user(
8383 user_id, version, room_id, session_id
8484 )
8585
86 defer.returnValue(results)
86 return results
8787
8888 @defer.inlineCallbacks
8989 def delete_room_keys(self, user_id, version, room_id=None, session_id=None):
261261 new_version = yield self.store.create_e2e_room_keys_version(
262262 user_id, version_info
263263 )
264 defer.returnValue(new_version)
264 return new_version
265265
266266 @defer.inlineCallbacks
267267 def get_version_info(self, user_id, version=None):
291291 raise NotFoundError("Unknown backup version")
292292 else:
293293 raise
294 defer.returnValue(res)
294 return res
295295
296296 @defer.inlineCallbacks
297297 def delete_version(self, user_id, version=None):
349349 user_id, version, version_info
350350 )
351351
352 defer.returnValue({})
352 return {}
142142 "end": tokens[1].to_string(),
143143 }
144144
145 defer.returnValue(chunk)
145 return chunk
146146
147147
148148 class EventHandler(BaseHandler):
165165 event = yield self.store.get_event(event_id, check_room_id=room_id)
166166
167167 if not event:
168 defer.returnValue(None)
168 return None
169169 return
170170
171171 users = yield self.store.get_users_in_room(event.room_id)
178178 if not filtered:
179179 raise AuthError(403, "You don't have permission to access that event.")
180180
181 defer.returnValue(event)
181 return event
209209 event_id,
210210 origin,
211211 )
212 defer.returnValue(None)
212 return None
213213
214214 state = None
215215 auth_chain = []
675675 events = [e for e in events if e.event_id not in seen_events]
676676
677677 if not events:
678 defer.returnValue([])
678 return []
679679
680680 event_map = {e.event_id: e for e in events}
681681
837837 # TODO: We can probably do something more clever here.
838838 yield self._handle_new_event(dest, event, backfilled=True)
839839
840 defer.returnValue(events)
840 return events
841841
842842 @defer.inlineCallbacks
843843 def maybe_backfill(self, room_id, current_depth):
893893 )
894894
895895 if not filtered_extremities:
896 defer.returnValue(False)
896 return False
897897
898898 # Check if we reached a point where we should start backfilling.
899899 sorted_extremeties_tuple = sorted(extremities.items(), key=lambda e: -int(e[1]))
964964 # If this succeeded then we probably already have the
965965 # appropriate stuff.
966966 # TODO: We can probably do something more intelligent here.
967 defer.returnValue(True)
967 return True
968968 except SynapseError as e:
969969 logger.info("Failed to backfill from %s because %s", dom, e)
970970 continue
976976 continue
977977 except NotRetryingDestination as e:
978978 logger.info(str(e))
979 continue
980 except RequestSendFailed as e:
981 logger.info("Failed to get backfill from %s because %s", dom, e)
979982 continue
980983 except FederationDeniedError as e:
981984 logger.info(e)
984987 logger.exception("Failed to backfill from %s because %s", dom, e)
985988 continue
986989
987 defer.returnValue(False)
990 return False
988991
989992 success = yield try_backfill(likely_domains)
990993 if success:
991 defer.returnValue(True)
994 return True
992995
993996 # Huh, well *those* domains didn't work out. Lets try some domains
994997 # from the time.
10301033 [dom for dom, _ in likely_domains if dom not in tried_domains]
10311034 )
10321035 if success:
1033 defer.returnValue(True)
1036 return True
10341037
10351038 tried_domains.update(dom for dom, _ in likely_domains)
10361039
1037 defer.returnValue(False)
1040 return False
10381041
10391042 def _sanity_check_event(self, ev):
10401043 """
10811084 pdu=event,
10821085 )
10831086
1084 defer.returnValue(pdu)
1087 return pdu
10851088
10861089 @defer.inlineCallbacks
10871090 def on_event_auth(self, event_id):
10891092 auth = yield self.store.get_auth_chain(
10901093 [auth_id for auth_id in event.auth_event_ids()], include_given=True
10911094 )
1092 defer.returnValue([e for e in auth])
1095 return [e for e in auth]
10931096
10941097 @log_function
10951098 @defer.inlineCallbacks
11761179
11771180 run_in_background(self._handle_queued_pdus, room_queue)
11781181
1179 defer.returnValue(True)
1182 return True
11801183
11811184 @defer.inlineCallbacks
11821185 def _handle_queued_pdus(self, room_queue):
12631266 room_version, event, context, do_sig_check=False
12641267 )
12651268
1266 defer.returnValue(event)
1269 return event
12671270
12681271 @defer.inlineCallbacks
12691272 @log_function
13241327
13251328 state = yield self.store.get_events(list(prev_state_ids.values()))
13261329
1327 defer.returnValue({"state": list(state.values()), "auth_chain": auth_chain})
1330 return {"state": list(state.values()), "auth_chain": auth_chain}
13281331
13291332 @defer.inlineCallbacks
13301333 def on_invite_request(self, origin, pdu):
13801383 context = yield self.state_handler.compute_event_context(event)
13811384 yield self.persist_events_and_notify([(event, context)])
13821385
1383 defer.returnValue(event)
1386 return event
13841387
13851388 @defer.inlineCallbacks
13861389 def do_remotely_reject_invite(self, target_hosts, room_id, user_id):
14051408 context = yield self.state_handler.compute_event_context(event)
14061409 yield self.persist_events_and_notify([(event, context)])
14071410
1408 defer.returnValue(event)
1411 return event
14091412
14101413 @defer.inlineCallbacks
14111414 def _make_and_verify_event(
14231426 assert event.user_id == user_id
14241427 assert event.state_key == user_id
14251428 assert event.room_id == room_id
1426 defer.returnValue((origin, event, format_ver))
1429 return (origin, event, format_ver)
14271430
14281431 @defer.inlineCallbacks
14291432 @log_function
14831486 logger.warn("Failed to create new leave %r because %s", event, e)
14841487 raise e
14851488
1486 defer.returnValue(event)
1489 return event
14871490
14881491 @defer.inlineCallbacks
14891492 @log_function
15161519 event.signatures,
15171520 )
15181521
1519 defer.returnValue(None)
1522 return None
15201523
15211524 @defer.inlineCallbacks
15221525 def get_state_for_pdu(self, room_id, event_id):
15441547 del results[(event.type, event.state_key)]
15451548
15461549 res = list(results.values())
1547 defer.returnValue(res)
1550 return res
15481551 else:
1549 defer.returnValue([])
1552 return []
15501553
15511554 @defer.inlineCallbacks
15521555 def get_state_ids_for_pdu(self, room_id, event_id):
15711574 else:
15721575 results.pop((event.type, event.state_key), None)
15731576
1574 defer.returnValue(list(results.values()))
1577 return list(results.values())
15751578 else:
1576 defer.returnValue([])
1579 return []
15771580
15781581 @defer.inlineCallbacks
15791582 @log_function
15861589
15871590 events = yield filter_events_for_server(self.store, origin, events)
15881591
1589 defer.returnValue(events)
1592 return events
15901593
15911594 @defer.inlineCallbacks
15921595 @log_function
16161619
16171620 events = yield filter_events_for_server(self.store, origin, [event])
16181621 event = events[0]
1619 defer.returnValue(event)
1622 return event
16201623 else:
1621 defer.returnValue(None)
1624 return None
16221625
16231626 def get_min_depth_for_context(self, context):
16241627 return self.store.get_min_depth(context)
16501653 self.store.remove_push_actions_from_staging, event.event_id
16511654 )
16521655
1653 defer.returnValue(context)
1656 return context
16541657
16551658 @defer.inlineCallbacks
16561659 def _handle_new_events(self, origin, event_infos, backfilled=False):
16731676 auth_events=ev_info.get("auth_events"),
16741677 backfilled=backfilled,
16751678 )
1676 defer.returnValue(res)
1679 return res
16771680
16781681 contexts = yield make_deferred_yieldable(
16791682 defer.gatherResults(
18321835 if event.type == EventTypes.GuestAccess and not context.rejected:
18331836 yield self.maybe_kick_guest_users(event)
18341837
1835 defer.returnValue(context)
1838 return context
18361839
18371840 @defer.inlineCallbacks
18381841 def _check_for_soft_fail(self, event, state, backfilled):
19511954
19521955 logger.debug("on_query_auth returning: %s", ret)
19531956
1954 defer.returnValue(ret)
1957 return ret
19551958
19561959 @defer.inlineCallbacks
19571960 def on_get_missing_events(
19741977 self.store, origin, missing_events
19751978 )
19761979
1977 defer.returnValue(missing_events)
1980 return missing_events
19781981
19791982 @defer.inlineCallbacks
19801983 @log_function
24502453
24512454 logger.debug("construct_auth_difference returning")
24522455
2453 defer.returnValue(
2454 {
2455 "auth_chain": local_auth,
2456 "rejects": {
2457 e.event_id: {"reason": reason_map[e.event_id], "proof": None}
2458 for e in base_remote_rejected
2459 },
2460 "missing": [e.event_id for e in missing_locals],
2461 }
2462 )
2456 return {
2457 "auth_chain": local_auth,
2458 "rejects": {
2459 e.event_id: {"reason": reason_map[e.event_id], "proof": None}
2460 for e in base_remote_rejected
2461 },
2462 "missing": [e.event_id for e in missing_locals],
2463 }
24632464
24642465 @defer.inlineCallbacks
24652466 @log_function
26072608 builder=builder
26082609 )
26092610 EventValidator().validate_new(event)
2610 defer.returnValue((event, context))
2611 return (event, context)
26112612
26122613 @defer.inlineCallbacks
26132614 def _check_signature(self, event, context):
27972798 )
27982799 else:
27992800 return user_joined_room(self.distributor, user, room_id)
2801
2802 @defer.inlineCallbacks
2803 def get_room_complexity(self, remote_room_hosts, room_id):
2804 """
2805 Fetch the complexity of a remote room over federation.
2806
2807 Args:
2808 remote_room_hosts (list[str]): The remote servers to ask.
2809 room_id (str): The room ID to ask about.
2810
2811 Returns:
2812 Deferred[dict] or Deferred[None]: Dict contains the complexity
2813 metric versions, while None means we could not fetch the complexity.
2814 """
2815
2816 for host in remote_room_hosts:
2817 res = yield self.federation_client.get_room_complexity(host, room_id)
2818
2819 # We got a result, return it.
2820 if res:
2821 defer.returnValue(res)
2822
2823 # We fell off the bottom, couldn't get the complexity from anyone. Oh
2824 # well.
2825 defer.returnValue(None)
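The new `get_room_complexity` helper above asks each remote host in turn and returns the first truthy answer, falling back to `None` when nobody responds. The control flow, stripped of the federation plumbing (the names below are illustrative, not Synapse's API):

```python
def first_complexity(fetch, hosts, room_id):
    # Ask each host in order; the first truthy result wins.
    for host in hosts:
        res = fetch(host, room_id)
        if res:
            return res
    # Fell off the bottom: nobody could tell us the complexity.
    return None

# Hypothetical fetcher: only "b.example" knows the answer.
def fake_fetch(host, room_id):
    return {"v1": 2.5} if host == "b.example" else None

print(first_complexity(fake_fetch, ["a.example", "b.example"], "!room:x"))
```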
125125 group_id, requester_user_id
126126 )
127127 else:
128 res = yield self.transport_client.get_group_summary(
129 get_domain_from_id(group_id), group_id, requester_user_id
130 )
128 try:
129 res = yield self.transport_client.get_group_summary(
130 get_domain_from_id(group_id), group_id, requester_user_id
131 )
132 except RequestSendFailed:
133 raise SynapseError(502, "Failed to contact group server")
131134
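Each remote transport call in the groups handler is now wrapped in the same way, so that a low-level `RequestSendFailed` surfaces to clients as an HTTP 502 instead of an internal error. A sketch of that translation pattern, using stand-in exception classes rather than Synapse's real ones:

```python
class RequestSendFailed(Exception):
    """Stand-in for the transport-level failure."""

class SynapseError(Exception):
    """Stand-in for Synapse's client-facing error."""
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code

def call_group_server(transport_call, *args):
    # Translate transport failures into a clean 502 for the client,
    # mirroring the try/except blocks added throughout this hunk.
    try:
        return transport_call(*args)
    except RequestSendFailed:
        raise SynapseError(502, "Failed to contact group server")

def flaky(*args):
    raise RequestSendFailed("connection refused")

try:
    call_group_server(flaky)
except SynapseError as e:
    print(e.code)  # prints 502
```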
132135 group_server_name = get_domain_from_id(group_id)
133136
161164
162165 res.setdefault("user", {})["is_publicised"] = is_publicised
163166
164 defer.returnValue(res)
167 return res
165168
166169 @defer.inlineCallbacks
167170 def create_group(self, group_id, user_id, content):
182185
183186 content["user_profile"] = yield self.profile_handler.get_profile(user_id)
184187
185 res = yield self.transport_client.create_group(
186 get_domain_from_id(group_id), group_id, user_id, content
187 )
188 try:
189 res = yield self.transport_client.create_group(
190 get_domain_from_id(group_id), group_id, user_id, content
191 )
192 except RequestSendFailed:
193 raise SynapseError(502, "Failed to contact group server")
188194
189195 remote_attestation = res["attestation"]
190196 yield self.attestations.verify_attestation(
206212 )
207213 self.notifier.on_new_event("groups_key", token, users=[user_id])
208214
209 defer.returnValue(res)
215 return res
210216
211217 @defer.inlineCallbacks
212218 def get_users_in_group(self, group_id, requester_user_id):
216222 res = yield self.groups_server_handler.get_users_in_group(
217223 group_id, requester_user_id
218224 )
219 defer.returnValue(res)
225 return res
220226
221227 group_server_name = get_domain_from_id(group_id)
222228
223 res = yield self.transport_client.get_users_in_group(
224 get_domain_from_id(group_id), group_id, requester_user_id
225 )
229 try:
230 res = yield self.transport_client.get_users_in_group(
231 get_domain_from_id(group_id), group_id, requester_user_id
232 )
233 except RequestSendFailed:
234 raise SynapseError(502, "Failed to contact group server")
226235
227236 chunk = res["chunk"]
228237 valid_entries = []
243252
244253 res["chunk"] = valid_entries
245254
246 defer.returnValue(res)
255 return res
247256
248257 @defer.inlineCallbacks
249258 def join_group(self, group_id, user_id, content):
257266 local_attestation = self.attestations.create_attestation(group_id, user_id)
258267 content["attestation"] = local_attestation
259268
260 res = yield self.transport_client.join_group(
261 get_domain_from_id(group_id), group_id, user_id, content
262 )
269 try:
270 res = yield self.transport_client.join_group(
271 get_domain_from_id(group_id), group_id, user_id, content
272 )
273 except RequestSendFailed:
274 raise SynapseError(502, "Failed to contact group server")
263275
264276 remote_attestation = res["attestation"]
265277
284296 )
285297 self.notifier.on_new_event("groups_key", token, users=[user_id])
286298
287 defer.returnValue({})
299 return {}
288300
289301 @defer.inlineCallbacks
290302 def accept_invite(self, group_id, user_id, content):
298310 local_attestation = self.attestations.create_attestation(group_id, user_id)
299311 content["attestation"] = local_attestation
300312
301 res = yield self.transport_client.accept_group_invite(
302 get_domain_from_id(group_id), group_id, user_id, content
303 )
313 try:
314 res = yield self.transport_client.accept_group_invite(
315 get_domain_from_id(group_id), group_id, user_id, content
316 )
317 except RequestSendFailed:
318 raise SynapseError(502, "Failed to contact group server")
304319
305320 remote_attestation = res["attestation"]
306321
325340 )
326341 self.notifier.on_new_event("groups_key", token, users=[user_id])
327342
328 defer.returnValue({})
343 return {}
329344
330345 @defer.inlineCallbacks
331346 def invite(self, group_id, user_id, requester_user_id, config):
337352 group_id, user_id, requester_user_id, content
338353 )
339354 else:
340 res = yield self.transport_client.invite_to_group(
341 get_domain_from_id(group_id),
342 group_id,
343 user_id,
344 requester_user_id,
345 content,
346 )
347
348 defer.returnValue(res)
355 try:
356 res = yield self.transport_client.invite_to_group(
357 get_domain_from_id(group_id),
358 group_id,
359 user_id,
360 requester_user_id,
361 content,
362 )
363 except RequestSendFailed:
364 raise SynapseError(502, "Failed to contact group server")
365
366 return res
349367
350368 @defer.inlineCallbacks
351369 def on_invite(self, group_id, user_id, content):
376394 logger.warn("No profile for user %s: %s", user_id, e)
377395 user_profile = {}
378396
379 defer.returnValue({"state": "invite", "user_profile": user_profile})
397 return {"state": "invite", "user_profile": user_profile}
380398
381399 @defer.inlineCallbacks
382400 def remove_user_from_group(self, group_id, user_id, requester_user_id, content):
397415 )
398416 else:
399417 content["requester_user_id"] = requester_user_id
400 res = yield self.transport_client.remove_user_from_group(
401 get_domain_from_id(group_id),
402 group_id,
403 requester_user_id,
404 user_id,
405 content,
406 )
407
408 defer.returnValue(res)
418 try:
419 res = yield self.transport_client.remove_user_from_group(
420 get_domain_from_id(group_id),
421 group_id,
422 requester_user_id,
423 user_id,
424 content,
425 )
426 except RequestSendFailed:
427 raise SynapseError(502, "Failed to contact group server")
428
429 return res
409430
410431 @defer.inlineCallbacks
411432 def user_removed_from_group(self, group_id, user_id, content):
420441 @defer.inlineCallbacks
421442 def get_joined_groups(self, user_id):
422443 group_ids = yield self.store.get_joined_groups(user_id)
423 defer.returnValue({"groups": group_ids})
444 return {"groups": group_ids}
424445
425446 @defer.inlineCallbacks
426447 def get_publicised_groups_for_user(self, user_id):
432453 for app_service in self.store.get_app_services():
433454 result.extend(app_service.get_groups_for_user(user_id))
434455
435 defer.returnValue({"groups": result})
436 else:
437 bulk_result = yield self.transport_client.bulk_get_publicised_groups(
438 get_domain_from_id(user_id), [user_id]
439 )
456 return {"groups": result}
457 else:
458 try:
459 bulk_result = yield self.transport_client.bulk_get_publicised_groups(
460 get_domain_from_id(user_id), [user_id]
461 )
462 except RequestSendFailed:
463 raise SynapseError(502, "Failed to contact group server")
464
440465 result = bulk_result.get("users", {}).get(user_id)
441466 # TODO: Verify attestations
442 defer.returnValue({"groups": result})
467 return {"groups": result}
443468
444469 @defer.inlineCallbacks
445470 def bulk_get_publicised_groups(self, user_ids, proxy=True):
474499 for app_service in self.store.get_app_services():
475500 results[uid].extend(app_service.get_groups_for_user(uid))
476501
477 defer.returnValue({"users": results})
502 return {"users": results}
8181 "%s is not a trusted ID server: rejecting 3pid " + "credentials",
8282 id_server,
8383 )
84 defer.returnValue(None)
84 return None
8585
8686 try:
8787 data = yield self.http_client.get_json(
9494 raise e.to_synapse_error()
9595
9696 if "medium" in data:
97 defer.returnValue(data)
98 defer.returnValue(None)
97 return data
98 return None
9999
100100 @defer.inlineCallbacks
101101 def bind_threepid(self, creds, mxid):
132132 )
133133 except CodeMessageException as e:
134134 data = json.loads(e.msg) # XXX WAT?
135 defer.returnValue(data)
135 return data
136136
137137 @defer.inlineCallbacks
138138 def try_unbind_threepid(self, mxid, threepid):
160160
161161 # We don't know where to unbind, so we don't have a choice but to return
162162 if not id_servers:
163 defer.returnValue(False)
163 return False
164164
165165 changed = True
166166 for id_server in id_servers:
168168 mxid, threepid, id_server
169169 )
170170
171 defer.returnValue(changed)
171 return changed
172172
173173 @defer.inlineCallbacks
174174 def try_unbind_threepid_with_id_server(self, mxid, threepid, id_server):
223223 id_server=id_server,
224224 )
225225
226 defer.returnValue(changed)
226 return changed
227227
228228 @defer.inlineCallbacks
229229 def requestEmailToken(
249249 % (id_server, "/_matrix/identity/api/v1/validate/email/requestToken"),
250250 params,
251251 )
252 defer.returnValue(data)
252 return data
253253 except HttpResponseException as e:
254254 logger.info("Proxied requestToken failed: %r", e)
255255 raise e.to_synapse_error()
277277 % (id_server, "/_matrix/identity/api/v1/validate/msisdn/requestToken"),
278278 params,
279279 )
280 defer.returnValue(data)
280 return data
281281 except HttpResponseException as e:
282282 logger.info("Proxied requestToken failed: %r", e)
283283 raise e.to_synapse_error()
249249 "end": now_token.to_string(),
250250 }
251251
252 defer.returnValue(ret)
252 return ret
253253
254254 @defer.inlineCallbacks
255255 def room_initial_sync(self, requester, room_id, pagin_config=None):
300300
301301 result["account_data"] = account_data_events
302302
303 defer.returnValue(result)
303 return result
304304
305305 @defer.inlineCallbacks
306306 def _room_initial_sync_parted(
329329
330330 time_now = self.clock.time_msec()
331331
332 defer.returnValue(
333 {
334 "membership": membership,
335 "room_id": room_id,
336 "messages": {
337 "chunk": (
338 yield self._event_serializer.serialize_events(
339 messages, time_now
340 )
341 ),
342 "start": start_token.to_string(),
343 "end": end_token.to_string(),
344 },
345 "state": (
346 yield self._event_serializer.serialize_events(
347 room_state.values(), time_now
348 )
332 return {
333 "membership": membership,
334 "room_id": room_id,
335 "messages": {
336 "chunk": (
337 yield self._event_serializer.serialize_events(messages, time_now)
349338 ),
350 "presence": [],
351 "receipts": [],
352 }
353 )
339 "start": start_token.to_string(),
340 "end": end_token.to_string(),
341 },
342 "state": (
343 yield self._event_serializer.serialize_events(
344 room_state.values(), time_now
345 )
346 ),
347 "presence": [],
348 "receipts": [],
349 }
354350
355351 @defer.inlineCallbacks
356352 def _room_initial_sync_joined(
383379 def get_presence():
384380 # If presence is disabled, return an empty list
385381 if not self.hs.config.use_presence:
386 defer.returnValue([])
382 return []
387383
388384 states = yield presence_handler.get_states(
389385 [m.user_id for m in room_members], as_event=True
390386 )
391387
392 defer.returnValue(states)
388 return states
393389
394390 @defer.inlineCallbacks
395391 def get_receipts():
398394 )
399395 if not receipts:
400396 receipts = []
401 defer.returnValue(receipts)
397 return receipts
402398
403399 presence, receipts, (messages, token) = yield make_deferred_yieldable(
404400 defer.gatherResults(
441437 if not is_peeking:
442438 ret["membership"] = membership
443439
444 defer.returnValue(ret)
440 return ret
445441
446442 @defer.inlineCallbacks
447443 def _check_in_room_or_world_readable(self, room_id, user_id):
452448 # * The user is a guest user, and has joined the room
453449 # else it will throw.
454450 member_event = yield self.auth.check_user_was_in_room(room_id, user_id)
455 defer.returnValue((member_event.membership, member_event.event_id))
451 return (member_event.membership, member_event.event_id)
456452 return
457453 except AuthError:
458454 visibility = yield self.state_handler.get_current_state(
462458 visibility
463459 and visibility.content["history_visibility"] == "world_readable"
464460 ):
465 defer.returnValue((Membership.JOIN, None))
461 return (Membership.JOIN, None)
466462 return
467463 raise AuthError(
468464 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
8686 )
8787 data = room_state[membership_event_id].get(key)
8888
89 defer.returnValue(data)
89 return data
9090
9191 @defer.inlineCallbacks
9292 def get_state_events(
173173 # events, as clients won't use them.
174174 bundle_aggregations=False,
175175 )
176 defer.returnValue(events)
176 return events
177177
178178 @defer.inlineCallbacks
179179 def get_joined_members(self, requester, room_id):
212212 # Loop fell through, AS has no interested users in room
213213 raise AuthError(403, "Appservice not in room")
214214
215 defer.returnValue(
216 {
217 user_id: {
218 "avatar_url": profile.avatar_url,
219 "display_name": profile.display_name,
220 }
221 for user_id, profile in iteritems(users_with_profile)
215 return {
216 user_id: {
217 "avatar_url": profile.avatar_url,
218 "display_name": profile.display_name,
222219 }
223 )
220 for user_id, profile in iteritems(users_with_profile)
221 }
224222
225223
226224 class EventCreationHandler(object):
379377 # tolerate them in event_auth.check().
380378 prev_state_ids = yield context.get_prev_state_ids(self.store)
381379 prev_event_id = prev_state_ids.get((EventTypes.Member, event.sender))
382 prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
380 prev_event = (
381 yield self.store.get_event(prev_event_id, allow_none=True)
382 if prev_event_id
383 else None
384 )
383385 if not prev_event or prev_event.membership != Membership.JOIN:
384386 logger.warning(
385387 (
397399
398400 self.validator.validate_new(event)
399401
400 defer.returnValue((event, context))
402 return (event, context)
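The membership check above now only fetches the previous event when `prev_event_id` is actually set, using a conditional expression so `get_event` is never called with `None`. The shape of that guard, with illustrative names:

```python
def fetch_if_set(get_event, prev_event_id):
    # Only hit the store when we actually have an id; calling
    # get_event(None) would fail deep inside the storage layer.
    return get_event(prev_event_id) if prev_event_id else None

events = {"$abc": {"membership": "join"}}
print(fetch_if_set(events.get, "$abc"))  # prints {'membership': 'join'}
print(fetch_if_set(events.get, None))    # prints None
```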
401403
402404 def _is_exempt_from_privacy_policy(self, builder, requester):
403405 """"Determine if an event to be sent is exempt from having to consent
424426 @defer.inlineCallbacks
425427 def _is_server_notices_room(self, room_id):
426428 if self.config.server_notices_mxid is None:
427 defer.returnValue(False)
429 return False
428430 user_ids = yield self.store.get_users_in_room(room_id)
429 defer.returnValue(self.config.server_notices_mxid in user_ids)
431 return self.config.server_notices_mxid in user_ids
430432
431433 @defer.inlineCallbacks
432434 def assert_accepted_privacy_policy(self, requester):
506508 event.event_id,
507509 prev_state.event_id,
508510 )
509 defer.returnValue(prev_state)
511 return prev_state
510512
511513 yield self.handle_new_client_event(
512514 requester=requester, event=event, context=context, ratelimit=ratelimit
522524 """
523525 prev_state_ids = yield context.get_prev_state_ids(self.store)
524526 prev_event_id = prev_state_ids.get((event.type, event.state_key))
527 if not prev_event_id:
528 return
525529 prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
526530 if not prev_event:
527531 return
530534 prev_content = encode_canonical_json(prev_event.content)
531535 next_content = encode_canonical_json(event.content)
532536 if prev_content == next_content:
533 defer.returnValue(prev_event)
537 return prev_event
534538 return
535539
536540 @defer.inlineCallbacks
562566 yield self.send_nonmember_event(
563567 requester, event, context, ratelimit=ratelimit
564568 )
565 defer.returnValue(event)
569 return event
566570
567571 @measure_func("create_new_client_event")
568572 @defer.inlineCallbacks
625629
626630 logger.debug("Created event %s", event.event_id)
627631
628 defer.returnValue((event, context))
632 return (event, context)
629633
630634 @measure_func("handle_new_client_event")
631635 @defer.inlineCallbacks
790794 get_prev_content=False,
791795 allow_rejected=False,
792796 allow_none=True,
793 check_room_id=event.room_id,
794797 )
795798
796799 # we can make some additional checks now if we have the original event.
797800 if original_event:
798801 if original_event.type == EventTypes.Create:
799802 raise AuthError(403, "Redacting create events is not permitted")
803
804 if original_event.room_id != event.room_id:
805 raise SynapseError(400, "Cannot redact event from a different room")
800806
801807 prev_state_ids = yield context.get_prev_state_ids(self.store)
802808 auth_events_ids = yield self.auth.compute_auth_events(
241241 )
242242
243243 if not events:
244 defer.returnValue(
245 {
246 "chunk": [],
247 "start": pagin_config.from_token.to_string(),
248 "end": next_token.to_string(),
249 }
250 )
244 return {
245 "chunk": [],
246 "start": pagin_config.from_token.to_string(),
247 "end": next_token.to_string(),
248 }
251249
252250 state = None
253251 if event_filter and event_filter.lazy_load_members() and len(events) > 0:
285283 )
286284 )
287285
288 defer.returnValue(chunk)
286 return chunk
332332 """Checks the presence of users that have timed out and updates as
333333 appropriate.
334334 """
335 logger.info("Handling presence timeouts")
335 logger.debug("Handling presence timeouts")
336336 now = self.clock.time_msec()
337337
338338 # Fetch the list of users that *may* have timed out. Things may have
460460 if affect_presence:
461461 run_in_background(_end)
462462
463 defer.returnValue(_user_syncing())
463 return _user_syncing()
464464
465465 def get_currently_syncing_users(self):
466466 """Get the set of user ids that are currently syncing on this HS.
555555 """Get the current presence state for a user.
556556 """
557557 res = yield self.current_state_for_users([user_id])
558 defer.returnValue(res[user_id])
558 return res[user_id]
559559
560560 @defer.inlineCallbacks
561561 def current_state_for_users(self, user_ids):
584584 states.update(new)
585585 self.user_to_current_state.update(new)
586586
587 defer.returnValue(states)
587 return states
588588
589589 @defer.inlineCallbacks
590590 def _persist_and_notify(self, states):
680680 def get_state(self, target_user, as_event=False):
681681 results = yield self.get_states([target_user.to_string()], as_event=as_event)
682682
683 defer.returnValue(results[0])
683 return results[0]
684684
685685 @defer.inlineCallbacks
686686 def get_states(self, target_user_ids, as_event=False):
702702
703703 now = self.clock.time_msec()
704704 if as_event:
705 defer.returnValue(
706 [
707 {
708 "type": "m.presence",
709 "content": format_user_presence_state(state, now),
710 }
711 for state in updates
712 ]
713 )
705 return [
706 {
707 "type": "m.presence",
708 "content": format_user_presence_state(state, now),
709 }
710 for state in updates
711 ]
714712 else:
715 defer.returnValue(updates)
713 return updates
716714
717715 @defer.inlineCallbacks
718716 def set_state(self, target_user, state, ignore_status_msg=False):
756754 )
757755
758756 if observer_room_ids & observed_room_ids:
759 defer.returnValue(True)
760
761 defer.returnValue(False)
757 return True
758
759 return False
762760
763761 @defer.inlineCallbacks
764762 def get_all_presence_updates(self, last_id, current_id):
777775 # TODO(markjh): replicate the unpersisted changes.
778776 # This could use the in-memory stores for recent changes.
779777 rows = yield self.store.get_all_presence_updates(last_id, current_id)
780 defer.returnValue(rows)
778 return rows
781779
782780 def notify_new_event(self):
783781 """Called when new events have happened. Handles users and servers
10331031 #
10341032 # Hence this guard where we just return nothing so that the sync
10351033 # doesn't return. C.f. #5503.
1036 defer.returnValue(([], max_token))
1034 return ([], max_token)
10371035
10381036 presence = self.get_presence_handler()
10391037 stream_change_cache = self.store.presence_stream_cache
10671065 updates = yield presence.current_state_for_users(user_ids_changed)
10681066
10691067 if include_offline:
1070 defer.returnValue((list(updates.values()), max_token))
1068 return (list(updates.values()), max_token)
10711069 else:
1072 defer.returnValue(
1073 (
1074 [
1075 s
1076 for s in itervalues(updates)
1077 if s.state != PresenceState.OFFLINE
1078 ],
1079 max_token,
1080 )
1070 return (
1071 [s for s in itervalues(updates) if s.state != PresenceState.OFFLINE],
1072 max_token,
10811073 )
10821074
10831075 def get_current_key(self):
11061098 )
11071099 users_interested_in.update(user_ids)
11081100
1109 defer.returnValue(users_interested_in)
1101 return users_interested_in
11101102
11111103
11121104 def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now):
12861278 # Always notify self
12871279 users_to_states.setdefault(state.user_id, []).append(state)
12881280
1289 defer.returnValue((room_ids_to_states, users_to_states))
1281 return (room_ids_to_states, users_to_states)
12901282
12911283
12921284 @defer.inlineCallbacks
13201312 host = get_domain_from_id(user_id)
13211313 hosts_and_states.append(([host], states))
13221314
1323 defer.returnValue(hosts_and_states)
1315 return hosts_and_states
7272 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
7373 raise
7474
75 defer.returnValue({"displayname": displayname, "avatar_url": avatar_url})
75 return {"displayname": displayname, "avatar_url": avatar_url}
7676 else:
7777 try:
7878 result = yield self.federation.make_query(
8181 args={"user_id": user_id},
8282 ignore_backoff=True,
8383 )
84 defer.returnValue(result)
84 return result
8585 except RequestSendFailed as e:
8686 raise_from(SynapseError(502, "Failed to fetch profile"), e)
8787 except HttpResponseException as e:
107107 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
108108 raise
109109
110 defer.returnValue({"displayname": displayname, "avatar_url": avatar_url})
110 return {"displayname": displayname, "avatar_url": avatar_url}
111111 else:
112112 profile = yield self.store.get_from_remote_profile_cache(user_id)
113 defer.returnValue(profile or {})
113 return profile or {}
114114
115115 @defer.inlineCallbacks
116116 def get_displayname(self, target_user):
124124 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
125125 raise
126126
127 defer.returnValue(displayname)
127 return displayname
128128 else:
129129 try:
130130 result = yield self.federation.make_query(
138138 except HttpResponseException as e:
139139 raise e.to_synapse_error()
140140
141 defer.returnValue(result["displayname"])
141 return result["displayname"]
142142
143143 @defer.inlineCallbacks
144144 def set_displayname(self, target_user, requester, new_displayname, by_admin=False):
185185 if e.code == 404:
186186 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
187187 raise
188 defer.returnValue(avatar_url)
188 return avatar_url
189189 else:
190190 try:
191191 result = yield self.federation.make_query(
199199 except HttpResponseException as e:
200200 raise e.to_synapse_error()
201201
202 defer.returnValue(result["avatar_url"])
202 return result["avatar_url"]
203203
204204 @defer.inlineCallbacks
205205 def set_avatar_url(self, target_user, requester, new_avatar_url, by_admin=False):
250250 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
251251 raise
252252
253 defer.returnValue(response)
253 return response
254254
255255 @defer.inlineCallbacks
256256 def _update_join_states(self, requester, target_user):
9292
9393 if min_batch_id is None:
9494 # no new receipts
95 defer.returnValue(False)
95 return False
9696
9797 affected_room_ids = list(set([r.room_id for r in receipts]))
9898
102102 min_batch_id, max_batch_id, affected_room_ids
103103 )
104104
105 defer.returnValue(True)
105 return True
106106
107107 @defer.inlineCallbacks
108108 def received_client_receipt(self, room_id, receipt_type, user_id, event_id):
132132 )
133133
134134 if not result:
135 defer.returnValue([])
135 return []
136136
137 defer.returnValue(result)
137 return result
138138
139139
140140 class ReceiptEventSource(object):
147147 to_key = yield self.get_current_key()
148148
149149 if from_key == to_key:
150 defer.returnValue(([], to_key))
150 return ([], to_key)
151151
152152 events = yield self.store.get_linearized_receipts_for_rooms(
153153 room_ids, from_key=from_key, to_key=to_key
154154 )
155155
156 defer.returnValue((events, to_key))
156 return (events, to_key)
157157
158158 def get_current_key(self, direction="f"):
159159 return self.store.get_max_receipt_stream_id()
172172 room_ids, from_key=from_key, to_key=to_key
173173 )
174174
175 defer.returnValue((events, to_key))
175 return (events, to_key)
264264 # Bind email to new account
265265 yield self._register_email_threepid(user_id, threepid_dict, None, False)
266266
267 defer.returnValue(user_id)
267 return user_id
268268
269269 @defer.inlineCallbacks
270270 def _auto_join_rooms(self, user_id):
359359 appservice_id=service_id,
360360 create_profile_with_displayname=user.localpart,
361361 )
362 defer.returnValue(user_id)
362 return user_id
363363
364364 @defer.inlineCallbacks
365365 def check_recaptcha(self, ip, private_key, challenge, response):
460460
461461 id = self._next_generated_user_id
462462 self._next_generated_user_id += 1
463 defer.returnValue(str(id))
463 return str(id)
464464
465465 @defer.inlineCallbacks
466466 def _validate_captcha(self, ip_addr, private_key, challenge, response):
480480 "error_url": "http://www.recaptcha.net/recaptcha/api/challenge?"
481481 + "error=%s" % lines[1],
482482 }
483 defer.returnValue(json)
483 return json
484484
485485 @defer.inlineCallbacks
486486 def _submit_captcha(self, ip_addr, private_key, challenge, response):
496496 "response": response,
497497 },
498498 )
499 defer.returnValue(data)
499 return data
500500
501501 @defer.inlineCallbacks
502502 def _join_user_to_room(self, requester, room_identifier):
621621 initial_display_name=initial_display_name,
622622 is_guest=is_guest,
623623 )
624 defer.returnValue((r["device_id"], r["access_token"]))
624 return (r["device_id"], r["access_token"])
625625
626626 valid_until_ms = None
627627 if self.session_lifetime is not None:
644644 user_id, device_id=device_id, valid_until_ms=valid_until_ms
645645 )
646646
647 defer.returnValue((device_id, access_token))
647 return (device_id, access_token)
648648
649649 @defer.inlineCallbacks
650650 def post_registration_actions(
797797 if ex.errcode == Codes.MISSING_PARAM:
798798 # This will only happen if the ID server returns a malformed response
799799 logger.info("Can't add incomplete 3pid")
800 defer.returnValue(None)
800 return None
801801 raise
802802
803803 yield self._auth_handler.add_threepid(
127127 old_room_id,
128128 new_version, # args for _upgrade_room
129129 )
130 defer.returnValue(ret)
130 return ret
131131
132132 @defer.inlineCallbacks
133133 def _upgrade_room(self, requester, old_room_id, new_version):
192192 requester, old_room_id, new_room_id, old_room_state
193193 )
194194
195 defer.returnValue(new_room_id)
195 return new_room_id
196196
197197 @defer.inlineCallbacks
198198 def _update_upgraded_room_pls(
670670 result["room_alias"] = room_alias.to_string()
671671 yield directory_handler.send_room_alias_update_event(requester, room_id)
672672
673 defer.returnValue(result)
673 return result
674674
675675 @defer.inlineCallbacks
676676 def _send_events_for_new_room(
795795 room_creator_user_id=creator_id,
796796 is_public=is_public,
797797 )
798 defer.returnValue(gen_room_id)
798 return gen_room_id
799799 except StoreError:
800800 attempts += 1
801801 raise StoreError(500, "Couldn't generate a room ID.")
838838 event_id, get_prev_content=True, allow_none=True
839839 )
840840 if not event:
841 defer.returnValue(None)
841 return None
842842 return
843843
844844 filtered = yield (filter_evts([event]))
889889
890890 results["end"] = token.copy_and_replace("room_key", results["end"]).to_string()
891891
892 defer.returnValue(results)
892 return results
893893
894894
895895 class RoomEventSource(object):
940940 else:
941941 end_key = to_key
942942
943 defer.returnValue((events, end_key))
943 return (events, end_key)
944944
945945 def get_current_key(self):
946946 return self.store.get_room_events_max_id()
958958 limit=config.limit,
959959 )
960960
961 defer.returnValue((events, next_key))
961 return (events, next_key)
324324 current_limit=since_token.current_limit - 1,
325325 ).to_token()
326326
327 defer.returnValue(results)
327 return results
328328
329329 @defer.inlineCallbacks
330330 def _append_room_entry_to_chunk(
419419 if join_rules_event:
420420 join_rule = join_rules_event.content.get("join_rule", None)
421421 if not allow_private and join_rule and join_rule != JoinRules.PUBLIC:
422 defer.returnValue(None)
422 return None
423423
424424 # Return whether this room is open to federation users or not
425425 create_event = current_state.get((EventTypes.Create, ""))
468468 if avatar_url:
469469 result["avatar_url"] = avatar_url
470470
471 defer.returnValue(result)
471 return result
472472
473473 @defer.inlineCallbacks
474474 def get_remote_public_room_list(
481481 third_party_instance_id=None,
482482 ):
483483 if not self.enable_room_list_search:
484 defer.returnValue({"chunk": [], "total_room_count_estimate": 0})
484 return {"chunk": [], "total_room_count_estimate": 0}
485485
486486 if search_filter:
487487 # We currently don't support searching across federation, so we have
506506 ]
507507 }
508508
509 defer.returnValue(res)
509 return res
510510
511511 def _get_remote_list_cached(
512512 self,
2525
2626 from twisted.internet import defer
2727
28 import synapse.server
29 import synapse.types
28 from synapse import types
3029 from synapse.api.constants import EventTypes, Membership
3130 from synapse.api.errors import AuthError, Codes, HttpResponseException, SynapseError
3231 from synapse.types import RoomID, UserID
190189 )
191190 if duplicate is not None:
192191 # Discard the new event since this membership change is a no-op.
193 defer.returnValue(duplicate)
192 return duplicate
194193
195194 yield self.event_creation_handler.handle_new_client_event(
196195 requester, event, context, extra_users=[target], ratelimit=ratelimit
232231 if prev_member_event.membership == Membership.JOIN:
233232 yield self._user_left_room(target, room_id)
234233
235 defer.returnValue(event)
234 return event
236235
237236 @defer.inlineCallbacks
238237 def copy_room_tags_and_direct_to_room(self, old_room_id, new_room_id, user_id):
302301 require_consent=require_consent,
303302 )
304303
305 defer.returnValue(result)
304 return result
306305
307306 @defer.inlineCallbacks
308307 def _update_membership(
422421 same_membership = old_membership == effective_membership_state
423422 same_sender = requester.user.to_string() == old_state.sender
424423 if same_sender and same_membership and same_content:
425 defer.returnValue(old_state)
424 return old_state
426425
427426 if old_membership in ["ban", "leave"] and action == "kick":
428427 raise AuthError(403, "The target user is not in the room")
472471 ret = yield self._remote_join(
473472 requester, remote_room_hosts, room_id, target, content
474473 )
475 defer.returnValue(ret)
474 return ret
476475
477476 elif effective_membership_state == Membership.LEAVE:
478477 if not is_host_in_room:
494493 res = yield self._remote_reject_invite(
495494 requester, remote_room_hosts, room_id, target
496495 )
497 defer.returnValue(res)
496 return res
498497
499498 res = yield self._local_membership_update(
500499 requester=requester,
507506 content=content,
508507 require_consent=require_consent,
509508 )
510 defer.returnValue(res)
509 return res
511510
512511 @defer.inlineCallbacks
513512 def send_membership_event(
542541 ), "Sender (%s) must be same as requester (%s)" % (sender, requester.user)
543542 assert self.hs.is_mine(sender), "Sender must be our own: %s" % (sender,)
544543 else:
545 requester = synapse.types.create_requester(target_user)
544 requester = types.create_requester(target_user)
546545
547546 prev_event = yield self.event_creation_handler.deduplicate_state_event(
548547 event, context
595594 """
596595 guest_access_id = current_state_ids.get((EventTypes.GuestAccess, ""), None)
597596 if not guest_access_id:
598 defer.returnValue(False)
597 return False
599598
600599 guest_access = yield self.store.get_event(guest_access_id)
601600
602 defer.returnValue(
601 return (
603602 guest_access
604603 and guest_access.content
605604 and "guest_access" in guest_access.content
634633 servers.remove(room_alias.domain)
635634 servers.insert(0, room_alias.domain)
636635
637 defer.returnValue((RoomID.from_string(room_id), servers))
636 return (RoomID.from_string(room_id), servers)
638637
639638 @defer.inlineCallbacks
640639 def _get_inviter(self, user_id, room_id):
642641 user_id=user_id, room_id=room_id
643642 )
644643 if invite:
645 defer.returnValue(UserID.from_string(invite.sender))
644 return UserID.from_string(invite.sender)
646645
647646 @defer.inlineCallbacks
648647 def do_3pid_invite(
707706 if "signatures" not in data:
708707 raise AuthError(401, "No signatures on 3pid binding")
709708 yield self._verify_any_signature(data, id_server)
710 defer.returnValue(data["mxid"])
709 return data["mxid"]
711710
712711 except IOError as e:
713712 logger.warn("Error from identity server lookup: %s" % (e,))
714 defer.returnValue(None)
713 return None
715714
716715 @defer.inlineCallbacks
717716 def _verify_any_signature(self, data, server_hostname):
903902 if not public_keys:
904903 public_keys.append(fallback_public_key)
905904 display_name = data["display_name"]
906 defer.returnValue((token, public_keys, fallback_public_key, display_name))
905 return (token, public_keys, fallback_public_key, display_name)
907906
908907 @defer.inlineCallbacks
909908 def _is_host_in_room(self, current_state_ids):
912911 create_event_id = current_state_ids.get(("m.room.create", ""))
913912 if len(current_state_ids) == 1 and create_event_id:
914913 # We can only get here if we're in the process of creating the room
915 defer.returnValue(True)
914 return True
916915
917916 for etype, state_key in current_state_ids:
918917 if etype != EventTypes.Member or not self.hs.is_mine_id(state_key):
924923 continue
925924
926925 if event.membership == Membership.JOIN:
927 defer.returnValue(True)
928
929 defer.returnValue(False)
926 return True
927
928 return False
930929
931930 @defer.inlineCallbacks
932931 def _is_server_notice_room(self, room_id):
933932 if self._server_notices_mxid is None:
934 defer.returnValue(False)
933 return False
935934 user_ids = yield self.store.get_users_in_room(room_id)
936 defer.returnValue(self._server_notices_mxid in user_ids)
935 return self._server_notices_mxid in user_ids
937936
938937
939938 class RoomMemberMasterHandler(RoomMemberHandler):
945944 self.distributor.declare("user_left_room")
946945
947946 @defer.inlineCallbacks
947 def _is_remote_room_too_complex(self, room_id, remote_room_hosts):
948 """
949 Check if complexity of a remote room is too great.
950
951 Args:
952 room_id (str)
953 remote_room_hosts (list[str])
954
955 Returns: bool of whether the complexity is too great, or None
956 if unable to be fetched
957 """
958 max_complexity = self.hs.config.limit_remote_rooms.complexity
959 complexity = yield self.federation_handler.get_room_complexity(
960 remote_room_hosts, room_id
961 )
962
963 if complexity:
964 if complexity["v1"] > max_complexity:
965 return True
966 return False
967 return None
968
969 @defer.inlineCallbacks
970 def _is_local_room_too_complex(self, room_id):
971 """
972 Check if the complexity of a local room is too great.
973
974 Args:
975 room_id (str)
976
977 Returns: bool
978 """
979 max_complexity = self.hs.config.limit_remote_rooms.complexity
980 complexity = yield self.store.get_room_complexity(room_id)
981
982 if complexity["v1"] > max_complexity:
983 return True
984
985 return False
986
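The new `_is_remote_room_too_complex` helper deliberately has three outcomes, as its docstring notes: too complex, under the limit, or unknown because the remote complexity could not be fetched. A hypothetical free-function restatement of that tri-state contract:

```python
def is_too_complex(complexity, max_complexity):
    # Mirrors the contract above: True when over the limit, False when
    # under it, None when the complexity could not be fetched.
    # (Hypothetical restatement; the real method queries federation.)
    if complexity:  # e.g. {"v1": 300.0}, or None/empty if unavailable
        return complexity["v1"] > max_complexity
    return None

print(is_too_complex({"v1": 300.0}, 100.0))  # -> True
```

The `None` case matters: the caller must not treat "couldn't check" as "under the limit", which is why `_remote_join` below compares with `is True` / `is False` rather than truthiness.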
987 @defer.inlineCallbacks
948988 def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
949989 """Implements RoomMemberHandler._remote_join
950990 """
951991 # filter ourselves out of remote_room_hosts: do_invite_join ignores it
952992 # and if it is the only entry we'd like to return a 404 rather than a
953993 # 500.
954
955994 remote_room_hosts = [
956995 host for host in remote_room_hosts if host != self.hs.hostname
957996 ]
958997
959998 if len(remote_room_hosts) == 0:
960999 raise SynapseError(404, "No known servers")
1000
1001 if self.hs.config.limit_remote_rooms.enabled:
1002 # Fetch the room complexity
1003 too_complex = yield self._is_remote_room_too_complex(
1004 room_id, remote_room_hosts
1005 )
1006 if too_complex is True:
1007 raise SynapseError(
1008 code=400,
1009 msg=self.hs.config.limit_remote_rooms.complexity_error,
1010 errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
1011 )
9611012
9621013 # We don't do an auth check if we are doing an invite
9631014 # join dance for now, since we're kinda implicitly checking
9681019 )
9691020 yield self._user_joined_room(user, room_id)
9701021
1022 # Check the room we just joined wasn't too large, if we didn't fetch the
1023 # complexity of it before.
1024 if self.hs.config.limit_remote_rooms.enabled:
1025 if too_complex is False:
1026 # We checked, and we're under the limit.
1027 return
1028
1029 # Check again, but with the local state events
1030 too_complex = yield self._is_local_room_too_complex(room_id)
1031
1032 if too_complex is False:
1033 # We're under the limit.
1034 return
1035
1036 # The room is too large. Leave.
1037 requester = types.create_requester(user, None, False, None)
1038 yield self.update_membership(
1039 requester=requester, target=user, room_id=room_id, action="leave"
1040 )
1041 raise SynapseError(
1042 code=400,
1043 msg=self.hs.config.limit_remote_rooms.complexity_error,
1044 errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
1045 )
1046
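The control flow added to `_remote_join` is: refuse up front if the remote check says the room is too complex; after joining, skip the re-check only if the remote check positively said "under the limit"; otherwise re-check with local state and leave again if the room turns out to be too large. A hypothetical sketch of that guard, with the checks and membership operations passed in as callables:

```python
def guarded_join(remote_check, local_check, join, leave):
    # remote_check() returns True / False / None, matching the tri-state
    # complexity check; join() and leave() stand in for the membership
    # updates. (Hypothetical skeleton of the flow in _remote_join.)
    too_complex = remote_check()
    if too_complex is True:
        raise RuntimeError("room too complex")
    join()
    if too_complex is False:
        return "joined"  # remote check already confirmed we're under the limit
    if local_check():    # None case: re-check using the local state events
        leave()
        raise RuntimeError("room too complex")
    return "joined"

print(guarded_join(lambda: False, lambda: True, lambda: None, lambda: None))  # -> joined
```

Note the asymmetry: only a definite `False` from the remote check skips the post-join re-check; an unknown (`None`) result always triggers the local check once the state is available.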
9711047 @defer.inlineCallbacks
9721048 def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
9731049 """Implements RoomMemberHandler._remote_reject_invite
9771053 ret = yield fed_handler.do_remotely_reject_invite(
9781054 remote_room_hosts, room_id, target.to_string()
9791055 )
980 defer.returnValue(ret)
1056 return ret
9811057 except Exception as e:
9821058 # if we were unable to reject the exception, just mark
9831059 # it as rejected on our end and plough ahead.
9881064 logger.warn("Failed to reject invite: %s", e)
9891065
9901066 yield self.store.locally_reject_invite(target.to_string(), room_id)
991 defer.returnValue({})
1067 return {}
9921068
9931069 def _user_joined_room(self, target, room_id):
9941070 """Implements RoomMemberHandler._user_joined_room
5252
5353 yield self._user_joined_room(user, room_id)
5454
55 defer.returnValue(ret)
55 return ret
5656
5757 def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
5858 """Implements RoomMemberHandler._remote_reject_invite
6868 # Scan through the old room for further predecessors
6969 room_id = predecessor["room_id"]
7070
71 defer.returnValue(historical_room_ids)
71 return historical_room_ids
7272
7373 @defer.inlineCallbacks
7474 def search(self, user, content, batch=None):
185185 room_ids.intersection_update({batch_group_key})
186186
187187 if not room_ids:
188 defer.returnValue(
189 {
190 "search_categories": {
191 "room_events": {"results": [], "count": 0, "highlights": []}
192 }
188 return {
189 "search_categories": {
190 "room_events": {"results": [], "count": 0, "highlights": []}
193191 }
194 )
192 }
195193
196194 rank_map = {} # event_id -> rank of event
197195 allowed_events = []
454452 if global_next_batch:
455453 rooms_cat_res["next_batch"] = global_next_batch
456454
457 defer.returnValue({"search_categories": {"room_events": rooms_cat_res}})
455 return {"search_categories": {"room_events": rooms_cat_res}}
4747
4848 if not event and not prev_event:
4949 logger.debug("Neither event exists: %r %r", prev_event_id, event_id)
50 defer.returnValue(None)
50 return None
5151
5252 prev_value = None
5353 value = None
6161 logger.debug("prev_value: %r -> value: %r", prev_value, value)
6262
6363 if value == public_value and prev_value != public_value:
64 defer.returnValue(True)
64 return True
6565 elif value != public_value and prev_value == public_value:
66 defer.returnValue(False)
66 return False
6767 else:
68 defer.returnValue(None)
68 return None
8585
8686 # If still None then the initial background update hasn't happened yet
8787 if self.pos is None:
88 defer.returnValue(None)
88 return None
8989
9090 # Loop round handling deltas until we're up to date
9191 while True:
327327 == "world_readable"
328328 )
329329 ):
330 defer.returnValue(True)
330 return True
331331 else:
332 defer.returnValue(False)
332 return False
262262 timeout,
263263 full_state,
264264 )
265 defer.returnValue(res)
265 return res
266266
267267 @defer.inlineCallbacks
268268 def _wait_for_sync_for_user(self, sync_config, since_token, timeout, full_state):
302302 lazy_loaded = "false"
303303 non_empty_sync_counter.labels(sync_type, lazy_loaded).inc()
304304
305 defer.returnValue(result)
305 return result
306306
307307 def current_sync_for_user(self, sync_config, since_token=None, full_state=False):
308308 """Get the sync for client needed to match what the server has now.
316316 user_id = user.to_string()
317317 rules = yield self.store.get_push_rules_for_user(user_id)
318318 rules = format_push_rules_for_user(user, rules)
319 defer.returnValue(rules)
319 return rules
320320
321321 @defer.inlineCallbacks
322322 def ephemeral_by_room(self, sync_result_builder, now_token, since_token=None):
377377 event_copy = {k: v for (k, v) in iteritems(event) if k != "room_id"}
378378 ephemeral_by_room.setdefault(room_id, []).append(event_copy)
379379
380 defer.returnValue((now_token, ephemeral_by_room))
380 return (now_token, ephemeral_by_room)
381381
382382 @defer.inlineCallbacks
383383 def _load_filtered_recents(
425425 recents = []
426426
427427 if not limited or block_all_timeline:
428 defer.returnValue(
429 TimelineBatch(events=recents, prev_batch=now_token, limited=False)
428 return TimelineBatch(
429 events=recents, prev_batch=now_token, limited=False
430430 )
431431
432432 filtering_factor = 2
489489
490490 prev_batch_token = now_token.copy_and_replace("room_key", room_key)
491491
492 defer.returnValue(
493 TimelineBatch(
494 events=recents,
495 prev_batch=prev_batch_token,
496 limited=limited or newly_joined_room,
497 )
492 return TimelineBatch(
493 events=recents,
494 prev_batch=prev_batch_token,
495 limited=limited or newly_joined_room,
498496 )
499497
500498 @defer.inlineCallbacks
516514 if event.is_state():
517515 state_ids = state_ids.copy()
518516 state_ids[(event.type, event.state_key)] = event.event_id
519 defer.returnValue(state_ids)
517 return state_ids
520518
521519 @defer.inlineCallbacks
522520 def get_state_at(self, room_id, stream_position, state_filter=StateFilter.all()):
548546 else:
549547 # no events in this room - so presumably no state
550548 state = {}
551 defer.returnValue(state)
549 return state
552550
553551 @defer.inlineCallbacks
554552 def compute_summary(self, room_id, sync_config, batch, state, now_token):
578576 )
579577
580578 if not last_events:
581 defer.returnValue(None)
579 return None
582580 return
583581
584582 last_event = last_events[-1]
610608 if name_id:
611609 name = yield self.store.get_event(name_id, allow_none=True)
612610 if name and name.content.get("name"):
613 defer.returnValue(summary)
611 return summary
614612
615613 if canonical_alias_id:
616614 canonical_alias = yield self.store.get_event(
617615 canonical_alias_id, allow_none=True
618616 )
619617 if canonical_alias and canonical_alias.content.get("alias"):
620 defer.returnValue(summary)
618 return summary
621619
622620 me = sync_config.user.to_string()
623621
651649 summary["m.heroes"] = sorted([user_id for user_id in gone_user_ids])[0:5]
652650
653651 if not sync_config.filter_collection.lazy_load_members():
654 defer.returnValue(summary)
652 return summary
655653
656654 # ensure we send membership events for heroes if needed
657655 cache_key = (sync_config.user.to_string(), sync_config.device_id)
685683 cache.set(s.state_key, s.event_id)
686684 state[(EventTypes.Member, s.state_key)] = s
687685
688 defer.returnValue(summary)
686 return summary
689687
690688 def get_lazy_loaded_members_cache(self, cache_key):
691689 cache = self.lazy_loaded_members_cache.get(cache_key)
782780 lazy_load_members=lazy_load_members,
783781 )
784782 elif batch.limited:
785 state_at_timeline_start = yield self.store.get_state_ids_for_event(
786 batch.events[0].event_id, state_filter=state_filter
787 )
783 if batch:
784 state_at_timeline_start = yield self.store.get_state_ids_for_event(
785 batch.events[0].event_id, state_filter=state_filter
786 )
787 else:
788 # Its not clear how we get here, but empirically we do
789 # (#5407). Logging has been added elsewhere to try and
790 # figure out where this state comes from.
791 state_at_timeline_start = yield self.get_state_at(
792 room_id, stream_position=now_token, state_filter=state_filter
793 )
788794
789795 # for now, we disable LL for gappy syncs - see
790796 # https://github.com/vector-im/riot-web/issues/7211#issuecomment-419976346
804810 room_id, stream_position=since_token, state_filter=state_filter
805811 )
806812
807 current_state_ids = yield self.store.get_state_ids_for_event(
808 batch.events[-1].event_id, state_filter=state_filter
809 )
813 if batch:
814 current_state_ids = yield self.store.get_state_ids_for_event(
815 batch.events[-1].event_id, state_filter=state_filter
816 )
817 else:
818 # Its not clear how we get here, but empirically we do
819 # (#5407). Logging has been added elsewhere to try and
820 # figure out where this state comes from.
821 current_state_ids = yield self.get_state_at(
822 room_id, stream_position=now_token, state_filter=state_filter
823 )
810824
811825 state_ids = _calculate_state(
812826 timeline_contains=timeline_state,
870884 if state_ids:
871885 state = yield self.store.get_events(list(state_ids.values()))
872886
873 defer.returnValue(
874 {
875 (e.type, e.state_key): e
876 for e in sync_config.filter_collection.filter_room_state(
877 list(state.values())
878 )
879 }
880 )
887 return {
888 (e.type, e.state_key): e
889 for e in sync_config.filter_collection.filter_room_state(
890 list(state.values())
891 )
892 }
881893
882894 @defer.inlineCallbacks
883895 def unread_notifs_for_room_id(self, room_id, sync_config):
893905 notifs = yield self.store.get_unread_event_push_actions_by_room_for_user(
894906 room_id, sync_config.user.to_string(), last_unread_event_id
895907 )
896 defer.returnValue(notifs)
908 return notifs
897909
898910 # There is no new information in this period, so your notification
899911 # count is whatever it was last time.
900 defer.returnValue(None)
912 return None
901913
902914 @defer.inlineCallbacks
903915 def generate_sync_result(self, sync_config, since_token=None, full_state=False):
9881000 "Sync result for newly joined room %s: %r", room_id, joined_room
9891001 )
9901002
991 defer.returnValue(
992 SyncResult(
993 presence=sync_result_builder.presence,
994 account_data=sync_result_builder.account_data,
995 joined=sync_result_builder.joined,
996 invited=sync_result_builder.invited,
997 archived=sync_result_builder.archived,
998 to_device=sync_result_builder.to_device,
999 device_lists=device_lists,
1000 groups=sync_result_builder.groups,
1001 device_one_time_keys_count=one_time_key_counts,
1002 next_batch=sync_result_builder.now_token,
1003 )
1003 return SyncResult(
1004 presence=sync_result_builder.presence,
1005 account_data=sync_result_builder.account_data,
1006 joined=sync_result_builder.joined,
1007 invited=sync_result_builder.invited,
1008 archived=sync_result_builder.archived,
1009 to_device=sync_result_builder.to_device,
1010 device_lists=device_lists,
1011 groups=sync_result_builder.groups,
1012 device_one_time_keys_count=one_time_key_counts,
1013 next_batch=sync_result_builder.now_token,
10041014 )
10051015
10061016 @measure_func("_generate_sync_entry_for_groups")
11231133 # Remove any users that we still share a room with.
11241134 newly_left_users -= users_who_share_room
11251135
1126 defer.returnValue(
1127 DeviceLists(changed=users_that_have_changed, left=newly_left_users)
1128 )
1136 return DeviceLists(changed=users_that_have_changed, left=newly_left_users)
11291137 else:
1130 defer.returnValue(DeviceLists(changed=[], left=[]))
1138 return DeviceLists(changed=[], left=[])
11311139
11321140 @defer.inlineCallbacks
11331141 def _generate_sync_entry_for_to_device(self, sync_result_builder):
12241232
12251233 sync_result_builder.account_data = account_data_for_user
12261234
1227 defer.returnValue(account_data_by_room)
1235 return account_data_by_room
12281236
12291237 @defer.inlineCallbacks
12301238 def _generate_sync_entry_for_presence(
13241332 )
13251333 if not tags_by_room:
13261334 logger.debug("no-oping sync")
1327 defer.returnValue(([], [], [], []))
1335 return ([], [], [], [])
13281336
13291337 ignored_account_data = yield self.store.get_global_account_data_by_type_for_user(
13301338 "m.ignored_user_list", user_id=user_id
13871395
13881396 newly_left_users -= newly_joined_or_invited_users
13891397
1390 defer.returnValue(
1391 (
1392 newly_joined_rooms,
1393 newly_joined_or_invited_users,
1394 newly_left_rooms,
1395 newly_left_users,
1396 )
1398 return (
1399 newly_joined_rooms,
1400 newly_joined_or_invited_users,
1401 newly_left_rooms,
1402 newly_left_users,
13971403 )
13981404
13991405 @defer.inlineCallbacks
14131419 )
14141420
14151421 if rooms_changed:
1416 defer.returnValue(True)
1422 return True
14171423
14181424 stream_id = RoomStreamToken.parse_stream_token(since_token.room_key).stream
14191425 for room_id in sync_result_builder.joined_room_ids:
14201426 if self.store.has_room_changed_since(room_id, stream_id):
1421 defer.returnValue(True)
1422 defer.returnValue(False)
1427 return True
1428 return False
14231429
14241430 @defer.inlineCallbacks
14251431 def _get_rooms_changed(self, sync_result_builder, ignored_users):
16361642 )
16371643 room_entries.append(entry)
16381644
1639 defer.returnValue((room_entries, invited, newly_joined_rooms, newly_left_rooms))
1645 return (room_entries, invited, newly_joined_rooms, newly_left_rooms)
16401646
16411647 @defer.inlineCallbacks
16421648 def _get_all_rooms(self, sync_result_builder, ignored_users):
17101716 )
17111717 )
17121718
1713 defer.returnValue((room_entries, invited, []))
1719 return (room_entries, invited, [])
17141720
17151721 @defer.inlineCallbacks
17161722 def _generate_room_entry(
17631769 recents=events,
17641770 newly_joined_room=newly_joined,
17651771 )
1772
1773 if not batch and batch.limited:
1774 # This resulted in #5407, which is weird, so lets log! We do it
1775 # here as we have the maximum amount of information.
1776 user_id = sync_result_builder.sync_config.user.to_string()
1777 logger.info(
1778 "Issue #5407: Found limited batch with no events. user %s, room %s,"
1779 " sync_config %s, newly_joined %s, events %s, batch %s.",
1780 user_id,
1781 room_id,
1782 sync_config,
1783 newly_joined,
1784 events,
1785 batch,
1786 )
17661787
17671788 if newly_joined:
17681789 # debug for https://github.com/matrix-org/synapse/issues/4422
19111932 joined_room_ids.add(room_id)
19121933
19131934 joined_room_ids = frozenset(joined_room_ids)
1914 defer.returnValue(joined_room_ids)
1935 return joined_room_ids
19151936
19161937
19171938 def _action_has_highlight(actions):
8282 self._room_typing = {}
8383
8484 def _handle_timeouts(self):
85 logger.info("Checking for typing timeouts")
85 logger.debug("Checking for typing timeouts")
8686
8787 now = self.clock.time_msec()
8888
139139
140140 if was_present:
141141 # No point sending another notification
142 defer.returnValue(None)
142 return None
143143
144144 self._push_update(member=member, typing=True)
145145
172172 def _stopped_typing(self, member):
173173 if member.user_id not in self._room_typing.get(member.room_id, set()):
174174 # No point
175 defer.returnValue(None)
175 return None
176176
177177 self._member_typing_until.pop(member, None)
178178 self._member_last_federation_poke.pop(member, None)
132132
133133 # If still None then the initial background update hasn't happened yet
134134 if self.pos is None:
135 defer.returnValue(None)
135 return None
136136
137137 # Loop round handling deltas until we're up to date
138138 while True:
293293 logger.info(
294294 "Received response to %s %s: %s", method, redact_uri(uri), response.code
295295 )
296 defer.returnValue(response)
296 return response
297297 except Exception as e:
298298 incoming_responses_counter.labels(method, "ERR").inc()
299299 logger.info(
344344 body = yield make_deferred_yieldable(readBody(response))
345345
346346 if 200 <= response.code < 300:
347 defer.returnValue(json.loads(body))
347 return json.loads(body)
348348 else:
349349 raise HttpResponseException(response.code, response.phrase, body)
350350
384384 body = yield make_deferred_yieldable(readBody(response))
385385
386386 if 200 <= response.code < 300:
387 defer.returnValue(json.loads(body))
387 return json.loads(body)
388388 else:
389389 raise HttpResponseException(response.code, response.phrase, body)
390390
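These HTTP-client hunks repeat one pattern: read the body, decode it as JSON on a 2xx status, and otherwise raise an exception carrying the status. A hypothetical self-contained restatement (the real code raises `HttpResponseException` rather than `RuntimeError`):

```python
import json

def handle_response(code, phrase, body):
    # Parse JSON on success, surface the status on failure -- the shape
    # repeated in get_json/post_json/put_json above. (Hypothetical sketch.)
    if 200 <= code < 300:
        return json.loads(body)
    raise RuntimeError("%d %s" % (code, phrase))

print(handle_response(200, "OK", '{"ok": true}'))  # -> {'ok': True}
```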
409409 ValueError: if the response was not JSON
410410 """
411411 body = yield self.get_raw(uri, args, headers=headers)
412 defer.returnValue(json.loads(body))
412 return json.loads(body)
413413
414414 @defer.inlineCallbacks
415415 def put_json(self, uri, json_body, args={}, headers=None):
452452 body = yield make_deferred_yieldable(readBody(response))
453453
454454 if 200 <= response.code < 300:
455 defer.returnValue(json.loads(body))
455 return json.loads(body)
456456 else:
457457 raise HttpResponseException(response.code, response.phrase, body)
458458
487487 body = yield make_deferred_yieldable(readBody(response))
488488
489489 if 200 <= response.code < 300:
490 defer.returnValue(body)
490 return body
491491 else:
492492 raise HttpResponseException(response.code, response.phrase, body)
493493
544544 except Exception as e:
545545 raise_from(SynapseError(502, ("Failed to download remote body: %s" % e)), e)
546546
547 defer.returnValue(
548 (
549 length,
550 resp_headers,
551 response.request.absoluteURI.decode("ascii"),
552 response.code,
553 )
547 return (
548 length,
549 resp_headers,
550 response.request.absoluteURI.decode("ascii"),
551 response.code,
554552 )
555553
556554
626624
627625 try:
628626 body = yield make_deferred_yieldable(readBody(response))
629 defer.returnValue(body)
627 return body
630628 except PartialDownloadError as e:
631629 # twisted dislikes google's response, no content length.
632 defer.returnValue(e.response)
630 return e.response
633631
634632
635633 def encode_urlencode_args(args):
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 import json
14
1515 import logging
16 import random
17 import time
1816
1917 import attr
2018 from netaddr import IPAddress
2321 from twisted.internet import defer
2422 from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
2523 from twisted.internet.interfaces import IStreamClientEndpoint
26 from twisted.web.client import URI, Agent, HTTPConnectionPool, RedirectAgent, readBody
27 from twisted.web.http import stringToDatetime
24 from twisted.web.client import URI, Agent, HTTPConnectionPool
2825 from twisted.web.http_headers import Headers
2926 from twisted.web.iweb import IAgent
3027
3128 from synapse.http.federation.srv_resolver import SrvResolver, pick_server_from_list
29 from synapse.http.federation.well_known_resolver import WellKnownResolver
3230 from synapse.logging.context import make_deferred_yieldable
3331 from synapse.util import Clock
34 from synapse.util.caches.ttlcache import TTLCache
35 from synapse.util.metrics import Measure
36
37 # period to cache .well-known results for by default
38 WELL_KNOWN_DEFAULT_CACHE_PERIOD = 24 * 3600
39
40 # jitter to add to the .well-known default cache ttl
41 WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER = 10 * 60
42
43 # period to cache failure to fetch .well-known for
44 WELL_KNOWN_INVALID_CACHE_PERIOD = 1 * 3600
45
46 # cap for .well-known cache period
47 WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600
4832
4933 logger = logging.getLogger(__name__)
50 well_known_cache = TTLCache("well-known")
5134
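The constants removed here encoded the `.well-known` caching policy: a 24-hour default period with up to ten minutes of jitter, and a 48-hour cap on server-supplied periods (the logic now lives in the extracted `WellKnownResolver`). A hypothetical sketch of how such a policy might combine them:

```python
import random

# Values copied from the constants removed in this hunk; the combining
# function itself is a hypothetical illustration, not the resolver's code.
WELL_KNOWN_DEFAULT_CACHE_PERIOD = 24 * 3600
WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER = 10 * 60
WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600

def cache_period(server_supplied=None):
    # No cache hint from the server: use the jittered default, so a fleet
    # of servers does not re-fetch .well-known in lockstep.
    if server_supplied is None:
        return WELL_KNOWN_DEFAULT_CACHE_PERIOD + random.uniform(
            0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER
        )
    # Otherwise honour the hint, but cap it so a bad value cannot pin a
    # stale delegation for days.
    return min(server_supplied, WELL_KNOWN_MAX_CACHE_PERIOD)
```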
5235
5336 @implementer(IAgent)
6346 tls_client_options_factory (ClientTLSOptionsFactory|None):
6447 factory to use for fetching client tls options, or none to disable TLS.
6548
66 _well_known_tls_policy (IPolicyForHTTPS|None):
67 TLS policy to use for fetching .well-known files. None to use a default
68 (browser-like) implementation.
69
7049 _srv_resolver (SrvResolver|None):
7150 SRVResolver impl to use for looking up SRV records. None to use a default
7251 implementation.
8059 self,
8160 reactor,
8261 tls_client_options_factory,
83 _well_known_tls_policy=None,
8462 _srv_resolver=None,
85 _well_known_cache=well_known_cache,
63 _well_known_cache=None,
8664 ):
8765 self._reactor = reactor
8866 self._clock = Clock(reactor)
9775 self._pool.maxPersistentPerHost = 5
9876 self._pool.cachedConnectionTimeout = 2 * 60
9977
100 agent_args = {}
101 if _well_known_tls_policy is not None:
102 # the param is called 'contextFactory', but actually passing a
103 # contextfactory is deprecated, and it expects an IPolicyForHTTPS.
104 agent_args["contextFactory"] = _well_known_tls_policy
105 _well_known_agent = RedirectAgent(
106 Agent(self._reactor, pool=self._pool, **agent_args)
78 self._well_known_resolver = WellKnownResolver(
79 self._reactor,
80 agent=Agent(
81 self._reactor,
82 pool=self._pool,
83 contextFactory=tls_client_options_factory,
84 ),
85 well_known_cache=_well_known_cache,
10786 )
108 self._well_known_agent = _well_known_agent
109
110 # our cache of .well-known lookup results, mapping from server name
111 # to delegated name. The values can be:
112 # `bytes`: a valid server-name
113 # `None`: there is no (valid) .well-known here
114 self._well_known_cache = _well_known_cache
11587
11688 @defer.inlineCallbacks
11789 def request(self, method, uri, headers=None, bodyProducer=None):
176148 res = yield make_deferred_yieldable(
177149 agent.request(method, uri, headers, bodyProducer)
178150 )
179 defer.returnValue(res)
151 return res
180152
181153 @defer.inlineCallbacks
182154 def _route_matrix_uri(self, parsed_uri, lookup_well_known=True):
204176 port = parsed_uri.port
205177 if port == -1:
206178 port = 8448
207 defer.returnValue(
208 _RoutingResult(
209 host_header=parsed_uri.netloc,
210 tls_server_name=parsed_uri.host,
211 target_host=parsed_uri.host,
212 target_port=port,
213 )
179 return _RoutingResult(
180 host_header=parsed_uri.netloc,
181 tls_server_name=parsed_uri.host,
182 target_host=parsed_uri.host,
183 target_port=port,
214184 )
215185
216186 if parsed_uri.port != -1:
217187 # there is an explicit port
218 defer.returnValue(
219 _RoutingResult(
220 host_header=parsed_uri.netloc,
221 tls_server_name=parsed_uri.host,
222 target_host=parsed_uri.host,
223 target_port=parsed_uri.port,
224 )
188 return _RoutingResult(
189 host_header=parsed_uri.netloc,
190 tls_server_name=parsed_uri.host,
191 target_host=parsed_uri.host,
192 target_port=parsed_uri.port,
225193 )
226194
227195 if lookup_well_known:
228196 # try a .well-known lookup
229 well_known_server = yield self._get_well_known(parsed_uri.host)
197 well_known_result = yield self._well_known_resolver.get_well_known(
198 parsed_uri.host
199 )
200 well_known_server = well_known_result.delegated_server
230201
231202 if well_known_server:
232203 # if we found a .well-known, start again, but don't do another
258229 )
259230
260231 res = yield self._route_matrix_uri(new_uri, lookup_well_known=False)
261 defer.returnValue(res)
232 return res
262233
263234 # try a SRV lookup
264235 service_name = b"_matrix._tcp.%s" % (parsed_uri.host,)
282253 parsed_uri.host.decode("ascii"),
283254 )
284255
285 defer.returnValue(
286 _RoutingResult(
287 host_header=parsed_uri.netloc,
288 tls_server_name=parsed_uri.host,
289 target_host=target_host,
290 target_port=port,
291 )
256 return _RoutingResult(
257 host_header=parsed_uri.netloc,
258 tls_server_name=parsed_uri.host,
259 target_host=target_host,
260 target_port=port,
292261 )
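The routing order implemented by `_route_matrix_uri` above — use an explicit port if one was given, otherwise try `.well-known` delegation, then an SRV record, and finally fall back to port 8448 — can be sketched in plain Python. The `lookup_well_known` and `lookup_srv` callables and the host names below are illustrative stand-ins, not Synapse's API:

```python
# Sketch of the server-name resolution order in _route_matrix_uri.
# The two lookup callables are hypothetical stand-ins for .well-known
# and SRV resolution; only the ordering logic mirrors the code above.

def route(host, port, lookup_well_known, lookup_srv):
    """Return (target_host, target_port) for a Matrix server name."""
    if port != -1:
        # An explicit port short-circuits all further discovery.
        return (host, port)

    delegated = lookup_well_known(host)
    if delegated:
        # Start again with the delegated name, but don't do another
        # .well-known lookup, to avoid delegation loops.
        return route(delegated, -1, lambda h: None, lookup_srv)

    srv = lookup_srv(host)
    if srv:
        return srv  # (host, port) taken from the SRV record

    return (host, 8448)  # default federation port


print(route("matrix.org", -1, lambda h: None, lambda h: None))
# → ('matrix.org', 8448)
```

The recursion on delegation mirrors the `lookup_well_known=False` re-entry in the real code: a delegated name may still have an SRV record, but never another `.well-known`.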
293
294 @defer.inlineCallbacks
295 def _get_well_known(self, server_name):
296 """Attempt to fetch and parse a .well-known file for the given server
297
298 Args:
299 server_name (bytes): name of the server, from the requested url
300
301 Returns:
302 Deferred[bytes|None]: either the new server name, from the .well-known, or
303 None if there was no .well-known file.
304 """
305 try:
306 result = self._well_known_cache[server_name]
307 except KeyError:
308 # TODO: should we linearise so that we don't end up doing two .well-known
309 # requests for the same server in parallel?
310 with Measure(self._clock, "get_well_known"):
311 result, cache_period = yield self._do_get_well_known(server_name)
312
313 if cache_period > 0:
314 self._well_known_cache.set(server_name, result, cache_period)
315
316 defer.returnValue(result)
317
318 @defer.inlineCallbacks
319 def _do_get_well_known(self, server_name):
320 """Actually fetch and parse a .well-known, without checking the cache
321
322 Args:
323 server_name (bytes): name of the server, from the requested url
324
325 Returns:
326 Deferred[Tuple[bytes|None|object, int]]:
327 result, cache period, where result is one of:
328 - the new server name from the .well-known (as a `bytes`)
329 - None if there was no .well-known file.
330 - INVALID_WELL_KNOWN if the .well-known was invalid
331 """
332 uri = b"https://%s/.well-known/matrix/server" % (server_name,)
333 uri_str = uri.decode("ascii")
334 logger.info("Fetching %s", uri_str)
335 try:
336 response = yield make_deferred_yieldable(
337 self._well_known_agent.request(b"GET", uri)
338 )
339 body = yield make_deferred_yieldable(readBody(response))
340 if response.code != 200:
341 raise Exception("Non-200 response %s" % (response.code,))
342
343 parsed_body = json.loads(body.decode("utf-8"))
344 logger.info("Response from .well-known: %s", parsed_body)
345 if not isinstance(parsed_body, dict):
346 raise Exception("not a dict")
347 if "m.server" not in parsed_body:
348 raise Exception("Missing key 'm.server'")
349 except Exception as e:
350 logger.info("Error fetching %s: %s", uri_str, e)
351
352 # add some randomness to the TTL to avoid a stampeding herd every hour
353 # after startup
354 cache_period = WELL_KNOWN_INVALID_CACHE_PERIOD
355 cache_period += random.uniform(0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER)
356 defer.returnValue((None, cache_period))
357
358 result = parsed_body["m.server"].encode("ascii")
359
360 cache_period = _cache_period_from_headers(
361 response.headers, time_now=self._reactor.seconds
362 )
363 if cache_period is None:
364 cache_period = WELL_KNOWN_DEFAULT_CACHE_PERIOD
365 # add some randomness to the TTL to avoid a stampeding herd every 24 hours
366 # after startup
367 cache_period += random.uniform(0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER)
368 else:
369 cache_period = min(cache_period, WELL_KNOWN_MAX_CACHE_PERIOD)
370
371 defer.returnValue((result, cache_period))
372262
373263
374264 @implementer(IStreamClientEndpoint)
385275 return self.ep.connect(protocol_factory)
386276
387277
388 def _cache_period_from_headers(headers, time_now=time.time):
389 cache_controls = _parse_cache_control(headers)
390
391 if b"no-store" in cache_controls:
392 return 0
393
394 if b"max-age" in cache_controls:
395 try:
396 max_age = int(cache_controls[b"max-age"])
397 return max_age
398 except ValueError:
399 pass
400
401 expires = headers.getRawHeaders(b"expires")
402 if expires is not None:
403 try:
404 expires_date = stringToDatetime(expires[-1])
405 return expires_date - time_now()
406 except ValueError:
407 # RFC7234 says 'A cache recipient MUST interpret invalid date formats,
408 # especially the value "0", as representing a time in the past (i.e.,
409 # "already expired").'
410 return 0
411
412 return None
413
414
415 def _parse_cache_control(headers):
416 cache_controls = {}
417 for hdr in headers.getRawHeaders(b"cache-control", []):
418 for directive in hdr.split(b","):
419 splits = [x.strip() for x in directive.split(b"=", 1)]
420 k = splits[0].lower()
421 v = splits[1] if len(splits) > 1 else None
422 cache_controls[k] = v
423 return cache_controls
424
425
426278 @attr.s
427279 class _RoutingResult(object):
428280 """The result returned by `_route_matrix_uri`.
119119 if cache_entry:
120120 if all(s.expires > now for s in cache_entry):
121121 servers = list(cache_entry)
122 defer.returnValue(servers)
122 return servers
123123
124124 try:
125125 answers, _, _ = yield make_deferred_yieldable(
128128 except DNSNameError:
129129 # TODO: cache this. We can get the SOA out of the exception, and use
130130 # the negative-TTL value.
131 defer.returnValue([])
131 return []
132132 except DomainError as e:
133133 # We failed to resolve the name (other than a NameError)
134134 # Try something in the cache, else reraise
137137 logger.warn(
138138 "Failed to resolve %r, falling back to cache. %r", service_name, e
139139 )
140 defer.returnValue(list(cache_entry))
140 return list(cache_entry)
141141 else:
142142 raise e
143143
168168 )
169169
170170 self._cache[service_name] = list(servers)
171 defer.returnValue(servers)
171 return servers
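The SRV-resolution hunks above show a cache-fallback pattern: serve cached entries while all of them are fresh, return an empty list when the name definitely does not exist, and fall back to a stale cache entry on any other DNS failure. Stripped of Twisted, that pattern looks roughly like this (the exception classes and `resolver` callable are stand-ins, not Synapse's):

```python
import time

class NxDomain(Exception):
    """Stand-in for DNSNameError: the name definitely does not exist."""

class DnsFailure(Exception):
    """Stand-in for any other, possibly transient, DNS error."""

def resolve_with_cache(name, resolver, cache, now=time.time):
    """resolver(name) returns a list of (host, expires) tuples.

    Serve the cache while every entry is fresh; on NXDOMAIN return an
    empty list; on other failures fall back to a stale cache entry.
    """
    entry = cache.get(name)
    if entry and all(expires > now() for _, expires in entry):
        return list(entry)
    try:
        servers = resolver(name)
    except NxDomain:
        return []
    except DnsFailure:
        if entry:
            return list(entry)  # stale is better than nothing
        raise
    cache[name] = list(servers)
    return servers
```

As the TODO in the hunk notes, the real code does not yet cache the NXDOMAIN case, even though the SOA record would supply a negative TTL for it.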
0 # -*- coding: utf-8 -*-
1 # Copyright 2019 The Matrix.org Foundation C.I.C.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import logging
17 import random
18 import time
19
20 import attr
21
22 from twisted.internet import defer
23 from twisted.web.client import RedirectAgent, readBody
24 from twisted.web.http import stringToDatetime
25
26 from synapse.logging.context import make_deferred_yieldable
27 from synapse.util import Clock
28 from synapse.util.caches.ttlcache import TTLCache
29 from synapse.util.metrics import Measure
30
31 # period to cache .well-known results for by default
32 WELL_KNOWN_DEFAULT_CACHE_PERIOD = 24 * 3600
33
34 # jitter to add to the .well-known default cache ttl
35 WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER = 10 * 60
36
37 # period to cache failure to fetch .well-known for
38 WELL_KNOWN_INVALID_CACHE_PERIOD = 1 * 3600
39
40 # cap for .well-known cache period
41 WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600
42
43 # lower bound for .well-known cache period
44 WELL_KNOWN_MIN_CACHE_PERIOD = 5 * 60
45
46 logger = logging.getLogger(__name__)
47
48
49 _well_known_cache = TTLCache("well-known")
50
51
52 @attr.s(slots=True, frozen=True)
53 class WellKnownLookupResult(object):
54 delegated_server = attr.ib()
55
56
57 class WellKnownResolver(object):
58 """Handles well-known lookups for matrix servers.
59 """
60
61 def __init__(self, reactor, agent, well_known_cache=None):
62 self._reactor = reactor
63 self._clock = Clock(reactor)
64
65 if well_known_cache is None:
66 well_known_cache = _well_known_cache
67
68 self._well_known_cache = well_known_cache
69 self._well_known_agent = RedirectAgent(agent)
70
71 @defer.inlineCallbacks
72 def get_well_known(self, server_name):
73 """Attempt to fetch and parse a .well-known file for the given server
74
75 Args:
76 server_name (bytes): name of the server, from the requested url
77
78 Returns:
79 Deferred[WellKnownLookupResult]: The result of the lookup
80 """
81 try:
82 result = self._well_known_cache[server_name]
83 except KeyError:
84 # TODO: should we linearise so that we don't end up doing two .well-known
85 # requests for the same server in parallel?
86 with Measure(self._clock, "get_well_known"):
87 result, cache_period = yield self._do_get_well_known(server_name)
88
89 if cache_period > 0:
90 self._well_known_cache.set(server_name, result, cache_period)
91
92 return WellKnownLookupResult(delegated_server=result)
93
94 @defer.inlineCallbacks
95 def _do_get_well_known(self, server_name):
96 """Actually fetch and parse a .well-known, without checking the cache
97
98 Args:
99 server_name (bytes): name of the server, from the requested url
100
101 Returns:
102 Deferred[Tuple[bytes|None|object, int]]:
103 result, cache period, where result is one of:
104 - the new server name from the .well-known (as a `bytes`)
105 - None if there was no .well-known file.
106 - INVALID_WELL_KNOWN if the .well-known was invalid
107 """
108 uri = b"https://%s/.well-known/matrix/server" % (server_name,)
109 uri_str = uri.decode("ascii")
110 logger.info("Fetching %s", uri_str)
111 try:
112 response = yield make_deferred_yieldable(
113 self._well_known_agent.request(b"GET", uri)
114 )
115 body = yield make_deferred_yieldable(readBody(response))
116 if response.code != 200:
117 raise Exception("Non-200 response %s" % (response.code,))
118
119 parsed_body = json.loads(body.decode("utf-8"))
120 logger.info("Response from .well-known: %s", parsed_body)
121 if not isinstance(parsed_body, dict):
122 raise Exception("not a dict")
123 if "m.server" not in parsed_body:
124 raise Exception("Missing key 'm.server'")
125 except Exception as e:
126 logger.info("Error fetching %s: %s", uri_str, e)
127
128 # add some randomness to the TTL to avoid a stampeding herd every hour
129 # after startup
130 cache_period = WELL_KNOWN_INVALID_CACHE_PERIOD
131 cache_period += random.uniform(0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER)
132 return (None, cache_period)
133
134 result = parsed_body["m.server"].encode("ascii")
135
136 cache_period = _cache_period_from_headers(
137 response.headers, time_now=self._reactor.seconds
138 )
139 if cache_period is None:
140 cache_period = WELL_KNOWN_DEFAULT_CACHE_PERIOD
141 # add some randomness to the TTL to avoid a stampeding herd every 24 hours
142 # after startup
143 cache_period += random.uniform(0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER)
144 else:
145 cache_period = min(cache_period, WELL_KNOWN_MAX_CACHE_PERIOD)
146 cache_period = max(cache_period, WELL_KNOWN_MIN_CACHE_PERIOD)
147
148 return (result, cache_period)
149
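The cache-period policy in `_do_get_well_known` above — a jittered one-hour period on failure, a jittered 24-hour default when the response carries no caching headers, and clamping a header-derived period between the min and max bounds — can be sketched on its own. The constants are copied from this file; `cache_period_for` itself is an illustrative helper, not Synapse's API:

```python
import random

# Constants as defined in well_known_resolver.py above.
WELL_KNOWN_DEFAULT_CACHE_PERIOD = 24 * 3600
WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER = 10 * 60
WELL_KNOWN_INVALID_CACHE_PERIOD = 1 * 3600
WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600
WELL_KNOWN_MIN_CACHE_PERIOD = 5 * 60

def cache_period_for(header_period, ok=True):
    """Pick how long to cache a .well-known result, in seconds.

    header_period is the period derived from Cache-Control/Expires,
    or None if the response had no usable caching headers.
    """
    if not ok:
        # Cache failures briefly, with jitter to avoid a stampede.
        return WELL_KNOWN_INVALID_CACHE_PERIOD + random.uniform(
            0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER
        )
    if header_period is None:
        # No headers: use the jittered 24h default.
        return WELL_KNOWN_DEFAULT_CACHE_PERIOD + random.uniform(
            0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER
        )
    # Respect the server's headers, but keep the period within bounds.
    return max(
        min(header_period, WELL_KNOWN_MAX_CACHE_PERIOD),
        WELL_KNOWN_MIN_CACHE_PERIOD,
    )
```

Note that, as in the code above, the min/max clamp only applies to header-derived periods; the jittered defaults are used as-is.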
150
151 def _cache_period_from_headers(headers, time_now=time.time):
152 cache_controls = _parse_cache_control(headers)
153
154 if b"no-store" in cache_controls:
155 return 0
156
157 if b"max-age" in cache_controls:
158 try:
159 max_age = int(cache_controls[b"max-age"])
160 return max_age
161 except ValueError:
162 pass
163
164 expires = headers.getRawHeaders(b"expires")
165 if expires is not None:
166 try:
167 expires_date = stringToDatetime(expires[-1])
168 return expires_date - time_now()
169 except ValueError:
170 # RFC7234 says 'A cache recipient MUST interpret invalid date formats,
171 # especially the value "0", as representing a time in the past (i.e.,
172 # "already expired").'
173 return 0
174
175 return None
176
177
178 def _parse_cache_control(headers):
179 cache_controls = {}
180 for hdr in headers.getRawHeaders(b"cache-control", []):
181 for directive in hdr.split(b","):
182 splits = [x.strip() for x in directive.split(b"=", 1)]
183 k = splits[0].lower()
184 v = splits[1] if len(splits) > 1 else None
185 cache_controls[k] = v
186 return cache_controls
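For reference, `_parse_cache_control` above lower-cases directive names and lets later directives overwrite earlier ones. The same parsing can be exercised over a plain list of raw header values instead of a Twisted `Headers` object:

```python
def parse_cache_control(raw_headers):
    """Same parsing as _parse_cache_control above, over a plain list
    of raw Cache-Control header values (bytes)."""
    cache_controls = {}
    for hdr in raw_headers:
        for directive in hdr.split(b","):
            splits = [x.strip() for x in directive.split(b"=", 1)]
            k = splits[0].lower()  # directive names are case-insensitive
            v = splits[1] if len(splits) > 1 else None
            cache_controls[k] = v
    return cache_controls

cc = parse_cache_control([b"max-age=3600, no-transform", b"NO-STORE"])
assert cc[b"max-age"] == b"3600"   # value kept as raw bytes
assert cc[b"no-transform"] is None  # valueless directives map to None
assert b"no-store" in cc            # names are lower-cased
```

`_cache_period_from_headers` then checks `no-store` first, then `max-age`, and only falls back to `Expires` when neither is usable.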
157157 response.code,
158158 response.phrase.decode("ascii", errors="replace"),
159159 )
160 defer.returnValue(body)
160 return body
161161
162162
163163 class MatrixFederationHttpClient(object):
255255
256256 response = yield self._send_request(request, **send_request_args)
257257
258 defer.returnValue(response)
258 return response
259259
260260 @defer.inlineCallbacks
261261 def _send_request(
519519 _flatten_response_never_received(e),
520520 )
521521 raise
522 defer.returnValue(response)
522 return response
523523
524524 def build_auth_headers(
525525 self, destination, method, url_bytes, content=None, destination_is=None
643643 self.reactor, self.default_timeout, request, response
644644 )
645645
646 defer.returnValue(body)
646 return body
647647
648648 @defer.inlineCallbacks
649649 def post_json(
712712 body = yield _handle_json_response(
713713 self.reactor, _sec_timeout, request, response
714714 )
715 defer.returnValue(body)
715 return body
716716
717717 @defer.inlineCallbacks
718718 def get_json(
777777 self.reactor, self.default_timeout, request, response
778778 )
779779
780 defer.returnValue(body)
780 return body
781781
782782 @defer.inlineCallbacks
783783 def delete_json(
835835 body = yield _handle_json_response(
836836 self.reactor, self.default_timeout, request, response
837837 )
838 defer.returnValue(body)
838 return body
839839
840840 @defer.inlineCallbacks
841841 def get_file(
901901 response.phrase.decode("ascii", errors="replace"),
902902 length,
903903 )
904 defer.returnValue((length, headers))
904 return (length, headers)
905905
906906
907907 class _ReadBodyToFileProtocol(protocol.Protocol):
165165 value = args[name][0]
166166
167167 if encoding:
168 value = value.decode(encoding)
168 try:
169 value = value.decode(encoding)
170 except ValueError:
171 raise SynapseError(
172 400, "Query parameter %r must be %s" % (name, encoding)
173 )
169174
170175 if allowed_values is not None and value not in allowed_values:
171176 message = "Query parameter %r must be one of [%s]" % (
1010 # distributed under the License is distributed on an "AS IS" BASIS,
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
13 # limitations under the License.import opentracing
13 # limitations under the License.
1414
1515
1616 # NOTE
8888 # We start
8989 yield we_wait
9090 # we finish
91 defer.returnValue(something_usual_and_useful)
91 return something_usual_and_useful
9292
9393 Operation names can be explicitly set for functions by using
9494 ``trace_using_operation_name`` and
112112 # We start
113113 yield we_wait
114114 # we finish
115 defer.returnValue(something_usual_and_useful)
115 return something_usual_and_useful
116116
117117 Contexts and carriers
118118 ---------------------
149149 """
150150
151151 import contextlib
152 import inspect
152153 import logging
153154 import re
154155 from functools import wraps
156
157 from canonicaljson import json
155158
156159 from twisted.internet import defer
157160
172175 logger = logging.getLogger(__name__)
173176
174177
175 class _DumTagNames(object):
178 # Block everything by default
179 # A regex which matches the server_names to expose traces for.
180 # None means 'block everything'.
181 _homeserver_whitelist = None
182
183 # Util methods
184
185
186 def only_if_tracing(func):
187 """Executes the function only if we're tracing. Otherwise return.
188 Assumes the function wrapped may return None"""
189
190 @wraps(func)
191 def _only_if_tracing_inner(*args, **kwargs):
192 if opentracing:
193 return func(*args, **kwargs)
194 else:
195 return
196
197 return _only_if_tracing_inner
198
199
200 @contextlib.contextmanager
201 def _noop_context_manager(*args, **kwargs):
202 """Does exactly what it says on the tin"""
203 yield
204
205
206 # Setup
207
208
209 def init_tracer(config):
210 """Set the whitelists and initialise the JaegerClient tracer
211
212 Args:
213 config (HomeserverConfig): The config used by the homeserver
214 """
215 global opentracing
216 if not config.opentracer_enabled:
217 # We don't have a tracer
218 opentracing = None
219 return
220
221 if not opentracing or not JaegerConfig:
222 raise ConfigError(
223 "The server has been configured to use opentracing but opentracing is not "
224 "installed."
225 )
226
227 # Include the worker name
228 name = config.worker_name if config.worker_name else "master"
229
230 # Pull out the jaeger config if it was given. Otherwise set it to something sensible.
231 # See https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/config.py
232
233 set_homeserver_whitelist(config.opentracer_whitelist)
234
235 JaegerConfig(
236 config=config.jaeger_config,
237 service_name="{} {}".format(config.server_name, name),
238 scope_manager=LogContextScopeManager(config),
239 ).initialize_tracer()
240
241 # Set up tags to be opentracing's tags
242 global tags
243 tags = opentracing.tags
244
245
246 # Whitelisting
247
248
249 @only_if_tracing
250 def set_homeserver_whitelist(homeserver_whitelist):
251 """Sets the homeserver whitelist
252
253 Args:
254 homeserver_whitelist (Iterable[str]): regex of whitelisted homeservers
255 """
256 global _homeserver_whitelist
257 if homeserver_whitelist:
258 # Makes a single regex which accepts all passed in regexes in the list
259 _homeserver_whitelist = re.compile(
260 "({})".format(")|(".join(homeserver_whitelist))
261 )
262
263
264 @only_if_tracing
265 def whitelisted_homeserver(destination):
266 """Checks if a destination matches the whitelist
267
268 Args:
269 destination (str)
270 """
272 if _homeserver_whitelist:
273 return _homeserver_whitelist.match(destination)
274 return False
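The whitelist handling in `set_homeserver_whitelist` above joins the configured patterns into a single alternation, which `whitelisted_homeserver` then matches destinations against. That combination behaves like this small standalone sketch (the sample patterns are made up):

```python
import re

def compile_whitelist(patterns):
    """Join several regexes into one alternation, as the homeserver
    whitelist above does. Returns None for an empty whitelist,
    which means 'block everything'."""
    if not patterns:
        return None
    return re.compile("({})".format(")|(".join(patterns)))

whitelist = compile_whitelist(["matrix\\.org", ".*\\.example\\.com"])
assert whitelist.match("matrix.org")
assert whitelist.match("synapse.example.com")
assert not whitelist.match("evil.org")
```

Note that `re.match` only anchors at the start of the string, so a pattern without a trailing `$` will also accept longer destinations sharing that prefix; the code above inherits the same behaviour.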
275
276
277 # Start spans and scopes
278
279 # Could use kwargs but I want these to be explicit
280 def start_active_span(
281 operation_name,
282 child_of=None,
283 references=None,
284 tags=None,
285 start_time=None,
286 ignore_active_span=False,
287 finish_on_close=True,
288 ):
289 """Starts an active opentracing span. Note, the scope doesn't become active
290 until it has been entered, however, the span starts from the time this
291 method is called.
292 Args:
293 See opentracing.tracer
294 Returns:
295 scope (Scope) or noop_context_manager
296 """
297
298 if opentracing is None:
299 return _noop_context_manager()
300
301 else:
302 # We need to enter the scope here for the logcontext to become active
303 return opentracing.tracer.start_active_span(
304 operation_name,
305 child_of=child_of,
306 references=references,
307 tags=tags,
308 start_time=start_time,
309 ignore_active_span=ignore_active_span,
310 finish_on_close=finish_on_close,
311 )
312
313
314 def start_active_span_follows_from(operation_name, contexts):
315 if opentracing is None:
316 return _noop_context_manager()
317 else:
318 references = [opentracing.follows_from(context) for context in contexts]
319 scope = start_active_span(operation_name, references=references)
320 return scope
321
322
323 def start_active_span_from_context(
324 headers,
325 operation_name,
326 references=None,
327 tags=None,
328 start_time=None,
329 ignore_active_span=False,
330 finish_on_close=True,
331 ):
332 """
333 Extracts a span context from Twisted Headers.
334 args:
335 headers (twisted.web.http_headers.Headers)
336
337 For the other args see opentracing.tracer
338
339 returns:
340 span_context (opentracing.span.SpanContext)
341 """
342 # Twisted encodes the values as lists whereas opentracing doesn't.
343 # So, we take the first item in the list.
344 # Also, twisted uses byte arrays while opentracing expects strings.
345
346 if opentracing is None:
347 return _noop_context_manager()
348
349 header_dict = {k.decode(): v[0].decode() for k, v in headers.getAllRawHeaders()}
350 context = opentracing.tracer.extract(opentracing.Format.HTTP_HEADERS, header_dict)
351
352 return opentracing.tracer.start_active_span(
353 operation_name,
354 child_of=context,
355 references=references,
356 tags=tags,
357 start_time=start_time,
358 ignore_active_span=ignore_active_span,
359 finish_on_close=finish_on_close,
360 )
361
362
363 def start_active_span_from_edu(
364 edu_content,
365 operation_name,
366 references=[],
367 tags=None,
368 start_time=None,
369 ignore_active_span=False,
370 finish_on_close=True,
371 ):
372 """
373 Extracts a span context from an edu and uses it to start a new active span
374
375 Args:
376 edu_content (dict): an edu_content with a `context` field whose value is
377 canonical json for a dict which contains opentracing information.
378
379 For the other args see opentracing.tracer
380 """
381
382 if opentracing is None:
383 return _noop_context_manager()
384
385 carrier = json.loads(edu_content.get("context", "{}")).get("opentracing", {})
386 context = opentracing.tracer.extract(opentracing.Format.TEXT_MAP, carrier)
387 _references = [
388 opentracing.child_of(span_context_from_string(x))
389 for x in carrier.get("references", [])
390 ]
391
392 # For some reason jaeger decided not to support the visualization of multiple parent
393 # spans or explicitly show references. I include the span context as a tag here as
394 # an aid to people debugging but it's really not an ideal solution.
395
396 references += _references
397
398 scope = opentracing.tracer.start_active_span(
399 operation_name,
400 child_of=context,
401 references=references,
402 tags=tags,
403 start_time=start_time,
404 ignore_active_span=ignore_active_span,
405 finish_on_close=finish_on_close,
406 )
407
408 scope.span.set_tag("references", carrier.get("references", []))
409 return scope
410
411
412 # Opentracing setters for tags, logs, etc
413
414
415 @only_if_tracing
416 def set_tag(key, value):
417 """Sets a tag on the active span"""
418 opentracing.tracer.active_span.set_tag(key, value)
419
420
421 @only_if_tracing
422 def log_kv(key_values, timestamp=None):
423 """Log to the active span"""
424 opentracing.tracer.active_span.log_kv(key_values, timestamp)
425
426
427 @only_if_tracing
428 def set_operation_name(operation_name):
429 """Sets the operation name of the active span"""
430 opentracing.tracer.active_span.set_operation_name(operation_name)
431
432
433 # Injection and extraction
434
435
436 @only_if_tracing
437 def inject_active_span_twisted_headers(headers, destination):
438 """
439 Injects a span context into twisted headers in-place
440
441 Args:
442 headers (twisted.web.http_headers.Headers)
443 span (opentracing.Span)
444
445 Returns:
446 In-place modification of headers
447
448 Note:
449 The headers set by the tracer are custom to the tracer implementation which
450 should be unique enough that they don't interfere with any headers set by
451 synapse or twisted. If we're still using jaeger these headers would be those
452 here:
453 https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
454 """
455
456 if not whitelisted_homeserver(destination):
457 return
458
459 span = opentracing.tracer.active_span
460 carrier = {}
461 opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
462
463 for key, value in carrier.items():
464 headers.addRawHeaders(key, value)
465
466
467 @only_if_tracing
468 def inject_active_span_byte_dict(headers, destination):
469 """
470 Injects a span context into a dict where the headers are encoded as byte
471 strings
472
473 Args:
474 headers (dict)
475 span (opentracing.Span)
476
477 Returns:
478 In-place modification of headers
479
480 Note:
481 The headers set by the tracer are custom to the tracer implementation which
482 should be unique enough that they don't interfere with any headers set by
483 synapse or twisted. If we're still using jaeger these headers would be those
484 here:
485 https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
486 """
487 if not whitelisted_homeserver(destination):
488 return
489
490 span = opentracing.tracer.active_span
491
492 carrier = {}
493 opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
494
495 for key, value in carrier.items():
496 headers[key.encode()] = [value.encode()]
497
498
499 @only_if_tracing
500 def inject_active_span_text_map(carrier, destination=None):
501 """
502 Injects a span context into a dict
503
504 Args:
505 carrier (dict)
506 destination (str): the name of the remote server. The span context
507 will only be injected if the destination matches the homeserver_whitelist
508 or destination is None.
509
510 Returns:
511 In-place modification of carrier
512
513 Note:
514 The headers set by the tracer are custom to the tracer implementation which
515 should be unique enough that they don't interfere with any headers set by
516 synapse or twisted. If we're still using jaeger these headers would be those
517 here:
518 https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
519 """
520
521 if destination and not whitelisted_homeserver(destination):
522 return
523
524 opentracing.tracer.inject(
525 opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier
526 )
527
528
529 def active_span_context_as_string():
530 """
531 Returns:
532 The active span context encoded as a string.
533 """
534 carrier = {}
535 if opentracing:
536 opentracing.tracer.inject(
537 opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier
538 )
539 return json.dumps(carrier)
540
541
542 @only_if_tracing
543 def span_context_from_string(carrier):
544 """
545 Returns:
546 The active span context decoded from a string.
547 """
548 carrier = json.loads(carrier)
549 return opentracing.tracer.extract(opentracing.Format.TEXT_MAP, carrier)
550
551
552 @only_if_tracing
553 def extract_text_map(carrier):
554 """
555 Wrapper method for opentracing's tracer.extract for TEXT_MAP.
556 Args:
557 carrier (dict): a dict possibly containing a span context.
558
559 Returns:
560 The active span context extracted from carrier.
561 """
562 return opentracing.tracer.extract(opentracing.Format.TEXT_MAP, carrier)
563
564
565 # Tracing decorators
566
567
568 def trace(func):
569 """
570 Decorator to trace a function.
571 Sets the operation name to that of the function's.
572 """
573 if opentracing is None:
574 return func
575
576 @wraps(func)
577 def _trace_inner(self, *args, **kwargs):
578 if opentracing is None:
579 return func(self, *args, **kwargs)
580
581 scope = start_active_span(func.__name__)
582 scope.__enter__()
583
584 try:
585 result = func(self, *args, **kwargs)
586 if isinstance(result, defer.Deferred):
587
588 def call_back(result):
589 scope.__exit__(None, None, None)
590 return result
591
592 def err_back(result):
593 scope.span.set_tag(tags.ERROR, True)
594 scope.__exit__(None, None, None)
595 return result
596
597 result.addCallbacks(call_back, err_back)
598
599 else:
600 scope.__exit__(None, None, None)
601
602 return result
603
604 except Exception as e:
605 scope.__exit__(type(e), None, e.__traceback__)
606 raise
607
608 return _trace_inner
609
610
611 def trace_using_operation_name(operation_name):
612 """Decorator to trace a function. Explicitly sets the operation_name."""
613
614 def trace(func):
615 """
616 Decorator to trace a function.
617 Sets the operation name to that of the function's.
618 """
619 if opentracing is None:
620 return func
621
622 @wraps(func)
623 def _trace_inner(self, *args, **kwargs):
624 if opentracing is None:
625 return func(self, *args, **kwargs)
626
627 scope = start_active_span(operation_name)
628 scope.__enter__()
629
630 try:
631 result = func(self, *args, **kwargs)
632 if isinstance(result, defer.Deferred):
633
634 def call_back(result):
635 scope.__exit__(None, None, None)
636 return result
637
638 def err_back(result):
639 scope.span.set_tag(tags.ERROR, True)
640 scope.__exit__(None, None, None)
641 return result
642
643 result.addCallbacks(call_back, err_back)
644 else:
645 scope.__exit__(None, None, None)
646
647 return result
648
649 except Exception as e:
650 scope.__exit__(type(e), None, e.__traceback__)
651 raise
652
653 return _trace_inner
654
655 return trace
656
657
658 def tag_args(func):
659 """
660 Tags all of the args to the active span.
661 """
662
663 if not opentracing:
664 return func
665
666 @wraps(func)
667 def _tag_args_inner(self, *args, **kwargs):
668 argspec = inspect.getargspec(func)
669 for i, arg in enumerate(argspec.args[1:]):
670 set_tag("ARG_" + arg, args[i])
671 set_tag("args", args[len(argspec.args) :])
672 set_tag("kwargs", kwargs)
673 return func(self, *args, **kwargs)
674
675 return _tag_args_inner
676
677
678 def trace_servlet(servlet_name, func):
679 """Decorator which traces a servlet. It starts a span with some servlet-specific
680 tags such as the servlet_name and request information"""
681 if not opentracing:
682 return func
683
684 @wraps(func)
685 @defer.inlineCallbacks
686 def _trace_servlet_inner(request, *args, **kwargs):
687 with start_active_span(
688 "incoming-client-request",
689 tags={
690 "request_id": request.get_request_id(),
691 tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER,
692 tags.HTTP_METHOD: request.get_method(),
693 tags.HTTP_URL: request.get_redacted_uri(),
694 tags.PEER_HOST_IPV6: request.getClientIP(),
695 "servlet_name": servlet_name,
696 },
697 ):
698 result = yield defer.maybeDeferred(func, request, *args, **kwargs)
699 return result
700
701 return _trace_servlet_inner
702
703
704 # Helper class
705
706
707 class _DummyTagNames(object):
176708 """wrapper of opentracings tags. We need to have them if we
177709 want to reference them without opentracing around. Clearly they
178710 should never actually show up in a trace. `set_tags` overwrites
204736 SPAN_KIND_RPC_SERVER = INVALID_TAG
205737
206738
207 def only_if_tracing(func):
208 """Executes the function only if we're tracing. Otherwise return.
209 Assumes the function wrapped may return None"""
210
211 @wraps(func)
212 def _only_if_tracing_inner(*args, **kwargs):
213 if opentracing:
214 return func(*args, **kwargs)
215 else:
216 return
217
218 return _only_if_tracing_inner
219
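The `only_if_tracing` guard is the module's optional-dependency story in miniature: every helper checks a module-level `opentracing` name at call time, so setting it to `None` turns the whole API into no-ops. A self-contained sketch of that pattern:

```python
from functools import wraps

opentracing = None  # stands in for "import failed / tracing disabled"


def only_if_tracing(func):
    """Run func only when tracing is enabled; otherwise return None."""

    @wraps(func)
    def _only_if_tracing_inner(*args, **kwargs):
        # Module-level name is looked up on every call, so flipping the
        # flag later changes behaviour without re-decorating anything.
        if opentracing:
            return func(*args, **kwargs)
        return None

    return _only_if_tracing_inner


calls = []


@only_if_tracing
def set_tag(key, value):
    calls.append((key, value))


set_tag("k", "v")       # silently skipped: opentracing is None
opentracing = object()  # "enable" tracing
set_tag("k", "v")       # now recorded
```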
220
221 # A regex which matches the server_names to expose traces for.
222 # None means 'block everything'.
223 _homeserver_whitelist = None
224
225 tags = _DumTagNames
226
227
228 def init_tracer(config):
229 """Set the whitelists and initialise the JaegerClient tracer
230
231 Args:
232 config (HomeserverConfig): The config used by the homeserver
233 """
234 global opentracing
235 if not config.opentracer_enabled:
236 # We don't have a tracer
237 opentracing = None
238 return
239
240 if not opentracing or not JaegerConfig:
241 raise ConfigError(
242 "The server has been configured to use opentracing but opentracing is not "
243 "installed."
244 )
245
246 # Include the worker name
247 name = config.worker_name if config.worker_name else "master"
248
249 set_homeserver_whitelist(config.opentracer_whitelist)
250 jaeger_config = JaegerConfig(
251 config={"sampler": {"type": "const", "param": 1}, "logging": True},
252 service_name="{} {}".format(config.server_name, name),
253 scope_manager=LogContextScopeManager(config),
254 )
255 jaeger_config.initialize_tracer()
256
257 # Set up tags to be opentracing's tags
258 global tags
259 tags = opentracing.tags
260
261
262 @contextlib.contextmanager
263 def _noop_context_manager(*args, **kwargs):
264 """Does absolutely nothing really well. Can be entered and exited arbitrarily.
265 Good substitute for an opentracing scope."""
266 yield
267
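`_noop_context_manager` is the context-manager analogue of that guard: callers can unconditionally write `with start_active_span(...)` and get a do-nothing scope when tracing is off. The same shape, standalone:

```python
import contextlib


@contextlib.contextmanager
def noop_context_manager(*args, **kwargs):
    """Accepts any arguments, does nothing, and can be entered/exited freely."""
    yield


entered = False
with noop_context_manager("any", operation="ignored"):
    entered = True
```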
268
269 # Could use kwargs but I want these to be explicit
270 def start_active_span(
271 operation_name,
272 child_of=None,
273 references=None,
274 tags=None,
275 start_time=None,
276 ignore_active_span=False,
277 finish_on_close=True,
278 ):
279 """Starts an active opentracing span. Note, the scope doesn't become active
280 until it has been entered, however, the span starts from the time this
281 message is called.
282 Args:
283 See opentracing.tracer
284 Returns:
285 scope (Scope) or noop_context_manager
286 """
287 if opentracing is None:
288 return _noop_context_manager()
289 else:
290 # We need to enter the scope here for the logcontext to become active
291 return opentracing.tracer.start_active_span(
292 operation_name,
293 child_of=child_of,
294 references=references,
295 tags=tags,
296 start_time=start_time,
297 ignore_active_span=ignore_active_span,
298 finish_on_close=finish_on_close,
299 )
300
301
302 @only_if_tracing
303 def close_active_span():
304 """Closes the active span. This will close it's logcontext if the context
305 was made for the span"""
306 opentracing.tracer.scope_manager.active.__exit__(None, None, None)
307
308
309 @only_if_tracing
310 def set_tag(key, value):
311 """Set's a tag on the active span"""
312 opentracing.tracer.active_span.set_tag(key, value)
313
314
315 @only_if_tracing
316 def log_kv(key_values, timestamp=None):
317 """Log to the active span"""
318 opentracing.tracer.active_span.log_kv(key_values, timestamp)
319
320
321 # Note: we don't have a get baggage items because we're trying to hide all
322 # scope and span state from synapse. I think this method may also be useless
323 # as a result
324 @only_if_tracing
325 def set_baggage_item(key, value):
326 """Attach baggage to the active span"""
327 opentracing.tracer.active_span.set_baggage_item(key, value)
328
329
330 @only_if_tracing
331 def set_operation_name(operation_name):
332 """Sets the operation name of the active span"""
333 opentracing.tracer.active_span.set_operation_name(operation_name)
334
335
336 @only_if_tracing
337 def set_homeserver_whitelist(homeserver_whitelist):
338 """Sets the whitelist
339
340 Args:
341 homeserver_whitelist (iterable of strings): regex of whitelisted homeservers
342 """
343 global _homeserver_whitelist
344 if homeserver_whitelist:
345 # Makes a single regex which accepts all passed in regexes in the list
346 _homeserver_whitelist = re.compile(
347 "({})".format(")|(".join(homeserver_whitelist))
348 )
349
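`set_homeserver_whitelist` folds the configured list of regexes into a single alternation, `(r1)|(r2)|...`, and `whitelisted_homeserver` then uses `re.match`, which only anchors at the start of the destination (so a pattern without a trailing anchor also accepts longer destinations sharing that prefix). The combination step, isolated:

```python
import re


def compile_whitelist(patterns):
    """Combine several regexes into a single alternation: (p1)|(p2)|..."""
    return re.compile("({})".format(")|(".join(patterns)))


whitelist = compile_whitelist([r".*\.example\.com", r"synapse\.local"])


def whitelisted(destination):
    # re.match anchors at the start only, mirroring whitelisted_homeserver
    return bool(whitelist.match(destination))
```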
350
351 @only_if_tracing
352 def whitelisted_homeserver(destination):
353 """Checks if a destination matches the whitelist
354 Args:
355 destination (String)"""
356 if _homeserver_whitelist:
357 return _homeserver_whitelist.match(destination)
358 return False
359
360
361 def start_active_span_from_context(
362 headers,
363 operation_name,
364 references=None,
365 tags=None,
366 start_time=None,
367 ignore_active_span=False,
368 finish_on_close=True,
369 ):
370 """
371 Extracts a span context from Twisted Headers.
372 args:
373 headers (twisted.web.http_headers.Headers)
374 returns:
375 scope (Scope) or noop_context_manager
376 """
377 # Twisted encodes the values as lists whereas opentracing doesn't.
378 # So, we take the first item in the list.
379 # Also, twisted uses byte arrays while opentracing expects strings.
380 if opentracing is None:
381 return _noop_context_manager()
382
383 header_dict = {k.decode(): v[0].decode() for k, v in headers.getAllRawHeaders()}
384 context = opentracing.tracer.extract(opentracing.Format.HTTP_HEADERS, header_dict)
385
386 return opentracing.tracer.start_active_span(
387 operation_name,
388 child_of=context,
389 references=references,
390 tags=tags,
391 start_time=start_time,
392 ignore_active_span=ignore_active_span,
393 finish_on_close=finish_on_close,
394 )
395
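The extraction step above has to bridge two header representations: Twisted exposes raw headers as `{bytes: [bytes, ...]}` while opentracing's HTTP_HEADERS carrier expects `{str: str}`. The conversion on its own, with a plain dict standing in for `Headers.getAllRawHeaders()` and sample header values:

```python
# Raw headers as Twisted would hand them over: byte keys, list-of-bytes values.
# "uber-trace-id" is Jaeger's propagation header; the id value is sample data.
raw_headers = {
    b"uber-trace-id": [b"463ac35c9f6413ad:a2fb4a1d1a96d312:0:1"],
    b"User-Agent": [b"Synapse/1.3.0", b"ignored-second-value"],
}

# Take the first value of each list and decode both key and value
header_dict = {k.decode(): v[0].decode() for k, v in raw_headers.items()}
```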
396
397 @only_if_tracing
398 def inject_active_span_twisted_headers(headers, destination):
399 """
400 Injects a span context into twisted headers inplace
401
402 Args:
403 headers (twisted.web.http_headers.Headers)
404 span (opentracing.Span)
405
406 Returns:
407 Inplace modification of headers
408
409 Note:
410 The headers set by the tracer are custom to the tracer implementation which
411 should be unique enough that they don't interfere with any headers set by
412 synapse or twisted. If we're still using jaeger these headers would be those
413 here:
414 https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
415 """
416
417 if not whitelisted_homeserver(destination):
418 return
419
420 span = opentracing.tracer.active_span
421 carrier = {}
422 opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
423
424 for key, value in carrier.items():
425 headers.addRawHeaders(key, value)
426
427
428 @only_if_tracing
429 def inject_active_span_byte_dict(headers, destination):
430 """
431 Injects a span context into a dict where the headers are encoded as byte
432 strings
433
434 Args:
435 headers (dict)
436 span (opentracing.Span)
437
438 Returns:
439 Inplace modification of headers
440
441 Note:
442 The headers set by the tracer are custom to the tracer implementation which
443 should be unique enough that they don't interfere with any headers set by
444 synapse or twisted. If we're still using jaeger these headers would be those
445 here:
446 https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
447 """
448 if not whitelisted_homeserver(destination):
449 return
450
451 span = opentracing.tracer.active_span
452
453 carrier = {}
454 opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
455
456 for key, value in carrier.items():
457 headers[key.encode()] = [value.encode()]
458
459
460 def trace_servlet(servlet_name, func):
461 """Decorator which traces a serlet. It starts a span with some servlet specific
462 tags such as the servlet_name and request information"""
463
464 @wraps(func)
465 @defer.inlineCallbacks
466 def _trace_servlet_inner(request, *args, **kwargs):
467 with start_active_span_from_context(
468 request.requestHeaders,
469 "incoming-client-request",
470 tags={
471 "request_id": request.get_request_id(),
472 tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER,
473 tags.HTTP_METHOD: request.get_method(),
474 tags.HTTP_URL: request.get_redacted_uri(),
475 tags.PEER_HOST_IPV6: request.getClientIP(),
476 "servlet_name": servlet_name,
477 },
478 ):
479 result = yield defer.maybeDeferred(func, request, *args, **kwargs)
480 defer.returnValue(result)
481
482 return _trace_servlet_inner
739 tags = _DummyTagNames
130130
131131 def close(self):
132132 if self.manager.active is not self:
133 logger.error("Tried to close a none active scope!")
133 logger.error("Tried to close a non-active scope!")
134134 return
135135
136136 if self._finish_on_close:
100100 )
101101 user_id = yield self.register_user(localpart, displayname, emails)
102102 _, access_token = yield self.register_device(user_id)
103 defer.returnValue((user_id, access_token))
103 return (user_id, access_token)
104104
105105 def register_user(self, localpart, displayname=None, emails=[]):
106106 """Registers a new user with given localpart and optional displayname, emails.
364364 current_token = user_stream.current_token
365365 result = yield callback(prev_token, current_token)
366366
367 defer.returnValue(result)
367 return result
368368
369369 @defer.inlineCallbacks
370370 def get_events_for(
399399 @defer.inlineCallbacks
400400 def check_for_updates(before_token, after_token):
401401 if not after_token.is_after(before_token):
402 defer.returnValue(EventStreamResult([], (from_token, from_token)))
402 return EventStreamResult([], (from_token, from_token))
403403
404404 events = []
405405 end_token = from_token
439439 events.extend(new_events)
440440 end_token = end_token.copy_and_replace(keyname, new_key)
441441
442 defer.returnValue(EventStreamResult(events, (from_token, end_token)))
442 return EventStreamResult(events, (from_token, end_token))
443443
444444 user_id_for_stream = user.to_string()
445445 if is_peeking:
464464 from_token=from_token,
465465 )
466466
467 defer.returnValue(result)
467 return result
468468
469469 @defer.inlineCallbacks
470470 def _get_room_ids(self, user, explicit_room_id):
471471 joined_room_ids = yield self.store.get_rooms_for_user(user.to_string())
472472 if explicit_room_id:
473473 if explicit_room_id in joined_room_ids:
474 defer.returnValue(([explicit_room_id], True))
474 return ([explicit_room_id], True)
475475 if (yield self._is_world_readable(explicit_room_id)):
476 defer.returnValue(([explicit_room_id], False))
476 return ([explicit_room_id], False)
477477 raise AuthError(403, "Non-joined access not allowed")
478 defer.returnValue((joined_room_ids, True))
478 return (joined_room_ids, True)
479479
480480 @defer.inlineCallbacks
481481 def _is_world_readable(self, room_id):
483483 room_id, EventTypes.RoomHistoryVisibility, ""
484484 )
485485 if state and "history_visibility" in state.content:
486 defer.returnValue(state.content["history_visibility"] == "world_readable")
486 return state.content["history_visibility"] == "world_readable"
487487 else:
488 defer.returnValue(False)
488 return False
489489
490490 @log_function
491491 def remove_expired_streams(self):
244244 "key": "type",
245245 "pattern": "m.room.tombstone",
246246 "_id": "_tombstone",
247 }
247 },
248 {
249 "kind": "event_match",
250 "key": "state_key",
251 "pattern": "",
252 "_id": "_tombstone_statekey",
253 },
248254 ],
249255 "actions": ["notify", {"set_tweak": "highlight", "value": True}],
250256 },
9494 invited
9595 )
9696
97 defer.returnValue(rules_by_user)
97 return rules_by_user
9898
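Most of this diff is mechanical: `defer.returnValue(x)` becomes a plain `return x` inside `@defer.inlineCallbacks` generators, which is legal once Python 2 support is dropped because Python 3 generators deliver return values via `StopIteration.value`. The mechanism can be seen without Twisted at all; the tiny driver below is a hypothetical stand-in for what `inlineCallbacks` does with the final value:

```python
def get_answer():
    yield "intermediate step"   # in Twisted this would be a yielded Deferred
    return 42                   # Python 3 only: raises StopIteration(42)


def run(gen):
    """Minimal stand-in for inlineCallbacks' handling of the return value."""
    try:
        while True:
            next(gen)
    except StopIteration as e:
        # `return 42` in a generator surfaces here as e.value
        return e.value


result = run(get_answer())
```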
9999 @cached()
100100 def _get_rules_for_room(self, room_id):
133133
134134 pl_event = auth_events.get(POWER_KEY)
135135
136 defer.returnValue((pl_event.content if pl_event else {}, sender_level))
136 return (pl_event.content if pl_event else {}, sender_level)
137137
138138 @defer.inlineCallbacks
139139 def action_for_event_by_user(self, event, context):
282282 if state_group and self.state_group == state_group:
283283 logger.debug("Using cached rules for %r", self.room_id)
284284 self.room_push_rule_cache_metrics.inc_hits()
285 defer.returnValue(self.rules_by_user)
285 return self.rules_by_user
286286
287287 with (yield self.linearizer.queue(())):
288288 if state_group and self.state_group == state_group:
289289 logger.debug("Using cached rules for %r", self.room_id)
290290 self.room_push_rule_cache_metrics.inc_hits()
291 defer.returnValue(self.rules_by_user)
291 return self.rules_by_user
292292
293293 self.room_push_rule_cache_metrics.inc_misses()
294294
365365 logger.debug(
366366 "Returning push rules for %r %r", self.room_id, ret_rules_by_user.keys()
367367 )
368 defer.returnValue(ret_rules_by_user)
368 return ret_rules_by_user
369369
370370 @defer.inlineCallbacks
371371 def _update_rules_with_member_event_ids(
233233 return
234234
235235 self.last_stream_ordering = last_stream_ordering
236 yield self.store.update_pusher_last_stream_ordering_and_success(
237 self.app_id,
238 self.email,
239 self.user_id,
240 last_stream_ordering,
241 self.clock.time_msec(),
236 pusher_still_exists = (
237 yield self.store.update_pusher_last_stream_ordering_and_success(
238 self.app_id,
239 self.email,
240 self.user_id,
241 last_stream_ordering,
242 self.clock.time_msec(),
243 )
242244 )
245 if not pusher_still_exists:
246 # The pusher has been deleted while we were processing, so
247 # let's just stop and return.
248 self.on_stop()
243249
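The new `pusher_still_exists` handling closes a race: the store's update now reports whether the pusher row was still present, and the pusher stops itself if it was deleted mid-processing. The shape of that check, sketched against a hypothetical dict-backed store (method and field names mirror the diff, but the store itself is a stand-in):

```python
class FakeStore:
    """Dict-backed stand-in for the pushers table."""

    def __init__(self):
        self.pushers = {("app", "key"): {"last_stream_ordering": 0}}

    def update_pusher_last_stream_ordering(self, app_id, pushkey, ordering):
        row = self.pushers.get((app_id, pushkey))
        if row is None:
            return False  # pusher was deleted while we were processing
        row["last_stream_ordering"] = ordering
        return True


class Pusher:
    def __init__(self, store):
        self.store = store
        self.stopped = False

    def on_stop(self):
        self.stopped = True

    def process(self, ordering):
        still_exists = self.store.update_pusher_last_stream_ordering(
            "app", "key", ordering
        )
        if not still_exists:
            # The pusher has been deleted while we were processing,
            # so just stop and return.
            self.on_stop()
            return


store = FakeStore()
pusher = Pusher(store)
pusher.process(5)                    # row present: updated, keeps running
ran_ok = not pusher.stopped
del store.pushers[("app", "key")]    # simulate concurrent deletion
pusher.process(6)                    # row gone: pusher stops itself
```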
244250 def seconds_until(self, ts_msec):
245251 secs = (ts_msec - self.clock.time_msec()) / 1000
198198 http_push_processed_counter.inc()
199199 self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
200200 self.last_stream_ordering = push_action["stream_ordering"]
201 yield self.store.update_pusher_last_stream_ordering_and_success(
202 self.app_id,
203 self.pushkey,
204 self.user_id,
205 self.last_stream_ordering,
206 self.clock.time_msec(),
201 pusher_still_exists = (
202 yield self.store.update_pusher_last_stream_ordering_and_success(
203 self.app_id,
204 self.pushkey,
205 self.user_id,
206 self.last_stream_ordering,
207 self.clock.time_msec(),
208 )
207209 )
210 if not pusher_still_exists:
211 # The pusher has been deleted while we were processing, so
212 # let's just stop and return.
213 self.on_stop()
214 return
215
208216 if self.failing_since:
209217 self.failing_since = None
210218 yield self.store.update_pusher_failing_since(
233241 )
234242 self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
235243 self.last_stream_ordering = push_action["stream_ordering"]
236 yield self.store.update_pusher_last_stream_ordering(
244 pusher_still_exists = yield self.store.update_pusher_last_stream_ordering(
237245 self.app_id,
238246 self.pushkey,
239247 self.user_id,
240248 self.last_stream_ordering,
241249 )
250 if not pusher_still_exists:
251 # The pusher has been deleted while we were processing, so
252 # let's just stop and return.
253 self.on_stop()
254 return
242255
243256 self.failing_since = None
244257 yield self.store.update_pusher_failing_since(
257270 @defer.inlineCallbacks
258271 def _process_one(self, push_action):
259272 if "notify" not in push_action["actions"]:
260 defer.returnValue(True)
273 return True
261274
262275 tweaks = push_rule_evaluator.tweaks_for_actions(push_action["actions"])
263276 badge = yield push_tools.get_badge_count(self.hs.get_datastore(), self.user_id)
264277
265278 event = yield self.store.get_event(push_action["event_id"], allow_none=True)
266279 if event is None:
267 defer.returnValue(True) # It's been redacted
280 return True # It's been redacted
268281 rejected = yield self.dispatch_push(event, tweaks, badge)
269282 if rejected is False:
270 defer.returnValue(False)
283 return False
271284
272285 if isinstance(rejected, list) or isinstance(rejected, tuple):
273286 for pk in rejected:
281294 else:
282295 logger.info("Pushkey %s was rejected: removing", pk)
283296 yield self.hs.remove_pusher(self.app_id, pk, self.user_id)
284 defer.returnValue(True)
297 return True
285298
286299 @defer.inlineCallbacks
287300 def _build_notification_dict(self, event, tweaks, badge):
301314 ],
302315 }
303316 }
304 defer.returnValue(d)
317 return d
305318
306319 ctx = yield push_tools.get_context_for_event(
307320 self.store, self.state_handler, event, self.user_id
344357 if "name" in ctx and len(ctx["name"]) > 0:
345358 d["notification"]["room_name"] = ctx["name"]
346359
347 defer.returnValue(d)
360 return d
348361
349362 @defer.inlineCallbacks
350363 def dispatch_push(self, event, tweaks, badge):
351364 notification_dict = yield self._build_notification_dict(event, tweaks, badge)
352365 if not notification_dict:
353 defer.returnValue([])
366 return []
354367 try:
355368 resp = yield self.http_client.post_json_get_json(
356369 self.url, notification_dict
363376 type(e),
364377 e,
365378 )
366 defer.returnValue(False)
379 return False
367380 rejected = []
368381 if "rejected" in resp:
369382 rejected = resp["rejected"]
370 defer.returnValue(rejected)
383 return rejected
371384
372385 @defer.inlineCallbacks
373386 def _send_badge(self, badge):
315315 if not merge:
316316 room_vars["notifs"].append(notifvars)
317317
318 defer.returnValue(room_vars)
318 return room_vars
319319
320320 @defer.inlineCallbacks
321321 def get_notif_vars(self, notif, user_id, notif_event, room_state_ids):
342342 if messagevars is not None:
343343 ret["messages"].append(messagevars)
344344
345 defer.returnValue(ret)
345 return ret
346346
347347 @defer.inlineCallbacks
348348 def get_message_vars(self, notif, event, room_state_ids):
378378 if "body" in event.content:
379379 ret["body_text_plain"] = event.content["body"]
380380
381 defer.returnValue(ret)
381 return ret
382382
383383 def add_text_message_vars(self, messagevars, event):
384384 msgformat = event.content.get("format")
427427 inviter_name = name_from_member_event(inviter_member_event)
428428
429429 if room_name is None:
430 defer.returnValue(
431 INVITE_FROM_PERSON
432 % {"person": inviter_name, "app": self.app_name}
433 )
430 return INVITE_FROM_PERSON % {
431 "person": inviter_name,
432 "app": self.app_name,
433 }
434434 else:
435 defer.returnValue(
436 INVITE_FROM_PERSON_TO_ROOM
437 % {
438 "person": inviter_name,
439 "room": room_name,
440 "app": self.app_name,
441 }
442 )
435 return INVITE_FROM_PERSON_TO_ROOM % {
436 "person": inviter_name,
437 "room": room_name,
438 "app": self.app_name,
439 }
443440
444441 sender_name = None
445442 if len(notifs_by_room[room_id]) == 1:
453450 sender_name = name_from_member_event(state_event)
454451
455452 if sender_name is not None and room_name is not None:
456 defer.returnValue(
457 MESSAGE_FROM_PERSON_IN_ROOM
458 % {
459 "person": sender_name,
460 "room": room_name,
461 "app": self.app_name,
462 }
463 )
453 return MESSAGE_FROM_PERSON_IN_ROOM % {
454 "person": sender_name,
455 "room": room_name,
456 "app": self.app_name,
457 }
464458 elif sender_name is not None:
465 defer.returnValue(
466 MESSAGE_FROM_PERSON
467 % {"person": sender_name, "app": self.app_name}
468 )
459 return MESSAGE_FROM_PERSON % {
460 "person": sender_name,
461 "app": self.app_name,
462 }
469463 else:
470464 # There's more than one notification for this room, so just
471465 # say there are several
472466 if room_name is not None:
473 defer.returnValue(
474 MESSAGES_IN_ROOM % {"room": room_name, "app": self.app_name}
475 )
467 return MESSAGES_IN_ROOM % {"room": room_name, "app": self.app_name}
476468 else:
477469 # If the room doesn't have a name, say who the messages
478470 # are from explicitly to avoid, "messages in the Bob room"
492484 ]
493485 )
494486
495 defer.returnValue(
496 MESSAGES_FROM_PERSON
497 % {
498 "person": descriptor_from_member_events(
499 member_events.values()
500 ),
501 "app": self.app_name,
502 }
503 )
487 return MESSAGES_FROM_PERSON % {
488 "person": descriptor_from_member_events(member_events.values()),
489 "app": self.app_name,
490 }
504491 else:
505492 # Stuff's happened in multiple different rooms
506493
507494 # ...but we still refer to the 'reason' room which triggered the mail
508495 if reason["room_name"] is not None:
509 defer.returnValue(
510 MESSAGES_IN_ROOM_AND_OTHERS
511 % {"room": reason["room_name"], "app": self.app_name}
512 )
496 return MESSAGES_IN_ROOM_AND_OTHERS % {
497 "room": reason["room_name"],
498 "app": self.app_name,
499 }
513500 else:
514501 # If the reason room doesn't have a name, say who the messages
515502 # are from explicitly to avoid, "messages in the Bob room"
526513 [room_state_ids[room_id][("m.room.member", s)] for s in sender_ids]
527514 )
528515
529 defer.returnValue(
530 MESSAGES_FROM_PERSON_AND_OTHERS
531 % {
532 "person": descriptor_from_member_events(member_events.values()),
533 "app": self.app_name,
534 }
535 )
516 return MESSAGES_FROM_PERSON_AND_OTHERS % {
517 "person": descriptor_from_member_events(member_events.values()),
518 "app": self.app_name,
519 }
536520
537521 def make_room_link(self, room_id):
538522 if self.hs.config.email_riot_base_url:
5454 room_state_ids[("m.room.name", "")], allow_none=True
5555 )
5656 if m_room_name and m_room_name.content and m_room_name.content["name"]:
57 defer.returnValue(m_room_name.content["name"])
57 return m_room_name.content["name"]
5858
5959 # does it have a canonical alias?
6060 if ("m.room.canonical_alias", "") in room_state_ids:
6767 and canon_alias.content["alias"]
6868 and _looks_like_an_alias(canon_alias.content["alias"])
6969 ):
70 defer.returnValue(canon_alias.content["alias"])
70 return canon_alias.content["alias"]
7171
7272 # at this point we're going to need to search the state by all state keys
7373 # for an event type, so rearrange the data structure
8181 if alias_event and alias_event.content.get("aliases"):
8282 the_aliases = alias_event.content["aliases"]
8383 if len(the_aliases) > 0 and _looks_like_an_alias(the_aliases[0]):
84 defer.returnValue(the_aliases[0])
84 return the_aliases[0]
8585
8686 if not fallback_to_members:
87 defer.returnValue(None)
87 return None
8888
8989 my_member_event = None
9090 if ("m.room.member", user_id) in room_state_ids:
103103 )
104104 if inviter_member_event:
105105 if fallback_to_single_member:
106 defer.returnValue(
107 "Invite from %s"
108 % (name_from_member_event(inviter_member_event),)
106 return "Invite from %s" % (
107 name_from_member_event(inviter_member_event),
109108 )
110109 else:
111110 return
112111 else:
113 defer.returnValue("Room Invite")
112 return "Room Invite"
114113
115114 # we're going to have to generate a name based on who's in the room,
116115 # so find out who is in the room that isn't the user.
153152 # return "Inviting %s" % (
154153 # descriptor_from_member_events(third_party_invites)
155154 # )
156 defer.returnValue("Inviting email address")
155 return "Inviting email address"
157156 else:
158 defer.returnValue(ALL_ALONE)
157 return ALL_ALONE
159158 else:
160 defer.returnValue(name_from_member_event(all_members[0]))
159 return name_from_member_event(all_members[0])
161160 else:
162 defer.returnValue(ALL_ALONE)
161 return ALL_ALONE
163162 elif len(other_members) == 1 and not fallback_to_single_member:
164163 return
165164 else:
166 defer.returnValue(descriptor_from_member_events(other_members))
165 return descriptor_from_member_events(other_members)
167166
168167
169168 def descriptor_from_member_events(member_events):
3838 # return one badge count per conversation, as count per
3939 # message is so noisy as to be almost useless
4040 badge += 1 if notifs["notify_count"] else 0
41 defer.returnValue(badge)
41 return badge
4242
4343
4444 @defer.inlineCallbacks
6060 sender_state_event = yield store.get_event(sender_state_event_id)
6161 ctx["sender_display_name"] = name_from_member_event(sender_state_event)
6262
63 defer.returnValue(ctx)
63 return ctx
122122 )
123123 pusher = yield self.start_pusher_by_id(app_id, pushkey, user_id)
124124
125 defer.returnValue(pusher)
125 return pusher
126126
127127 @defer.inlineCallbacks
128128 def remove_pushers_by_app_id_and_pushkey_not_user(
223223 if pusher_dict:
224224 pusher = yield self._start_pusher(pusher_dict)
225225
226 defer.returnValue(pusher)
226 return pusher
227227
228228 @defer.inlineCallbacks
229229 def _start_pushers(self):
292292
293293 p.on_started(have_notifs)
294294
295 defer.returnValue(p)
295 return p
296296
297297 @defer.inlineCallbacks
298298 def remove_pusher(self, app_id, pushkey, user_id):
7171 "netaddr>=0.7.18",
7272 "Jinja2>=2.9",
7373 "bleach>=1.4.3",
74 "sdnotify>=0.3",
7475 ]
7576
7677 CONDITIONAL_REQUIREMENTS = {
184184 except RequestSendFailed as e:
185185 raise_from(SynapseError(502, "Failed to talk to master"), e)
186186
187 defer.returnValue(result)
187 return result
188188
189189 return send_request
190190
7979
8080 payload = {"events": event_payloads, "backfilled": backfilled}
8181
82 defer.returnValue(payload)
82 return payload
8383
8484 @defer.inlineCallbacks
8585 def _handle_request(self, request):
112112 event_and_contexts, backfilled
113113 )
114114
115 defer.returnValue((200, {}))
115 return (200, {})
116116
117117
118118 class ReplicationFederationSendEduRestServlet(ReplicationEndpoint):
155155
156156 result = yield self.registry.on_edu(edu_type, origin, edu_content)
157157
158 defer.returnValue((200, result))
158 return (200, result)
159159
160160
161161 class ReplicationGetQueryRestServlet(ReplicationEndpoint):
203203
204204 result = yield self.registry.on_query(query_type, args)
205205
206 defer.returnValue((200, result))
206 return (200, result)
207207
208208
209209 class ReplicationCleanRoomRestServlet(ReplicationEndpoint):
237237 def _handle_request(self, request, room_id):
238238 yield self.store.clean_room_for_join(room_id)
239239
240 defer.returnValue((200, {}))
240 return (200, {})
241241
242242
243243 def register_servlets(hs, http_server):
6363 user_id, device_id, initial_display_name, is_guest
6464 )
6565
66 defer.returnValue((200, {"device_id": device_id, "access_token": access_token}))
66 return (200, {"device_id": device_id, "access_token": access_token})
6767
6868
6969 def register_servlets(hs, http_server):
8282 remote_room_hosts, room_id, user_id, event_content
8383 )
8484
85 defer.returnValue((200, {}))
85 return (200, {})
8686
8787
8888 class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
152152 yield self.store.locally_reject_invite(user_id, room_id)
153153 ret = {}
154154
155 defer.returnValue((200, ret))
155 return (200, ret)
156156
157157
158158 class ReplicationUserJoinedLeftRoomRestServlet(ReplicationEndpoint):
8989 address=content["address"],
9090 )
9191
92 defer.returnValue((200, {}))
92 return (200, {})
9393
9494
9595 class ReplicationPostRegisterActionsServlet(ReplicationEndpoint):
142142 bind_msisdn=bind_msisdn,
143143 )
144144
145 defer.returnValue((200, {}))
145 return (200, {})
146146
147147
148148 def register_servlets(hs, http_server):
8484 "extra_users": [u.to_string() for u in extra_users],
8585 }
8686
87 defer.returnValue(payload)
87 return payload
8888
8989 @defer.inlineCallbacks
9090 def _handle_request(self, request, event_id):
116116 requester, event, context, ratelimit=ratelimit, extra_users=extra_users
117117 )
118118
119 defer.returnValue((200, {}))
119 return (200, {})
120120
121121
122122 def register_servlets(hs, http_server):
157157 updates, current_token = yield self.get_updates_since(self.last_token)
158158 self.last_token = current_token
159159
160 defer.returnValue((updates, current_token))
160 return (updates, current_token)
161161
162162 @defer.inlineCallbacks
163163 def get_updates_since(self, from_token):
171171 sent over the replication steam.
172172 """
173173 if from_token in ("NOW", "now"):
174 defer.returnValue(([], self.upto_token))
174 return ([], self.upto_token)
175175
176176 current_token = self.upto_token
177177
178178 from_token = int(from_token)
179179
180180 if from_token == current_token:
181 defer.returnValue(([], current_token))
181 return ([], current_token)
182182
183183 if self._LIMITED:
184184 rows = yield self.update_function(
197197 if self._LIMITED and len(updates) >= MAX_EVENTS_BEHIND:
198198 raise Exception("stream %s has fallen behind" % (self.NAME))
199199
200 defer.returnValue((updates, current_token))
200 return (updates, current_token)
201201
202202 def current_token(self):
203203 """Gets the current token of the underlying streams. Should be provided
296296 @defer.inlineCallbacks
297297 def update_function(self, from_token, to_token, limit):
298298 rows = yield self.store.get_all_push_rule_updates(from_token, to_token, limit)
299 defer.returnValue([(row[0], row[2]) for row in rows])
299 return [(row[0], row[2]) for row in rows]
300300
301301
302302 class PushersStream(Stream):
423423 for stream_id, user_id, account_data_type, content in global_results
424424 )
425425
426 defer.returnValue(results)
426 return results
427427
428428
429429 class GroupServerStream(Stream):
133133
134134 all_updates = heapq.merge(event_updates, state_updates)
135135
136 defer.returnValue(all_updates)
136 return all_updates
137137
138138 @classmethod
139139 def parse_row(cls, row):
0 <html><body>Your account has been successfully renewed.</body></html>
0 <html><body>Invalid renewal token.</body></html>
2626
2727 import synapse
2828 from synapse.api.constants import Membership, UserTypes
29 from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
29 from synapse.api.errors import Codes, NotFoundError, SynapseError
3030 from synapse.http.server import JsonResource
3131 from synapse.http.servlet import (
3232 RestServlet,
3535 parse_json_object_from_request,
3636 parse_string,
3737 )
38 from synapse.rest.admin._base import assert_requester_is_admin, assert_user_is_admin
38 from synapse.rest.admin._base import (
39 assert_requester_is_admin,
40 assert_user_is_admin,
41 historical_admin_path_patterns,
42 )
43 from synapse.rest.admin.media import register_servlets_for_media_repo
3944 from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
4045 from synapse.types import UserID, create_requester
4146 from synapse.util.versionstring import get_version_string
4348 logger = logging.getLogger(__name__)
4449
4550
46 def historical_admin_path_patterns(path_regex):
47 """Returns the list of patterns for an admin endpoint, including historical ones
48
49 This is a backwards-compatibility hack. Previously, the Admin API was exposed at
50 various paths under /_matrix/client. This function returns a list of patterns
51 matching those paths (as well as the new one), so that existing scripts which rely
52 on the endpoints being available there are not broken.
53
54 Note that this should only be used for existing endpoints: new ones should just
55 register for the /_synapse/admin path.
56 """
57 return list(
58 re.compile(prefix + path_regex)
59 for prefix in (
60 "^/_synapse/admin/v1",
61 "^/_matrix/client/api/v1/admin",
62 "^/_matrix/client/unstable/admin",
63 "^/_matrix/client/r0/admin",
64 )
65 )
66
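`historical_admin_path_patterns` (removed here and, per the new imports, relocated into `synapse.rest.admin._base`) expands one path regex across every historical admin prefix so old `/_matrix/client/...` scripts keep working. The expansion itself is a short comprehension over `re.compile`:

```python
import re

PREFIXES = (
    "^/_synapse/admin/v1",
    "^/_matrix/client/api/v1/admin",
    "^/_matrix/client/unstable/admin",
    "^/_matrix/client/r0/admin",
)


def historical_admin_path_patterns(path_regex):
    """One compiled pattern per prefix, the new /_synapse path first."""
    return [re.compile(prefix + path_regex) for prefix in PREFIXES]


patterns = historical_admin_path_patterns("/users/(?P<user_id>[^/]*)")
```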
67
6851 class UsersRestServlet(RestServlet):
6952 PATTERNS = historical_admin_path_patterns("/users/(?P<user_id>[^/]*)")
7053
8366
8467 ret = yield self.handlers.admin_handler.get_users()
8568
86 defer.returnValue((200, ret))
69 return (200, ret)
8770
8871
8972 class VersionServlet(RestServlet):
226209 )
227210
228211 result = yield register._create_registration_details(user_id, body)
229 defer.returnValue((200, result))
212 return (200, result)
230213
231214
232215 class WhoisRestServlet(RestServlet):
251234
252235 ret = yield self.handlers.admin_handler.get_whois(target_user)
253236
254 defer.returnValue((200, ret))
255
256
257 class PurgeMediaCacheRestServlet(RestServlet):
258 PATTERNS = historical_admin_path_patterns("/purge_media_cache")
259
260 def __init__(self, hs):
261 self.media_repository = hs.get_media_repository()
262 self.auth = hs.get_auth()
263
264 @defer.inlineCallbacks
265 def on_POST(self, request):
266 yield assert_requester_is_admin(self.auth, request)
267
268 before_ts = parse_integer(request, "before_ts", required=True)
269 logger.info("before_ts: %r", before_ts)
270
271 ret = yield self.media_repository.delete_old_remote_media(before_ts)
272
273 defer.returnValue((200, ret))
237 return (200, ret)
274238
275239
276240 class PurgeHistoryRestServlet(RestServlet):
355319 room_id, token, delete_local_events=delete_local_events
356320 )
357321
358 defer.returnValue((200, {"purge_id": purge_id}))
322 return (200, {"purge_id": purge_id})
359323
360324
361325 class PurgeHistoryStatusRestServlet(RestServlet):
380344 if purge_status is None:
381345 raise NotFoundError("purge id '%s' not found" % purge_id)
382346
383 defer.returnValue((200, purge_status.asdict()))
347 return (200, purge_status.asdict())
384348
385349
386350 class DeactivateAccountRestServlet(RestServlet):
412376 else:
413377 id_server_unbind_result = "no-support"
414378
415 defer.returnValue((200, {"id_server_unbind_result": id_server_unbind_result}))
379 return (200, {"id_server_unbind_result": id_server_unbind_result})
416380
417381
418382 class ShutdownRoomRestServlet(RestServlet):
530494 room_id, new_room_id, requester_user_id
531495 )
532496
533 defer.returnValue(
534 (
535 200,
536 {
537 "kicked_users": kicked_users,
538 "failed_to_kick_users": failed_to_kick_users,
539 "local_aliases": aliases_for_room,
540 "new_room_id": new_room_id,
541 },
542 )
543 )
544
545
546 class QuarantineMediaInRoom(RestServlet):
547 """Quarantines all media in a room so that no one can download it via
548 this server.
549 """
550
551 PATTERNS = historical_admin_path_patterns("/quarantine_media/(?P<room_id>[^/]+)")
552
553 def __init__(self, hs):
554 self.store = hs.get_datastore()
555 self.auth = hs.get_auth()
556
557 @defer.inlineCallbacks
558 def on_POST(self, request, room_id):
559 requester = yield self.auth.get_user_by_req(request)
560 yield assert_user_is_admin(self.auth, requester.user)
561
562 num_quarantined = yield self.store.quarantine_media_ids_in_room(
563 room_id, requester.user.to_string()
564 )
565
566 defer.returnValue((200, {"num_quarantined": num_quarantined}))
567
568
569 class ListMediaInRoom(RestServlet):
570 """Lists all of the media in a given room.
571 """
572
573 PATTERNS = historical_admin_path_patterns("/room/(?P<room_id>[^/]+)/media")
574
575 def __init__(self, hs):
576 self.store = hs.get_datastore()
577
578 @defer.inlineCallbacks
579 def on_GET(self, request, room_id):
580 requester = yield self.auth.get_user_by_req(request)
581 is_admin = yield self.auth.is_server_admin(requester.user)
582 if not is_admin:
583 raise AuthError(403, "You are not a server admin")
584
585 local_mxcs, remote_mxcs = yield self.store.get_media_mxcs_in_room(room_id)
586
587 defer.returnValue((200, {"local": local_mxcs, "remote": remote_mxcs}))
497 return (
498 200,
499 {
500 "kicked_users": kicked_users,
501 "failed_to_kick_users": failed_to_kick_users,
502 "local_aliases": aliases_for_room,
503 "new_room_id": new_room_id,
504 },
505 )
588506
589507
590508 class ResetPasswordRestServlet(RestServlet):
628546 yield self._set_password_handler.set_password(
629547 target_user_id, new_password, requester
630548 )
631 defer.returnValue((200, {}))
549 return (200, {})
632550
633551
634552 class GetUsersPaginatedRestServlet(RestServlet):
670588 logger.info("limit: %s, start: %s", limit, start)
671589
672590 ret = yield self.handlers.admin_handler.get_users_paginate(order, start, limit)
673 defer.returnValue((200, ret))
591 return (200, ret)
674592
675593 @defer.inlineCallbacks
676594 def on_POST(self, request, target_user_id):
698616 logger.info("limit: %s, start: %s", limit, start)
699617
700618 ret = yield self.handlers.admin_handler.get_users_paginate(order, start, limit)
701 defer.returnValue((200, ret))
619 return (200, ret)
702620
703621
704622 class SearchUsersRestServlet(RestServlet):
741659 logger.info("term: %s ", term)
742660
743661 ret = yield self.handlers.admin_handler.search_users(term)
744 defer.returnValue((200, ret))
662 return (200, ret)
745663
746664
747665 class DeleteGroupAdminRestServlet(RestServlet):
764682 raise SynapseError(400, "Can only delete local groups")
765683
766684 yield self.group_server.delete_group(group_id, requester.user.to_string())
767 defer.returnValue((200, {}))
685 return (200, {})
768686
769687
770688 class AccountValidityRenewServlet(RestServlet):
795713 )
796714
797715 res = {"expiration_ts": expiration_ts}
798 defer.returnValue((200, res))
716 return (200, res)
799717
800718
801719 ########################################################################################
826744 def register_servlets_for_client_rest_resource(hs, http_server):
827745 """Register only the servlets which need to be exposed on /_matrix/client/xxx"""
828746 WhoisRestServlet(hs).register(http_server)
829 PurgeMediaCacheRestServlet(hs).register(http_server)
830747 PurgeHistoryStatusRestServlet(hs).register(http_server)
831748 DeactivateAccountRestServlet(hs).register(http_server)
832749 PurgeHistoryRestServlet(hs).register(http_server)
835752 GetUsersPaginatedRestServlet(hs).register(http_server)
836753 SearchUsersRestServlet(hs).register(http_server)
837754 ShutdownRoomRestServlet(hs).register(http_server)
838 QuarantineMediaInRoom(hs).register(http_server)
839 ListMediaInRoom(hs).register(http_server)
840755 UserRegisterServlet(hs).register(http_server)
841756 DeleteGroupAdminRestServlet(hs).register(http_server)
842757 AccountValidityRenewServlet(hs).register(http_server)
758
759 # Load the media repo ones if we're using them.
760 if hs.config.can_load_media_repo:
761 register_servlets_for_media_repo(hs, http_server)
762
843763 # don't add more things here: new servlets should only be exposed on
844764 # /_synapse/admin so should not go here. Instead register them in AdminRestResource.
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
15 import re
16
1417 from twisted.internet import defer
1518
1619 from synapse.api.errors import AuthError
20
21
22 def historical_admin_path_patterns(path_regex):
23 """Returns the list of patterns for an admin endpoint, including historical ones
24
25 This is a backwards-compatibility hack. Previously, the Admin API was exposed at
26 various paths under /_matrix/client. This function returns a list of patterns
27 matching those paths (as well as the new one), so that existing scripts which rely
28 on the endpoints being available there are not broken.
29
30 Note that this should only be used for existing endpoints: new ones should just
31 register for the /_synapse/admin path.
32 """
33 return list(
34 re.compile(prefix + path_regex)
35 for prefix in (
36 "^/_synapse/admin/v1",
37 "^/_matrix/client/api/v1/admin",
38 "^/_matrix/client/unstable/admin",
39 "^/_matrix/client/r0/admin",
40 )
41 )
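The helper moved into `_base.py` above can be exercised standalone. A small sketch (prefix regexes copied from the hunk) showing that a single registration matches both the new `/_synapse/admin` path and the legacy `/_matrix/client` paths:

```python
import re

def historical_admin_path_patterns(path_regex):
    # Compile the endpoint regex under every prefix the Admin API has
    # historically been exposed at (prefixes copied from the diff above).
    return [
        re.compile(prefix + path_regex)
        for prefix in (
            "^/_synapse/admin/v1",
            "^/_matrix/client/api/v1/admin",
            "^/_matrix/client/unstable/admin",
            "^/_matrix/client/r0/admin",
        )
    ]

patterns = historical_admin_path_patterns("/users/(?P<user_id>[^/]*)")

def matches(path):
    return any(p.match(path) for p in patterns)
```

With this, `/_synapse/admin/v1/users/@alice:hs` and `/_matrix/client/r0/admin/users/@alice:hs` both match, while unrelated client paths do not.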
1742
1843
1944 @defer.inlineCallbacks
0 # -*- coding: utf-8 -*-
1 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2018-2019 New Vector Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import logging
17
18 from twisted.internet import defer
19
20 from synapse.api.errors import AuthError
21 from synapse.http.servlet import RestServlet, parse_integer
22 from synapse.rest.admin._base import (
23 assert_requester_is_admin,
24 assert_user_is_admin,
25 historical_admin_path_patterns,
26 )
27
28 logger = logging.getLogger(__name__)
29
30
31 class QuarantineMediaInRoom(RestServlet):
32 """Quarantines all media in a room so that no one can download it via
33 this server.
34 """
35
36 PATTERNS = historical_admin_path_patterns("/quarantine_media/(?P<room_id>[^/]+)")
37
38 def __init__(self, hs):
39 self.store = hs.get_datastore()
40 self.auth = hs.get_auth()
41
42 @defer.inlineCallbacks
43 def on_POST(self, request, room_id):
44 requester = yield self.auth.get_user_by_req(request)
45 yield assert_user_is_admin(self.auth, requester.user)
46
47 num_quarantined = yield self.store.quarantine_media_ids_in_room(
48 room_id, requester.user.to_string()
49 )
50
51 return (200, {"num_quarantined": num_quarantined})
52
53
54 class ListMediaInRoom(RestServlet):
55 """Lists all of the media in a given room.
56 """
57
58 PATTERNS = historical_admin_path_patterns("/room/(?P<room_id>[^/]+)/media")
59
60 def __init__(self, hs):
61 self.store = hs.get_datastore()
62
63 @defer.inlineCallbacks
64 def on_GET(self, request, room_id):
65 requester = yield self.auth.get_user_by_req(request)
66 is_admin = yield self.auth.is_server_admin(requester.user)
67 if not is_admin:
68 raise AuthError(403, "You are not a server admin")
69
70 local_mxcs, remote_mxcs = yield self.store.get_media_mxcs_in_room(room_id)
71
72 return (200, {"local": local_mxcs, "remote": remote_mxcs})
73
74
75 class PurgeMediaCacheRestServlet(RestServlet):
76 PATTERNS = historical_admin_path_patterns("/purge_media_cache")
77
78 def __init__(self, hs):
79 self.media_repository = hs.get_media_repository()
80 self.auth = hs.get_auth()
81
82 @defer.inlineCallbacks
83 def on_POST(self, request):
84 yield assert_requester_is_admin(self.auth, request)
85
86 before_ts = parse_integer(request, "before_ts", required=True)
87 logger.info("before_ts: %r", before_ts)
88
89 ret = yield self.media_repository.delete_old_remote_media(before_ts)
90
91 return (200, ret)
92
93
94 def register_servlets_for_media_repo(hs, http_server):
95 """
96 Media repo specific APIs.
97 """
98 PurgeMediaCacheRestServlet(hs).register(http_server)
99 QuarantineMediaInRoom(hs).register(http_server)
100 ListMediaInRoom(hs).register(http_server)
9191 event_content=body["content"],
9292 )
9393
94 defer.returnValue((200, {"event_id": event.event_id}))
94 return (200, {"event_id": event.event_id})
9595
9696 def on_PUT(self, request, txn_id):
9797 return self.txns.fetch_or_execute_request(
5353 dir_handler = self.handlers.directory_handler
5454 res = yield dir_handler.get_association(room_alias)
5555
56 defer.returnValue((200, res))
56 return (200, res)
5757
5858 @defer.inlineCallbacks
5959 def on_PUT(self, request, room_alias):
8686 requester, room_alias, room_id, servers
8787 )
8888
89 defer.returnValue((200, {}))
89 return (200, {})
9090
9191 @defer.inlineCallbacks
9292 def on_DELETE(self, request, room_alias):
101101 service.url,
102102 room_alias.to_string(),
103103 )
104 defer.returnValue((200, {}))
104 return (200, {})
105105 except InvalidClientCredentialsError:
106106 # fallback to default user behaviour if they aren't an AS
107107 pass
117117 "User %s deleted alias %s", user.to_string(), room_alias.to_string()
118118 )
119119
120 defer.returnValue((200, {}))
120 return (200, {})
121121
122122
123123 class ClientDirectoryListServer(RestServlet):
135135 if room is None:
136136 raise NotFoundError("Unknown room")
137137
138 defer.returnValue(
139 (200, {"visibility": "public" if room["is_public"] else "private"})
140 )
138 return (200, {"visibility": "public" if room["is_public"] else "private"})
141139
142140 @defer.inlineCallbacks
143141 def on_PUT(self, request, room_id):
150148 requester, room_id, visibility
151149 )
152150
153 defer.returnValue((200, {}))
151 return (200, {})
154152
155153 @defer.inlineCallbacks
156154 def on_DELETE(self, request, room_id):
160158 requester, room_id, "private"
161159 )
162160
163 defer.returnValue((200, {}))
161 return (200, {})
164162
165163
166164 class ClientAppserviceDirectoryListServer(RestServlet):
194192 requester.app_service.id, network_id, room_id, visibility
195193 )
196194
197 defer.returnValue((200, {}))
195 return (200, {})
6666 is_guest=is_guest,
6767 )
6868
69 defer.returnValue((200, chunk))
69 return (200, chunk)
7070
7171 def on_OPTIONS(self, request):
7272 return (200, {})
9090 time_now = self.clock.time_msec()
9191 if event:
9292 event = yield self._event_serializer.serialize_event(event, time_now)
93 defer.returnValue((200, event))
93 return (200, event)
9494 else:
95 defer.returnValue((404, "Event not found."))
95 return (404, "Event not found.")
9696
9797
9898 def register_servlets(hs, http_server):
4141 include_archived=include_archived,
4242 )
4343
44 defer.returnValue((200, content))
44 return (200, content)
4545
4646
4747 def register_servlets(hs, http_server):
151151 well_known_data = self._well_known_builder.get_well_known()
152152 if well_known_data:
153153 result["well_known"] = well_known_data
154 defer.returnValue((200, result))
154 return (200, result)
155155
156156 @defer.inlineCallbacks
157157 def _do_other_login(self, login_submission):
211211 result = yield self._register_device_with_callback(
212212 canonical_user_id, login_submission, callback_3pid
213213 )
214 defer.returnValue(result)
214 return result
215215
216216 # No password providers were able to handle this 3pid
217217 # Check local store
240240 result = yield self._register_device_with_callback(
241241 canonical_user_id, login_submission, callback
242242 )
243 defer.returnValue(result)
243 return result
244244
245245 @defer.inlineCallbacks
246246 def _register_device_with_callback(self, user_id, login_submission, callback=None):
272272 if callback is not None:
273273 yield callback(result)
274274
275 defer.returnValue(result)
275 return result
276276
277277 @defer.inlineCallbacks
278278 def do_token_login(self, login_submission):
283283 )
284284
285285 result = yield self._register_device_with_callback(user_id, login_submission)
286 defer.returnValue(result)
286 return result
287287
288288 @defer.inlineCallbacks
289289 def do_jwt_login(self, login_submission):
320320 result = yield self._register_device_with_callback(
321321 registered_user_id, login_submission
322322 )
323 defer.returnValue(result)
323 return result
324324
325325
326326 class BaseSSORedirectServlet(RestServlet):
394394 # even if that's being used old-http style to signal end-of-data
395395 body = pde.response
396396 result = yield self.handle_cas_response(request, body, client_redirect_url)
397 defer.returnValue(result)
397 return result
398398
399399 def handle_cas_response(self, request, cas_response_body, client_redirect_url):
400400 user, attributes = self.parse_cas_response(cas_response_body)
4848 requester.user.to_string(), requester.device_id
4949 )
5050
51 defer.returnValue((200, {}))
51 return (200, {})
5252
5353
5454 class LogoutAllRestServlet(RestServlet):
7474 # .. and then delete any access tokens which weren't associated with
7575 # devices.
7676 yield self._auth_handler.delete_access_tokens_for_user(user_id)
77 defer.returnValue((200, {}))
77 return (200, {})
7878
7979
8080 def register_servlets(hs, http_server):
5555 state = yield self.presence_handler.get_state(target_user=user)
5656 state = format_user_presence_state(state, self.clock.time_msec())
5757
58 defer.returnValue((200, state))
58 return (200, state)
5959
6060 @defer.inlineCallbacks
6161 def on_PUT(self, request, user_id):
8787 if self.hs.config.use_presence:
8888 yield self.presence_handler.set_state(user, state)
8989
90 defer.returnValue((200, {}))
90 return (200, {})
9191
9292 def on_OPTIONS(self, request):
9393 return (200, {})
4747 if displayname is not None:
4848 ret["displayname"] = displayname
4949
50 defer.returnValue((200, ret))
50 return (200, ret)
5151
5252 @defer.inlineCallbacks
5353 def on_PUT(self, request, user_id):
6060 try:
6161 new_name = content["displayname"]
6262 except Exception:
63 defer.returnValue((400, "Unable to parse name"))
63 return (400, "Unable to parse name")
6464
6565 yield self.profile_handler.set_displayname(user, requester, new_name, is_admin)
6666
67 defer.returnValue((200, {}))
67 return (200, {})
6868
6969 def on_OPTIONS(self, request, user_id):
7070 return (200, {})
9797 if avatar_url is not None:
9898 ret["avatar_url"] = avatar_url
9999
100 defer.returnValue((200, ret))
100 return (200, ret)
101101
102102 @defer.inlineCallbacks
103103 def on_PUT(self, request, user_id):
109109 try:
110110 new_name = content["avatar_url"]
111111 except Exception:
112 defer.returnValue((400, "Unable to parse name"))
112 return (400, "Unable to parse name")
113113
114114 yield self.profile_handler.set_avatar_url(user, requester, new_name, is_admin)
115115
116 defer.returnValue((200, {}))
116 return (200, {})
117117
118118 def on_OPTIONS(self, request, user_id):
119119 return (200, {})
149149 if avatar_url is not None:
150150 ret["avatar_url"] = avatar_url
151151
152 defer.returnValue((200, ret))
152 return (200, ret)
153153
154154
155155 def register_servlets(hs, http_server):
6868 if "attr" in spec:
6969 yield self.set_rule_attr(user_id, spec, content)
7070 self.notify_user(user_id)
71 defer.returnValue((200, {}))
71 return (200, {})
7272
7373 if spec["rule_id"].startswith("."):
7474 # Rule ids starting with '.' are reserved for server default rules.
105105 except RuleNotFoundException as e:
106106 raise SynapseError(400, str(e))
107107
108 defer.returnValue((200, {}))
108 return (200, {})
109109
110110 @defer.inlineCallbacks
111111 def on_DELETE(self, request, path):
122122 try:
123123 yield self.store.delete_push_rule(user_id, namespaced_rule_id)
124124 self.notify_user(user_id)
125 defer.returnValue((200, {}))
125 return (200, {})
126126 except StoreError as e:
127127 if e.code == 404:
128128 raise NotFoundError()
150150 )
151151
152152 if path[0] == "":
153 defer.returnValue((200, rules))
153 return (200, rules)
154154 elif path[0] == "global":
155155 result = _filter_ruleset_with_path(rules["global"], path[1:])
156 defer.returnValue((200, result))
156 return (200, result)
157157 else:
158158 raise UnrecognizedRequestError()
159159
6161 if k not in allowed_keys:
6262 del p[k]
6363
64 defer.returnValue((200, {"pushers": pushers}))
64 return (200, {"pushers": pushers})
6565
6666 def on_OPTIONS(self, _):
6767 return 200, {}
9393 yield self.pusher_pool.remove_pusher(
9494 content["app_id"], content["pushkey"], user_id=user.to_string()
9595 )
96 defer.returnValue((200, {}))
96 return (200, {})
9797
9898 assert_params_in_dict(
9999 content,
142142
143143 self.notifier.on_new_replication_data()
144144
145 defer.returnValue((200, {}))
145 return (200, {})
146146
147147 def on_OPTIONS(self, _):
148148 return 200, {}
189189 )
190190 request.write(PushersRemoveRestServlet.SUCCESS_HTML)
191191 finish_request(request)
192 defer.returnValue(None)
192 return None
193193
194194 def on_OPTIONS(self, _):
195195 return 200, {}
9090 requester, self.get_room_config(request)
9191 )
9292
93 defer.returnValue((200, info))
93 return (200, info)
9494
9595 def get_room_config(self, request):
9696 user_supplied_config = parse_json_object_from_request(request)
172172
173173 if format == "event":
174174 event = format_event_for_client_v2(data.get_dict())
175 defer.returnValue((200, event))
175 return (200, event)
176176 elif format == "content":
177 defer.returnValue((200, data.get_dict()["content"]))
177 return (200, data.get_dict()["content"])
178178
179179 @defer.inlineCallbacks
180180 def on_PUT(self, request, room_id, event_type, state_key, txn_id=None):
209209 ret = {}
210210 if event:
211211 ret = {"event_id": event.event_id}
212 defer.returnValue((200, ret))
212 return (200, ret)
213213
214214
215215 # TODO: Needs unit testing for generic events + feedback
243243 requester, event_dict, txn_id=txn_id
244244 )
245245
246 defer.returnValue((200, {"event_id": event.event_id}))
246 return (200, {"event_id": event.event_id})
247247
248248 def on_GET(self, request, room_id, event_type, txn_id):
249249 return (200, "Not implemented")
306306 third_party_signed=content.get("third_party_signed", None),
307307 )
308308
309 defer.returnValue((200, {"room_id": room_id}))
309 return (200, {"room_id": room_id})
310310
311311 def on_PUT(self, request, room_identifier, txn_id):
312312 return self.txns.fetch_or_execute_request(
359359 limit=limit, since_token=since_token
360360 )
361361
362 defer.returnValue((200, data))
362 return (200, data)
363363
364364 @defer.inlineCallbacks
365365 def on_POST(self, request):
404404 network_tuple=network_tuple,
405405 )
406406
407 defer.returnValue((200, data))
407 return (200, data)
408408
409409
410410 # TODO: Needs unit testing
455455 continue
456456 chunk.append(event)
457457
458 defer.returnValue((200, {"chunk": chunk}))
458 return (200, {"chunk": chunk})
459459
460460
461461 # deprecated in favour of /members?membership=join?
476476 requester, room_id
477477 )
478478
479 defer.returnValue((200, {"joined": users_with_profile}))
479 return (200, {"joined": users_with_profile})
480480
481481
482482 # TODO: Needs better unit testing
509509 event_filter=event_filter,
510510 )
511511
512 defer.returnValue((200, msgs))
512 return (200, msgs)
513513
514514
515515 # TODO: Needs unit testing
530530 user_id=requester.user.to_string(),
531531 is_guest=requester.is_guest,
532532 )
533 defer.returnValue((200, events))
533 return (200, events)
534534
535535
536536 # TODO: Needs unit testing
549549 content = yield self.initial_sync_handler.room_initial_sync(
550550 room_id=room_id, requester=requester, pagin_config=pagination_config
551551 )
552 defer.returnValue((200, content))
552 return (200, content)
553553
554554
555555 class RoomEventServlet(RestServlet):
567567 @defer.inlineCallbacks
568568 def on_GET(self, request, room_id, event_id):
569569 requester = yield self.auth.get_user_by_req(request, allow_guest=True)
570 event = yield self.event_handler.get_event(requester.user, room_id, event_id)
570 try:
571 event = yield self.event_handler.get_event(
572 requester.user, room_id, event_id
573 )
574 except AuthError:
575 # This endpoint is supposed to return a 404 when the requester does
576 # not have permission to access the event
577 # https://matrix.org/docs/spec/client_server/r0.5.0#get-matrix-client-r0-rooms-roomid-event-eventid
578 raise SynapseError(404, "Event not found.", errcode=Codes.NOT_FOUND)
571579
572580 time_now = self.clock.time_msec()
573581 if event:
574582 event = yield self._event_serializer.serialize_event(event, time_now)
575 defer.returnValue((200, event))
576 else:
577 defer.returnValue((404, "Event not found."))
583 return (200, event)
584
585 raise SynapseError(404, "Event not found.", errcode=Codes.NOT_FOUND)
578586
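The `RoomEventServlet` hunk above turns permission failures into 404s, per the r0.5.0 client-server spec, so the response does not reveal whether an event exists. A dependency-free sketch of that behaviour (the exception class and functions are stand-ins, not Synapse's):

```python
class AuthError(Exception):
    """Stand-in for synapse.api.errors.AuthError."""

def get_event(allowed):
    # Stand-in for event_handler.get_event: raises when the requester
    # lacks permission to see the event.
    if not allowed:
        raise AuthError()
    return {"event_id": "$abc"}

def on_GET(allowed):
    try:
        event = get_event(allowed)
    except AuthError:
        # Denied and not-found look identical to the caller, hiding
        # the event's existence.
        return (404, "Event not found.")
    return (200, event)
```

A forbidden request and a request for a nonexistent event now produce the same 404 body.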
579587
580588 class RoomEventContextServlet(RestServlet):
624632 results["state"], time_now
625633 )
626634
627 defer.returnValue((200, results))
635 return (200, results)
628636
629637
630638 class RoomForgetRestServlet(TransactionRestServlet):
643651
644652 yield self.room_member_handler.forget(user=requester.user, room_id=room_id)
645653
646 defer.returnValue((200, {}))
654 return (200, {})
647655
648656 def on_PUT(self, request, room_id, txn_id):
649657 return self.txns.fetch_or_execute_request(
693701 requester,
694702 txn_id,
695703 )
696 defer.returnValue((200, {}))
704 return (200, {})
697705 return
698706
699707 target = requester.user
720728 if membership_action == "join":
721729 return_value["room_id"] = room_id
722730
723 defer.returnValue((200, return_value))
731 return (200, return_value)
724732
725733 def _has_3pid_invite_keys(self, content):
726734 for key in {"id_server", "medium", "address"}:
762770 txn_id=txn_id,
763771 )
764772
765 defer.returnValue((200, {"event_id": event.event_id}))
773 return (200, {"event_id": event.event_id})
766774
767775 def on_PUT(self, request, room_id, event_id, txn_id):
768776 return self.txns.fetch_or_execute_request(
807815 target_user=target_user, auth_user=requester.user, room_id=room_id
808816 )
809817
810 defer.returnValue((200, {}))
818 return (200, {})
811819
812820
813821 class SearchRestServlet(RestServlet):
829837 requester.user, content, batch
830838 )
831839
832 defer.returnValue((200, results))
840 return (200, results)
833841
834842
835843 class JoinedRoomsRestServlet(RestServlet):
845853 requester = yield self.auth.get_user_by_req(request, allow_guest=True)
846854
847855 room_ids = yield self.store.get_rooms_for_user(requester.user.to_string())
848 defer.returnValue((200, {"joined_rooms": list(room_ids)}))
856 return (200, {"joined_rooms": list(room_ids)})
849857
850858
851859 def register_txn_path(servlet, regex_string, http_server, with_get=False):
5959 password = turnPassword
6060
6161 else:
62 defer.returnValue((200, {}))
62 return (200, {})
6363
64 defer.returnValue(
65 (
66 200,
67 {
68 "username": username,
69 "password": password,
70 "ttl": userLifetime / 1000,
71 "uris": turnUris,
72 },
73 )
64 return (
65 200,
66 {
67 "username": username,
68 "password": password,
69 "ttl": userLifetime / 1000,
70 "uris": turnUris,
71 },
7472 )
7573
7674 def on_OPTIONS(self, request):
116116 # Wrap the session id in a JSON object
117117 ret = {"sid": sid}
118118
119 defer.returnValue((200, ret))
119 return (200, ret)
120120
121121 @defer.inlineCallbacks
122122 def send_password_reset(self, email, client_secret, send_attempt, next_link=None):
148148 # Check that the send_attempt is higher than previous attempts
149149 if send_attempt <= last_send_attempt:
150150 # If not, just return a success without sending an email
151 defer.returnValue(session_id)
151 return session_id
152152 else:
153153 # A non-validated session does not exist yet.
154154 # Generate a session id
184184 token_expires,
185185 )
186186
187 defer.returnValue(session_id)
187 return session_id
188188
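The password-reset hunk above short-circuits when a client retries with a stale counter: an email is only re-sent when `send_attempt` exceeds the highest value previously seen for that session; otherwise the existing session id is returned as a success. A minimal sketch of that guard (function name is illustrative):

```python
def should_send_email(send_attempt, last_send_attempt):
    # An equal or lower counter is a duplicate of a request we already
    # handled, so report success without sending another email.
    return send_attempt > last_send_attempt
```

This lets clients retry idempotently: resubmitting the same form does not spam the user's inbox.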
189189
190190 class MsisdnPasswordRequestTokenRestServlet(RestServlet):
220220 raise SynapseError(400, "MSISDN not found", Codes.THREEPID_NOT_FOUND)
221221
222222 ret = yield self.identity_handler.requestMsisdnToken(**body)
223 defer.returnValue((200, ret))
223 return (200, ret)
224224
225225
226226 class PasswordResetSubmitTokenServlet(RestServlet):
278278 request.setResponseCode(302)
279279 request.setHeader("Location", next_link)
280280 finish_request(request)
281 defer.returnValue(None)
281 return None
282282
283283 # Otherwise show the success template
284284 html = self.config.email_password_reset_success_html_content
294294
295295 request.write(html.encode("utf-8"))
296296 finish_request(request)
297 defer.returnValue(None)
297 return None
298298
299299 def load_jinja2_template(self, template_dir, template_filename, template_vars):
300300 """Loads a jinja2 template with variables to insert
329329 )
330330 response_code = 200 if valid else 400
331331
332 defer.returnValue((response_code, {"success": valid}))
332 return (response_code, {"success": valid})
333333
334334
335335 class PasswordRestServlet(RestServlet):
398398
399399 yield self._set_password_handler.set_password(user_id, new_password, requester)
400400
401 defer.returnValue((200, {}))
401 return (200, {})
402402
403403 def on_OPTIONS(self, _):
404404 return 200, {}
433433 yield self._deactivate_account_handler.deactivate_account(
434434 requester.user.to_string(), erase
435435 )
436 defer.returnValue((200, {}))
436 return (200, {})
437437
438438 yield self.auth_handler.validate_user_via_ui_auth(
439439 requester, body, self.hs.get_ip_from_request(request)
446446 else:
447447 id_server_unbind_result = "no-support"
448448
449 defer.returnValue((200, {"id_server_unbind_result": id_server_unbind_result}))
449 return (200, {"id_server_unbind_result": id_server_unbind_result})
450450
451451
452452 class EmailThreepidRequestTokenRestServlet(RestServlet):
480480 raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE)
481481
482482 ret = yield self.identity_handler.requestEmailToken(**body)
483 defer.returnValue((200, ret))
483 return (200, ret)
484484
485485
486486 class MsisdnThreepidRequestTokenRestServlet(RestServlet):
515515 raise SynapseError(400, "MSISDN is already in use", Codes.THREEPID_IN_USE)
516516
517517 ret = yield self.identity_handler.requestMsisdnToken(**body)
518 defer.returnValue((200, ret))
518 return (200, ret)
519519
520520
521521 class ThreepidRestServlet(RestServlet):
535535
536536 threepids = yield self.datastore.user_get_threepids(requester.user.to_string())
537537
538 defer.returnValue((200, {"threepids": threepids}))
538 return (200, {"threepids": threepids})
539539
540540 @defer.inlineCallbacks
541541 def on_POST(self, request):
567567 logger.debug("Binding threepid %s to %s", threepid, user_id)
568568 yield self.identity_handler.bind_threepid(threePidCreds, user_id)
569569
570 defer.returnValue((200, {}))
570 return (200, {})
571571
572572
573573 class ThreepidDeleteRestServlet(RestServlet):
602602 else:
603603 id_server_unbind_result = "no-support"
604604
605 defer.returnValue((200, {"id_server_unbind_result": id_server_unbind_result}))
605 return (200, {"id_server_unbind_result": id_server_unbind_result})
606606
607607
608608 class WhoamiRestServlet(RestServlet):
616616 def on_GET(self, request):
617617 requester = yield self.auth.get_user_by_req(request)
618618
619 defer.returnValue((200, {"user_id": requester.user.to_string()}))
619 return (200, {"user_id": requester.user.to_string()})
620620
621621
622622 def register_servlets(hs, http_server):
5454
5555 self.notifier.on_new_event("account_data_key", max_id, users=[user_id])
5656
57 defer.returnValue((200, {}))
57 return (200, {})
5858
5959 @defer.inlineCallbacks
6060 def on_GET(self, request, user_id, account_data_type):
6969 if event is None:
7070 raise NotFoundError("Account data not found")
7171
72 defer.returnValue((200, event))
72 return (200, event)
7373
7474
7575 class RoomAccountDataServlet(RestServlet):
111111
112112 self.notifier.on_new_event("account_data_key", max_id, users=[user_id])
113113
114 defer.returnValue((200, {}))
114 return (200, {})
115115
116116 @defer.inlineCallbacks
117117 def on_GET(self, request, user_id, room_id, account_data_type):
126126 if event is None:
127127 raise NotFoundError("Room account data not found")
128128
129 defer.returnValue((200, event))
129 return (200, event)
130130
131131
132132 def register_servlets(hs, http_server):
4141 self.hs = hs
4242 self.account_activity_handler = hs.get_account_validity_handler()
4343 self.auth = hs.get_auth()
44 self.success_html = hs.config.account_validity.account_renewed_html_content
45 self.failure_html = hs.config.account_validity.invalid_token_html_content
4446
4547 @defer.inlineCallbacks
4648 def on_GET(self, request):
4850 raise SynapseError(400, "Missing renewal token")
4951 renewal_token = request.args[b"token"][0]
5052
51 yield self.account_activity_handler.renew_account(renewal_token.decode("utf8"))
53 token_valid = yield self.account_activity_handler.renew_account(
54 renewal_token.decode("utf8")
55 )
5256
53 request.setResponseCode(200)
57 if token_valid:
58 status_code = 200
59 response = self.success_html
60 else:
61 status_code = 404
62 response = self.failure_html
63
64 request.setResponseCode(status_code)
5465 request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
55 request.setHeader(
56 b"Content-Length", b"%d" % (len(AccountValidityRenewServlet.SUCCESS_HTML),)
57 )
58 request.write(AccountValidityRenewServlet.SUCCESS_HTML)
66 request.setHeader(b"Content-Length", b"%d" % (len(response),))
67 request.write(response.encode("utf8"))
5968 finish_request(request)
6069 defer.returnValue(None)
6170
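The account-validity hunk above stops assuming renewal always succeeds: `renew_account` now reports whether the token was valid, and the servlet picks the status code and template accordingly. A sketch of the selection logic (the template strings are placeholders for the configured HTML content):

```python
def renewal_response(token_valid):
    # Mirrors the branch added above: 200 plus the success page for a
    # valid token, 404 plus the failure page otherwise.
    if token_valid:
        return 200, "<html>account renewed</html>"
    return 404, "<html>invalid renewal token</html>"
```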
206206 request.write(html_bytes)
207207 finish_request(request)
208208
209 defer.returnValue(None)
209 return None
210210 elif stagetype == LoginType.TERMS:
211211 if "session" not in request.args or len(request.args["session"]) == 0:
212212 raise SynapseError(400, "No session supplied")
238238
239239 request.write(html_bytes)
240240 finish_request(request)
241 defer.returnValue(None)
241 return None
242242 else:
243243 raise SynapseError(404, "Unknown auth stage type")
244244
5757 "m.change_password": {"enabled": change_password},
5858 }
5959 }
60 defer.returnValue((200, response))
60 return (200, response)
6161
6262
6363 def register_servlets(hs, http_server):
4747 devices = yield self.device_handler.get_devices_by_user(
4848 requester.user.to_string()
4949 )
50 defer.returnValue((200, {"devices": devices}))
50 return (200, {"devices": devices})
5151
5252
5353 class DeleteDevicesRestServlet(RestServlet):
9090 yield self.device_handler.delete_devices(
9191 requester.user.to_string(), body["devices"]
9292 )
93 defer.returnValue((200, {}))
93 return (200, {})
9494
9595
9696 class DeviceRestServlet(RestServlet):
113113 device = yield self.device_handler.get_device(
114114 requester.user.to_string(), device_id
115115 )
116 defer.returnValue((200, device))
116 return (200, device)
117117
118118 @interactive_auth_handler
119119 @defer.inlineCallbacks
136136 )
137137
138138 yield self.device_handler.delete_device(requester.user.to_string(), device_id)
139 defer.returnValue((200, {}))
139 return (200, {})
140140
141141 @defer.inlineCallbacks
142142 def on_PUT(self, request, device_id):
146146 yield self.device_handler.update_device(
147147 requester.user.to_string(), device_id, body
148148 )
149 defer.returnValue((200, {}))
149 return (200, {})
150150
151151
152152 def register_servlets(hs, http_server):
5555 user_localpart=target_user.localpart, filter_id=filter_id
5656 )
5757
58 defer.returnValue((200, filter.get_filter_json()))
58 return (200, filter.get_filter_json())
5959 except (KeyError, StoreError):
6060 raise SynapseError(400, "No such filter", errcode=Codes.NOT_FOUND)
6161
8888 user_localpart=target_user.localpart, user_filter=content
8989 )
9090
91 defer.returnValue((200, {"filter_id": str(filter_id)}))
91 return (200, {"filter_id": str(filter_id)})
9292
9393
9494 def register_servlets(hs, http_server):
4646 group_id, requester_user_id
4747 )
4848
49 defer.returnValue((200, group_description))
49 return (200, group_description)
5050
5151 @defer.inlineCallbacks
5252 def on_POST(self, request, group_id):
5858 group_id, requester_user_id, content
5959 )
6060
61 defer.returnValue((200, {}))
61 return (200, {})
6262
6363
6464 class GroupSummaryServlet(RestServlet):
8282 group_id, requester_user_id
8383 )
8484
85 defer.returnValue((200, get_group_summary))
85 return (200, get_group_summary)
8686
8787
8888 class GroupSummaryRoomsCatServlet(RestServlet):
119119 content=content,
120120 )
121121
122 defer.returnValue((200, resp))
122 return (200, resp)
123123
124124 @defer.inlineCallbacks
125125 def on_DELETE(self, request, group_id, category_id, room_id):
130130 group_id, requester_user_id, room_id=room_id, category_id=category_id
131131 )
132132
133 defer.returnValue((200, resp))
133 return (200, resp)
134134
135135
136136 class GroupCategoryServlet(RestServlet):
156156 group_id, requester_user_id, category_id=category_id
157157 )
158158
159 defer.returnValue((200, category))
159 return (200, category)
160160
161161 @defer.inlineCallbacks
162162 def on_PUT(self, request, group_id, category_id):
168168 group_id, requester_user_id, category_id=category_id, content=content
169169 )
170170
171 defer.returnValue((200, resp))
171 return (200, resp)
172172
173173 @defer.inlineCallbacks
174174 def on_DELETE(self, request, group_id, category_id):
179179 group_id, requester_user_id, category_id=category_id
180180 )
181181
182 defer.returnValue((200, resp))
182 return (200, resp)
183183
184184
185185 class GroupCategoriesServlet(RestServlet):
203203 group_id, requester_user_id
204204 )
205205
206 defer.returnValue((200, category))
206 return (200, category)
207207
208208
209209 class GroupRoleServlet(RestServlet):
227227 group_id, requester_user_id, role_id=role_id
228228 )
229229
230 defer.returnValue((200, category))
230 return (200, category)
231231
232232 @defer.inlineCallbacks
233233 def on_PUT(self, request, group_id, role_id):
239239 group_id, requester_user_id, role_id=role_id, content=content
240240 )
241241
242 defer.returnValue((200, resp))
242 return (200, resp)
243243
244244 @defer.inlineCallbacks
245245 def on_DELETE(self, request, group_id, role_id):
250250 group_id, requester_user_id, role_id=role_id
251251 )
252252
253 defer.returnValue((200, resp))
253 return (200, resp)
254254
255255
256256 class GroupRolesServlet(RestServlet):
274274 group_id, requester_user_id
275275 )
276276
277 defer.returnValue((200, category))
277 return (200, category)
278278
279279
280280 class GroupSummaryUsersRoleServlet(RestServlet):
311311 content=content,
312312 )
313313
314 defer.returnValue((200, resp))
314 return (200, resp)
315315
316316 @defer.inlineCallbacks
317317 def on_DELETE(self, request, group_id, role_id, user_id):
322322 group_id, requester_user_id, user_id=user_id, role_id=role_id
323323 )
324324
325 defer.returnValue((200, resp))
325 return (200, resp)
326326
327327
328328 class GroupRoomServlet(RestServlet):
346346 group_id, requester_user_id
347347 )
348348
349 defer.returnValue((200, result))
349 return (200, result)
350350
351351
352352 class GroupUsersServlet(RestServlet):
370370 group_id, requester_user_id
371371 )
372372
373 defer.returnValue((200, result))
373 return (200, result)
374374
375375
376376 class GroupInvitedUsersServlet(RestServlet):
394394 group_id, requester_user_id
395395 )
396396
397 defer.returnValue((200, result))
397 return (200, result)
398398
399399
400400 class GroupSettingJoinPolicyServlet(RestServlet):
419419 group_id, requester_user_id, content
420420 )
421421
422 defer.returnValue((200, result))
422 return (200, result)
423423
424424
425425 class GroupCreateServlet(RestServlet):
449449 group_id, requester_user_id, content
450450 )
451451
452 defer.returnValue((200, result))
452 return (200, result)
453453
454454
455455 class GroupAdminRoomsServlet(RestServlet):
476476 group_id, requester_user_id, room_id, content
477477 )
478478
479 defer.returnValue((200, result))
479 return (200, result)
480480
481481 @defer.inlineCallbacks
482482 def on_DELETE(self, request, group_id, room_id):
487487 group_id, requester_user_id, room_id
488488 )
489489
490 defer.returnValue((200, result))
490 return (200, result)
491491
492492
493493 class GroupAdminRoomsConfigServlet(RestServlet):
515515 group_id, requester_user_id, room_id, config_key, content
516516 )
517517
518 defer.returnValue((200, result))
518 return (200, result)
519519
520520
521521 class GroupAdminUsersInviteServlet(RestServlet):
545545 group_id, user_id, requester_user_id, config
546546 )
547547
548 defer.returnValue((200, result))
548 return (200, result)
549549
550550
551551 class GroupAdminUsersKickServlet(RestServlet):
572572 group_id, user_id, requester_user_id, content
573573 )
574574
575 defer.returnValue((200, result))
575 return (200, result)
576576
577577
578578 class GroupSelfLeaveServlet(RestServlet):
597597 group_id, requester_user_id, requester_user_id, content
598598 )
599599
600 defer.returnValue((200, result))
600 return (200, result)
601601
602602
603603 class GroupSelfJoinServlet(RestServlet):
622622 group_id, requester_user_id, content
623623 )
624624
625 defer.returnValue((200, result))
625 return (200, result)
626626
627627
628628 class GroupSelfAcceptInviteServlet(RestServlet):
647647 group_id, requester_user_id, content
648648 )
649649
650 defer.returnValue((200, result))
650 return (200, result)
651651
652652
653653 class GroupSelfUpdatePublicityServlet(RestServlet):
671671 publicise = content["publicise"]
672672 yield self.store.update_group_publicity(group_id, requester_user_id, publicise)
673673
674 defer.returnValue((200, {}))
674 return (200, {})
675675
676676
677677 class PublicisedGroupsForUserServlet(RestServlet):
693693
694694 result = yield self.groups_handler.get_publicised_groups_for_user(user_id)
695695
696 defer.returnValue((200, result))
696 return (200, result)
697697
698698
699699 class PublicisedGroupsForUsersServlet(RestServlet):
718718
719719 result = yield self.groups_handler.bulk_get_publicised_groups(user_ids)
720720
721 defer.returnValue((200, result))
721 return (200, result)
722722
723723
724724 class GroupsForUserServlet(RestServlet):
740740
741741 result = yield self.groups_handler.get_joined_groups(requester_user_id)
742742
743 defer.returnValue((200, result))
743 return (200, result)
744744
745745
746746 def register_servlets(hs, http_server):
9494 result = yield self.e2e_keys_handler.upload_keys_for_user(
9595 user_id, device_id, body
9696 )
97 defer.returnValue((200, result))
97 return (200, result)
9898
9999
100100 class KeyQueryServlet(RestServlet):
148148 timeout = parse_integer(request, "timeout", 10 * 1000)
149149 body = parse_json_object_from_request(request)
150150 result = yield self.e2e_keys_handler.query_devices(body, timeout)
151 defer.returnValue((200, result))
151 return (200, result)
152152
153153
154154 class KeyChangesServlet(RestServlet):
188188
189189 results = yield self.device_handler.get_user_ids_changed(user_id, from_token)
190190
191 defer.returnValue((200, results))
191 return (200, results)
192192
193193
194194 class OneTimeKeyServlet(RestServlet):
223223 timeout = parse_integer(request, "timeout", 10 * 1000)
224224 body = parse_json_object_from_request(request)
225225 result = yield self.e2e_keys_handler.claim_one_time_keys(body, timeout)
226 defer.returnValue((200, result))
226 return (200, result)
227227
228228
229229 def register_servlets(hs, http_server):
8787 returned_push_actions.append(returned_pa)
8888 next_token = str(pa["stream_ordering"])
8989
90 defer.returnValue(
91 (200, {"notifications": returned_push_actions, "next_token": next_token})
92 )
90 return (200, {"notifications": returned_push_actions, "next_token": next_token})
9391
9492
9593 def register_servlets(hs, http_server):
8282
8383 yield self.store.insert_open_id_token(token, ts_valid_until_ms, user_id)
8484
85 defer.returnValue(
86 (
87 200,
88 {
89 "access_token": token,
90 "token_type": "Bearer",
91 "matrix_server_name": self.server_name,
92 "expires_in": self.EXPIRES_MS / 1000,
93 },
94 )
85 return (
86 200,
87 {
88 "access_token": token,
89 "token_type": "Bearer",
90 "matrix_server_name": self.server_name,
91 "expires_in": self.EXPIRES_MS / 1000,
92 },
9593 )
9694
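Servlets like the one above return `(status, body)` tuples from a generator, and the `inlineCallbacks` machinery resumes the generator each time a yielded Deferred fires. A toy, stdlib-only driver with the same shape (names here are illustrative, not Twisted's API):

```python
def run_to_completion(gen, feed=None):
    # Toy driver shaped like Twisted's inlineCallbacks loop: resume the
    # generator, sending back each result (where Twisted would wait on a
    # Deferred), and return the generator's return value.
    value = None
    try:
        while True:
            yielded = gen.send(value)
            value = feed(yielded) if feed else yielded
    except StopIteration as stop:
        return stop.value

def servlet_body():
    # Hypothetical servlet body: `yield` stands in for an async store call.
    token = yield "make_token"
    return (200, {"access_token": token, "token_type": "Bearer"})
```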
9795
5858 event_id=read_marker_event_id,
5959 )
6060
61 defer.returnValue((200, {}))
61 return (200, {})
6262
6363
6464 def register_servlets(hs, http_server):
5151 room_id, receipt_type, user_id=requester.user.to_string(), event_id=event_id
5252 )
5353
54 defer.returnValue((200, {}))
54 return (200, {})
5555
5656
5757 def register_servlets(hs, http_server):
9494 raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE)
9595
9696 ret = yield self.identity_handler.requestEmailToken(**body)
97 defer.returnValue((200, ret))
97 return (200, ret)
9898
9999
100100 class MsisdnRegisterRequestTokenRestServlet(RestServlet):
137137 )
138138
139139 ret = yield self.identity_handler.requestMsisdnToken(**body)
140 defer.returnValue((200, ret))
140 return (200, ret)
141141
142142
143143 class UsernameAvailabilityRestServlet(RestServlet):
177177
178178 yield self.registration_handler.check_username(username)
179179
180 defer.returnValue((200, {"available": True}))
180 return (200, {"available": True})
181181
182182
183183 class RegisterRestServlet(RestServlet):
229229
230230 if kind == b"guest":
231231 ret = yield self._do_guest_registration(body, address=client_addr)
232 defer.returnValue(ret)
232 return ret
233233 return
234234 elif kind != b"user":
235235 raise UnrecognizedRequestError(
281281 result = yield self._do_appservice_registration(
282282 desired_username, access_token, body
283283 )
284 defer.returnValue((200, result)) # we throw for non 200 responses
284 return (200, result) # we throw for non 200 responses
285285 return
286286
287287 # for either shared secret or regular registration, downcase the
300300 result = yield self._do_shared_secret_registration(
301301 desired_username, desired_password, body
302302 )
303 defer.returnValue((200, result)) # we throw for non 200 responses
303 return (200, result) # we throw for non 200 responses
304304 return
305305
306306 # == Normal User Registration == (everyone else)
499499 bind_msisdn=params.get("bind_msisdn"),
500500 )
501501
502 defer.returnValue((200, return_dict))
502 return (200, return_dict)
503503
504504 def on_OPTIONS(self, _):
505505 return 200, {}
509509 user_id = yield self.registration_handler.appservice_register(
510510 username, as_token
511511 )
512 defer.returnValue((yield self._create_registration_details(user_id, body)))
512 return (yield self._create_registration_details(user_id, body))
513513
514514 @defer.inlineCallbacks
515515 def _do_shared_secret_registration(self, username, password, body):
545545 )
546546
547547 result = yield self._create_registration_details(user_id, body)
548 defer.returnValue(result)
548 return result
549549
550550 @defer.inlineCallbacks
551551 def _create_registration_details(self, user_id, params):
569569 )
570570
571571 result.update({"access_token": access_token, "device_id": device_id})
572 defer.returnValue(result)
572 return result
573573
574574 @defer.inlineCallbacks
575575 def _do_guest_registration(self, params, address=None):
587587 user_id, device_id, initial_display_name, is_guest=True
588588 )
589589
590 defer.returnValue(
591 (
592 200,
593 {
594 "user_id": user_id,
595 "device_id": device_id,
596 "access_token": access_token,
597 "home_server": self.hs.hostname,
598 },
599 )
590 return (
591 200,
592 {
593 "user_id": user_id,
594 "device_id": device_id,
595 "access_token": access_token,
596 "home_server": self.hs.hostname,
597 },
600598 )
601599
602600
117117 requester, event_dict=event_dict, txn_id=txn_id
118118 )
119119
120 defer.returnValue((200, {"event_id": event.event_id}))
120 return (200, {"event_id": event.event_id})
121121
122122
123123 class RelationPaginationServlet(RestServlet):
197197 return_value["chunk"] = events
198198 return_value["original_event"] = original_event
199199
200 defer.returnValue((200, return_value))
200 return (200, return_value)
201201
202202
203203 class RelationAggregationPaginationServlet(RestServlet):
269269 to_token=to_token,
270270 )
271271
272 defer.returnValue((200, pagination_chunk.to_dict()))
272 return (200, pagination_chunk.to_dict())
273273
274274
275275 class RelationAggregationGroupPaginationServlet(RestServlet):
355355 return_value = result.to_dict()
356356 return_value["chunk"] = events
357357
358 defer.returnValue((200, return_value))
358 return (200, return_value)
359359
360360
361361 def register_servlets(hs, http_server):
7171 received_ts=self.clock.time_msec(),
7272 )
7373
74 defer.returnValue((200, {}))
74 return (200, {})
7575
7676
7777 def register_servlets(hs, http_server):
134134 body = {"rooms": {room_id: body}}
135135
136136 yield self.e2e_room_keys_handler.upload_room_keys(user_id, version, body)
137 defer.returnValue((200, {}))
137 return (200, {})
138138
139139 @defer.inlineCallbacks
140140 def on_GET(self, request, room_id, session_id):
217217 else:
218218 room_keys = room_keys["rooms"][room_id]
219219
220 defer.returnValue((200, room_keys))
220 return (200, room_keys)
221221
222222 @defer.inlineCallbacks
223223 def on_DELETE(self, request, room_id, session_id):
241241 yield self.e2e_room_keys_handler.delete_room_keys(
242242 user_id, version, room_id, session_id
243243 )
244 defer.returnValue((200, {}))
244 return (200, {})
245245
246246
247247 class RoomKeysNewVersionServlet(RestServlet):
292292 info = parse_json_object_from_request(request)
293293
294294 new_version = yield self.e2e_room_keys_handler.create_version(user_id, info)
295 defer.returnValue((200, {"version": new_version}))
295 return (200, {"version": new_version})
296296
297297 # we deliberately don't have a PUT /version, as these things really should
298298 # be immutable to avoid people footgunning
337337 except SynapseError as e:
338338 if e.code == 404:
339339 raise SynapseError(404, "No backup found", Codes.NOT_FOUND)
340 defer.returnValue((200, info))
340 return (200, info)
341341
342342 @defer.inlineCallbacks
343343 def on_DELETE(self, request, version):
357357 user_id = requester.user.to_string()
358358
359359 yield self.e2e_room_keys_handler.delete_version(user_id, version)
360 defer.returnValue((200, {}))
360 return (200, {})
361361
362362 @defer.inlineCallbacks
363363 def on_PUT(self, request, version):
391391 )
392392
393393 yield self.e2e_room_keys_handler.update_version(user_id, version, info)
394 defer.returnValue((200, {}))
394 return (200, {})
395395
396396
397397 def register_servlets(hs, http_server):
7979
8080 ret = {"replacement_room": new_room_id}
8181
82 defer.returnValue((200, ret))
82 return (200, ret)
8383
8484
8585 def register_servlets(hs, http_server):
5959 )
6060
6161 response = (200, {})
62 defer.returnValue(response)
62 return response
6363
6464
6565 def register_servlets(hs, http_server):
173173 time_now, sync_result, requester.access_token_id, filter
174174 )
175175
176 defer.returnValue((200, response_content))
176 return (200, response_content)
177177
178178 @defer.inlineCallbacks
179179 def encode_response(self, time_now, sync_result, access_token_id, filter):
204204 event_formatter,
205205 )
206206
207 defer.returnValue(
208 {
209 "account_data": {"events": sync_result.account_data},
210 "to_device": {"events": sync_result.to_device},
211 "device_lists": {
212 "changed": list(sync_result.device_lists.changed),
213 "left": list(sync_result.device_lists.left),
214 },
215 "presence": SyncRestServlet.encode_presence(
216 sync_result.presence, time_now
217 ),
218 "rooms": {"join": joined, "invite": invited, "leave": archived},
219 "groups": {
220 "join": sync_result.groups.join,
221 "invite": sync_result.groups.invite,
222 "leave": sync_result.groups.leave,
223 },
224 "device_one_time_keys_count": sync_result.device_one_time_keys_count,
225 "next_batch": sync_result.next_batch.to_string(),
226 }
227 )
207 return {
208 "account_data": {"events": sync_result.account_data},
209 "to_device": {"events": sync_result.to_device},
210 "device_lists": {
211 "changed": list(sync_result.device_lists.changed),
212 "left": list(sync_result.device_lists.left),
213 },
214 "presence": SyncRestServlet.encode_presence(sync_result.presence, time_now),
215 "rooms": {"join": joined, "invite": invited, "leave": archived},
216 "groups": {
217 "join": sync_result.groups.join,
218 "invite": sync_result.groups.invite,
219 "leave": sync_result.groups.leave,
220 },
221 "device_one_time_keys_count": sync_result.device_one_time_keys_count,
222 "next_batch": sync_result.next_batch.to_string(),
223 }
228224
229225 @staticmethod
230226 def encode_presence(events, time_now):
272268 event_formatter=event_formatter,
273269 )
274270
275 defer.returnValue(joined)
271 return joined
276272
277273 @defer.inlineCallbacks
278274 def encode_invited(self, rooms, time_now, token_id, event_formatter):
308304 invited_state.append(invite)
309305 invited[room.room_id] = {"invite_state": {"events": invited_state}}
310306
311 defer.returnValue(invited)
307 return invited
312308
313309 @defer.inlineCallbacks
314310 def encode_archived(self, rooms, time_now, token_id, event_fields, event_formatter):
341337 event_formatter=event_formatter,
342338 )
343339
344 defer.returnValue(joined)
340 return joined
345341
346342 @defer.inlineCallbacks
347343 def encode_room(
413409 result["unread_notifications"] = room.unread_notifications
414410 result["summary"] = room.summary
415411
416 defer.returnValue(result)
412 return result
417413
418414
419415 def register_servlets(hs, http_server):
4444
4545 tags = yield self.store.get_tags_for_room(user_id, room_id)
4646
47 defer.returnValue((200, {"tags": tags}))
47 return (200, {"tags": tags})
4848
4949
5050 class TagServlet(RestServlet):
7575
7676 self.notifier.on_new_event("account_data_key", max_id, users=[user_id])
7777
78 defer.returnValue((200, {}))
78 return (200, {})
7979
8080 @defer.inlineCallbacks
8181 def on_DELETE(self, request, user_id, room_id, tag):
8787
8888 self.notifier.on_new_event("account_data_key", max_id, users=[user_id])
8989
90 defer.returnValue((200, {}))
90 return (200, {})
9191
9292
9393 def register_servlets(hs, http_server):
3939 yield self.auth.get_user_by_req(request, allow_guest=True)
4040
4141 protocols = yield self.appservice_handler.get_3pe_protocols()
42 defer.returnValue((200, protocols))
42 return (200, protocols)
4343
4444
4545 class ThirdPartyProtocolServlet(RestServlet):
5959 only_protocol=protocol
6060 )
6161 if protocol in protocols:
62 defer.returnValue((200, protocols[protocol]))
62 return (200, protocols[protocol])
6363 else:
64 defer.returnValue((404, {"error": "Unknown protocol"}))
64 return (404, {"error": "Unknown protocol"})
6565
6666
6767 class ThirdPartyUserServlet(RestServlet):
8484 ThirdPartyEntityKind.USER, protocol, fields
8585 )
8686
87 defer.returnValue((200, results))
87 return (200, results)
8888
8989
9090 class ThirdPartyLocationServlet(RestServlet):
107107 ThirdPartyEntityKind.LOCATION, protocol, fields
108108 )
109109
110 defer.returnValue((200, results))
110 return (200, results)
111111
112112
113113 def register_servlets(hs, http_server):
5959 user_id = requester.user.to_string()
6060
6161 if not self.hs.config.user_directory_search_enabled:
62 defer.returnValue((200, {"limited": False, "results": []}))
62 return (200, {"limited": False, "results": []})
6363
6464 body = parse_json_object_from_request(request)
6565
7575 user_id, search_term, limit
7676 )
7777
78 defer.returnValue((200, results))
78 return (200, results)
7979
8080
8181 def register_servlets(hs, http_server):
3232 RequestSendFailed,
3333 SynapseError,
3434 )
35 from synapse.config._base import ConfigError
3536 from synapse.logging.context import defer_to_thread
3637 from synapse.metrics.background_process_metrics import run_as_background_process
3738 from synapse.util.async_helpers import Linearizer
170171
171172 yield self._generate_thumbnails(None, media_id, media_id, media_type)
172173
173 defer.returnValue("mxc://%s/%s" % (self.server_name, media_id))
174 return "mxc://%s/%s" % (self.server_name, media_id)
174175
175176 @defer.inlineCallbacks
176177 def get_local_media(self, request, media_id, name):
281282 with responder:
282283 pass
283284
284 defer.returnValue(media_info)
285 return media_info
285286
286287 @defer.inlineCallbacks
287288 def _get_remote_media_impl(self, server_name, media_id):
316317
317318 responder = yield self.media_storage.fetch_media(file_info)
318319 if responder:
319 defer.returnValue((responder, media_info))
320 return (responder, media_info)
320321
321322 # Failed to find the file anywhere, let's download it.
322323
323324 media_info = yield self._download_remote_file(server_name, media_id, file_id)
324325
325326 responder = yield self.media_storage.fetch_media(file_info)
326 defer.returnValue((responder, media_info))
327 return (responder, media_info)
327328
328329 @defer.inlineCallbacks
329330 def _download_remote_file(self, server_name, media_id, file_id):
420421
421422 yield self._generate_thumbnails(server_name, media_id, file_id, media_type)
422423
423 defer.returnValue(media_info)
424 return media_info
424425
425426 def _get_thumbnail_requirements(self, media_type):
426427 return self.thumbnail_requirements.get(media_type, ())
499500 media_id, t_width, t_height, t_type, t_method, t_len
500501 )
501502
502 defer.returnValue(output_path)
503 return output_path
503504
504505 @defer.inlineCallbacks
505506 def generate_remote_exact_thumbnail(
553554 t_len,
554555 )
555556
556 defer.returnValue(output_path)
557 return output_path
557558
558559 @defer.inlineCallbacks
559560 def _generate_thumbnails(
666667 media_id, t_width, t_height, t_type, t_method, t_len
667668 )
668669
669 defer.returnValue({"width": m_width, "height": m_height})
670 return {"width": m_width, "height": m_height}
670671
671672 @defer.inlineCallbacks
672673 def delete_old_remote_media(self, before_ts):
703704 yield self.store.delete_remote_media(origin, media_id)
704705 deleted += 1
705706
706 defer.returnValue({"deleted": deleted})
707 return {"deleted": deleted}
707708
708709
709710 class MediaRepositoryResource(Resource):
752753 """
753754
754755 def __init__(self, hs):
755 Resource.__init__(self)
756
756 # If we're not configured to use it, raise if we somehow got here.
757 if not hs.config.can_load_media_repo:
758 raise ConfigError("Synapse is not configured to use a media repo.")
759
760 super().__init__()
757761 media_repo = hs.get_media_repository()
758762
759763 self.putChild(b"upload", UploadResource(hs, media_repo))
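The constructor change above adds a fail-fast guard: if the homeserver is not configured to load the media repo, constructing the resource raises `ConfigError` immediately instead of misbehaving at request time. A self-contained sketch of the pattern, with a minimal `Resource` stand-in for `twisted.web.resource.Resource` and `can_load_media_repo` standing in for `hs.config.can_load_media_repo`:

```python
class ConfigError(Exception):
    """Raised when a resource is constructed without the config it needs."""

class Resource:
    # Minimal stand-in for twisted.web.resource.Resource.
    def __init__(self):
        self.children = {}

    def putChild(self, path, child):
        self.children[path] = child

class MediaRepositoryResource(Resource):
    # Sketch of the guard added above: raise at construction time if the
    # media repo is disabled.
    def __init__(self, can_load_media_repo):
        if not can_load_media_repo:
            raise ConfigError("Synapse is not configured to use a media repo.")
        super().__init__()
```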
6868 )
6969 yield finish_cb()
7070
71 defer.returnValue(fname)
71 return fname
7272
7373 @contextlib.contextmanager
7474 def store_into_file(self, file_info):
142142 path = self._file_info_to_path(file_info)
143143 local_path = os.path.join(self.local_media_directory, path)
144144 if os.path.exists(local_path):
145 defer.returnValue(FileResponder(open(local_path, "rb")))
145 return FileResponder(open(local_path, "rb"))
146146
147147 for provider in self.storage_providers:
148148 res = yield provider.fetch(path, file_info)
149149 if res:
150 defer.returnValue(res)
151
152 defer.returnValue(None)
150 return res
151
152 return None
153153
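The `fetch_media` hunk above encodes a lookup order: local cache first, then each configured storage provider, with `None` meaning the file is nowhere. A synchronous sketch of that order (providers modelled as plain callables for illustration):

```python
import os

def fetch_media(path, local_media_directory, storage_providers):
    # Sketch of the lookup order in MediaStorage.fetch_media: the local
    # cache wins, then each storage provider is tried in turn, and None
    # signals that nothing has the file.
    local_path = os.path.join(local_media_directory, path)
    if os.path.exists(local_path):
        return open(local_path, "rb")
    for provider in storage_providers:
        res = provider(path)
        if res:
            return res
    return None
```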
154154 @defer.inlineCallbacks
155155 def ensure_media_is_in_local_cache(self, file_info):
165165 path = self._file_info_to_path(file_info)
166166 local_path = os.path.join(self.local_media_directory, path)
167167 if os.path.exists(local_path):
168 defer.returnValue(local_path)
168 return local_path
169169
170170 dirname = os.path.dirname(local_path)
171171 if not os.path.exists(dirname):
180180 )
181181 yield res.write_to_consumer(consumer)
182182 yield consumer.wait()
183 defer.returnValue(local_path)
183 return local_path
184184
185185 raise Exception("file could not be found")
186186
181181 og = cache_result["og"]
182182 if isinstance(og, six.text_type):
183183 og = og.encode("utf8")
184 defer.returnValue(og)
184 return og
185185 return
186186
187187 media_info = yield self._download_url(url, user)
283283 media_info["created_ts"],
284284 )
285285
286 defer.returnValue(jsonog)
286 return jsonog
287287
288288 @defer.inlineCallbacks
289289 def _download_url(self, url, user):
353353 # therefore not expire it.
354354 raise
355355
356 defer.returnValue(
357 {
358 "media_type": media_type,
359 "media_length": length,
360 "download_name": download_name,
361 "created_ts": time_now_ms,
362 "filesystem_id": file_id,
363 "filename": fname,
364 "uri": uri,
365 "response_code": code,
366 # FIXME: we should calculate a proper expiration based on the
367 # Cache-Control and Expire headers. But for now, assume 1 hour.
368 "expires": 60 * 60 * 1000,
369 "etag": headers["ETag"][0] if "ETag" in headers else None,
370 }
371 )
356 return {
357 "media_type": media_type,
358 "media_length": length,
359 "download_name": download_name,
360 "created_ts": time_now_ms,
361 "filesystem_id": file_id,
362 "filename": fname,
363 "uri": uri,
364 "response_code": code,
365 # FIXME: we should calculate a proper expiration based on the
366 # Cache-Control and Expire headers. But for now, assume 1 hour.
367 "expires": 60 * 60 * 1000,
368 "etag": headers["ETag"][0] if "ETag" in headers else None,
369 }
372370
373371 def _start_expire_url_cache_data(self):
374372 return run_as_background_process(
192192 if event_id in referenced_events:
193193 referenced_events.remove(event.event_id)
194194
195 defer.returnValue((currently_blocked, referenced_events))
195 return (currently_blocked, referenced_events)
8585 res = yield self._event_creation_handler.create_and_send_nonmember_event(
8686 requester, event_dict, ratelimit=False
8787 )
88 defer.returnValue(res)
88 return res
8989
9090 @cachedInlineCallbacks()
9191 def get_notice_room_for_user(self, user_id):
119119 # we found a room which our user shares with the system notice
120120 # user
121121 logger.info("Using room %s", room.room_id)
122 defer.returnValue(room.room_id)
122 return room.room_id
123123
124124 # apparently no existing notice room: create a new one
125125 logger.info("Creating server notices room for %s", user_id)
157157 self._notifier.on_new_event("account_data_key", max_id, users=[user_id])
158158
159159 logger.info("Created server notices room %s for %s", room_id, user_id)
160 defer.returnValue(room_id)
160 return room_id
134134 event = None
135135 if event_id:
136136 event = yield self.store.get_event(event_id, allow_none=True)
137 defer.returnValue(event)
137 return event
138138 return
139139
140140 state_map = yield self.store.get_events(
144144 key: state_map[e_id] for key, e_id in iteritems(state) if e_id in state_map
145145 }
146146
147 defer.returnValue(state)
147 return state
148148
149149 @defer.inlineCallbacks
150150 def get_current_state_ids(self, room_id, latest_event_ids=None):
168168 ret = yield self.resolve_state_groups_for_events(room_id, latest_event_ids)
169169 state = ret.state
170170
171 defer.returnValue(state)
171 return state
172172
173173 @defer.inlineCallbacks
174174 def get_current_users_in_room(self, room_id, latest_event_ids=None):
188188 logger.debug("calling resolve_state_groups from get_current_users_in_room")
189189 entry = yield self.resolve_state_groups_for_events(room_id, latest_event_ids)
190190 joined_users = yield self.store.get_joined_users_from_state(room_id, entry)
191 defer.returnValue(joined_users)
191 return joined_users
192192
193193 @defer.inlineCallbacks
194194 def get_current_hosts_in_room(self, room_id, latest_event_ids=None):
197197 logger.debug("calling resolve_state_groups from get_current_hosts_in_room")
198198 entry = yield self.resolve_state_groups_for_events(room_id, latest_event_ids)
199199 joined_hosts = yield self.store.get_joined_hosts(room_id, entry)
200 defer.returnValue(joined_hosts)
200 return joined_hosts
201201
202202 @defer.inlineCallbacks
203203 def compute_event_context(self, event, old_state=None):
240240 prev_state_ids=prev_state_ids,
241241 )
242242
243 defer.returnValue(context)
243 return context
244244
245245 if old_state:
246246 # We already have the state, so we don't need to calculate it.
274274 prev_state_ids=prev_state_ids,
275275 )
276276
277 defer.returnValue(context)
277 return context
278278
279279 logger.debug("calling resolve_state_groups from compute_event_context")
280280
342342 delta_ids=delta_ids,
343343 )
344344
345 defer.returnValue(context)
345 return context
346346
347347 @defer.inlineCallbacks
348348 def resolve_state_groups_for_events(self, room_id, event_ids):
367367 state_groups_ids = yield self.store.get_state_groups_ids(room_id, event_ids)
368368
369369 if len(state_groups_ids) == 0:
370 defer.returnValue(_StateCacheEntry(state={}, state_group=None))
370 return _StateCacheEntry(state={}, state_group=None)
371371 elif len(state_groups_ids) == 1:
372372 name, state_list = list(state_groups_ids.items()).pop()
373373
374374 prev_group, delta_ids = yield self.store.get_state_group_delta(name)
375375
376 defer.returnValue(
377 _StateCacheEntry(
378 state=state_list,
379 state_group=name,
380 prev_group=prev_group,
381 delta_ids=delta_ids,
382 )
376 return _StateCacheEntry(
377 state=state_list,
378 state_group=name,
379 prev_group=prev_group,
380 delta_ids=delta_ids,
383381 )
384382
385383 room_version = yield self.store.get_room_version(room_id)
391389 None,
392390 state_res_store=StateResolutionStore(self.store),
393391 )
394 defer.returnValue(result)
392 return result
395393
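The `resolve_state_groups_for_events` hunk above keeps two fast paths ahead of full state resolution: zero groups and a single group both return without invoking the resolver. A simplified sketch of that control flow (the multi-group branch is deliberately elided):

```python
def resolve_state_groups(state_groups_ids):
    # Sketch of the fast paths taken before full state resolution: no
    # groups yields an empty entry, a single group is handed back as-is,
    # and only multiple (potentially conflicting) groups would fall
    # through to the resolver proper.
    if len(state_groups_ids) == 0:
        return {}
    if len(state_groups_ids) == 1:
        _name, state_list = list(state_groups_ids.items()).pop()
        return state_list
    raise NotImplementedError("full resolution elided in this sketch")
```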
396394 @defer.inlineCallbacks
397395 def resolve_events(self, room_version, state_sets, event):
414412
415413 new_state = {key: state_map[ev_id] for key, ev_id in iteritems(new_state)}
416414
417 defer.returnValue(new_state)
415 return new_state
418416
419417
420418 class StateResolutionHandler(object):
478476 if self._state_cache is not None:
479477 cache = self._state_cache.get(group_names, None)
480478 if cache:
481 defer.returnValue(cache)
479 return cache
482480
483481 logger.info(
484482 "Resolving state for %s with %d groups", room_id, len(state_groups_ids)
524522 if self._state_cache is not None:
525523 self._state_cache[group_names] = cache
526524
527 defer.returnValue(cache)
525 return cache
528526
529527
530528 def _make_state_cache_entry(new_state, state_groups_ids):
5454 a map from (type, state_key) to event_id.
5555 """
5656 if len(state_sets) == 1:
57 defer.returnValue(state_sets[0])
57 return state_sets[0]
5858
5959 unconflicted_state, conflicted_state = _seperate(state_sets)
6060
9696 state_map_new = yield state_map_factory(new_needed_events)
9797 state_map.update(state_map_new)
9898
99 defer.returnValue(
100 _resolve_with_state(
101 unconflicted_state, conflicted_state, auth_events, state_map
102 )
99 return _resolve_with_state(
100 unconflicted_state, conflicted_state, auth_events, state_map
103101 )
104102
105103
6262 unconflicted_state, conflicted_state = _seperate(state_sets)
6363
6464 if not conflicted_state:
65 defer.returnValue(unconflicted_state)
65 return unconflicted_state
6666
6767 logger.debug("%d conflicted state entries", len(conflicted_state))
6868 logger.debug("Calculating auth chain difference")
136136
137137 logger.debug("done")
138138
139 defer.returnValue(resolved_state)
139 return resolved_state
140140
141141
142142 @defer.inlineCallbacks
167167 aev = yield _get_event(aid, event_map, state_res_store)
168168 if (aev.type, aev.state_key) == (EventTypes.Create, ""):
169169 if aev.content.get("creator") == event.sender:
170 defer.returnValue(100)
170 return 100
171171 break
172 defer.returnValue(0)
172 return 0
173173
174174 level = pl.content.get("users", {}).get(event.sender)
175175 if level is None:
176176 level = pl.content.get("users_default", 0)
177177
178178 if level is None:
179 defer.returnValue(0)
179 return 0
180180 else:
181 defer.returnValue(int(level))
181 return int(level)
182182
183183
184184 @defer.inlineCallbacks
223223 intersection = set(auth_sets[0]).intersection(*auth_sets[1:])
224224 union = set().union(*auth_sets)
225225
226 defer.returnValue(union - intersection)
226 return union - intersection
227227
228228
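The auth-chain-difference computation in the hunk above (`union - intersection`) picks out the events that appear in some, but not all, of the auth chains being compared. A tiny worked example of the same set arithmetic, with made-up event IDs for illustration:

```python
# Hypothetical auth chains for three conflicting state events
# (event IDs invented for illustration).
auth_sets = [{"A", "B", "C"}, {"A", "C", "D"}, {"A", "C", "E"}]

# Same computation as the hunk above.
intersection = set(auth_sets[0]).intersection(*auth_sets[1:])
union = set().union(*auth_sets)

# Events not common to every chain -- the "auth chain difference".
difference = union - intersection
```

Here `intersection` is `{"A", "C"}` and `difference` is `{"B", "D", "E"}`: only the events whose membership differs between chains need to be considered during conflict resolution.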
229229 def _seperate(state_sets):
342342 it = lexicographical_topological_sort(graph, key=_get_power_order)
343343 sorted_events = list(it)
344344
345 defer.returnValue(sorted_events)
345 return sorted_events
346346
347347
348348 @defer.inlineCallbacks
395395 except AuthError:
396396 pass
397397
398 defer.returnValue(resolved_state)
398 return resolved_state
399399
400400
401401 @defer.inlineCallbacks
438438
439439 event_ids.sort(key=lambda ev_id: order_map[ev_id])
440440
441 defer.returnValue(event_ids)
441 return event_ids
442442
443443
444444 @defer.inlineCallbacks
461461 while event:
462462 depth = mainline_map.get(event.event_id)
463463 if depth is not None:
464 defer.returnValue(depth)
464 return depth
465465
466466 auth_events = event.auth_event_ids()
467467 event = None
473473 break
474474
475475 # Didn't find a power level auth event, so we just return 0
476 defer.returnValue(0)
476 return 0
477477
478478
479479 @defer.inlineCallbacks
492492 if event_id not in event_map:
493493 events = yield state_res_store.get_events([event_id], allow_rejected=True)
494494 event_map.update(events)
495 defer.returnValue(event_map[event_id])
495 return event_map[event_id]
496496
497497
498498 def lexicographical_topological_sort(graph, key):
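`lexicographical_topological_sort`, whose definition opens at the end of this hunk, orders events topologically while breaking ties with a caller-supplied key. A generic sketch of that idea — a heap-based Kahn's algorithm; this is an illustration rather than Synapse's implementation, and it assumes `graph` maps each node to the set of nodes that must precede it:

```python
import heapq

def lexi_topo_sort(graph, key):
    """Yield nodes of `graph` (node -> set of predecessor nodes) in
    topological order, breaking ties by `key` via a min-heap."""
    # Outstanding predecessors for each node (ignore unknown nodes).
    pending = {n: {d for d in deps if d in graph} for n, deps in graph.items()}
    dependents = {n: set() for n in graph}
    for n, deps in pending.items():
        for d in deps:
            dependents[d].add(n)
    # Seed the heap with every node that has no outstanding predecessor.
    heap = [(key(n), n) for n, deps in pending.items() if not deps]
    heapq.heapify(heap)
    while heap:
        _, n = heapq.heappop(heap)
        yield n
        for m in dependents[n]:
            pending[m].discard(n)
            if not pending[m]:
                heapq.heappush(heap, (key(m), m))
```

For example, with a graph where `"power"` and `"join"` both depend on `"create"`, and `"msg"` depends on both, sorting with `key=lambda n: n` emits `"join"` before `"power"` because ready nodes are drained in key order.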
4747 </div>
4848 <h1>It works! Synapse is running</h1>
4949 <p>Your Synapse server is listening on this port and is ready for messages.</p>
50 <p>To use this server you'll need <a href="https://matrix.org/docs/projects/try-matrix-now.html#clients" target="_blank">a Matrix client</a>.
50 <p>To use this server you'll need <a href="https://matrix.org/docs/projects/try-matrix-now.html#clients" target="_blank" rel="noopener noreferrer">a Matrix client</a>.
5151 </p>
5252 <p>Welcome to the Matrix universe :)</p>
5353 <hr>
5454 <p>
5555 <small>
56 <a href="https://matrix.org" target="_blank">
56 <a href="https://matrix.org" target="_blank" rel="noopener noreferrer">
5757 matrix.org
5858 </a>
5959 </small>
468468 return self._simple_select_list(
469469 table="users",
470470 keyvalues={},
471 retcols=["name", "password_hash", "is_guest", "admin"],
471 retcols=["name", "password_hash", "is_guest", "admin", "user_type"],
472472 desc="get_users",
473473 )
474474
493493 orderby=order,
494494 start=start,
495495 limit=limit,
496 retcols=["name", "password_hash", "is_guest", "admin"],
496 retcols=["name", "password_hash", "is_guest", "admin", "user_type"],
497497 )
498498 count = yield self.runInteraction("get_users_paginate", self.get_user_count_txn)
499499 retval = {"users": users, "total": count}
500 defer.returnValue(retval)
500 return retval
501501
502502 def search_users(self, term):
503503 """Function to search users list for one or more users with
513513 table="users",
514514 term=term,
515515 col="name",
516 retcols=["name", "password_hash", "is_guest", "admin"],
516 retcols=["name", "password_hash", "is_guest", "admin", "user_type"],
517517 desc="search_users",
518518 )
519519
8585 class LoggingTransaction(object):
8686 """An object that almost-transparently proxies for the 'txn' object
8787 passed to the constructor. Adds logging and metrics to the .execute()
88 method."""
88 method.
89
90 Args:
 91 txn: The database transaction object to wrap.
 92 name (str): The name of this transaction, for logging.
 93 database_engine (Sqlite3Engine|PostgresEngine)
 94 after_callbacks(list|None): A list to which callbacks added via
 95 `call_after` are appended; they are run on successful completion
 96 of the transaction. None indicates that no callbacks may be
 97 scheduled.
 98 exception_callbacks(list|None): A list to which callbacks added via
 99 `call_on_exception` are appended; they are run if the transaction
 100 ends with an error. None indicates that no callbacks may be
 101 scheduled.
102 """
89103
90104 __slots__ = [
91105 "txn",
96110 ]
97111
98112 def __init__(
99 self, txn, name, database_engine, after_callbacks, exception_callbacks
113 self, txn, name, database_engine, after_callbacks=None, exception_callbacks=None
100114 ):
101115 object.__setattr__(self, "txn", txn)
102116 object.__setattr__(self, "name", name)
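The `object.__setattr__` calls above are what make the "almost-transparent proxy" work alongside `__slots__`: attributes not defined on the wrapper fall through to the wrapped cursor via `__getattr__`. A stripped-down sketch of the pattern — a hypothetical `LoggingProxy`, not Synapse's class:

```python
import sqlite3

class LoggingProxy:
    """Forward everything to a wrapped DB cursor, adding logging to
    .execute() -- the same shape as LoggingTransaction."""
    __slots__ = ["txn", "name"]

    def __init__(self, txn, name):
        # __setattr__ could itself be proxied, so set slots directly.
        object.__setattr__(self, "txn", txn)
        object.__setattr__(self, "name", name)

    def __getattr__(self, attr):
        # Anything not defined here falls through to the real cursor.
        return getattr(self.txn, attr)

    def execute(self, sql, *args):
        print("[%s] %s" % (self.name, sql))
        return self.txn.execute(sql, *args)

conn = sqlite3.connect(":memory:")
cur = LoggingProxy(conn.cursor(), "txn-demo")
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("SELECT x FROM t")
row = cur.fetchone()  # forwarded to the real cursor via __getattr__
```

Making `after_callbacks` and `exception_callbacks` default to `None`, as this hunk does, lets callers construct such a wrapper without wiring up callback lists when no callbacks will ever be scheduled.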
498512 after_callback(*after_args, **after_kwargs)
499513 raise
500514
501 defer.returnValue(result)
515 return result
502516
503517 @defer.inlineCallbacks
504518 def runWithConnection(self, func, *args, **kwargs):
538552 with PreserveLoggingContext():
539553 result = yield self._db_pool.runWithConnection(inner_func, *args, **kwargs)
540554
541 defer.returnValue(result)
555 return result
542556
543557 @staticmethod
544558 def cursor_to_dict(cursor):
600614 # a cursor after we receive an error from the db.
601615 if not or_ignore:
602616 raise
603 defer.returnValue(False)
604 defer.returnValue(True)
617 return False
618 return True
605619
606620 @staticmethod
607621 def _simple_insert_txn(txn, table, values):
693707 insertion_values,
694708 lock=lock,
695709 )
696 defer.returnValue(result)
710 return result
697711 except self.database_engine.module.IntegrityError as e:
698712 attempts += 1
699713 if attempts >= 5:
11061120 results = []
11071121
11081122 if not iterable:
1109 defer.returnValue(results)
1123 return results
11101124
11111125 # iterables can not be sliced, so convert it to a list first
11121126 it_list = list(iterable)
11271141
11281142 results.extend(rows)
11291143
1130 defer.returnValue(results)
1144 return results
11311145
11321146 @classmethod
11331147 def _simple_select_many_txn(cls, txn, table, column, iterable, keyvalues, retcols):
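`_simple_select_many` above converts its `iterable` argument to a list before chunking it, since arbitrary iterables cannot be sliced. A minimal sketch of that batching step, using a hypothetical `batch_iter` helper to illustrate the idea rather than Synapse's code:

```python
def batch_iter(iterable, size):
    # Iterables can't be sliced, so materialise a list first -- the
    # same reason the hunk above builds `it_list` before chunking.
    items = list(iterable)
    for i in range(0, len(items), size):
        yield items[i : i + size]

# e.g. batching IDs for an `IN (...)` query a few hundred at a time
batches = list(batch_iter(range(5), 2))  # [[0, 1], [2, 3], [4]]
```

Batching like this keeps each `WHERE column IN (...)` clause bounded, avoiding oversized SQL statements when the caller passes thousands of keys.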
110110 )
111111
112112 if result:
113 defer.returnValue(json.loads(result))
113 return json.loads(result)
114114 else:
115 defer.returnValue(None)
115 return None
116116
117117 @cached(num_args=2)
118118 def get_account_data_for_room(self, user_id, room_id):
263263 on_invalidate=cache_context.invalidate,
264264 )
265265 if not ignored_account_data:
266 defer.returnValue(False)
267
268 defer.returnValue(
269 ignored_user_id in ignored_account_data.get("ignored_users", {})
270 )
266 return False
267
268 return ignored_user_id in ignored_account_data.get("ignored_users", {})
271269
272270
273271 class AccountDataStore(AccountDataWorkerStore):
331329 )
332330
333331 result = self._account_data_id_gen.get_current_token()
334 defer.returnValue(result)
332 return result
335333
336334 @defer.inlineCallbacks
337335 def add_account_data_for_user(self, user_id, account_data_type, content):
372370 )
373371
374372 result = self._account_data_id_gen.get_current_token()
375 defer.returnValue(result)
373 return result
376374
377375 def _update_max_stream_id(self, next_id):
378376 """Update the max stream_id
144144 for service in as_list:
145145 if service.id == res["as_id"]:
146146 services.append(service)
147 defer.returnValue(services)
147 return services
148148
149149 @defer.inlineCallbacks
150150 def get_appservice_state(self, service):
163163 desc="get_appservice_state",
164164 )
165165 if result:
166 defer.returnValue(result.get("state"))
166 return result.get("state")
167167 return
168 defer.returnValue(None)
168 return None
169169
170170 def set_appservice_state(self, service, state):
171171 """Set the application service state.
297297 )
298298
299299 if not entry:
300 defer.returnValue(None)
300 return None
301301
302302 event_ids = json.loads(entry["event_ids"])
303303
304304 events = yield self.get_events_as_list(event_ids)
305305
306 defer.returnValue(
307 AppServiceTransaction(service=service, id=entry["txn_id"], events=events)
308 )
306 return AppServiceTransaction(service=service, id=entry["txn_id"], events=events)
309307
310308 def _get_last_txn(self, txn, service_id):
311309 txn.execute(
359357
360358 events = yield self.get_events_as_list(event_ids)
361359
362 defer.returnValue((upper_bound, events))
360 return (upper_bound, events)
363361
364362
365363 class ApplicationServiceTransactionStore(ApplicationServiceTransactionWorkerStore):
114114 " Unscheduling background update task."
115115 )
116116 self._all_done = True
117 defer.returnValue(None)
117 return None
118118
119119 @defer.inlineCallbacks
120120 def has_completed_background_updates(self):
126126 # if we've previously determined that there is nothing left to do, that
127127 # is easy
128128 if self._all_done:
129 defer.returnValue(True)
129 return True
130130
131131 # obviously, if we have things in our queue, we're not done.
132132 if self._background_update_queue:
133 defer.returnValue(False)
133 return False
134134
135135 # otherwise, check if there are updates to be run. This is important,
136136 # as we may be running on a worker which doesn't perform the bg updates
143143 )
144144 if not updates:
145145 self._all_done = True
146 defer.returnValue(True)
147
148 defer.returnValue(False)
146 return True
147
148 return False
149149
150150 @defer.inlineCallbacks
151151 def do_next_background_update(self, desired_duration_ms):
172172
173173 if not self._background_update_queue:
174174 # no work left to do
175 defer.returnValue(None)
175 return None
176176
177177 # pop from the front, and add back to the back
178178 update_name = self._background_update_queue.pop(0)
179179 self._background_update_queue.append(update_name)
180180
181181 res = yield self._do_background_update(update_name, desired_duration_ms)
182 defer.returnValue(res)
182 return res
183183
184184 @defer.inlineCallbacks
185185 def _do_background_update(self, update_name, desired_duration_ms):
230230
231231 performance.update(items_updated, duration_ms)
232232
233 defer.returnValue(len(self._background_update_performance))
233 return len(self._background_update_performance)
234234
235235 def register_background_update_handler(self, update_name, update_handler):
236236 """Register a handler for doing a background update.
265265 @defer.inlineCallbacks
266266 def noop_update(progress, batch_size):
267267 yield self._end_background_update(update_name)
268 defer.returnValue(1)
268 return 1
269269
270270 self.register_background_update_handler(update_name, noop_update)
271271
369369 logger.info("Adding index %s to %s", index_name, table)
370370 yield self.runWithConnection(runner)
371371 yield self._end_background_update(update_name)
372 defer.returnValue(1)
372 return 1
373373
374374 self.register_background_update_handler(update_name, updater)
375375
103103
104104 yield self.runWithConnection(f)
105105 yield self._end_background_update("user_ips_drop_nonunique_index")
106 defer.returnValue(1)
106 return 1
107107
108108 @defer.inlineCallbacks
109109 def _analyze_user_ip(self, progress, batch_size):
120120
121121 yield self._end_background_update("user_ips_analyze")
122122
123 defer.returnValue(1)
123 return 1
124124
125125 @defer.inlineCallbacks
126126 def _remove_user_ip_dupes(self, progress, batch_size):
290290 if last:
291291 yield self._end_background_update("user_ips_remove_dupes")
292292
293 defer.returnValue(batch_size)
293 return batch_size
294294
295295 @defer.inlineCallbacks
296296 def insert_client_ip(
400400 "device_id": did,
401401 "last_seen": last_seen,
402402 }
403 defer.returnValue(ret)
403 return ret
404404
405405 @classmethod
406406 def _get_last_client_ip_by_device_txn(cls, txn, user_id, device_id, retcols):
460460 ((row["access_token"], row["ip"]), (row["user_agent"], row["last_seen"]))
461461 for row in rows
462462 )
463 defer.returnValue(
464 list(
465 {
466 "access_token": access_token,
467 "ip": ip,
468 "user_agent": user_agent,
469 "last_seen": last_seen,
470 }
471 for (access_token, ip), (user_agent, last_seen) in iteritems(results)
472 )
473 )
463 return list(
464 {
465 "access_token": access_token,
466 "ip": ip,
467 "user_agent": user_agent,
468 "last_seen": last_seen,
469 }
470 for (access_token, ip), (user_agent, last_seen) in iteritems(results)
471 )
9191 user_id, last_deleted_stream_id
9292 )
9393 if not has_changed:
94 defer.returnValue(0)
94 return 0
9595
9696 def delete_messages_for_device_txn(txn):
9797 sql = (
114114 last_deleted_stream_id, up_to_stream_id
115115 )
116116
117 defer.returnValue(count)
117 return count
118118
119119 def get_new_device_msgs_for_remote(
120120 self, destination, last_stream_id, current_stream_id, limit
262262 destination, stream_id
263263 )
264264
265 defer.returnValue(self._device_inbox_id_gen.get_current_token())
265 return self._device_inbox_id_gen.get_current_token()
266266
267267 @defer.inlineCallbacks
268268 def add_messages_from_remote_to_device_inbox(
311311 for user_id in local_messages_by_user_then_device.keys():
312312 self._device_inbox_stream_cache.entity_has_changed(user_id, stream_id)
313313
314 defer.returnValue(stream_id)
314 return stream_id
315315
316316 def _add_messages_to_local_device_inbox_txn(
317317 self, txn, stream_id, messages_by_user_then_device
425425
426426 yield self._end_background_update(self.DEVICE_INBOX_STREAM_ID)
427427
428 defer.returnValue(1)
428 return 1
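Several hunks in this file return `self._device_inbox_id_gen.get_current_token()`, i.e. the newest allocated position in a monotonically increasing stream. A minimal, thread-safe sketch of such an allocator — illustrative only; Synapse's real `StreamIdGenerator` is more involved (for instance, it hands out ids via context managers so in-flight writes complete in order):

```python
import threading

class SimpleStreamIdGenerator:
    """Allocates strictly increasing stream ids and reports the
    newest one handed out (cf. get_current_token())."""

    def __init__(self, start=0):
        self._lock = threading.Lock()
        self._current = start

    def get_next(self):
        with self._lock:
            self._current += 1
            return self._current

    def get_current_token(self):
        with self._lock:
            return self._current

gen = SimpleStreamIdGenerator()
first = gen.get_next()            # 1
second = gen.get_next()           # 2
token = gen.get_current_token()   # 2
```

Consumers such as the device-inbox stream cache use the current token as a high-water mark: anything at or below it has been persisted and may be replicated or acknowledged.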
7070 desc="get_devices_by_user",
7171 )
7272
73 defer.returnValue({d["device_id"]: d for d in devices})
73 return {d["device_id"]: d for d in devices}
7474
7575 @defer.inlineCallbacks
7676 def get_devices_by_remote(self, destination, from_stream_id, limit):
8787 destination, int(from_stream_id)
8888 )
8989 if not has_changed:
90 defer.returnValue((now_stream_id, []))
90 return (now_stream_id, [])
9191
9292 # We retrieve n+1 devices from the list of outbound pokes where n is
9393 # our outbound device update limit. We then check if the very last
110110
111111 # Return an empty list if there are no updates
112112 if not updates:
113 defer.returnValue((now_stream_id, []))
113 return (now_stream_id, [])
114114
115115 # if we have exceeded the limit, we need to exclude any results with the
116116 # same stream_id as the last row.
146146 # skip that stream_id and return an empty list, and continue with the next
147147 # stream_id next time.
148148 if not query_map:
149 defer.returnValue((stream_id_cutoff, []))
149 return (stream_id_cutoff, [])
150150
151151 results = yield self._get_device_update_edus_by_remote(
152152 destination, from_stream_id, query_map
153153 )
154154
155 defer.returnValue((now_stream_id, results))
155 return (now_stream_id, results)
156156
157157 def _get_devices_by_remote_txn(
158158 self, txn, destination, from_stream_id, now_stream_id, limit
231231
232232 results.append(result)
233233
234 defer.returnValue(results)
234 return results
235235
236236 def _get_last_device_update_for_remote_user(
237237 self, destination, user_id, from_stream_id
329329 else:
330330 results[user_id] = yield self._get_cached_devices_for_user(user_id)
331331
332 defer.returnValue((user_ids_not_in_cache, results))
332 return (user_ids_not_in_cache, results)
333333
334334 @cachedInlineCallbacks(num_args=2, tree=True)
335335 def _get_cached_user_device(self, user_id, device_id):
339339 retcol="content",
340340 desc="_get_cached_user_device",
341341 )
342 defer.returnValue(db_to_json(content))
342 return db_to_json(content)
343343
344344 @cachedInlineCallbacks()
345345 def _get_cached_devices_for_user(self, user_id):
349349 retcols=("device_id", "content"),
350350 desc="_get_cached_devices_for_user",
351351 )
352 defer.returnValue(
353 {device["device_id"]: db_to_json(device["content"]) for device in devices}
354 )
352 return {
353 device["device_id"]: db_to_json(device["content"]) for device in devices
354 }
355355
356356 def get_devices_with_keys_by_user(self, user_id):
357357 """Get all devices (with any device keys) for a user
481481 results = {user_id: None for user_id in user_ids}
482482 results.update({row["user_id"]: row["stream_id"] for row in rows})
483483
484 defer.returnValue(results)
484 return results
485485
486486
487487 class DeviceStore(DeviceWorkerStore, BackgroundUpdateStore):
542542 """
543543 key = (user_id, device_id)
544544 if self.device_id_exists_cache.get(key, None):
545 defer.returnValue(False)
545 return False
546546
547547 try:
548548 inserted = yield self._simple_insert(
556556 or_ignore=True,
557557 )
558558 self.device_id_exists_cache.prefill(key, True)
559 defer.returnValue(inserted)
559 return inserted
560560 except Exception as e:
561561 logger.error(
562562 "store_device with device_id=%s(%r) user_id=%s(%r)"
779779 hosts,
780780 stream_id,
781781 )
782 defer.returnValue(stream_id)
782 return stream_id
783783
784784 def _add_device_change_txn(self, txn, user_id, device_ids, hosts, stream_id):
785785 now = self._clock.time_msec()
888888
889889 yield self.runWithConnection(f)
890890 yield self._end_background_update(DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES)
891 defer.returnValue(1)
891 return 1
4545 )
4646
4747 if not room_id:
48 defer.returnValue(None)
48 return None
4949 return
5050
5151 servers = yield self._simple_select_onecol(
5656 )
5757
5858 if not servers:
59 defer.returnValue(None)
59 return None
6060 return
6161
62 defer.returnValue(RoomAliasMapping(room_id, room_alias.to_string(), servers))
62 return RoomAliasMapping(room_id, room_alias.to_string(), servers)
6363
6464 def get_room_alias_creator(self, room_alias):
6565 return self._simple_select_one_onecol(
124124 raise SynapseError(
125125 409, "Room alias %s already exists" % room_alias.to_string()
126126 )
127 defer.returnValue(ret)
127 return ret
128128
129129 @defer.inlineCallbacks
130130 def delete_room_alias(self, room_alias):
132132 "delete_room_alias", self._delete_room_alias_txn, room_alias
133133 )
134134
135 defer.returnValue(room_id)
135 return room_id
136136
137137 def _delete_room_alias_txn(self, txn, room_alias):
138138 txn.execute(
6060
6161 row["session_data"] = json.loads(row["session_data"])
6262
63 defer.returnValue(row)
63 return row
6464
6565 @defer.inlineCallbacks
6666 def set_e2e_room_key(self, user_id, version, room_id, session_id, room_key):
117117 try:
118118 version = int(version)
119119 except ValueError:
120 defer.returnValue({"rooms": {}})
120 return {"rooms": {}}
121121
122122 keyvalues = {"user_id": user_id, "version": version}
123123 if room_id:
150150 "session_data": json.loads(row["session_data"]),
151151 }
152152
153 defer.returnValue(sessions)
153 return sessions
154154
155155 @defer.inlineCallbacks
156156 def delete_e2e_room_keys(self, user_id, version, room_id=None, session_id=None):
4040 dict containing "key_json", "device_display_name".
4141 """
4242 if not query_list:
43 defer.returnValue({})
43 return {}
4444
4545 results = yield self.runInteraction(
4646 "get_e2e_device_keys",
5454 for device_id, device_info in iteritems(device_keys):
5555 device_info["keys"] = db_to_json(device_info.pop("key_json"))
5656
57 defer.returnValue(results)
57 return results
5858
5959 def _get_e2e_device_keys_txn(
6060 self, txn, query_list, include_all_devices=False, include_deleted_devices=False
129129 desc="add_e2e_one_time_keys_check",
130130 )
131131
132 defer.returnValue(
133 {(row["algorithm"], row["key_id"]): row["key_json"] for row in rows}
134 )
132 return {(row["algorithm"], row["key_id"]): row["key_json"] for row in rows}
135133
136134 @defer.inlineCallbacks
137135 def add_e2e_one_time_keys(self, user_id, device_id, time_now, new_keys):
130130 )
131131
132132 if not rows:
133 defer.returnValue(0)
133 return 0
134134 else:
135 defer.returnValue(max(row["depth"] for row in rows))
135 return max(row["depth"] for row in rows)
136136
137137 def _get_oldest_events_in_room_txn(self, txn, room_id):
138138 return self._simple_select_onecol_txn(
168168 # make sure that we don't completely ignore the older events.
169169 res = res[0:5] + random.sample(res[5:], 5)
170170
171 defer.returnValue(res)
171 return res
172172
173173 def get_latest_event_ids_and_hashes_in_room(self, room_id):
174174 """
410410 limit,
411411 )
412412 events = yield self.get_events_as_list(ids)
413 defer.returnValue(events)
413 return events
414414
415415 def _get_missing_events(self, txn, room_id, earliest_events, latest_events, limit):
416416
462462 desc="get_successor_events",
463463 )
464464
465 defer.returnValue([row["event_id"] for row in rows])
465 return [row["event_id"] for row in rows]
466466
467467
468468 class EventFederationStore(EventFederationWorkerStore):
653653 if not result:
654654 yield self._end_background_update(self.EVENT_AUTH_STATE_ONLY)
655655
656 defer.returnValue(batch_size)
656 return batch_size
7878 db_conn.cursor(),
7979 name="_find_stream_orderings_for_times_txn",
8080 database_engine=self.database_engine,
81 after_callbacks=[],
82 exception_callbacks=[],
8381 )
8482 self._find_stream_orderings_for_times_txn(cur)
8583 cur.close()
10199 user_id,
102100 last_read_event_id,
103101 )
104 defer.returnValue(ret)
102 return ret
105103
106104 def _get_unread_counts_by_receipt_txn(
107105 self, txn, room_id, user_id, last_read_event_id
179177 return [r[0] for r in txn]
180178
181179 ret = yield self.runInteraction("get_push_action_users_in_range", f)
182 defer.returnValue(ret)
180 return ret
183181
184182 @defer.inlineCallbacks
185183 def get_unread_push_actions_for_user_in_range_for_http(
280278
281279 # Take only up to the limit. We have to stop at the limit because
282280 # one of the subqueries may have hit the limit.
283 defer.returnValue(notifs[:limit])
281 return notifs[:limit]
284282
285283 @defer.inlineCallbacks
286284 def get_unread_push_actions_for_user_in_range_for_email(
381379 notifs.sort(key=lambda r: -(r["received_ts"] or 0))
382380
383381 # Now return the first `limit`
384 defer.returnValue(notifs[:limit])
382 return notifs[:limit]
385383
386384 def get_if_maybe_push_in_range_for_user(self, user_id, min_stream_ordering):
387385 """A fast check to see if there might be something to push for the
478476 keyvalues={"event_id": event_id},
479477 desc="remove_push_actions_from_staging",
480478 )
481 defer.returnValue(res)
479 return res
482480 except Exception:
483481 # this method is called from an exception handler, so propagating
484482 # another exception here really isn't helpful - there's nothing
733731 push_actions = yield self.runInteraction("get_push_actions_for_user", f)
734732 for pa in push_actions:
735733 pa["actions"] = _deserialize_action(pa["actions"], pa["highlight"])
736 defer.returnValue(push_actions)
734 return push_actions
737735
738736 @defer.inlineCallbacks
739737 def get_time_of_last_push_action_before(self, stream_ordering):
750748 return txn.fetchone()
751749
752750 result = yield self.runInteraction("get_time_of_last_push_action_before", f)
753 defer.returnValue(result[0] if result else None)
751 return result[0] if result else None
754752
755753 @defer.inlineCallbacks
756754 def get_latest_push_action_stream_ordering(self):
759757 return txn.fetchone()
760758
761759 result = yield self.runInteraction("get_latest_push_action_stream_ordering", f)
762 defer.returnValue(result[0] or 0)
760 return result[0] or 0
763761
764762 def _remove_push_actions_for_event_id_txn(self, txn, room_id, event_id):
765763 # Sad that we have to blow away the cache for the whole room here
222222 except self.database_engine.module.IntegrityError:
223223 logger.exception("IntegrityError, retrying.")
224224 res = yield func(self, *args, delete_existing=True, **kwargs)
225 defer.returnValue(res)
225 return res
226226
227227 return f
228228
308308
309309 max_persisted_id = yield self._stream_id_gen.get_current_token()
310310
311 defer.returnValue(max_persisted_id)
311 return max_persisted_id
312312
313313 @defer.inlineCallbacks
314314 @log_function
333333 yield make_deferred_yieldable(deferred)
334334
335335 max_persisted_id = yield self._stream_id_gen.get_current_token()
336 defer.returnValue((event.internal_metadata.stream_ordering, max_persisted_id))
336 return (event.internal_metadata.stream_ordering, max_persisted_id)
337337
338338 def _maybe_start_persisting(self, room_id):
339339 @defer.inlineCallbacks
363363 if not events_and_contexts:
364364 return
365365
366 if backfilled:
367 stream_ordering_manager = self._backfill_id_gen.get_next_mult(
368 len(events_and_contexts)
369 )
370 else:
371 stream_ordering_manager = self._stream_id_gen.get_next_mult(
372 len(events_and_contexts)
373 )
374
375 with stream_ordering_manager as stream_orderings:
376 for (event, context), stream in zip(events_and_contexts, stream_orderings):
377 event.internal_metadata.stream_ordering = stream
378
379 chunks = [
380 events_and_contexts[x : x + 100]
381 for x in range(0, len(events_and_contexts), 100)
382 ]
383
384 for chunk in chunks:
385 # We can't easily parallelize these since different chunks
386 # might contain the same event. :(
387
388 # NB: Assumes that we are only persisting events for one room
389 # at a time.
390
391 # map room_id->list[event_ids] giving the new forward
392 # extremities in each room
393 new_forward_extremeties = {}
394
395 # map room_id->(type,state_key)->event_id tracking the full
396 # state in each room after adding these events.
397 # This is simply used to prefill the get_current_state_ids
398 # cache
399 current_state_for_room = {}
400
401 # map room_id->(to_delete, to_insert) where to_delete is a list
402 # of type/state keys to remove from current state, and to_insert
403 # is a map (type,key)->event_id giving the state delta in each
404 # room
405 state_delta_for_room = {}
406
407 if not backfilled:
408 with Measure(self._clock, "_calculate_state_and_extrem"):
409 # Work out the new "current state" for each room.
410 # We do this by working out what the new extremities are and then
411 # calculating the state from that.
412 events_by_room = {}
413 for event, context in chunk:
414 events_by_room.setdefault(event.room_id, []).append(
415 (event, context)
366 chunks = [
367 events_and_contexts[x : x + 100]
368 for x in range(0, len(events_and_contexts), 100)
369 ]
370
371 for chunk in chunks:
372 # We can't easily parallelize these since different chunks
373 # might contain the same event. :(
374
375 # NB: Assumes that we are only persisting events for one room
376 # at a time.
377
378 # map room_id->list[event_ids] giving the new forward
379 # extremities in each room
380 new_forward_extremeties = {}
381
382 # map room_id->(type,state_key)->event_id tracking the full
383 # state in each room after adding these events.
384 # This is simply used to prefill the get_current_state_ids
385 # cache
386 current_state_for_room = {}
387
388 # map room_id->(to_delete, to_insert) where to_delete is a list
389 # of type/state keys to remove from current state, and to_insert
390 # is a map (type,key)->event_id giving the state delta in each
391 # room
392 state_delta_for_room = {}
393
394 if not backfilled:
395 with Measure(self._clock, "_calculate_state_and_extrem"):
396 # Work out the new "current state" for each room.
397 # We do this by working out what the new extremities are and then
398 # calculating the state from that.
399 events_by_room = {}
400 for event, context in chunk:
401 events_by_room.setdefault(event.room_id, []).append(
402 (event, context)
403 )
404
405 for room_id, ev_ctx_rm in iteritems(events_by_room):
406 latest_event_ids = yield self.get_latest_event_ids_in_room(
407 room_id
408 )
409 new_latest_event_ids = yield self._calculate_new_extremities(
410 room_id, ev_ctx_rm, latest_event_ids
411 )
412
413 latest_event_ids = set(latest_event_ids)
414 if new_latest_event_ids == latest_event_ids:
415 # No change in extremities, so no change in state
416 continue
417
418 # there should always be at least one forward extremity.
419 # (except during the initial persistence of the send_join
420 # results, in which case there will be no existing
421 # extremities, so we'll `continue` above and skip this bit.)
422 assert new_latest_event_ids, "No forward extremities left!"
423
424 new_forward_extremeties[room_id] = new_latest_event_ids
425
426 len_1 = (
427 len(latest_event_ids) == 1
428 and len(new_latest_event_ids) == 1
429 )
430 if len_1:
431 all_single_prev_not_state = all(
432 len(event.prev_event_ids()) == 1
433 and not event.is_state()
434 for event, ctx in ev_ctx_rm
416435 )
417
418 for room_id, ev_ctx_rm in iteritems(events_by_room):
419 latest_event_ids = yield self.get_latest_event_ids_in_room(
420 room_id
436 # Don't bother calculating state if they're just
437 # a long chain of single ancestor non-state events.
438 if all_single_prev_not_state:
439 continue
440
441 state_delta_counter.inc()
442 if len(new_latest_event_ids) == 1:
443 state_delta_single_event_counter.inc()
444
445 # This is a fairly handwavey check to see if we could
446 # have guessed what the delta would have been when
447 # processing one of these events.
448 # What we're interested in is if the latest extremities
449 # were the same when we created the event as they are
450 # now. When this server creates a new event (as opposed
451 # to receiving it over federation) it will use the
452 # forward extremities as the prev_events, so we can
453 # guess this by looking at the prev_events and checking
454 # if they match the current forward extremities.
455 for ev, _ in ev_ctx_rm:
456 prev_event_ids = set(ev.prev_event_ids())
457 if latest_event_ids == prev_event_ids:
458 state_delta_reuse_delta_counter.inc()
459 break
460
461 logger.info("Calculating state delta for room %s", room_id)
462 with Measure(
463 self._clock, "persist_events.get_new_state_after_events"
464 ):
465 res = yield self._get_new_state_after_events(
466 room_id,
467 ev_ctx_rm,
468 latest_event_ids,
469 new_latest_event_ids,
421470 )
422 new_latest_event_ids = yield self._calculate_new_extremities(
423 room_id, ev_ctx_rm, latest_event_ids
424 )
425
426 latest_event_ids = set(latest_event_ids)
427 if new_latest_event_ids == latest_event_ids:
428 # No change in extremities, so no change in state
429 continue
430
431 # there should always be at least one forward extremity.
432 # (except during the initial persistence of the send_join
433 # results, in which case there will be no existing
434 # extremities, so we'll `continue` above and skip this bit.)
435 assert new_latest_event_ids, "No forward extremities left!"
436
437 new_forward_extremeties[room_id] = new_latest_event_ids
438
439 len_1 = (
440 len(latest_event_ids) == 1
441 and len(new_latest_event_ids) == 1
442 )
443 if len_1:
444 all_single_prev_not_state = all(
445 len(event.prev_event_ids()) == 1
446 and not event.is_state()
447 for event, ctx in ev_ctx_rm
471 current_state, delta_ids = res
472
473 # If either are not None then there has been a change,
474 # and we need to work out the delta (or use that
475 # given)
476 if delta_ids is not None:
477 # If there is a delta we know that we've
478 # only added or replaced state, never
479 # removed keys entirely.
480 state_delta_for_room[room_id] = ([], delta_ids)
481 elif current_state is not None:
482 with Measure(
483 self._clock, "persist_events.calculate_state_delta"
484 ):
485 delta = yield self._calculate_state_delta(
486 room_id, current_state
448487 )
449 # Don't bother calculating state if they're just
450 # a long chain of single ancestor non-state events.
451 if all_single_prev_not_state:
452 continue
453
454 state_delta_counter.inc()
455 if len(new_latest_event_ids) == 1:
456 state_delta_single_event_counter.inc()
457
458 # This is a fairly handwavey check to see if we could
459 # have guessed what the delta would have been when
460 # processing one of these events.
461 # What we're interested in is if the latest extremities
462 # were the same when we created the event as they are
463 # now. When this server creates a new event (as opposed
464 # to receiving it over federation) it will use the
465 # forward extremities as the prev_events, so we can
466 # guess this by looking at the prev_events and checking
467 # if they match the current forward extremities.
468 for ev, _ in ev_ctx_rm:
469 prev_event_ids = set(ev.prev_event_ids())
470 if latest_event_ids == prev_event_ids:
471 state_delta_reuse_delta_counter.inc()
472 break
473
474 logger.info("Calculating state delta for room %s", room_id)
475 with Measure(
476 self._clock, "persist_events.get_new_state_after_events"
477 ):
478 res = yield self._get_new_state_after_events(
479 room_id,
480 ev_ctx_rm,
481 latest_event_ids,
482 new_latest_event_ids,
483 )
484 current_state, delta_ids = res
485
486 # If either are not None then there has been a change,
487 # and we need to work out the delta (or use that
488 # given)
489 if delta_ids is not None:
490 # If there is a delta we know that we've
491 # only added or replaced state, never
492 # removed keys entirely.
493 state_delta_for_room[room_id] = ([], delta_ids)
494 elif current_state is not None:
495 with Measure(
496 self._clock, "persist_events.calculate_state_delta"
497 ):
498 delta = yield self._calculate_state_delta(
499 room_id, current_state
500 )
501 state_delta_for_room[room_id] = delta
502
503 # If we have the current_state then let's prefill
504 # the cache with it.
505 if current_state is not None:
506 current_state_for_room[room_id] = current_state
488 state_delta_for_room[room_id] = delta
489
490 # If we have the current_state then let's prefill
491 # the cache with it.
492 if current_state is not None:
493 current_state_for_room[room_id] = current_state
494
495 # We want to calculate the stream orderings as late as possible, as
496 # we only notify after all events with a lesser stream ordering have
497 # been persisted. I.e. if we spend 10s inside the with block then
498 # that will delay all subsequent events from being notified about.
499 # Hence why we do it down here rather than wrapping the entire
500 # function.
501 #
502 # Its safe to do this after calculating the state deltas etc as we
503 # only need to protect the *persistence* of the events. This is to
504 # ensure that queries of the form "fetch events since X" don't
505 # return events and stream positions after events that are still in
506 # flight, as otherwise subsequent requests "fetch event since Y"
507 # will not return those events.
508 #
509 # Note: Multiple instances of this function cannot be in flight at
510 # the same time for the same room.
511 if backfilled:
512 stream_ordering_manager = self._backfill_id_gen.get_next_mult(
513 len(chunk)
514 )
515 else:
516 stream_ordering_manager = self._stream_id_gen.get_next_mult(len(chunk))
517
518 with stream_ordering_manager as stream_orderings:
519 for (event, context), stream in zip(chunk, stream_orderings):
520 event.internal_metadata.stream_ordering = stream
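The long comment above explains why the stream orderings are claimed as late as possible: the `with` block must cover only the actual persistence, or every later event's notification is delayed. A minimal sketch (hypothetical, far simpler than Synapse's real `StreamIdGenerator`, which also tracks in-flight ids) of a context manager handing out one contiguous ordering per event in the chunk:

```python
import contextlib

class SimpleStreamIdGenerator:
    """Hypothetical stand-in for Synapse's stream id generator: hands
    out n consecutive stream orderings per call."""

    def __init__(self, start=0):
        self._current = start

    @contextlib.contextmanager
    def get_next_mult(self, n):
        # The real generator also remembers which ids are still "in
        # flight" so "fetch events since X" never observes a gap; this
        # sketch omits that bookkeeping.
        first = self._current + 1
        self._current += n
        yield list(range(first, first + n))

gen = SimpleStreamIdGenerator()
with gen.get_next_mult(3) as stream_orderings:
    pass
# stream_orderings == [1, 2, 3]
with gen.get_next_mult(2) as more:
    pass
# more == [4, 5]
```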
507521
508522 yield self.runInteraction(
509523 "persist_events",
594608 stale = latest_event_ids & result
595609 stale_forward_extremities_counter.observe(len(stale))
596610
597 defer.returnValue(result)
611 return result
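Most hunks in this diff swap `defer.returnValue(result)` for a bare `return result`. This works because, since Python 3.3 (PEP 380), a generator may return a value, delivered to the caller as `StopIteration.value`, which is exactly what Twisted's `@defer.inlineCallbacks` consumes. A sketch of the mechanism using a hypothetical plain-generator driver instead of Twisted itself:

```python
def fetch_result():
    # Stands in for an @defer.inlineCallbacks method: each yield would
    # normally hand back a Deferred to wait on.
    yield "pretend-deferred"
    # On Python 3 a bare return replaces defer.returnValue(...)
    return 42

def run(gen):
    """Drive a generator to exhaustion and capture its return value,
    which arrives as StopIteration.value (PEP 380)."""
    try:
        while True:
            next(gen)
    except StopIteration as e:
        return e.value

result = run(fetch_result())
# result == 42
```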
598612
599613 @defer.inlineCallbacks
600614 def _get_events_which_are_prevs(self, event_ids):
632646 "_get_events_which_are_prevs", _get_events_which_are_prevs_txn, chunk
633647 )
634648
635 defer.returnValue(results)
649 return results
636650
637651 @defer.inlineCallbacks
638652 def _get_prevs_before_rejected(self, event_ids):
694708 "_get_prevs_before_rejected", _get_prevs_before_rejected_txn, chunk
695709 )
696710
697 defer.returnValue(existing_prevs)
711 return existing_prevs
698712
699713 @defer.inlineCallbacks
700714 def _get_new_state_after_events(
794808 # If the old and new groups are the same then we don't need to do
796810 # anything.
797811 if old_state_groups == new_state_groups:
798 defer.returnValue((None, None))
812 return (None, None)
799813
800814 if len(new_state_groups) == 1 and len(old_state_groups) == 1:
801815 # If we're going from one state group to another, let's check if
812826 # the current state in memory then lets also return that,
813827 # but it doesn't matter if we don't.
814828 new_state = state_groups_map.get(new_state_group)
815 defer.returnValue((new_state, delta_ids))
829 return (new_state, delta_ids)
816830
817831 # Now that we have calculated new_state_groups we need to get
818832 # their state IDs so we can resolve to a single state set.
824838 if len(new_state_groups) == 1:
825839 # If there is only one state group, then we know what the current
826840 # state is.
827 defer.returnValue((state_groups_map[new_state_groups.pop()], None))
841 return (state_groups_map[new_state_groups.pop()], None)
828842
829843 # Ok, we need to defer to the state handler to resolve our state sets.
830844
853867 state_res_store=StateResolutionStore(self),
854868 )
855869
856 defer.returnValue((res.state, None))
870 return (res.state, None)
857871
858872 @defer.inlineCallbacks
859873 def _calculate_state_delta(self, room_id, current_state):
876890 if ev_id != existing_state.get(key)
877891 }
878892
879 defer.returnValue((to_delete, to_insert))
893 return (to_delete, to_insert)
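The body of `_calculate_state_delta` (partially visible above) boils down to a dictionary diff: keys present before but absent now are deleted, and keys whose event id changed are inserted or replaced. A self-contained sketch of that diff with simplified, made-up inputs:

```python
def calculate_state_delta(existing_state, current_state):
    """Diff two maps of (type, state_key) -> event_id, returning the
    keys to delete and the entries to insert/replace."""
    to_delete = [key for key in existing_state if key not in current_state]
    to_insert = {
        key: ev_id
        for key, ev_id in current_state.items()
        if ev_id != existing_state.get(key)
    }
    return to_delete, to_insert

old_state = {("m.room.name", ""): "$n1", ("m.room.topic", ""): "$t1"}
new_state = {("m.room.name", ""): "$n2"}
to_delete, to_insert = calculate_state_delta(old_state, new_state)
# to_delete == [("m.room.topic", "")]
# to_insert == {("m.room.name", ""): "$n2"}
```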
880894
881895 @log_function
882896 def _persist_events_txn(
916930
917931 min_stream_order = events_and_contexts[0][0].internal_metadata.stream_ordering
918932 max_stream_order = events_and_contexts[-1][0].internal_metadata.stream_ordering
919
920 self._update_current_state_txn(txn, state_delta_for_room, min_stream_order)
921933
922934 self._update_forward_extremities_txn(
923935 txn,
9911003 all_events_and_contexts=all_events_and_contexts,
9921004 backfilled=backfilled,
9931005 )
1006
1007 # We call this last as it assumes we've inserted the events into
1008 # room_memberships, where applicable.
1009 self._update_current_state_txn(txn, state_delta_for_room, min_stream_order)
9941010
9951011 def _update_current_state_txn(self, txn, state_delta_by_room, stream_id):
9961012 for room_id, current_state_tuple in iteritems(state_delta_by_room):
10611077 ),
10621078 )
10631079
1064 self._simple_insert_many_txn(
1065 txn,
1066 table="current_state_events",
1067 values=[
1068 {
1069 "event_id": ev_id,
1070 "room_id": room_id,
1071 "type": key[0],
1072 "state_key": key[1],
1073 }
1080 # We include the membership in the current state table, hence we do
1081 # a lookup when we insert. This assumes that all events have already
1082 # been inserted into room_memberships.
1083 txn.executemany(
1084 """INSERT INTO current_state_events
1085 (room_id, type, state_key, event_id, membership)
1086 VALUES (?, ?, ?, ?, (SELECT membership FROM room_memberships WHERE event_id = ?))
1087 """,
1088 [
1089 (room_id, key[0], key[1], ev_id, ev_id)
10741090 for key, ev_id in iteritems(to_insert)
10751091 ],
10761092 )
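The replacement hunk above fills the new `membership` column with a correlated scalar subquery, which is why each event id is bound twice per row. The same pattern can be exercised against an in-memory SQLite database (table schemas simplified for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE room_memberships (event_id TEXT PRIMARY KEY, membership TEXT);
    CREATE TABLE current_state_events (
        room_id TEXT, type TEXT, state_key TEXT,
        event_id TEXT, membership TEXT
    );
    """
)
conn.execute("INSERT INTO room_memberships VALUES ('$ev1', 'join')")

room_id = "!room:example.com"
to_insert = {("m.room.member", "@alice:example.com"): "$ev1"}

# Same shape as the diff: membership comes from a scalar subquery, so
# ev_id appears twice in each parameter tuple.
conn.executemany(
    """INSERT INTO current_state_events
       (room_id, type, state_key, event_id, membership)
       VALUES (?, ?, ?, ?, (SELECT membership FROM room_memberships WHERE event_id = ?))
    """,
    [(room_id, key[0], key[1], ev_id, ev_id) for key, ev_id in to_insert.items()],
)

row = conn.execute("SELECT membership FROM current_state_events").fetchone()
# row[0] == 'join'
```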
15611577 return count
15621578
15631579 ret = yield self.runInteraction("count_messages", _count_messages)
1564 defer.returnValue(ret)
1580 return ret
15651581
15661582 @defer.inlineCallbacks
15671583 def count_daily_sent_messages(self):
15821598 return count
15831599
15841600 ret = yield self.runInteraction("count_daily_sent_messages", _count_messages)
1585 defer.returnValue(ret)
1601 return ret
15861602
15871603 @defer.inlineCallbacks
15881604 def count_daily_active_rooms(self):
15971613 return count
15981614
15991615 ret = yield self.runInteraction("count_daily_active_rooms", _count)
1600 defer.returnValue(ret)
1616 return ret
16011617
16021618 def get_current_backfill_token(self):
16031619 """The current minimum token that backfilled events have reached"""
21802196 """
21812197 to_1, so_1 = yield self._get_event_ordering(event_id1)
21822198 to_2, so_2 = yield self._get_event_ordering(event_id2)
2183 defer.returnValue((to_1, so_1) > (to_2, so_2))
2199 return (to_1, so_1) > (to_2, so_2)
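`is_event_after` now returns the tuple comparison directly. Python compares tuples lexicographically, so `(topological_ordering, stream_ordering)` pairs order events by topological position first, with stream ordering as the tie-breaker:

```python
def is_event_after(ordering_1, ordering_2):
    # Each argument is a (topological_ordering, stream_ordering) pair,
    # as returned by _get_event_ordering.
    return ordering_1 > ordering_2

later = is_event_after((2, 5), (1, 9))  # higher topological ordering wins outright
tie = is_event_after((2, 5), (2, 4))    # same topological ordering: stream decides
```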
21842200
21852201 @cachedInlineCallbacks(max_entries=5000)
21862202 def _get_event_ordering(self, event_id):
21942210 if not res:
21952211 raise SynapseError(404, "Could not find event %s" % (event_id,))
21962212
2197 defer.returnValue(
2198 (int(res["topological_ordering"]), int(res["stream_ordering"]))
2199 )
2213 return (int(res["topological_ordering"]), int(res["stream_ordering"]))
22002214
22012215 def get_all_updated_current_state_deltas(self, from_token, to_token, limit):
22022216 def get_all_updated_current_state_deltas_txn(txn):
134134 if not result:
135135 yield self._end_background_update(self.EVENT_FIELDS_SENDER_URL_UPDATE_NAME)
136136
137 defer.returnValue(result)
137 return result
138138
139139 @defer.inlineCallbacks
140140 def _background_reindex_origin_server_ts(self, progress, batch_size):
211211 if not result:
212212 yield self._end_background_update(self.EVENT_ORIGIN_SERVER_TS_NAME)
213213
214 defer.returnValue(result)
214 return result
215215
216216 @defer.inlineCallbacks
217217 def _cleanup_extremities_bg_update(self, progress, batch_size):
395395 "_cleanup_extremities_bg_update_drop_table", _drop_table_txn
396396 )
397397
398 defer.returnValue(num_handled)
398 return num_handled
2828 from synapse.events import FrozenEvent, event_type_from_format_version # noqa: F401
2929 from synapse.events.snapshot import EventContext # noqa: F401
3030 from synapse.events.utils import prune_event
31 from synapse.logging.context import (
32 LoggingContext,
33 PreserveLoggingContext,
34 make_deferred_yieldable,
35 run_in_background,
36 )
31 from synapse.logging.context import LoggingContext, PreserveLoggingContext
3732 from synapse.metrics.background_process_metrics import run_as_background_process
3833 from synapse.types import get_domain_from_id
3934 from synapse.util import batch_iter
138133 If there is a mismatch, behave as per allow_none.
139134
140135 Returns:
141 Deferred : A FrozenEvent.
142 """
136 Deferred[EventBase|None]
137 """
138 if not isinstance(event_id, str):
139 raise TypeError("Invalid event event_id %r" % (event_id,))
140
143141 events = yield self.get_events_as_list(
144142 [event_id],
145143 check_redacted=check_redacted,
156154 if event is None and not allow_none:
157155 raise NotFoundError("Could not find event %s" % (event_id,))
158156
159 defer.returnValue(event)
157 return event
160158
161159 @defer.inlineCallbacks
162160 def get_events(
186184 allow_rejected=allow_rejected,
187185 )
188186
189 defer.returnValue({e.event_id: e for e in events})
187 return {e.event_id: e for e in events}
190188
191189 @defer.inlineCallbacks
192190 def get_events_as_list(
216214 """
217215
218216 if not event_ids:
219 defer.returnValue([])
217 return []
220218
221219 # there may be duplicates so we cast the list to a set
222220 event_entry_map = yield self._get_events_from_cache_or_db(
312310 event.unsigned["prev_content"] = prev.content
313311 event.unsigned["prev_sender"] = prev.sender
314312
315 defer.returnValue(events)
313 return events
316314
317315 @defer.inlineCallbacks
318316 def _get_events_from_cache_or_db(self, event_ids, allow_rejected=False):
338336 log_ctx = LoggingContext.current_context()
339337 log_ctx.record_event_fetch(len(missing_events_ids))
340338
341 # Note that _enqueue_events is also responsible for turning db rows
339 # Note that _get_events_from_db is also responsible for turning db rows
342340 # into FrozenEvents (via _get_event_from_row), which involves seeing if
343341 # the events have been redacted, and if so pulling the redaction event out
344342 # of the database to check it.
345343 #
346 # _enqueue_events is a bit of a rubbish name but naming is hard.
347 missing_events = yield self._enqueue_events(
344 missing_events = yield self._get_events_from_db(
348345 missing_events_ids, allow_rejected=allow_rejected
349346 )
350347
417414 The fetch requests. Each entry consists of a list of event
418415 ids to be fetched, and a deferred to be completed once the
419416 events have been fetched.
417
418 The deferreds are callbacked with a dictionary mapping from event id
419 to event row. Note that it may well contain additional events that
420 were not part of this request.
420421 """
421422 with Measure(self._clock, "_fetch_event_list"):
422423 try:
423 event_id_lists = list(zip(*event_list))[0]
424 event_ids = [item for sublist in event_id_lists for item in sublist]
424 events_to_fetch = set(
425 event_id for events, _ in event_list for event_id in events
426 )
425427
426428 row_dict = self._new_transaction(
427 conn, "do_fetch", [], [], self._fetch_event_rows, event_ids
429 conn, "do_fetch", [], [], self._fetch_event_rows, events_to_fetch
428430 )
429431
430432 # We only want to resolve deferreds from the main thread
431 def fire(lst, res):
432 for ids, d in lst:
433 if not d.called:
434 try:
435 with PreserveLoggingContext():
436 d.callback([res[i] for i in ids if i in res])
437 except Exception:
438 logger.exception("Failed to callback")
433 def fire():
434 for _, d in event_list:
435 d.callback(row_dict)
439436
440437 with PreserveLoggingContext():
441 self.hs.get_reactor().callFromThread(fire, event_list, row_dict)
438 self.hs.get_reactor().callFromThread(fire)
442439 except Exception as e:
443440 logger.exception("do_fetch")
444441
453450 self.hs.get_reactor().callFromThread(fire, event_list, e)
454451
455452 @defer.inlineCallbacks
456 def _enqueue_events(self, events, allow_rejected=False):
453 def _get_events_from_db(self, event_ids, allow_rejected=False):
454 """Fetch a bunch of events from the database.
455
456 Returned events will be added to the cache for future lookups.
457
458 Args:
459 event_ids (Iterable[str]): The event_ids of the events to fetch
460 allow_rejected (bool): Whether to include rejected events
461
462 Returns:
463 Deferred[Dict[str, _EventCacheEntry]]:
464 map from event id to result. May return extra events which
465 weren't asked for.
466 """
467 fetched_events = {}
468 events_to_fetch = event_ids
469
470 while events_to_fetch:
471 row_map = yield self._enqueue_events(events_to_fetch)
472
473 # we need to recursively fetch any redactions of those events
474 redaction_ids = set()
475 for event_id in events_to_fetch:
476 row = row_map.get(event_id)
477 fetched_events[event_id] = row
478 if row:
479 redaction_ids.update(row["redactions"])
480
481 events_to_fetch = redaction_ids.difference(fetched_events.keys())
482 if events_to_fetch:
483 logger.debug("Also fetching redaction events %s", events_to_fetch)
484
485 # build a map from event_id to EventBase
486 event_map = {}
487 for event_id, row in fetched_events.items():
488 if not row:
489 continue
490 assert row["event_id"] == event_id
491
492 rejected_reason = row["rejected_reason"]
493
494 if not allow_rejected and rejected_reason:
495 continue
496
497 d = json.loads(row["json"])
498 internal_metadata = json.loads(row["internal_metadata"])
499
500 format_version = row["format_version"]
501 if format_version is None:
502 # This means that we stored the event before we had the concept
503 # of an event format version, so it must be a V1 event.
504 format_version = EventFormatVersions.V1
505
506 original_ev = event_type_from_format_version(format_version)(
507 event_dict=d,
508 internal_metadata_dict=internal_metadata,
509 rejected_reason=rejected_reason,
510 )
511
512 event_map[event_id] = original_ev
513
514 # finally, we can decide whether each one needs redacting, and build
515 # the cache entries.
516 result_map = {}
517 for event_id, original_ev in event_map.items():
518 redactions = fetched_events[event_id]["redactions"]
519 redacted_event = self._maybe_redact_event_row(
520 original_ev, redactions, event_map
521 )
522
523 cache_entry = _EventCacheEntry(
524 event=original_ev, redacted_event=redacted_event
525 )
526
527 self._get_event_cache.prefill((event_id,), cache_entry)
528 result_map[event_id] = cache_entry
529
530 return result_map
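The `while events_to_fetch` loop above chases redactions transitively: each batch of rows may name redaction events that have not been fetched yet, and the loop repeats until that set is closed. A standalone sketch of the fixpoint, with a plain dict standing in for the database call:

```python
def fetch_with_redactions(fetch_rows, event_ids):
    """Keep fetching until every referenced redaction event has also
    been fetched. fetch_rows maps each id to a row dict with a
    "redactions" list, or None if unknown."""
    fetched = {}
    to_fetch = set(event_ids)
    while to_fetch:
        rows = fetch_rows(to_fetch)
        redaction_ids = set()
        for event_id in to_fetch:
            row = rows.get(event_id)
            fetched[event_id] = row
            if row:
                redaction_ids.update(row["redactions"])
        # only chase redactions we haven't already fetched
        to_fetch = redaction_ids.difference(fetched.keys())
    return fetched

db = {
    "$a": {"redactions": ["$r1"]},
    "$r1": {"redactions": []},
}
result = fetch_with_redactions(lambda ids: {i: db.get(i) for i in ids}, ["$a"])
# result contains both "$a" and its redaction "$r1"
```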
531
532 @defer.inlineCallbacks
533 def _enqueue_events(self, events):
457534 """Fetches events from the database using the _event_fetch_list. This
458535 allows batch and bulk fetching of events - it allows us to fetch events
459536 without having to create a new transaction for each request for events.
460 """
461 if not events:
462 defer.returnValue({})
537
538 Args:
539 events (Iterable[str]): events to be fetched.
540
541 Returns:
542 Deferred[Dict[str, Dict]]: map from event id to row data from the database.
543 May contain events that weren't requested.
544 """
463545
464546 events_d = defer.Deferred()
465547 with self._event_fetch_lock:
478560 "fetch_events", self.runWithConnection, self._do_fetch
479561 )
480562
481 logger.debug("Loading %d events", len(events))
563 logger.debug("Loading %d events: %s", len(events), events)
482564 with PreserveLoggingContext():
483 rows = yield events_d
484 logger.debug("Loaded %d events (%d rows)", len(events), len(rows))
485
486 if not allow_rejected:
487 rows[:] = [r for r in rows if r["rejected_reason"] is None]
488
489 res = yield make_deferred_yieldable(
490 defer.gatherResults(
491 [
492 run_in_background(
493 self._get_event_from_row,
494 row["internal_metadata"],
495 row["json"],
496 row["redactions"],
497 rejected_reason=row["rejected_reason"],
498 format_version=row["format_version"],
499 )
500 for row in rows
501 ],
502 consumeErrors=True,
503 )
504 )
505
506 defer.returnValue({e.event.event_id: e for e in res if e})
565 row_map = yield events_d
566 logger.debug("Loaded %d events (%d rows)", len(events), len(row_map))
567
568 return row_map
507569
508570 def _fetch_event_rows(self, txn, event_ids):
509571 """Fetch event rows from the database
576638
577639 return event_dict
578640
579 @defer.inlineCallbacks
580 def _get_event_from_row(
581 self, internal_metadata, js, redactions, format_version, rejected_reason=None
582 ):
583 """Parse an event row which has been read from the database
584
585 Args:
586 internal_metadata (str): json-encoded internal_metadata column
587 js (str): json-encoded event body from event_json
588 redactions (list[str]): a list of the events which claim to have redacted
589 this event, from the redactions table
590 format_version: (str): the 'format_version' column
591 rejected_reason (str|None): the reason this event was rejected, if any
592
593 Returns:
594 _EventCacheEntry
595 """
596 with Measure(self._clock, "_get_event_from_row"):
597 d = json.loads(js)
598 internal_metadata = json.loads(internal_metadata)
599
600 if format_version is None:
601 # This means that we stored the event before we had the concept
602 # of a event format version, so it must be a V1 event.
603 format_version = EventFormatVersions.V1
604
605 original_ev = event_type_from_format_version(format_version)(
606 event_dict=d,
607 internal_metadata_dict=internal_metadata,
608 rejected_reason=rejected_reason,
609 )
610
611 redacted_event = yield self._maybe_redact_event_row(original_ev, redactions)
612
613 cache_entry = _EventCacheEntry(
614 event=original_ev, redacted_event=redacted_event
615 )
616
617 self._get_event_cache.prefill((original_ev.event_id,), cache_entry)
618
619 defer.returnValue(cache_entry)
620
621 @defer.inlineCallbacks
622 def _maybe_redact_event_row(self, original_ev, redactions):
641 def _maybe_redact_event_row(self, original_ev, redactions, event_map):
623642 """Given an event object and a list of possible redacting event ids,
624643 determine whether to honour any of those redactions and if so return a redacted
625644 event.
627646 Args:
628647 original_ev (EventBase):
629648 redactions (iterable[str]): list of event ids of potential redaction events
649 event_map (dict[str, EventBase]): other events which have been fetched, in
650 # which we can look up the redaction events. Map from event id to event.
630651
631652 Returns:
632653 Deferred[EventBase|None]: if the event should be redacted, a pruned
636657 # we choose to ignore redactions of m.room.create events.
637658 return None
638659
639 if original_ev.type == "m.room.redaction":
640 # ... and redaction events
641 return None
642
643 redaction_map = yield self._get_events_from_cache_or_db(redactions)
644
645660 for redaction_id in redactions:
646 redaction_entry = redaction_map.get(redaction_id)
647 if not redaction_entry:
661 redaction_event = event_map.get(redaction_id)
662 if not redaction_event or redaction_event.rejected_reason:
648663 # we don't have the redaction event, or the redaction event was not
649664 # authorized.
650665 logger.debug(
654669 )
655670 continue
656671
657 redaction_event = redaction_entry.event
658672 if redaction_event.room_id != original_ev.room_id:
659673 logger.debug(
660674 "%s was redacted by %s but redaction was in a different room!",
709723 desc="have_events_in_timeline",
710724 )
711725
712 defer.returnValue(set(r["event_id"] for r in rows))
726 return set(r["event_id"] for r in rows)
713727
714728 @defer.inlineCallbacks
715729 def have_seen_events(self, event_ids):
735749 input_iterator = iter(event_ids)
736750 for chunk in iter(lambda: list(itertools.islice(input_iterator, 100)), []):
737751 yield self.runInteraction("have_seen_events", have_seen_events_txn, chunk)
738 defer.returnValue(results)
752 return results
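`have_seen_events` above batches its ids 100 at a time using the two-argument form of `iter()`: the callable is invoked repeatedly until it returns the sentinel (an empty list). The idiom in isolation:

```python
import itertools

def chunks(iterable, size):
    """Yield successive lists of up to size items, as in
    have_seen_events (which uses size=100)."""
    it = iter(iterable)
    # iter(callable, sentinel) calls the lambda until it returns []
    return iter(lambda: list(itertools.islice(it, size)), [])

batches = list(chunks(range(7), 3))
# batches == [[0, 1, 2], [3, 4, 5], [6]]
```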
739753
740754 def get_seen_events_with_rejections(self, event_ids):
741755 """Given a list of event ids, check if we rejected them.
846860 # it.
847861 complexity_v1 = round(state_events / 500, 2)
848862
849 defer.returnValue({"v1": complexity_v1})
863 return {"v1": complexity_v1}
1313 # limitations under the License.
1414
1515 from canonicaljson import encode_canonical_json
16
17 from twisted.internet import defer
1816
1917 from synapse.api.errors import Codes, SynapseError
2018 from synapse.util.caches.descriptors import cachedInlineCallbacks
4038 desc="get_user_filter",
4139 )
4240
43 defer.returnValue(db_to_json(def_json))
41 return db_to_json(def_json)
4442
4543 def add_user_filter(self, user_localpart, user_filter):
4644 def_json = encode_canonical_json(user_filter)
306306 desc="get_group_categories",
307307 )
308308
309 defer.returnValue(
310 {
311 row["category_id"]: {
312 "is_public": row["is_public"],
313 "profile": json.loads(row["profile"]),
314 }
315 for row in rows
309 return {
310 row["category_id"]: {
311 "is_public": row["is_public"],
312 "profile": json.loads(row["profile"]),
316313 }
317 )
314 for row in rows
315 }
318316
319317 @defer.inlineCallbacks
320318 def get_group_category(self, group_id, category_id):
327325
328326 category["profile"] = json.loads(category["profile"])
329327
330 defer.returnValue(category)
328 return category
331329
332330 def upsert_group_category(self, group_id, category_id, profile, is_public):
333331 """Add/update room category for group
369367 desc="get_group_roles",
370368 )
371369
372 defer.returnValue(
373 {
374 row["role_id"]: {
375 "is_public": row["is_public"],
376 "profile": json.loads(row["profile"]),
377 }
378 for row in rows
370 return {
371 row["role_id"]: {
372 "is_public": row["is_public"],
373 "profile": json.loads(row["profile"]),
379374 }
380 )
375 for row in rows
376 }
381377
382378 @defer.inlineCallbacks
383379 def get_group_role(self, group_id, role_id):
390386
391387 role["profile"] = json.loads(role["profile"])
392388
393 defer.returnValue(role)
389 return role
394390
395391 def upsert_group_role(self, group_id, role_id, profile, is_public):
396392 """Add/remove user role
959955 _register_user_group_membership_txn,
960956 next_id,
961957 )
962 defer.returnValue(res)
958 return res
963959
964960 @defer.inlineCallbacks
965961 def create_group(
10561052
10571053 now = int(self._clock.time_msec())
10581054 if row and now < row["valid_until_ms"]:
1059 defer.returnValue(json.loads(row["attestation_json"]))
1060
1061 defer.returnValue(None)
1055 return json.loads(row["attestation_json"])
1056
1057 return None
10621058
10631059 def get_joined_groups(self, user_id):
10641060 return self._simple_select_onecol(
172172 )
173173 if user_id:
174174 count = count + 1
175 defer.returnValue(count)
175 return count
176176
177177 @defer.inlineCallbacks
178178 def upsert_monthly_active_user(self, user_id):
2626
2727 # Remember to update this number every time a change is made to database
2828 # schema files, so the users will be informed on server restarts.
29 SCHEMA_VERSION = 55
29 SCHEMA_VERSION = 56
3030
3131 dir_path = os.path.abspath(os.path.dirname(__file__))
3232
8989 presence_states,
9090 )
9191
92 defer.returnValue(
93 (stream_orderings[-1], self._presence_id_gen.get_current_token())
94 )
92 return (stream_orderings[-1], self._presence_id_gen.get_current_token())
9593
9694 def _update_presence_txn(self, txn, stream_orderings, presence_states):
9795 for stream_id, state in zip(stream_orderings, presence_states):
179177 for row in rows:
180178 row["currently_active"] = bool(row["currently_active"])
181179
182 defer.returnValue({row["user_id"]: UserPresenceState(**row) for row in rows})
180 return {row["user_id"]: UserPresenceState(**row) for row in rows}
183181
184182 def get_current_presence_token(self):
185183 return self._presence_id_gen.get_current_token()
3333 except StoreError as e:
3434 if e.code == 404:
3535 # no match
36 defer.returnValue(ProfileInfo(None, None))
36 return ProfileInfo(None, None)
3737 return
3838 else:
3939 raise
4040
41 defer.returnValue(
42 ProfileInfo(
43 avatar_url=profile["avatar_url"], display_name=profile["displayname"]
44 )
41 return ProfileInfo(
42 avatar_url=profile["avatar_url"], display_name=profile["displayname"]
4543 )
4644
4745 def get_profile_displayname(self, user_localpart):
167165 )
168166
169167 if res:
170 defer.returnValue(True)
168 return True
171169
172170 res = yield self._simple_select_one_onecol(
173171 table="group_invites",
178176 )
179177
180178 if res:
181 defer.returnValue(True)
179 return True
119119
120120 rules = _load_rules(rows, enabled_map)
121121
122 defer.returnValue(rules)
122 return rules
123123
124124 @cachedInlineCallbacks(max_entries=5000)
125125 def get_push_rules_enabled_for_user(self, user_id):
129129 retcols=("user_name", "rule_id", "enabled"),
130130 desc="get_push_rules_enabled_for_user",
131131 )
132 defer.returnValue(
133 {r["rule_id"]: False if r["enabled"] == 0 else True for r in results}
134 )
132 return {r["rule_id"]: False if r["enabled"] == 0 else True for r in results}
135133
136134 def have_push_rules_changed_for_user(self, user_id, last_id):
137135 if not self.push_rules_stream_cache.has_entity_changed(user_id, last_id):
159157 )
160158 def bulk_get_push_rules(self, user_ids):
161159 if not user_ids:
162 defer.returnValue({})
160 return {}
163161
164162 results = {user_id: [] for user_id in user_ids}
165163
181179 for user_id, rules in results.items():
182180 results[user_id] = _load_rules(rules, enabled_map_by_user.get(user_id, {}))
183181
184 defer.returnValue(results)
182 return results
185183
186184 @defer.inlineCallbacks
187185 def move_push_rule_from_room_to_room(self, new_room_id, user_id, rule):
252250 result = yield self._bulk_get_push_rules_for_room(
253251 event.room_id, state_group, current_state_ids, event=event
254252 )
255 defer.returnValue(result)
253 return result
256254
257255 @cachedInlineCallbacks(num_args=2, cache_context=True)
258256 def _bulk_get_push_rules_for_room(
311309
312310 rules_by_user = {k: v for k, v in rules_by_user.items() if v is not None}
313311
314 defer.returnValue(rules_by_user)
312 return rules_by_user
315313
316314 @cachedList(
317315 cached_method_name="get_push_rules_enabled_for_user",
321319 )
322320 def bulk_get_push_rules_enabled(self, user_ids):
323321 if not user_ids:
324 defer.returnValue({})
322 return {}
325323
326324 results = {user_id: {} for user_id in user_ids}
327325
335333 for row in rows:
336334 enabled = bool(row["enabled"])
337335 results.setdefault(row["user_name"], {})[row["rule_id"]] = enabled
338 defer.returnValue(results)
336 return results
339337
340338
341339 class PushRuleStore(PushRulesWorkerStore):
6262 ret = yield self._simple_select_one_onecol(
6363 "pushers", {"user_name": user_id}, "id", allow_none=True
6464 )
65 defer.returnValue(ret is not None)
65 return ret is not None
6666
6767 def get_pushers_by_app_id_and_pushkey(self, app_id, pushkey):
6868 return self.get_pushers_by({"app_id": app_id, "pushkey": pushkey})
9494 ],
9595 desc="get_pushers_by",
9696 )
97 defer.returnValue(self._decode_pushers_rows(ret))
97 return self._decode_pushers_rows(ret)
9898
9999 @defer.inlineCallbacks
100100 def get_all_pushers(self):
105105 return self._decode_pushers_rows(rows)
106106
107107 rows = yield self.runInteraction("get_all_pushers", get_pushers)
108 defer.returnValue(rows)
108 return rows
109109
110110 def get_all_updated_pushers(self, last_id, current_id, limit):
111111 if last_id == current_id:
204204 result = {user_id: False for user_id in user_ids}
205205 result.update({r["user_name"]: True for r in rows})
206206
207 defer.returnValue(result)
207 return result
208208
209209
210210 class PusherStore(PusherWorkerStore):
307307 def update_pusher_last_stream_ordering_and_success(
308308 self, app_id, pushkey, user_id, last_stream_ordering, last_success
309309 ):
310 yield self._simple_update_one(
311 "pushers",
312 {"app_id": app_id, "pushkey": pushkey, "user_name": user_id},
313 {
310 """Update the last stream ordering position we've processed up to for
311 the given pusher.
312
313 Args:
314 app_id (str)
315 pushkey (str)
316 last_stream_ordering (int)
317 last_success (int)
318
319 Returns:
320 Deferred[bool]: True if the pusher still exists; False if it has been deleted.
321 """
322 updated = yield self._simple_update(
323 table="pushers",
324 keyvalues={"app_id": app_id, "pushkey": pushkey, "user_name": user_id},
325 updatevalues={
314326 "last_stream_ordering": last_stream_ordering,
315327 "last_success": last_success,
316328 },
317329 desc="update_pusher_last_stream_ordering_and_success",
318330 )
319331
332 return bool(updated)
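The switch from `_simple_update_one` to `_simple_update` lets the caller learn how many rows matched, so it can detect that the pusher was deleted out from under it rather than raising. The underlying mechanism is the DB cursor's `rowcount`, shown here with sqlite3 and a simplified table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pushers (app_id TEXT, last_success INTEGER)")
conn.execute("INSERT INTO pushers VALUES ('app1', 0)")

cur = conn.execute(
    "UPDATE pushers SET last_success = ? WHERE app_id = ?", (123, "app1")
)
exists = bool(cur.rowcount)  # a row was updated: the pusher still exists

cur = conn.execute(
    "UPDATE pushers SET last_success = ? WHERE app_id = ?", (123, "gone")
)
deleted = not cur.rowcount   # nothing matched: the pusher has been deleted
```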
333
320334 @defer.inlineCallbacks
321335 def update_pusher_failing_since(self, app_id, pushkey, user_id, failing_since):
322 yield self._simple_update_one(
323 "pushers",
324 {"app_id": app_id, "pushkey": pushkey, "user_name": user_id},
325 {"failing_since": failing_since},
336 yield self._simple_update(
337 table="pushers",
338 keyvalues={"app_id": app_id, "pushkey": pushkey, "user_name": user_id},
339 updatevalues={"failing_since": failing_since},
326340 desc="update_pusher_failing_since",
327341 )
328342
342356 "throttle_ms": row["throttle_ms"],
343357 }
344358
345 defer.returnValue(params_by_room)
359 return params_by_room
346360
347361 @defer.inlineCallbacks
348362 def set_throttle_params(self, pusher_id, room_id, params):
5757 @cachedInlineCallbacks()
5858 def get_users_with_read_receipts_in_room(self, room_id):
5959 receipts = yield self.get_receipts_for_room(room_id, "m.read")
60 defer.returnValue(set(r["user_id"] for r in receipts))
60 return set(r["user_id"] for r in receipts)
6161
6262 @cached(num_args=2)
6363 def get_receipts_for_room(self, room_id, receipt_type):
9191 desc="get_receipts_for_user",
9292 )
9393
94 defer.returnValue({row["room_id"]: row["event_id"] for row in rows})
94 return {row["room_id"]: row["event_id"] for row in rows}
9595
9696 @defer.inlineCallbacks
9797 def get_receipts_for_user_with_orderings(self, user_id, receipt_type):
109109 return txn.fetchall()
110110
111111 rows = yield self.runInteraction("get_receipts_for_user_with_orderings", f)
112 defer.returnValue(
113 {
114 row[0]: {
115 "event_id": row[1],
116 "topological_ordering": row[2],
117 "stream_ordering": row[3],
118 }
119 for row in rows
112 return {
113 row[0]: {
114 "event_id": row[1],
115 "topological_ordering": row[2],
116 "stream_ordering": row[3],
120117 }
121 )
118 for row in rows
119 }
122120
123121 @defer.inlineCallbacks
124122 def get_linearized_receipts_for_rooms(self, room_ids, to_key, from_key=None):
146144 room_ids, to_key, from_key=from_key
147145 )
148146
149 defer.returnValue([ev for res in results.values() for ev in res])
147 return [ev for res in results.values() for ev in res]
150148
151149 def get_linearized_receipts_for_room(self, room_id, to_key, from_key=None):
152150 """Get receipts for a single room for sending to clients.
196194 rows = yield self.runInteraction("get_linearized_receipts_for_room", f)
197195
198196 if not rows:
199 defer.returnValue([])
197 return []
200198
201199 content = {}
202200 for row in rows:
204202 row["user_id"]
205203 ] = json.loads(row["data"])
206204
207 defer.returnValue(
208 [{"type": "m.receipt", "room_id": room_id, "content": content}]
209 )
205 return [{"type": "m.receipt", "room_id": room_id, "content": content}]
210206
211207 @cachedList(
212208 cached_method_name="_get_linearized_receipts_for_room",
216212 )
217213 def _get_linearized_receipts_for_rooms(self, room_ids, to_key, from_key=None):
218214 if not room_ids:
219 defer.returnValue({})
215 return {}
220216
221217 def f(txn):
222218 if from_key:
263259 room_id: [results[room_id]] if room_id in results else []
264260 for room_id in room_ids
265261 }
266 defer.returnValue(results)
262 return results
267263
268264 def get_all_updated_receipts(self, last_id, current_id, limit=None):
269265 if last_id == current_id:
467463 )
468464
469465 if event_ts is None:
470 defer.returnValue(None)
466 return None
471467
472468 now = self._clock.time_msec()
473469 logger.debug(
481477
482478 max_persisted_id = self._receipts_id_gen.get_current_token()
483479
484 defer.returnValue((stream_id, max_persisted_id))
480 return (stream_id, max_persisted_id)
485481
486482 def insert_graph_receipt(self, room_id, receipt_type, user_id, event_ids, data):
487483 return self.runInteraction(
7474
7575 info = yield self.get_user_by_id(user_id)
7676 if not info:
77 defer.returnValue(False)
77 return False
7878
7979 now = self.clock.time_msec()
8080 trial_duration_ms = self.config.mau_trial_days * 24 * 60 * 60 * 1000
8181 is_trial = (now - info["creation_ts"] * 1000) < trial_duration_ms
82 defer.returnValue(is_trial)
82 return is_trial
8383
8484 @cached()
8585 def get_user_by_access_token(self, token):
114114 allow_none=True,
115115 desc="get_expiration_ts_for_user",
116116 )
117 defer.returnValue(res)
117 return res
118118
119119 @defer.inlineCallbacks
120120 def set_account_validity_for_user(
189189 desc="get_user_from_renewal_token",
190190 )
191191
192 defer.returnValue(res)
192 return res
193193
194194 @defer.inlineCallbacks
195195 def get_renewal_token_for_user(self, user_id):
208208 desc="get_renewal_token_for_user",
209209 )
210210
211 defer.returnValue(res)
211 return res
212212
213213 @defer.inlineCallbacks
214214 def get_users_expiring_soon(self):
236236 self.config.account_validity.renew_at,
237237 )
238238
239 defer.returnValue(res)
239 return res
240240
241241 @defer.inlineCallbacks
242242 def set_renewal_mail_status(self, user_id, email_sent):
279279 desc="is_server_admin",
280280 )
281281
282 defer.returnValue(res if res else False)
282 return res if res else False
283283
284284 def _query_for_auth(self, txn, token):
285285 sql = (
310310 res = yield self.runInteraction(
311311 "is_support_user", self.is_support_user_txn, user_id
312312 )
313 defer.returnValue(res)
313 return res
314314
315315 def is_support_user_txn(self, txn, user_id):
316316 res = self._simple_select_one_onecol_txn(
348348 return 0
349349
350350 ret = yield self.runInteraction("count_users", _count_users)
351 defer.returnValue(ret)
351 return ret
352352
353353 def count_daily_user_type(self):
354354 """
394394 return count
395395
396396 ret = yield self.runInteraction("count_users", _count_users)
397 defer.returnValue(ret)
397 return ret
398398
399399 @defer.inlineCallbacks
400400 def find_next_generated_user_id_localpart(self):
424424 if i not in found:
425425 return i
426426
427 defer.returnValue(
427 return (
428428 (
429429 yield self.runInteraction(
430430 "find_next_generated_user_id", _find_next_generated_user_id
446446 user_id = yield self.runInteraction(
447447 "get_user_id_by_threepid", self.get_user_id_by_threepid_txn, medium, address
448448 )
449 defer.returnValue(user_id)
449 return user_id
450450
451451 def get_user_id_by_threepid_txn(self, txn, medium, address):
452452 """Returns user id from threepid
486486 ["medium", "address", "validated_at", "added_at"],
487487 "user_get_threepids",
488488 )
489 defer.returnValue(ret)
489 return ret
490490
491491 def user_delete_threepid(self, user_id, medium, address):
492492 return self._simple_delete(
567567 retcol="id_server",
568568 desc="get_id_servers_user_bound",
569569 )
570
571 @cachedInlineCallbacks()
572 def get_user_deactivated_status(self, user_id):
573 """Retrieve the value for the `deactivated` property for the provided user.
574
575 Args:
576 user_id (str): The ID of the user to retrieve the status for.
577
578 Returns:
579 defer.Deferred(bool): The requested value.
580 """
581
582 res = yield self._simple_select_one_onecol(
583 table="users",
584 keyvalues={"name": user_id},
585 retcol="deactivated",
586 desc="get_user_deactivated_status",
587 )
588
589 # Convert the integer into a boolean.
590 return res == 1
570591
571592
572593 class RegistrationStore(
676697 if end:
677698 yield self._end_background_update("users_set_deactivated_flag")
678699
679 defer.returnValue(batch_size)
700 return batch_size
680701
681702 @defer.inlineCallbacks
682703 def add_access_token_to_user(self, user_id, token, device_id, valid_until_ms):
956977 desc="is_guest",
957978 )
958979
959 defer.returnValue(res if res else False)
980 return res if res else False
960981
961982 def add_user_pending_deactivation(self, user_id):
962983 """
10231044
10241045 yield self._end_background_update("user_threepids_grandfather")
10251046
1026 defer.returnValue(1)
1047 return 1
10271048
10281049 def get_threepid_validation_session(
10291050 self, medium, client_secret, address=None, sid=None, validated=True
13161337 user_id,
13171338 deactivated,
13181339 )
1319
1320 @cachedInlineCallbacks()
1321 def get_user_deactivated_status(self, user_id):
1322 """Retrieve the value for the `deactivated` property for the provided user.
1323
1324 Args:
1325 user_id (str): The ID of the user to retrieve the status for.
1326
1327 Returns:
1328 defer.Deferred(bool): The requested value.
1329 """
1330
1331 res = yield self._simple_select_one_onecol(
1332 table="users",
1333 keyvalues={"name": user_id},
1334 retcol="deactivated",
1335 desc="get_user_deactivated_status",
1336 )
1337
1338 # Convert the integer into a boolean.
1339 defer.returnValue(res == 1)
1515 import logging
1616
1717 import attr
18
19 from twisted.internet import defer
2018
2119 from synapse.api.constants import RelationTypes
2220 from synapse.api.errors import SynapseError
362360 return
363361
364362 edit_event = yield self.get_event(edit_id, allow_none=True)
365 defer.returnValue(edit_event)
363 return edit_event
366364
367365 def has_user_annotated_event(self, parent_id, event_type, aggregation_key, sender):
368366 """Check if a user has already annotated an event with the same key
192192 )
193193
194194 if row:
195 defer.returnValue(
196 RatelimitOverride(
197 messages_per_second=row["messages_per_second"],
198 burst_count=row["burst_count"],
199 )
195 return RatelimitOverride(
196 messages_per_second=row["messages_per_second"],
197 burst_count=row["burst_count"],
200198 )
201199 else:
202 defer.returnValue(None)
200 return None
203201
204202
205203 class RoomStore(RoomWorkerStore, SearchStore):
2323 from twisted.internet import defer
2424
2525 from synapse.api.constants import EventTypes, Membership
26 from synapse.metrics.background_process_metrics import run_as_background_process
27 from synapse.storage._base import LoggingTransaction
2628 from synapse.storage.events_worker import EventsWorkerStore
2729 from synapse.types import get_domain_from_id
2830 from synapse.util.async_helpers import Linearizer
5254 MemberSummary = namedtuple("MemberSummary", ("members", "count"))
5355
5456 _MEMBERSHIP_PROFILE_UPDATE_NAME = "room_membership_profile_update"
57 _CURRENT_STATE_MEMBERSHIP_UPDATE_NAME = "current_state_events_membership"
5558
5659
5760 class RoomMemberWorkerStore(EventsWorkerStore):
61 def __init__(self, db_conn, hs):
62 super(RoomMemberWorkerStore, self).__init__(db_conn, hs)
63
64 # Is the current_state_events.membership up to date? Or is the
65 # background update still running?
66 self._current_state_events_membership_up_to_date = False
67
68 txn = LoggingTransaction(
69 db_conn.cursor(),
70 name="_check_safe_current_state_events_membership_updated",
71 database_engine=self.database_engine,
72 )
73 self._check_safe_current_state_events_membership_updated_txn(txn)
74 txn.close()
75
76 def _check_safe_current_state_events_membership_updated_txn(self, txn):
77 """Checks if it is safe to assume the new current_state_events
78 membership column is up to date
79 """
80
81 pending_update = self._simple_select_one_txn(
82 txn,
83 table="background_updates",
84 keyvalues={"update_name": _CURRENT_STATE_MEMBERSHIP_UPDATE_NAME},
85 retcols=["update_name"],
86 allow_none=True,
87 )
88
89 self._current_state_events_membership_up_to_date = not pending_update
90
 91 # If the update is still running, reschedule this check to run again later.
92 if pending_update:
93 self._clock.call_later(
94 15.0,
95 run_as_background_process,
96 "_check_safe_current_state_events_membership_updated",
97 self.runInteraction,
98 "_check_safe_current_state_events_membership_updated",
99 self._check_safe_current_state_events_membership_updated_txn,
100 )
101
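The startup probe above decides whether the new `membership` column on `current_state_events` can be trusted: it is safe only once the corresponding row has disappeared from `background_updates`. A simplified sketch of that check against an in-memory SQLite database (the table contents here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE background_updates (update_name TEXT, progress_json TEXT)")

def membership_column_up_to_date(conn):
    # Mirrors _check_safe_current_state_events_membership_updated_txn: the new
    # column is only trustworthy once its background update no longer appears
    # in the background_updates table.
    row = conn.execute(
        "SELECT update_name FROM background_updates WHERE update_name = ?",
        ("current_state_events_membership",),
    ).fetchone()
    return row is None

conn.execute(
    "INSERT INTO background_updates VALUES ('current_state_events_membership', '{}')"
)
print(membership_column_up_to_date(conn))  # -> False
conn.execute(
    "DELETE FROM background_updates WHERE update_name = 'current_state_events_membership'"
)
print(membership_column_up_to_date(conn))  # -> True
```

In the real code the check re-arms itself every 15 seconds via `call_later` until the update finishes, flipping the fast-path flag used by the queries below.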
58102 @cachedInlineCallbacks(max_entries=100000, iterable=True, cache_context=True)
59103 def get_hosts_in_room(self, room_id, cache_context):
60104 """Returns the set of all hosts currently in the room
63107 room_id, on_invalidate=cache_context.invalidate
64108 )
65109 hosts = frozenset(get_domain_from_id(user_id) for user_id in user_ids)
66 defer.returnValue(hosts)
110 return hosts
67111
68112 @cached(max_entries=100000, iterable=True)
69113 def get_users_in_room(self, room_id):
70114 def f(txn):
71 sql = (
72 "SELECT m.user_id FROM room_memberships as m"
73 " INNER JOIN current_state_events as c"
74 " ON m.event_id = c.event_id "
75 " AND m.room_id = c.room_id "
76 " AND m.user_id = c.state_key"
77 " WHERE c.type = 'm.room.member' AND c.room_id = ? AND m.membership = ?"
78 )
115 # If we can assume current_state_events.membership is up to date
116 # then we can avoid a join, which is a Very Good Thing given how
117 # frequently this function gets called.
118 if self._current_state_events_membership_up_to_date:
119 sql = """
120 SELECT state_key FROM current_state_events
121 WHERE type = 'm.room.member' AND room_id = ? AND membership = ?
122 """
123 else:
124 sql = """
125 SELECT state_key FROM room_memberships as m
126 INNER JOIN current_state_events as c
127 ON m.event_id = c.event_id
128 AND m.room_id = c.room_id
129 AND m.user_id = c.state_key
130 WHERE c.type = 'm.room.member' AND c.room_id = ? AND m.membership = ?
131 """
79132
80133 txn.execute(sql, (room_id, Membership.JOIN))
81134 return [to_ascii(r[0]) for r in txn]
97150 # first get counts.
98151 # We do this all in one transaction to keep the cache small.
99152 # FIXME: get rid of this when we have room_stats
100 sql = """
101 SELECT count(*), m.membership FROM room_memberships as m
102 INNER JOIN current_state_events as c
103 ON m.event_id = c.event_id
104 AND m.room_id = c.room_id
105 AND m.user_id = c.state_key
106 WHERE c.type = 'm.room.member' AND c.room_id = ?
107 GROUP BY m.membership
108 """
153
154 # If we can assume current_state_events.membership is up to date
155 # then we can avoid a join, which is a Very Good Thing given how
156 # frequently this function gets called.
157 if self._current_state_events_membership_up_to_date:
 158 # Note that rejected events will have a null membership field, so
 159 # we manually filter them out.
160 sql = """
161 SELECT count(*), membership FROM current_state_events
162 WHERE type = 'm.room.member' AND room_id = ?
163 AND membership IS NOT NULL
164 GROUP BY membership
165 """
166 else:
167 sql = """
168 SELECT count(*), m.membership FROM room_memberships as m
169 INNER JOIN current_state_events as c
170 ON m.event_id = c.event_id
171 AND m.room_id = c.room_id
172 AND m.user_id = c.state_key
173 WHERE c.type = 'm.room.member' AND c.room_id = ?
174 GROUP BY m.membership
175 """
109176
110177 txn.execute(sql, (room_id,))
111178 res = {}
114181
115182 # we order by membership and then fairly arbitrarily by event_id so
116183 # heroes are consistent
117 sql = """
118 SELECT m.user_id, m.membership, m.event_id
119 FROM room_memberships as m
120 INNER JOIN current_state_events as c
121 ON m.event_id = c.event_id
122 AND m.room_id = c.room_id
123 AND m.user_id = c.state_key
124 WHERE c.type = 'm.room.member' AND c.room_id = ?
125 ORDER BY
126 CASE m.membership WHEN ? THEN 1 WHEN ? THEN 2 ELSE 3 END ASC,
127 m.event_id ASC
128 LIMIT ?
129 """
184 if self._current_state_events_membership_up_to_date:
 185 # Note that rejected events will have a null membership field, so
 186 # we manually filter them out.
187 sql = """
188 SELECT state_key, membership, event_id
189 FROM current_state_events
190 WHERE type = 'm.room.member' AND room_id = ?
191 AND membership IS NOT NULL
192 ORDER BY
193 CASE membership WHEN ? THEN 1 WHEN ? THEN 2 ELSE 3 END ASC,
194 event_id ASC
195 LIMIT ?
196 """
197 else:
198 sql = """
199 SELECT c.state_key, m.membership, c.event_id
200 FROM room_memberships as m
201 INNER JOIN current_state_events as c USING (room_id, event_id)
202 WHERE c.type = 'm.room.member' AND c.room_id = ?
203 ORDER BY
204 CASE m.membership WHEN ? THEN 1 WHEN ? THEN 2 ELSE 3 END ASC,
205 c.event_id ASC
206 LIMIT ?
207 """
130208
131209 # 6 is 5 (number of heroes) plus 1, in case one of them is the calling user.
132210 txn.execute(sql, (room_id, Membership.JOIN, Membership.INVITE, 6))
188266 invites = yield self.get_invited_rooms_for_user(user_id)
189267 for invite in invites:
190268 if invite.room_id == room_id:
191 defer.returnValue(invite)
192 defer.returnValue(None)
193
269 return invite
270 return None
271
272 @defer.inlineCallbacks
194273 def get_rooms_for_user_where_membership_is(self, user_id, membership_list):
195274 """ Get all the rooms for this user where the membership for this user
196275 matches one in the membership list.
276
277 Filters out forgotten rooms.
197278
198279 Args:
199280 user_id (str): The user ID.
200281 membership_list (list): A list of synapse.api.constants.Membership
201282 values which the user must be in.
283
202284 Returns:
203 A list of dictionary objects, with room_id, membership and sender
204 defined.
285 Deferred[list[RoomsForUser]]
205286 """
206287 if not membership_list:
207288 return defer.succeed(None)
208289
209 return self.runInteraction(
290 rooms = yield self.runInteraction(
210291 "get_rooms_for_user_where_membership_is",
211292 self._get_rooms_for_user_where_membership_is_txn,
212293 user_id,
213294 membership_list,
214295 )
215296
297 # Now we filter out forgotten rooms
298 forgotten_rooms = yield self.get_forgotten_rooms_for_user(user_id)
299 return [room for room in rooms if room.room_id not in forgotten_rooms]
300
216301 def _get_rooms_for_user_where_membership_is_txn(
217302 self, txn, user_id, membership_list
218303 ):
222307
223308 results = []
224309 if membership_list:
225 where_clause = "user_id = ? AND (%s) AND forgotten = 0" % (
226 " OR ".join(["membership = ?" for _ in membership_list]),
227 )
228
229 args = [user_id]
230 args.extend(membership_list)
231
232 sql = (
233 "SELECT m.room_id, m.sender, m.membership, m.event_id, e.stream_ordering"
234 " FROM current_state_events as c"
235 " INNER JOIN room_memberships as m"
236 " ON m.event_id = c.event_id"
237 " INNER JOIN events as e"
238 " ON e.event_id = c.event_id"
239 " AND m.room_id = c.room_id"
240 " AND m.user_id = c.state_key"
241 " WHERE c.type = 'm.room.member' AND %s"
242 ) % (where_clause,)
243
244 txn.execute(sql, args)
310 if self._current_state_events_membership_up_to_date:
311 sql = """
312 SELECT room_id, e.sender, c.membership, event_id, e.stream_ordering
313 FROM current_state_events AS c
314 INNER JOIN events AS e USING (room_id, event_id)
315 WHERE
316 c.type = 'm.room.member'
317 AND state_key = ?
318 AND c.membership IN (%s)
319 """ % (
320 ",".join("?" * len(membership_list))
321 )
322 else:
323 sql = """
324 SELECT room_id, e.sender, m.membership, event_id, e.stream_ordering
325 FROM current_state_events AS c
326 INNER JOIN room_memberships AS m USING (room_id, event_id)
327 INNER JOIN events AS e USING (room_id, event_id)
328 WHERE
329 c.type = 'm.room.member'
330 AND state_key = ?
331 AND m.membership IN (%s)
332 """ % (
333 ",".join("?" * len(membership_list))
334 )
335
336 txn.execute(sql, (user_id, *membership_list))
245337 results = [RoomsForUser(**r) for r in self.cursor_to_dict(txn)]
246338
247339 if do_invite:
282374 rooms = yield self.get_rooms_for_user_where_membership_is(
283375 user_id, membership_list=[Membership.JOIN]
284376 )
285 defer.returnValue(
286 frozenset(
287 GetRoomsForUserWithStreamOrdering(r.room_id, r.stream_ordering)
288 for r in rooms
289 )
377 return frozenset(
378 GetRoomsForUserWithStreamOrdering(r.room_id, r.stream_ordering)
379 for r in rooms
290380 )
291381
292382 @defer.inlineCallbacks
296386 rooms = yield self.get_rooms_for_user_with_stream_ordering(
297387 user_id, on_invalidate=on_invalidate
298388 )
299 defer.returnValue(frozenset(r.room_id for r in rooms))
389 return frozenset(r.room_id for r in rooms)
300390
301391 @cachedInlineCallbacks(max_entries=500000, cache_context=True, iterable=True)
302392 def get_users_who_share_room_with_user(self, user_id, cache_context):
313403 )
314404 user_who_share_room.update(user_ids)
315405
316 defer.returnValue(user_who_share_room)
406 return user_who_share_room
317407
318408 @defer.inlineCallbacks
319409 def get_joined_users_from_context(self, event, context):
329419 result = yield self._get_joined_users_from_context(
330420 event.room_id, state_group, current_state_ids, event=event, context=context
331421 )
332 defer.returnValue(result)
422 return result
333423
334424 def get_joined_users_from_state(self, room_id, state_entry):
335425 state_group = state_entry.state_group
443533 avatar_url=to_ascii(event.content.get("avatar_url", None)),
444534 )
445535
446 defer.returnValue(users_in_room)
536 return users_in_room
447537
448538 @cachedInlineCallbacks(max_entries=10000)
449539 def is_host_joined(self, room_id, host):
452542
453543 sql = """
454544 SELECT state_key FROM current_state_events AS c
455 INNER JOIN room_memberships USING (event_id)
456 WHERE membership = 'join'
545 INNER JOIN room_memberships AS m USING (event_id)
546 WHERE m.membership = 'join'
457547 AND type = 'm.room.member'
458548 AND c.room_id = ?
459549 AND state_key LIKE ?
468558 rows = yield self._execute("is_host_joined", None, sql, room_id, like_clause)
469559
470560 if not rows:
471 defer.returnValue(False)
561 return False
472562
473563 user_id = rows[0][0]
474564 if get_domain_from_id(user_id) != host:
475565 # This can only happen if the host name has something funky in it
476566 raise Exception("Invalid host name")
477567
478 defer.returnValue(True)
568 return True
479569
480570 @cachedInlineCallbacks()
481571 def was_host_joined(self, room_id, host):
508598 rows = yield self._execute("was_host_joined", None, sql, room_id, like_clause)
509599
510600 if not rows:
511 defer.returnValue(False)
601 return False
512602
513603 user_id = rows[0][0]
514604 if get_domain_from_id(user_id) != host:
515605 # This can only happen if the host name has something funky in it
516606 raise Exception("Invalid host name")
517607
518 defer.returnValue(True)
608 return True
519609
520610 def get_joined_hosts(self, room_id, state_entry):
521611 state_group = state_entry.state_group
542632 cache = self._get_joined_hosts_cache(room_id)
543633 joined_hosts = yield cache.get_destinations(state_entry)
544634
545 defer.returnValue(joined_hosts)
635 return joined_hosts
546636
547637 @cached(max_entries=10000)
548638 def _get_joined_hosts_cache(self, room_id):
572662 return rows[0][0]
573663
574664 count = yield self.runInteraction("did_forget_membership", f)
575 defer.returnValue(count == 0)
665 return count == 0
666
667 @cached()
668 def get_forgotten_rooms_for_user(self, user_id):
669 """Gets all rooms the user has forgotten.
670
671 Args:
672 user_id (str)
673
674 Returns:
675 Deferred[set[str]]
676 """
677
678 def _get_forgotten_rooms_for_user_txn(txn):
679 # This is a slightly convoluted query that first looks up all rooms
680 # that the user has forgotten in the past, then rechecks that list
681 # to see if any have subsequently been updated. This is done so that
682 # we can use a partial index on `forgotten = 1` on the assumption
683 # that few users will actually forget many rooms.
684 #
685 # Note that a room is considered "forgotten" if *all* membership
686 # events for that user and room have the forgotten field set (as
687 # when a user forgets a room we update all rows for that user and
688 # room, not just the current one).
689 sql = """
690 SELECT room_id, (
691 SELECT count(*) FROM room_memberships
692 WHERE room_id = m.room_id AND user_id = m.user_id AND forgotten = 0
693 ) AS count
694 FROM room_memberships AS m
695 WHERE user_id = ? AND forgotten = 1
696 GROUP BY room_id, user_id;
697 """
698 txn.execute(sql, (user_id,))
699 return set(row[0] for row in txn if row[1] == 0)
700
701 return self.runInteraction(
702 "get_forgotten_rooms_for_user", _get_forgotten_rooms_for_user_txn
703 )
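The comment in `_get_forgotten_rooms_for_user_txn` explains the two-step shape of the query: start from `forgotten = 1` rows (cheap, thanks to the partial index added elsewhere in this diff), then recheck that no `forgotten = 0` row for the same room survives. A runnable sketch with a cut-down schema and made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE room_memberships (room_id TEXT, user_id TEXT, forgotten INTEGER)"
)
# Room !a: every membership row for the user is forgotten -> counts as forgotten.
# Room !b: the user forgot once but later rejoined -> a forgotten = 0 row remains.
conn.executemany(
    "INSERT INTO room_memberships VALUES (?, ?, ?)",
    [("!a", "@u:hs", 1), ("!a", "@u:hs", 1), ("!b", "@u:hs", 1), ("!b", "@u:hs", 0)],
)

# The query from the patch: a correlated subquery counts any remaining
# forgotten = 0 rows for each candidate room.
sql = """
    SELECT room_id, (
        SELECT count(*) FROM room_memberships
        WHERE room_id = m.room_id AND user_id = m.user_id AND forgotten = 0
    ) AS count
    FROM room_memberships AS m
    WHERE user_id = ? AND forgotten = 1
    GROUP BY room_id, user_id
"""
forgotten = set(row[0] for row in conn.execute(sql, ("@u:hs",)) if row[1] == 0)
print(forgotten)  # -> {'!a'}
```

The filtering on `count == 0` happens in Python, matching the `set(row[0] for row in txn if row[1] == 0)` line in the patch.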
576704
577705 @defer.inlineCallbacks
578706 def get_rooms_user_has_been_in(self, user_id):
600728 super(RoomMemberStore, self).__init__(db_conn, hs)
601729 self.register_background_update_handler(
602730 _MEMBERSHIP_PROFILE_UPDATE_NAME, self._background_add_membership_profile
731 )
732 self.register_background_update_handler(
733 _CURRENT_STATE_MEMBERSHIP_UPDATE_NAME,
734 self._background_current_state_membership,
735 )
736 self.register_background_index_update(
737 "room_membership_forgotten_idx",
738 index_name="room_memberships_user_room_forgotten",
739 table="room_memberships",
740 columns=["user_id", "room_id"],
741 where_clause="forgotten = 1",
603742 )
604743
605744 def _store_room_members_txn(self, txn, events, backfilled):
702841 txn.execute(sql, (user_id, room_id))
703842
704843 self._invalidate_cache_and_stream(txn, self.did_forget, (user_id, room_id))
844 self._invalidate_cache_and_stream(
845 txn, self.get_forgotten_rooms_for_user, (user_id,)
846 )
705847
706848 return self.runInteraction("forget_membership", f)
707849
778920 if not result:
779921 yield self._end_background_update(_MEMBERSHIP_PROFILE_UPDATE_NAME)
780922
781 defer.returnValue(result)
923 return result
924
925 @defer.inlineCallbacks
926 def _background_current_state_membership(self, progress, batch_size):
927 """Update the new membership column on current_state_events.
928
 929 This works by iterating over all rooms in alphabetical order.
930 """
931
932 def _background_current_state_membership_txn(txn, last_processed_room):
933 processed = 0
934 while processed < batch_size:
935 txn.execute(
936 """
937 SELECT MIN(room_id) FROM current_state_events WHERE room_id > ?
938 """,
939 (last_processed_room,),
940 )
941 row = txn.fetchone()
942 if not row or not row[0]:
943 return processed, True
944
945 next_room, = row
946
947 sql = """
948 UPDATE current_state_events
949 SET membership = (
950 SELECT membership FROM room_memberships
951 WHERE event_id = current_state_events.event_id
952 )
953 WHERE room_id = ?
954 """
955 txn.execute(sql, (next_room,))
956 processed += txn.rowcount
957
958 last_processed_room = next_room
959
960 self._background_update_progress_txn(
961 txn,
962 _CURRENT_STATE_MEMBERSHIP_UPDATE_NAME,
963 {"last_processed_room": last_processed_room},
964 )
965
966 return processed, False
967
968 # If we haven't got a last processed room then just use the empty
969 # string, which will compare before all room IDs correctly.
970 last_processed_room = progress.get("last_processed_room", "")
971
972 row_count, finished = yield self.runInteraction(
973 "_background_current_state_membership_update",
974 _background_current_state_membership_txn,
975 last_processed_room,
976 )
977
978 if finished:
979 yield self._end_background_update(_CURRENT_STATE_MEMBERSHIP_UPDATE_NAME)
980
981 return row_count
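`_background_current_state_membership` walks rooms one at a time with `SELECT MIN(room_id) ... WHERE room_id > ?`, so progress can be checkpointed as a single room ID between batches. A self-contained sketch of that iteration pattern (toy table, SQLite in memory):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE current_state_events (room_id TEXT, event_id TEXT)")
conn.executemany(
    "INSERT INTO current_state_events VALUES (?, ?)",
    [("!a", "$1"), ("!a", "$2"), ("!b", "$3"), ("!c", "$4")],
)

def next_room(last_processed_room):
    # Same idea as the patch: MIN(room_id) strictly greater than the last
    # checkpoint yields each distinct room exactly once, in order.
    row = conn.execute(
        "SELECT MIN(room_id) FROM current_state_events WHERE room_id > ?",
        (last_processed_room,),
    ).fetchone()
    return row[0]

processed = []
last = ""  # "" sorts before every room ID, matching the patch's starting value
while True:
    room = next_room(last)
    if room is None:
        break
    processed.append(room)
    last = room
print(processed)  # -> ['!a', '!b', '!c']
```

Because only the last room ID is persisted as progress, the update survives restarts and never reprocesses a completed room.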
782982
783983
784984 class _JoinedHostsCache(object):
8061006 state_entry(synapse.state._StateCacheEntry)
8071007 """
8081008 if state_entry.state_group == self.state_group:
809 defer.returnValue(frozenset(self.hosts_to_joined_users))
1009 return frozenset(self.hosts_to_joined_users)
8101010
8111011 with (yield self.linearizer.queue(())):
8121012 if state_entry.state_group == self.state_group:
8431043 else:
8441044 self.state_group = object()
8451045 self._len = sum(len(v) for v in itervalues(self.hosts_to_joined_users))
846 defer.returnValue(frozenset(self.hosts_to_joined_users))
1046 return frozenset(self.hosts_to_joined_users)
8471047
8481048 def __len__(self):
8491049 return self._len
0 /* Copyright 2019 The Matrix.org Foundation C.I.C.
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 -- We add membership to current state so that we don't need to join against
16 -- room_memberships, which can be surprisingly costly (we do such queries
17 -- very frequently).
 18 -- This will be null for non-membership events, and the value of the
 19 -- content.membership key for membership events. (It will also be null for
 20 -- membership events until the background update job has finished.)
21 ALTER TABLE current_state_events ADD membership TEXT;
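This migration denormalizes membership onto `current_state_events` so the hot read paths can drop the join against `room_memberships`. A sketch of the backfill-then-read sequence using the same SQL shapes as the patch, against a cut-down in-memory schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE current_state_events (
        room_id TEXT, type TEXT, state_key TEXT, event_id TEXT,
        membership TEXT  -- the column added by this migration
    );
    CREATE TABLE room_memberships (
        event_id TEXT, room_id TEXT, user_id TEXT, membership TEXT
    );
""")
conn.execute(
    "INSERT INTO current_state_events VALUES ('!r', 'm.room.member', '@a:hs', '$e1', NULL)"
)
conn.execute("INSERT INTO room_memberships VALUES ('$e1', '!r', '@a:hs', 'join')")

# Backfill, as the background update does one room at a time:
conn.execute("""
    UPDATE current_state_events
    SET membership = (
        SELECT membership FROM room_memberships
        WHERE event_id = current_state_events.event_id
    )
    WHERE room_id = ?
""", ("!r",))

# The join-free read path used once the background update has completed:
rows = conn.execute(
    "SELECT state_key FROM current_state_events "
    "WHERE type = 'm.room.member' AND room_id = ? AND membership = 'join'",
    ("!r",),
).fetchall()
print(rows)  # -> [('@a:hs',)]
```

Until the backfill finishes, the column stays NULL, which is why the worker store keeps the old join-based queries behind the `_current_state_events_membership_up_to_date` flag.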
0 /* Copyright 2019 The Matrix.org Foundation C.I.C.
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 -- We add membership to current state so that we don't need to join against
16 -- room_memberships, which can be surprisingly costly (we do such queries
17 -- very frequently).
 18 -- This will be null for non-membership events, and the value of the
 19 -- content.membership key for membership events. (It will also be null for
 20 -- membership events until the background update job has finished.)
21
22 INSERT INTO background_updates (update_name, progress_json) VALUES
23 ('current_state_events_membership', '{}');
0 /* Copyright 2019 The Matrix.org Foundation C.I.C.
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 -- Adds an index on room_memberships for fetching all forgotten rooms for a user
16 INSERT INTO background_updates (update_name, progress_json) VALUES
17 ('room_membership_forgotten_idx', '{}');
165165 if not result:
166166 yield self._end_background_update(self.EVENT_SEARCH_UPDATE_NAME)
167167
168 defer.returnValue(result)
168 return result
169169
170170 @defer.inlineCallbacks
171171 def _background_reindex_gin_search(self, progress, batch_size):
208208 yield self.runWithConnection(create_index)
209209
210210 yield self._end_background_update(self.EVENT_SEARCH_USE_GIN_POSTGRES_NAME)
211 defer.returnValue(1)
211 return 1
212212
213213 @defer.inlineCallbacks
214214 def _background_reindex_search_order(self, progress, batch_size):
286286 if not finished:
287287 yield self._end_background_update(self.EVENT_SEARCH_ORDER_UPDATE_NAME)
288288
289 defer.returnValue(num_rows)
289 return num_rows
290290
291291 def store_event_search_txn(self, txn, event, key, value):
292292 """Add event to the search table
453453
454454 count = sum(row["count"] for row in count_results if row["room_id"] in room_ids)
455455
456 defer.returnValue(
457 {
458 "results": [
459 {"event": event_map[r["event_id"]], "rank": r["rank"]}
460 for r in results
461 if r["event_id"] in event_map
462 ],
463 "highlights": highlights,
464 "count": count,
465 }
466 )
456 return {
457 "results": [
458 {"event": event_map[r["event_id"]], "rank": r["rank"]}
459 for r in results
460 if r["event_id"] in event_map
461 ],
462 "highlights": highlights,
463 "count": count,
464 }
467465
468466 @defer.inlineCallbacks
469467 def search_rooms(self, room_ids, search_term, keys, limit, pagination_token=None):
598596
599597 count = sum(row["count"] for row in count_results if row["room_id"] in room_ids)
600598
601 defer.returnValue(
602 {
603 "results": [
604 {
605 "event": event_map[r["event_id"]],
606 "rank": r["rank"],
607 "pagination_token": "%s,%s"
608 % (r["origin_server_ts"], r["stream_ordering"]),
609 }
610 for r in results
611 if r["event_id"] in event_map
612 ],
613 "highlights": highlights,
614 "count": count,
615 }
616 )
599 return {
600 "results": [
601 {
602 "event": event_map[r["event_id"]],
603 "rank": r["rank"],
604 "pagination_token": "%s,%s"
605 % (r["origin_server_ts"], r["stream_ordering"]),
606 }
607 for r in results
608 if r["event_id"] in event_map
609 ],
610 "highlights": highlights,
611 "count": count,
612 }
617613
618614 def _find_highlights_in_postgres(self, search_query, events):
619615 """Given a list of events and a search term, return a list of words
5858 for e_id, h in hashes.items()
5959 }
6060
61 defer.returnValue(list(hashes.items()))
61 return list(hashes.items())
6262
6363 def _get_event_reference_hashes_txn(self, txn, event_id):
6464 """Get all the hashes for a given PDU.
421421
422422 # Retrieve the room's create event
423423 create_event = yield self.get_create_event_for_room(room_id)
424 defer.returnValue(create_event.content.get("room_version", "1"))
424 return create_event.content.get("room_version", "1")
425425
426426 @defer.inlineCallbacks
427427 def get_room_predecessor(self, room_id):
441441 create_event = yield self.get_create_event_for_room(room_id)
442442
443443 # Return predecessor if present
444 defer.returnValue(create_event.content.get("predecessor", None))
444 return create_event.content.get("predecessor", None)
445445
446446 @defer.inlineCallbacks
447447 def get_create_event_for_room(self, room_id):
465465
466466 # Retrieve the room's create event and return
467467 create_event = yield self.get_event(create_id)
468 defer.returnValue(create_event)
468 return create_event
469469
470470 @cached(max_entries=100000, iterable=True)
471471 def get_current_state_ids(self, room_id):
509509 event ID.
510510 """
511511
512 where_clause, where_args = state_filter.make_sql_filter_clause()
513
514 if not where_clause:
515 # We delegate to the cached version
516 return self.get_current_state_ids(room_id)
517
512518 def _get_filtered_current_state_ids_txn(txn):
513519 results = {}
514520 sql = """
515521 SELECT type, state_key, event_id FROM current_state_events
516522 WHERE room_id = ?
517523 """
518
519 where_clause, where_args = state_filter.make_sql_filter_clause()
520524
521525 if where_clause:
522526 sql += " AND (%s)" % (where_clause,)
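The hunk above moves the `make_sql_filter_clause()` call out of the transaction closure so the empty-filter case can short-circuit to the cached code path before a transaction is ever scheduled. A dependency-free sketch of the same pattern, with hypothetical stand-ins (`cached_lookup`, `run_txn`) for Synapse's storage layer:

```python
calls = []

def cached_lookup(room_id):
    # Stand-in for the @cached get_current_state_ids path.
    calls.append("cached")
    return {"room": room_id, "source": "cache"}

def run_txn(sql, args):
    # Stand-in for runInteraction executing SQL.
    calls.append("txn")
    return {"sql": sql, "args": args}

def get_filtered_ids(room_id, where_clause, where_args):
    # Compute the filter *before* building the txn closure: if there is
    # nothing to filter on, delegate to the cached full lookup.
    if not where_clause:
        return cached_lookup(room_id)

    def _txn():
        sql = "SELECT event_id FROM current_state_events WHERE room_id = ?"
        sql += " AND (%s)" % (where_clause,)
        return run_txn(sql, [room_id] + where_args)

    return _txn()

empty = get_filtered_ids("!room:hs", "", [])
filtered = get_filtered_ids("!room:hs", "type = ?", ["m.room.member"])
```

The empty-filter call never touches the database path, which is the point of hoisting the check.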
558562 if not event:
559563 return
560564
561 defer.returnValue(event.content.get("canonical_alias"))
565 return event.content.get("canonical_alias")
562566
563567 @cached(max_entries=10000, iterable=True)
564568 def get_state_group_delta(self, state_group):
608612 dict of state_group_id -> (dict of (type, state_key) -> event id)
609613 """
610614 if not event_ids:
611 defer.returnValue({})
615 return {}
612616
613617 event_to_groups = yield self._get_state_group_for_events(event_ids)
614618
615619 groups = set(itervalues(event_to_groups))
616620 group_to_state = yield self._get_state_for_groups(groups)
617621
618 defer.returnValue(group_to_state)
622 return group_to_state
619623
620624 @defer.inlineCallbacks
621625 def get_state_ids_for_group(self, state_group):
629633 """
630634 group_to_state = yield self._get_state_for_groups((state_group,))
631635
632 defer.returnValue(group_to_state[state_group])
636 return group_to_state[state_group]
633637
634638 @defer.inlineCallbacks
635639 def get_state_groups(self, room_id, event_ids):
640644 dict of state_group_id -> list of state events.
641645 """
642646 if not event_ids:
643 defer.returnValue({})
647 return {}
644648
645649 group_to_ids = yield self.get_state_groups_ids(room_id, event_ids)
646650
653657 get_prev_content=False,
654658 )
655659
656 defer.returnValue(
657 {
658 group: [
659 state_event_map[v]
660 for v in itervalues(event_id_map)
661 if v in state_event_map
662 ]
663 for group, event_id_map in iteritems(group_to_ids)
664 }
665 )
660 return {
661 group: [
662 state_event_map[v]
663 for v in itervalues(event_id_map)
664 if v in state_event_map
665 ]
666 for group, event_id_map in iteritems(group_to_ids)
667 }
666668
667669 @defer.inlineCallbacks
668670 def _get_state_groups_from_groups(self, groups, state_filter):
689691 )
690692 results.update(res)
691693
692 defer.returnValue(results)
694 return results
693695
694696 def _get_state_groups_from_groups_txn(
695697 self, txn, groups, state_filter=StateFilter.all()
824826 for event_id, group in iteritems(event_to_groups)
825827 }
826828
827 defer.returnValue({event: event_to_state[event] for event in event_ids})
829 return {event: event_to_state[event] for event in event_ids}
828830
829831 @defer.inlineCallbacks
830832 def get_state_ids_for_events(self, event_ids, state_filter=StateFilter.all()):
850852 for event_id, group in iteritems(event_to_groups)
851853 }
852854
853 defer.returnValue({event: event_to_state[event] for event in event_ids})
855 return {event: event_to_state[event] for event in event_ids}
854856
855857 @defer.inlineCallbacks
856858 def get_state_for_event(self, event_id, state_filter=StateFilter.all()):
866868 A deferred dict from (type, state_key) -> state_event
867869 """
868870 state_map = yield self.get_state_for_events([event_id], state_filter)
869 defer.returnValue(state_map[event_id])
871 return state_map[event_id]
870872
871873 @defer.inlineCallbacks
872874 def get_state_ids_for_event(self, event_id, state_filter=StateFilter.all()):
882884 A deferred dict from (type, state_key) -> state_event
883885 """
884886 state_map = yield self.get_state_ids_for_events([event_id], state_filter)
885 defer.returnValue(state_map[event_id])
887 return state_map[event_id]
886888
887889 @cached(max_entries=50000)
888890 def _get_state_group_for_event(self, event_id):
912914 desc="_get_state_group_for_events",
913915 )
914916
915 defer.returnValue({row["event_id"]: row["state_group"] for row in rows})
917 return {row["event_id"]: row["state_group"] for row in rows}
916918
917919 def _get_state_for_group_using_cache(self, cache, group, state_filter):
918920 """Checks if group is in cache. See `_get_state_for_groups`
992994 incomplete_groups = incomplete_groups_m | incomplete_groups_nm
993995
994996 if not incomplete_groups:
995 defer.returnValue(state)
997 return state
996998
997999 cache_sequence_nm = self._state_group_cache.sequence
9981000 cache_sequence_m = self._state_group_members_cache.sequence
10191021 # everything we need from the database anyway.
10201022 state[group] = state_filter.filter_state(group_state_dict)
10211023
1022 defer.returnValue(state)
1024 return state
10231025
10241026 def _get_state_for_groups_using_cache(self, groups, cache, state_filter):
10251027 """Gets the state at each of a list of state groups, optionally
14931495 self.STATE_GROUP_DEDUPLICATION_UPDATE_NAME
14941496 )
14951497
1496 defer.returnValue(result * BATCH_SIZE_SCALE_FACTOR)
1498 return result * BATCH_SIZE_SCALE_FACTOR
14971499
14981500 @defer.inlineCallbacks
14991501 def _background_index_state(self, progress, batch_size):
15231525
15241526 yield self._end_background_update(self.STATE_GROUP_INDEX_UPDATE_NAME)
15251527
1526 defer.returnValue(1)
1528 return 1
6565
6666 if not self.stats_enabled:
6767 yield self._end_background_update("populate_stats_createtables")
68 defer.returnValue(1)
68 return 1
6969
7070 # Get all the rooms that we want to process.
7171 def _make_staging_area(txn):
119119 self.get_earliest_token_for_room_stats.invalidate_all()
120120
121121 yield self._end_background_update("populate_stats_createtables")
122 defer.returnValue(1)
122 return 1
123123
124124 @defer.inlineCallbacks
125125 def _populate_stats_cleanup(self, progress, batch_size):
128128 """
129129 if not self.stats_enabled:
130130 yield self._end_background_update("populate_stats_cleanup")
131 defer.returnValue(1)
131 return 1
132132
133133 position = yield self._simple_select_one_onecol(
134134 TEMP_TABLE + "_position", None, "position"
142142 yield self.runInteraction("populate_stats_cleanup", _delete_staging_area)
143143
144144 yield self._end_background_update("populate_stats_cleanup")
145 defer.returnValue(1)
145 return 1
146146
147147 @defer.inlineCallbacks
148148 def _populate_stats_process_rooms(self, progress, batch_size):
149149
150150 if not self.stats_enabled:
151151 yield self._end_background_update("populate_stats_process_rooms")
152 defer.returnValue(1)
152 return 1
153153
154154        # If we don't have progress filled in, delete everything.
155155 if not progress:
185185 # No more rooms -- complete the transaction.
186186 if not rooms_to_work_on:
187187 yield self._end_background_update("populate_stats_process_rooms")
188 defer.returnValue(1)
188 return 1
189189
190190 logger.info(
191191 "Processing the next %d rooms of %d remaining",
210210 avatar_id = current_state_ids.get((EventTypes.RoomAvatar, ""))
211211 canonical_alias_id = current_state_ids.get((EventTypes.CanonicalAlias, ""))
212212
213 event_ids = [
214 join_rules_id,
215 history_visibility_id,
216 encryption_id,
217 name_id,
218 topic_id,
219 avatar_id,
220 canonical_alias_id,
221 ]
222
213223 state_events = yield self.get_events(
214 [
215 join_rules_id,
216 history_visibility_id,
217 encryption_id,
218 name_id,
219 topic_id,
220 avatar_id,
221 canonical_alias_id,
222 ]
224 [ev for ev in event_ids if ev is not None]
223225 )
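The change above stops passing `None` event IDs into `get_events`: a room frequently lacks, say, an avatar or topic, so several of the state-ID lookups come back `None`. A generic sketch of the same guard, with `fetch_events` as a hypothetical stand-in for the storage call:

```python
def fetch_events(event_ids, store):
    # Stand-in batch fetch; a real one would reject or choke on None keys.
    return {eid: store[eid] for eid in event_ids}

def state_events_for(current_state_ids, keys, store):
    event_ids = [current_state_ids.get(k) for k in keys]
    # Drop the absent state events before the batch lookup.
    return fetch_events([e for e in event_ids if e is not None], store)

store = {"$a": {"type": "m.room.name"}}
state = {("m.room.name", ""): "$a", ("m.room.topic", ""): None}
result = state_events_for(
    state, [("m.room.name", ""), ("m.room.topic", "")], store
)
```

Filtering at the call site keeps the fetcher's contract simple: it only ever sees real IDs.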
224226
225227 def _get_or_none(event_id, arg):
302304
303305 if processed_event_count > batch_size:
304306 # Don't process any more rooms, we've hit our batch size.
305 defer.returnValue(processed_event_count)
306
307 defer.returnValue(processed_event_count)
307 return processed_event_count
308
309 return processed_event_count
308310
309311 def delete_all_stats(self):
310312 """
299299 )
300300
301301 if not room_ids:
302 defer.returnValue({})
302 return {}
303303
304304 results = {}
305305 room_ids = list(room_ids)
322322 )
323323 results.update(dict(zip(rm_ids, res)))
324324
325 defer.returnValue(results)
325 return results
326326
327327 def get_rooms_that_changed(self, room_ids, from_key):
328328 """Given a list of rooms and a token, return rooms where there may have
363363 the chunk of events returned.
364364 """
365365 if from_key == to_key:
366 defer.returnValue(([], from_key))
366 return ([], from_key)
367367
368368 from_id = RoomStreamToken.parse_stream_token(from_key).stream
369369 to_id = RoomStreamToken.parse_stream_token(to_key).stream
373373 )
374374
375375 if not has_changed:
376 defer.returnValue(([], from_key))
376 return ([], from_key)
377377
378378 def f(txn):
379379 sql = (
406406 # get.
407407 key = from_key
408408
409 defer.returnValue((ret, key))
409 return (ret, key)
410410
411411 @defer.inlineCallbacks
412412 def get_membership_changes_for_user(self, user_id, from_key, to_key):
414414 to_id = RoomStreamToken.parse_stream_token(to_key).stream
415415
416416 if from_key == to_key:
417 defer.returnValue([])
417 return []
418418
419419 if from_id:
420420 has_changed = self._membership_stream_cache.has_entity_changed(
421421 user_id, int(from_id)
422422 )
423423 if not has_changed:
424 defer.returnValue([])
424 return []
425425
426426 def f(txn):
427427 sql = (
446446
447447 self._set_before_and_after(ret, rows, topo_order=False)
448448
449 defer.returnValue(ret)
449 return ret
450450
451451 @defer.inlineCallbacks
452452 def get_recent_events_for_room(self, room_id, limit, end_token):
476476
477477 self._set_before_and_after(events, rows)
478478
479 defer.returnValue((events, token))
479 return (events, token)
480480
481481 @defer.inlineCallbacks
482482 def get_recent_event_ids_for_room(self, room_id, limit, end_token):
495495 """
496496 # Allow a zero limit here, and no-op.
497497 if limit == 0:
498 defer.returnValue(([], end_token))
498 return ([], end_token)
499499
500500 end_token = RoomStreamToken.parse(end_token)
501501
510510 # We want to return the results in ascending order.
511511 rows.reverse()
512512
513 defer.returnValue((rows, token))
513 return (rows, token)
514514
515515 def get_room_event_after_stream_ordering(self, room_id, stream_ordering):
516516 """Gets details of the first event in a room at or after a stream ordering
548548 """
549549 token = yield self.get_room_max_stream_ordering()
550550 if room_id is None:
551 defer.returnValue("s%d" % (token,))
551 return "s%d" % (token,)
552552 else:
553553 topo = yield self.runInteraction(
554554 "_get_max_topological_txn", self._get_max_topological_txn, room_id
555555 )
556 defer.returnValue("t%d-%d" % (topo, token))
556 return "t%d-%d" % (topo, token)
557557
558558 def get_stream_token_for_event(self, event_id):
559559 """The stream token for an event
673673 [e for e in results["after"]["event_ids"]], get_prev_content=True
674674 )
675675
676 defer.returnValue(
677 {
678 "events_before": events_before,
679 "events_after": events_after,
680 "start": results["before"]["token"],
681 "end": results["after"]["token"],
682 }
683 )
676 return {
677 "events_before": events_before,
678 "events_after": events_after,
679 "start": results["before"]["token"],
680 "end": results["after"]["token"],
681 }
684682
685683 def _get_events_around_txn(
686684 self, txn, room_id, event_id, before_limit, after_limit, event_filter
784782
785783 events = yield self.get_events_as_list(event_ids)
786784
787 defer.returnValue((upper_bound, events))
785 return (upper_bound, events)
788786
789787 def get_federation_out_pos(self, typ):
790788 return self._simple_select_one_onecol(
938936
939937 self._set_before_and_after(events, rows)
940938
941 defer.returnValue((events, token))
939 return (events, token)
942940
943941
944942 class StreamStore(StreamWorkerStore):
6565 room_id string, tag string and content string.
6666 """
6767 if last_id == current_id:
68 defer.returnValue([])
68 return []
6969
7070 def get_all_updated_tags_txn(txn):
7171 sql = (
106106 )
107107 results.extend(tags)
108108
109 defer.returnValue(results)
109 return results
110110
111111 @defer.inlineCallbacks
112112 def get_updated_tags(self, user_id, stream_id):
134134 user_id, int(stream_id)
135135 )
136136 if not changed:
137 defer.returnValue({})
137 return {}
138138
139139 room_ids = yield self.runInteraction("get_updated_tags", get_updated_tags_txn)
140140
144144 for room_id in room_ids:
145145 results[room_id] = tags_by_room.get(room_id, {})
146146
147 defer.returnValue(results)
147 return results
148148
149149 def get_tags_for_room(self, user_id, room_id):
150150 """Get all the tags for the given room
193193 self.get_tags_for_user.invalidate((user_id,))
194194
195195 result = self._account_data_id_gen.get_current_token()
196 defer.returnValue(result)
196 return result
197197
198198 @defer.inlineCallbacks
199199 def remove_tag_from_room(self, user_id, room_id, tag):
216216 self.get_tags_for_user.invalidate((user_id,))
217217
218218 result = self._account_data_id_gen.get_current_token()
219 defer.returnValue(result)
219 return result
220220
221221 def _update_revision_txn(self, txn, user_id, room_id, next_id):
222222 """Update the latest revision of the tags for the given user and room.
146146
147147 result = self._destination_retry_cache.get(destination, SENTINEL)
148148 if result is not SENTINEL:
149 defer.returnValue(result)
149 return result
150150
151151 result = yield self.runInteraction(
152152 "get_destination_retry_timings",
157157 # We don't hugely care about race conditions between getting and
158158 # invalidating the cache, since we time out fairly quickly anyway.
159159 self._destination_retry_cache[destination] = result
160 defer.returnValue(result)
160 return result
161161
162162 def _get_destination_retry_timings(self, txn, destination):
163163 result = self._simple_select_one_txn(
195195 def _set_destination_retry_timings(
196196 self, txn, destination, retry_last_ts, retry_interval
197197 ):
198
199 if self.database_engine.can_native_upsert:
200 # Upsert retry time interval if retry_interval is zero (i.e. we're
201 # resetting it) or greater than the existing retry interval.
202
203 sql = """
204 INSERT INTO destinations (destination, retry_last_ts, retry_interval)
205 VALUES (?, ?, ?)
206 ON CONFLICT (destination) DO UPDATE SET
207 retry_last_ts = EXCLUDED.retry_last_ts,
208 retry_interval = EXCLUDED.retry_interval
209 WHERE
210 EXCLUDED.retry_interval = 0
211 OR destinations.retry_interval < EXCLUDED.retry_interval
212 """
213
214 txn.execute(sql, (destination, retry_last_ts, retry_interval))
215
216 return
217
198218 self.database_engine.lock_table(txn, "destinations")
199219
200220 # We need to be careful here as the data may have changed from under us
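The native-upsert branch above inserts or conditionally updates a destination's retry timings in one statement. The same conditional `ON CONFLICT ... DO UPDATE ... WHERE` shape can be exercised against SQLite (3.24+, which supports the PostgreSQL-style upsert syntax); this is a sketch of the pattern, not Synapse's storage code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE destinations "
    "(destination TEXT PRIMARY KEY, retry_last_ts INT, retry_interval INT)"
)

UPSERT = """
    INSERT INTO destinations (destination, retry_last_ts, retry_interval)
    VALUES (?, ?, ?)
    ON CONFLICT (destination) DO UPDATE SET
        retry_last_ts = excluded.retry_last_ts,
        retry_interval = excluded.retry_interval
    WHERE excluded.retry_interval = 0
       OR destinations.retry_interval < excluded.retry_interval
"""

conn.execute(UPSERT, ("example.com", 100, 30))  # fresh insert
conn.execute(UPSERT, ("example.com", 200, 10))  # smaller interval: no-op
conn.execute(UPSERT, ("example.com", 300, 60))  # larger interval: applied
row = conn.execute(
    "SELECT retry_last_ts, retry_interval FROM destinations"
).fetchone()
```

The `WHERE` on the `DO UPDATE` is what encodes the policy: only a reset (`retry_interval = 0`) or a strictly larger backoff may overwrite the stored row, so concurrent writers cannot shrink an existing backoff.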
108108 yield self._simple_insert(TEMP_TABLE + "_position", {"position": new_pos})
109109
110110 yield self._end_background_update("populate_user_directory_createtables")
111 defer.returnValue(1)
111 return 1
112112
113113 @defer.inlineCallbacks
114114 def _populate_user_directory_cleanup(self, progress, batch_size):
130130 )
131131
132132 yield self._end_background_update("populate_user_directory_cleanup")
133 defer.returnValue(1)
133 return 1
134134
135135 @defer.inlineCallbacks
136136 def _populate_user_directory_process_rooms(self, progress, batch_size):
176176 # No more rooms -- complete the transaction.
177177 if not rooms_to_work_on:
178178 yield self._end_background_update("populate_user_directory_process_rooms")
179 defer.returnValue(1)
179 return 1
180180
181181 logger.info(
182182 "Processing the next %d rooms of %d remaining"
256256
257257 if processed_event_count > batch_size:
258258 # Don't process any more rooms, we've hit our batch size.
259 defer.returnValue(processed_event_count)
260
261 defer.returnValue(processed_event_count)
259 return processed_event_count
260
261 return processed_event_count
262262
263263 @defer.inlineCallbacks
264264 def _populate_user_directory_process_users(self, progress, batch_size):
267267 """
268268 if not self.hs.config.user_directory_search_all_users:
269269 yield self._end_background_update("populate_user_directory_process_users")
270 defer.returnValue(1)
270 return 1
271271
272272 def _get_next_batch(txn):
273273 sql = "SELECT user_id FROM %s LIMIT %s" % (
297297 # No more users -- complete the transaction.
298298 if not users_to_work_on:
299299 yield self._end_background_update("populate_user_directory_process_users")
300 defer.returnValue(1)
300 return 1
301301
302302 logger.info(
303303 "Processing the next %d users of %d remaining"
321321 progress,
322322 )
323323
324 defer.returnValue(len(users_to_work_on))
324 return len(users_to_work_on)
325325
326326 @defer.inlineCallbacks
327327 def is_room_world_readable_or_publicly_joinable(self, room_id):
343343 join_rule_ev = yield self.get_event(join_rules_id, allow_none=True)
344344 if join_rule_ev:
345345 if join_rule_ev.content.get("join_rule") == JoinRules.PUBLIC:
346 defer.returnValue(True)
346 return True
347347
348348 hist_vis_id = current_state_ids.get((EventTypes.RoomHistoryVisibility, ""))
349349 if hist_vis_id:
350350 hist_vis_ev = yield self.get_event(hist_vis_id, allow_none=True)
351351 if hist_vis_ev:
352352 if hist_vis_ev.content.get("history_visibility") == "world_readable":
353 defer.returnValue(True)
354
355 defer.returnValue(False)
353 return True
354
355 return False
356356
357357 def update_profile_in_user_dir(self, user_id, display_name, avatar_url):
358358 """
498498 user_ids = set(user_ids_share_pub)
499499 user_ids.update(user_ids_share_priv)
500500
501 defer.returnValue(user_ids)
501 return user_ids
502502
503503 def add_users_who_share_private_room(self, room_id, user_id_tuples):
504504 """Insert entries into the users_who_share_private_rooms table. The first
608608
609609 users = set(pub_rows)
610610 users.update(rows)
611 defer.returnValue(list(users))
611 return list(users)
612612
613613 @defer.inlineCallbacks
614614 def get_rooms_in_common_for_users(self, user_id, other_user_id):
617617 sql = """
618618 SELECT room_id FROM (
619619 SELECT c.room_id FROM current_state_events AS c
620 INNER JOIN room_memberships USING (event_id)
620 INNER JOIN room_memberships AS m USING (event_id)
621621 WHERE type = 'm.room.member'
622 AND membership = 'join'
622 AND m.membership = 'join'
623623 AND state_key = ?
624624 ) AS f1 INNER JOIN (
625625 SELECT c.room_id FROM current_state_events AS c
626 INNER JOIN room_memberships USING (event_id)
626 INNER JOIN room_memberships AS m USING (event_id)
627627 WHERE type = 'm.room.member'
628 AND membership = 'join'
628 AND m.membership = 'join'
629629 AND state_key = ?
630630 ) f2 USING (room_id)
631631 """
634634 "get_rooms_in_common_for_users", None, sql, user_id, other_user_id
635635 )
636636
637 defer.returnValue([room_id for room_id, in rows])
637 return [room_id for room_id, in rows]
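The SQL change above aliases `room_memberships` and qualifies `membership` as `m.membership`. The reason: once a column of the same name exists in both joined tables, an unqualified reference is rejected as ambiguous. A small SQLite reproduction of the failure mode and the fix (schema here is a toy, not Synapse's real one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE current_state_events (event_id TEXT, membership TEXT);
    CREATE TABLE room_memberships (event_id TEXT, membership TEXT);
    INSERT INTO current_state_events VALUES ('$e', 'join');
    INSERT INTO room_memberships VALUES ('$e', 'join');
""")

try:
    conn.execute(
        "SELECT * FROM current_state_events AS c "
        "JOIN room_memberships USING (event_id) WHERE membership = 'join'"
    )
    ambiguous = False
except sqlite3.OperationalError:
    ambiguous = True  # "ambiguous column name: membership"

# Qualifying the column through an alias resolves it:
rows = conn.execute(
    "SELECT c.event_id FROM current_state_events AS c "
    "JOIN room_memberships AS m USING (event_id) WHERE m.membership = 'join'"
).fetchall()
```

Qualifying columns up front also future-proofs the query against later schema additions to either table.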
638638
639639 def delete_all_from_user_dir(self):
640640 """Delete the entire user directory
781781
782782 limited = len(results) > limit
783783
784 defer.returnValue({"limited": limited, "results": results})
784 return {"limited": limited, "results": results}
785785
786786
787787 def _parse_query_sqlite(search_term):
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
1415 import operator
15
16 from twisted.internet import defer
1716
1817 from synapse.storage._base import SQLBaseStore
1918 from synapse.util.caches.descriptors import cached, cachedList
6665
6766 erased_users = yield self.runInteraction("are_users_erased", _get_erased_users)
6867 res = dict((u, u in erased_users) for u in user_ids)
69 defer.returnValue(res)
68 return res
7069
7170
7271 class UserErasureStore(UserErasureWorkerStore):
5555 device_list_key=device_list_key,
5656 groups_key=groups_key,
5757 )
58 defer.returnValue(token)
58 return token
5959
6060 @defer.inlineCallbacks
6161 def get_current_token_for_pagination(self):
7979 device_list_key=0,
8080 groups_key=0,
8181 )
82 defer.returnValue(token)
82 return token
4848 with context.PreserveLoggingContext():
4949 self._reactor.callLater(seconds, d.callback, seconds)
5050 res = yield d
51 defer.returnValue(res)
51 return res
5252
5353 def time(self):
5454 """Returns the current system time in seconds since epoch."""
5858        """Returns the current system time in milliseconds since epoch."""
5959 return int(self.time() * 1000)
6060
61 def looping_call(self, f, msec):
61 def looping_call(self, f, msec, *args, **kwargs):
6262 """Call a function repeatedly.
6363
6464 Waits `msec` initially before calling `f` for the first time.
6969 Args:
7070 f(function): The function to call repeatedly.
7171 msec(float): How long to wait between calls in milliseconds.
72            *args: Positional arguments to pass to function.
73            **kwargs: Keyword arguments to pass to function.
7274 """
73 call = task.LoopingCall(f)
75 call = task.LoopingCall(f, *args, **kwargs)
7476 call.clock = self._reactor
7577 d = call.start(msec / 1000.0, now=False)
7678 d.addErrback(log_failure, "Looping call died", consumeErrors=False)
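The hunk above simply forwards `*args`/`**kwargs` to `task.LoopingCall`, which accepts them natively: `LoopingCall(f, *a, **kw)` invokes `f(*a, **kw)` on every tick. A dependency-free sketch of that forwarding, with a fake clock-less stand-in so nothing from Twisted is required:

```python
class FakeLoopingCall:
    """Illustrative stand-in for twisted.internet.task.LoopingCall:
    stores the callable plus its arguments and replays them on each
    (manually triggered) tick."""

    def __init__(self, f, *args, **kwargs):
        self.f, self.args, self.kwargs = f, args, kwargs

    def tick(self):
        return self.f(*self.args, **self.kwargs)

seen = []
call = FakeLoopingCall(lambda name, n: seen.append((name, n)), "ping", n=3)
call.tick()
call.tick()
```

Before this change, callers had to close over their arguments with a lambda; threading them through the constructor keeps call sites flatter.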
365365 new_defer.callback(None)
366366 self.key_to_current_readers.get(key, set()).discard(new_defer)
367367
368 defer.returnValue(_ctx_manager())
368 return _ctx_manager()
369369
370370 @defer.inlineCallbacks
371371 def write(self, key):
395395 if self.key_to_current_writer[key] == new_defer:
396396 self.key_to_current_writer.pop(key)
397397
398 defer.returnValue(_ctx_manager())
398 return _ctx_manager()
399399
400400
401401 def _cancelled_to_timed_out_error(value, timeout):
00 # -*- coding: utf-8 -*-
11 # Copyright 2015, 2016 OpenMarket Ltd
2 # Copyright 2019 The Matrix.org Foundation C.I.C.
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
5051 response_cache_total = Gauge("synapse_util_caches_response_cache:total", "", ["name"])
5152
5253
53 def register_cache(cache_type, cache_name, cache):
54 def register_cache(cache_type, cache_name, cache, collect_callback=None):
55 """Register a cache object for metric collection.
56
57 Args:
58 cache_type (str):
59 cache_name (str): name of the cache
60 cache (object): cache itself
61 collect_callback (callable|None): if not None, a function which is called during
62 metric collection to update additional metrics.
63
64 Returns:
65 CacheMetric: an object which provides inc_{hits,misses,evictions} methods
66 """
5467
5568 # Check if the metric is already registered. Unregister it, if so.
5669 # This usually happens during tests, as at runtime these caches are
89102 cache_hits.labels(cache_name).set(self.hits)
90103 cache_evicted.labels(cache_name).set(self.evicted_size)
91104 cache_total.labels(cache_name).set(self.hits + self.misses)
105 if collect_callback:
106 collect_callback()
92107 except Exception as e:
93108 logger.warn("Error calculating metrics for %s: %s", cache_name, e)
94109 raise
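The `collect_callback` hook added to `register_cache` above lets a cache refresh auxiliary gauges lazily, only when metrics are actually collected, instead of on every cache operation. A minimal sketch of the shape (names illustrative, no `prometheus_client` dependency):

```python
class CacheMetric:
    def __init__(self, collect_callback=None):
        self.hits = 0
        self.misses = 0
        self._collect_callback = collect_callback
        self.exported = {}  # stands in for the Prometheus gauges

    def collect(self):
        # Standard metrics are exported on every scrape...
        self.exported["total"] = self.hits + self.misses
        # ...and the optional callback fills in cache-specific extras,
        # e.g. the number of currently-pending lookups.
        if self._collect_callback:
            self._collect_callback()

pending = {"key-a": 1, "key-b": 2}
metric = CacheMetric(
    collect_callback=lambda: metric.exported.__setitem__("pending", len(pending))
)
metric.hits = 5
metric.collect()
```

Computing `len(pending)` at scrape time rather than maintaining a counter on every cache hit keeps the hot path free of metrics bookkeeping.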
1818 import threading
1919 from collections import namedtuple
2020
21 import six
22 from six import itervalues, string_types
21 from six import itervalues
22
23 from prometheus_client import Gauge
2324
2425 from twisted.internet import defer
2526
2930 from synapse.util.caches import get_cache_factor_for
3031 from synapse.util.caches.lrucache import LruCache
3132 from synapse.util.caches.treecache import TreeCache, iterate_tree_cache_entry
32 from synapse.util.stringutils import to_ascii
3333
3434 from . import register_cache
3535
3636 logger = logging.getLogger(__name__)
3737
38
39 cache_pending_metric = Gauge(
40 "synapse_util_caches_cache_pending",
41 "Number of lookups currently pending for this cache",
42 ["name"],
43 )
3844
3945 _CacheSentinel = object()
4046
8187 self.name = name
8288 self.keylen = keylen
8389 self.thread = None
84 self.metrics = register_cache("cache", name, self.cache)
90 self.metrics = register_cache(
91 "cache",
92 name,
93 self.cache,
94 collect_callback=self._metrics_collection_callback,
95 )
8596
8697 def _on_evicted(self, evicted_count):
8798 self.metrics.inc_evictions(evicted_count)
99
100 def _metrics_collection_callback(self):
101 cache_pending_metric.labels(self.name).set(len(self._pending_deferred_cache))
88102
89103 def check_thread(self):
90104 expected_thread = self.thread
107121 update_metrics (bool): whether to update the cache hit rate metrics
108122
109123 Returns:
110 Either a Deferred or the raw result
124 Either an ObservableDeferred or the raw result
111125 """
112126 callbacks = [callback] if callback else []
113127 val = self._pending_deferred_cache.get(key, _CacheSentinel)
131145 return default
132146
133147 def set(self, key, value, callback=None):
148 if not isinstance(value, defer.Deferred):
149 raise TypeError("not a Deferred")
150
134151 callbacks = [callback] if callback else []
135152 self.check_thread()
136 entry = CacheEntry(deferred=value, callbacks=callbacks)
153 observable = ObservableDeferred(value, consumeErrors=True)
154 observer = defer.maybeDeferred(observable.observe)
155 entry = CacheEntry(deferred=observable, callbacks=callbacks)
137156
138157 existing_entry = self._pending_deferred_cache.pop(key, None)
139158 if existing_entry:
141160
142161 self._pending_deferred_cache[key] = entry
143162
144 def shuffle(result):
163 def compare_and_pop():
164 """Check if our entry is still the one in _pending_deferred_cache, and
165 if so, pop it.
166
167 Returns true if the entries matched.
168 """
145169 existing_entry = self._pending_deferred_cache.pop(key, None)
146170 if existing_entry is entry:
171 return True
172
173 # oops, the _pending_deferred_cache has been updated since
174 # we started our query, so we are out of date.
175 #
176 # Better put back whatever we took out. (We do it this way
177 # round, rather than peeking into the _pending_deferred_cache
178 # and then removing on a match, to make the common case faster)
179 if existing_entry is not None:
180 self._pending_deferred_cache[key] = existing_entry
181
182 return False
183
184 def cb(result):
185 if compare_and_pop():
147186 self.cache.set(key, result, entry.callbacks)
148187 else:
149 # oops, the _pending_deferred_cache has been updated since
150 # we started our query, so we are out of date.
151 #
152 # Better put back whatever we took out. (We do it this way
153 # round, rather than peeking into the _pending_deferred_cache
154 # and then removing on a match, to make the common case faster)
155 if existing_entry is not None:
156 self._pending_deferred_cache[key] = existing_entry
157
158188 # we're not going to put this entry into the cache, so need
159189 # to make sure that the invalidation callbacks are called.
160190 # That was probably done when _pending_deferred_cache was
162192 # `invalidate` being previously called, in which case it may
163193 # not have been. Either way, let's double-check now.
164194 entry.invalidate()
165 return result
166
167 entry.deferred.addCallback(shuffle)
195
196 def eb(_fail):
197 compare_and_pop()
198 entry.invalidate()
199
200 # once the deferred completes, we can move the entry from the
201 # _pending_deferred_cache to the real cache.
202 #
203 observer.addCallbacks(cb, eb)
204 return observable
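The `compare_and_pop` refactor above guards against a race: when a pending lookup completes, its result may only be promoted into the real cache if the pending-map entry is still *ours*; a concurrent invalidation or replacement must win. A sketch of the same guard with plain dicts standing in for Synapse's cache classes:

```python
pending = {}  # key -> in-flight entry
cache = {}    # key -> completed result

def set_pending(key, entry):
    pending[key] = entry

def complete(key, entry, result):
    """Called when `entry`'s lookup finishes with `result`."""
    existing = pending.pop(key, None)
    if existing is entry:
        cache[key] = result      # still current: promote to the real cache
    elif existing is not None:
        # Someone replaced us while we ran; put their entry back.
        # (Pop-then-restore keeps the common matching case fast.)
        pending[key] = existing

e1, e2 = object(), object()
set_pending("k", e1)
set_pending("k", e2)         # e1 superseded before it completed
complete("k", e1, "stale")   # must not clobber e2's pending entry
assert "k" not in cache
complete("k", e2, "fresh")
```

The diff additionally routes errbacks through the same check so a failed lookup is evicted rather than cached, which the old `shuffle` callback did not handle.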
168205
169206 def prefill(self, key, value, callback=None):
170207 callbacks = [callback] if callback else []
288325 def foo(self, key, cache_context):
289326 r1 = yield self.bar1(key, on_invalidate=cache_context.invalidate)
290327 r2 = yield self.bar2(key, on_invalidate=cache_context.invalidate)
291 defer.returnValue(r1 + r2)
328 return r1 + r2
292329
293330 Args:
294331 num_args (int): number of positional arguments (excluding ``self`` and
397434
398435 ret.addErrback(onErr)
399436
400 # If our cache_key is a string on py2, try to convert to ascii
401 # to save a bit of space in large caches. Py3 does this
402 # internally automatically.
403 if six.PY2 and isinstance(cache_key, string_types):
404 cache_key = to_ascii(cache_key)
405
406 result_d = ObservableDeferred(ret, consumeErrors=True)
407 cache.set(cache_key, result_d, callback=invalidate_callback)
437 result_d = cache.set(cache_key, ret, callback=invalidate_callback)
408438 observer = result_d.observe()
409439
410 if isinstance(observer, defer.Deferred):
411 return make_deferred_yieldable(observer)
412 else:
413 return observer
440 return make_deferred_yieldable(observer)
414441
415442 if self.num_args == 1:
416443 wrapped.invalidate = lambda key: cache.invalidate(key[0])
526553 missing.add(arg)
527554
528555 if missing:
529 # we need an observable deferred for each entry in the list,
556 # we need a deferred for each entry in the list,
530557 # which we put in the cache. Each deferred resolves with the
531558 # relevant result for that key.
532559 deferreds_map = {}
534561 deferred = defer.Deferred()
535562 deferreds_map[arg] = deferred
536563 key = arg_to_cache_key(arg)
537 observable = ObservableDeferred(deferred)
538 cache.set(key, observable, callback=invalidate_callback)
564 cache.set(key, deferred, callback=invalidate_callback)
539565
540566 def complete_all(res):
541567 # the wrapped function has completed. It returns a
120120 @defer.inlineCallbacks
121121 def handle_request(request):
122122 # etc
123 defer.returnValue(result)
123 return result
124124
125125 result = yield response_cache.wrap(
126126 key,
6666 def measured_func(self, *args, **kwargs):
6767 with Measure(self.clock, name):
6868 r = yield func(self, *args, **kwargs)
69 defer.returnValue(r)
69 return r
7070
7171 return measured_func
7272
9494 # maximum backoff even though it might only have been down briefly
9595 backoff_on_failure = not ignore_backoff
9696
97 defer.returnValue(
98 RetryDestinationLimiter(
99 destination,
100 clock,
101 store,
102 retry_interval,
103 backoff_on_failure=backoff_on_failure,
104 **kwargs
105 )
97 return RetryDestinationLimiter(
98 destination,
99 clock,
100 store,
101 retry_interval,
102 backoff_on_failure=backoff_on_failure,
103 **kwargs
106104 )
107105
108106
2121
2222
2323 def get_version_string(module):
24 """Given a module calculate a git-aware version string for it.
25
26    If called on a module not in a git checkout, this will return `__version__`.
27
28 Args:
29 module (module)
30
31 Returns:
32 str
33 """
34
35 cached_version = getattr(module, "_synapse_version_string_cache", None)
36 if cached_version:
37 return cached_version
38
39 version_string = module.__version__
40
2441 try:
2542 null = open(os.devnull, "w")
2643 cwd = os.path.dirname(os.path.abspath(module.__file__))
7996 s for s in (git_branch, git_tag, git_commit, git_dirty) if s
8097 )
8198
82 return "%s (%s)" % (module.__version__, git_version)
99 version_string = "%s (%s)" % (module.__version__, git_version)
83100 except Exception as e:
84101 logger.info("Failed to check for git repository: %s", e)
85102
86 return module.__version__
103 module._synapse_version_string_cache = version_string
104
105 return version_string
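The caching added to `get_version_string` above stashes the computed string as an attribute on the module object itself, so the (subprocess-spawning) git inspection runs at most once per process. A self-contained sketch of the memoization, with the git probing replaced by a placeholder:

```python
import types

def get_version_string(module):
    cached = getattr(module, "_version_string_cache", None)
    if cached:
        return cached
    # Stand-in for the expensive git branch/tag/commit inspection
    # performed by the real function.
    version_string = "%s (fake-git-info)" % module.__version__
    # Memoize on the module object: later calls skip the probe entirely.
    module._version_string_cache = version_string
    return version_string

mod = types.ModuleType("demo")
mod.__version__ = "1.3.0"
first = get_version_string(mod)
second = get_version_string(mod)
```

Storing the cache on the module (rather than in a global dict keyed by module) means each module carries its own memo and nothing needs invalidating.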
207207 filtered_events = filter(operator.truth, filtered_events)
208208
209209 # we turn it into a list before returning it.
210 defer.returnValue(list(filtered_events))
210 return list(filtered_events)
211211
212212
213213 @defer.inlineCallbacks
316316 elif redact:
317317 to_return.append(prune_event(e))
318318
319 defer.returnValue(to_return)
319 return to_return
320320
321321 # If there are no erased users then we can just return the given list
322322 # of events without having to copy it.
323 defer.returnValue(events)
323 return events
324324
325325 # Ok, so we're dealing with events that have non-trivial visibility
326326 # rules, so we need to also get the memberships of the room.
383383 elif redact:
384384 to_return.append(prune_event(e))
385385
386 defer.returnValue(to_return)
386 return to_return
8585 getattr(LoggingContext.current_context(), "request", None), expected
8686 )
8787
88 def test_wait_for_previous_lookups(self):
89 kr = keyring.Keyring(self.hs)
90
91 lookup_1_deferred = defer.Deferred()
92 lookup_2_deferred = defer.Deferred()
93
94 # we run the lookup in a logcontext so that the patched inlineCallbacks can check
95 # it is doing the right thing with logcontexts.
96 wait_1_deferred = run_in_context(
97 kr.wait_for_previous_lookups, {"server1": lookup_1_deferred}
98 )
99
100 # there were no previous lookups, so the deferred should be ready
101 self.successResultOf(wait_1_deferred)
102
103 # set off another wait. It should block because the first lookup
104 # hasn't yet completed.
105 wait_2_deferred = run_in_context(
106 kr.wait_for_previous_lookups, {"server1": lookup_2_deferred}
107 )
108
109 self.assertFalse(wait_2_deferred.called)
110
111 # let the first lookup complete (in the sentinel context)
112 lookup_1_deferred.callback(None)
113
114 # now the second wait should complete.
115 self.successResultOf(wait_2_deferred)
116
11788 def test_verify_json_objects_for_server_awaits_previous_requests(self):
11889 key1 = signedjson.key.generate_signing_key(1)
11990
135106 self.assertEquals(LoggingContext.current_context().request, "11")
136107 with PreserveLoggingContext():
137108 yield persp_deferred
138 defer.returnValue(persp_resp)
109 return persp_resp
139110
140111 self.http_client.post_json.side_effect = get_perspectives
141112
582553 # logs.
583554 ctx.request = "testctx"
584555 rv = yield f(*args, **kwargs)
585 defer.returnValue(rv)
556 return rv
586557
587558
588559 def _verify_json_for_server(kr, *args):
593564 @defer.inlineCallbacks
594565 def v():
595566 rv1 = yield kr.verify_json_for_server(*args)
596 defer.returnValue(rv1)
567 return rv1
597568
598569 return run_in_context(v)
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 from mock import Mock
16
1517 from twisted.internet import defer
1618
19 from synapse.api.errors import Codes, SynapseError
1720 from synapse.config.ratelimiting import FederationRateLimitConfig
1821 from synapse.federation.transport import server
1922 from synapse.rest import admin
2023 from synapse.rest.client.v1 import login, room
24 from synapse.types import UserID
2125 from synapse.util.ratelimitutils import FederationRateLimiter
2226
2327 from tests import unittest
3236 ]
3337
3438 def default_config(self, name="test"):
35 config = super(RoomComplexityTests, self).default_config(name=name)
36 config["limit_large_remote_room_joins"] = True
37 config["limit_large_remote_room_complexity"] = 0.05
39 config = super().default_config(name=name)
40 config["limit_remote_rooms"] = {"enabled": True, "complexity": 0.05}
3841 return config
3942
4043 def prepare(self, reactor, clock, homeserver):
8790 self.assertEquals(200, channel.code)
8891 complexity = channel.json_body["v1"]
8992 self.assertEqual(complexity, 1.23)
93
94 def test_join_too_large(self):
95
96 u1 = self.register_user("u1", "pass")
97
98 handler = self.hs.get_room_member_handler()
99 fed_transport = self.hs.get_federation_transport_client()
100
101 # Mock out some things, because we don't want to test the whole join
102 fed_transport.client.get_json = Mock(return_value=defer.succeed({"v1": 9999}))
103 handler.federation_handler.do_invite_join = Mock(return_value=defer.succeed(1))
104
105 d = handler._remote_join(
106 None,
107 ["otherserver.example"],
108 "roomid",
109 UserID.from_string(u1),
110 {"membership": "join"},
111 )
112
113 self.pump()
114
115 # The request failed with a SynapseError saying the resource limit was
116 # exceeded.
117 f = self.get_failure(d, SynapseError)
118 self.assertEqual(f.value.code, 400, f.value)
119 self.assertEqual(f.value.errcode, Codes.RESOURCE_LIMIT_EXCEEDED)
120
121 def test_join_too_large_once_joined(self):
122
123 u1 = self.register_user("u1", "pass")
124 u1_token = self.login("u1", "pass")
125
126 # Ok, this might seem a bit weird -- I want to test that we actually
127 # leave the room, but I don't want to simulate two servers. So, we make
128 # a local room, which we say we're joining remotely, even if there's no
129 # remote, because we mock that out. Then, we'll leave the (actually
130 # local) room, which will be propagated over federation in a real
131 # scenario.
132 room_1 = self.helper.create_room_as(u1, tok=u1_token)
133
134 handler = self.hs.get_room_member_handler()
135 fed_transport = self.hs.get_federation_transport_client()
136
137 # Mock out some things, because we don't want to test the whole join
138 fed_transport.client.get_json = Mock(return_value=defer.succeed(None))
139 handler.federation_handler.do_invite_join = Mock(return_value=defer.succeed(1))
140
141 # Artificially raise the complexity
142 self.hs.get_datastore().get_current_state_event_counts = lambda x: defer.succeed(
143 600
144 )
145
146 d = handler._remote_join(
147 None,
148 ["otherserver.example"],
149 room_1,
150 UserID.from_string(u1),
151 {"membership": "join"},
152 )
153
154 self.pump()
155
156 # The request failed with a SynapseError saying the resource limit was
157 # exceeded.
158 f = self.get_failure(d, SynapseError)
159 self.assertEqual(f.value.code, 400)
160 self.assertEqual(f.value.errcode, Codes.RESOURCE_LIMIT_EXCEEDED)
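These tests drive the `limit_remote_rooms` check: a room's v1 complexity is compared against the configured threshold before a remote join proceeds (or, in the second test, after joining, triggering a leave). A simplified sketch of that gate — the divisor of 500 in the v1 formula is stated here as an assumption about this Synapse release, not a definitive value:

```python
COMPLEXITY_V1_DIVISOR = 500  # assumption: Synapse's v1 complexity divisor


def room_complexity_v1(state_event_count):
    """v1 complexity: current state event count scaled by a fixed divisor."""
    return state_event_count / COMPLEXITY_V1_DIVISOR


def may_remote_join(state_event_count, limit_config):
    """Return False when the room is too complex to join remotely."""
    if not limit_config.get("enabled"):
        return True
    return room_complexity_v1(state_event_count) <= limit_config["complexity"]
```

With the test's artificial state count of 600 and a `complexity` limit of 0.05, the join is refused, matching the `RESOURCE_LIMIT_EXCEEDED` assertion above.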
4343 hs_config["max_mau_value"] = 50
4444 hs_config["limit_usage_by_mau"] = True
4545
46 hs = self.setup_test_homeserver(config=hs_config, expire_access_token=True)
46 hs = self.setup_test_homeserver(config=hs_config)
4747 return hs
4848
4949 def prepare(self, reactor, clock, hs):
282282 user, requester, displayname, by_admin=True
283283 )
284284
285 defer.returnValue((user_id, token))
285 return (user_id, token)
2424 from twisted.internet.protocol import Factory
2525 from twisted.protocols.tls import TLSMemoryBIOFactory
2626 from twisted.web._newclient import ResponseNeverReceived
27 from twisted.web.client import Agent
2728 from twisted.web.http import HTTPChannel
2829 from twisted.web.http_headers import Headers
2930 from twisted.web.iweb import IPolicyForHTTPS
3031
3132 from synapse.config.homeserver import HomeServerConfig
3233 from synapse.crypto.context_factory import ClientTLSOptionsFactory
33 from synapse.http.federation.matrix_federation_agent import (
34 MatrixFederationAgent,
34 from synapse.http.federation.matrix_federation_agent import MatrixFederationAgent
35 from synapse.http.federation.srv_resolver import Server
36 from synapse.http.federation.well_known_resolver import (
37 WellKnownResolver,
3538 _cache_period_from_headers,
3639 )
37 from synapse.http.federation.srv_resolver import Server
3840 from synapse.logging.context import LoggingContext
3941 from synapse.util.caches.ttlcache import TTLCache
4042
7476
7577 config_dict = default_config("test", parse=False)
7678 config_dict["federation_custom_ca_list"] = [get_test_ca_cert_file()]
77 # config_dict["trusted_key_servers"] = []
7879
7980 self._config = config = HomeServerConfig()
8081 config.parse_config_dict(config_dict, "", "")
8182
83 self.tls_factory = ClientTLSOptionsFactory(config)
8284 self.agent = MatrixFederationAgent(
8385 reactor=self.reactor,
84 tls_client_options_factory=ClientTLSOptionsFactory(config),
85 _well_known_tls_policy=TrustingTLSPolicyForHTTPS(),
86 tls_client_options_factory=self.tls_factory,
8687 _srv_resolver=self.mock_resolver,
8788 _well_known_cache=self.well_known_cache,
8889 )
144145
145146 try:
146147 fetch_res = yield fetch_d
147 defer.returnValue(fetch_res)
148 return fetch_res
148149 except Exception as e:
149150 logger.info("Fetch of %s failed: %s", uri.decode("ascii"), e)
150151 raise
690691 not signed by a CA
691692 """
692693
693 # we use the same test server as the other tests, but use an agent
694 # with _well_known_tls_policy left to the default, which will not
695 # trust it (since the presented cert is signed by a test CA)
694 # we use the same test server as the other tests, but use an agent with
695 # the config left to the default, which will not trust it (since the
696 # presented cert is signed by a test CA)
696697
697698 self.mock_resolver.resolve_service.side_effect = lambda _: []
698699 self.reactor.lookups["testserv"] = "1.2.3.4"
699700
701 config = default_config("test", parse=True)
702
700703 agent = MatrixFederationAgent(
701704 reactor=self.reactor,
702 tls_client_options_factory=ClientTLSOptionsFactory(self._config),
705 tls_client_options_factory=ClientTLSOptionsFactory(config),
703706 _srv_resolver=self.mock_resolver,
704707 _well_known_cache=self.well_known_cache,
705708 )
927930 self.reactor.pump((0.1,))
928931 self.successResultOf(test_d)
929932
930 @defer.inlineCallbacks
931 def do_get_well_known(self, serv):
932 try:
933 result = yield self.agent._get_well_known(serv)
934 logger.info("Result from well-known fetch: %s", result)
935 except Exception as e:
936 logger.warning("Error fetching well-known: %s", e)
937 raise
938 defer.returnValue(result)
939
940933 def test_well_known_cache(self):
934 well_known_resolver = WellKnownResolver(
935 self.reactor,
936 Agent(self.reactor, contextFactory=self.tls_factory),
937 well_known_cache=self.well_known_cache,
938 )
939
941940 self.reactor.lookups["testserv"] = "1.2.3.4"
942941
943 fetch_d = self.do_get_well_known(b"testserv")
942 fetch_d = well_known_resolver.get_well_known(b"testserv")
944943
945944 # there should be an attempt to connect on port 443 for the .well-known
946945 clients = self.reactor.tcpClients
952951 well_known_server = self._handle_well_known_connection(
953952 client_factory,
954953 expected_sni=b"testserv",
955 response_headers={b"Cache-Control": b"max-age=10"},
954 response_headers={b"Cache-Control": b"max-age=1000"},
956955 content=b'{ "m.server": "target-server" }',
957956 )
958957
959958 r = self.successResultOf(fetch_d)
960 self.assertEqual(r, b"target-server")
959 self.assertEqual(r.delegated_server, b"target-server")
961960
962961 # close the tcp connection
963962 well_known_server.loseConnection()
964963
965964 # repeat the request: it should hit the cache
966 fetch_d = self.do_get_well_known(b"testserv")
965 fetch_d = well_known_resolver.get_well_known(b"testserv")
967966 r = self.successResultOf(fetch_d)
968 self.assertEqual(r, b"target-server")
967 self.assertEqual(r.delegated_server, b"target-server")
969968
970969 # expire the cache
971 self.reactor.pump((10.0,))
970 self.reactor.pump((1000.0,))
972971
973972 # now it should connect again
974 fetch_d = self.do_get_well_known(b"testserv")
973 fetch_d = well_known_resolver.get_well_known(b"testserv")
975974
976975 self.assertEqual(len(clients), 1)
977976 (host, port, client_factory, _timeout, _bindAddress) = clients.pop(0)
985984 )
986985
987986 r = self.successResultOf(fetch_d)
988 self.assertEqual(r, b"other-server")
987 self.assertEqual(r.delegated_server, b"other-server")
989988
990989
991990 class TestCachePeriodFromHeaders(TestCase):
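The reworked well-known test caches the resolved delegation for the `max-age` advertised in the response's `Cache-Control` header (1000 seconds above), only re-fetching after the cache expires. A simplified stand-in for header parsing in the spirit of `_cache_period_from_headers` (Synapse's real parser handles more directives; this sketch is not its implementation):

```python
def cache_period_from_cache_control(header_value):
    """Return the cache period in seconds implied by a Cache-Control value:
    0 for no-store/no-cache, the max-age when present, else None."""
    for directive in header_value.split(","):
        directive = directive.strip().lower()
        if directive in ("no-store", "no-cache"):
            return 0
        if directive.startswith("max-age="):
            try:
                return int(directive[len("max-age="):])
            except ValueError:
                return None
    return None
```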
6060 # should have restored our context
6161 self.assertIs(LoggingContext.current_context(), ctx)
6262
63 defer.returnValue(result)
63 return result
6464
6565 test_d = do_lookup()
6666 self.assertNoResult(test_d)
6767
6868 try:
6969 fetch_res = yield fetch_d
70 defer.returnValue(fetch_res)
70 return fetch_res
7171 finally:
7272 check_logcontext(context)
7373
4545 @defer.inlineCallbacks
4646 def cb():
4747 yield Clock(reactor).sleep(0)
48 defer.returnValue("yay")
48 return "yay"
4949
5050 @defer.inlineCallbacks
5151 def test():
322322 "renew_at": 172800000, # Time in ms for 2 days
323323 "renew_by_email_enabled": True,
324324 "renew_email_subject": "Renew your account",
325 "account_renewed_html_path": "account_renewed.html",
326 "invalid_token_html_path": "invalid_token.html",
325327 }
326328
327329 # Email config.
372374 self.render(request)
373375 self.assertEquals(channel.result["code"], b"200", channel.result)
374376
377 # Check that we're getting HTML back.
378 content_type = None
379 for header in channel.result.get("headers", []):
380 if header[0] == b"Content-Type":
381 content_type = header[1]
382 self.assertEqual(content_type, b"text/html; charset=utf-8", channel.result)
383
384 # Check that the HTML we're getting is the one we expect on a successful renewal.
385 expected_html = self.hs.config.account_validity.account_renewed_html_content
386 self.assertEqual(
387 channel.result["body"], expected_html.encode("utf8"), channel.result
388 )
389
375390 # Move 3 days forward. If the renewal failed, every authed request with
376391 # our access token should be denied from now, otherwise they should
377392 # succeed.
379394 request, channel = self.make_request(b"GET", "/sync", access_token=tok)
380395 self.render(request)
381396 self.assertEquals(channel.result["code"], b"200", channel.result)
397
398 def test_renewal_invalid_token(self):
399 # Hit the renewal endpoint with an invalid token and check that it behaves as
400 # expected, i.e. that it responds with 404 Not Found and the correct HTML.
401 url = "/_matrix/client/unstable/account_validity/renew?token=123"
402 request, channel = self.make_request(b"GET", url)
403 self.render(request)
404 self.assertEquals(channel.result["code"], b"404", channel.result)
405
406 # Check that we're getting HTML back.
407 content_type = None
408 for header in channel.result.get("headers", []):
409 if header[0] == b"Content-Type":
410 content_type = header[1]
411 self.assertEqual(content_type, b"text/html; charset=utf-8", channel.result)
412
413 # Check that the HTML we're getting is the one we expect when using an
414 # invalid/unknown token.
415 expected_html = self.hs.config.account_validity.invalid_token_html_content
416 self.assertEqual(
417 channel.result["body"], expected_html.encode("utf8"), channel.result
418 )
382419
383420 def test_manual_email_send(self):
384421 self.email_attempts = []
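Both renewal tests above pull the `Content-Type` out of the response's header list before comparing bodies. That lookup over `(name, value)` byte pairs can be sketched as a small helper (illustrative; the tests inline it instead):

```python
def get_header(headers, name):
    """Return the first value for `name` from (name, value) byte pairs,
    or None if absent. Header names are compared case-insensitively."""
    for key, value in headers:
        if key.lower() == name.lower():
            return value
    return None
```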
3535 "room_name": "Server Notices",
3636 }
3737
38 hs = self.setup_test_homeserver(config=hs_config, expire_access_token=True)
38 hs = self.setup_test_homeserver(config=hs_config)
3939 return hs
4040
4141 def prepare(self, reactor, clock, hs):
4242 "test_update",
4343 progress,
4444 )
45 defer.returnValue(count)
45 return count
4646
4747 self.update_handler.side_effect = update
4848
5959 @defer.inlineCallbacks
6060 def update(progress, count):
6161 yield self.store._end_background_update("test_update")
62 defer.returnValue(count)
62 return count
6363
6464 self.update_handler.side_effect = update
6565 self.update_handler.reset_mock()
00 # -*- coding: utf-8 -*-
11 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2019 The Matrix.org Foundation C.I.C.
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
2223 from synapse.types import RoomID, UserID
2324
2425 from tests import unittest
25 from tests.utils import create_room, setup_test_homeserver
26
27
28 class RedactionTestCase(unittest.TestCase):
29 @defer.inlineCallbacks
30 def setUp(self):
31 hs = yield setup_test_homeserver(
32 self.addCleanup, resource_for_federation=Mock(), http_client=None
33 )
34
26 from tests.utils import create_room
27
28
29 class RedactionTestCase(unittest.HomeserverTestCase):
30 def make_homeserver(self, reactor, clock):
31 return self.setup_test_homeserver(
32 resource_for_federation=Mock(), http_client=None
33 )
34
35 def prepare(self, reactor, clock, hs):
3536 self.store = hs.get_datastore()
3637 self.event_builder_factory = hs.get_event_builder_factory()
3738 self.event_creation_handler = hs.get_event_creation_handler()
4142
4243 self.room1 = RoomID.from_string("!abc123:test")
4344
44 yield create_room(hs, self.room1.to_string(), self.u_alice.to_string())
45 self.get_success(
46 create_room(hs, self.room1.to_string(), self.u_alice.to_string())
47 )
4548
4649 self.depth = 1
4750
48 @defer.inlineCallbacks
4951 def inject_room_member(
5052 self, room, user, membership, replaces_state=None, extra_content={}
5153 ):
6264 },
6365 )
6466
65 event, context = yield self.event_creation_handler.create_new_client_event(
66 builder
67 )
68
69 yield self.store.persist_event(event, context)
70
71 defer.returnValue(event)
72
73 @defer.inlineCallbacks
67 event, context = self.get_success(
68 self.event_creation_handler.create_new_client_event(builder)
69 )
70
71 self.get_success(self.store.persist_event(event, context))
72
73 return event
74
7475 def inject_message(self, room, user, body):
7576 self.depth += 1
7677
8586 },
8687 )
8788
88 event, context = yield self.event_creation_handler.create_new_client_event(
89 builder
90 )
91
92 yield self.store.persist_event(event, context)
93
94 defer.returnValue(event)
95
96 @defer.inlineCallbacks
89 event, context = self.get_success(
90 self.event_creation_handler.create_new_client_event(builder)
91 )
92
93 self.get_success(self.store.persist_event(event, context))
94
95 return event
96
9797 def inject_redaction(self, room, event_id, user, reason):
9898 builder = self.event_builder_factory.for_room_version(
9999 RoomVersions.V1,
107107 },
108108 )
109109
110 event, context = yield self.event_creation_handler.create_new_client_event(
111 builder
112 )
113
114 yield self.store.persist_event(event, context)
115
116 @defer.inlineCallbacks
110 event, context = self.get_success(
111 self.event_creation_handler.create_new_client_event(builder)
112 )
113
114 self.get_success(self.store.persist_event(event, context))
115
117116 def test_redact(self):
118 yield self.inject_room_member(self.room1, self.u_alice, Membership.JOIN)
119
120 msg_event = yield self.inject_message(self.room1, self.u_alice, "t")
117 self.get_success(
118 self.inject_room_member(self.room1, self.u_alice, Membership.JOIN)
119 )
120
121 msg_event = self.get_success(self.inject_message(self.room1, self.u_alice, "t"))
121122
122123 # Check event has not been redacted:
123 event = yield self.store.get_event(msg_event.event_id)
124 event = self.get_success(self.store.get_event(msg_event.event_id))
124125
125126 self.assertObjectHasAttributes(
126127 {
135136
136137 # Redact event
137138 reason = "Because I said so"
138 yield self.inject_redaction(
139 self.room1, msg_event.event_id, self.u_alice, reason
140 )
141
142 event = yield self.store.get_event(msg_event.event_id)
139 self.get_success(
140 self.inject_redaction(self.room1, msg_event.event_id, self.u_alice, reason)
141 )
142
143 event = self.get_success(self.store.get_event(msg_event.event_id))
143144
144145 self.assertEqual(msg_event.event_id, event.event_id)
145146
163164 event.unsigned["redacted_because"],
164165 )
165166
166 @defer.inlineCallbacks
167167 def test_redact_join(self):
168 yield self.inject_room_member(self.room1, self.u_alice, Membership.JOIN)
169
170 msg_event = yield self.inject_room_member(
171 self.room1, self.u_bob, Membership.JOIN, extra_content={"blue": "red"}
172 )
173
174 event = yield self.store.get_event(msg_event.event_id)
168 self.get_success(
169 self.inject_room_member(self.room1, self.u_alice, Membership.JOIN)
170 )
171
172 msg_event = self.get_success(
173 self.inject_room_member(
174 self.room1, self.u_bob, Membership.JOIN, extra_content={"blue": "red"}
175 )
176 )
177
178 event = self.get_success(self.store.get_event(msg_event.event_id))
175179
176180 self.assertObjectHasAttributes(
177181 {
186190
187191 # Redact event
188192 reason = "Because I said so"
189 yield self.inject_redaction(
190 self.room1, msg_event.event_id, self.u_alice, reason
193 self.get_success(
194 self.inject_redaction(self.room1, msg_event.event_id, self.u_alice, reason)
191195 )
192196
193197 # Check redaction
194198
195 event = yield self.store.get_event(msg_event.event_id)
199 event = self.get_success(self.store.get_event(msg_event.event_id))
196200
197201 self.assertTrue("redacted_because" in event.unsigned)
198202
213217 },
214218 event.unsigned["redacted_because"],
215219 )
220
221 def test_circular_redaction(self):
222 redaction_event_id1 = "$redaction1_id:test"
223 redaction_event_id2 = "$redaction2_id:test"
224
225 class EventIdManglingBuilder:
226 def __init__(self, base_builder, event_id):
227 self._base_builder = base_builder
228 self._event_id = event_id
229
230 @defer.inlineCallbacks
231 def build(self, prev_event_ids):
232 built_event = yield self._base_builder.build(prev_event_ids)
233 built_event.event_id = self._event_id
234 built_event._event_dict["event_id"] = self._event_id
235 return built_event
236
237 @property
238 def room_id(self):
239 return self._base_builder.room_id
240
241 event_1, context_1 = self.get_success(
242 self.event_creation_handler.create_new_client_event(
243 EventIdManglingBuilder(
244 self.event_builder_factory.for_room_version(
245 RoomVersions.V1,
246 {
247 "type": EventTypes.Redaction,
248 "sender": self.u_alice.to_string(),
249 "room_id": self.room1.to_string(),
250 "content": {"reason": "test"},
251 "redacts": redaction_event_id2,
252 },
253 ),
254 redaction_event_id1,
255 )
256 )
257 )
258
259 self.get_success(self.store.persist_event(event_1, context_1))
260
261 event_2, context_2 = self.get_success(
262 self.event_creation_handler.create_new_client_event(
263 EventIdManglingBuilder(
264 self.event_builder_factory.for_room_version(
265 RoomVersions.V1,
266 {
267 "type": EventTypes.Redaction,
268 "sender": self.u_alice.to_string(),
269 "room_id": self.room1.to_string(),
270 "content": {"reason": "test"},
271 "redacts": redaction_event_id1,
272 },
273 ),
274 redaction_event_id2,
275 )
276 )
277 )
278 self.get_success(self.store.persist_event(event_2, context_2))
279
280 # fetch one of the redactions
281 fetched = self.get_success(self.store.get_event(redaction_event_id1))
282
283 # it should have been redacted
284 self.assertEqual(fetched.unsigned["redacted_by"], redaction_event_id2)
285 self.assertEqual(
286 fetched.unsigned["redacted_because"].event_id, redaction_event_id2
287 )
1919
2020 from synapse.api.constants import EventTypes, Membership
2121 from synapse.api.room_versions import RoomVersions
22 from synapse.types import RoomID, UserID
22 from synapse.types import Requester, RoomID, UserID
2323
2424 from tests import unittest
2525 from tests.utils import create_room, setup_test_homeserver
6666
6767 yield self.store.persist_event(event, context)
6868
69 defer.returnValue(event)
69 return event
7070
7171 @defer.inlineCallbacks
7272 def test_one_member(self):
8383 )
8484 ],
8585 )
86
87
88 class CurrentStateMembershipUpdateTestCase(unittest.HomeserverTestCase):
89 def prepare(self, reactor, clock, homeserver):
90 self.store = homeserver.get_datastore()
91 self.room_creator = homeserver.get_room_creation_handler()
92
93 def test_can_rerun_update(self):
94 # First make sure we have completed all updates.
95 while not self.get_success(self.store.has_completed_background_updates()):
96 self.get_success(self.store.do_next_background_update(100), by=0.1)
97
98 # Now let's create a room, which will insert a membership
99 user = UserID("alice", "test")
100 requester = Requester(user, None, False, None, None)
101 self.get_success(self.room_creator.create_room(requester, {}))
102
103 # Register the background update to run again.
104 self.get_success(
105 self.store._simple_insert(
106 table="background_updates",
107 values={
108 "update_name": "current_state_events_membership",
109 "progress_json": "{}",
110 "depends_on": None,
111 },
112 )
113 )
114
115 # ... and tell the DataStore that it hasn't finished all updates yet
116 self.store._all_done = False
117
118 # Now let's actually drive the updates to completion
119 while not self.get_success(self.store.has_completed_background_updates()):
120 self.get_success(self.store.do_next_background_update(100), by=0.1)
6464
6565 yield self.store.persist_event(event, context)
6666
67 defer.returnValue(event)
67 return event
6868
6969 def assertStateMapEqual(self, s1, s2):
7070 for t in s1:
138138 builder
139139 )
140140 yield self.hs.get_datastore().persist_event(event, context)
141 defer.returnValue(event)
141 return event
142142
143143 @defer.inlineCallbacks
144144 def inject_room_member(self, user_id, membership="join", extra_content={}):
160160 )
161161
162162 yield self.hs.get_datastore().persist_event(event, context)
163 defer.returnValue(event)
163 return event
164164
165165 @defer.inlineCallbacks
166166 def inject_message(self, user_id, content=None):
181181 )
182182
183183 yield self.hs.get_datastore().persist_event(event, context)
184 defer.returnValue(event)
184 return event
185185
186186 @defer.inlineCallbacks
187187 def test_large_room(self):
2222
2323 from canonicaljson import json
2424
25 import twisted
26 import twisted.logger
2725 from twisted.internet.defer import Deferred, succeed
2826 from twisted.python.threadpool import ThreadPool
2927 from twisted.trial import unittest
7977
8078 @around(self)
8179 def setUp(orig):
82 # enable debugging of delayed calls - this means that we get a
83 # traceback when a unit test exits leaving things on the reactor.
84 twisted.internet.base.DelayedCall.debug = True
85
8680 # if we're not starting in the sentinel logcontext, then to be honest
8781 # all future bets are off.
8882 if LoggingContext.current_context() is not LoggingContext.sentinel:
2626 make_deferred_yieldable,
2727 )
2828 from synapse.util.caches import descriptors
29 from synapse.util.caches.descriptors import cached
2930
3031 from tests import unittest
3132
5455 d2 = defer.Deferred()
5556 cache.set("key2", d2, partial(record_callback, 1))
5657
57 # lookup should return the deferreds
58 self.assertIs(cache.get("key1"), d1)
59 self.assertIs(cache.get("key2"), d2)
58 # lookup should return observable deferreds
59 self.assertFalse(cache.get("key1").has_called())
60 self.assertFalse(cache.get("key2").has_called())
6061
6162 # let one of the lookups complete
6263 d2.callback("result2")
64
65 # for now at least, the cache will return real results rather than an
66 # observabledeferred
6367 self.assertEqual(cache.get("key2"), "result2")
6468
6569 # now do the invalidation
144148 r = yield obj.fn(2, 5)
145149 self.assertEqual(r, "chips")
146150 obj.mock.assert_not_called()
151
152 def test_cache_with_sync_exception(self):
153 """If the wrapped function throws synchronously, things should continue to work
154 """
155
156 class Cls(object):
157 @cached()
158 def fn(self, arg1):
159 raise SynapseError(100, "mai spoon iz too big!!1")
160
161 obj = Cls()
162
163 # this should fail immediately
164 d = obj.fn(1)
165 self.failureResultOf(d, SynapseError)
166
167 # ... leaving the cache empty
168 self.assertEqual(len(obj.fn.cache.cache), 0)
169
170 # and a second call should result in a second exception
171 d = obj.fn(1)
172 self.failureResultOf(d, SynapseError)
147173
148174 def test_cache_logcontexts(self):
149175 """Check that logcontexts are set and restored correctly when
158184 def inner_fn():
159185 with PreserveLoggingContext():
160186 yield complete_lookup
161 defer.returnValue(1)
187 return 1
162188
163189 return inner_fn()
164190
168194 c1.name = "c1"
169195 r = yield obj.fn(1)
170196 self.assertEqual(LoggingContext.current_context(), c1)
171 defer.returnValue(r)
197 return r
172198
173199 def check_result(r):
174200 self.assertEqual(r, 1)
221247
222248 self.assertEqual(LoggingContext.current_context(), c1)
223249
250 # the cache should now be empty
251 self.assertEqual(len(obj.fn.cache.cache), 0)
252
224253 obj = Cls()
225254
226255 # set off a deferred which will do a cache lookup
266295 r = yield obj.fn(2, 3)
267296 self.assertEqual(r, "chips")
268297 obj.mock.assert_not_called()
298
299 def test_cache_iterable(self):
300 class Cls(object):
301 def __init__(self):
302 self.mock = mock.Mock()
303
304 @descriptors.cached(iterable=True)
305 def fn(self, arg1, arg2):
306 return self.mock(arg1, arg2)
307
308 obj = Cls()
309
310 obj.mock.return_value = ["spam", "eggs"]
311 r = obj.fn(1, 2)
312 self.assertEqual(r, ["spam", "eggs"])
313 obj.mock.assert_called_once_with(1, 2)
314 obj.mock.reset_mock()
315
316 # a call with different params should call the mock again
317 obj.mock.return_value = ["chips"]
318 r = obj.fn(1, 3)
319 self.assertEqual(r, ["chips"])
320 obj.mock.assert_called_once_with(1, 3)
321 obj.mock.reset_mock()
322
323 # the two values should now be cached
324 self.assertEqual(len(obj.fn.cache.cache), 3)
325
326 r = obj.fn(1, 2)
327 self.assertEqual(r, ["spam", "eggs"])
328 r = obj.fn(1, 3)
329 self.assertEqual(r, ["chips"])
330 obj.mock.assert_not_called()
331
332 def test_cache_iterable_with_sync_exception(self):
333 """If the wrapped function throws synchronously, things should continue to work
334 """
335
336 class Cls(object):
337 @descriptors.cached(iterable=True)
338 def fn(self, arg1):
339 raise SynapseError(100, "mai spoon iz too big!!1")
340
341 obj = Cls()
342
343 # this should fail immediately
344 d = obj.fn(1)
345 self.failureResultOf(d, SynapseError)
346
347 # ... leaving the cache empty
348 self.assertEqual(len(obj.fn.cache.cache), 0)
349
350 # and a second call should result in a second exception
351 d = obj.fn(1)
352 self.failureResultOf(d, SynapseError)
269353
270354
271355 class CachedListDescriptorTestCase(unittest.TestCase):
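The new `*_with_sync_exception` tests assert a key property of `@cached`: a synchronous failure must leave the cache empty so the next call retries rather than replaying the error. A minimal decorator with that behaviour (a sketch only, not Synapse's descriptor machinery):

```python
import functools


def cached(fn):
    """Cache successful results only; failures are never stored, so a
    second call with the same arguments retries the wrapped function."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args in cache:
            return cache[args]
        result = fn(*args)  # if this raises, nothing is cached
        cache[args] = result
        return result

    wrapper.cache = cache
    return wrapper
```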
285369 # we want this to behave like an asynchronous function
286370 yield run_on_reactor()
287371 assert LoggingContext.current_context().request == "c1"
288 defer.returnValue(self.mock(args1, arg2))
372 return self.mock(args1, arg2)
289373
290374 with LoggingContext() as c1:
291375 c1.request = "c1"
333417 def list_fn(self, args1, arg2):
334418 # we want this to behave like an asynchronous function
335419 yield run_on_reactor()
336 defer.returnValue(self.mock(args1, arg2))
420 return self.mock(args1, arg2)
337421
338422 obj = Cls()
339423 invalidate0 = mock.Mock()
125125 "enable_registration": True,
126126 "enable_registration_captcha": False,
127127 "macaroon_secret_key": "not even a little secret",
128 "expire_access_token": False,
129128 "trusted_third_party_id_servers": [],
130129 "room_invite_state_types": [],
131130 "password_providers": [],
360359 if fed:
361360 register_federation_servlets(hs, fed)
362361
363 defer.returnValue(hs)
362 return hs
364363
365364
366365 def register_federation_servlets(hs, resource):
464463 args = [urlparse.unquote(u) for u in matcher.groups()]
465464
466465 (code, response) = yield func(mock_request, *args)
467 defer.returnValue((code, response))
466 return (code, response)
468467 except CodeMessageException as e:
469 defer.returnValue((e.code, cs_error(e.msg, code=e.errcode)))
468 return (e.code, cs_error(e.msg, code=e.errcode))
470469
471470 raise KeyError("No event can handle %s" % path)
472471