Synapse 0.33.2 (2018-08-09)
===========================

No significant changes.


Synapse 0.33.2rc1 (2018-08-07)
==============================

Features
--------

- Add support for the lazy_loaded_members filter as per MSC1227 ([\#2970](https://github.com/matrix-org/synapse/issues/2970))
- Add support for the include_redundant_members filter param as per MSC1227 ([\#3331](https://github.com/matrix-org/synapse/issues/3331))
- Add metrics to track resource usage by background processes ([\#3553](https://github.com/matrix-org/synapse/issues/3553), [\#3556](https://github.com/matrix-org/synapse/issues/3556), [\#3604](https://github.com/matrix-org/synapse/issues/3604), [\#3610](https://github.com/matrix-org/synapse/issues/3610))
- Add `code` label to `synapse_http_server_response_time_seconds` prometheus metric ([\#3554](https://github.com/matrix-org/synapse/issues/3554))
- Add support for client_reader to handle more APIs ([\#3555](https://github.com/matrix-org/synapse/issues/3555), [\#3597](https://github.com/matrix-org/synapse/issues/3597))
- Make the /context API filter and lazy-load aware as per MSC1227 ([\#3567](https://github.com/matrix-org/synapse/issues/3567))
- Add ability to limit the number of monthly active users on the server ([\#3630](https://github.com/matrix-org/synapse/issues/3630))
- When we fail to join a room over federation, pass the error code back to the client. ([\#3639](https://github.com/matrix-org/synapse/issues/3639))
- Add a new /admin/register API for non-interactively creating users. ([\#3415](https://github.com/matrix-org/synapse/issues/3415))

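For illustration, the two MSC1227 filter options above can be combined in a /sync filter along these lines (the parameter spellings `lazy_load_members` and `include_redundant_members` follow the MSC draft; treat the exact shape as illustrative rather than normative):

```python
# Hypothetical /sync filter enabling MSC1227 lazy-loading of room members.
sync_filter = {
    "room": {
        "state": {
            # Only send membership events for users whose events appear
            # in the timeline, rather than the full member list.
            "lazy_load_members": True,
            # Do not re-send membership events the client has already seen.
            "include_redundant_members": False,
        }
    }
}

assert sync_filter["room"]["state"]["lazy_load_members"] is True
```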
Bugfixes
--------

- Make /directory/list API return 404 for room not found instead of 400 ([\#2952](https://github.com/matrix-org/synapse/issues/2952))
- Default inviter_display_name to mxid for email invites ([\#3391](https://github.com/matrix-org/synapse/issues/3391))
- Don't generate TURN credentials if no TURN config options are set ([\#3514](https://github.com/matrix-org/synapse/issues/3514))
- Correctly announce deleted devices over federation ([\#3520](https://github.com/matrix-org/synapse/issues/3520))
- Catch failures saving metrics captured by Measure, and instead log the faulty metrics information for further analysis. ([\#3548](https://github.com/matrix-org/synapse/issues/3548))
- Unicode passwords are now normalised before hashing, preventing the instance where two different devices or browsers might send a different UTF-8 sequence for the password. ([\#3569](https://github.com/matrix-org/synapse/issues/3569))
- Fix potential stack overflow and deadlock under heavy load ([\#3570](https://github.com/matrix-org/synapse/issues/3570))
- Respond with M_NOT_FOUND when profiles are not found locally or over federation. Fixes #3585 ([\#3585](https://github.com/matrix-org/synapse/issues/3585))
- Fix failure to persist events over federation under load ([\#3601](https://github.com/matrix-org/synapse/issues/3601))
- Fix updating of cached remote profiles ([\#3605](https://github.com/matrix-org/synapse/issues/3605))
- Fix 'tuple index out of range' error ([\#3607](https://github.com/matrix-org/synapse/issues/3607))
- Only import secrets when available (fix for py < 3.6) ([\#3626](https://github.com/matrix-org/synapse/issues/3626))

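The password-normalisation fix above can be illustrated with a small sketch. Synapse itself hashes with bcrypt; the stdlib PBKDF2 below is used only to keep the example dependency-free, and NFKC is the assumed normalisation form:

```python
import hashlib
import unicodedata

def hash_password(password: str, salt: bytes) -> bytes:
    # Normalise first, so that two devices sending different UTF-8
    # sequences for the same visible password produce the same hash.
    normalised = unicodedata.normalize("NFKC", password)
    return hashlib.pbkdf2_hmac("sha256", normalised.encode("utf-8"), salt, 100_000)

# U+00E9 (precomposed "é") vs "e" + U+0301 (combining acute accent):
# visually identical, byte-wise different, now hash the same.
assert hash_password("caf\u00e9", b"salt") == hash_password("cafe\u0301", b"salt")
```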
Internal Changes
----------------

- Remove redundant checks on who_forgot_in_room ([\#3350](https://github.com/matrix-org/synapse/issues/3350))
- Remove unnecessary event re-signing hacks ([\#3367](https://github.com/matrix-org/synapse/issues/3367))
- Rewrite cache list decorator ([\#3384](https://github.com/matrix-org/synapse/issues/3384))
- Move v1-only REST APIs into their own module. ([\#3460](https://github.com/matrix-org/synapse/issues/3460))
- Replace more instances of Python 2-only iteritems and itervalues uses. ([\#3562](https://github.com/matrix-org/synapse/issues/3562))
- Refactor EventContext to accept state during init ([\#3577](https://github.com/matrix-org/synapse/issues/3577))
- Improve Dockerfile and docker-compose instructions ([\#3543](https://github.com/matrix-org/synapse/issues/3543))
- Release notes are now in the Markdown format. ([\#3552](https://github.com/matrix-org/synapse/issues/3552))
- Add config for pep8 ([\#3559](https://github.com/matrix-org/synapse/issues/3559))
- Merge Linearizer and Limiter ([\#3571](https://github.com/matrix-org/synapse/issues/3571), [\#3572](https://github.com/matrix-org/synapse/issues/3572))
- Lazily load state on master process when using workers to reduce DB consumption ([\#3579](https://github.com/matrix-org/synapse/issues/3579), [\#3581](https://github.com/matrix-org/synapse/issues/3581), [\#3582](https://github.com/matrix-org/synapse/issues/3582), [\#3584](https://github.com/matrix-org/synapse/issues/3584))
- Fixes and optimisations for resolve_state_groups ([\#3586](https://github.com/matrix-org/synapse/issues/3586))
- Improve logging for exceptions when handling PDUs ([\#3587](https://github.com/matrix-org/synapse/issues/3587))
- Add some measure blocks to persist_events ([\#3590](https://github.com/matrix-org/synapse/issues/3590))
- Fix some random logcontext leaks. ([\#3591](https://github.com/matrix-org/synapse/issues/3591), [\#3606](https://github.com/matrix-org/synapse/issues/3606))
- Speed up calculating state deltas in persist_event loop ([\#3592](https://github.com/matrix-org/synapse/issues/3592))
- Attempt to reduce amount of state pulled out of DB during persist_events ([\#3595](https://github.com/matrix-org/synapse/issues/3595))
- Fix a documentation typo in on_make_leave_request ([\#3609](https://github.com/matrix-org/synapse/issues/3609))
- Make EventStore inherit from EventFederationStore ([\#3612](https://github.com/matrix-org/synapse/issues/3612))
- Remove some redundant joins on event_edges.room_id ([\#3613](https://github.com/matrix-org/synapse/issues/3613))
- Stop populating events.content ([\#3614](https://github.com/matrix-org/synapse/issues/3614))
- Update the /send_leave path registration to use event_id rather than a transaction ID. ([\#3616](https://github.com/matrix-org/synapse/issues/3616))
- Refactor FederationHandler to move DB writes into separate functions ([\#3621](https://github.com/matrix-org/synapse/issues/3621))
- Remove unused field "pdu_failures" from transactions. ([\#3628](https://github.com/matrix-org/synapse/issues/3628))
- Rename replication_layer to federation_client ([\#3634](https://github.com/matrix-org/synapse/issues/3634))
- Factor out exception handling in federation_client ([\#3638](https://github.com/matrix-org/synapse/issues/3638))
- Refactor location of docker build script. ([\#3644](https://github.com/matrix-org/synapse/issues/3644))
- Update CONTRIBUTING to mention newsfragments. ([\#3645](https://github.com/matrix-org/synapse/issues/3645))

Synapse 0.33.1 (2018-08-02)
===========================

SECURITY FIXES
--------------

- Fix a potential issue where servers could request events for rooms they have not joined. ([\#3641](https://github.com/matrix-org/synapse/issues/3641))
- Fix a potential issue where users could see events in private rooms before they joined. ([\#3642](https://github.com/matrix-org/synapse/issues/3642))

Synapse 0.33.0 (2018-07-19)
===========================

Bugfixes
--------

- Disable a noisy warning about logcontexts. ([\#3561](https://github.com/matrix-org/synapse/issues/3561))

Synapse 0.33.0rc1 (2018-07-18)
==============================

Features
--------

- Enforce the specified API for report\_event. ([\#3316](https://github.com/matrix-org/synapse/issues/3316))
- Include CPU time from database threads in request/block metrics. ([\#3496](https://github.com/matrix-org/synapse/issues/3496), [\#3501](https://github.com/matrix-org/synapse/issues/3501))
- Add CPU metrics for \_fetch\_event\_list. ([\#3497](https://github.com/matrix-org/synapse/issues/3497))
- Optimisation to make handling incoming federation requests more efficient. ([\#3541](https://github.com/matrix-org/synapse/issues/3541))

Bugfixes
--------

- Fix a significant performance regression in /sync. ([\#3505](https://github.com/matrix-org/synapse/issues/3505), [\#3521](https://github.com/matrix-org/synapse/issues/3521), [\#3530](https://github.com/matrix-org/synapse/issues/3530), [\#3544](https://github.com/matrix-org/synapse/issues/3544))
- Use more portable syntax in our use of the attrs package, widening the supported versions. ([\#3498](https://github.com/matrix-org/synapse/issues/3498))
- Fix queued federation requests being processed in the wrong order. ([\#3533](https://github.com/matrix-org/synapse/issues/3533))
- Ensure that erasure requests are correctly honoured for publicly accessible rooms when accessed over federation. ([\#3546](https://github.com/matrix-org/synapse/issues/3546))

Misc
----

- Refactoring to improve testability. ([\#3351](https://github.com/matrix-org/synapse/issues/3351), [\#3499](https://github.com/matrix-org/synapse/issues/3499))
- Use `isort` to sort imports. ([\#3463](https://github.com/matrix-org/synapse/issues/3463), [\#3464](https://github.com/matrix-org/synapse/issues/3464), [\#3540](https://github.com/matrix-org/synapse/issues/3540))
- Use parse and asserts from http.servlet. ([\#3534](https://github.com/matrix-org/synapse/issues/3534), [\#3535](https://github.com/matrix-org/synapse/issues/3535)).

Synapse 0.32.2 (2018-07-07)
===========================

Bugfixes
--------

- Amend the Python dependencies to depend on attrs from PyPI, not attr ([\#3492](https://github.com/matrix-org/synapse/issues/3492))

Synapse 0.32.1 (2018-07-06)
===========================

Bugfixes
--------

- Add explicit dependency on netaddr ([\#3488](https://github.com/matrix-org/synapse/issues/3488))

Changes in synapse v0.32.0 (2018-07-06)
=======================================

No changes since 0.32.0rc1

Synapse 0.32.0rc1 (2018-07-05)
==============================

Features
--------

- Add blacklist & whitelist of servers allowed to send events to a room via `m.room.server_acl` event.
- Cache factor override system for specific caches ([\#3334](https://github.com/matrix-org/synapse/issues/3334))
- Add metrics to track appservice transactions ([\#3344](https://github.com/matrix-org/synapse/issues/3344))
- Try to log more helpful info when a sig verification fails ([\#3372](https://github.com/matrix-org/synapse/issues/3372))
- Synapse now uses the best performing JSON encoder/decoder according to your runtime (simplejson on CPython, stdlib json on PyPy). ([\#3462](https://github.com/matrix-org/synapse/issues/3462))
- Add optional ip\_range\_whitelist param to AS registration files to lock AS IP access ([\#3465](https://github.com/matrix-org/synapse/issues/3465))
- Reject invalid server names in federation requests ([\#3480](https://github.com/matrix-org/synapse/issues/3480))
- Reject invalid server names in homeserver.yaml ([\#3483](https://github.com/matrix-org/synapse/issues/3483))

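As an illustration of the `m.room.server_acl` feature above, here is a hypothetical event content together with a sketch of the matching logic (field names and glob semantics are assumptions based on the feature description, not a normative reference):

```python
import fnmatch

# Hypothetical content of an m.room.server_acl state event.
server_acl_content = {
    "allow": ["*"],                # glob patterns of servers allowed to send events
    "deny": ["evil.example.com"],  # servers explicitly blacklisted
}

def server_allowed(server_name: str, acl: dict) -> bool:
    # Sketch of the check: an explicit deny wins over any allow.
    if any(fnmatch.fnmatchcase(server_name, pat) for pat in acl.get("deny", [])):
        return False
    return any(fnmatch.fnmatchcase(server_name, pat) for pat in acl.get("allow", []))

assert server_allowed("matrix.org", server_acl_content)
assert not server_allowed("evil.example.com", server_acl_content)
```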
Bugfixes
--------

- Strip access\_token from outgoing requests ([\#3327](https://github.com/matrix-org/synapse/issues/3327))
- Redact AS tokens in logs ([\#3349](https://github.com/matrix-org/synapse/issues/3349))
- Fix federation backfill from SQLite servers ([\#3355](https://github.com/matrix-org/synapse/issues/3355))
- Fix event-purge-by-ts admin API ([\#3363](https://github.com/matrix-org/synapse/issues/3363))
- Fix event filtering in get\_missing\_events handler ([\#3371](https://github.com/matrix-org/synapse/issues/3371))
- Synapse is now stricter regarding accepting events which it cannot retrieve the prev\_events for. ([\#3456](https://github.com/matrix-org/synapse/issues/3456))
- Fix bug where synapse would explode when receiving unicode in HTTP User-Agent header ([\#3470](https://github.com/matrix-org/synapse/issues/3470))
- Invalidate cache on correct thread to avoid race ([\#3473](https://github.com/matrix-org/synapse/issues/3473))

Improved Documentation
----------------------

- `doc/postgres.rst`: fix display of the last command block. Thanks to @ArchangeGabriel! ([\#3340](https://github.com/matrix-org/synapse/issues/3340))

Deprecations and Removals
-------------------------

- Remove was\_forgotten\_at ([\#3324](https://github.com/matrix-org/synapse/issues/3324))

Misc
----

- [\#3332](https://github.com/matrix-org/synapse/issues/3332), [\#3341](https://github.com/matrix-org/synapse/issues/3341), [\#3347](https://github.com/matrix-org/synapse/issues/3347), [\#3348](https://github.com/matrix-org/synapse/issues/3348), [\#3356](https://github.com/matrix-org/synapse/issues/3356), [\#3385](https://github.com/matrix-org/synapse/issues/3385), [\#3446](https://github.com/matrix-org/synapse/issues/3446), [\#3447](https://github.com/matrix-org/synapse/issues/3447), [\#3467](https://github.com/matrix-org/synapse/issues/3467), [\#3474](https://github.com/matrix-org/synapse/issues/3474)

Changes in synapse v0.31.2 (2018-06-14)
=======================================

SECURITY UPDATE: Prevent unauthorised users from setting state events in a room when there is no `m.room.power_levels` event in force in the room. (PR #3397)

Discussion around the Matrix Spec change proposal for this change can be followed at <https://github.com/matrix-org/matrix-doc/issues/1304>.

Changes in synapse v0.31.1 (2018-06-08)
=======================================

v0.31.1 fixes a security bug in the `get_missing_events` federation API where event visibility rules were not applied correctly.

We are not aware of it being actively exploited, but please upgrade as soon as possible.

Bug Fixes:

- Fix event filtering in get\_missing\_events handler (PR #3371)

Changes in synapse v0.31.0 (2018-06-06)
=======================================

The most notable change from v0.30.0 is the switch to the Python prometheus library to improve system stats reporting. WARNING: this changes a number of prometheus metrics in a backwards-incompatible manner. For more details, see [docs/metrics-howto.rst](docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310).

Bug Fixes:

- Fix metric documentation tables (PR #3341)
- Fix LaterGauge error handling (694968f)
- Fix replication metrics (b7e7fd2)

Changes in synapse v0.31.0-rc1 (2018-06-04)
===========================================

Features:

- Switch to the Python Prometheus library (PR #3256, #3274)
- Let users leave the server notice room after joining (PR #3287)

Changes:

- Daily user type phone-home stats (PR #3264)
- Use iter\* methods for \_filter\_events\_for\_server (PR #3267)
- Docs on consent bits (PR #3268)
- Remove users from user directory on deactivate (PR #3277)
- Avoid sending consent notice to guest users (PR #3288)
- Disable CPUMetrics if no /proc/self/stat (PR #3299)
- Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
- Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
- Reduce stuck read-receipts: ignore depth when updating (PR #3318)
- Put python's logs into Trial when running unit tests (PR #3319)

Changes, Python 3 migration:

- Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
- Replace some iteritems with six (PR #3244) Thanks to @NotAFile!
- Add batch\_iter to utils (PR #3245) Thanks to @NotAFile!
- Use repr, not str (PR #3246) Thanks to @NotAFile!
- Misc Python 3 fixes (PR #3247) Thanks to @NotAFile!
- Py3 storage/\_base.py (PR #3278) Thanks to @NotAFile!
- More six iteritems (PR #3279) Thanks to @NotAFile!
- More misc. py3 fixes (PR #3280) Thanks to @NotAFile!
- Remaining isinstance fixes (PR #3281) Thanks to @NotAFile!
- Py3-ize state.py (PR #3283) Thanks to @NotAFile!
- Extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
- Use memoryview in py3 (PR #3303) Thanks to @NotAFile!

Bugs:

- Fix federation backfill bugs (PR #3261)
- federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!

Changes in synapse v0.30.0 (2018-05-24)
=======================================

'Server Notices' are a new feature introduced in Synapse 0.30. They provide a channel whereby server administrators can send messages to users on the server.

They are used as part of communication of the server policies (see `docs/consent_tracking.md`); however, the intention is that they may also find a use for features such as "Message of the day".

This feature is specific to Synapse, but uses standard Matrix communication mechanisms, so should work with any Matrix client. For more details see `docs/server_notices.md`.

Further Server Notices/Consent Tracking Support:

- Allow overriding the server\_notices user's avatar (PR #3273)
- Use the localpart in the consent uri (PR #3272)
- Support for putting %(consent\_uri)s in messages (PR #3271)
- Block attempts to send server notices to remote users (PR #3270)
- Docs on consent bits (PR #3268)

Changes in synapse v0.30.0-rc1 (2018-05-23)
===========================================

Server Notices/Consent Tracking Support:

- ConsentResource to gather policy consent from users (PR #3213)
- Move RoomCreationHandler out of synapse.handlers.Handlers (PR #3225)
- Infrastructure for a server notices room (PR #3232)
- Send users a server notice about consent (PR #3236)
- Reject attempts to send events before privacy consent is given (PR #3257)
- Add a 'has\_consented' template var to consent forms (PR #3262)
- Fix dependency on jinja2 (PR #3263)

Features:

- Cohort analytics (PR #3163, #3241, #3251)
- Add lxml to docker image for web previews (PR #3239) Thanks to @ptman!
- Add in-flight request metrics (PR #3252)

Changes:

- Remove unused update\_external\_syncs (PR #3233)
- Use stream rather than depth ordering for push actions (PR #3212)
- Make purge\_history operate on tokens (PR #3221)
- Don't support limitless pagination (PR #3265)

Bug Fixes:

- Fix logcontext resource usage tracking (PR #3258)
- Fix error in handling receipts (PR #3235)
- Stop the transaction cache caching failures (PR #3255)

Changes in synapse v0.29.1 (2018-05-17)
=======================================

Changes:

- Update docker documentation (PR #3222)

Changes in synapse v0.29.0 (2018-05-16)
=======================================

No changes since v0.29.0-rc1

Changes in synapse v0.29.0-rc1 (2018-05-14)
===========================================

Notable changes include a Dockerfile for running Synapse (thanks to @kaiyou!) and the closing of a spec compliance bug in the Client-Server API, along with further preparation for the Python 3 migration.

Potentially breaking change:

- Make Client-Server API return 401 for invalid token (PR #3161).

This changes the Client-Server spec to return a 401 error code instead of 403 when the access token is unrecognised. This is the behaviour required by the specification, but some clients may be relying on the old, incorrect behaviour.

Thanks to @NotAFile for fixing this.

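A minimal sketch of the behaviour change (the errcode `M_UNKNOWN_TOKEN` comes from the Matrix client-server spec; the handler itself is purely illustrative):

```python
def check_access_token(known_tokens: set, access_token: str):
    # Per the spec, an unrecognised token is 401 (unauthenticated),
    # not 403 (authenticated but forbidden).
    if access_token not in known_tokens:
        return 401, {"errcode": "M_UNKNOWN_TOKEN", "error": "Unrecognised access token"}
    return 200, {}

assert check_access_token({"syt_abc"}, "bogus")[0] == 401
assert check_access_token({"syt_abc"}, "syt_abc")[0] == 200
```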
Features:

- Add a Dockerfile for synapse (PR #2846) Thanks to @kaiyou!

Changes - General:

- nuke-room-from-db.sh: added postgresql option and help (PR #2337) Thanks to @rubo77!
- Part user from rooms on account deactivation (PR #3201)
- Make 'unexpected logging context' into warnings (PR #3007)
- Set Server header in SynapseRequest (PR #3208)
- Remove duplicates from groups tables (PR #3129)
- Improve exception handling for background processes (PR #3138)
- Add missing consumeErrors to improve exception handling (PR #3139)
- Reraise exceptions more carefully (PR #3142)
- Remove redundant call to preserve\_fn (PR #3143)
- Trap exceptions thrown within run\_in\_background (PR #3144)

Changes - Refactors:

- Refactor /context to reuse pagination storage functions (PR #3193)
- Refactor recent events func to use pagination func (PR #3195)
- Refactor pagination DB API to return concrete type (PR #3196)
- Refactor get\_recent\_events\_for\_room return type (PR #3198)
- Refactor sync APIs to reuse pagination API (PR #3199)
- Remove unused code path from member change DB func (PR #3200)
- Refactor request handling wrappers (PR #3203)
- transaction\_id, destination defined twice (PR #3209) Thanks to @damir-manapov!
- Refactor event storage to prepare for changes in state calculations (PR #3141)
- Set Server header in SynapseRequest (PR #3208)
- Use deferred.addTimeout instead of time\_bound\_deferred (PR #3127, #3178)
- Use run\_in\_background in preference to preserve\_fn (PR #3140)

Changes - Python 3 migration:

- Construct HMAC as bytes on py3 (PR #3156) Thanks to @NotAFile!
- Run config tests on py3 (PR #3159) Thanks to @NotAFile!
- Open certificate files as bytes (PR #3084) Thanks to @NotAFile!
- Open config file in non-bytes mode (PR #3085) Thanks to @NotAFile!
- Make event properties raise AttributeError instead (PR #3102) Thanks to @NotAFile!
- Use six.moves.urlparse (PR #3108) Thanks to @NotAFile!
- Add py3 tests to tox with folders that work (PR #3145) Thanks to @NotAFile!
- Don't yield in list comprehensions (PR #3150) Thanks to @NotAFile!
- Move more xrange to six (PR #3151) Thanks to @NotAFile!
- Make imports local (PR #3152) Thanks to @NotAFile!
- Move httplib import to six (PR #3153) Thanks to @NotAFile!
- Replace stringIO imports with six (PR #3154, #3168) Thanks to @NotAFile!
- More bytes strings (PR #3155) Thanks to @NotAFile!

Bug Fixes:

- Synapse fails to start under Twisted >= 18.4 (PR #3157)
- Fix a class of logcontext leaks (PR #3170)
- Fix a couple of logcontext leaks in unit tests (PR #3172)
- Fix logcontext leak in media repo (PR #3174)
- Escape label values in prometheus metrics (PR #3175, #3186)
- Fix 'Unhandled Error' logs with Twisted 18.4 (PR #3182) Thanks to @Half-Shot!
- Fix logcontext leaks in rate limiter (PR #3183)
- notifications: Convert next\_token to string according to the spec (PR #3190) Thanks to @mujx!
- nuke-room-from-db.sh: fix deletion from search table (PR #3194) Thanks to @rubo77!
- Add guard for None on purge\_history api (PR #3160) Thanks to @krombel!

Changes in synapse v0.28.1 (2018-05-01)
=======================================

SECURITY UPDATE

- Clamp the allowed values of event depth received over federation to be [0, 2^63 - 1]. This mitigates an attack where malicious events injected with depth = 2^63 - 1 render rooms unusable. Depth is used to determine the cosmetic ordering of events within a room, and so the ordering of events in such a room will default to using stream\_ordering rather than depth (topological\_ordering).

  This is a temporary solution to mitigate abuse in the wild, whilst a long term solution is being implemented to improve how the depth parameter is used.

  Full details at <https://docs.google.com/document/d/1I3fi2S-XnpO45qrpCsowZv8P8dHcNZ4fsBsbOW7KABI>

- Pin Twisted to <18.4 until we stop using the private \_OpenSSLECCurve API.

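The clamp described in the first item can be sketched as follows (purely illustrative; the real check happens where events received over federation are validated):

```python
MAX_DEPTH = 2**63 - 1  # upper bound now enforced on event depth

def clamp_depth(depth: int) -> int:
    # Clamp depth received over federation into [0, 2**63 - 1], so a
    # maliciously huge or negative depth cannot break event ordering.
    return max(0, min(depth, MAX_DEPTH))

assert clamp_depth(-5) == 0
assert clamp_depth(2**64) == MAX_DEPTH
assert clamp_depth(42) == 42
```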
Changes in synapse v0.28.0 (2018-04-26)
=======================================

Bug Fixes:

- Fix quarantine media admin API and search reindex (PR #3130)
- Fix media admin APIs (PR #3134)

Changes in synapse v0.28.0-rc1 (2018-04-24)
===========================================

Minor performance improvements to federation sending, plus bug fixes.

(Note: this release does not include the delta state resolution implementation discussed in Matrix Live.)

Features:

- Add metrics for event processing lag (PR #3090)
- Add metrics for ResponseCache (PR #3092)

Changes:

- Synapse on PyPy (PR #2760) Thanks to @Valodim!
- Move handling of auto\_join\_rooms to RegisterHandler (PR #2996) Thanks to @krombel!
- Improve handling of SRV records for federation connections (PR #3016) Thanks to @silkeh!
- Document the behaviour of ResponseCache (PR #3059)
- Preparation for py3 (PR #3061, #3073, #3074, #3075, #3103, #3104, #3106, #3107, #3109, #3110) Thanks to @NotAFile!
- Update prometheus dashboard to use new metric names (PR #3069) Thanks to @krombel!
- Use python3-compatible prints (PR #3074) Thanks to @NotAFile!
- Send federation events concurrently (PR #3078)
- Limit concurrent event sends for a room (PR #3079)
- Improve R30 stat definition (PR #3086)
- Send events to ASes concurrently (PR #3088)
- Refactor ResponseCache usage (PR #3093)
- Clarify that SRV may not point to a CNAME (PR #3100) Thanks to @silkeh!
- Use str(e) instead of e.message (PR #3103) Thanks to @NotAFile!
- Use six.itervalues in some places (PR #3106) Thanks to @NotAFile!
- Refactor store.have\_events (PR #3117)

Bug Fixes:

- Return 401 for invalid access\_token on logout (PR #2938) Thanks to @dklug!
- Return a 404 rather than a 500 on rejoining empty rooms (PR #3080)
- Fix federation\_domain\_whitelist (PR #3099)
- Avoid creating events with huge numbers of prev\_events (PR #3113)
- Reject events which have lots of prev\_events (PR #3118)

Changes in synapse v0.27.4 (2018-04-13)
=======================================

Changes:

- Update canonicaljson dependency (\#3095)

Changes in synapse v0.27.3 (2018-04-11)
=======================================

Bug fixes:

- URL quote path segments over federation (\#3082)

Changes in synapse v0.27.3-rc2 (2018-04-09)
===========================================

v0.27.3-rc1 used a stale version of the develop branch, so the changelog overstates the functionality. v0.27.3-rc2 is up to date; rc1 should be ignored.

Changes in synapse v0.27.3-rc1 (2018-04-09)
===========================================

Notable changes include API support for joinability of groups, along with new metrics and phone-home stats. The phone-home stats give better visibility of system usage, so we can tweak Synapse to work better for all users rather than just from our own experience with matrix.org. We also now record the 'r30' stat, which is the measure we use to track overall growth of the Matrix ecosystem. It counts the number of native 30-day retained users, defined as users:

- who created their account more than 30 days ago;
- who were last seen at most 30 days ago;
- and whose account creation and last\_seen are more than 30 days apart.

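The r30 criteria above can be sketched as a small function (the data shape and the exact comparison boundaries are assumptions for illustration):

```python
from datetime import datetime, timedelta

def count_r30(users, now):
    # A user counts as "native 30-day retained" when: the account is more
    # than 30 days old, the user was seen within the last 30 days, and
    # more than 30 days separate creation from last activity.
    month = timedelta(days=30)
    return sum(
        1
        for created, last_seen in users
        if created < now - month
        and last_seen >= now - month
        and last_seen - created > month
    )

now = datetime(2018, 4, 9)
users = [
    (datetime(2018, 1, 1), datetime(2018, 4, 1)),  # retained
    (datetime(2018, 4, 1), datetime(2018, 4, 8)),  # account too new
    (datetime(2018, 1, 1), datetime(2018, 2, 1)),  # lapsed
]
assert count_r30(users, now) == 1
```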
Features:

- Add joinability for groups (PR #3045)
- Implement group join API (PR #3046)
- Add counter metrics for calculating state delta (PR #3033)
- R30 stats (PR #3041)
- Measure time it takes to calculate state group ID (PR #3043)
- Add basic performance statistics to phone home (PR #3044)
- Add response size metrics (PR #3071)
- Phone home cache size configurations (PR #3063)

Changes:

- Add a blurb explaining the main synapse worker (PR #2886) Thanks to @turt2live!
- Replace old style error catching with 'as' keyword (PR #3000) Thanks to @NotAFile!
- Use .iter\* to avoid copies in StateHandler (PR #3006)
- Linearize calls to \_generate\_user\_id (PR #3029)
- Remove last usage of ujson (PR #3030)
- Use simplejson throughout (PR #3048)
- Use static JSONEncoders (PR #3049)
- Remove uses of events.content (PR #3060)
- Improve database cache performance (PR #3068)

Bug fixes:

- Add room\_id to the response of rooms/{roomId}/join (PR #2986) Thanks to @jplatte!
- Fix replication after switch to simplejson (PR #3015)
- 404 correctly on missing paths via NoResource (PR #3022)
- Fix error when claiming e2e keys from offline servers (PR #3034)
- Fix tests/storage/test\_user\_directory.py (PR #3042)
- Use PUT instead of POST for federating groups/m.join\_policy (PR #3070) Thanks to @krombel!
- postgres port script: fix state\_groups\_pkey error (PR #3072)

Changes in synapse v0.27.2 (2018-03-26)
=======================================

Bug fixes:

- Fix bug which broke TCP replication between workers (PR #3015)

Changes in synapse v0.27.1 (2018-03-26)
=======================================

Meta-release, as v0.27.0 temporarily pointed to the wrong commit.

Changes in synapse v0.27.0 (2018-03-26)
=======================================

No changes since v0.27.0-rc2

Changes in synapse v0.27.0-rc2 (2018-03-19)
===========================================

Pulls in v0.26.1

Bug fixes:

- Fix bug introduced in v0.27.0-rc1 that caused much increased memory usage in the state cache (PR #3005)

Changes in synapse v0.26.1 (2018-03-15)
=======================================

Bug fixes:

- Fix bug where an invalid event caused the server to stop functioning correctly, due to parsing and serialising bugs in the ujson library (PR #3008)

Changes in synapse v0.27.0-rc1 (2018-03-14)
===========================================

The common case for running Synapse is not to run separate workers, but for those that do, be aware that synctl no longer starts the main synapse process when using the `-a` option with workers. A new worker file should be added with `worker_app: synapse.app.homeserver`.

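For example, a worker file along these lines (the filename is hypothetical; the key line is `worker_app`) keeps the main process under synctl's control:

```yaml
# workers/homeserver.yaml — hypothetical filename
worker_app: synapse.app.homeserver
```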
This release also begins the process of renaming a number of the metrics reported to prometheus. See [docs/metrics-howto.rst](docs/metrics-howto.rst#block-and-response-metrics-renamed-for-0-27-0). Note that the v0.28.0 release will remove the deprecated metric names.

Features:

- Add ability for ASes to override message send time (PR #2754)
- Add support for custom storage providers for media repository (PR #2867, #2777, #2783, #2789, #2791, #2804, #2812, #2814, #2857, #2868, #2767)
- Add purge API features, see [docs/admin\_api/purge\_history\_api.rst](docs/admin_api/purge_history_api.rst) for full details (PR #2858, #2867, #2882, #2946, #2962, #2943)
- Add support for whitelisting 3PIDs that users can register. (PR #2813)
- Add `/room/{id}/event/{id}` API (PR #2766)
- Add an admin API to get all the media in a room (PR #2818) Thanks to @turt2live!
- Add `federation_domain_whitelist` option (PR #2820, #2821)

Changes:

- Continue to factor out processing from main process and into worker processes. See updated [docs/workers.rst](docs/workers.rst) (PR #2892 - \#2904, #2913, #2920 - \#2926, #2947, #2847, #2854, #2872, #2873, #2874, #2928, #2929, #2934, #2856, #2976 - \#2984, #2987 - \#2989, #2991 - \#2993, #2995, #2784)
- Ensure state cache is used when persisting events (PR #2864, #2871, #2802, #2835, #2836, #2841, #2842, #2849)
- Change the default config to bind on both IPv4 and IPv6 on all platforms (PR #2435) Thanks to @silkeh!
- No longer require a specific version of saml2 (PR #2695) Thanks to @okurz!
- Remove `verbosity`/`log_file` from generated config (PR #2755)
- Add and improve metrics and logging (PR #2770, #2778, #2785, #2786, #2787, #2793, #2794, #2795, #2809, #2810, #2833, #2834, #2844, #2965, #2927, #2975, #2790, #2796, #2838)
- When using synctl with workers, don't start the main synapse automatically (PR #2774)
- Minor performance improvements (PR #2773, #2792)
- Use a connection pool for non-federation outbound connections (PR #2817)
- Make it possible to run unit tests against postgres (PR #2829)
- Update pynacl dependency to 1.2.1 or higher (PR #2888) Thanks to @bachp!
- Remove ability for AS users to call /events and /sync (PR #2948)
- Use bcrypt.checkpw (PR #2949) Thanks to @krombel!

Bug fixes:

- Fix broken `ldap_config` config option (PR #2683) Thanks to @seckrv!
- Fix error message when user is not allowed to unban (PR #2761) Thanks to @turt2live!
576 - Fix publicised groups GET API (singular) over federation (PR #2772)
577 - Fix user directory when using `user_directory_search_all_users` config option (PR #2803, #2831)
578 - Fix error on `/publicRooms` when no rooms exist (PR #2827)
579 - Fix bug in quarantine\_media (PR #2837)
580 - Fix url\_previews when no Content-Type is returned from URL (PR #2845)
581 - Fix rare race in sync API when joining room (PR #2944)
582 - Fix slow event search, switch back from GIST to GIN indexes (PR #2769, #2848)
583
584 Changes in synapse v0.26.0 (2018-01-05)
585 =======================================
586
587 No changes since v0.26.0-rc1
588
589 Changes in synapse v0.26.0-rc1 (2017-12-13)
590 ===========================================
591
592 Features:
593
594 - Add ability for ASes to publicise groups for their users (PR #2686)
595 - Add all local users to the user\_directory and optionally search them (PR #2723)
596 - Add support for custom login types for validating users (PR #2729)
597
598 Changes:
599
600 - Update example Prometheus config to new format (PR #2648) Thanks to @krombel!
601 - Rename redact\_content option to include\_content in Push API (PR #2650)
602 - Declare support for r0.3.0 (PR #2677)
603 - Improve upserts (PR #2684, #2688, #2689, #2713)
604 - Improve documentation of workers (PR #2700)
605 - Improve tracebacks on exceptions (PR #2705)
606 - Allow guest access to group APIs for reading (PR #2715)
607 - Support for posting content in federation\_client script (PR #2716)
608 - Delete devices and pushers on logouts etc (PR #2722)
609
610 Bug fixes:
611
612 - Fix database port script (PR #2673)
613 - Fix internal server error on login with ldap\_auth\_provider (PR #2678) Thanks to @jkolo!
614 - Fix error on sqlite 3.7 (PR #2697)
615 - Fix OPTIONS on preview\_url (PR #2707)
616 - Fix error handling on dns lookup (PR #2711)
617 - Fix wrong avatars when inviting multiple users when creating room (PR #2717)
618 - Fix 500 when joining matrix-dev (PR #2719)
619
620 Changes in synapse v0.25.1 (2017-11-17)
621 =======================================
622
623 Bug fixes:
624
625 - Fix login with LDAP and other password provider modules (PR #2678). Thanks to @jkolo!
626
627 Changes in synapse v0.25.0 (2017-11-15)
628 =======================================
629
630 Bug fixes:
631
632 - Fix port script (PR #2673)
633
634 Changes in synapse v0.25.0-rc1 (2017-11-14)
635 ===========================================
636
637 Features:
638
639 - Add is\_public to groups table to allow for private groups (PR #2582)
640 - Add a route for determining who you are (PR #2668) Thanks to @turt2live!
641 - Add more features to the password providers (PR #2608, #2610, #2620, #2622, #2623, #2624, #2626, #2628, #2629)
642 - Add a hook for custom rest endpoints (PR #2627)
643 - Add API to update group room visibility (PR #2651)
644
645 Changes:
646
647 - Ignore \<noscript\> tags when generating URL preview descriptions (PR #2576) Thanks to @maximevaillancourt!
648 - Register some /unstable endpoints in /r0 as well (PR #2579) Thanks to @krombel!
649 - Support /keys/upload on /r0 as well as /unstable (PR #2585)
650 - Front-end proxy: pass through auth header (PR #2586)
651 - Allow ASes to deactivate their own users (PR #2589)
652 - Remove refresh tokens (PR #2613)
653 - Automatically set default displayname on register (PR #2617)
654 - Log login requests (PR #2618)
655 - Always return is\_public in the /groups/:group\_id/rooms API (PR #2630)
656 - Avoid no-op media deletes (PR #2637) Thanks to @spantaleev!
657 - Fix various embarrassing typos around user\_directory and add some doc. (PR #2643)
658 - Return whether a user is an admin within a group (PR #2647)
659 - Namespace visibility options for groups (PR #2657)
660 - Downcase UserIDs on registration (PR #2662)
661 - Cache failures when fetching URL previews (PR #2669)
662
663 Bug fixes:
664
665 - Fix port script (PR #2577)
666 - Fix error when running synapse with no logfile (PR #2581)
667 - Fix UI auth when deleting devices (PR #2591)
668 - Fix typo when checking if user is invited to group (PR #2599)
669 - Fix the port script to drop NUL values in all tables (PR #2611)
670 - Fix appservices being backlogged and not receiving new events due to a bug in notify\_interested\_services (PR #2631) Thanks to @xyzz!
671 - Fix updating rooms avatar/display name when modified by admin (PR #2636) Thanks to @farialima!
672 - Fix bug in state group storage (PR #2649)
673 - Fix 500 on invalid utf-8 in request (PR #2663)
674
675 Changes in synapse v0.24.1 (2017-10-24)
676 =======================================
677
678 Bug fixes:
679
680 - Fix updating group profiles over federation (PR #2567)
681
682 Changes in synapse v0.24.0 (2017-10-23)
683 =======================================
684
685 No changes since v0.24.0-rc1
686
687 Changes in synapse v0.24.0-rc1 (2017-10-19)
688 ===========================================
689
690 Features:
691
692 - Add Group Server (PR #2352, #2363, #2374, #2377, #2378, #2382, #2410, #2426, #2430, #2454, #2471, #2472, #2544)
693 - Add support for channel notifications (PR #2501)
694 - Add basic implementation of backup media store (PR #2538)
695 - Add config option to auto-join new users to rooms (PR #2545)
696
697 Changes:
698
699 - Make the spam checker a module (PR #2474)
700 - Delete expired url cache data (PR #2478)
701 - Ignore incoming events for rooms that we have left (PR #2490)
702 - Allow spam checker to reject invites too (PR #2492)
703 - Add room creation checks to spam checker (PR #2495)
704 - Spam checking: add the invitee to user\_may\_invite (PR #2502)
705 - Process events from federation for different rooms in parallel (PR #2520)
706 - Allow error strings from spam checker (PR #2531)
707 - Improve error handling for missing files in config (PR #2551)
708
709 Bug fixes:
710
711 - Fix handling SERVFAILs when doing AAAA lookups for federation (PR #2477)
712 - Fix incompatibility with newer versions of ujson (PR #2483) Thanks to @jeremycline!
713 - Fix notification keywords that start/end with non-word chars (PR #2500)
714 - Fix stack overflow and logcontexts from linearizer (PR #2532)
715 - Fix 500 error when fields missing from power\_levels event (PR #2552)
716 - Fix 500 error when we get an error handling a PDU (PR #2553)
717
718 Changes in synapse v0.23.1 (2017-10-02)
719 =======================================
720
721 Changes:
722
723 - Make \'affinity\' package optional, as it is not supported on some platforms
724
725 Changes in synapse v0.23.0 (2017-10-02)
726 =======================================
727
728 No changes since v0.23.0-rc2
729
730 Changes in synapse v0.23.0-rc2 (2017-09-26)
731 ===========================================
732
733 Bug fixes:
734
735 - Fix regression in performance of syncs (PR #2470)
736
737 Changes in synapse v0.23.0-rc1 (2017-09-25)
738 ===========================================
739
740 Features:
741
742 - Add a frontend proxy worker (PR #2344)
743 - Add support for event\_id\_only push format (PR #2450)
744 - Add a PoC for filtering spammy events (PR #2456)
745 - Add a config option to block all room invites (PR #2457)
746
747 Changes:
748
749 - Use bcrypt module instead of py-bcrypt (PR #2288) Thanks to @kyrias!
750 - Improve performance of generating push notifications (PR #2343, #2357, #2365, #2366, #2371)
751 - Improve DB performance for device list handling in sync (PR #2362)
752 - Include a sample prometheus config (PR #2416)
753 - Document known to work postgres version (PR #2433) Thanks to @ptman!
754
755 Bug fixes:
756
757 - Fix caching error in the push evaluator (PR #2332)
758 - Fix bug where pusherpool didn\'t start and broke some rooms (PR #2342)
759 - Fix port script for user directory tables (PR #2375)
760 - Fix device lists notifications when user rejoins a room (PR #2443, #2449)
761 - Fix sync to always send down current state events in timeline (PR #2451)
762 - Fix bug where guest users were incorrectly kicked (PR #2453)
763 - Fix bug talking to IPv6 only servers using SRV records (PR #2462)
764
765 Changes in synapse v0.22.1 (2017-07-06)
766 =======================================
767
768 Bug fixes:
769
770 - Fix bug where pusher pool didn\'t start and caused issues when interacting with some rooms (PR #2342)
771
772 Changes in synapse v0.22.0 (2017-07-06)
773 =======================================
774
775 No changes since v0.22.0-rc2
776
777 Changes in synapse v0.22.0-rc2 (2017-07-04)
778 ===========================================
779
780 Changes:
781
782 - Improve performance of storing user IPs (PR #2307, #2308)
783 - Slightly improve performance of verifying access tokens (PR #2320)
784 - Slightly improve performance of event persistence (PR #2321)
785 - Increase default cache factor size from 0.1 to 0.5 (PR #2330)
786
787 Bug fixes:
788
789 - Fix bug with storing registration sessions that caused frequent CPU churn (PR #2319)
790
791 Changes in synapse v0.22.0-rc1 (2017-06-26)
792 ===========================================
793
794 Features:
795
796 - Add a user directory API (PR #2252, and many more)
797 - Add shutdown room API to remove room from local server (PR #2291)
798 - Add API to quarantine media (PR #2292)
799 - Add new config option to not send event contents to push servers (PR #2301) Thanks to @cjdelisle!
800
801 Changes:
802
803 - Various performance fixes (PR #2177, #2233, #2230, #2238, #2248, #2256, #2274)
804 - Deduplicate sync filters (PR #2219) Thanks to @krombel!
805 - Correct a typo in UPGRADE.rst (PR #2231) Thanks to @aaronraimist!
806 - Add count of one time keys to sync stream (PR #2237)
807 - Only store event\_auth for state events (PR #2247)
808 - Store URL cache preview downloads separately (PR #2299)
809
810 Bug fixes:
811
812 - Fix users not getting notifications when AS listened to that user\_id (PR #2216) Thanks to @slipeer!
813 - Fix users without push set up not getting notifications after joining rooms (PR #2236)
814 - Fix preview url API to trim long descriptions (PR #2243)
815 - Fix bug where we used a cached but unpersisted state group as the prev group, resulting in broken state on restart (PR #2263)
816 - Fix removing of pushers when using workers (PR #2267)
817 - Fix CORS headers to allow Authorization header (PR #2285) Thanks to @krombel!
818
819 Changes in synapse v0.21.1 (2017-06-15)
820 =======================================
821
822 Bug fixes:
823
824 - Fix bug in anonymous usage statistic reporting (PR #2281)
825
826 Changes in synapse v0.21.0 (2017-05-18)
827 =======================================
828
829 No changes since v0.21.0-rc3
830
831 Changes in synapse v0.21.0-rc3 (2017-05-17)
832 ===========================================
833
834 Features:
835
836 - Add per user rate-limiting overrides (PR #2208)
837 - Add config option to limit maximum number of events requested by `/sync` and `/messages` (PR #2221) Thanks to @psaavedra!
838
839 Changes:
840
841 - Various small performance fixes (PR #2201, #2202, #2224, #2226, #2227, #2228, #2229)
842 - Update username availability checker API (PR #2209, #2213)
843 - When purging, don\'t de-delta state groups we\'re about to delete (PR #2214)
844 - Documentation to check synapse version (PR #2215) Thanks to @hamber-dick!
845 - Add an index to event\_search to speed up purge history API (PR #2218)
846
847 Bug fixes:
848
849 - Fix API to allow clients to upload one-time-keys with new sigs (PR #2206)
850
851 Changes in synapse v0.21.0-rc2 (2017-05-08)
852 ===========================================
853
854 Changes:
855
856 - Always mark remotes as up if we receive a signed request from them (PR #2190)
857
858 Bug fixes:
859
860 - Fix bug where users got pushed for rooms they had muted (PR #2200)
861
862 Changes in synapse v0.21.0-rc1 (2017-05-08)
863 ===========================================
864
865 Features:
866
867 - Add username availability checker API (PR #2183)
868 - Add read marker API (PR #2120)
869
870 Changes:
871
872 - Enable guest access for the 3pl/3pid APIs (PR #1986)
873 - Add setting to support TURN for guests (PR #2011)
874 - Various performance improvements (PR #2075, #2076, #2080, #2083, #2108, #2158, #2176, #2185)
875 - Make synctl a bit more user friendly (PR #2078, #2127) Thanks @APwhitehat!
876 - Replace HTTP replication with TCP replication (PR #2082, #2097, #2098, #2099, #2103, #2014, #2016, #2115, #2116, #2117)
877 - Support authenticated SMTP (PR #2102) Thanks @DanielDent!
878 - Add a counter metric for successfully-sent transactions (PR #2121)
879 - Propagate errors sensibly from proxied IS requests (PR #2147)
880 - Add more granular event send metrics (PR #2178)
881
882 Bug fixes:
883
884 - Fix nuke-room script to work with current schema (PR #1927) Thanks @zuckschwerdt!
885 - Fix db port script to not assume postgres tables are in the public schema (PR #2024) Thanks @jerrykan!
886 - Fix getting latest device IP for user with no devices (PR #2118)
887 - Fix rejection of invites to unreachable servers (PR #2145)
888 - Fix code for reporting old verify keys in synapse (PR #2156)
889 - Fix invite state to always include all events (PR #2163)
890 - Fix bug where synapse would always fetch state for any missing event (PR #2170)
891 - Fix a leak with timed out HTTP connections (PR #2180)
892 - Fix bug where we didn\'t time out HTTP requests to ASes (PR #2192)
893
894 Docs:
895
896 - Clarify doc for SQLite to PostgreSQL port (PR #1961) Thanks @benhylau!
897 - Fix typo in synctl help (PR #2107) Thanks @HarHarLinks!
898 - `web_client_location` documentation fix (PR #2131) Thanks @matthewjwolff!
899 - Update README.rst with FreeBSD changes (PR #2132) Thanks @feld!
900 - Clarify setting up metrics (PR #2149) Thanks @encks!
901
902 Changes in synapse v0.20.0 (2017-04-11)
903 =======================================
904
905 Bug fixes:
906
907 - Fix joining rooms over federation where not all servers in the room saw the new server had joined (PR #2094)
908
909 Changes in synapse v0.20.0-rc1 (2017-03-30)
910 ===========================================
911
912 Features:
913
914 - Add delete\_devices API (PR #1993)
915 - Add phone number registration/login support (PR #1994, #2055)
916
917 Changes:
918
919 - Use JSONSchema for validation of filters. Thanks @pik! (PR #1783)
920 - Reread log config on SIGHUP (PR #1982)
921 - Speed up public room list (PR #1989)
922 - Add helpful texts to logger config options (PR #1990)
923 - Minor `/sync` performance improvements. (PR #2002, #2013, #2022)
924 - Add some debug to help diagnose weird federation issue (PR #2035)
925 - Correctly limit retries for all federation requests (PR #2050, #2061)
926 - Don\'t lock table when persisting new one time keys (PR #2053)
927 - Reduce some CPU work on DB threads (PR #2054)
928 - Cache hosts in room (PR #2060)
929 - Batch sending of device list pokes (PR #2063)
930 - Speed up persist event path in certain edge cases (PR #2070)
931
932 Bug fixes:
933
934 - Fix bug where current\_state\_events renamed to current\_state\_ids (PR #1849)
935 - Fix routing loop when fetching remote media (PR #1992)
936 - Fix current\_state\_events table to not lie (PR #1996)
937 - Fix CAS login to handle PartialDownloadError (PR #1997)
938 - Fix assertion to stop transaction queue getting wedged (PR #2010)
939 - Fix presence to fallback to last\_active\_ts if it beats the last sync time. Thanks @Half-Shot! (PR #2014)
940 - Fix bug when federation received a PDU while a room join is in progress (PR #2016)
941 - Fix resetting state on rejected events (PR #2025)
942 - Fix installation issues in readme. Thanks @ricco386 (PR #2037)
943 - Fix caching of remote servers\' signature keys (PR #2042)
944 - Fix some leaking log context (PR #2048, #2049, #2057, #2058)
945 - Fix rejection of invites not reaching sync (PR #2056)
946
947 Changes in synapse v0.19.3 (2017-03-20)
948 =======================================
949
950 No changes since v0.19.3-rc2
951
952 Changes in synapse v0.19.3-rc2 (2017-03-13)
953 ===========================================
954
955 Bug fixes:
956
957 - Fix bug in handling of incoming device list updates over federation.
958
959 Changes in synapse v0.19.3-rc1 (2017-03-08)
960 ===========================================
961
962 Features:
963
964 - Add some administration functionalities. Thanks to morteza-araby! (PR #1784)
965
966 Changes:
967
968 - Reduce database table sizes (PR #1873, #1916, #1923, #1963)
969 - Update contrib/ to not use syutil. Thanks to andrewshadura! (PR #1907)
970 - Don\'t fetch current state when sending an event in common case (PR #1955)
971
972 Bug fixes:
973
974 - Fix synapse\_port\_db failure. Thanks to Pneumaticat! (PR #1904)
975 - Fix caching to not cache error responses (PR #1913)
976 - Fix APIs to make kick & ban reasons work (PR #1917)
977 - Fix bugs in the /keys/changes api (PR #1921)
978 - Fix bug where users couldn\'t forget rooms they were banned from (PR #1922)
979 - Fix issue with long language values in pushers API (PR #1925)
980 - Fix a race in transaction queue (PR #1930)
981 - Fix dynamic thumbnailing to preserve aspect ratio. Thanks to jkolo! (PR #1945)
982 - Fix device list update to not constantly resync (PR #1964)
983 - Fix potential for huge memory usage when getting devices that have changed (PR #1969)
984
985 Changes in synapse v0.19.2 (2017-02-20)
986 =======================================
987
988 - Fix bug with event visibility check in /context/ API. Thanks to Tokodomo for pointing it out! (PR #1929)
989
990 Changes in synapse v0.19.1 (2017-02-09)
991 =======================================
992
993 - Fix bug where state was incorrectly reset in a room when synapse received an event over federation that did not pass auth checks (PR #1892)
994
995 Changes in synapse v0.19.0 (2017-02-04)
996 =======================================
997
998 No changes since RC 4.
999
1000 Changes in synapse v0.19.0-rc4 (2017-02-02)
1001 ===========================================
1002
1003 - Bump cache sizes for common membership queries (PR #1879)
1004
1005 Changes in synapse v0.19.0-rc3 (2017-02-02)
1006 ===========================================
1007
1008 - Fix email push in pusher worker (PR #1875)
1009 - Make presence.get\_new\_events a bit faster (PR #1876)
1010 - Make /keys/changes a bit more performant (PR #1877)
1011
1012 Changes in synapse v0.19.0-rc2 (2017-02-02)
1013 ===========================================
1014
1015 - Include newly joined users in /keys/changes API (PR #1872)
1016
1017 Changes in synapse v0.19.0-rc1 (2017-02-02)
1018 ===========================================
1019
1020 Features:
1021
1022 - Add support for specifying multiple bind addresses (PR #1709, #1712, #1795, #1835). Thanks to @kyrias!
1023 - Add /account/3pid/delete endpoint (PR #1714)
1024 - Add config option to configure the Riot URL used in notification emails (PR #1811). Thanks to @aperezdc!
1025 - Add username and password config options for turn server (PR #1832). Thanks to @xsteadfastx!
1026 - Implement device lists updates over federation (PR #1857, #1861, #1864)
1027 - Implement /keys/changes (PR #1869, #1872)
1028
1029 Changes:
1030
1031 - Improve IPv6 support (PR #1696). Thanks to @kyrias and @glyph!
1032 - Log which files we saved attachments to in the media\_repository (PR #1791)
1033 - Linearize updates to membership via PUT /state/ to better handle multiple joins (PR #1787)
1034 - Limit number of entries to prefill from cache on startup (PR #1792)
1035 - Remove full\_twisted\_stacktraces option (PR #1802)
1036 - Measure size of some caches by sum of the size of cached values (PR #1815)
1037 - Measure metrics of string\_cache (PR #1821)
1038 - Reduce logging verbosity (PR #1822, #1823, #1824)
1039 - Don\'t clobber a displayname or avatar\_url if provided by an m.room.member event (PR #1852)
1040 - Better handle 401/404 response for federation /send/ (PR #1866, #1871)
1041
1042 Fixes:
1043
1044 - Fix ability to change password to a non-ascii one (PR #1711)
1045 - Fix push getting stuck due to looking at the wrong view of state (PR #1820)
1046 - Fix email address comparison to be case insensitive (PR #1827)
1047 - Fix occasional inconsistencies of room membership (PR #1836, #1840)
1048
1049 Performance:
1050
1051 - Don\'t block messages sending on bumping presence (PR #1789)
1052 - Change device\_inbox stream index to include user (PR #1793)
1053 - Optimise state resolution (PR #1818)
1054 - Use DB cache of joined users for presence (PR #1862)
1055 - Add an index to make membership queries faster (PR #1867)
1056
1057 Changes in synapse v0.18.7 (2017-01-09)
1058 =======================================
1059
1060 No changes from v0.18.7-rc2
1061
1062 Changes in synapse v0.18.7-rc2 (2017-01-07)
1063 ===========================================
1064
1065 Bug fixes:
1066
1067 - Fix error in rc1\'s logic for discarding invalid inbound traffic, which was incorrectly discarding missing events
1068
1069 Changes in synapse v0.18.7-rc1 (2017-01-06)
1070 ===========================================
1071
1072 Bug fixes:
1073
1074 - Fix error in PR #1764 to actually fix the nightmare \#1753 bug.
1075 - Improve deadlock logging further
1076 - Discard inbound federation traffic from invalid domains, to immunise against \#1753
1077
1078 Changes in synapse v0.18.6 (2017-01-06)
1079 =======================================
1080
1081 Bug fixes:
1082
1083 - Fix bug when checking if a guest user is allowed to join a room (PR #1772) Thanks to Patrik Oldsberg for diagnosing and the fix!
1084
1085 Changes in synapse v0.18.6-rc3 (2017-01-05)
1086 ===========================================
1087
1088 Bug fixes:
1089
1090 - Fix bug where we failed to send ban events to the banned server (PR #1758)
1091 - Fix bug where we sent event that didn\'t originate on this server to other servers (PR #1764)
1092 - Fix bug where processing an event from a remote server took a long time because we were making long HTTP requests (PR #1765, PR #1744)
1093
1094 Changes:
1095
1096 - Improve logging for debugging deadlocks (PR #1766, PR #1767)
1097
1098 Changes in synapse v0.18.6-rc2 (2016-12-30)
1099 ===========================================
1100
1101 Bug fixes:
1102
1103 - Fix memory leak in twisted by initialising logging correctly (PR #1731)
1104 - Fix bug where fetching missing events took an unacceptable amount of time in large rooms (PR #1734)
1105
1106 Changes in synapse v0.18.6-rc1 (2016-12-29)
1107 ===========================================
1108
1109 Bug fixes:
1110
1111 - Make sure that outbound connections are closed (PR #1725)
1112
1113 Changes in synapse v0.18.5 (2016-12-16)
1114 =======================================
1115
1116 Bug fixes:
1117
1118 - Fix federation /backfill returning events it shouldn\'t (PR #1700)
1119 - Fix crash in url preview (PR #1701)
1120
1121 Changes in synapse v0.18.5-rc3 (2016-12-13)
1122 ===========================================
1123
1124 Features:
1125
1126 - Add support for E2E for guests (PR #1653)
1127 - Add new API appservice specific public room list (PR #1676)
1128 - Add new room membership APIs (PR #1680)
1129
1130 Changes:
1131
1132 - Enable guest access for private rooms by default (PR #653)
1133 - Limit the number of events that can be created on a given room concurrently (PR #1620)
1134 - Log the args that we have on UI auth completion (PR #1649)
1135 - Stop generating refresh\_tokens (PR #1654)
1136 - Stop putting a time caveat on access tokens (PR #1656)
1137 - Remove unspecced GET endpoints for e2e keys (PR #1694)
1138
1139 Bug fixes:
1140
1141 - Fix handling of 500 and 429\'s over federation (PR #1650)
1142 - Fix Content-Type header parsing (PR #1660)
1143 - Fix error when previewing sites that include unicode, thanks to kyrias (PR #1664)
1144 - Fix some cases where we drop read receipts (PR #1678)
1145 - Fix bug where calls to `/sync` didn\'t correctly timeout (PR #1683)
1146 - Fix bug where E2E key query would fail if a single remote host failed (PR #1686)
1147
1148 Changes in synapse v0.18.5-rc2 (2016-11-24)
1149 ===========================================
1150
1151 Bug fixes:
1152
1153 - Don\'t send old events over federation, fixes bug in -rc1.
1154
1155 Changes in synapse v0.18.5-rc1 (2016-11-24)
1156 ===========================================
1157
1158 Features:
1159
1160 - Implement \"event\_fields\" in filters (PR #1638)
1161
1162 Changes:
1163
1164 - Use external ldap auth package (PR #1628)
1165 - Split out federation transaction sending to a worker (PR #1635)
1166 - Fail with a coherent error message if /sync?filter= is invalid (PR #1636)
1167 - More efficient notif count queries (PR #1644)
1168
1169 Changes in synapse v0.18.4 (2016-11-22)
1170 =======================================
1171
1172 Bug fixes:
1173
1174 - Add workaround for buggy clients that fail to register (PR #1632)
1175
1176 Changes in synapse v0.18.4-rc1 (2016-11-14)
1177 ===========================================
1178
1179 Changes:
1180
1181 - Various database efficiency improvements (PR #1188, #1192)
1182 - Update default config to blacklist more internal IPs, thanks to Euan Kemp (PR #1198)
1183 - Allow specifying duration in minutes in config, thanks to Daniel Dent (PR #1625)
1184
1185 Bug fixes:
1186
1187 - Fix media repo to set CORS headers on responses (PR #1190)
1188 - Fix registration to not error on non-ascii passwords (PR #1191)
1189 - Fix create event code to limit the number of prev\_events (PR #1615)
1190 - Fix bug in transaction ID deduplication (PR #1624)
1191
1192 Changes in synapse v0.18.3 (2016-11-08)
1193 =======================================
1194
1195 SECURITY UPDATE
1196
1197 Explicitly require authentication when using LDAP3. This is the default on versions of `ldap3` above 1.0, but some distributions will package an older version.
1198
1199 If you are using LDAP3 login and have a version of `ldap3` older than 1.0 it is **CRITICAL to upgrade**.
1200
1201 Changes in synapse v0.18.2 (2016-11-01)
1202 =======================================
1203
1204 No changes since v0.18.2-rc5
1205
1206 Changes in synapse v0.18.2-rc5 (2016-10-28)
1207 ===========================================
1208
1209 Bug fixes:
1210
1211 - Fix prometheus process metrics in worker processes (PR #1184)
1212
1213 Changes in synapse v0.18.2-rc4 (2016-10-27)
1214 ===========================================
1215
1216 Bug fixes:
1217
1218 - Fix `user_threepids` schema delta, which in some instances prevented startup after upgrade (PR #1183)
1219
1220 Changes in synapse v0.18.2-rc3 (2016-10-27)
1221 ===========================================
1222
1223 Changes:
1224
1225 - Allow clients to supply access tokens as headers (PR #1098)
1226 - Clarify error codes for GET /filter/, thanks to Alexander Maznev (PR #1164)
1227 - Make password reset email field case insensitive (PR #1170)
1228 - Reduce redundant database work in email pusher (PR #1174)
1229 - Allow configurable rate limiting per AS (PR #1175)
1230 - Check whether to ratelimit sooner to avoid work (PR #1176)
1231 - Standardise prometheus metrics (PR #1177)
1232
1233 Bug fixes:
1234
1235 - Fix incredibly slow back pagination query (PR #1178)
1236 - Fix infinite typing bug (PR #1179)
1237
1238 Changes in synapse v0.18.2-rc2 (2016-10-25)
1239 ===========================================
1240
1241 (This release did not include the changes advertised and was identical to RC1)
1242
1243 Changes in synapse v0.18.2-rc1 (2016-10-17)
1244 ===========================================
1245
1246 Changes:
1247
1248 - Remove redundant event\_auth index (PR #1113)
1249 - Reduce DB hits for replication (PR #1141)
1250 - Implement pluggable password auth (PR #1155)
1251 - Remove rate limiting from app service senders and fix get\_or\_create\_user requester, thanks to Patrik Oldsberg (PR #1157)
1252 - window.postmessage for Interactive Auth fallback (PR #1159)
1253 - Use sys.executable instead of hardcoded python, thanks to Pedro Larroy (PR #1162)
1254 - Add config option for adding additional TLS fingerprints (PR #1167)
1255 - User-interactive auth on delete device (PR #1168)
1256
1257 Bug fixes:
1258
1259 - Fix not being allowed to set your own state\_key, thanks to Patrik Oldsberg (PR #1150)
1260 - Fix interactive auth to return 401 for incorrect password (PR #1160, #1166)
1261 - Fix email push notifs being dropped (PR #1169)
1262
1263 Changes in synapse v0.18.1 (2016-10-05)
1264 =======================================
1265
1266 No changes since v0.18.1-rc1
1267
1268 Changes in synapse v0.18.1-rc1 (2016-09-30)
1269 ===========================================
1270
1271 Features:
1272
1273 - Add total\_room\_count\_estimate to `/publicRooms` (PR #1133)
1274
1275 Changes:
1276
1277 - Time out typing over federation (PR #1140)
1278 - Restructure LDAP authentication (PR #1153)
1279
1280 Bug fixes:
1281
1282 - Fix 3pid invites when server is already in the room (PR #1136)
1283 - Fix upgrading with SQLite taking lots of CPU for a few days after upgrade (PR #1144)
1284 - Fix upgrading from very old database versions (PR #1145)
1285 - Fix port script to work with recently added tables (PR #1146)
1286
1287 Changes in synapse v0.18.0 (2016-09-19)
1288 =======================================
1289
1290 The release includes major changes to the state storage database schemas, which significantly reduce database size. Synapse will attempt to upgrade the current data in the background. Servers with a large SQLite database may experience degraded performance while this upgrade is in progress, so you may want to consider migrating to Postgres before upgrading very large SQLite databases.
1291
1292 Changes:
1293
1294 - Make public room search case insensitive (PR #1127)
1295
1296 Bug fixes:
1297
1298 - Fix and clean up publicRooms pagination (PR #1129)
1299
1300 Changes in synapse v0.18.0-rc1 (2016-09-16)
1301 ===========================================
1302
1303 Features:
1304
1305 - Add `only=highlight` on `/notifications` (PR #1081)
1306 - Add server param to /publicRooms (PR #1082)
1307 - Allow clients to ask for the whole of a single state event (PR #1094)
1308 - Add is\_direct param to /createRoom (PR #1108)
1309 - Add pagination support to publicRooms (PR #1121)
1310 - Add very basic filter API to /publicRooms (PR #1126)
1311 - Add basic direct to device messaging support for E2E (PR #1074, #1084, #1104, #1111)
1312
1313 Changes:
1314
1315 - Move to storing state\_groups\_state as deltas, greatly reducing DB size (PR #1065)
1316 - Reduce amount of state pulled out of the DB during common requests (PR #1069)
1317 - Allow PDF to be rendered from media repo (PR #1071)
1318 - Reindex state\_groups\_state after pruning (PR #1085)
1319 - Clobber EDUs in send queue (PR #1095)
1320 - Conform better to the CAS protocol specification (PR #1100)
1321 - Limit how often we ask for keys from dead servers (PR #1114)
1322
1323 Bug fixes:
1324
1325 - Fix /notifications API when used with `from` param (PR #1080)
1326 - Fix backfill when cannot find an event. (PR #1107)
1327
1328 Changes in synapse v0.17.3 (2016-09-09)
1329 =======================================
1330
1331 This release fixes a major bug that stopped servers from handling rooms with over 1000 members.
1332
1333 Changes in synapse v0.17.2 (2016-09-08)
1334 =======================================
1335
1336 This release contains security bug fixes. Please upgrade.
1337
1338 No changes since v0.17.2-rc1
1339
1340 Changes in synapse v0.17.2-rc1 (2016-09-05)
1341 ===========================================
1342
1343 Features:
1344
1345 - Start adding store-and-forward direct-to-device messaging (PR #1046, #1050, #1062, #1066)
1346
1347 Changes:
1348
1349 - Avoid pulling the full state of a room out so often (PR #1047, #1049, #1063, #1068)
1350 - Don\'t notify for online to online presence transitions. (PR #1054)
1351 - Occasionally persist unpersisted presence updates (PR #1055)
1352 - Allow application services to have an optional \'url\' (PR #1056)
1353 - Clean up old sent transactions from DB (PR #1059)
1354
1355 Bug fixes:
1356
1357 - Fix None check in backfill (PR #1043)
1358 - Fix membership changes to be idempotent (PR #1067)
1359 - Fix bug in get\_pdu where it would sometimes return events with incorrect signature
1360
1361 Changes in synapse v0.17.1 (2016-08-24)
1362 =======================================
1363
1364 Changes:
1365
1366 - Delete old received\_transactions rows (PR #1038)
1367 - Pass through user-supplied content in /join/\$room\_id (PR #1039)
1368
1369 Bug fixes:
1370
1371 - Fix bug with backfill (PR #1040)
1372
1373 Changes in synapse v0.17.1-rc1 (2016-08-22)
1374 ===========================================
1375
1376 Features:
1377
1378 - Add notification API (PR #1028)
1379
1380 Changes:
1381
1382 - Don\'t print stack traces when failing to get remote keys (PR #996)
1383 - Various federation /event/ perf improvements (PR #998)
1384 - Only process one local membership event per room at a time (PR #1005)
1385 - Move default display name push rule (PR #1011, #1023)
1386 - Fix up preview URL API. Add tests. (PR #1015)
1387 - Set `Content-Security-Policy` on media repo (PR #1021)
1388 - Make notify\_interested\_services faster (PR #1022)
1389 - Add usage stats to prometheus monitoring (PR #1037)
1390
1391 Bug fixes:
1392
1393 - Fix token login (PR #993)
1394 - Fix CAS login (PR #994, #995)
1395 - Fix /sync to not clobber status\_msg (PR #997)
1396 - Fix redacted state events to include prev\_content (PR #1003)
1397 - Fix some bugs in the auth/ldap handler (PR #1007)
1398 - Fix backfill request to limit URI length, so that remotes don\'t reject the requests due to path length limits (PR #1012)
1399 - Fix AS push code to not send duplicate events (PR #1025)
1400
1401 Changes in synapse v0.17.0 (2016-08-08)
1402 =======================================
1403
1404 This release contains significant security bug fixes regarding authenticating events received over federation. PLEASE UPGRADE.
1405
1406 This release changes the LDAP configuration format in a backwards incompatible way, see PR #843 for details.
1407
1408 Changes:
1409
1410 - Add federation /version API (PR #990)
1411 - Make psutil dependency optional (PR #992)
1412
1413 Bug fixes:
1414
1415 - Fix URL preview API to exclude HTML comments in description (PR #988)
1416 - Fix error handling of remote joins (PR #991)
1417
1418 Changes in synapse v0.17.0-rc4 (2016-08-05)
1419 ===========================================
1420
1421 Changes:
1422
1423 - Change the way we summarize URLs when previewing (PR #973)
1424 - Add new `/state_ids/` federation API (PR #979)
1425 - Speed up processing of `/state/` response (PR #986)
1426
1427 Bug fixes:
1428
1429 - Fix event persistence when event has already been partially persisted (PR #975, #983, #985)
1430 - Fix port script to also copy across backfilled events (PR #982)
1431
1432 Changes in synapse v0.17.0-rc3 (2016-08-02)
1433 ===========================================
1434
1435 Changes:
1436
1437 - Forbid non-ASes from registering users whose names begin with \'\_\' (PR #958)
1438 - Add some basic admin API docs (PR #963)
1439
1440 Bug fixes:
1441
1442 - Send the correct host header when fetching keys (PR #941)
1443 - Fix joining a room that has missing auth events (PR #964)
1444 - Fix various push bugs (PR #966, #970)
1445 - Fix adding emails on registration (PR #968)
1446
1447 Changes in synapse v0.17.0-rc2 (2016-08-02)
1448 ===========================================
1449
1450 (This release did not include the changes advertised and was identical to RC1)
1451
1452 Changes in synapse v0.17.0-rc1 (2016-07-28)
1453 ===========================================
1454
1455 This release changes the LDAP configuration format in a backwards incompatible way, see PR #843 for details.
1456
1457 Features:
1458
1459 - Add purge\_media\_cache admin API (PR #902)
1460 - Add deactivate account admin API (PR #903)
1461 - Add optional pepper to password hashing (PR #907, #910 by KentShikama)
1462 - Add an admin option to shared secret registration (breaks backwards compat) (PR #909)
1463 - Add purge local room history API (PR #911, #923, #924)
1464 - Add requestToken endpoints (PR #915)
1465 - Add an /account/deactivate endpoint (PR #921)
1466 - Add filter param to /messages. Add \'contains\_url\' to filter. (PR #922)
1467 - Add device\_id support to /login (PR #929)
1468 - Add device\_id support to /v2/register flow. (PR #937, #942)
1469 - Add GET /devices endpoint (PR #939, #944)
1470 - Add GET /device/{deviceId} (PR #943)
1471 - Add update and delete APIs for devices (PR #949)
1472
1473 Changes:
1474
1475 - Rewrite LDAP Authentication against ldap3 (PR #843 by mweinelt)
1476 - Linearize some federation endpoints based on (origin, room\_id) (PR #879)
1477 - Remove the legacy v0 content upload API. (PR #888)
1478 - Use similar naming we use in email notifs for push (PR #894)
1479 - Optionally include password hash in createUser endpoint (PR #905 by KentShikama)
1480 - Use a query that postgresql optimises better for get\_events\_around (PR #906)
1481 - Fall back to \'username\' if \'user\' is not given for appservice registration. (PR #927 by Half-Shot)
1482 - Add metrics for psutil derived memory usage (PR #936)
1483 - Record device\_id in client\_ips (PR #938)
1484 - Send the correct host header when fetching keys (PR #941)
1485 - Log the hostname the reCAPTCHA was completed on (PR #946)
1486 - Make the device id on e2e key upload optional (PR #956)
1487 - Add r0.2.0 to the \"supported versions\" list (PR #960)
1488 - Don\'t include name of room for invites in push (PR #961)
1489
1490 Bug fixes:
1491
1492 - Fix substitution failure in mail template (PR #887)
1493 - Put most recent 20 messages in email notif (PR #892)
1494 - Ensure that the guest user is in the database when upgrading accounts (PR #914)
1495 - Fix various edge cases in auth handling (PR #919)
1496 - Fix 500 ISE when sending alias event without a state\_key (PR #925)
1497 - Fix bug where we stored rejections in the state\_group, persist all rejections (PR #948)
1498 - Fix lack of check of if the user is banned when handling 3pid invites (PR #952)
1499 - Fix a couple of bugs in the transaction and keyring code (PR #954, #955)
1500
1501 Changes in synapse v0.16.1-r1 (2016-07-08)
1502 ==========================================
1503
1504 THIS IS A CRITICAL SECURITY UPDATE.
1505
1506 This fixes a bug which allowed users\' accounts to be accessed by unauthorised users.
1507
1508 Changes in synapse v0.16.1 (2016-06-20)
1509 =======================================
1510
1511 Bug fixes:
1512
1513 - Fix assorted bugs in `/preview_url` (PR #872)
1514 - Fix TypeError when setting unicode passwords (PR #873)
1515
1516 Performance improvements:
1517
1518 - Turn `use_frozen_events` off by default (PR #877)
1519 - Disable responding with canonical json for federation (PR #878)
1520
1521 Changes in synapse v0.16.1-rc1 (2016-06-15)
1522 ===========================================
1523
1524 Features: None
1525
1526 Changes:
1527
1528 - Log requester for `/publicRoom` endpoints when possible (PR #856)
1529 - 502 on `/thumbnail` when can\'t connect to remote server (PR #862)
1530 - Linearize fetching of gaps on incoming events (PR #871)
1531
1532 Bugs fixes:
1533
1534 - Fix bug where rooms where marked as published by default (PR #857)
1535 - Fix bug where joining room with an event with invalid sender (PR #868)
1536 - Fix bug where backfilled events were sent down sync streams (PR #869)
1537 - Fix bug where outgoing connections could wedge indefinitely, causing push notifications to be unreliable (PR #870)
1538
1539 Performance improvements:
1540
1541 - Improve `/publicRooms` performance(PR #859)
1542
1543 Changes in synapse v0.16.0 (2016-06-09)
1544 =======================================
1545
1546 NB: As of v0.14 all AS config files must have an ID field.
1547
1548 Bug fixes:
1549
1550 - Don\'t make rooms published by default (PR #857)
1551
1552 Changes in synapse v0.16.0-rc2 (2016-06-08)
1553 ===========================================
1554
1555 Features:
1556
1557 - Add configuration option for tuning GC via `gc.set_threshold` (PR #849)
1558
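This option maps onto the Python standard library's `gc.set_threshold`. A minimal sketch of the underlying call (the values here are illustrative, not Synapse's defaults):

```python
import gc

# CPython's generational GC triggers a gen-0 collection once net
# allocations exceed the first threshold; the other two control how
# often the older generations are collected.
print(gc.get_threshold())        # CPython default: (700, 10, 10)

# Raising the first threshold collects less often, trading memory for CPU.
gc.set_threshold(5000, 10, 10)
assert gc.get_threshold() == (5000, 10, 10)
```
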
Changes:

- Record metrics about GC (PR #771, #847, #852)
- Add metric counter for number of persisted events (PR #841)

Bug fixes:

- Fix 'From' header in email notifications (PR #843)
- Fix presence where timeouts were not being fired for the first 8h after restarts (PR #842)
- Fix bug where synapse sent malformed transactions to ASes when retrying transactions (Commits 310197b, 8437906)

Performance improvements:

- Remove event fetching from DB threads (PR #835)
- Change the way we cache events (PR #836)
- Add events to cache when we persist them (PR #840)

Changes in synapse v0.16.0-rc1 (2016-06-03)
===========================================

Version 0.15 was not released. See v0.15.0-rc1 below for additional changes.

Features:

- Add email notifications for missed messages (PR #759, #786, #799, #810, #815, #821)
- Add a `url_preview_ip_range_whitelist` config param (PR #760)
- Add /report endpoint (PR #762)
- Add basic ignore user API (PR #763)
- Add an OpenID-ish mechanism for proving that you own a given user\_id (PR #765)
- Allow clients to specify a server\_name to avoid 'No known servers' (PR #794)
- Add secondary\_directory\_servers option to fetch room list from other servers (PR #808, #813)

Changes:

- Report per request metrics for all of the things using request\_handler (PR #756)
- Correctly handle `NULL` password hashes from the database (PR #775)
- Allow receipts for events we haven't seen in the db (PR #784)
- Make synctl read a cache factor from config file (PR #785)
- Increment badge count per missed convo, not per msg (PR #793)
- Special case m.room.third\_party\_invite event auth to match invites (PR #814)

Bug fixes:

- Fix typo in event\_auth servlet path (PR #757)
- Fix password reset (PR #758)

Performance improvements:

- Reduce database inserts when sending transactions (PR #767)
- Queue events by room for persistence (PR #768)
- Add cache to `get_user_by_id` (PR #772)
- Add and use `get_domain_from_id` (PR #773)
- Use tree cache for `get_linearized_receipts_for_room` (PR #779)
- Remove unused indices (PR #782)
- Add caches to `bulk_get_push_rules*` (PR #804)
- Cache `get_event_reference_hashes` (PR #806)
- Add `get_users_with_read_receipts_in_room` cache (PR #809)
- Use state to calculate `get_users_in_room` (PR #811)
- Load push rules in storage layer so that they get cached (PR #825)
- Make `get_joined_hosts_for_room` use get\_users\_in\_room (PR #828)
- Poke notifier on next reactor tick (PR #829)
- Change CacheMetrics to be quicker (PR #830)

Changes in synapse v0.15.0-rc1 (2016-04-26)
===========================================

Features:

- Add login support for JSON Web Tokens, thanks to Niklas Riekenbrauck (PR #671, #687)
- Add URL previewing support (PR #688)
- Add login support for LDAP, thanks to Christoph Witzany (PR #701)
- Add GET endpoint for pushers (PR #716)

Changes:

- Never notify for member events (PR #667)
- Deduplicate identical `/sync` requests (PR #668)
- Require user to have left room to forget room (PR #673)
- Use DNS cache if within TTL (PR #677)
- Let users see their own leave events (PR #699)
- Deduplicate membership changes (PR #700)
- Increase performance of pusher code (PR #705)
- Respond with error status 504 if failed to talk to remote server (PR #731)
- Increase search performance on postgres (PR #745)

Bug fixes:

- Fix bug where disabling all notifications still resulted in push (PR #678)
- Fix bug where users couldn't reject remote invites if remote refused (PR #691)
- Fix bug where synapse attempted to backfill from itself (PR #693)
- Fix bug where profile information was not correctly added when joining remote rooms (PR #703)
- Fix bug where register API required incorrect key name for AS registration (PR #727)

Changes in synapse v0.14.0 (2016-03-30)
=======================================

No changes from v0.14.0-rc2

Changes in synapse v0.14.0-rc2 (2016-03-23)
===========================================

Features:

- Add published room list API (PR #657)

Changes:

- Change various caches to consume less memory (PR #656, #658, #660, #662, #663, #665)
- Allow rooms to be published without requiring an alias (PR #664)
- Intern common strings in caches to reduce memory footprint (#666)

Bug fixes:

- Fix reject invites over federation (PR #646)
- Fix bug where registration was not idempotent (PR #649)
- Update aliases event after deleting aliases (PR #652)
- Fix unread notification count, which was sometimes wrong (PR #661)

Changes in synapse v0.14.0-rc1 (2016-03-14)
===========================================

Features:

- Add event\_id to response to state event PUT (PR #581)
- Allow guest users access to messages in rooms they have joined (PR #587)
- Add config for what state is included in a room invite (PR #598)
- Send the inviter's member event in room invite state (PR #607)
- Add error codes for malformed/bad JSON in /login (PR #608)
- Add support for changing the actions for default rules (PR #609)
- Add environment variable SYNAPSE\_CACHE\_FACTOR, defaulting to 0.1 (PR #612)
- Add ability for alias creators to delete aliases (PR #614)
- Add profile information to invites (PR #624)
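
As a sketch of how such an environment variable is typically consumed, the snippet below reads the variable with its documented default; only the variable name and the 0.1 default come from the entry above, the parsing logic is an assumption:

```python
import os

# Read the global cache-factor multiplier, falling back to the
# documented default of 0.1 when the variable is unset.
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.1))

# e.g. a cache nominally sized at 10000 entries would be scaled like this:
scaled_size = int(10000 * CACHE_SIZE_FACTOR)
```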
1691
1692 Changes:
1693
1694 - Enforce user\_id exclusivity for AS registrations (PR #572)
1695 - Make adding push rules idempotent (PR #587)
1696 - Improve presence performance (PR #582, #586)
1697 - Change presence semantics for `last_active_ago` (PR #582, #586)
1698 - Don\'t allow `m.room.create` to be changed (PR #596)
1699 - Add 800x600 to default list of valid thumbnail sizes (PR #616)
1700 - Always include kicks and bans in full /sync (PR #625)
1701 - Send history visibility on boundary changes (PR #626)
1702 - Register endpoint now returns a refresh\_token (PR #637)
1703
1704 Bug fixes:
1705
1706 - Fix bug where we returned incorrect state in /sync (PR #573)
1707 - Always return a JSON object from push rule API (PR #606)
1708 - Fix bug where registering without a user id sometimes failed (PR #610)
1709 - Report size of ExpiringCache in cache size metrics (PR #611)
1710 - Fix rejection of invites to empty rooms (PR #615)
1711 - Fix usage of `bcrypt` to not use `checkpw` (PR #619)
1712 - Pin `pysaml2` dependency (PR #634)
1713 - Fix bug in `/sync` where timeline order was incorrect for backfilled events (PR #635)
1714
1715 Changes in synapse v0.13.3 (2016-02-11)
1716 =======================================
1717
1718 - Fix bug where `/sync` would occasionally return events in the wrong room.
1719
1720 Changes in synapse v0.13.2 (2016-02-11)
1721 =======================================
1722
1723 - Fix bug where `/events` would fail to skip some events if there had been more events than the limit specified since the last request (PR #570)
1724
1725 Changes in synapse v0.13.1 (2016-02-10)
1726 =======================================
1727
1728 - Bump matrix-angular-sdk (matrix web console) dependency to 0.6.8 to pull in the fix for SYWEB-361 so that the default client can display HTML messages again(!)
1729
1730 Changes in synapse v0.13.0 (2016-02-10)
1731 =======================================
1732
1733 This version includes an upgrade of the schema, specifically adding an index to the `events` table. This may cause synapse to pause for several minutes the first time it is started after the upgrade.
1734
1735 Changes:
1736
1737 - Improve general performance (PR #540, #543. \#544, #54, #549, #567)
1738 - Change guest user ids to be incrementing integers (PR #550)
1739 - Improve performance of public room list API (PR #552)
1740 - Change profile API to omit keys rather than return null (PR #557)
1741 - Add `/media/r0` endpoint prefix, which is equivalent to `/media/v1/` (PR #595)
1742
1743 Bug fixes:
1744
1745 - Fix bug with upgrading guest accounts where it would fail if you opened the registration email on a different device (PR #547)
1746 - Fix bug where unread count could be wrong (PR #568)
1747
1748 Changes in synapse v0.12.1-rc1 (2016-01-29)
1749 ===========================================
1750
1751 Features:
1752
1753 - Add unread notification counts in `/sync` (PR #456)
1754 - Add support for inviting 3pids in `/createRoom` (PR #460)
1755 - Add ability for guest accounts to upgrade (PR #462)
1756 - Add `/versions` API (PR #468)
1757 - Add `event` to `/context` API (PR #492)
1758 - Add specific error code for invalid user names in `/register` (PR #499)
1759 - Add support for push badge counts (PR #507)
1760 - Add support for non-guest users to peek in rooms using `/events` (PR #510)
1761
1762 Changes:
1763
1764 - Change `/sync` so that guest users only get rooms they\'ve joined (PR #469)
1765 - Change to require unbanning before other membership changes (PR #501)
1766 - Change default push rules to notify for all messages (PR #486)
1767 - Change default push rules to not notify on membership changes (PR #514)
1768 - Change default push rules in one to one rooms to only notify for events that are messages (PR #529)
1769 - Change `/sync` to reject requests with a `from` query param (PR #512)
1770 - Change server manhole to use SSH rather than telnet (PR #473)
1771 - Change server to require AS users to be registered before use (PR #487)
1772 - Change server not to start when ASes are invalidly configured (PR #494)
1773 - Change server to require ID and `as_token` to be unique for AS\'s (PR #496)
1774 - Change maximum pagination limit to 1000 (PR #497)
1775
1776 Bug fixes:
1777
1778 - Fix bug where `/sync` didn\'t return when something under the leave key changed (PR #461)
1779 - Fix bug where we returned smaller rather than larger than requested thumbnails when `method=crop` (PR #464)
1780 - Fix thumbnails API to only return cropped thumbnails when asking for a cropped thumbnail (PR #475)
1781 - Fix bug where we occasionally still logged access tokens (PR #477)
1782 - Fix bug where `/events` would always return immediately for guest users (PR #480)
1783 - Fix bug where `/sync` unexpectedly returned old left rooms (PR #481)
1784 - Fix enabling and disabling push rules (PR #498)
1785 - Fix bug where `/register` returned 500 when given unicode username (PR #513)
1786
1787 Changes in synapse v0.12.0 (2016-01-04)
1788 =======================================
1789
1790 - Expose `/login` under `r0` (PR #459)
1791
1792 Changes in synapse v0.12.0-rc3 (2015-12-23)
1793 ===========================================
1794
1795 - Allow guest accounts access to `/sync` (PR #455)
1796 - Allow filters to include/exclude rooms at the room level rather than just from the components of the sync for each room. (PR #454)
1797 - Include urls for room avatars in the response to `/publicRooms` (PR #453)
1798 - Don\'t set a identicon as the avatar for a user when they register (PR #450)
1799 - Add a `display_name` to third-party invites (PR #449)
1800 - Send more information to the identity server for third-party invites so that it can send richer messages to the invitee (PR #446)
1801 - Cache the responses to `/initialSync` for 5 minutes. If a client retries a request to `/initialSync` before the a response was computed to the first request then the same response is used for both requests (PR #457)
1802 - Fix a bug where synapse would always request the signing keys of remote servers even when the key was cached locally (PR #452)
1803 - Fix 500 when pagination search results (PR #447)
1804 - Fix a bug where synapse was leaking raw email address in third-party invites (PR #448)
1805
1806 Changes in synapse v0.12.0-rc2 (2015-12-14)
1807 ===========================================
1808
1809 - Add caches for whether rooms have been forgotten by a user (PR #434)
1810 - Remove instructions to use `--process-dependency-link` since all of the dependencies of synapse are on PyPI (PR #436)
1811 - Parallelise the processing of `/sync` requests (PR #437)
1812 - Fix race updating presence in `/events` (PR #444)
1813 - Fix bug back-populating search results (PR #441)
1814 - Fix bug calculating state in `/sync` requests (PR #442)
1815
1816 Changes in synapse v0.12.0-rc1 (2015-12-10)
1817 ===========================================
1818
1819 - Host the client APIs released as r0 by <https://matrix.org/docs/spec/r0.0.0/client_server.html> on paths prefixed by `/_matrix/client/r0`. (PR #430, PR #415, PR #400)
1820 - Updates the client APIs to match r0 of the matrix specification.
1821 - All APIs return events in the new event format, old APIs also include the fields needed to parse the event using the old format for compatibility. (PR #402)
1822 - Search results are now given as a JSON array rather than a JSON object (PR #405)
1823 - Miscellaneous changes to search (PR #403, PR #406, PR #412)
1824 - Filter JSON objects may now be passed as query parameters to `/sync` (PR #431)
1825 - Fix implementation of `/admin/whois` (PR #418)
1826 - Only include the rooms that user has left in `/sync` if the client requests them in the filter (PR #423)
1827 - Don\'t push for `m.room.message` by default (PR #411)
1828 - Add API for setting per account user data (PR #392)
1829 - Allow users to forget rooms (PR #385)
1830 - Performance improvements and monitoring:
1831 - Add per-request counters for CPU time spent on the main python thread. (PR #421, PR #420)
1832 - Add per-request counters for time spent in the database (PR #429)
1833 - Make state updates in the C+S API idempotent (PR #416)
1834 - Only fire `user_joined_room` if the user has actually joined. (PR #410)
1835 - Reuse a single http client, rather than creating new ones (PR #413)
1836 - Fixed a bug upgrading from older versions of synapse on postgresql (PR #417)
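
As a sketch of the inline-filter change above (PR #431): a client can URL-encode a filter JSON object straight into the `/sync` query string. The homeserver URL below is a placeholder, and the filter shape follows the r0 filtering format:

```python
import json
from urllib.parse import urlencode

# Limit each room's timeline to the 10 most recent events.
room_filter = {"room": {"timeline": {"limit": 10}}}

# The filter object is serialised to JSON and percent-encoded into the query.
query = urlencode({"filter": json.dumps(room_filter)})
url = "https://homeserver.example/_matrix/client/r0/sync?" + query
print(url)
```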
1837
1838 Changes in synapse v0.11.1 (2015-11-20)
1839 =======================================
1840
1841 - Add extra options to search API (PR #394)
1842 - Fix bug where we did not correctly cap federation retry timers. This meant it could take several hours for servers to start talking to ressurected servers, even when they were receiving traffic from them (PR #393)
1843 - Don\'t advertise login token flow unless CAS is enabled. This caused issues where some clients would always use the fallback API if they did not recognize all login flows (PR #391)
1844 - Change /v2 sync API to rename `private_user_data` to `account_data` (PR #386)
1845 - Change /v2 sync API to remove the `event_map` and rename keys in `rooms` object (PR #389)
1846
1847 Changes in synapse v0.11.0-r2 (2015-11-19)
1848 ==========================================
1849
1850 - Fix bug in database port script (PR #387)
1851
1852 Changes in synapse v0.11.0-r1 (2015-11-18)
1853 ==========================================
1854
1855 - Retry and fail federation requests more aggressively for requests that block client side requests (PR #384)
1856
1857 Changes in synapse v0.11.0 (2015-11-17)
1858 =======================================
1859
1860 - Change CAS login API (PR #349)
1861
1862 Changes in synapse v0.11.0-rc2 (2015-11-13)
1863 ===========================================
1864
1865 - Various changes to /sync API response format (PR #373)
1866 - Fix regression when setting display name in newly joined room over federation (PR #368)
1867 - Fix problem where /search was slow when using SQLite (PR #366)
1868
1869 Changes in synapse v0.11.0-rc1 (2015-11-11)
1870 ===========================================
1871
1872 - Add Search API (PR #307, #324, #327, #336, #350, #359)
1873 - Add \'archived\' state to v2 /sync API (PR #316)
1874 - Add ability to reject invites (PR #317)
1875 - Add config option to disable password login (PR #322)
1876 - Add the login fallback API (PR #330)
1877 - Add room context API (PR #334)
1878 - Add room tagging support (PR #335)
1879 - Update v2 /sync API to match spec (PR #305, #316, #321, #332, #337, #341)
1880 - Change retry schedule for application services (PR #320)
1881 - Change retry schedule for remote servers (PR #340)
1882 - Fix bug where we hosted static content in the incorrect place (PR #329)
1883 - Fix bug where we didn\'t increment retry interval for remote servers (PR #343)
1884
1885 Changes in synapse v0.10.1-rc1 (2015-10-15)
1886 ===========================================
1887
1888 - Add support for CAS, thanks to Steven Hammerton (PR #295, #296)
1889 - Add support for using macaroons for `access_token` (PR #256, #229)
1890 - Add support for `m.room.canonical_alias` (PR #287)
1891 - Add support for viewing the history of rooms that they have left. (PR #276, #294)
1892 - Add support for refresh tokens (PR #240)
1893 - Add flag on creation which disables federation of the room (PR #279)
1894 - Add some room state to invites. (PR #275)
1895 - Atomically persist events when joining a room over federation (PR #283)
1896 - Change default history visibility for private rooms (PR #271)
1897 - Allow users to redact their own sent events (PR #262)
1898 - Use tox for tests (PR #247)
1899 - Split up syutil into separate libraries (PR #243)
1900
1901 Changes in synapse v0.10.0-r2 (2015-09-16)
1902 ==========================================
1903
1904 - Fix bug where we always fetched remote server signing keys instead of using ones in our cache.
1905 - Fix adding threepids to an existing account.
1906 - Fix bug with invinting over federation where remote server was already in the room. (PR #281, SYN-392)
1907
1908 Changes in synapse v0.10.0-r1 (2015-09-08)
1909 ==========================================
1910
1911 - Fix bug with python packaging
1912
1913 Changes in synapse v0.10.0 (2015-09-03)
1914 =======================================
1915
1916 No change from release candidate.
1917
1918 Changes in synapse v0.10.0-rc6 (2015-09-02)
1919 ===========================================
1920
1921 - Remove some of the old database upgrade scripts.
1922 - Fix database port script to work with newly created sqlite databases.
1923
1924 Changes in synapse v0.10.0-rc5 (2015-08-27)
1925 ===========================================
1926
1927 - Fix bug that broke downloading files with ascii filenames across federation.
1928
1929 Changes in synapse v0.10.0-rc4 (2015-08-27)
1930 ===========================================
1931
1932 - Allow UTF-8 filenames for upload. (PR #259)
1933
1934 Changes in synapse v0.10.0-rc3 (2015-08-25)
1935 ===========================================
1936
1937 - Add `--keys-directory` config option to specify where files such as certs and signing keys should be stored in, when using `--generate-config` or `--generate-keys`. (PR #250)
1938 - Allow `--config-path` to specify a directory, causing synapse to use all \*.yaml files in the directory as config files. (PR #249)
1939 - Add `web_client_location` config option to specify static files to be hosted by synapse under `/_matrix/client`. (PR #245)
1940 - Add helper utility to synapse to read and parse the config files and extract the value of a given key. For example:
1941
1942 $ python -m synapse.config read server_name -c homeserver.yaml
1943 localhost
1944
1945 (PR #246)
1946
1947 Changes in synapse v0.10.0-rc2 (2015-08-24)
1948 ===========================================
1949
1950 - Fix bug where we incorrectly populated the `event_forward_extremities` table, resulting in problems joining large remote rooms (e.g. `#matrix:matrix.org`)
1951 - Reduce the number of times we wake up pushers by not listening for presence or typing events, reducing the CPU cost of each pusher.
1952
1953 Changes in synapse v0.10.0-rc1 (2015-08-21)
1954 ===========================================
1955
1956 Also see v0.9.4-rc1 changelog, which has been amalgamated into this release.
1957
1958 General:
1959
1960 - Upgrade to Twisted 15 (PR #173)
1961 - Add support for serving and fetching encryption keys over federation. (PR #208)
1962 - Add support for logging in with email address (PR #234)
1963 - Add support for new `m.room.canonical_alias` event. (PR #233)
1964 - Change synapse to treat user IDs case insensitively during registration and login. (If two users already exist with case insensitive matching user ids, synapse will continue to require them to specify their user ids exactly.)
1965 - Error if a user tries to register with an email already in use. (PR #211)
1966 - Add extra and improve existing caches (PR #212, #219, #226, #228)
1967 - Batch various storage request (PR #226, #228)
1968 - Fix bug where we didn\'t correctly log the entity that triggered the request if the request came in via an application service (PR #230)
1969 - Fix bug where we needlessly regenerated the full list of rooms an AS is interested in. (PR #232)
1970 - Add support for AS\'s to use v2\_alpha registration API (PR #210)
1971
1972 Configuration:
1973
1974 - Add `--generate-keys` that will generate any missing cert and key files in the configuration files. This is equivalent to running `--generate-config` on an existing configuration file. (PR #220)
1975 - `--generate-config` now no longer requires a `--server-name` parameter when used on existing configuration files. (PR #220)
1976 - Add `--print-pidfile` flag that controls the printing of the pid to stdout of the demonised process. (PR #213)
1977
1978 Media Repository:
1979
1980 - Fix bug where we picked a lower resolution image than requested. (PR #205)
1981 - Add support for specifying if a the media repository should dynamically thumbnail images or not. (PR #206)
1982
1983 Metrics:
1984
1985 - Add statistics from the reactor to the metrics API. (PR #224, #225)
1986
1987 Demo Homeservers:
1988
1989 - Fix starting the demo homeservers without rate-limiting enabled. (PR #182)
1990 - Fix enabling registration on demo homeservers (PR #223)
1991
1992 Changes in synapse v0.9.4-rc1 (2015-07-21)
1993 ==========================================
1994
1995 General:
1996
1997 - Add basic implementation of receipts. (SPEC-99)
1998 - Add support for configuration presets in room creation API. (PR #203)
1999 - Add auth event that limits the visibility of history for new users. (SPEC-134)
2000 - Add SAML2 login/registration support. (PR #201. Thanks Muthu Subramanian!)
2001 - Add client side key management APIs for end to end encryption. (PR #198)
2002 - Change power level semantics so that you cannot kick, ban or change the power levels of users that have a power level equal to or greater than yours. (SYN-192)
2003 - Improve performance by bulk inserting events where possible. (PR #193)
2004 - Improve performance by bulk verifying signatures where possible. (PR #194)
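The power-level rule above (SYN-192) amounts to a strict comparison. A minimal sketch of the check, for illustration only (this is not Synapse's actual authorization code):

```python
def can_act_on(actor_level: int, target_level: int) -> bool:
    # Under the changed semantics you may only kick, ban or demote users
    # whose power level is strictly below your own.
    return actor_level > target_level

assert can_act_on(100, 50)       # an admin can act on a moderator
assert not can_act_on(50, 50)    # equal power levels: not allowed
assert not can_act_on(50, 100)   # and certainly not upwards
```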
2005
2006 Configuration:
2007
2008 - Add support for including TLS certificate chains.
2009
2010 Media Repository:
2011
2012 - Add Content-Disposition headers to content repository responses. (SYN-150)
2013
2014 Changes in synapse v0.9.3 (2015-07-01)
2015 ======================================
2016
2017 No changes from v0.9.3 Release Candidate 1.
2018
2019 Changes in synapse v0.9.3-rc1 (2015-06-23)
2020 ==========================================
2021
2022 General:
2023
2024 - Fix a memory leak in the notifier. (SYN-412)
2025 - Improve performance of room initial sync. (SYN-418)
2026 - General improvements to logging.
2027 - Remove `access_token` query params from `INFO` level logging.
2028
2029 Configuration:
2030
2031 - Add support for specifying and configuring multiple listeners. (SYN-389)
2032
2033 Application services:
2034
2035 - Fix bug where synapse failed to send user queries to application services.
2036
2037 Changes in synapse v0.9.2-r2 (2015-06-15)
2038 =========================================
2039
2040 Fix packaging so that schema delta python files get included in the package.
2041
2042 Changes in synapse v0.9.2 (2015-06-12)
2043 ======================================
2044
2045 General:
2046
2047 - Use ultrajson for json (de)serialisation when a canonical encoding is not required. Ultrajson is significantly faster than simplejson in certain circumstances.
2048 - Use connection pools for outgoing HTTP connections.
2049 - Process thumbnails on separate threads.
2050
2051 Configuration:
2052
2053 - Add option, `gzip_responses`, to disable HTTP response compression.
2054
2055 Federation:
2056
2057 - Improve resilience of backfill by ensuring we fetch any missing auth events.
2058 - Improve performance of backfill and joining remote rooms by removing unnecessary computations. This included handling events we'd previously handled as well as attempting to compute the current state for outliers.
2059
2060 Changes in synapse v0.9.1 (2015-05-26)
2061 ======================================
2062
2063 General:
2064
2065 - Add support for backfilling when a client paginates. This allows servers to request history for a room from remote servers when a client tries to paginate history the server does not have - SYN-36
2066 - Fix bug where you couldn't disable non-default push rules - SYN-378
2067 - Fix `register_new_user` script - SYN-359
2068 - Improve performance of fetching events from the database; this improves both initialSync and sending of events.
2069 - Improve performance of event streams, allowing synapse to handle more simultaneous connected clients.
2070
2071 Federation:
2072
2073 - Fix bug with existing backfill implementation where it returned the wrong selection of events in some circumstances.
2074 - Improve performance of joining remote rooms.
2075
2076 Configuration:
2077
2078 - Add support for changing the bind host of the metrics listener via the `metrics_bind_host` option.
2079
2080 Changes in synapse v0.9.0-r5 (2015-05-21)
2081 =========================================
2082
2083 - Add more database caches to reduce amount of work done for each pusher. This radically reduces CPU usage when multiple pushers are set up in the same room.
2084
2085 Changes in synapse v0.9.0 (2015-05-07)
2086 ======================================
2087
2088 General:
2089
2090 - Add support for using a PostgreSQL database instead of SQLite. See [docs/postgres.rst](docs/postgres.rst) for details.
2091 - Add password change and reset APIs. See [Registration](https://github.com/matrix-org/matrix-doc/blob/master/specification/10_client_server_api.rst#registration) in the spec.
2092 - Fix memory leak due to not releasing stale notifiers - SYN-339.
2093 - Fix race in caches that occasionally caused some presence updates to be dropped - SYN-369.
2094 - Check server name has not changed on restart.
2095 - Add a sample systemd unit file and a logger configuration in contrib/systemd. Contributed by Ivan Shapovalov.
2096
2097 Federation:
2098
2099 - Add key distribution mechanisms for fetching public keys of unavailable remote home servers. See [Retrieving Server Keys](https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys) in the spec.
2100
2101 Configuration:
2102
2103 - Add support for multiple config files.
2104 - Add support for dictionaries in config files.
2105 - Remove support for specifying config options on the command line, except for:
2106 - `--daemonize` - Daemonize the home server.
2107 - `--manhole` - Turn on the twisted telnet manhole service on the given port.
2108 - `--database-path` - The path to a sqlite database to use.
2109 - `--verbose` - The verbosity level.
2110 - `--log-file` - File to log to.
2111 - `--log-config` - Python logging config file.
2112 - `--enable-registration` - Enable registration for new users.
2113
2114 Application services:
2115
2116 - Reliably retry sending of events from Synapse to application services, as per [Application Services](https://github.com/matrix-org/matrix-doc/blob/0c6bd9/specification/25_application_service_api.rst#home-server---application-service-api) spec.
2117 - Application services can no longer register via the `/register` API; instead their configuration should be saved to a file and listed in the synapse `app_service_config_files` config option. The AS configuration file has the same format as the old `/register` request. See [docs/application_services.rst](docs/application_services.rst) for more information.
2118
2119 Changes in synapse v0.8.1 (2015-03-18)
2120 ======================================
2121
2122 - Disable registration by default. New users can be added using the command `register_new_matrix_user` or by enabling registration in the config.
2123 - Add metrics to synapse. To enable metrics use config options `enable_metrics` and `metrics_port`.
2124 - Fix bug where banning only kicked the user.
2125
2126 Changes in synapse v0.8.0 (2015-03-06)
2127 ======================================
2128
2129 General:
2130
2131 - Add support for registration fallback. This is a page hosted on the server which allows a user to register for an account, regardless of what client they are using (e.g. mobile devices).
2132 - Added new default push rules and made them configurable by clients:
2133 - Suppress all notice messages.
2134 - Notify when invited to a new room.
2135 - Notify for messages that don't match any rule.
2136 - Notify on incoming call.
2137
2138 Federation:
2139
2140 - Added per-host server-side rate-limiting of incoming federation requests.
2141 - Added a `/get_missing_events/` API to federation to reduce number of `/events/` requests.
2142
2143 Configuration:
2144
2145 - Added configuration option to disable registration: `disable_registration`.
2146 - Added configuration option to change soft limit of number of open file descriptors: `soft_file_limit`.
2147 - Make `tls_private_key_path` optional when running with `no_tls`.
2148
2149 Application services:
2150
2151 - Application services can now poll on the CS API `/events` for their events, by providing their application service `access_token`.
2152 - Added exclusive namespace support to application services API.
2153
2154 Changes in synapse v0.7.1 (2015-02-19)
2155 ======================================
2156
2157 - Initial alpha implementation of parts of the Application Services API, including:
2158 - AS Registration / Unregistration
2159 - User Query API
2160 - Room Alias Query API
2161 - Push transport for receiving events.
2162 - User/Alias namespace admin control
2163 - Add cache when fetching events from remote servers to stop repeatedly fetching events with bad signatures.
2164 - Respect the per remote server retry scheme when fetching both events and server keys to reduce the number of times we send requests to dead servers.
2165 - Inform remote servers when the local server fails to handle a received event.
2166 - Turn off python bytecode generation due to problems experienced when upgrading from previous versions.
2167
2168 Changes in synapse v0.7.0 (2015-02-12)
2169 ======================================
2170
2171 - Add initial implementation of the query auth federation API, allowing servers to agree on whether an event should be allowed or rejected.
2172 - Persist events we have rejected from federation, fixing the bug where servers would keep requesting the same events.
2173 - Various federation performance improvements, including:
2174 - Add in memory caches on queries such as:
2175
2176 > - Computing the state of a room at a point in time, used for authorization on federation requests.
2177 > - Fetching events from the database.
2178 > - User's room membership, used for authorizing presence updates.
2179
2180 - Upgraded JSON library to improve parsing and serialisation speeds.
2181
2182 - Add default avatars to new user accounts using pydenticon library.
2183 - Correctly time out federation requests.
2184 - Retry federation requests against different servers.
2185 - Add support for push and push rules.
2186 - Add alpha versions of proposed new CSv2 APIs, including `/sync` API.
2187
2188 Changes in synapse 0.6.1 (2015-01-07)
2189 =====================================
2190
2191 - Major optimizations to improve performance of initial sync and event sending in large rooms (by up to 10x)
2192 - Media repository now includes a Content-Length header on media downloads.
2193 - Improve quality of thumbnails by changing resizing algorithm.
2194
2195 Changes in synapse 0.6.0 (2014-12-16)
2196 =====================================
2197
2198 - Add new API for media upload and download that supports thumbnailing.
2199 - Replicate media uploads over multiple homeservers so media is always served to clients from their local homeserver. This obsoletes the --content-addr parameter and removes confusion over accessing content directly from remote homeservers.
2200 - Implement exponential backoff when retrying federation requests when sending to remote homeservers which are offline.
2201 - Implement typing notifications.
2202 - Fix bugs where we sent events with invalid signatures because we had incorrectly persisted them.
2203 - Improve performance of database queries involving retrieving events.
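The exponential backoff mentioned above can be sketched as follows; the base delay and cap here are assumptions for illustration, not Synapse's actual retry schedule:

```python
def backoff_ms(failures: int, base_ms: int = 1000, cap_ms: int = 3_600_000) -> int:
    # Double the retry delay for each consecutive failure, up to a cap,
    # so offline homeservers are polled progressively less often.
    return min(cap_ms, base_ms * (2 ** failures))

assert backoff_ms(0) == 1000         # first retry after 1s
assert backoff_ms(3) == 8000         # 1s -> 2s -> 4s -> 8s
assert backoff_ms(100) == 3_600_000  # never wait longer than the cap
```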
2204
2205 Changes in synapse 0.5.4a (2014-12-13)
2206 ======================================
2207
2208 - Fix bug in generating the error message when a file path specified in the config doesn't exist.
2209
2210 Changes in synapse 0.5.4 (2014-12-03)
2211 =====================================
2212
2213 - Fix presence bug where some rooms did not display presence updates for remote users.
2214 - Do not log SQL timing log lines when started with "-v"
2215 - Fix potential memory leak.
2216
2217 Changes in synapse 0.5.3c (2014-12-02)
2218 ======================================
2219
2220 - Change the default value for the content_addr option to use the HTTP listener, as by default the HTTPS listener will be using a self-signed certificate.
2221
2222 Changes in synapse 0.5.3 (2014-11-27)
2223 =====================================
2224
2225 - Fix bug that caused joining a remote room to fail if a single event was not signed correctly.
2226 - Fix bug which caused servers to continuously try and fetch events from other servers.
2227
2228 Changes in synapse 0.5.2 (2014-11-26)
2229 =====================================
2230
2231 Fix major bug that caused rooms to disappear from people's initial sync.
2232
2233 Changes in synapse 0.5.1 (2014-11-26)
2234 =====================================
2235
2236 See UPGRADES.rst for specific instructions on how to upgrade.
2237
2238 > - Fix bug where we served up an Event that did not match its signatures.
2239 > - Fix regression where we no longer correctly handled the case where a homeserver receives an event for a room it doesn't recognise (but is in).
2240
2241 Changes in synapse 0.5.0 (2014-11-19)
2242 =====================================
2243
2244 This release includes changes to the federation protocol and client-server API that are not backwards compatible.
2245
2246 This release also changes the internal database schemas and so requires servers to drop their current history. See UPGRADES.rst for details.
2247
2248 Homeserver:
2249
2250 : - Add authentication and authorization to the federation protocol. Events are now signed by their originating homeservers.
2251 - Implement the new authorization model for rooms.
2252 - Split out web client into a separate repository: matrix-angular-sdk.
2253 - Change the structure of PDUs.
2254 - Fix bug where user could not join rooms via an alias containing 4-byte UTF-8 characters.
2255 - Merge concept of PDUs and Events internally.
2256 - Improve logging by adding request ids to log lines.
2257 - Implement a very basic room initial sync API.
2258 - Implement the new invite/join federation APIs.
2259
2260 Webclient:
2261
2262 : - The webclient has been moved to a separate repository.
2263
2264 Changes in synapse 0.4.2 (2014-10-31)
2265 =====================================
2266
2267 Homeserver:
2268
2269 : - Fix bugs where we did not notify users of correct presence updates.
2270 - Fix bug where we did not handle sub second event stream timeouts.
2271
2272 Webclient:
2273
2274 : - Add ability to click on messages to see JSON.
2275 - Add ability to redact messages.
2276 - Add ability to view and edit all room state JSON.
2277 - Handle incoming redactions.
2278 - Improve feedback on errors.
2279 - Fix bugs in mobile CSS.
2280 - Fix bugs with desktop notifications.
2281
2282 Changes in synapse 0.4.1 (2014-10-17)
2283 =====================================
2284
2285 Webclient:
2286
2287 : - Fix bug with display of timestamps.
2288
2289 Changes in synapse 0.4.0 (2014-10-17)
2290 =====================================
2291
2292 This release includes changes to the federation protocol and client-server API that are not backwards compatible.
2293
2294 The Matrix specification has been moved to a separate git repository: <http://github.com/matrix-org/matrix-doc>
2295
2296 You will also need an updated syutil and config. See UPGRADES.rst.
2297
2298 Homeserver:
2299
2300 : - Sign federation transactions to assert strong identity over federation.
2301 - Rename timestamp keys in PDUs and events from 'ts' and 'hsob_ts' to 'origin_server_ts'.
2302
2303 Changes in synapse 0.3.4 (2014-09-25)
2304 =====================================
2305
2306 This version adds support for using a TURN server. See docs/turn-howto.rst on how to set one up.
2307
2308 Homeserver:
2309
2310 : - Add support for redaction of messages.
2311 - Fix bug where inviting a user on a remote home server could take up to 20-30s.
2312 - Implement a get current room state API.
2313 - Add support for specifying and retrieving TURN server configuration.
2314
2315 Webclient:
2316
2317 : - Add button to send messages to users from the home page.
2318 - Add support for using TURN for VoIP calls.
2319 - Show display name change messages.
2320 - Fix bug where the client didn't get the state of a newly joined room until after it had been refreshed.
2321 - Fix bugs with tab complete.
2322 - Fix bug where holding down the down arrow caused Chrome to consume 100% CPU.
2323 - Fix bug where desktop notifications occasionally used "Undefined" as the display name.
2324 - Fix more places where we sometimes displayed room IDs incorrectly.
2325 - Fix bug which caused lag when entering text in the text box.
2326
2327 Changes in synapse 0.3.3 (2014-09-22)
2328 =====================================
2329
2330 Homeserver:
2331
2332 : - Fix bug where you continued to get events for rooms you had left.
2333
2334 Webclient:
2335
2336 : - Add support for video calls with basic UI.
2337 - Fix bug where one-to-one chats were named after your display name rather than the other person's.
2338 - Fix bug which caused lag when typing in the textarea.
2339 - Refuse to run on browsers we know won't work.
2340 - Trigger pagination when joining new rooms.
2341 - Fix bug where we sometimes didn't display invitations in recents.
2342 - Automatically join room when accepting a VoIP call.
2343 - Disable outgoing and reject incoming calls on browsers we don't support VoIP in.
2344 - Don't display desktop notifications for messages in the room you are non-idle and speaking in.
2345
2346 Changes in synapse 0.3.2 (2014-09-18)
2347 =====================================
2348
2349 Webclient:
2350
2351 : - Fix bug where an empty "bing words" list in old accounts didn't send notifications when it should have done.
2352
2353 Changes in synapse 0.3.1 (2014-09-18)
2354 =====================================
2355
2356 This is a release to hotfix v0.3.0 to fix two regressions.
2357
2358 Webclient:
2359
2360 : - Fix a regression where we sometimes displayed duplicate events.
2361 - Fix a regression where we didn't immediately remove rooms you were banned in from the recents list.
2362
2363 Changes in synapse 0.3.0 (2014-09-18)
2364 =====================================
2365
2366 See UPGRADE for information about changes to the client server API, including breaking backwards compatibility with VoIP calls and registration API.
2367
2368 Homeserver:
2369
2370 : - When a user changes their displayname or avatar the server will now update all their join states to reflect this.
2371 - The server now adds an "age" key to events to indicate how old they are. This is clock-independent, so at no point does any server or webclient have to assume its clock is in sync with everyone else's.
2372 - Fix bug where we didn't correctly pull in missing PDUs.
2373 - Fix bug where the prev_content key wasn't always returned.
2374 - Add support for password resets.
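The clock-independent "age" mechanism described above can be modelled as below (an illustrative sketch, not Synapse's implementation): the sender attaches a relative age rather than trusting absolute timestamps, and the receiver reconstructs an origin time against its own clock:

```python
def attach_age(event: dict, sender_now_ms: int) -> dict:
    # The sending server reports how old the event is, in milliseconds.
    out = dict(event)
    out["age"] = sender_now_ms - event["origin_server_ts"]
    return out

def estimate_origin_ts(event: dict, receiver_now_ms: int) -> int:
    # The receiver subtracts the age from its *own* clock, so the two
    # servers never need to agree on what time it is.
    return receiver_now_ms - event["age"]

evt = attach_age({"origin_server_ts": 1_000_000}, sender_now_ms=1_005_000)
assert evt["age"] == 5_000
# A receiver whose clock runs 1s ahead still gets a sensible local estimate:
assert estimate_origin_ts(evt, receiver_now_ms=1_006_000) == 1_001_000
```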
2375
2376 Webclient:
2377
2378 : - Improve page content loading.
2379 - Join/parts now trigger desktop notifications.
2380 - Always show room aliases in the UI if one is present.
2381 - No longer show user-count in the recents side panel.
2382 - Add up & down arrow support to the text box for message sending to step through your sent history.
2383 - Don't display notifications for our own messages.
2384 - Emotes are now formatted correctly in desktop notifications.
2385 - The recents list now differentiates between public & private rooms.
2386 - Fix bug where when switching between rooms the pagination flickered before the view jumped to the bottom of the screen.
2387 - Add bing word support.
2388
2389 Registration API:
2390
2391 : - The registration API has been overhauled to function like the login API. In practice, this means registration requests must now include the following: 'type':'m.login.password'. See UPGRADE for more information on this.
2392 - The 'user_id' key has been renamed to 'user' to better match the login API.
2393 - There is an additional login type: 'm.login.email.identity'.
2394 - The command client and web client have been updated to reflect these changes.
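For illustration, a registration request body under the overhauled API might look like the following; the field values here are placeholders and UPGRADE remains the authoritative reference:

```python
import json

# Hypothetical example body; 'alice' and the password are placeholders.
registration_request = {
    "type": "m.login.password",  # now required, mirroring the login API
    "user": "alice",             # formerly 'user_id'
    "password": "s3cret",
}
print(json.dumps(registration_request, indent=2))
```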
2395
2396 Changes in synapse 0.2.3 (2014-09-12)
2397 =====================================
2398
2399 Homeserver:
2400
2401 : - Fix bug where we stopped sending events to remote home servers if a user from that home server left, even if there were some still in the room.
2402 - Fix bugs in the state conflict resolution where it was incorrectly rejecting events.
2403
2404 Webclient:
2405
2406 : - Display room names and topics.
2407 - Allow setting/editing of room names and topics.
2408 - Display information about rooms on the main page.
2409 - Handle ban and kick events in real time.
2410 - VoIP UI and reliability improvements.
2411 - Add glare support for VoIP.
2412 - Improvements to initial startup speed.
2413 - Don't display duplicate join events.
2414 - Local echo of messages.
2415 - Differentiate between the 'sending' and 'sent' states of local echo.
2416 - Various minor bug fixes.
2417
2418 Changes in synapse 0.2.2 (2014-09-06)
2419 =====================================
2420
2421 Homeserver:
2422
2423 : - When the server returns state events it now also includes the previous content.
2424 - Add support for inviting people when creating a new room.
2425 - Make the homeserver inform the room via m.room.aliases when a new alias is added for a room.
2426 - Validate m.room.power_level events.
2427
2428 Webclient:
2429
2430 : - Add support for captchas on registration.
2431 - Handle m.room.aliases events.
2432 - Asynchronously send messages and show a local echo.
2433 - Inform the UI when a message failed to send.
2434 - Only autoscroll on receiving a new message if the user was already at the bottom of the screen.
2435 - Add support for ban/kick reasons.
2436
2437 Changes in synapse 0.2.1 (2014-09-03)
2438 =====================================
2439
2440 Homeserver:
2441
2442 : - Added support for signing up with a third party id.
2443 - Add synctl scripts.
2444 - Added rate limiting.
2445 - Add option to change the external address the content repo uses.
2446 - Presence bug fixes.
2447
2448 Webclient:
2449
2450 : - Added support for signing up with a third party id.
2451 - Added support for banning and kicking users.
2452 - Added support for displaying and setting ops.
2453 - Added support for room names.
2454 - Fix bugs with room membership event display.
2455
2456 Changes in synapse 0.2.0 (2014-09-02)
2457 =====================================
2458
2459 This update changes many configuration options, updates the database schema and mandates SSL for server-server connections.
2460
2461 Homeserver:
2462
2463 : - Require SSL for server-server connections.
2464 - Add SSL listener for client-server connections.
2465 - Add ability to use config files.
2466 - Add support for kicking/banning and power levels.
2467 - Allow setting of room names and topics on creation.
2468 - Change presence to include last seen time of the user.
2469 - Change URL path prefix to /_matrix/...
2470 - Bug fixes to presence.
2471
2472 Webclient:
2473
2474 : - Reskin the CSS for registration and login.
2475 - Various improvements to rooms CSS.
2476 - Support changes in client-server API.
2477 - Bug fixes to VoIP UI.
2478 - Various bug fixes to handling of changes to room member list.
2479
2480 Changes in synapse 0.1.2 (2014-08-29)
2481 =====================================
2482
2483 Webclient:
2484
2485 : - Add basic call state UI for VoIP calls.
2486
2487 Changes in synapse 0.1.1 (2014-08-29)
2488 =====================================
2489
2490 Homeserver:
2491
2492 : - Fix bug that caused the event stream to not notify some clients about changes.
2493
2494 Changes in synapse 0.1.0 (2014-08-29)
2495 =====================================
2496
2497 Presence has been reenabled in this release.
2498
2499 Homeserver:
2500
2501 : - Update client to server API, including:
2502     - Use a more consistent url scheme.
2503     - Provide more useful information in the initial sync api.
2507
2508 - Change the presence handling to be much more efficient.
2509 - Change the presence server-to-server API to not require explicit polling of all users who share a room with a user.
2510 - Fix races in the event streaming logic.
2511
2512 Webclient:
2513
2514 : - Update to use new client to server API.
2515 - Add basic VoIP support.
2516 - Add idle timers that change your status to away.
2517 - Add recent rooms column when viewing a room.
2518 - Various network efficiency improvements.
2519 - Add basic mobile browser support.
2520 - Add a settings page.
2521
2522 Changes in synapse 0.0.1 (2014-08-22)
2523 =====================================
2524
2525 Presence has been disabled in this release due to a bug that caused the homeserver to spam other remote homeservers.
2526
2527 Homeserver:
2528
2529 : - Completely change the database schema to support generic event types.
2530 - Improve presence reliability.
2531 - Improve reliability of joining remote rooms.
2532 - Fix bug where room join events were duplicated.
2533 - Improve initial sync API to return more information to the client.
2534 - Stop generating fake messages for room membership events.
2535
2536 Webclient:
2537
2538 : - Add tab completion of names.
2539 - Add ability to upload and send images.
2540 - Add profile pages.
2541 - Improve CSS layout of room.
2542 - Disambiguate identical display names.
2543 - Don't fetch remote users' display names and avatars individually.
2544 - Use the new initial sync API to reduce number of round trips to the homeserver.
2545 - Change url scheme to use room aliases instead of room ids where known.
2546 - Increase longpoll timeout.
2547
2548 Changes in synapse 0.0.0 (2014-08-13)
2549 =====================================
2550
2551 > - Initial alpha release
CHANGES.rst (0 additions, 2887 deletions)
0 Synapse 0.33.1 (2018-08-02)
1 ===========================
2
3 SECURITY FIXES
4 --------------
5
6 - Fix a potential issue where servers could request events for rooms they have not joined. (`#3641 <https://github.com/matrix-org/synapse/issues/3641>`_)
7 - Fix a potential issue where users could see events in private rooms before they joined. (`#3642 <https://github.com/matrix-org/synapse/issues/3642>`_)
8
9
10 Synapse 0.33.0 (2018-07-19)
11 ===========================
12
13 Bugfixes
14 --------
15
16 - Disable a noisy warning about logcontexts. (`#3561 <https://github.com/matrix-org/synapse/issues/3561>`_)
17
18
19 Synapse 0.33.0rc1 (2018-07-18)
20 ==============================
21
22 Features
23 --------
24
25 - Enforce the specified API for report_event. (`#3316 <https://github.com/matrix-org/synapse/issues/3316>`_)
26 - Include CPU time from database threads in request/block metrics. (`#3496 <https://github.com/matrix-org/synapse/issues/3496>`_, `#3501 <https://github.com/matrix-org/synapse/issues/3501>`_)
27 - Add CPU metrics for _fetch_event_list. (`#3497 <https://github.com/matrix-org/synapse/issues/3497>`_)
28 - Optimisation to make handling incoming federation requests more efficient. (`#3541 <https://github.com/matrix-org/synapse/issues/3541>`_)
29
30
31 Bugfixes
32 --------
33
34 - Fix a significant performance regression in /sync. (`#3505 <https://github.com/matrix-org/synapse/issues/3505>`_, `#3521 <https://github.com/matrix-org/synapse/issues/3521>`_, `#3530 <https://github.com/matrix-org/synapse/issues/3530>`_, `#3544 <https://github.com/matrix-org/synapse/issues/3544>`_)
35 - Use more portable syntax in our use of the attrs package, widening the supported versions. (`#3498 <https://github.com/matrix-org/synapse/issues/3498>`_)
36 - Fix queued federation requests being processed in the wrong order. (`#3533 <https://github.com/matrix-org/synapse/issues/3533>`_)
37 - Ensure that erasure requests are correctly honoured for publicly accessible rooms when accessed over federation. (`#3546 <https://github.com/matrix-org/synapse/issues/3546>`_)
38
39
40 Misc
41 ----
42
43 - Refactoring to improve testability. (`#3351 <https://github.com/matrix-org/synapse/issues/3351>`_, `#3499 <https://github.com/matrix-org/synapse/issues/3499>`_)
44 - Use ``isort`` to sort imports. (`#3463 <https://github.com/matrix-org/synapse/issues/3463>`_, `#3464 <https://github.com/matrix-org/synapse/issues/3464>`_, `#3540 <https://github.com/matrix-org/synapse/issues/3540>`_)
45 - Use parse and asserts from http.servlet. (`#3534 <https://github.com/matrix-org/synapse/issues/3534>`_, `#3535 <https://github.com/matrix-org/synapse/issues/3535>`_).
46
47
48 Synapse 0.32.2 (2018-07-07)
49 ===========================
50
51 Bugfixes
52 --------
53
54 - Amend the Python dependencies to depend on attrs from PyPI, not attr (`#3492 <https://github.com/matrix-org/synapse/issues/3492>`_)
55
56
57 Synapse 0.32.1 (2018-07-06)
58 ===========================
59
60 Bugfixes
61 --------
62
63 - Add explicit dependency on netaddr (`#3488 <https://github.com/matrix-org/synapse/issues/3488>`_)
64
65
66 Changes in synapse v0.32.0 (2018-07-06)
67 ===========================================
68 No changes since 0.32.0rc1
69
70 Synapse 0.32.0rc1 (2018-07-05)
71 ==============================
72
73 Features
74 --------
75
76 - Add blacklist & whitelist of servers allowed to send events to a room via the ``m.room.server_acl`` event.
77 - Cache factor override system for specific caches (`#3334 <https://github.com/matrix-org/synapse/issues/3334>`_)
78 - Add metrics to track appservice transactions (`#3344 <https://github.com/matrix-org/synapse/issues/3344>`_)
79 - Try to log more helpful info when a sig verification fails (`#3372 <https://github.com/matrix-org/synapse/issues/3372>`_)
80 - Synapse now uses the best performing JSON encoder/decoder according to your runtime (simplejson on CPython, stdlib json on PyPy). (`#3462 <https://github.com/matrix-org/synapse/issues/3462>`_)
81 - Add optional ip_range_whitelist param to AS registration files to lock AS IP access (`#3465 <https://github.com/matrix-org/synapse/issues/3465>`_)
82 - Reject invalid server names in federation requests (`#3480 <https://github.com/matrix-org/synapse/issues/3480>`_)
83 - Reject invalid server names in homeserver.yaml (`#3483 <https://github.com/matrix-org/synapse/issues/3483>`_)
84
85
86 Bugfixes
87 --------
88
- Strip access_token from outgoing requests (`#3327 <https://github.com/matrix-org/synapse/issues/3327>`_)
- Redact AS tokens in logs (`#3349 <https://github.com/matrix-org/synapse/issues/3349>`_)
- Fix federation backfill from SQLite servers (`#3355 <https://github.com/matrix-org/synapse/issues/3355>`_)
- Fix event-purge-by-ts admin API (`#3363 <https://github.com/matrix-org/synapse/issues/3363>`_)
- Fix event filtering in get_missing_events handler (`#3371 <https://github.com/matrix-org/synapse/issues/3371>`_)
- Synapse is now stricter about accepting events for which it cannot retrieve the prev_events. (`#3456 <https://github.com/matrix-org/synapse/issues/3456>`_)
- Fix bug where synapse would explode when receiving unicode in the HTTP User-Agent header (`#3470 <https://github.com/matrix-org/synapse/issues/3470>`_)
- Invalidate cache on correct thread to avoid race (`#3473 <https://github.com/matrix-org/synapse/issues/3473>`_)


Improved Documentation
----------------------

- ``doc/postgres.rst``: fix display of the last command block. Thanks to @ArchangeGabriel! (`#3340 <https://github.com/matrix-org/synapse/issues/3340>`_)


Deprecations and Removals
-------------------------

- Remove was_forgotten_at (`#3324 <https://github.com/matrix-org/synapse/issues/3324>`_)


Misc
----

- `#3332 <https://github.com/matrix-org/synapse/issues/3332>`_, `#3341 <https://github.com/matrix-org/synapse/issues/3341>`_, `#3347 <https://github.com/matrix-org/synapse/issues/3347>`_, `#3348 <https://github.com/matrix-org/synapse/issues/3348>`_, `#3356 <https://github.com/matrix-org/synapse/issues/3356>`_, `#3385 <https://github.com/matrix-org/synapse/issues/3385>`_, `#3446 <https://github.com/matrix-org/synapse/issues/3446>`_, `#3447 <https://github.com/matrix-org/synapse/issues/3447>`_, `#3467 <https://github.com/matrix-org/synapse/issues/3467>`_, `#3474 <https://github.com/matrix-org/synapse/issues/3474>`_


Changes in synapse v0.31.2 (2018-06-14)
=======================================

SECURITY UPDATE: Prevent unauthorised users from setting state events in a room
when there is no ``m.room.power_levels`` event in force in the room. (PR #3397)

Discussion around the Matrix Spec change proposal for this change can be
followed at https://github.com/matrix-org/matrix-doc/issues/1304.

Changes in synapse v0.31.1 (2018-06-08)
=======================================

v0.31.1 fixes a security bug in the ``get_missing_events`` federation API
where event visibility rules were not applied correctly.

We are not aware of it being actively exploited, but please upgrade as soon
as possible.

Bug Fixes:

* Fix event filtering in get_missing_events handler (PR #3371)

Changes in synapse v0.31.0 (2018-06-06)
=======================================

The most notable change from v0.30.0 is the switch to the Python Prometheus
library to improve system stats reporting. WARNING: this changes a number of
prometheus metrics in a backwards-incompatible manner. For more details, see
`docs/metrics-howto.rst <docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310>`_.

Bug Fixes:

* Fix metric documentation tables (PR #3341)
* Fix LaterGauge error handling (694968f)
* Fix replication metrics (b7e7fd2)

Changes in synapse v0.31.0-rc1 (2018-06-04)
===========================================

Features:

* Switch to the Python Prometheus library (PR #3256, #3274)
* Let users leave the server notice room after joining (PR #3287)


Changes:

* daily user type phone home stats (PR #3264)
* Use iter* methods for _filter_events_for_server (PR #3267)
* Docs on consent bits (PR #3268)
* Remove users from user directory on deactivate (PR #3277)
* Avoid sending consent notice to guest users (PR #3288)
* disable CPUMetrics if no /proc/self/stat (PR #3299)
* Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
* Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
* Reduce stuck read-receipts: ignore depth when updating (PR #3318)
* Put python's logs into Trial when running unit tests (PR #3319)

Changes, python 3 migration:

* Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
* replace some iteritems with six (PR #3244) Thanks to @NotAFile!
* Add batch_iter to utils (PR #3245) Thanks to @NotAFile!
* use repr, not str (PR #3246) Thanks to @NotAFile!
* Misc Python3 fixes (PR #3247) Thanks to @NotAFile!
* Py3 storage/_base.py (PR #3278) Thanks to @NotAFile!
* more six iteritems (PR #3279) Thanks to @NotAFile!
* More Misc. py3 fixes (PR #3280) Thanks to @NotAFile!
* remaining isinstance fixes (PR #3281) Thanks to @NotAFile!
* py3-ize state.py (PR #3283) Thanks to @NotAFile!
* extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
* use memoryview in py3 (PR #3303) Thanks to @NotAFile!

Bugs:

* Fix federation backfill bugs (PR #3261)
* federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!


Changes in synapse v0.30.0 (2018-05-24)
=======================================

'Server Notices' are a new feature introduced in Synapse 0.30. They provide a
channel whereby server administrators can send messages to users on the server.

They are used as part of communicating the server policies (see
``docs/consent_tracking.md``), but the intention is that they may also find a
use for features such as a "Message of the day".

This feature is specific to Synapse, but uses standard Matrix communication
mechanisms, so it should work with any Matrix client. For more details see
``docs/server_notices.md``.

Further Server Notices/Consent Tracking Support:

* Allow overriding the server_notices user's avatar (PR #3273)
* Use the localpart in the consent uri (PR #3272)
* Support for putting %(consent_uri)s in messages (PR #3271)
* Block attempts to send server notices to remote users (PR #3270)
* Docs on consent bits (PR #3268)


Changes in synapse v0.30.0-rc1 (2018-05-23)
===========================================

Server Notices/Consent Tracking Support:

* ConsentResource to gather policy consent from users (PR #3213)
* Move RoomCreationHandler out of synapse.handlers.Handlers (PR #3225)
* Infrastructure for a server notices room (PR #3232)
* Send users a server notice about consent (PR #3236)
* Reject attempts to send event before privacy consent is given (PR #3257)
* Add a 'has_consented' template var to consent forms (PR #3262)
* Fix dependency on jinja2 (PR #3263)

Features:

* Cohort analytics (PR #3163, #3241, #3251)
* Add lxml to docker image for web previews (PR #3239) Thanks to @ptman!
* Add in flight request metrics (PR #3252)

Changes:

* Remove unused `update_external_syncs` (PR #3233)
* Use stream rather than depth ordering for push actions (PR #3212)
* Make purge_history operate on tokens (PR #3221)
* Don't support limitless pagination (PR #3265)

Bug Fixes:

* Fix logcontext resource usage tracking (PR #3258)
* Fix error in handling receipts (PR #3235)
* Stop the transaction cache caching failures (PR #3255)

Changes in synapse v0.29.1 (2018-05-17)
=======================================

Changes:

* Update docker documentation (PR #3222)

Changes in synapse v0.29.0 (2018-05-16)
=======================================

No changes since v0.29.0-rc1

Changes in synapse v0.29.0-rc1 (2018-05-14)
===========================================

Notable changes include a Dockerfile for running Synapse (thanks to @kaiyou!)
and the closing of a spec bug in the Client-Server API, along with further
preparation for the Python 3 migration.

Potentially breaking change:

* Make Client-Server API return 401 for invalid token (PR #3161).

This changes the Client-Server spec to return a 401 error code instead of 403
when the access token is unrecognised. This is the behaviour required by the
specification, but some clients may be relying on the old, incorrect
behaviour.

Thanks to @NotAFile for fixing this.
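To illustrate the distinction, a hypothetical client-side handler might treat
the two status codes like this (an illustrative sketch only; the function and
messages are made up, and this is not Synapse or spec code):

```python
# Hypothetical client-side handling of the status codes described above.
# 401 now signals an unrecognised access token (re-authentication needed),
# while 403 continues to mean "authenticated, but forbidden".
def describe_error(status_code):
    if status_code == 401:
        return "access token unrecognised: log in again"
    if status_code == 403:
        return "authenticated, but not allowed to do this"
    return "unexpected error"
```

Clients that previously treated 403 as "token invalid" may need updating to
handle 401 as well.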

Features:

* Add a Dockerfile for synapse (PR #2846) Thanks to @kaiyou!

Changes - General:

* nuke-room-from-db.sh: added postgresql option and help (PR #2337) Thanks to @rubo77!
* Part user from rooms on account deactivate (PR #3201)
* Make 'unexpected logging context' into warnings (PR #3007)
* Set Server header in SynapseRequest (PR #3208)
* remove duplicates from groups tables (PR #3129)
* Improve exception handling for background processes (PR #3138)
* Add missing consumeErrors to improve exception handling (PR #3139)
* reraise exceptions more carefully (PR #3142)
* Remove redundant call to preserve_fn (PR #3143)
* Trap exceptions thrown within run_in_background (PR #3144)

Changes - Refactors:

* Refactor /context to reuse pagination storage functions (PR #3193)
* Refactor recent events func to use pagination func (PR #3195)
* Refactor pagination DB API to return concrete type (PR #3196)
* Refactor get_recent_events_for_room return type (PR #3198)
* Refactor sync APIs to reuse pagination API (PR #3199)
* Remove unused code path from member change DB func (PR #3200)
* Refactor request handling wrappers (PR #3203)
* transaction_id, destination defined twice (PR #3209) Thanks to @damir-manapov!
* Refactor event storage to prepare for changes in state calculations (PR #3141)
* Set Server header in SynapseRequest (PR #3208)
* Use deferred.addTimeout instead of time_bound_deferred (PR #3127, #3178)
* Use run_in_background in preference to preserve_fn (PR #3140)

Changes - Python 3 migration:

* Construct HMAC as bytes on py3 (PR #3156) Thanks to @NotAFile!
* run config tests on py3 (PR #3159) Thanks to @NotAFile!
* Open certificate files as bytes (PR #3084) Thanks to @NotAFile!
* Open config file in non-bytes mode (PR #3085) Thanks to @NotAFile!
* Make event properties raise AttributeError instead (PR #3102) Thanks to @NotAFile!
* Use six.moves.urlparse (PR #3108) Thanks to @NotAFile!
* Add py3 tests to tox with folders that work (PR #3145) Thanks to @NotAFile!
* Don't yield in list comprehensions (PR #3150) Thanks to @NotAFile!
* Move more xrange to six (PR #3151) Thanks to @NotAFile!
* make imports local (PR #3152) Thanks to @NotAFile!
* move httplib import to six (PR #3153) Thanks to @NotAFile!
* Replace stringIO imports with six (PR #3154, #3168) Thanks to @NotAFile!
* more bytes strings (PR #3155) Thanks to @NotAFile!

Bug Fixes:

* synapse fails to start under Twisted >= 18.4 (PR #3157)
* Fix a class of logcontext leaks (PR #3170)
* Fix a couple of logcontext leaks in unit tests (PR #3172)
* Fix logcontext leak in media repo (PR #3174)
* Escape label values in prometheus metrics (PR #3175, #3186)
* Fix 'Unhandled Error' logs with Twisted 18.4 (PR #3182) Thanks to @Half-Shot!
* Fix logcontext leaks in rate limiter (PR #3183)
* notifications: Convert next_token to string according to the spec (PR #3190) Thanks to @mujx!
* nuke-room-from-db.sh: fix deletion from search table (PR #3194) Thanks to @rubo77!
* add guard for None on purge_history api (PR #3160) Thanks to @krombel!

Changes in synapse v0.28.1 (2018-05-01)
=======================================

SECURITY UPDATE

* Clamp the allowed values of event depth received over federation to be
  [0, 2^63 - 1]. This mitigates an attack where malicious events
  injected with depth = 2^63 - 1 render rooms unusable. Depth is used to
  determine the cosmetic ordering of events within a room, and so the ordering
  of events in such a room will default to using stream_ordering rather than
  depth (topological_ordering).

  This is a temporary solution to mitigate abuse in the wild, whilst a long
  term solution is being implemented to improve how the depth parameter is
  used.

  Full details at
  https://docs.google.com/document/d/1I3fi2S-XnpO45qrpCsowZv8P8dHcNZ4fsBsbOW7KABI

* Pin Twisted to <18.4 until we stop using the private _OpenSSLECCurve API.
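The clamping described in the first bullet can be sketched as follows (an
illustrative sketch of the mitigation only, not Synapse's actual code):

```python
# The allowed range for event depth, per the security fix above.
MAX_DEPTH = 2**63 - 1

def clamp_depth(depth):
    """Clamp a depth received over federation into [0, 2^63 - 1]."""
    return max(0, min(int(depth), MAX_DEPTH))
```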


Changes in synapse v0.28.0 (2018-04-26)
=======================================

Bug Fixes:

* Fix quarantine media admin API and search reindex (PR #3130)
* Fix media admin APIs (PR #3134)


Changes in synapse v0.28.0-rc1 (2018-04-24)
===========================================

Minor performance improvements to federation sending, plus bug fixes.

(Note: this release does not include the delta state resolution implementation
discussed on Matrix Live.)

Features:

* Add metrics for event processing lag (PR #3090)
* Add metrics for ResponseCache (PR #3092)

Changes:

* Synapse on PyPy (PR #2760) Thanks to @Valodim!
* move handling of auto_join_rooms to RegisterHandler (PR #2996) Thanks to @krombel!
* Improve handling of SRV records for federation connections (PR #3016) Thanks to @silkeh!
* Document the behaviour of ResponseCache (PR #3059)
* Preparation for py3 (PR #3061, #3073, #3074, #3075, #3103, #3104, #3106, #3107, #3109, #3110) Thanks to @NotAFile!
* update prometheus dashboard to use new metric names (PR #3069) Thanks to @krombel!
* use python3-compatible prints (PR #3074) Thanks to @NotAFile!
* Send federation events concurrently (PR #3078)
* Limit concurrent event sends for a room (PR #3079)
* Improve R30 stat definition (PR #3086)
* Send events to ASes concurrently (PR #3088)
* Refactor ResponseCache usage (PR #3093)
* Clarify that SRV may not point to a CNAME (PR #3100) Thanks to @silkeh!
* Use str(e) instead of e.message (PR #3103) Thanks to @NotAFile!
* Use six.itervalues in some places (PR #3106) Thanks to @NotAFile!
* Refactor store.have_events (PR #3117)

Bug Fixes:

* Return 401 for invalid access_token on logout (PR #2938) Thanks to @dklug!
* Return a 404 rather than a 500 on rejoining empty rooms (PR #3080)
* fix federation_domain_whitelist (PR #3099)
* Avoid creating events with huge numbers of prev_events (PR #3113)
* Reject events which have lots of prev_events (PR #3118)


Changes in synapse v0.27.4 (2018-04-13)
=======================================

Changes:

* Update canonicaljson dependency (#3095)


Changes in synapse v0.27.3 (2018-04-11)
=======================================

Bug fixes:

* URL quote path segments over federation (#3082)

Changes in synapse v0.27.3-rc2 (2018-04-09)
===========================================

v0.27.3-rc1 used a stale version of the develop branch, so the changelog
overstates the functionality. v0.27.3-rc2 is up to date; rc1 should be ignored.

Changes in synapse v0.27.3-rc1 (2018-04-09)
===========================================

Notable changes include API support for joinability of groups, plus new
metrics and phone-home stats. The phone-home stats give better visibility of
system usage, so we can tune Synapse to work better for all users rather than
just from our own experience with matrix.org. We also now record the 'r30'
stat, which is the measure we use to track overall growth of the Matrix
ecosystem. It counts the number of native 30-day retained users, defined as:

* users who created their accounts more than 30 days ago;
* who were last seen at most 30 days ago;
* whose account creation and last_seen are more than 30 days apart.
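The definition above can be sketched as a small function (illustrative only;
``users`` and the tuple layout are hypothetical, and this is not how Synapse
actually computes the stat):

```python
from datetime import timedelta

def count_r30(users, now):
    """Count native 30-day retained users per the definition above.

    `users` is a hypothetical iterable of (created_at, last_seen) datetimes.
    """
    month = timedelta(days=30)
    return sum(
        1
        for created_at, last_seen in users
        if now - created_at > month         # account older than 30 days
        and now - last_seen <= month        # seen within the last 30 days
        and last_seen - created_at > month  # retained past the first 30 days
    )
```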


Features:

* Add joinability for groups (PR #3045)
* Implement group join API (PR #3046)
* Add counter metrics for calculating state delta (PR #3033)
* R30 stats (PR #3041)
* Measure time it takes to calculate state group ID (PR #3043)
* Add basic performance statistics to phone home (PR #3044)
* Add response size metrics (PR #3071)
* phone home cache size configurations (PR #3063)

Changes:

* Add a blurb explaining the main synapse worker (PR #2886) Thanks to @turt2live!
* Replace old style error catching with 'as' keyword (PR #3000) Thanks to @NotAFile!
* Use .iter* to avoid copies in StateHandler (PR #3006)
* Linearize calls to _generate_user_id (PR #3029)
* Remove last usage of ujson (PR #3030)
* Use simplejson throughout (PR #3048)
* Use static JSONEncoders (PR #3049)
* Remove uses of events.content (PR #3060)
* Improve database cache performance (PR #3068)

Bug fixes:

* Add room_id to the response of `rooms/{roomId}/join` (PR #2986) Thanks to @jplatte!
* Fix replication after switch to simplejson (PR #3015)
* 404 correctly on missing paths via NoResource (PR #3022)
* Fix error when claiming e2e keys from offline servers (PR #3034)
* fix tests/storage/test_user_directory.py (PR #3042)
* use PUT instead of POST for federating groups/m.join_policy (PR #3070) Thanks to @krombel!
* postgres port script: fix state_groups_pkey error (PR #3072)


Changes in synapse v0.27.2 (2018-03-26)
=======================================

Bug fixes:

* Fix bug which broke TCP replication between workers (PR #3015)


Changes in synapse v0.27.1 (2018-03-26)
=======================================

Meta release, as v0.27.0 temporarily pointed to the wrong commit.


Changes in synapse v0.27.0 (2018-03-26)
=======================================

No changes since v0.27.0-rc2


Changes in synapse v0.27.0-rc2 (2018-03-19)
===========================================

Pulls in v0.26.1

Bug fixes:

* Fix bug introduced in v0.27.0-rc1 that causes much increased memory usage in state cache (PR #3005)


Changes in synapse v0.26.1 (2018-03-15)
=======================================

Bug fixes:

* Fix bug where an invalid event caused server to stop functioning correctly,
  due to parsing and serializing bugs in ujson library (PR #3008)


Changes in synapse v0.27.0-rc1 (2018-03-14)
===========================================

The common case for running Synapse is not to run separate workers, but for
those that do, be aware that synctl no longer starts the main synapse when
using the ``-a`` option with workers. A new worker file should be added with
``worker_app: synapse.app.homeserver``.
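For example, a minimal worker file along these lines might look like the
following (the file path in the comment is illustrative; only the
``worker_app`` line is prescribed above):

```yaml
# e.g. workers/homeserver.yaml (illustrative path)
worker_app: synapse.app.homeserver
```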

This release also begins the process of renaming a number of the metrics
reported to prometheus. See `docs/metrics-howto.rst <docs/metrics-howto.rst#block-and-response-metrics-renamed-for-0-27-0>`_.
Note that the v0.28.0 release will remove the deprecated metric names.

Features:

* Add ability for ASes to override message send time (PR #2754)
* Add support for custom storage providers for media repository (PR #2867, #2777, #2783, #2789, #2791, #2804, #2812, #2814, #2857, #2868, #2767)
* Add purge API features, see `docs/admin_api/purge_history_api.rst <docs/admin_api/purge_history_api.rst>`_ for full details (PR #2858, #2867, #2882, #2946, #2962, #2943)
* Add support for whitelisting 3PIDs that users can register. (PR #2813)
* Add ``/room/{id}/event/{id}`` API (PR #2766)
* Add an admin API to get all the media in a room (PR #2818) Thanks to @turt2live!
* Add ``federation_domain_whitelist`` option (PR #2820, #2821)

Changes:

* Continue to factor out processing from main process and into worker processes. See updated `docs/workers.rst <docs/workers.rst>`_ (PR #2892 - #2904, #2913, #2920 - #2926, #2947, #2847, #2854, #2872, #2873, #2874, #2928, #2929, #2934, #2856, #2976 - #2984, #2987 - #2989, #2991 - #2993, #2995, #2784)
* Ensure state cache is used when persisting events (PR #2864, #2871, #2802, #2835, #2836, #2841, #2842, #2849)
* Change the default config to bind on both IPv4 and IPv6 on all platforms (PR #2435) Thanks to @silkeh!
* No longer require a specific version of saml2 (PR #2695) Thanks to @okurz!
* Remove ``verbosity``/``log_file`` from generated config (PR #2755)
* Add and improve metrics and logging (PR #2770, #2778, #2785, #2786, #2787, #2793, #2794, #2795, #2809, #2810, #2833, #2834, #2844, #2965, #2927, #2975, #2790, #2796, #2838)
* When using synctl with workers, don't start the main synapse automatically (PR #2774)
* Minor performance improvements (PR #2773, #2792)
* Use a connection pool for non-federation outbound connections (PR #2817)
* Make it possible to run unit tests against postgres (PR #2829)
* Update pynacl dependency to 1.2.1 or higher (PR #2888) Thanks to @bachp!
* Remove ability for AS users to call /events and /sync (PR #2948)
* Use bcrypt.checkpw (PR #2949) Thanks to @krombel!

Bug fixes:

* Fix broken ``ldap_config`` config option (PR #2683) Thanks to @seckrv!
* Fix error message when user is not allowed to unban (PR #2761) Thanks to @turt2live!
* Fix publicised groups GET API (singular) over federation (PR #2772)
* Fix user directory when using ``user_directory_search_all_users`` config option (PR #2803, #2831)
* Fix error on ``/publicRooms`` when no rooms exist (PR #2827)
* Fix bug in quarantine_media (PR #2837)
* Fix url_previews when no Content-Type is returned from URL (PR #2845)
* Fix rare race in sync API when joining room (PR #2944)
* Fix slow event search, switch back from GIST to GIN indexes (PR #2769, #2848)


Changes in synapse v0.26.0 (2018-01-05)
=======================================

No changes since v0.26.0-rc1


Changes in synapse v0.26.0-rc1 (2017-12-13)
===========================================

Features:

* Add ability for ASes to publicise groups for their users (PR #2686)
* Add all local users to the user_directory and optionally search them (PR #2723)
* Add support for custom login types for validating users (PR #2729)

Changes:

* Update example Prometheus config to new format (PR #2648) Thanks to @krombel!
* Rename redact_content option to include_content in Push API (PR #2650)
* Declare support for r0.3.0 (PR #2677)
* Improve upserts (PR #2684, #2688, #2689, #2713)
* Improve documentation of workers (PR #2700)
* Improve tracebacks on exceptions (PR #2705)
* Allow guest access to group APIs for reading (PR #2715)
* Support for posting content in federation_client script (PR #2716)
* Delete devices and pushers on logouts etc (PR #2722)

Bug fixes:

* Fix database port script (PR #2673)
* Fix internal server error on login with ldap_auth_provider (PR #2678) Thanks to @jkolo!
* Fix error on sqlite 3.7 (PR #2697)
* Fix OPTIONS on preview_url (PR #2707)
* Fix error handling on dns lookup (PR #2711)
* Fix wrong avatars when inviting multiple users when creating room (PR #2717)
* Fix 500 when joining matrix-dev (PR #2719)


Changes in synapse v0.25.1 (2017-11-17)
=======================================

Bug fixes:

* Fix login with LDAP and other password provider modules (PR #2678). Thanks to @jkolo!

Changes in synapse v0.25.0 (2017-11-15)
=======================================

Bug fixes:

* Fix port script (PR #2673)


Changes in synapse v0.25.0-rc1 (2017-11-14)
===========================================

Features:

* Add is_public to groups table to allow for private groups (PR #2582)
* Add a route for determining who you are (PR #2668) Thanks to @turt2live!
* Add more features to the password providers (PR #2608, #2610, #2620, #2622, #2623, #2624, #2626, #2628, #2629)
* Add a hook for custom rest endpoints (PR #2627)
* Add API to update group room visibility (PR #2651)

Changes:

* Ignore <noscript> tags when generating URL preview descriptions (PR #2576) Thanks to @maximevaillancourt!
* Register some /unstable endpoints in /r0 as well (PR #2579) Thanks to @krombel!
* Support /keys/upload on /r0 as well as /unstable (PR #2585)
* Front-end proxy: pass through auth header (PR #2586)
* Allow ASes to deactivate their own users (PR #2589)
* Remove refresh tokens (PR #2613)
* Automatically set default displayname on register (PR #2617)
* Log login requests (PR #2618)
* Always return `is_public` in the `/groups/:group_id/rooms` API (PR #2630)
* Avoid no-op media deletes (PR #2637) Thanks to @spantaleev!
* Fix various embarrassing typos around user_directory and add some doc. (PR #2643)
* Return whether a user is an admin within a group (PR #2647)
* Namespace visibility options for groups (PR #2657)
* Downcase UserIDs on registration (PR #2662)
* Cache failures when fetching URL previews (PR #2669)

Bug fixes:

* Fix port script (PR #2577)
* Fix error when running synapse with no logfile (PR #2581)
* Fix UI auth when deleting devices (PR #2591)
* Fix typo when checking if user is invited to group (PR #2599)
* Fix the port script to drop NUL values in all tables (PR #2611)
* Fix appservices being backlogged and not receiving new events due to a bug in notify_interested_services (PR #2631) Thanks to @xyzz!
* Fix updating rooms avatar/display name when modified by admin (PR #2636) Thanks to @farialima!
* Fix bug in state group storage (PR #2649)
* Fix 500 on invalid utf-8 in request (PR #2663)


Changes in synapse v0.24.1 (2017-10-24)
=======================================

Bug fixes:

* Fix updating group profiles over federation (PR #2567)


Changes in synapse v0.24.0 (2017-10-23)
=======================================

No changes since v0.24.0-rc1


Changes in synapse v0.24.0-rc1 (2017-10-19)
===========================================

Features:

* Add Group Server (PR #2352, #2363, #2374, #2377, #2378, #2382, #2410, #2426, #2430, #2454, #2471, #2472, #2544)
* Add support for channel notifications (PR #2501)
* Add basic implementation of backup media store (PR #2538)
* Add config option to auto-join new users to rooms (PR #2545)

Changes:

* Make the spam checker a module (PR #2474)
* Delete expired url cache data (PR #2478)
* Ignore incoming events for rooms that we have left (PR #2490)
* Allow spam checker to reject invites too (PR #2492)
* Add room creation checks to spam checker (PR #2495)
* Spam checking: add the invitee to user_may_invite (PR #2502)
* Process events from federation for different rooms in parallel (PR #2520)
* Allow error strings from spam checker (PR #2531)
* Improve error handling for missing files in config (PR #2551)

Bug fixes:

* Fix handling SERVFAILs when doing AAAA lookups for federation (PR #2477)
* Fix incompatibility with newer versions of ujson (PR #2483) Thanks to @jeremycline!
* Fix notification keywords that start/end with non-word chars (PR #2500)
* Fix stack overflow and logcontexts from linearizer (PR #2532)
* Fix 500 error when fields missing from power_levels event (PR #2552)
* Fix 500 error when we get an error handling a PDU (PR #2553)


Changes in synapse v0.23.1 (2017-10-02)
=======================================

Changes:

* Make 'affinity' package optional, as it is not supported on some platforms


Changes in synapse v0.23.0 (2017-10-02)
=======================================

No changes since v0.23.0-rc2


Changes in synapse v0.23.0-rc2 (2017-09-26)
===========================================

Bug fixes:

* Fix regression in performance of syncs (PR #2470)


Changes in synapse v0.23.0-rc1 (2017-09-25)
===========================================

Features:

* Add a frontend proxy worker (PR #2344)
* Add support for event_id_only push format (PR #2450)
* Add a PoC for filtering spammy events (PR #2456)
* Add a config option to block all room invites (PR #2457)

Changes:

* Use bcrypt module instead of py-bcrypt (PR #2288) Thanks to @kyrias!
* Improve performance of generating push notifications (PR #2343, #2357, #2365, #2366, #2371)
* Improve DB performance for device list handling in sync (PR #2362)
* Include a sample prometheus config (PR #2416)
* Document known to work postgres version (PR #2433) Thanks to @ptman!

Bug fixes:

* Fix caching error in the push evaluator (PR #2332)
* Fix bug where pusherpool didn't start and broke some rooms (PR #2342)
* Fix port script for user directory tables (PR #2375)
* Fix device lists notifications when user rejoins a room (PR #2443, #2449)
* Fix sync to always send down current state events in timeline (PR #2451)
* Fix bug where guest users were incorrectly kicked (PR #2453)
* Fix bug talking to IPv6 only servers using SRV records (PR #2462)


Changes in synapse v0.22.1 (2017-07-06)
=======================================

Bug fixes:

* Fix bug where pusher pool didn't start and caused issues when
  interacting with some rooms (PR #2342)


Changes in synapse v0.22.0 (2017-07-06)
=======================================

No changes since v0.22.0-rc2


Changes in synapse v0.22.0-rc2 (2017-07-04)
===========================================

Changes:

* Improve performance of storing user IPs (PR #2307, #2308)
* Slightly improve performance of verifying access tokens (PR #2320)
* Slightly improve performance of event persistence (PR #2321)
* Increase default cache factor size from 0.1 to 0.5 (PR #2330)

Bug fixes:

* Fix bug with storing registration sessions that caused frequent CPU churn (PR #2319)


Changes in synapse v0.22.0-rc1 (2017-06-26)
===========================================

Features:

* Add a user directory API (PR #2252, and many more)
* Add shutdown room API to remove room from local server (PR #2291)
* Add API to quarantine media (PR #2292)
* Add new config option to not send event contents to push servers (PR #2301) Thanks to @cjdelisle!

Changes:

* Various performance fixes (PR #2177, #2233, #2230, #2238, #2248, #2256, #2274)
* Deduplicate sync filters (PR #2219) Thanks to @krombel!
* Correct a typo in UPGRADE.rst (PR #2231) Thanks to @aaronraimist!
* Add count of one time keys to sync stream (PR #2237)
* Only store event_auth for state events (PR #2247)
* Store URL cache preview downloads separately (PR #2299)

Bug fixes:

* Fix users not getting notifications when AS listened to that user_id (PR #2216) Thanks to @slipeer!
* Fix users without push set up not getting notifications after joining rooms (PR #2236)
* Fix preview url API to trim long descriptions (PR #2243)
* Fix bug where we used cached but unpersisted state group as prev group, resulting in broken state on restart (PR #2263)
* Fix removing of pushers when using workers (PR #2267)
* Fix CORS headers to allow Authorization header (PR #2285) Thanks to @krombel!


Changes in synapse v0.21.1 (2017-06-15)
=======================================

Bug fixes:

* Fix bug in anonymous usage statistic reporting (PR #2281)


Changes in synapse v0.21.0 (2017-05-18)
=======================================

No changes since v0.21.0-rc3


Changes in synapse v0.21.0-rc3 (2017-05-17)
===========================================

Features:

* Add per user rate-limiting overrides (PR #2208)
* Add config option to limit maximum number of events requested by ``/sync``
  and ``/messages`` (PR #2221) Thanks to @psaavedra!

Changes:

* Various small performance fixes (PR #2201, #2202, #2224, #2226, #2227, #2228, #2229)
* Update username availability checker API (PR #2209, #2213)
* When purging, don't de-delta state groups we're about to delete (PR #2214)
* Documentation to check synapse version (PR #2215) Thanks to @hamber-dick!
* Add an index to event_search to speed up purge history API (PR #2218)

Bug fixes:

* Fix API to allow clients to upload one-time-keys with new sigs (PR #2206)


Changes in synapse v0.21.0-rc2 (2017-05-08)
===========================================

Changes:

* Always mark remotes as up if we receive a signed request from them (PR #2190)

Bug fixes:

* Fix bug where users got pushed for rooms they had muted (PR #2200)


Changes in synapse v0.21.0-rc1 (2017-05-08)
===========================================

Features:

* Add username availability checker API (PR #2183)
* Add read marker API (PR #2120)

Changes:

* Enable guest access for the 3pl/3pid APIs (PR #1986)
* Add setting to support TURN for guests (PR #2011)
* Various performance improvements (PR #2075, #2076, #2080, #2083, #2108, #2158, #2176, #2185)
* Make synctl a bit more user friendly (PR #2078, #2127) Thanks @APwhitehat!
* Replace HTTP replication with TCP replication (PR #2082, #2097, #2098, #2099, #2103, #2014, #2016, #2115, #2116, #2117)
* Support authenticated SMTP (PR #2102) Thanks @DanielDent!
* Add a counter metric for successfully-sent transactions (PR #2121)
* Propagate errors sensibly from proxied IS requests (PR #2147)
* Add more granular event send metrics (PR #2178)

Bug fixes:

* Fix nuke-room script to work with current schema (PR #1927) Thanks @zuckschwerdt!
* Fix db port script to not assume postgres tables are in the public schema (PR #2024) Thanks @jerrykan!
* Fix getting latest device IP for user with no devices (PR #2118)
* Fix rejection of invites to unreachable servers (PR #2145)
* Fix code for reporting old verify keys in synapse (PR #2156)
* Fix invite state to always include all events (PR #2163)
* Fix bug where synapse would always fetch state for any missing event (PR #2170)
* Fix a leak with timed out HTTP connections (PR #2180)
* Fix bug where we didn't time out HTTP requests to ASes (PR #2192)

Docs:

* Clarify doc for SQLite to PostgreSQL port (PR #1961) Thanks @benhylau!
* Fix typo in synctl help (PR #2107) Thanks @HarHarLinks!
* ``web_client_location`` documentation fix (PR #2131) Thanks @matthewjwolff!
* Update README.rst with FreeBSD changes (PR #2132) Thanks @feld!
* Clarify setting up metrics (PR #2149) Thanks @encks!


948 Changes in synapse v0.20.0 (2017-04-11)
949 =======================================
950
951 Bug fixes:
952
953 * Fix joining rooms over federation where not all servers in the room saw the
954 new server had joined (PR #2094)
955
956
957 Changes in synapse v0.20.0-rc1 (2017-03-30)
958 ===========================================
959
960 Features:
961
962 * Add delete_devices API (PR #1993)
963 * Add phone number registration/login support (PR #1994, #2055)
964
965
966 Changes:
967
968 * Use JSONSchema for validation of filters. Thanks @pik! (PR #1783)
969 * Reread log config on SIGHUP (PR #1982)
970 * Speed up public room list (PR #1989)
971 * Add helpful texts to logger config options (PR #1990)
972 * Minor ``/sync`` performance improvements. (PR #2002, #2013, #2022)
973 * Add some debug to help diagnose weird federation issue (PR #2035)
974 * Correctly limit retries for all federation requests (PR #2050, #2061)
975 * Don't lock table when persisting new one time keys (PR #2053)
976 * Reduce some CPU work on DB threads (PR #2054)
977 * Cache hosts in room (PR #2060)
978 * Batch sending of device list pokes (PR #2063)
979 * Speed up persist event path in certain edge cases (PR #2070)
980
981
982 Bug fixes:
983
984 * Fix bug where current_state_events renamed to current_state_ids (PR #1849)
985 * Fix routing loop when fetching remote media (PR #1992)
986 * Fix current_state_events table to not lie (PR #1996)
987 * Fix CAS login to handle PartialDownloadError (PR #1997)
988 * Fix assertion to stop transaction queue getting wedged (PR #2010)
989 * Fix presence to fallback to last_active_ts if it beats the last sync time.
990 Thanks @Half-Shot! (PR #2014)
991 * Fix bug when federation received a PDU while a room join is in progress (PR
992 #2016)
993 * Fix resetting state on rejected events (PR #2025)
994 * Fix installation issues in readme. Thanks @ricco386 (PR #2037)
995 * Fix caching of remote servers' signature keys (PR #2042)
996 * Fix some leaking log context (PR #2048, #2049, #2057, #2058)
997 * Fix rejection of invites not reaching sync (PR #2056)
998
999
1000
1001 Changes in synapse v0.19.3 (2017-03-20)
1002 =======================================
1003
1004 No changes since v0.19.3-rc2
1005
1006
1007 Changes in synapse v0.19.3-rc2 (2017-03-13)
1008 ===========================================
1009
1010 Bug fixes:
1011
1012 * Fix bug in handling of incoming device list updates over federation.
1013
1014
1015
1016 Changes in synapse v0.19.3-rc1 (2017-03-08)
1017 ===========================================
1018
1019 Features:
1020
1021 * Add some administration functionalities. Thanks to morteza-araby! (PR #1784)
1022
1023
1024 Changes:
1025
1026 * Reduce database table sizes (PR #1873, #1916, #1923, #1963)
1027 * Update contrib/ to not use syutil. Thanks to andrewshadura! (PR #1907)
1028 * Don't fetch current state when sending an event in common case (PR #1955)
1029
1030
1031 Bug fixes:
1032
1033 * Fix synapse_port_db failure. Thanks to Pneumaticat! (PR #1904)
1034 * Fix caching to not cache error responses (PR #1913)
1035 * Fix APIs to make kick & ban reasons work (PR #1917)
1036 * Fix bugs in the /keys/changes api (PR #1921)
1037 * Fix bug where users couldn't forget rooms they were banned from (PR #1922)
1038 * Fix issue with long language values in pushers API (PR #1925)
1039 * Fix a race in transaction queue (PR #1930)
1040 * Fix dynamic thumbnailing to preserve aspect ratio. Thanks to jkolo! (PR
1041 #1945)
1042 * Fix device list update to not constantly resync (PR #1964)
1043 * Fix potential for huge memory usage when getting devices that have
1044   changed (PR #1969)
1045
1046
1047
1048 Changes in synapse v0.19.2 (2017-02-20)
1049 =======================================
1050
1051 * Fix bug with event visibility check in /context/ API. Thanks to Tokodomo for
1052 pointing it out! (PR #1929)
1053
1054
1055 Changes in synapse v0.19.1 (2017-02-09)
1056 =======================================
1057
1058 * Fix bug where state was incorrectly reset in a room when synapse received an
1059 event over federation that did not pass auth checks (PR #1892)
1060
1061
1062 Changes in synapse v0.19.0 (2017-02-04)
1063 =======================================
1064
1065 No changes since RC 4.
1066
1067
1068 Changes in synapse v0.19.0-rc4 (2017-02-02)
1069 ===========================================
1070
1071 * Bump cache sizes for common membership queries (PR #1879)
1072
1073
1074 Changes in synapse v0.19.0-rc3 (2017-02-02)
1075 ===========================================
1076
1077 * Fix email push in pusher worker (PR #1875)
1078 * Make presence.get_new_events a bit faster (PR #1876)
1079 * Make /keys/changes a bit more performant (PR #1877)
1080
1081
1082 Changes in synapse v0.19.0-rc2 (2017-02-02)
1083 ===========================================
1084
1085 * Include newly joined users in /keys/changes API (PR #1872)
1086
1087
1088 Changes in synapse v0.19.0-rc1 (2017-02-02)
1089 ===========================================
1090
1091 Features:
1092
1093 * Add support for specifying multiple bind addresses (PR #1709, #1712, #1795,
1094 #1835). Thanks to @kyrias!
1095 * Add /account/3pid/delete endpoint (PR #1714)
1096 * Add config option to configure the Riot URL used in notification emails (PR
1097 #1811). Thanks to @aperezdc!
1098 * Add username and password config options for turn server (PR #1832). Thanks
1099 to @xsteadfastx!
1100 * Implement device lists updates over federation (PR #1857, #1861, #1864)
1101 * Implement /keys/changes (PR #1869, #1872)
1102
1103
1104 Changes:
1105
1106 * Improve IPv6 support (PR #1696). Thanks to @kyrias and @glyph!
1107 * Log which files we saved attachments to in the media_repository (PR #1791)
1108 * Linearize updates to membership via PUT /state/ to better handle multiple
1109 joins (PR #1787)
1110 * Limit number of entries to prefill from cache on startup (PR #1792)
1111 * Remove full_twisted_stacktraces option (PR #1802)
1112 * Measure size of some caches by sum of the size of cached values (PR #1815)
1113 * Measure metrics of string_cache (PR #1821)
1114 * Reduce logging verbosity (PR #1822, #1823, #1824)
1115 * Don't clobber a displayname or avatar_url if provided by an m.room.member
1116 event (PR #1852)
1117 * Better handle 401/404 response for federation /send/ (PR #1866, #1871)
1118
1119
1120 Fixes:
1121
1122 * Fix ability to change password to a non-ascii one (PR #1711)
1123 * Fix push getting stuck due to looking at the wrong view of state (PR #1820)
1124 * Fix email address comparison to be case insensitive (PR #1827)
1125 * Fix occasional inconsistencies of room membership (PR #1836, #1840)
1126
1127
1128 Performance:
1129
1130 * Don't block messages sending on bumping presence (PR #1789)
1131 * Change device_inbox stream index to include user (PR #1793)
1132 * Optimise state resolution (PR #1818)
1133 * Use DB cache of joined users for presence (PR #1862)
1134 * Add an index to make membership queries faster (PR #1867)
1135
1136
1137 Changes in synapse v0.18.7 (2017-01-09)
1138 =======================================
1139
1140 No changes from v0.18.7-rc2
1141
1142
1143 Changes in synapse v0.18.7-rc2 (2017-01-07)
1144 ===========================================
1145
1146 Bug fixes:
1147
1148 * Fix error in rc1's logic for discarding invalid inbound traffic, which was
1149   incorrectly discarding missing events
1150
1151
1152 Changes in synapse v0.18.7-rc1 (2017-01-06)
1153 ===========================================
1154
1155 Bug fixes:
1156
1157 * Fix error in PR #1764 to actually fix the nightmare bug #1753.
1158 * Improve deadlock logging further
1159 * Discard inbound federation traffic from invalid domains, to immunise
1160 against #1753
1161
1162
1163 Changes in synapse v0.18.6 (2017-01-06)
1164 =======================================
1165
1166 Bug fixes:
1167
1168 * Fix bug when checking if a guest user is allowed to join a room (PR #1772)
1169 Thanks to Patrik Oldsberg for diagnosing and the fix!
1170
1171
1172 Changes in synapse v0.18.6-rc3 (2017-01-05)
1173 ===========================================
1174
1175 Bug fixes:
1176
1177 * Fix bug where we failed to send ban events to the banned server (PR #1758)
1178 * Fix bug where we sent events that didn't originate on this server to
1179   other servers (PR #1764)
1180 * Fix bug where processing an event from a remote server took a long time
1181 because we were making long HTTP requests (PR #1765, PR #1744)
1182
1183 Changes:
1184
1185 * Improve logging for debugging deadlocks (PR #1766, PR #1767)
1186
1187
1188 Changes in synapse v0.18.6-rc2 (2016-12-30)
1189 ===========================================
1190
1191 Bug fixes:
1192
1193 * Fix memory leak in twisted by initialising logging correctly (PR #1731)
1194 * Fix bug where fetching missing events took an unacceptable amount of time in
1195 large rooms (PR #1734)
1196
1197
1198 Changes in synapse v0.18.6-rc1 (2016-12-29)
1199 ===========================================
1200
1201 Bug fixes:
1202
1203 * Make sure that outbound connections are closed (PR #1725)
1204
1205
1206 Changes in synapse v0.18.5 (2016-12-16)
1207 =======================================
1208
1209 Bug fixes:
1210
1211 * Fix federation /backfill returning events it shouldn't (PR #1700)
1212 * Fix crash in url preview (PR #1701)
1213
1214
1215 Changes in synapse v0.18.5-rc3 (2016-12-13)
1216 ===========================================
1217
1218 Features:
1219
1220 * Add support for E2E for guests (PR #1653)
1221 * Add new API appservice specific public room list (PR #1676)
1222 * Add new room membership APIs (PR #1680)
1223
1224
1225 Changes:
1226
1227 * Enable guest access for private rooms by default (PR #653)
1228 * Limit the number of events that can be created on a given room concurrently
1229 (PR #1620)
1230 * Log the args that we have on UI auth completion (PR #1649)
1231 * Stop generating refresh_tokens (PR #1654)
1232 * Stop putting a time caveat on access tokens (PR #1656)
1233 * Remove unspecced GET endpoints for e2e keys (PR #1694)
1234
1235
1236 Bug fixes:
1237
1238 * Fix handling of 500 and 429's over federation (PR #1650)
1239 * Fix Content-Type header parsing (PR #1660)
1240 * Fix error when previewing sites that include unicode, thanks to kyrias (PR
1241 #1664)
1242 * Fix some cases where we drop read receipts (PR #1678)
1243 * Fix bug where calls to ``/sync`` didn't correctly timeout (PR #1683)
1244 * Fix bug where E2E key query would fail if a single remote host failed (PR
1245 #1686)
1246
1247
1248
1249 Changes in synapse v0.18.5-rc2 (2016-11-24)
1250 ===========================================
1251
1252 Bug fixes:
1253
1254 * Don't send old events over federation, fixes bug in -rc1.
1255
1256 Changes in synapse v0.18.5-rc1 (2016-11-24)
1257 ===========================================
1258
1259 Features:
1260
1261 * Implement "event_fields" in filters (PR #1638)
1262
1263 Changes:
1264
1265 * Use external ldap auth package (PR #1628)
1266 * Split out federation transaction sending to a worker (PR #1635)
1267 * Fail with a coherent error message if `/sync?filter=` is invalid (PR #1636)
1268 * More efficient notif count queries (PR #1644)
1269
1270
1271 Changes in synapse v0.18.4 (2016-11-22)
1272 =======================================
1273
1274 Bug fixes:
1275
1276 * Add workaround for buggy clients that fail to register (PR #1632)
1277
1278
1279 Changes in synapse v0.18.4-rc1 (2016-11-14)
1280 ===========================================
1281
1282 Changes:
1283
1284 * Various database efficiency improvements (PR #1188, #1192)
1285 * Update default config to blacklist more internal IPs, thanks to Euan Kemp (PR
1286 #1198)
1287 * Allow specifying duration in minutes in config, thanks to Daniel Dent (PR
1288 #1625)
1289
1290
1291 Bug fixes:
1292
1293 * Fix media repo to set CORS headers on responses (PR #1190)
1294 * Fix registration to not error on non-ascii passwords (PR #1191)
1295 * Fix create event code to limit the number of prev_events (PR #1615)
1296 * Fix bug in transaction ID deduplication (PR #1624)
1297
1298
1299 Changes in synapse v0.18.3 (2016-11-08)
1300 =======================================
1301
1302 SECURITY UPDATE
1303
1304 Explicitly require authentication when using LDAP3. This is the default on
1305 versions of ``ldap3`` above 1.0, but some distributions will package an older
1306 version.
1307
1308 If you are using LDAP3 login and have a version of ``ldap3`` older than 1.0 it
1309 is **CRITICAL to upgrade**.
1310
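
As a quick sanity check, a version comparison along these lines can tell you
whether an installed ``ldap3`` predates 1.0 (the helper and version strings
below are illustrative only and are not part of synapse itself):

```python
def version_tuple(version):
    """Parse a dotted version string like "0.9.8" into a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:2])


def ldap3_needs_upgrade(installed_version):
    """Return True if the given ldap3 version predates the 1.0 security fix."""
    return version_tuple(installed_version) < (1, 0)


print(ldap3_needs_upgrade("0.9.8"))  # True: upgrade required
print(ldap3_needs_upgrade("2.4.1"))  # False: safe
```

In practice the installed version can be read from ``ldap3.__version__`` or
``pip show ldap3``.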
1311
1312 Changes in synapse v0.18.2 (2016-11-01)
1313 =======================================
1314
1315 No changes since v0.18.2-rc5
1316
1317
1318 Changes in synapse v0.18.2-rc5 (2016-10-28)
1319 ===========================================
1320
1321 Bug fixes:
1322
1323 * Fix prometheus process metrics in worker processes (PR #1184)
1324
1325
1326 Changes in synapse v0.18.2-rc4 (2016-10-27)
1327 ===========================================
1328
1329 Bug fixes:
1330
1331 * Fix ``user_threepids`` schema delta, which in some instances prevented
1332 startup after upgrade (PR #1183)
1333
1334
1335 Changes in synapse v0.18.2-rc3 (2016-10-27)
1336 ===========================================
1337
1338 Changes:
1339
1340 * Allow clients to supply access tokens as headers (PR #1098)
1341 * Clarify error codes for GET /filter/, thanks to Alexander Maznev (PR #1164)
1342 * Make password reset email field case insensitive (PR #1170)
1343 * Reduce redundant database work in email pusher (PR #1174)
1344 * Allow configurable rate limiting per AS (PR #1175)
1345 * Check whether to ratelimit sooner to avoid work (PR #1176)
1346 * Standardise prometheus metrics (PR #1177)
1347
1348
1349 Bug fixes:
1350
1351 * Fix incredibly slow back pagination query (PR #1178)
1352 * Fix infinite typing bug (PR #1179)
1353
1354
1355 Changes in synapse v0.18.2-rc2 (2016-10-25)
1356 ===========================================
1357
1358 (This release did not include the changes advertised and was identical to RC1)
1359
1360
1361 Changes in synapse v0.18.2-rc1 (2016-10-17)
1362 ===========================================
1363
1364 Changes:
1365
1366 * Remove redundant event_auth index (PR #1113)
1367 * Reduce DB hits for replication (PR #1141)
1368 * Implement pluggable password auth (PR #1155)
1369 * Remove rate limiting from app service senders and fix get_or_create_user
1370 requester, thanks to Patrik Oldsberg (PR #1157)
1371 * window.postmessage for Interactive Auth fallback (PR #1159)
1372 * Use sys.executable instead of hardcoded python, thanks to Pedro Larroy
1373 (PR #1162)
1374 * Add config option for adding additional TLS fingerprints (PR #1167)
1375 * User-interactive auth on delete device (PR #1168)
1376
1377
1378 Bug fixes:
1379
1380 * Fix not being allowed to set your own state_key, thanks to Patrik Oldsberg
1381 (PR #1150)
1382 * Fix interactive auth to return 401 from for incorrect password (PR #1160,
1383 #1166)
1384 * Fix email push notifs being dropped (PR #1169)
1385
1386
1387
1388 Changes in synapse v0.18.1 (2016-10-05)
1389 =======================================
1390
1391 No changes since v0.18.1-rc1
1392
1393
1394 Changes in synapse v0.18.1-rc1 (2016-09-30)
1395 ===========================================
1396
1397 Features:
1398
1399 * Add total_room_count_estimate to ``/publicRooms`` (PR #1133)
1400
1401
1402 Changes:
1403
1404 * Time out typing over federation (PR #1140)
1405 * Restructure LDAP authentication (PR #1153)
1406
1407
1408 Bug fixes:
1409
1410 * Fix 3pid invites when server is already in the room (PR #1136)
1411 * Fix upgrading with SQLite taking lots of CPU for a few days
1412 after upgrade (PR #1144)
1413 * Fix upgrading from very old database versions (PR #1145)
1414 * Fix port script to work with recently added tables (PR #1146)
1415
1416
1417 Changes in synapse v0.18.0 (2016-09-19)
1418 =======================================
1419
1420 The release includes major changes to the state storage database schemas, which
1421 significantly reduce database size. Synapse will attempt to upgrade the current
1422 data in the background. Servers with a large SQLite database may experience
1423 degraded performance while this upgrade is in progress, therefore you may
1424 want to consider migrating to Postgres before upgrading very large SQLite
1425 databases.
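
If you do migrate to Postgres first, the ``synapse_port_db`` script (referenced
elsewhere in this changelog) reads the target database settings from a
homeserver config file; a sketch of the relevant section, with purely
illustrative credentials and names, looks like:

```yaml
# Illustrative postgres settings for the config file passed to
# synapse_port_db; user, password and database names are examples only.
database:
  name: psycopg2
  args:
    user: synapse_user
    password: CHANGEME
    database: synapse
    host: localhost
```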
1426
1427
1428 Changes:
1429
1430 * Make public room search case insensitive (PR #1127)
1431
1432
1433 Bug fixes:
1434
1435 * Fix and clean up publicRooms pagination (PR #1129)
1436
1437
1438 Changes in synapse v0.18.0-rc1 (2016-09-16)
1439 ===========================================
1440
1441 Features:
1442
1443 * Add ``only=highlight`` on ``/notifications`` (PR #1081)
1444 * Add server param to /publicRooms (PR #1082)
1445 * Allow clients to ask for the whole of a single state event (PR #1094)
1446 * Add is_direct param to /createRoom (PR #1108)
1447 * Add pagination support to publicRooms (PR #1121)
1448 * Add very basic filter API to /publicRooms (PR #1126)
1449 * Add basic direct to device messaging support for E2E (PR #1074, #1084, #1104,
1450 #1111)
1451
1452
1453 Changes:
1454
1455 * Move to storing state_groups_state as deltas, greatly reducing DB size (PR
1456 #1065)
1457 * Reduce amount of state pulled out of the DB during common requests (PR #1069)
1458 * Allow PDF to be rendered from media repo (PR #1071)
1459 * Reindex state_groups_state after pruning (PR #1085)
1460 * Clobber EDUs in send queue (PR #1095)
1461 * Conform better to the CAS protocol specification (PR #1100)
1462 * Limit how often we ask for keys from dead servers (PR #1114)
1463
1464
1465 Bug fixes:
1466
1467 * Fix /notifications API when used with ``from`` param (PR #1080)
1468 * Fix backfill when cannot find an event. (PR #1107)
1469
1470
1471 Changes in synapse v0.17.3 (2016-09-09)
1472 =======================================
1473
1474 This release fixes a major bug that stopped servers from handling rooms with
1475 over 1000 members.
1476
1477
1478 Changes in synapse v0.17.2 (2016-09-08)
1479 =======================================
1480
1481 This release contains security bug fixes. Please upgrade.
1482
1483
1484 No changes since v0.17.2-rc1
1485
1486
1487 Changes in synapse v0.17.2-rc1 (2016-09-05)
1488 ===========================================
1489
1490 Features:
1491
1492 * Start adding store-and-forward direct-to-device messaging (PR #1046, #1050,
1493 #1062, #1066)
1494
1495
1496 Changes:
1497
1498 * Avoid pulling the full state of a room out so often (PR #1047, #1049, #1063,
1499 #1068)
1500 * Don't notify for online to online presence transitions. (PR #1054)
1501 * Occasionally persist unpersisted presence updates (PR #1055)
1502 * Allow application services to have an optional 'url' (PR #1056)
1503 * Clean up old sent transactions from DB (PR #1059)
1504
1505
1506 Bug fixes:
1507
1508 * Fix None check in backfill (PR #1043)
1509 * Fix membership changes to be idempotent (PR #1067)
1510 * Fix bug in get_pdu where it would sometimes return events with incorrect
1511 signature
1512
1513
1514
1515 Changes in synapse v0.17.1 (2016-08-24)
1516 =======================================
1517
1518 Changes:
1519
1520 * Delete old received_transactions rows (PR #1038)
1521 * Pass through user-supplied content in /join/$room_id (PR #1039)
1522
1523
1524 Bug fixes:
1525
1526 * Fix bug with backfill (PR #1040)
1527
1528
1529 Changes in synapse v0.17.1-rc1 (2016-08-22)
1530 ===========================================
1531
1532 Features:
1533
1534 * Add notification API (PR #1028)
1535
1536
1537 Changes:
1538
1539 * Don't print stack traces when failing to get remote keys (PR #996)
1540 * Various federation /event/ perf improvements (PR #998)
1541 * Only process one local membership event per room at a time (PR #1005)
1542 * Move default display name push rule (PR #1011, #1023)
1543 * Fix up preview URL API. Add tests. (PR #1015)
1544 * Set ``Content-Security-Policy`` on media repo (PR #1021)
1545 * Make notify_interested_services faster (PR #1022)
1546 * Add usage stats to prometheus monitoring (PR #1037)
1547
1548
1549 Bug fixes:
1550
1551 * Fix token login (PR #993)
1552 * Fix CAS login (PR #994, #995)
1553 * Fix /sync to not clobber status_msg (PR #997)
1554 * Fix redacted state events to include prev_content (PR #1003)
1555 * Fix some bugs in the auth/ldap handler (PR #1007)
1556 * Fix backfill request to limit URI length, so that remotes don't reject the
1557 requests due to path length limits (PR #1012)
1558 * Fix AS push code to not send duplicate events (PR #1025)
1559
1560
1561
1562 Changes in synapse v0.17.0 (2016-08-08)
1563 =======================================
1564
1565 This release contains significant security bug fixes regarding authenticating
1566 events received over federation. PLEASE UPGRADE.
1567
1568 This release changes the LDAP configuration format in a backwards incompatible
1569 way, see PR #843 for details.
1570
1571
1572 Changes:
1573
1574 * Add federation /version API (PR #990)
1575 * Make psutil dependency optional (PR #992)
1576
1577
1578 Bug fixes:
1579
1580 * Fix URL preview API to exclude HTML comments in description (PR #988)
1581 * Fix error handling of remote joins (PR #991)
1582
1583
1584 Changes in synapse v0.17.0-rc4 (2016-08-05)
1585 ===========================================
1586
1587 Changes:
1588
1589 * Change the way we summarize URLs when previewing (PR #973)
1590 * Add new ``/state_ids/`` federation API (PR #979)
1591 * Speed up processing of ``/state/`` response (PR #986)
1592
1593 Bug fixes:
1594
1595 * Fix event persistence when event has already been partially persisted
1596 (PR #975, #983, #985)
1597 * Fix port script to also copy across backfilled events (PR #982)
1598
1599
1600 Changes in synapse v0.17.0-rc3 (2016-08-02)
1601 ===========================================
1602
1603 Changes:
1604
1605 * Forbid non-ASes from registering users whose names begin with '_' (PR #958)
1606 * Add some basic admin API docs (PR #963)
1607
1608
1609 Bug fixes:
1610
1611 * Send the correct host header when fetching keys (PR #941)
1612 * Fix joining a room that has missing auth events (PR #964)
1613 * Fix various push bugs (PR #966, #970)
1614 * Fix adding emails on registration (PR #968)
1615
1616
1617 Changes in synapse v0.17.0-rc2 (2016-08-02)
1618 ===========================================
1619
1620 (This release did not include the changes advertised and was identical to RC1)
1621
1622
1623 Changes in synapse v0.17.0-rc1 (2016-07-28)
1624 ===========================================
1625
1626 This release changes the LDAP configuration format in a backwards incompatible
1627 way, see PR #843 for details.
1628
1629
1630 Features:
1631
1632 * Add purge_media_cache admin API (PR #902)
1633 * Add deactivate account admin API (PR #903)
1634 * Add optional pepper to password hashing (PR #907, #910 by KentShikama)
1635 * Add an admin option to shared secret registration (breaks backwards compat)
1636 (PR #909)
1637 * Add purge local room history API (PR #911, #923, #924)
1638 * Add requestToken endpoints (PR #915)
1639 * Add an /account/deactivate endpoint (PR #921)
1640 * Add filter param to /messages. Add 'contains_url' to filter. (PR #922)
1641 * Add device_id support to /login (PR #929)
1642 * Add device_id support to /v2/register flow. (PR #937, #942)
1643 * Add GET /devices endpoint (PR #939, #944)
1644 * Add GET /device/{deviceId} (PR #943)
1645 * Add update and delete APIs for devices (PR #949)
1646
1647
1648 Changes:
1649
1650 * Rewrite LDAP Authentication against ldap3 (PR #843 by mweinelt)
1651 * Linearize some federation endpoints based on (origin, room_id) (PR #879)
1652 * Remove the legacy v0 content upload API. (PR #888)
1653 * Use similar naming we use in email notifs for push (PR #894)
1654 * Optionally include password hash in createUser endpoint (PR #905 by
1655 KentShikama)
1656 * Use a query that postgresql optimises better for get_events_around (PR #906)
1657 * Fall back to 'username' if 'user' is not given for appservice registration.
1658 (PR #927 by Half-Shot)
1659 * Add metrics for psutil derived memory usage (PR #936)
1660 * Record device_id in client_ips (PR #938)
1661 * Send the correct host header when fetching keys (PR #941)
1662 * Log the hostname the reCAPTCHA was completed on (PR #946)
1663 * Make the device id on e2e key upload optional (PR #956)
1664 * Add r0.2.0 to the "supported versions" list (PR #960)
1665 * Don't include name of room for invites in push (PR #961)
1666
1667
1668 Bug fixes:
1669
1670 * Fix substitution failure in mail template (PR #887)
1671 * Put most recent 20 messages in email notif (PR #892)
1672 * Ensure that the guest user is in the database when upgrading accounts
1673 (PR #914)
1674 * Fix various edge cases in auth handling (PR #919)
1675 * Fix 500 ISE when sending alias event without a state_key (PR #925)
1676 * Fix bug where we stored rejections in the state_group, persist all
1677 rejections (PR #948)
1678 * Fix lack of check of if the user is banned when handling 3pid invites
1679 (PR #952)
1680 * Fix a couple of bugs in the transaction and keyring code (PR #954, #955)
1681
1682
1683
1684 Changes in synapse v0.16.1-r1 (2016-07-08)
1685 ==========================================
1686
1687 THIS IS A CRITICAL SECURITY UPDATE.
1688
1689 This fixes a bug which allowed users' accounts to be accessed by unauthorised
1690 users.
1691
1692 Changes in synapse v0.16.1 (2016-06-20)
1693 =======================================
1694
1695 Bug fixes:
1696
1697 * Fix assorted bugs in ``/preview_url`` (PR #872)
1698 * Fix TypeError when setting unicode passwords (PR #873)
1699
1700
1701 Performance improvements:
1702
1703 * Turn ``use_frozen_events`` off by default (PR #877)
1704 * Disable responding with canonical json for federation (PR #878)
1705
1706
1707 Changes in synapse v0.16.1-rc1 (2016-06-15)
1708 ===========================================
1709
1710 Features: None
1711
1712 Changes:
1713
1714 * Log requester for ``/publicRoom`` endpoints when possible (PR #856)
1715 * 502 on ``/thumbnail`` when can't connect to remote server (PR #862)
1716 * Linearize fetching of gaps on incoming events (PR #871)
1717
1718
1719 Bug fixes:
1720
1721 * Fix bug where rooms where marked as published by default (PR #857)
1722 * Fix bug where joining room with an event with invalid sender (PR #868)
1723 * Fix bug where backfilled events were sent down sync streams (PR #869)
1724 * Fix bug where outgoing connections could wedge indefinitely, causing push
1725 notifications to be unreliable (PR #870)
1726
1727
1728 Performance improvements:
1729
1730 * Improve ``/publicRooms`` performance (PR #859)
1731
1732
1733 Changes in synapse v0.16.0 (2016-06-09)
1734 =======================================
1735
1736 NB: As of v0.14 all AS config files must have an ID field.
1737
1738
1739 Bug fixes:
1740
1741 * Don't make rooms published by default (PR #857)
1742
1743 Changes in synapse v0.16.0-rc2 (2016-06-08)
1744 ===========================================
1745
1746 Features:
1747
1748 * Add configuration option for tuning GC via ``gc.set_threshold`` (PR #849)
1749
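Under the hood, that option is a thin wrapper over the standard library call;
a minimal illustration (the threshold values here are arbitrary examples, not
synapse's defaults) is:

```python
import gc

# gc.set_threshold controls how many allocations in each generation
# trigger a collection; smaller numbers mean more frequent GC runs.
gc.set_threshold(700, 10, 10)
print(gc.get_threshold())  # (700, 10, 10)
```
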
1750 Changes:
1751
1752 * Record metrics about GC (PR #771, #847, #852)
1753 * Add metric counter for number of persisted events (PR #841)
1754
1755 Bug fixes:
1756
1757 * Fix 'From' header in email notifications (PR #843)
1758 * Fix presence where timeouts were not being fired for the first 8h after
1759 restarts (PR #842)
1760 * Fix bug where synapse sent malformed transactions to AS's when retrying
1761 transactions (Commits 310197b, 8437906)
1762
1763 Performance improvements:
1764
1765 * Remove event fetching from DB threads (PR #835)
1766 * Change the way we cache events (PR #836)
1767 * Add events to cache when we persist them (PR #840)
1768
1769
1770 Changes in synapse v0.16.0-rc1 (2016-06-03)
1771 ===========================================
1772
1773 Version 0.15 was not released. See v0.15.0-rc1 below for additional changes.
1774
1775 Features:
1776
1777 * Add email notifications for missed messages (PR #759, #786, #799, #810, #815,
1778 #821)
1779 * Add a ``url_preview_ip_range_whitelist`` config param (PR #760)
1780 * Add /report endpoint (PR #762)
1781 * Add basic ignore user API (PR #763)
1782 * Add an openidish mechanism for proving that you own a given user_id (PR #765)
1783 * Allow clients to specify a server_name to avoid 'No known servers' (PR #794)
1784 * Add secondary_directory_servers option to fetch room list from other servers
1785 (PR #808, #813)
1786
1787 Changes:
1788
1789 * Report per request metrics for all of the things using request_handler (PR
1790 #756)
1791 * Correctly handle ``NULL`` password hashes from the database (PR #775)
1792 * Allow receipts for events we haven't seen in the db (PR #784)
1793 * Make synctl read a cache factor from config file (PR #785)
1794 * Increment badge count per missed convo, not per msg (PR #793)
1795 * Special case m.room.third_party_invite event auth to match invites (PR #814)
1796
1797
1798 Bug fixes:
1799
1800 * Fix typo in event_auth servlet path (PR #757)
1801 * Fix password reset (PR #758)
1802
1803
1804 Performance improvements:
1805
1806 * Reduce database inserts when sending transactions (PR #767)
1807 * Queue events by room for persistence (PR #768)
1808 * Add cache to ``get_user_by_id`` (PR #772)
1809 * Add and use ``get_domain_from_id`` (PR #773)
1810 * Use tree cache for ``get_linearized_receipts_for_room`` (PR #779)
1811 * Remove unused indices (PR #782)
1812 * Add caches to ``bulk_get_push_rules*`` (PR #804)
1813 * Cache ``get_event_reference_hashes`` (PR #806)
1814 * Add ``get_users_with_read_receipts_in_room`` cache (PR #809)
1815 * Use state to calculate ``get_users_in_room`` (PR #811)
1816 * Load push rules in storage layer so that they get cached (PR #825)
1817 * Make ``get_joined_hosts_for_room`` use get_users_in_room (PR #828)
1818 * Poke notifier on next reactor tick (PR #829)
1819 * Change CacheMetrics to be quicker (PR #830)
1820
1821
1822 Changes in synapse v0.15.0-rc1 (2016-04-26)
1823 ===========================================
1824
1825 Features:
1826
1827 * Add login support for Javascript Web Tokens, thanks to Niklas Riekenbrauck
1828 (PR #671,#687)
1829 * Add URL previewing support (PR #688)
1830 * Add login support for LDAP, thanks to Christoph Witzany (PR #701)
1831 * Add GET endpoint for pushers (PR #716)
1832
1833 Changes:
1834
1835 * Never notify for member events (PR #667)
1836 * Deduplicate identical ``/sync`` requests (PR #668)
1837 * Require user to have left room to forget room (PR #673)
1838 * Use DNS cache if within TTL (PR #677)
1839 * Let users see their own leave events (PR #699)
1840 * Deduplicate membership changes (PR #700)
1841 * Increase performance of pusher code (PR #705)
1842 * Respond with error status 504 if failed to talk to remote server (PR #731)
1843 * Increase search performance on postgres (PR #745)
1844
1845 Bug fixes:
1846
1847 * Fix bug where disabling all notifications still resulted in push (PR #678)
1848 * Fix bug where users couldn't reject remote invites if remote refused (PR #691)
1849 * Fix bug where synapse attempted to backfill from itself (PR #693)
1850 * Fix bug where profile information was not correctly added when joining remote
1851 rooms (PR #703)
1852 * Fix bug where register API required incorrect key name for AS registration
1853 (PR #727)
1854
1855
1856 Changes in synapse v0.14.0 (2016-03-30)
1857 =======================================
1858
1859 No changes from v0.14.0-rc2
1860
1861 Changes in synapse v0.14.0-rc2 (2016-03-23)
1862 ===========================================
1863
1864 Features:
1865
1866 * Add published room list API (PR #657)
1867
1868 Changes:
1869
1870 * Change various caches to consume less memory (PR #656, #658, #660, #662,
1871 #663, #665)
1872 * Allow rooms to be published without requiring an alias (PR #664)
1873 * Intern common strings in caches to reduce memory footprint (#666)
1874
1875 Bug fixes:
1876
1877 * Fix reject invites over federation (PR #646)
1878 * Fix bug where registration was not idempotent (PR #649)
1879 * Update aliases event after deleting aliases (PR #652)
1880 * Fix unread notification count, which was sometimes wrong (PR #661)
1881
1882 Changes in synapse v0.14.0-rc1 (2016-03-14)
1883 ===========================================
1884
1885 Features:
1886
1887 * Add event_id to response to state event PUT (PR #581)
1888 * Allow guest users access to messages in rooms they have joined (PR #587)
1889 * Add config for what state is included in a room invite (PR #598)
1890 * Send the inviter's member event in room invite state (PR #607)
1891 * Add error codes for malformed/bad JSON in /login (PR #608)
1892 * Add support for changing the actions for default rules (PR #609)
1893 * Add environment variable SYNAPSE_CACHE_FACTOR, default it to 0.1 (PR #612)
1894 * Add ability for alias creators to delete aliases (PR #614)
1895 * Add profile information to invites (PR #624)
1896
1897 Changes:
1898
1899 * Enforce user_id exclusivity for AS registrations (PR #572)
1900 * Make adding push rules idempotent (PR #587)
1901 * Improve presence performance (PR #582, #586)
1902 * Change presence semantics for ``last_active_ago`` (PR #582, #586)
1903 * Don't allow ``m.room.create`` to be changed (PR #596)
1904 * Add 800x600 to default list of valid thumbnail sizes (PR #616)
1905 * Always include kicks and bans in full /sync (PR #625)
1906 * Send history visibility on boundary changes (PR #626)
1907 * Register endpoint now returns a refresh_token (PR #637)
1908
1909 Bug fixes:
1910
1911 * Fix bug where we returned incorrect state in /sync (PR #573)
1912 * Always return a JSON object from push rule API (PR #606)
1913 * Fix bug where registering without a user id sometimes failed (PR #610)
1914 * Report size of ExpiringCache in cache size metrics (PR #611)
1915 * Fix rejection of invites to empty rooms (PR #615)
1916 * Fix usage of ``bcrypt`` to not use ``checkpw`` (PR #619)
1917 * Pin ``pysaml2`` dependency (PR #634)
1918 * Fix bug in ``/sync`` where timeline order was incorrect for backfilled events
1919 (PR #635)
1920
1921 Changes in synapse v0.13.3 (2016-02-11)
1922 =======================================
1923
1924 * Fix bug where ``/sync`` would occasionally return events in the wrong room.
1925
1926 Changes in synapse v0.13.2 (2016-02-11)
1927 =======================================
1928
1929 * Fix bug where ``/events`` would fail to skip some events if there had been
1930 more events than the limit specified since the last request (PR #570)
1931
1932 Changes in synapse v0.13.1 (2016-02-10)
1933 =======================================
1934
1935 * Bump matrix-angular-sdk (matrix web console) dependency to 0.6.8 to
1936 pull in the fix for SYWEB-361 so that the default client can display
1937 HTML messages again(!)
1938
1939 Changes in synapse v0.13.0 (2016-02-10)
1940 =======================================
1941
1942 This version includes an upgrade of the schema, specifically adding an index to
1943 the ``events`` table. This may cause synapse to pause for several minutes the
1944 first time it is started after the upgrade.
1945
1946 Changes:
1947
1948 * Improve general performance (PR #540, #543, #544, #54, #549, #567)
1949 * Change guest user ids to be incrementing integers (PR #550)
1950 * Improve performance of public room list API (PR #552)
1951 * Change profile API to omit keys rather than return null (PR #557)
1952 * Add ``/media/r0`` endpoint prefix, which is equivalent to ``/media/v1/``
1953 (PR #595)
1954
1955 Bug fixes:
1956
1957 * Fix bug with upgrading guest accounts where it would fail if you opened the
1958 registration email on a different device (PR #547)
1959 * Fix bug where unread count could be wrong (PR #568)
1960
1961
1962
1963 Changes in synapse v0.12.1-rc1 (2016-01-29)
1964 ===========================================
1965
1966 Features:
1967
1968 * Add unread notification counts in ``/sync`` (PR #456)
1969 * Add support for inviting 3pids in ``/createRoom`` (PR #460)
1970 * Add ability for guest accounts to upgrade (PR #462)
1971 * Add ``/versions`` API (PR #468)
1972 * Add ``event`` to ``/context`` API (PR #492)
1973 * Add specific error code for invalid user names in ``/register`` (PR #499)
1974 * Add support for push badge counts (PR #507)
1975 * Add support for non-guest users to peek in rooms using ``/events`` (PR #510)
1976
1977 Changes:
1978
1979 * Change ``/sync`` so that guest users only get rooms they've joined (PR #469)
1980 * Change to require unbanning before other membership changes (PR #501)
1981 * Change default push rules to notify for all messages (PR #486)
1982 * Change default push rules to not notify on membership changes (PR #514)
1983 * Change default push rules in one to one rooms to only notify for events that
1984 are messages (PR #529)
1985 * Change ``/sync`` to reject requests with a ``from`` query param (PR #512)
1986 * Change server manhole to use SSH rather than telnet (PR #473)
1987 * Change server to require AS users to be registered before use (PR #487)
1988 * Change server not to start when ASes are invalidly configured (PR #494)
1989 * Change server to require ID and ``as_token`` to be unique for AS's (PR #496)
1990 * Change maximum pagination limit to 1000 (PR #497)
1991
1992 Bug fixes:
1993
1994 * Fix bug where ``/sync`` didn't return when something under the leave key
1995 changed (PR #461)
1996 * Fix bug where we returned smaller rather than larger than requested
1997 thumbnails when ``method=crop`` (PR #464)
1998 * Fix thumbnails API to only return cropped thumbnails when asking for a
1999 cropped thumbnail (PR #475)
2000 * Fix bug where we occasionally still logged access tokens (PR #477)
2001 * Fix bug where ``/events`` would always return immediately for guest users
2002 (PR #480)
2003 * Fix bug where ``/sync`` unexpectedly returned old left rooms (PR #481)
2004 * Fix enabling and disabling push rules (PR #498)
2005 * Fix bug where ``/register`` returned 500 when given unicode username
2006 (PR #513)
2007
2008 Changes in synapse v0.12.0 (2016-01-04)
2009 =======================================
2010
2011 * Expose ``/login`` under ``r0`` (PR #459)
2012
2013 Changes in synapse v0.12.0-rc3 (2015-12-23)
2014 ===========================================
2015
2016 * Allow guest accounts access to ``/sync`` (PR #455)
2017 * Allow filters to include/exclude rooms at the room level
2018 rather than just from the components of the sync for each
2019 room. (PR #454)
2020 * Include urls for room avatars in the response to ``/publicRooms`` (PR #453)
2021 * Don't set an identicon as the avatar for a user when they register (PR #450)
2022 * Add a ``display_name`` to third-party invites (PR #449)
2023 * Send more information to the identity server for third-party invites so that
2024 it can send richer messages to the invitee (PR #446)
2025 * Cache the responses to ``/initialSync`` for 5 minutes. If a client
2026 retries a request to ``/initialSync`` before a response to the first
2027 request has been computed, then the same response is used for both requests
2028 (PR #457)
2029 * Fix a bug where synapse would always request the signing keys of
2030 remote servers even when the key was cached locally (PR #452)
2031 * Fix 500 when paginating search results (PR #447)
2032 * Fix a bug where synapse was leaking raw email address in third-party invites
2033 (PR #448)
2034
2035 Changes in synapse v0.12.0-rc2 (2015-12-14)
2036 ===========================================
2037
2038 * Add caches for whether rooms have been forgotten by a user (PR #434)
2039 * Remove instructions to use ``--process-dependency-link`` since all of the
2040 dependencies of synapse are on PyPI (PR #436)
2041 * Parallelise the processing of ``/sync`` requests (PR #437)
2042 * Fix race updating presence in ``/events`` (PR #444)
2043 * Fix bug back-populating search results (PR #441)
2044 * Fix bug calculating state in ``/sync`` requests (PR #442)
2045
2046 Changes in synapse v0.12.0-rc1 (2015-12-10)
2047 ===========================================
2048
2049 * Host the client APIs released as r0 by
2050 https://matrix.org/docs/spec/r0.0.0/client_server.html
2051 on paths prefixed by ``/_matrix/client/r0``. (PR #430, PR #415, PR #400)
2052 * Updates the client APIs to match r0 of the matrix specification.
2053
2054 * All APIs return events in the new event format, old APIs also include
2055 the fields needed to parse the event using the old format for
2056 compatibility. (PR #402)
2057 * Search results are now given as a JSON array rather than
2058 a JSON object (PR #405)
2059 * Miscellaneous changes to search (PR #403, PR #406, PR #412)
2060 * Filter JSON objects may now be passed as query parameters to ``/sync``
2061 (PR #431)
2062 * Fix implementation of ``/admin/whois`` (PR #418)
2063 * Only include the rooms that user has left in ``/sync`` if the client
2064 requests them in the filter (PR #423)
2065 * Don't push for ``m.room.message`` by default (PR #411)
2066 * Add API for setting per account user data (PR #392)
2067 * Allow users to forget rooms (PR #385)
2068
2069 * Performance improvements and monitoring:
2070
2071 * Add per-request counters for CPU time spent on the main python thread.
2072 (PR #421, PR #420)
2073 * Add per-request counters for time spent in the database (PR #429)
2074 * Make state updates in the C+S API idempotent (PR #416)
2075 * Only fire ``user_joined_room`` if the user has actually joined. (PR #410)
2076 * Reuse a single http client, rather than creating new ones (PR #413)
2077
2078 * Fixed a bug upgrading from older versions of synapse on postgresql (PR #417)
2079
2080 Changes in synapse v0.11.1 (2015-11-20)
2081 =======================================
2082
2083 * Add extra options to search API (PR #394)
2084 * Fix bug where we did not correctly cap federation retry timers. This meant it
2085 could take several hours for servers to start talking to resurrected servers,
2086 even when they were receiving traffic from them (PR #393)
2087 * Don't advertise login token flow unless CAS is enabled. This caused issues
2088 where some clients would always use the fallback API if they did not
2089 recognize all login flows (PR #391)
2090 * Change /v2 sync API to rename ``private_user_data`` to ``account_data``
2091 (PR #386)
2092 * Change /v2 sync API to remove the ``event_map`` and rename keys in ``rooms``
2093 object (PR #389)
2094
2095 Changes in synapse v0.11.0-r2 (2015-11-19)
2096 ==========================================
2097
2098 * Fix bug in database port script (PR #387)
2099
2100 Changes in synapse v0.11.0-r1 (2015-11-18)
2101 ==========================================
2102
2103 * Retry and fail federation requests more aggressively for requests that block
2104 client side requests (PR #384)
2105
2106 Changes in synapse v0.11.0 (2015-11-17)
2107 =======================================
2108
2109 * Change CAS login API (PR #349)
2110
2111 Changes in synapse v0.11.0-rc2 (2015-11-13)
2112 ===========================================
2113
2114 * Various changes to /sync API response format (PR #373)
2115 * Fix regression when setting display name in newly joined room over
2116 federation (PR #368)
2117 * Fix problem where /search was slow when using SQLite (PR #366)
2118
2119 Changes in synapse v0.11.0-rc1 (2015-11-11)
2120 ===========================================
2121
2122 * Add Search API (PR #307, #324, #327, #336, #350, #359)
2123 * Add 'archived' state to v2 /sync API (PR #316)
2124 * Add ability to reject invites (PR #317)
2125 * Add config option to disable password login (PR #322)
2126 * Add the login fallback API (PR #330)
2127 * Add room context API (PR #334)
2128 * Add room tagging support (PR #335)
2129 * Update v2 /sync API to match spec (PR #305, #316, #321, #332, #337, #341)
2130 * Change retry schedule for application services (PR #320)
2131 * Change retry schedule for remote servers (PR #340)
2132 * Fix bug where we hosted static content in the incorrect place (PR #329)
2133 * Fix bug where we didn't increment retry interval for remote servers (PR #343)
2134
2135 Changes in synapse v0.10.1-rc1 (2015-10-15)
2136 ===========================================
2137
2138 * Add support for CAS, thanks to Steven Hammerton (PR #295, #296)
2139 * Add support for using macaroons for ``access_token`` (PR #256, #229)
2140 * Add support for ``m.room.canonical_alias`` (PR #287)
2141 * Add support for users to view the history of rooms that they have left. (PR #276,
2142 #294)
2143 * Add support for refresh tokens (PR #240)
2144 * Add flag on creation which disables federation of the room (PR #279)
2145 * Add some room state to invites. (PR #275)
2146 * Atomically persist events when joining a room over federation (PR #283)
2147 * Change default history visibility for private rooms (PR #271)
2148 * Allow users to redact their own sent events (PR #262)
2149 * Use tox for tests (PR #247)
2150 * Split up syutil into separate libraries (PR #243)
2151
2152 Changes in synapse v0.10.0-r2 (2015-09-16)
2153 ==========================================
2154
2155 * Fix bug where we always fetched remote server signing keys instead of using
2156 ones in our cache.
2157 * Fix adding threepids to an existing account.
2158 * Fix bug with inviting over federation where the remote server was already in
2159 the room. (PR #281, SYN-392)
2160
2161 Changes in synapse v0.10.0-r1 (2015-09-08)
2162 ==========================================
2163
2164 * Fix bug with python packaging
2165
2166 Changes in synapse v0.10.0 (2015-09-03)
2167 =======================================
2168
2169 No change from release candidate.
2170
2171 Changes in synapse v0.10.0-rc6 (2015-09-02)
2172 ===========================================
2173
2174 * Remove some of the old database upgrade scripts.
2175 * Fix database port script to work with newly created sqlite databases.
2176
2177 Changes in synapse v0.10.0-rc5 (2015-08-27)
2178 ===========================================
2179
2180 * Fix bug that broke downloading files with ASCII filenames across federation.
2181
2182 Changes in synapse v0.10.0-rc4 (2015-08-27)
2183 ===========================================
2184
2185 * Allow UTF-8 filenames for upload. (PR #259)
2186
2187 Changes in synapse v0.10.0-rc3 (2015-08-25)
2188 ===========================================
2189
2190 * Add ``--keys-directory`` config option to specify where files such as
2191 certs and signing keys should be stored, when using ``--generate-config``
2192 or ``--generate-keys``. (PR #250)
2193 * Allow ``--config-path`` to specify a directory, causing synapse to use all
2194 \*.yaml files in the directory as config files. (PR #249)
2195 * Add ``web_client_location`` config option to specify static files to be
2196 hosted by synapse under ``/_matrix/client``. (PR #245)
2197 * Add helper utility to synapse to read and parse the config files and extract
2198 the value of a given key. For example::
2199
2200 $ python -m synapse.config read server_name -c homeserver.yaml
2201 localhost
2202
2203 (PR #246)
2204
2205
2206 Changes in synapse v0.10.0-rc2 (2015-08-24)
2207 ===========================================
2208
2209 * Fix bug where we incorrectly populated the ``event_forward_extremities``
2210 table, resulting in problems joining large remote rooms (e.g.
2211 ``#matrix:matrix.org``)
2212 * Reduce the number of times we wake up pushers by not listening for presence
2213 or typing events, reducing the CPU cost of each pusher.
2214
2215
2216 Changes in synapse v0.10.0-rc1 (2015-08-21)
2217 ===========================================
2218
2219 Also see v0.9.4-rc1 changelog, which has been amalgamated into this release.
2220
2221 General:
2222
2223 * Upgrade to Twisted 15 (PR #173)
2224 * Add support for serving and fetching encryption keys over federation.
2225 (PR #208)
2226 * Add support for logging in with an email address (PR #234)
2227 * Add support for new ``m.room.canonical_alias`` event. (PR #233)
2228 * Change synapse to treat user IDs case insensitively during registration and
2229 login. (If two users already exist with case insensitive matching user ids,
2230 synapse will continue to require them to specify their user ids exactly.)
2231 * Error if a user tries to register with an email already in use. (PR #211)
2232 * Add extra and improve existing caches (PR #212, #219, #226, #228)
2233 * Batch various storage request (PR #226, #228)
2234 * Fix bug where we didn't correctly log the entity that triggered the request
2235 if the request came in via an application service (PR #230)
2236 * Fix bug where we needlessly regenerated the full list of rooms an AS is
2237 interested in. (PR #232)
2238 * Add support for AS's to use v2_alpha registration API (PR #210)
2239
2240
2241 Configuration:
2242
2243 * Add ``--generate-keys`` that will generate any missing cert and key files in
2244 the configuration files. This is equivalent to running ``--generate-config``
2245 on an existing configuration file. (PR #220)
2246 * ``--generate-config`` now no longer requires a ``--server-name`` parameter
2247 when used on existing configuration files. (PR #220)
2248 * Add ``--print-pidfile`` flag that controls the printing of the pid to stdout
2249 of the daemonised process. (PR #213)
2250
2251 Media Repository:
2252
2253 * Fix bug where we picked a lower resolution image than requested. (PR #205)
2254 * Add support for specifying whether the media repository should dynamically
2255 thumbnail images or not. (PR #206)
2256
2257 Metrics:
2258
2259 * Add statistics from the reactor to the metrics API. (PR #224, #225)
2260
2261 Demo Homeservers:
2262
2263 * Fix starting the demo homeservers without rate-limiting enabled. (PR #182)
2264 * Fix enabling registration on demo homeservers (PR #223)
2265
2266
2267 Changes in synapse v0.9.4-rc1 (2015-07-21)
2268 ==========================================
2269
2270 General:
2271
2272 * Add basic implementation of receipts. (SPEC-99)
2273 * Add support for configuration presets in room creation API. (PR #203)
2274 * Add auth event that limits the visibility of history for new users.
2275 (SPEC-134)
2276 * Add SAML2 login/registration support. (PR #201. Thanks Muthu Subramanian!)
2277 * Add client side key management APIs for end to end encryption. (PR #198)
2278 * Change power level semantics so that you cannot kick, ban or change power
2279 levels of users that have equal or greater power level than you. (SYN-192)
2280 * Improve performance by bulk inserting events where possible. (PR #193)
2281 * Improve performance by bulk verifying signatures where possible. (PR #194)
2282
2283
2284 Configuration:
2285
2286 * Add support for including TLS certificate chains.
2287
2288 Media Repository:
2289
2290 * Add Content-Disposition headers to content repository responses. (SYN-150)
2291
2292
2293 Changes in synapse v0.9.3 (2015-07-01)
2294 ======================================
2295
2296 No changes from v0.9.3 Release Candidate 1.
2297
2298 Changes in synapse v0.9.3-rc1 (2015-06-23)
2299 ==========================================
2300
2301 General:
2302
2303 * Fix a memory leak in the notifier. (SYN-412)
2304 * Improve performance of room initial sync. (SYN-418)
2305 * General improvements to logging.
2306 * Remove ``access_token`` query params from ``INFO`` level logging.
2307
2308 Configuration:
2309
2310 * Add support for specifying and configuring multiple listeners. (SYN-389)
2311
2312 Application services:
2313
2314 * Fix bug where synapse failed to send user queries to application services.
2315
2316 Changes in synapse v0.9.2-r2 (2015-06-15)
2317 =========================================
2318
2319 Fix packaging so that schema delta python files get included in the package.
2320
2321 Changes in synapse v0.9.2 (2015-06-12)
2322 ======================================
2323
2324 General:
2325
2326 * Use ultrajson for json (de)serialisation when a canonical encoding is not
2327 required. Ultrajson is significantly faster than simplejson in certain
2328 circumstances.
2329 * Use connection pools for outgoing HTTP connections.
2330 * Process thumbnails on separate threads.
2331
2332 Configuration:
2333
2334 * Add option, ``gzip_responses``, to disable HTTP response compression.
2335
2336 Federation:
2337
2338 * Improve resilience of backfill by ensuring we fetch any missing auth events.
2339 * Improve performance of backfill and joining remote rooms by removing
2340 unnecessary computations. This included handling events we'd previously
2341 handled as well as attempting to compute the current state for outliers.
2342
2343
2344 Changes in synapse v0.9.1 (2015-05-26)
2345 ======================================
2346
2347 General:
2348
2349 * Add support for backfilling when a client paginates. This allows servers to
2350 request history for a room from remote servers when a client tries to
2351 paginate history the server does not have - SYN-36
2352 * Fix bug where you couldn't disable non-default pushrules - SYN-378
2353 * Fix ``register_new_user`` script - SYN-359
2354 * Improve performance of fetching events from the database, this improves both
2355 initialSync and sending of events.
2356 * Improve performance of event streams, allowing synapse to handle more
2357 simultaneous connected clients.
2358
2359 Federation:
2360
2361 * Fix bug with existing backfill implementation where it returned the wrong
2362 selection of events in some circumstances.
2363 * Improve performance of joining remote rooms.
2364
2365 Configuration:
2366
2367 * Add support for changing the bind host of the metrics listener via the
2368 ``metrics_bind_host`` option.
2369
2370
2371 Changes in synapse v0.9.0-r5 (2015-05-21)
2372 =========================================
2373
2374 * Add more database caches to reduce amount of work done for each pusher. This
2375 radically reduces CPU usage when multiple pushers are set up in the same room.
2376
2377 Changes in synapse v0.9.0 (2015-05-07)
2378 ======================================
2379
2380 General:
2381
2382 * Add support for using a PostgreSQL database instead of SQLite. See
2383 `docs/postgres.rst`_ for details.
2384 * Add password change and reset APIs. See `Registration`_ in the spec.
2385 * Fix memory leak due to not releasing stale notifiers - SYN-339.
2386 * Fix race in caches that occasionally caused some presence updates to be
2387 dropped - SYN-369.
2388 * Check server name has not changed on restart.
2389 * Add a sample systemd unit file and a logger configuration in
2390 contrib/systemd. Contributed by Ivan Shapovalov.
2391
2392 Federation:
2393
2394 * Add key distribution mechanisms for fetching public keys of unavailable
2395 remote home servers. See `Retrieving Server Keys`_ in the spec.
2396
2397 Configuration:
2398
2399 * Add support for multiple config files.
2400 * Add support for dictionaries in config files.
2401 * Remove support for specifying config options on the command line, except
2402 for:
2403
2404 * ``--daemonize`` - Daemonize the home server.
2405 * ``--manhole`` - Turn on the twisted telnet manhole service on the given
2406 port.
2407 * ``--database-path`` - The path to a sqlite database to use.
2408 * ``--verbose`` - The verbosity level.
2409 * ``--log-file`` - File to log to.
2410 * ``--log-config`` - Python logging config file.
2411 * ``--enable-registration`` - Enable registration for new users.
2412
2413 Application services:
2414
2415 * Reliably retry sending of events from Synapse to application services, as per
2416 `Application Services`_ spec.
2417 * Application services can no longer register via the ``/register`` API,
2418 instead their configuration should be saved to a file and listed in the
2419 synapse ``app_service_config_files`` config option. The AS configuration file
2420 has the same format as the old ``/register`` request.
2421 See `docs/application_services.rst`_ for more information.
2422
2423 .. _`docs/postgres.rst`: docs/postgres.rst
2424 .. _`docs/application_services.rst`: docs/application_services.rst
2425 .. _`Registration`: https://github.com/matrix-org/matrix-doc/blob/master/specification/10_client_server_api.rst#registration
2426 .. _`Retrieving Server Keys`: https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys
2427 .. _`Application Services`: https://github.com/matrix-org/matrix-doc/blob/0c6bd9/specification/25_application_service_api.rst#home-server---application-service-api
2428
2429 Changes in synapse v0.8.1 (2015-03-18)
2430 ======================================
2431
2432 * Disable registration by default. New users can be added using the command
2433 ``register_new_matrix_user`` or by enabling registration in the config.
2434 * Add metrics to synapse. To enable metrics use config options
2435 ``enable_metrics`` and ``metrics_port``.
2436 * Fix bug where banning only kicked the user.
2437
2438 Changes in synapse v0.8.0 (2015-03-06)
2439 ======================================
2440
2441 General:
2442
2443 * Add support for registration fallback. This is a page hosted on the server
2444 which allows a user to register for an account, regardless of what client
2445 they are using (e.g. mobile devices).
2446
2447 * Added new default push rules and made them configurable by clients:
2448
2449 * Suppress all notice messages.
2450 * Notify when invited to a new room.
2451 * Notify for messages that don't match any rule.
2452 * Notify on incoming call.
2453
2454 Federation:
2455
2456 * Added per-host server-side rate-limiting of incoming federation requests.
2457 * Added a ``/get_missing_events/`` API to federation to reduce number of
2458 ``/events/`` requests.
2459
2460 Configuration:
2461
2462 * Added configuration option to disable registration:
2463 ``disable_registration``.
2464 * Added configuration option to change soft limit of number of open file
2465 descriptors: ``soft_file_limit``.
2466 * Make ``tls_private_key_path`` optional when running with ``no_tls``.
2467
2468 Application services:
2469
2470 * Application services can now poll on the CS API ``/events`` for their events,
2471 by providing their application service ``access_token``.
2472 * Added exclusive namespace support to application services API.
2473
2474
2475 Changes in synapse v0.7.1 (2015-02-19)
2476 ======================================
2477
2478 * Initial alpha implementation of parts of the Application Services API.
2479 Including:
2480
2481 - AS Registration / Unregistration
2482 - User Query API
2483 - Room Alias Query API
2484 - Push transport for receiving events.
2485 - User/Alias namespace admin control
2486
2487 * Add cache when fetching events from remote servers to stop repeatedly
2488 fetching events with bad signatures.
2489 * Respect the per remote server retry scheme when fetching both events and
2490 server keys to reduce the number of times we send requests to dead servers.
2491 * Inform remote servers when the local server fails to handle a received event.
2492 * Turn off python bytecode generation due to problems experienced when
2493 upgrading from previous versions.
2494
2495 Changes in synapse v0.7.0 (2015-02-12)
2496 ======================================
2497
2498 * Add initial implementation of the query auth federation API, allowing
2499 servers to agree on whether an event should be allowed or rejected.
2500 * Persist events we have rejected from federation, fixing the bug where
2501 servers would keep requesting the same events.
2502 * Various federation performance improvements, including:
2503
2504 - Add in memory caches on queries such as:
2505
2506 * Computing the state of a room at a point in time, used for
2507 authorization on federation requests.
2508 * Fetching events from the database.
2509 * User's room membership, used for authorizing presence updates.
2510
2511 - Upgraded JSON library to improve parsing and serialisation speeds.
2512
2513 * Add default avatars to new user accounts using pydenticon library.
2514 * Correctly time out federation requests.
2515 * Retry federation requests against different servers.
2516 * Add support for push and push rules.
2517 * Add alpha versions of proposed new CSv2 APIs, including ``/sync`` API.
2518
2519 Changes in synapse 0.6.1 (2015-01-07)
2520 =====================================
2521
2522 * Major optimizations to improve performance of initial sync and event sending
2523 in large rooms (by up to 10x)
2524 * Media repository now includes a Content-Length header on media downloads.
2525 * Improve quality of thumbnails by changing resizing algorithm.
2526
2527 Changes in synapse 0.6.0 (2014-12-16)
2528 =====================================
2529
2530 * Add new API for media upload and download that supports thumbnailing.
2531 * Replicate media uploads over multiple homeservers so media is always served
2532 to clients from their local homeserver. This obsoletes the
2533 --content-addr parameter and confusion over accessing content directly
2534 from remote homeservers.
2535 * Implement exponential backoff when retrying federation requests when
2536 sending to remote homeservers which are offline.
2537 * Implement typing notifications.
2538 * Fix bugs where we sent events with invalid signatures due to bugs where
2539 we incorrectly persisted events.
2540 * Improve performance of database queries involving retrieving events.
2541
2542 Changes in synapse 0.5.4a (2014-12-13)
2543 ======================================
2544
2545 * Fix bug in generating the error message when a file path specified in
2546 the config doesn't exist.
2547
2548 Changes in synapse 0.5.4 (2014-12-03)
2549 =====================================
2550
2551 * Fix presence bug where some rooms did not display presence updates for
2552 remote users.
2553 * Do not log SQL timing log lines when started with "-v"
2554 * Fix potential memory leak.
2555
2556 Changes in synapse 0.5.3c (2014-12-02)
2557 ======================================
2558
2559 * Change the default value for the ``content_addr`` option to use the HTTP
2560 listener, as by default the HTTPS listener will be using a self-signed
2561 certificate.
2562
2563 Changes in synapse 0.5.3 (2014-11-27)
2564 =====================================
2565
2566 * Fix bug that caused joining a remote room to fail if a single event was not
2567 signed correctly.
2568 * Fix bug which caused servers to continuously try and fetch events from other
2569 servers.
2570
2571 Changes in synapse 0.5.2 (2014-11-26)
2572 =====================================
2573
2574 Fix major bug that caused rooms to disappear from people's initial sync.
2575
2576 Changes in synapse 0.5.1 (2014-11-26)
2577 =====================================
2578 See UPGRADES.rst for specific instructions on how to upgrade.
2579
2580 * Fix bug where we served up an Event that did not match its signatures.
2581 * Fix regression where we no longer correctly handled the case where a
2582 homeserver receives an event for a room it doesn't recognise (but is in).
2583
2584 Changes in synapse 0.5.0 (2014-11-19)
2585 =====================================
2586 This release includes changes to the federation protocol and client-server API
2587 that are not backwards compatible.
2588
2589 This release also changes the internal database schemas and so requires servers to
2590 drop their current history. See UPGRADES.rst for details.
2591
2592 Homeserver:
2593 * Add authentication and authorization to the federation protocol. Events are
2594 now signed by their originating homeservers.
2595 * Implement the new authorization model for rooms.
2596 * Split out web client into a separate repository: matrix-angular-sdk.
2597 * Change the structure of PDUs.
2598 * Fix bug where user could not join rooms via an alias containing 4-byte
2599 UTF-8 characters.
2600 * Merge concept of PDUs and Events internally.
2601 * Improve logging by adding request ids to log lines.
2602 * Implement a very basic room initial sync API.
2603 * Implement the new invite/join federation APIs.
2604
2605 Webclient:
2606 * The webclient has been moved to a separate repository.
2607
2608 Changes in synapse 0.4.2 (2014-10-31)
2609 =====================================
2610
2611 Homeserver:
2612 * Fix bugs where we did not notify users of correct presence updates.
2613 * Fix bug where we did not handle sub second event stream timeouts.
2614
2615 Webclient:
2616 * Add ability to click on messages to see JSON.
2617 * Add ability to redact messages.
2618 * Add ability to view and edit all room state JSON.
2619 * Handle incoming redactions.
2620 * Improve feedback on errors.
2621 * Fix bugs in mobile CSS.
2622 * Fix bugs with desktop notifications.
2623
2624 Changes in synapse 0.4.1 (2014-10-17)
2625 =====================================
2626 Webclient:
2627 * Fix bug with display of timestamps.
2628
2629 Changes in synapse 0.4.0 (2014-10-17)
2630 =====================================
2631 This release includes changes to the federation protocol and client-server API
2632 that are not backwards compatible.
2633
2634 The Matrix specification has been moved to a separate git repository:
2635 http://github.com/matrix-org/matrix-doc
2636
2637 You will also need an updated syutil and config. See UPGRADES.rst.
2638
2639 Homeserver:
2640 * Sign federation transactions to assert strong identity over federation.
2641 * Rename timestamp keys in PDUs and events from 'ts' and 'hsob_ts' to 'origin_server_ts'.
2642
2643
2644 Changes in synapse 0.3.4 (2014-09-25)
2645 =====================================
2646 This version adds support for using a TURN server. See docs/turn-howto.rst on
2647 how to set one up.
2648
2649 Homeserver:
2650 * Add support for redaction of messages.
2651 * Fix bug where inviting a user on a remote home server could take up to
2652 20-30s.
2653 * Implement a get current room state API.
2654 * Add support for specifying and retrieving TURN server configuration.
2655
2656 Webclient:
2657 * Add button to send messages to users from the home page.
2658 * Add support for using TURN for VoIP calls.
2659 * Show display name change messages.
2660 * Fix bug where the client didn't get the state of a newly joined room
2661 until after it has been refreshed.
2662 * Fix bugs with tab complete.
2663 * Fix bug where holding down the down arrow caused chrome to chew 100% CPU.
2664 * Fix bug where desktop notifications occasionally used "Undefined" as the
2665 display name.
2666 * Fix more places where we sometimes saw room IDs incorrectly.
2667 * Fix bug which caused lag when entering text in the text box.
2668
2669 Changes in synapse 0.3.3 (2014-09-22)
2670 =====================================
2671
2672 Homeserver:
2673 * Fix bug where you continued to get events for rooms you had left.
2674
2675 Webclient:
2676 * Add support for video calls with basic UI.
2677 * Fix bug where one to one chats were named after your display name rather
2678 than the other person's.
2679 * Fix bug which caused lag when typing in the textarea.
2680 * Refuse to run on browsers we know won't work.
2681 * Trigger pagination when joining new rooms.
2682 * Fix bug where we sometimes didn't display invitations in recents.
2683 * Automatically join room when accepting a VoIP call.
2684 * Disable outgoing and reject incoming calls on browsers we don't support
2685 VoIP in.
2686 * Don't display desktop notifications for messages in a room while you are
2687 active and speaking in it.
2688
2689 Changes in synapse 0.3.2 (2014-09-18)
2690 =====================================
2691
2692 Webclient:
2693 * Fix bug where an empty "bing words" list in old accounts didn't send
2694 notifications when it should have done.
2695
2696 Changes in synapse 0.3.1 (2014-09-18)
2697 =====================================
2698 This is a release to hotfix v0.3.0 to fix two regressions.
2699
2700 Webclient:
2701 * Fix a regression where we sometimes displayed duplicate events.
2702 * Fix a regression where we didn't immediately remove rooms you were
2703 banned in from the recents list.
2704
2705 Changes in synapse 0.3.0 (2014-09-18)
2706 =====================================
2707 See UPGRADE for information about changes to the client server API, including
2708 breaking backwards compatibility with VoIP calls and registration API.
2709
2710 Homeserver:
2711 * When a user changes their displayname or avatar the server will now update
2712 all their join states to reflect this.
2713 * The server now adds an "age" key to events to indicate how old they are. This
2714 is clock independent, so at no point does any server or webclient have to
2715 assume their clock is in sync with everyone else.
2716 * Fix bug where we didn't correctly pull in missing PDUs.
2717 * Fix bug where prev_content key wasn't always returned.
2718 * Add support for password resets.
2719
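The clock-independent "age" mechanism described above can be illustrated with a short sketch (a hypothetical helper, not Synapse's code): each server stamps an outgoing event with how long *it* has held the event, so the receiver never needs to trust the sender's clock.

```python
import time

def add_age(event, received_at_ms):
    """Annotate an outgoing event with its local age in milliseconds.

    `age` is a relative duration measured on this server's clock only,
    so at no point does any server have to assume its clock is in sync
    with anyone else's. Hypothetical sketch, not Synapse's code.
    """
    out = dict(event)
    out["age"] = int(time.time() * 1000) - received_at_ms
    return out
```
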
2720 Webclient:
2721 * Improve page content loading.
2722 * Join/parts now trigger desktop notifications.
2723 * Always show room aliases in the UI if one is present.
2724 * No longer show user-count in the recents side panel.
2725 * Add up & down arrow support to the text box for message sending to step
2726 through your sent history.
2727 * Don't display notifications for our own messages.
2728 * Emotes are now formatted correctly in desktop notifications.
2729 * The recents list now differentiates between public & private rooms.
2730 * Fix bug where when switching between rooms the pagination flickered before
2731 the view jumped to the bottom of the screen.
2732 * Add bing word support.
2733
2734 Registration API:
2735 * The registration API has been overhauled to function like the login API. In
2736 practice, this means registration requests must now include the following:
2737 'type':'m.login.password'. See UPGRADE for more information on this.
2738 * The 'user_id' key has been renamed to 'user' to better match the login API.
2739 * There is an additional login type: 'm.login.email.identity'.
2740 * The command client and web client have been updated to reflect these changes.
2741
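Concretely, a post-overhaul registration request body looks something like the following (an illustrative payload; the values are placeholders):

```python
import json

# Registration now mirrors the login API: requests carry a 'type' key,
# and the old 'user_id' key has been renamed to 'user'.
registration_request = {
    "type": "m.login.password",
    "user": "alice",       # previously "user_id"
    "password": "s3cret",  # placeholder value
}
print(json.dumps(registration_request, indent=2))
```
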
2742 Changes in synapse 0.2.3 (2014-09-12)
2743 =====================================
2744
2745 Homeserver:
2746 * Fix bug where we stopped sending events to remote home servers if a
2747 user from that home server left, even if there were some still in the
2748 room.
2749 * Fix bugs in the state conflict resolution where it was incorrectly
2750 rejecting events.
2751
2752 Webclient:
2753 * Display room names and topics.
2754 * Allow setting/editing of room names and topics.
2755 * Display information about rooms on the main page.
2756 * Handle ban and kick events in real time.
2757 * VoIP UI and reliability improvements.
2758 * Add glare support for VoIP.
2759 * Improvements to initial startup speed.
2760 * Don't display duplicate join events.
2761 * Local echo of messages.
2762 * Differentiate sending and sent of local echo.
2763 * Various minor bug fixes.
2764
2765 Changes in synapse 0.2.2 (2014-09-06)
2766 =====================================
2767
2768 Homeserver:
2769 * When the server returns state events it now also includes the previous
2770 content.
2771 * Add support for inviting people when creating a new room.
2772 * Make the homeserver inform the room via `m.room.aliases` when a new alias
2773 is added for a room.
2774 * Validate `m.room.power_level` events.
2775
2776 Webclient:
2777 * Add support for captchas on registration.
2778 * Handle `m.room.aliases` events.
2779 * Asynchronously send messages and show a local echo.
2780 * Inform the UI when a message failed to send.
2781 * Only autoscroll on receiving a new message if the user was already at the
2782 bottom of the screen.
2783 * Add support for ban/kick reasons.
2784
2785 Changes in synapse 0.2.1 (2014-09-03)
2786 =====================================
2787
2788 Homeserver:
2789 * Added support for signing up with a third party id.
2790 * Add synctl scripts.
2791 * Added rate limiting.
2792 * Add option to change the external address the content repo uses.
2793 * Presence bug fixes.
2794
2795 Webclient:
2796 * Added support for signing up with a third party id.
2797 * Added support for banning and kicking users.
2798 * Added support for displaying and setting ops.
2799 * Added support for room names.
2800 * Fix bugs with room membership event display.
2801
2802 Changes in synapse 0.2.0 (2014-09-02)
2803 =====================================
2804 This update changes many configuration options, updates the
2805 database schema and mandates SSL for server-server connections.
2806
2807 Homeserver:
2808 * Require SSL for server-server connections.
2809 * Add SSL listener for client-server connections.
2810 * Add ability to use config files.
2811 * Add support for kicking/banning and power levels.
2812 * Allow setting of room names and topics on creation.
2813 * Change presence to include last seen time of the user.
2814 * Change url path prefix to /_matrix/...
2815 * Bug fixes to presence.
2816
2817 Webclient:
2818 * Reskin the CSS for registration and login.
2819 * Various improvements to rooms CSS.
2820 * Support changes in client-server API.
2821 * Bug fixes to VOIP UI.
2822 * Various bug fixes to handling of changes to room member list.
2823
2824 Changes in synapse 0.1.2 (2014-08-29)
2825 =====================================
2826
2827 Webclient:
2828 * Add basic call state UI for VoIP calls.
2829
2830 Changes in synapse 0.1.1 (2014-08-29)
2831 =====================================
2832
2833 Homeserver:
2834 * Fix bug that caused the event stream to not notify some clients about
2835 changes.
2836
2837 Changes in synapse 0.1.0 (2014-08-29)
2838 =====================================
2839 Presence has been re-enabled in this release.
2840
2841 Homeserver:
2842 * Update client to server API, including:
2843 - Use a more consistent url scheme.
2844 - Provide more useful information in the initial sync api.
2845 * Change the presence handling to be much more efficient.
2846 * Change the presence server to server API to not require explicit polling of
2847 all users who share a room with a user.
2848 * Fix races in the event streaming logic.
2849
2850 Webclient:
2851 * Update to use new client to server API.
2852 * Add basic VOIP support.
2853 * Add idle timers that change your status to away.
2854 * Add recent rooms column when viewing a room.
2855 * Various network efficiency improvements.
2856 * Add basic mobile browser support.
2857 * Add a settings page.
2858
2859 Changes in synapse 0.0.1 (2014-08-22)
2860 =====================================
2861 Presence has been disabled in this release due to a bug that caused the
2862 homeserver to spam other remote homeservers.
2863
2864 Homeserver:
2865 * Completely change the database schema to support generic event types.
2866 * Improve presence reliability.
2867 * Improve reliability of joining remote rooms.
2868 * Fix bug where room join events were duplicated.
2869 * Improve initial sync API to return more information to the client.
2870 * Stop generating fake messages for room membership events.
2871
2872 Webclient:
2873 * Add tab completion of names.
2874 * Add ability to upload and send images.
2875 * Add profile pages.
2876 * Improve CSS layout of room.
2877 * Disambiguate identical display names.
2878 * Don't get remote users' display names and avatars individually.
2879 * Use the new initial sync API to reduce the number of round trips to the homeserver.
2880 * Change url scheme to use room aliases instead of room ids where known.
2881 * Increase longpoll timeout.
2882
2883 Changes in synapse 0.0.0 (2014-08-13)
2884 =====================================
2885
2886 * Initial alpha release
5050 Changelog
5151 ~~~~~~~~~
5252
53 All changes, even minor ones, need a corresponding changelog
53 All changes, even minor ones, need a corresponding changelog / newsfragment
5454 entry. These are managed by Towncrier
5555 (https://github.com/hawkowl/towncrier).
5656
+0
-19
Dockerfile
0 FROM docker.io/python:2-alpine3.7
1
2 RUN apk add --no-cache --virtual .nacl_deps su-exec build-base libffi-dev zlib-dev libressl-dev libjpeg-turbo-dev linux-headers postgresql-dev libxslt-dev
3
4 COPY . /synapse
5
6 # A wheel cache may be provided in ./cache for faster build
7 RUN cd /synapse \
8 && pip install --upgrade pip setuptools psycopg2 lxml \
9 && mkdir -p /synapse/cache \
10 && pip install -f /synapse/cache --upgrade --process-dependency-links . \
11 && mv /synapse/contrib/docker/start.py /synapse/contrib/docker/conf / \
12 && rm -rf setup.py setup.cfg synapse
13
14 VOLUME ["/data"]
15
16 EXPOSE 8008/tcp 8448/tcp
17
18 ENTRYPOINT ["/start.py"]
11 include LICENSE
22 include VERSION
33 include *.rst
4 include *.md
45 include demo/README
56 include demo/demo.tls.dh
67 include demo/*.py
3334
3435 prune .github
3536 prune demo/etc
37 prune docker
7070 https://matrix.org/docs/projects/try-matrix-now.html), run a homeserver, take a look
7171 at the `Matrix spec <https://matrix.org/docs/spec>`_, and experiment with the
7272 `APIs <https://matrix.org/docs/api>`_ and `Client SDKs
73 <http://matrix.org/docs/projects/try-matrix-now.html#client-sdks>`_.
73 <https://matrix.org/docs/projects/try-matrix-now.html#client-sdks>`_.
7474
7575 Thanks for using Matrix!
7676
156156
157157 In case of problems, please see the _`Troubleshooting` section below.
158158
159 There is an official synapse image available at https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with the docker-compose file available at `contrib/docker`. Further information on this, including configuration options, is available in `contrib/docker/README.md`.
160
161 Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate a synapse server in a single Docker image, at https://hub.docker.com/r/avhost/docker-matrix/tags/
159 There is an official synapse image available at
160 https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with
161 the docker-compose file available at `contrib/docker <contrib/docker>`_. Further information on
162 this, including configuration options, is available in the README on
163 hub.docker.com.
164
165 Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
166 Dockerfile to automate a synapse server in a single Docker image, at
167 https://hub.docker.com/r/avhost/docker-matrix/tags/
162168
163169 Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
164 tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
170 tested with VirtualBox/AWS/DigitalOcean - see
171 https://github.com/EMnify/matrix-synapse-auto-deploy
165172 for details.
166173
167174 Configuring synapse
282289
283290 The easiest way to try out your new Synapse installation is by connecting to it
284291 from a web client. The easiest option is probably the one at
285 http://riot.im/app. You will need to specify a "Custom server" when you log on
292 https://riot.im/app. You will need to specify a "Custom server" when you log on
286293 or register: set this to ``https://domain.tld`` if you setup a reverse proxy
287294 following the recommended setup, or ``https://localhost:8448`` - remember to specify the
288295 port (``:8448``) if not ``:443`` unless you changed the configuration. (Leave the identity
328335 =============
329336
330337 Matrix serves raw user generated data in some APIs - specifically the `content
331 repository endpoints <http://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid>`_.
338 repository endpoints <https://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid>`_.
332339
333340 Whilst we have tried to mitigate against possible XSS attacks (e.g.
334341 https://github.com/matrix-org/synapse/pull/1021) we recommend running
347354 Debian
348355 ------
349356
350 Matrix provides official Debian packages via apt from http://matrix.org/packages/debian/.
357 Matrix provides official Debian packages via apt from https://matrix.org/packages/debian/.
351358 Note that these packages do not include a client - choose one from
352359 https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one of our SDKs :)
353360
360367
361368 Oleg Girko provides Fedora RPMs at
362369 https://obs.infoserver.lv/project/monitor/matrix-synapse
370
371 OpenSUSE
372 --------
373
374 Synapse is in the OpenSUSE repositories as ``matrix-synapse``::
375
376 sudo zypper install matrix-synapse
377
378 SUSE Linux Enterprise Server
379 ----------------------------
380
381 Unofficial packages are built for SLES 15 in the openSUSE:Backports:SLE-15 repository at
382 https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15/standard/
363383
364384 ArchLinux
365385 ---------
523543 -----------------------
524544
525545 If synapse fails with ``missing "sodium.h"`` crypto errors, you may need
526 to manually upgrade PyNaCL, as synapse uses NaCl (http://nacl.cr.yp.to/) for
546 to manually upgrade PyNaCL, as synapse uses NaCl (https://nacl.cr.yp.to/) for
527547 encryption and digital signatures.
528548 Unfortunately PyNACL currently has a few issues
529549 (https://github.com/pyca/pynacl/issues/53) and
671691 Using PostgreSQL
672692 ================
673693
674 As of Synapse 0.9, `PostgreSQL <http://www.postgresql.org>`_ is supported as an
675 alternative to the `SQLite <http://sqlite.org/>`_ database that Synapse has
694 As of Synapse 0.9, `PostgreSQL <https://www.postgresql.org>`_ is supported as an
695 alternative to the `SQLite <https://sqlite.org/>`_ database that Synapse has
676696 traditionally used for convenience and simplicity.
677697
678698 The advantages of Postgres include:
696716 It is recommended to put a reverse proxy such as
697717 `nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
698718 `Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
699 `HAProxy <http://www.haproxy.org/>`_ in front of Synapse. One advantage of
719 `HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
700720 doing so is that it means that you can expose the default https port (443) to
701721 Matrix clients without needing to run Synapse with root privileges.
702722
00 # Synapse Docker
1
2 The `matrixdotorg/synapse` Docker image will run Synapse as a single process. It does not provide a
3 database server or a TURN server; you should run these separately.
4
5 If you run a Postgres server, you should simply include it in the same Compose
6 project or set the proper environment variables and the image will automatically
7 use that server.
8
9 ## Build
10
11 Build the docker image with the `docker build` command from the root of the synapse repository.
12
13 ```
14 docker build -t docker.io/matrixdotorg/synapse .
15 ```
16
17 The `-t` option sets the image tag. Official images are tagged `matrixdotorg/synapse:<version>` where `<version>` is the same as the release tag in the synapse git repository.
18
19 You may have a local Python wheel cache available, in which case copy the relevant packages into the ``cache/`` directory at the root of the project.
20
21 ## Run
22
23 This image is designed to run either with an automatically generated configuration
24 file or with a custom configuration that requires manual editing.
251
262 ### Automated configuration
273
5935 docker-compose up -d
6036 ```
6137
62 ### Without Compose
38 ### More information
6339
64 If you do not wish to use Compose, you may still run this image using plain
65 Docker commands. Note that the following is just a guideline and you may need
66 to add parameters to the docker run command to account for the network situation
67 with your postgres database.
68
69 ```
70 docker run \
71 -d \
72 --name synapse \
73 -v ${DATA_PATH}:/data \
74 -e SYNAPSE_SERVER_NAME=my.matrix.host \
75 -e SYNAPSE_REPORT_STATS=yes \
76 docker.io/matrixdotorg/synapse:latest
77 ```
78
79 ## Volumes
80
81 The image expects a single volume, located at ``/data``, that will hold:
82
83 * temporary files during uploads;
84 * uploaded media and thumbnails;
85 * the SQLite database if you do not configure postgres;
86 * the appservices configuration.
87
88 You are free to use separate volumes depending on storage endpoints at your
89 disposal. For instance, ``/data/media`` could be stored on large but
90 low-performance HDD storage while other files could be stored on
91 high-performance endpoints.
92
93 In order to set up an application service, simply create an ``appservices``
94 directory in the data volume and write the application service Yaml
95 configuration file there. Multiple application services are supported.
96
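A minimal way to drop an application service into the volume can be sketched like this (a hedged example: the registration fields follow the general Matrix application-service format, and every value below is a placeholder):

```python
import os

# Placeholder application-service registration, written as plain text
# to avoid a YAML dependency. All values below are examples only.
APPSERVICE_YAML = """\
id: example-bridge
url: http://localhost:9000
as_token: CHANGE_ME
hs_token: CHANGE_ME_TOO
sender_localpart: examplebot
namespaces:
  users:
    - exclusive: true
      regex: "@example_.*"
"""

def install_appservice(data_dir, filename="example-bridge.yaml"):
    """Write one appservice config into <data_dir>/appservices/."""
    appservice_dir = os.path.join(data_dir, "appservices")
    if not os.path.isdir(appservice_dir):
        os.makedirs(appservice_dir)
    path = os.path.join(appservice_dir, filename)
    with open(path, "w") as f:
        f.write(APPSERVICE_YAML)
    return path
```
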
97 ## Environment
98
99 Unless you specify a custom path for the configuration file, a very generic
100 file will be generated, based on the following environment settings.
101 These are a good starting point for setting up your own deployment.
102
103 Global settings:
104
105 * ``UID``, the user id Synapse will run as [default 991]
106 * ``GID``, the group id Synapse will run as [default 991]
107 * ``SYNAPSE_CONFIG_PATH``, path to a custom config file
108
109 If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
110 then customize it manually. No other environment variable is required.
111
112 Otherwise, a dynamic configuration file will be used. The following environment
113 variables are available for configuration:
114
115 * ``SYNAPSE_SERVER_NAME`` (mandatory), the public hostname of this server.
116 * ``SYNAPSE_REPORT_STATS`` (mandatory, ``yes`` or ``no``), enable anonymous
117 statistics reporting back to the Matrix project which helps us to get funding.
118 * ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
119 you run your own TLS-capable reverse proxy).
120 * ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
121 the Synapse instance.
122 * ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guests to join this server.
123 * ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
124 * ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
125 * ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
126 key in order to enable recaptcha upon registration.
127 * ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
128 key in order to enable recaptcha upon registration.
129 * ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
130 URIs to enable TURN for this homeserver.
131 * ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
132
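To make the flow from these variables to the generated file concrete, here is a deliberately simplified stand-in for the Jinja2 rendering that `start.py` performs (the regex substitution only handles plain ``{{ VAR }}`` placeholders; it is a sketch, not the real mechanism):

```python
import re

def render_config(template_text, environ):
    """Replace {{ VAR }} placeholders with environment values.

    A toy version of the Jinja2 rendering used to build homeserver.yaml;
    unknown variables render as empty strings.
    """
    def replace(match):
        return str(environ.get(match.group(1), ""))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", replace, template_text)

fragment = 'server_name: "{{ SYNAPSE_SERVER_NAME }}"'
print(render_config(fragment, {"SYNAPSE_SERVER_NAME": "my.matrix.host"}))
```
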
133 Shared secrets, that will be initialized to random values if not set:
134
135 * ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users when
136 registration is disabled.
137 * ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
138 to the server.
139
140 Database specific values (will use SQLite if not set):
141
142 * `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
143 * `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
144 * `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
145 * `POSTGRES_USER` - The user for the synapse postgres database. [default: `matrix`]
146
147 Mail server specific values (will not send emails if not set):
148
149 * ``SYNAPSE_SMTP_HOST``, hostname to the mail server.
150 * ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
151 * ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if any.
152 * ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server if any.
40 For more information on required environment variables and mounts, see the main docker documentation at [/docker/README.md](../../docker/README.md)
+0
-219
contrib/docker/conf/homeserver.yaml
0 # vim:ft=yaml
1
2 ## TLS ##
3
4 tls_certificate_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.crt"
5 tls_private_key_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.key"
6 tls_dh_params_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.dh"
7 no_tls: {{ "True" if SYNAPSE_NO_TLS else "False" }}
8 tls_fingerprints: []
9
10 ## Server ##
11
12 server_name: "{{ SYNAPSE_SERVER_NAME }}"
13 pid_file: /homeserver.pid
14 web_client: False
15 soft_file_limit: 0
16
17 ## Ports ##
18
19 listeners:
20 {% if not SYNAPSE_NO_TLS %}
21 -
22 port: 8448
23 bind_addresses: ['0.0.0.0']
24 type: http
25 tls: true
26 x_forwarded: false
27 resources:
28 - names: [client]
29 compress: true
30 - names: [federation] # Federation APIs
31 compress: false
32 {% endif %}
33
34 - port: 8008
35 tls: false
36 bind_addresses: ['0.0.0.0']
37 type: http
38 x_forwarded: false
39
40 resources:
41 - names: [client]
42 compress: true
43 - names: [federation]
44 compress: false
45
46 ## Database ##
47
48 {% if POSTGRES_PASSWORD %}
49 database:
50 name: "psycopg2"
51 args:
52 user: "{{ POSTGRES_USER or "synapse" }}"
53 password: "{{ POSTGRES_PASSWORD }}"
54 database: "{{ POSTGRES_DB or "synapse" }}"
55 host: "{{ POSTGRES_HOST or "db" }}"
56 port: "{{ POSTGRES_PORT or "5432" }}"
57 cp_min: 5
58 cp_max: 10
59 {% else %}
60 database:
61 name: "sqlite3"
62 args:
63 database: "/data/homeserver.db"
64 {% endif %}
65
66 ## Performance ##
67
68 event_cache_size: "{{ SYNAPSE_EVENT_CACHE_SIZE or "10K" }}"
69 verbose: 0
70 log_file: "/data/homeserver.log"
71 log_config: "/compiled/log.config"
72
73 ## Ratelimiting ##
74
75 rc_messages_per_second: 0.2
76 rc_message_burst_count: 10.0
77 federation_rc_window_size: 1000
78 federation_rc_sleep_limit: 10
79 federation_rc_sleep_delay: 500
80 federation_rc_reject_limit: 50
81 federation_rc_concurrent: 3
82
83 ## Files ##
84
85 media_store_path: "/data/media"
86 uploads_path: "/data/uploads"
87 max_upload_size: "10M"
88 max_image_pixels: "32M"
89 dynamic_thumbnails: false
90
91 # List of thumbnail to precalculate when an image is uploaded.
92 thumbnail_sizes:
93 - width: 32
94 height: 32
95 method: crop
96 - width: 96
97 height: 96
98 method: crop
99 - width: 320
100 height: 240
101 method: scale
102 - width: 640
103 height: 480
104 method: scale
105 - width: 800
106 height: 600
107 method: scale
108
109 url_preview_enabled: False
110 max_spider_size: "10M"
111
112 ## Captcha ##
113
114 {% if SYNAPSE_RECAPTCHA_PUBLIC_KEY %}
115 recaptcha_public_key: "{{ SYNAPSE_RECAPTCHA_PUBLIC_KEY }}"
116 recaptcha_private_key: "{{ SYNAPSE_RECAPTCHA_PRIVATE_KEY }}"
117 enable_registration_captcha: True
118 recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
119 {% else %}
120 recaptcha_public_key: "YOUR_PUBLIC_KEY"
121 recaptcha_private_key: "YOUR_PRIVATE_KEY"
122 enable_registration_captcha: False
123 recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
124 {% endif %}
125
126 ## Turn ##
127
128 {% if SYNAPSE_TURN_URIS %}
129 turn_uris:
130 {% for uri in SYNAPSE_TURN_URIS.split(',') %} - "{{ uri }}"
131 {% endfor %}
132 turn_shared_secret: "{{ SYNAPSE_TURN_SECRET }}"
133 turn_user_lifetime: "1h"
134 turn_allow_guests: True
135 {% else %}
136 turn_uris: []
137 turn_shared_secret: "YOUR_SHARED_SECRET"
138 turn_user_lifetime: "1h"
139 turn_allow_guests: True
140 {% endif %}
141
142 ## Registration ##
143
144 enable_registration: {{ "True" if SYNAPSE_ENABLE_REGISTRATION else "False" }}
145 registration_shared_secret: "{{ SYNAPSE_REGISTRATION_SHARED_SECRET }}"
146 bcrypt_rounds: 12
147 allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
148 enable_group_creation: true
149
150 # The list of identity servers trusted to verify third party
151 # identifiers by this server.
152 trusted_third_party_id_servers:
153 - matrix.org
154 - vector.im
155 - riot.im
156
157 ## Metrics ###
158
159 {% if SYNAPSE_REPORT_STATS.lower() == "yes" %}
160 enable_metrics: True
161 report_stats: True
162 {% else %}
163 enable_metrics: False
164 report_stats: False
165 {% endif %}
166
167 ## API Configuration ##
168
169 room_invite_state_types:
170 - "m.room.join_rules"
171 - "m.room.canonical_alias"
172 - "m.room.avatar"
173 - "m.room.name"
174
175 {% if SYNAPSE_APPSERVICES %}
176 app_service_config_files:
177 {% for appservice in SYNAPSE_APPSERVICES %} - "{{ appservice }}"
178 {% endfor %}
179 {% else %}
180 app_service_config_files: []
181 {% endif %}
182
183 macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"
184 expire_access_token: False
185
186 ## Signing Keys ##
187
188 signing_key_path: "/data/{{ SYNAPSE_SERVER_NAME }}.signing.key"
189 old_signing_keys: {}
190 key_refresh_interval: "1d" # 1 Day.
191
192 # The trusted servers to download signing keys from.
193 perspectives:
194 servers:
195 "matrix.org":
196 verify_keys:
197 "ed25519:auto":
198 key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"
199
200 password_config:
201 enabled: true
202
203 {% if SYNAPSE_SMTP_HOST %}
204 email:
205 enable_notifs: false
206 smtp_host: "{{ SYNAPSE_SMTP_HOST }}"
207 smtp_port: {{ SYNAPSE_SMTP_PORT or "25" }}
208 smtp_user: "{{ SYNAPSE_SMTP_USER }}"
209 smtp_pass: "{{ SYNAPSE_SMTP_PASSWORD }}"
210 require_transport_security: False
211 notif_from: "{{ SYNAPSE_SMTP_FROM or "hostmaster@" + SYNAPSE_SERVER_NAME }}"
212 app_name: Matrix
213 template_dir: res/templates
214 notif_template_html: notif_mail.html
215 notif_template_text: notif_mail.txt
216 notif_for_new_users: True
217 riot_base_url: "https://{{ SYNAPSE_SERVER_NAME }}"
218 {% endif %}
+0
-29
contrib/docker/conf/log.config
0 version: 1
1
2 formatters:
3 precise:
4 format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s'
5
6 filters:
7 context:
8 (): synapse.util.logcontext.LoggingContextFilter
9 request: ""
10
11 handlers:
12 console:
13 class: logging.StreamHandler
14 formatter: precise
15 filters: [context]
16
17 loggers:
18 synapse:
19 level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
20
21 synapse.storage.SQL:
22 # beware: increasing this to DEBUG will make synapse log sensitive
23 # information such as access tokens.
24 level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
25
26 root:
27 level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
28 handlers: [console]
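Once the Jinja2 variables above are rendered, this file is a standard Python `logging.config` dictionary schema (`version: 1`). A minimal sketch of how such a config is applied, with the Synapse-specific `LoggingContextFilter` omitted:

```python
import logging
import logging.config

# Rendered equivalent of the template above (filter omitted for brevity).
config = {
    "version": 1,
    "formatters": {
        "precise": {
            "format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "precise",
        },
    },
    "loggers": {
        "synapse": {"level": "WARNING"},
    },
    "root": {"level": "WARNING", "handlers": ["console"]},
}

logging.config.dictConfig(config)
```

After this runs, `logging.getLogger("synapse")` reports an effective level of `WARNING`, matching the `SYNAPSE_LOG_LEVEL` default in the template.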
55 services:
66
77 synapse:
8 build: ../..
89 image: docker.io/matrixdotorg/synapse:latest
910 # Since synapse does not retry to connect to the database, restart upon
1011 # failure
+0
-66
contrib/docker/start.py
0 #!/usr/local/bin/python
1
2 import jinja2
3 import os
4 import sys
5 import subprocess
6 import glob
7
8 # Utility functions
9 convert = lambda src, dst, environ: open(dst, "w").write(jinja2.Template(open(src).read()).render(**environ))
10
11 def check_arguments(environ, args):
12 for argument in args:
13 if argument not in environ:
14 print("Environment variable %s is mandatory, exiting." % argument)
15 sys.exit(2)
16
17 def generate_secrets(environ, secrets):
18 for name, secret in secrets.items():
19 if secret not in environ:
20 filename = "/data/%s.%s.key" % (environ["SYNAPSE_SERVER_NAME"], name)
21 if os.path.exists(filename):
22 with open(filename) as handle: value = handle.read()
23 else:
24 print("Generating a random secret for {}".format(name))
25 value = os.urandom(32).encode("hex")
26 with open(filename, "w") as handle: handle.write(value)
27 environ[secret] = value
28
29 # Prepare the configuration
30 mode = sys.argv[1] if len(sys.argv) > 1 else None
31 environ = os.environ.copy()
32 ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
33 args = ["python", "-m", "synapse.app.homeserver"]
34
35 # In generate mode, generate a configuration and any missing keys, then exit
36 if mode == "generate":
37 check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS", "SYNAPSE_CONFIG_PATH"))
38 args += [
39 "--server-name", environ["SYNAPSE_SERVER_NAME"],
40 "--report-stats", environ["SYNAPSE_REPORT_STATS"],
41 "--config-path", environ["SYNAPSE_CONFIG_PATH"],
42 "--generate-config"
43 ]
44 os.execv("/usr/local/bin/python", args)
45
46 # In normal mode, generate missing keys if any, then run synapse
47 else:
48 # Parse the configuration file
49 if "SYNAPSE_CONFIG_PATH" in environ:
50 args += ["--config-path", environ["SYNAPSE_CONFIG_PATH"]]
51 else:
52 check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"))
53 generate_secrets(environ, {
54 "registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
55 "macaroon": "SYNAPSE_MACAROON_SECRET_KEY"
56 })
57 environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
58 if not os.path.exists("/compiled"): os.mkdir("/compiled")
59 convert("/conf/homeserver.yaml", "/compiled/homeserver.yaml", environ)
60 convert("/conf/log.config", "/compiled/log.config", environ)
61 subprocess.check_output(["chown", "-R", ownership, "/data"])
62 args += ["--config-path", "/compiled/homeserver.yaml"]
63 # Generate missing keys and start synapse
64 subprocess.check_output(args + ["--generate-keys"])
65 os.execv("/sbin/su-exec", ["su-exec", ownership] + args)
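Note that `os.urandom(32).encode("hex")` in `generate_secrets` above is Python 2 only; `bytes.encode` was removed in Python 3. A sketch of the same hex-encoded secret under Python 3 (the function name is illustrative, not part of the script):

```python
import binascii
import os

def random_hex_secret(nbytes=32):
    """Return `nbytes` of OS randomness as a lowercase hex string.

    Python 3 replacement for the Python 2 idiom os.urandom(n).encode("hex");
    equivalently, os.urandom(nbytes).hex() on Python 3.5+.
    """
    return binascii.hexlify(os.urandom(nbytes)).decode("ascii")
```

As in the original script, 32 random bytes yield a 64-character hex string suitable for `registration_shared_secret` or `macaroon_secret_key`.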
0 # Using the Synapse Grafana dashboard
1
2 0. Set up Prometheus and Grafana (out of scope for this readme). Useful documentation on using Grafana with Prometheus: http://docs.grafana.org/features/datasources/prometheus/
3 1. Have Prometheus scrape your Synapse: https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst
4 2. Import the dashboard into Grafana: download `synapse.json`, import it, and select the correct Prometheus datasource. See http://docs.grafana.org/reference/export_import/
5 3. Set up additional recording rules.
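For step 1, a Prometheus scrape job might look like the following fragment; the job name, metrics path, and target port are assumptions to adapt to your own metrics listener (see the metrics-howto linked above):

```yaml
# prometheus.yml (fragment) -- assumed example, adjust to your setup
scrape_configs:
  - job_name: "synapse"
    metrics_path: "/_synapse/metrics"
    static_configs:
      - targets: ["localhost:9092"]
```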
0 {
1 "__inputs": [
2 {
3 "name": "DS_PROMETHEUS",
4 "label": "Prometheus",
5 "description": "",
6 "type": "datasource",
7 "pluginId": "prometheus",
8 "pluginName": "Prometheus"
9 }
10 ],
11 "__requires": [
12 {
13 "type": "grafana",
14 "id": "grafana",
15 "name": "Grafana",
16 "version": "5.2.0"
17 },
18 {
19 "type": "panel",
20 "id": "graph",
21 "name": "Graph",
22 "version": "5.0.0"
23 },
24 {
25 "type": "panel",
26 "id": "heatmap",
27 "name": "Heatmap",
28 "version": "5.0.0"
29 },
30 {
31 "type": "datasource",
32 "id": "prometheus",
33 "name": "Prometheus",
34 "version": "5.0.0"
35 }
36 ],
37 "annotations": {
38 "list": [
39 {
40 "builtIn": 1,
41 "datasource": "$datasource",
42 "enable": false,
43 "hide": true,
44 "iconColor": "rgba(0, 211, 255, 1)",
45 "limit": 100,
46 "name": "Annotations & Alerts",
47 "showIn": 0,
48 "type": "dashboard"
49 }
50 ]
51 },
52 "editable": true,
53 "gnetId": null,
54 "graphTooltip": 0,
55 "id": null,
56 "iteration": 1533026624326,
57 "links": [
58 {
59 "asDropdown": true,
60 "icon": "external link",
61 "keepTime": true,
62 "tags": [
63 "matrix"
64 ],
65 "title": "Dashboards",
66 "type": "dashboards"
67 }
68 ],
69 "panels": [
70 {
71 "collapsed": false,
72 "gridPos": {
73 "h": 1,
74 "w": 24,
75 "x": 0,
76 "y": 0
77 },
78 "id": 73,
79 "panels": [],
80 "title": "Overview",
81 "type": "row"
82 },
83 {
84 "aliasColors": {},
85 "bars": false,
86 "dashLength": 10,
87 "dashes": false,
88 "datasource": "${DS_PROMETHEUS}",
89 "fill": 1,
90 "gridPos": {
91 "h": 9,
92 "w": 12,
93 "x": 0,
94 "y": 1
95 },
96 "id": 75,
97 "legend": {
98 "avg": false,
99 "current": false,
100 "max": false,
101 "min": false,
102 "show": true,
103 "total": false,
104 "values": false
105 },
106 "lines": true,
107 "linewidth": 1,
108 "links": [],
109 "nullPointMode": "null",
110 "percentage": false,
111 "pointradius": 5,
112 "points": false,
113 "renderer": "flot",
114 "seriesOverrides": [],
115 "spaceLength": 10,
116 "stack": false,
117 "steppedLine": false,
118 "targets": [
119 {
120 "expr": "process_cpu_seconds:rate2m{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
121 "format": "time_series",
122 "intervalFactor": 1,
123 "legendFormat": "{{job}}-{{index}} ",
124 "refId": "A"
125 }
126 ],
127 "thresholds": [],
128 "timeFrom": null,
129 "timeShift": null,
130 "title": "CPU usage",
131 "tooltip": {
132 "shared": true,
133 "sort": 0,
134 "value_type": "individual"
135 },
136 "type": "graph",
137 "xaxis": {
138 "buckets": null,
139 "mode": "time",
140 "name": null,
141 "show": true,
142 "values": []
143 },
144 "yaxes": [
145 {
146 "decimals": null,
147 "format": "percentunit",
148 "label": null,
149 "logBase": 1,
150 "max": "1",
151 "min": "0",
152 "show": true
153 },
154 {
155 "format": "short",
156 "label": null,
157 "logBase": 1,
158 "max": null,
159 "min": null,
160 "show": true
161 }
162 ],
163 "yaxis": {
164 "align": false,
165 "alignLevel": null
166 }
167 },
168 {
169 "cards": {
170 "cardPadding": 0,
171 "cardRound": null
172 },
173 "color": {
174 "cardColor": "#b4ff00",
175 "colorScale": "sqrt",
176 "colorScheme": "interpolateSpectral",
177 "exponent": 0.5,
178 "mode": "spectrum"
179 },
180 "dataFormat": "tsbuckets",
181 "datasource": "${DS_PROMETHEUS}",
182 "gridPos": {
183 "h": 9,
184 "w": 12,
185 "x": 12,
186 "y": 1
187 },
188 "heatmap": {},
189 "highlightCards": true,
190 "id": 85,
191 "legend": {
192 "show": false
193 },
194 "links": [],
195 "targets": [
196 {
197 "expr": "sum(rate(synapse_http_server_response_time_seconds_bucket{servlet='RoomSendEventRestServlet',instance=\"$instance\"}[$bucket_size])) by (le)",
198 "format": "heatmap",
199 "intervalFactor": 1,
200 "legendFormat": "{{le}}",
201 "refId": "A"
202 }
203 ],
204 "title": "Event Send Time",
205 "tooltip": {
206 "show": true,
207 "showHistogram": false
208 },
209 "type": "heatmap",
210 "xAxis": {
211 "show": true
212 },
213 "xBucketNumber": null,
214 "xBucketSize": null,
215 "yAxis": {
216 "decimals": null,
217 "format": "s",
218 "logBase": 2,
219 "max": null,
220 "min": null,
221 "show": true,
222 "splitFactor": null
223 },
224 "yBucketBound": "auto",
225 "yBucketNumber": null,
226 "yBucketSize": null
227 },
228 {
229 "aliasColors": {},
230 "bars": false,
231 "dashLength": 10,
232 "dashes": false,
233 "datasource": "$datasource",
234 "editable": true,
235 "error": false,
236 "fill": 1,
237 "grid": {},
238 "gridPos": {
239 "h": 7,
240 "w": 12,
241 "x": 0,
242 "y": 10
243 },
244 "id": 33,
245 "legend": {
246 "avg": false,
247 "current": false,
248 "max": false,
249 "min": false,
250 "show": false,
251 "total": false,
252 "values": false
253 },
254 "lines": true,
255 "linewidth": 2,
256 "links": [],
257 "nullPointMode": "null",
258 "percentage": false,
259 "pointradius": 5,
260 "points": false,
261 "renderer": "flot",
262 "seriesOverrides": [],
263 "spaceLength": 10,
264 "stack": false,
265 "steppedLine": false,
266 "targets": [
267 {
268 "expr": "sum(rate(synapse_storage_events_persisted_events{instance=\"$instance\"}[$bucket_size])) without (job,index)",
269 "format": "time_series",
270 "intervalFactor": 2,
271 "legendFormat": "",
272 "refId": "A",
273 "step": 20,
274 "target": ""
275 }
276 ],
277 "thresholds": [],
278 "timeFrom": null,
279 "timeShift": null,
280 "title": "Events Persisted",
281 "tooltip": {
282 "shared": true,
283 "sort": 0,
284 "value_type": "cumulative"
285 },
286 "type": "graph",
287 "xaxis": {
288 "buckets": null,
289 "mode": "time",
290 "name": null,
291 "show": true,
292 "values": []
293 },
294 "yaxes": [
295 {
296 "format": "hertz",
297 "logBase": 1,
298 "max": null,
299 "min": null,
300 "show": true
301 },
302 {
303 "format": "short",
304 "logBase": 1,
305 "max": null,
306 "min": null,
307 "show": true
308 }
309 ],
310 "yaxis": {
311 "align": false,
312 "alignLevel": null
313 }
314 },
315 {
316 "collapsed": true,
317 "gridPos": {
318 "h": 1,
319 "w": 24,
320 "x": 0,
321 "y": 17
322 },
323 "id": 54,
324 "panels": [
325 {
326 "aliasColors": {},
327 "bars": false,
328 "dashLength": 10,
329 "dashes": false,
330 "datasource": "$datasource",
331 "editable": true,
332 "error": false,
333 "fill": 0,
334 "grid": {},
335 "gridPos": {
336 "h": 7,
337 "w": 12,
338 "x": 0,
339 "y": 18
340 },
341 "id": 34,
342 "legend": {
343 "avg": false,
344 "current": false,
345 "max": false,
346 "min": false,
347 "show": true,
348 "total": false,
349 "values": false
350 },
351 "lines": true,
352 "linewidth": 2,
353 "links": [],
354 "nullPointMode": "null",
355 "percentage": false,
356 "pointradius": 5,
357 "points": false,
358 "renderer": "flot",
359 "seriesOverrides": [],
360 "spaceLength": 10,
361 "stack": false,
362 "steppedLine": true,
363 "targets": [
364 {
365 "expr": "process_resident_memory_bytes{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
366 "format": "time_series",
367 "intervalFactor": 2,
368 "legendFormat": "{{job}} {{index}}",
369 "refId": "A",
370 "step": 20,
371 "target": ""
372 }
373 ],
374 "thresholds": [],
375 "timeFrom": null,
376 "timeShift": null,
377 "title": "Memory",
378 "tooltip": {
379 "shared": true,
380 "sort": 0,
381 "value_type": "cumulative"
382 },
383 "type": "graph",
384 "xaxis": {
385 "buckets": null,
386 "mode": "time",
387 "name": null,
388 "show": true,
389 "values": []
390 },
391 "yaxes": [
392 {
393 "format": "bytes",
394 "logBase": 1,
395 "max": null,
396 "min": "0",
397 "show": true
398 },
399 {
400 "format": "short",
401 "logBase": 1,
402 "max": null,
403 "min": null,
404 "show": true
405 }
406 ],
407 "yaxis": {
408 "align": false,
409 "alignLevel": null
410 }
411 },
412 {
413 "aliasColors": {},
414 "bars": false,
415 "dashLength": 10,
416 "dashes": false,
417 "datasource": "$datasource",
418 "fill": 1,
419 "gridPos": {
420 "h": 7,
421 "w": 12,
422 "x": 12,
423 "y": 18
424 },
425 "id": 37,
426 "legend": {
427 "avg": false,
428 "current": false,
429 "max": false,
430 "min": false,
431 "show": true,
432 "total": false,
433 "values": false
434 },
435 "lines": true,
436 "linewidth": 1,
437 "links": [],
438 "nullPointMode": "null",
439 "percentage": false,
440 "pointradius": 5,
441 "points": false,
442 "renderer": "flot",
443 "seriesOverrides": [
444 {
445 "alias": "/max$/",
446 "color": "#890F02",
447 "fill": 0,
448 "legend": false
449 }
450 ],
451 "spaceLength": 10,
452 "stack": false,
453 "steppedLine": false,
454 "targets": [
455 {
456 "expr": "process_open_fds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
457 "format": "time_series",
458 "hide": false,
459 "intervalFactor": 2,
460 "legendFormat": "{{job}}-{{index}}",
461 "refId": "A",
462 "step": 20
463 },
464 {
465 "expr": "process_max_fds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
466 "format": "time_series",
467 "hide": true,
468 "intervalFactor": 2,
469 "legendFormat": "{{job}}-{{index}} max",
470 "refId": "B",
471 "step": 20
472 }
473 ],
474 "thresholds": [],
475 "timeFrom": null,
476 "timeShift": null,
477 "title": "Open FDs",
478 "tooltip": {
479 "shared": true,
480 "sort": 0,
481 "value_type": "individual"
482 },
483 "type": "graph",
484 "xaxis": {
485 "buckets": null,
486 "mode": "time",
487 "name": null,
488 "show": true,
489 "values": []
490 },
491 "yaxes": [
492 {
493 "format": "none",
494 "label": null,
495 "logBase": 1,
496 "max": null,
497 "min": null,
498 "show": true
499 },
500 {
501 "format": "short",
502 "label": null,
503 "logBase": 1,
504 "max": null,
505 "min": null,
506 "show": true
507 }
508 ],
509 "yaxis": {
510 "align": false,
511 "alignLevel": null
512 }
513 },
514 {
515 "aliasColors": {},
516 "bars": false,
517 "dashLength": 10,
518 "dashes": false,
519 "datasource": "$datasource",
520 "fill": 1,
521 "gridPos": {
522 "h": 7,
523 "w": 12,
524 "x": 0,
525 "y": 25
526 },
527 "id": 48,
528 "legend": {
529 "avg": false,
530 "current": false,
531 "max": false,
532 "min": false,
533 "show": true,
534 "total": false,
535 "values": false
536 },
537 "lines": true,
538 "linewidth": 1,
539 "links": [],
540 "nullPointMode": "null",
541 "percentage": false,
542 "pointradius": 5,
543 "points": false,
544 "renderer": "flot",
545 "seriesOverrides": [],
546 "spaceLength": 10,
547 "stack": false,
548 "steppedLine": false,
549 "targets": [
550 {
551 "expr": "rate(synapse_storage_schedule_time_sum{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])/rate(synapse_storage_schedule_time_count[$bucket_size])",
552 "format": "time_series",
553 "intervalFactor": 2,
554 "legendFormat": "{{job}}-{{index}}",
555 "refId": "A",
556 "step": 20
557 }
558 ],
559 "thresholds": [],
560 "timeFrom": null,
561 "timeShift": null,
562 "title": "Avg time waiting for db conn",
563 "tooltip": {
564 "shared": true,
565 "sort": 0,
566 "value_type": "individual"
567 },
568 "type": "graph",
569 "xaxis": {
570 "buckets": null,
571 "mode": "time",
572 "name": null,
573 "show": true,
574 "values": []
575 },
576 "yaxes": [
577 {
578 "decimals": null,
579 "format": "s",
580 "label": "",
581 "logBase": 1,
582 "max": null,
583 "min": "0",
584 "show": true
585 },
586 {
587 "format": "short",
588 "label": null,
589 "logBase": 1,
590 "max": null,
591 "min": null,
592 "show": false
593 }
594 ],
595 "yaxis": {
596 "align": false,
597 "alignLevel": null
598 }
599 },
600 {
601 "aliasColors": {},
602 "bars": false,
603 "dashLength": 10,
604 "dashes": false,
605 "datasource": "$datasource",
606 "fill": 1,
607 "gridPos": {
608 "h": 7,
609 "w": 12,
610 "x": 12,
611 "y": 25
612 },
613 "id": 49,
614 "legend": {
615 "avg": false,
616 "current": false,
617 "max": false,
618 "min": false,
619 "show": true,
620 "total": false,
621 "values": false
622 },
623 "lines": true,
624 "linewidth": 1,
625 "links": [],
626 "nullPointMode": "null",
627 "percentage": false,
628 "pointradius": 5,
629 "points": false,
630 "renderer": "flot",
631 "seriesOverrides": [
632 {
633 "alias": "/^up/",
634 "legend": false,
635 "yaxis": 2
636 }
637 ],
638 "spaceLength": 10,
639 "stack": false,
640 "steppedLine": false,
641 "targets": [
642 {
643 "expr": "scrape_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
644 "format": "time_series",
645 "interval": "",
646 "intervalFactor": 2,
647 "legendFormat": "{{job}}-{{index}}",
648 "refId": "A",
649 "step": 20
650 }
651 ],
652 "thresholds": [],
653 "timeFrom": null,
654 "timeShift": null,
655 "title": "Prometheus scrape time",
656 "tooltip": {
657 "shared": true,
658 "sort": 0,
659 "value_type": "individual"
660 },
661 "type": "graph",
662 "xaxis": {
663 "buckets": null,
664 "mode": "time",
665 "name": null,
666 "show": true,
667 "values": []
668 },
669 "yaxes": [
670 {
671 "format": "s",
672 "label": null,
673 "logBase": 1,
674 "max": null,
675 "min": "0",
676 "show": true
677 },
678 {
679 "decimals": 0,
680 "format": "none",
681 "label": "",
682 "logBase": 1,
683 "max": "0",
684 "min": "-1",
685 "show": false
686 }
687 ],
688 "yaxis": {
689 "align": false,
690 "alignLevel": null
691 }
692 },
693 {
694 "aliasColors": {},
695 "bars": false,
696 "dashLength": 10,
697 "dashes": false,
698 "datasource": "$datasource",
699 "fill": 1,
700 "gridPos": {
701 "h": 7,
702 "w": 12,
703 "x": 0,
704 "y": 32
705 },
706 "id": 50,
707 "legend": {
708 "avg": false,
709 "current": false,
710 "max": false,
711 "min": false,
712 "show": true,
713 "total": false,
714 "values": false
715 },
716 "lines": true,
717 "linewidth": 1,
718 "links": [],
719 "nullPointMode": "null",
720 "percentage": false,
721 "pointradius": 5,
722 "points": false,
723 "renderer": "flot",
724 "seriesOverrides": [],
725 "spaceLength": 10,
726 "stack": false,
727 "steppedLine": false,
728 "targets": [
729 {
730 "expr": "rate(python_twisted_reactor_tick_time_sum{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])/rate(python_twisted_reactor_tick_time_count[$bucket_size])",
731 "format": "time_series",
732 "interval": "",
733 "intervalFactor": 2,
734 "legendFormat": "{{job}}-{{index}}",
735 "refId": "A",
736 "step": 20
737 }
738 ],
739 "thresholds": [],
740 "timeFrom": null,
741 "timeShift": null,
742 "title": "Avg reactor tick time",
743 "tooltip": {
744 "shared": true,
745 "sort": 0,
746 "value_type": "individual"
747 },
748 "type": "graph",
749 "xaxis": {
750 "buckets": null,
751 "mode": "time",
752 "name": null,
753 "show": true,
754 "values": []
755 },
756 "yaxes": [
757 {
758 "format": "s",
759 "label": null,
760 "logBase": 1,
761 "max": null,
762 "min": null,
763 "show": true
764 },
765 {
766 "format": "short",
767 "label": null,
768 "logBase": 1,
769 "max": null,
770 "min": null,
771 "show": false
772 }
773 ],
774 "yaxis": {
775 "align": false,
776 "alignLevel": null
777 }
778 },
779 {
780 "aliasColors": {},
781 "bars": false,
782 "dashLength": 10,
783 "dashes": false,
784 "datasource": "$datasource",
785 "editable": true,
786 "error": false,
787 "fill": 1,
788 "grid": {},
789 "gridPos": {
790 "h": 7,
791 "w": 12,
792 "x": 12,
793 "y": 32
794 },
795 "id": 5,
796 "legend": {
797 "alignAsTable": false,
798 "avg": false,
799 "current": false,
800 "hideEmpty": false,
801 "hideZero": false,
802 "max": false,
803 "min": false,
804 "rightSide": false,
805 "show": true,
806 "total": false,
807 "values": false
808 },
809 "lines": true,
810 "linewidth": 1,
811 "links": [],
812 "nullPointMode": "null",
813 "percentage": false,
814 "pointradius": 5,
815 "points": false,
816 "renderer": "flot",
817 "seriesOverrides": [
818 {
819 "alias": "/user/"
820 },
821 {
822 "alias": "/system/"
823 }
824 ],
825 "spaceLength": 10,
826 "stack": false,
827 "steppedLine": false,
828 "targets": [
829 {
830 "expr": "rate(process_cpu_system_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
831 "format": "time_series",
832 "intervalFactor": 1,
833 "legendFormat": "{{job}}-{{index}} system ",
834 "metric": "",
835 "refId": "B",
836 "step": 20
837 },
838 {
839 "expr": "rate(process_cpu_user_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
840 "format": "time_series",
841 "hide": false,
842 "interval": "",
843 "intervalFactor": 1,
844 "legendFormat": "{{job}}-{{index}} user",
845 "refId": "A",
846 "step": 20
847 }
848 ],
849 "thresholds": [
850 {
851 "colorMode": "custom",
852 "line": true,
853 "lineColor": "rgba(216, 200, 27, 0.27)",
854 "op": "gt",
855 "value": 0.5
856 },
857 {
858 "colorMode": "custom",
859 "line": true,
860 "lineColor": "rgba(234, 112, 112, 0.22)",
861 "op": "gt",
862 "value": 0.8
863 }
864 ],
865 "timeFrom": null,
866 "timeShift": null,
867 "title": "CPU",
868 "tooltip": {
869 "shared": true,
870 "sort": 0,
871 "value_type": "individual"
872 },
873 "type": "graph",
874 "xaxis": {
875 "buckets": null,
876 "mode": "time",
877 "name": null,
878 "show": true,
879 "values": []
880 },
881 "yaxes": [
882 {
883 "decimals": null,
884 "format": "percentunit",
885 "label": "",
886 "logBase": 1,
887 "max": "1.2",
888 "min": 0,
889 "show": true
890 },
891 {
892 "format": "short",
893 "logBase": 1,
894 "max": null,
895 "min": null,
896 "show": true
897 }
898 ],
899 "yaxis": {
900 "align": false,
901 "alignLevel": null
902 }
903 },
904 {
905 "aliasColors": {},
906 "bars": false,
907 "dashLength": 10,
908 "dashes": false,
909 "datasource": "${DS_PROMETHEUS}",
910 "fill": 0,
911 "gridPos": {
912 "h": 7,
913 "w": 12,
914 "x": 0,
915 "y": 39
916 },
917 "id": 53,
918 "legend": {
919 "avg": false,
920 "current": false,
921 "max": false,
922 "min": false,
923 "show": true,
924 "total": false,
925 "values": false
926 },
927 "lines": true,
928 "linewidth": 1,
929 "links": [],
930 "nullPointMode": "null",
931 "percentage": false,
932 "pointradius": 5,
933 "points": false,
934 "renderer": "flot",
935 "seriesOverrides": [],
936 "spaceLength": 10,
937 "stack": false,
938 "steppedLine": false,
939 "targets": [
940 {
941 "expr": "min_over_time(up{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
942 "format": "time_series",
943 "intervalFactor": 2,
944 "legendFormat": "{{job}}-{{index}}",
945 "refId": "A"
946 }
947 ],
948 "thresholds": [],
949 "timeFrom": null,
950 "timeShift": null,
951 "title": "Up",
952 "tooltip": {
953 "shared": true,
954 "sort": 0,
955 "value_type": "individual"
956 },
957 "type": "graph",
958 "xaxis": {
959 "buckets": null,
960 "mode": "time",
961 "name": null,
962 "show": true,
963 "values": []
964 },
965 "yaxes": [
966 {
967 "format": "short",
968 "label": null,
969 "logBase": 1,
970 "max": null,
971 "min": null,
972 "show": true
973 },
974 {
975 "format": "short",
976 "label": null,
977 "logBase": 1,
978 "max": null,
979 "min": null,
980 "show": true
981 }
982 ],
983 "yaxis": {
984 "align": false,
985 "alignLevel": null
986 }
987 }
988 ],
989 "repeat": null,
990 "title": "Process info",
991 "type": "row"
992 },
993 {
994 "collapsed": true,
995 "gridPos": {
996 "h": 1,
997 "w": 24,
998 "x": 0,
999 "y": 18
1000 },
1001 "id": 56,
1002 "panels": [
1003 {
1004 "aliasColors": {},
1005 "bars": false,
1006 "dashLength": 10,
1007 "dashes": false,
1008 "datasource": "$datasource",
1009 "decimals": 1,
1010 "fill": 1,
1011 "gridPos": {
1012 "h": 7,
1013 "w": 12,
1014 "x": 0,
1015 "y": 49
1016 },
1017 "id": 40,
1018 "legend": {
1019 "avg": false,
1020 "current": false,
1021 "max": false,
1022 "min": false,
1023 "show": true,
1024 "total": false,
1025 "values": false
1026 },
1027 "lines": true,
1028 "linewidth": 1,
1029 "links": [],
1030 "nullPointMode": "null",
1031 "percentage": false,
1032 "pointradius": 5,
1033 "points": false,
1034 "renderer": "flot",
1035 "seriesOverrides": [],
1036 "spaceLength": 10,
1037 "stack": false,
1038 "steppedLine": false,
1039 "targets": [
1040 {
1041 "expr": "rate(synapse_storage_events_persisted_by_source_type{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
1042 "format": "time_series",
1043 "intervalFactor": 2,
1044 "legendFormat": "{{type}}",
1045 "refId": "D"
1046 }
1047 ],
1048 "thresholds": [],
1049 "timeFrom": null,
1050 "timeShift": null,
1051 "title": "Events/s Local vs Remote",
1052 "tooltip": {
1053 "shared": true,
1054 "sort": 2,
1055 "value_type": "individual"
1056 },
1057 "type": "graph",
1058 "xaxis": {
1059 "buckets": null,
1060 "mode": "time",
1061 "name": null,
1062 "show": true,
1063 "values": []
1064 },
1065 "yaxes": [
1066 {
1067 "format": "hertz",
1068 "label": "",
1069 "logBase": 1,
1070 "max": null,
1071 "min": "0",
1072 "show": true
1073 },
1074 {
1075 "format": "short",
1076 "label": null,
1077 "logBase": 1,
1078 "max": null,
1079 "min": null,
1080 "show": true
1081 }
1082 ],
1083 "yaxis": {
1084 "align": false,
1085 "alignLevel": null
1086 }
1087 },
1088 {
1089 "aliasColors": {},
1090 "bars": false,
1091 "dashLength": 10,
1092 "dashes": false,
1093 "datasource": "$datasource",
1094 "decimals": 1,
1095 "fill": 1,
1096 "gridPos": {
1097 "h": 7,
1098 "w": 12,
1099 "x": 12,
1100 "y": 49
1101 },
1102 "id": 46,
1103 "legend": {
1104 "avg": false,
1105 "current": false,
1106 "max": false,
1107 "min": false,
1108 "show": true,
1109 "total": false,
1110 "values": false
1111 },
1112 "lines": true,
1113 "linewidth": 1,
1114 "links": [],
1115 "nullPointMode": "null",
1116 "percentage": false,
1117 "pointradius": 5,
1118 "points": false,
1119 "renderer": "flot",
1120 "seriesOverrides": [],
1121 "spaceLength": 10,
1122 "stack": false,
1123 "steppedLine": false,
1124 "targets": [
1125 {
1126 "expr": "rate(synapse_storage_events_persisted_by_event_type{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
1127 "format": "time_series",
1128 "instant": false,
1129 "intervalFactor": 2,
1130 "legendFormat": "{{type}}",
1131 "refId": "A",
1132 "step": 20
1133 }
1134 ],
1135 "thresholds": [],
1136 "timeFrom": null,
1137 "timeShift": null,
1138 "title": "Events/s by Type",
1139 "tooltip": {
1140 "shared": false,
1141 "sort": 2,
1142 "value_type": "individual"
1143 },
1144 "type": "graph",
1145 "xaxis": {
1146 "buckets": null,
1147 "mode": "time",
1148 "name": null,
1149 "show": true,
1150 "values": []
1151 },
1152 "yaxes": [
1153 {
1154 "format": "hertz",
1155 "label": null,
1156 "logBase": 1,
1157 "max": null,
1158 "min": "0",
1159 "show": true
1160 },
1161 {
1162 "format": "short",
1163 "label": null,
1164 "logBase": 1,
1165 "max": null,
1166 "min": null,
1167 "show": true
1168 }
1169 ],
1170 "yaxis": {
1171 "align": false,
1172 "alignLevel": null
1173 }
1174 },
1175 {
1176 "aliasColors": {
1177 "irc-freenode (local)": "#EAB839"
1178 },
1179 "bars": false,
1180 "dashLength": 10,
1181 "dashes": false,
1182 "datasource": "$datasource",
1183 "decimals": 1,
1184 "fill": 1,
1185 "gridPos": {
1186 "h": 7,
1187 "w": 12,
1188 "x": 0,
1189 "y": 56
1190 },
1191 "id": 44,
1192 "legend": {
1193 "alignAsTable": true,
1194 "avg": false,
1195 "current": false,
1196 "hideEmpty": true,
1197 "hideZero": true,
1198 "max": false,
1199 "min": false,
1200 "show": true,
1201 "total": false,
1202 "values": false
1203 },
1204 "lines": true,
1205 "linewidth": 1,
1206 "links": [],
1207 "nullPointMode": "null",
1208 "percentage": false,
1209 "pointradius": 5,
1210 "points": false,
1211 "renderer": "flot",
1212 "seriesOverrides": [],
1213 "spaceLength": 10,
1214 "stack": false,
1215 "steppedLine": false,
1216 "targets": [
1217 {
1218 "expr": "rate(synapse_storage_events_persisted_by_origin{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
1219 "format": "time_series",
1220 "intervalFactor": 2,
1221 "legendFormat": "{{origin_entity}} ({{origin_type}})",
1222 "refId": "A",
1223 "step": 20
1224 }
1225 ],
1226 "thresholds": [],
1227 "timeFrom": null,
1228 "timeShift": null,
1229 "title": "Events/s by Origin",
1230 "tooltip": {
1231 "shared": false,
1232 "sort": 2,
1233 "value_type": "individual"
1234 },
1235 "type": "graph",
1236 "xaxis": {
1237 "buckets": null,
1238 "mode": "time",
1239 "name": null,
1240 "show": true,
1241 "values": []
1242 },
1243 "yaxes": [
1244 {
1245 "format": "hertz",
1246 "label": null,
1247 "logBase": 1,
1248 "max": null,
1249 "min": "0",
1250 "show": true
1251 },
1252 {
1253 "format": "short",
1254 "label": null,
1255 "logBase": 1,
1256 "max": null,
1257 "min": null,
1258 "show": true
1259 }
1260 ],
1261 "yaxis": {
1262 "align": false,
1263 "alignLevel": null
1264 }
1265 },
1266 {
1267 "aliasColors": {},
1268 "bars": false,
1269 "dashLength": 10,
1270 "dashes": false,
1271 "datasource": "$datasource",
1272 "decimals": 1,
1273 "fill": 1,
1274 "gridPos": {
1275 "h": 7,
1276 "w": 12,
1277 "x": 12,
1278 "y": 56
1279 },
1280 "id": 45,
1281 "legend": {
1282 "alignAsTable": true,
1283 "avg": false,
1284 "current": false,
1285 "hideEmpty": true,
1286 "hideZero": true,
1287 "max": false,
1288 "min": false,
1289 "show": true,
1290 "total": false,
1291 "values": false
1292 },
1293 "lines": true,
1294 "linewidth": 1,
1295 "links": [],
1296 "nullPointMode": "null",
1297 "percentage": false,
1298 "pointradius": 5,
1299 "points": false,
1300 "renderer": "flot",
1301 "seriesOverrides": [],
1302 "spaceLength": 10,
1303 "stack": false,
1304 "steppedLine": false,
1305 "targets": [
1306 {
1307 "expr": "sum(rate(synapse_storage_events_persisted_events_sep{job=~\"$job\",index=~\"$index\", type=\"m.room.member\",instance=\"$instance\"}[$bucket_size])) by (origin_type, origin_entity)",
1308 "format": "time_series",
1309 "intervalFactor": 2,
1310 "legendFormat": "{{origin_entity}} ({{origin_type}})",
1311 "refId": "A",
1312 "step": 20
1313 }
1314 ],
1315 "thresholds": [],
1316 "timeFrom": null,
1317 "timeShift": null,
1318 "title": "Memberships/s by Origin",
1319 "tooltip": {
1320 "shared": true,
1321 "sort": 2,
1322 "value_type": "individual"
1323 },
1324 "type": "graph",
1325 "xaxis": {
1326 "buckets": null,
1327 "mode": "time",
1328 "name": null,
1329 "show": true,
1330 "values": []
1331 },
1332 "yaxes": [
1333 {
1334 "format": "hertz",
1335 "label": null,
1336 "logBase": 1,
1337 "max": null,
1338 "min": "0",
1339 "show": true
1340 },
1341 {
1342 "format": "short",
1343 "label": null,
1344 "logBase": 1,
1345 "max": null,
1346 "min": null,
1347 "show": true
1348 }
1349 ],
1350 "yaxis": {
1351 "align": false,
1352 "alignLevel": null
1353 }
1354 }
1355 ],
1356 "repeat": null,
1357 "title": "Event persist rates",
1358 "type": "row"
1359 },
1360 {
1361 "collapsed": true,
1362 "gridPos": {
1363 "h": 1,
1364 "w": 24,
1365 "x": 0,
1366 "y": 19
1367 },
1368 "id": 57,
1369 "panels": [
1370 {
1371 "aliasColors": {},
1372 "bars": false,
1373 "dashLength": 10,
1374 "dashes": false,
1375 "datasource": "$datasource",
1376 "decimals": null,
1377 "editable": true,
1378 "error": false,
1379 "fill": 2,
1380 "grid": {},
1381 "gridPos": {
1382 "h": 8,
1383 "w": 12,
1384 "x": 0,
1385 "y": 48
1386 },
1387 "id": 4,
1388 "legend": {
1389 "alignAsTable": true,
1390 "avg": false,
1391 "current": false,
1392 "hideEmpty": false,
1393 "hideZero": true,
1394 "max": false,
1395 "min": false,
1396 "rightSide": false,
1397 "show": true,
1398 "total": false,
1399 "values": false
1400 },
1401 "lines": true,
1402 "linewidth": 1,
1403 "links": [],
1404 "nullPointMode": "null",
1405 "percentage": false,
1406 "pointradius": 5,
1407 "points": false,
1408 "renderer": "flot",
1409 "seriesOverrides": [],
1410 "spaceLength": 10,
1411 "stack": false,
1412 "steppedLine": false,
1413 "targets": [
1414 {
1415 "expr": "rate(synapse_http_server_requests_received{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
1416 "format": "time_series",
1417 "interval": "",
1418 "intervalFactor": 2,
1419 "legendFormat": "{{job}}-{{index}} {{method}} {{servlet}} {{tag}}",
1420 "refId": "A",
1421 "step": 20
1422 }
1423 ],
1424 "thresholds": [
1425 {
1426 "colorMode": "custom",
1427 "fill": true,
1428 "fillColor": "rgba(216, 200, 27, 0.27)",
1429 "op": "gt",
1430 "value": 100
1431 },
1432 {
1433 "colorMode": "custom",
1434 "fill": true,
1435 "fillColor": "rgba(234, 112, 112, 0.22)",
1436 "op": "gt",
1437 "value": 250
1438 }
1439 ],
1440 "timeFrom": null,
1441 "timeShift": null,
1442 "title": "Request Count by arrival time",
1443 "tooltip": {
1444 "shared": false,
1445 "sort": 0,
1446 "value_type": "individual"
1447 },
1448 "transparent": false,
1449 "type": "graph",
1450 "xaxis": {
1451 "buckets": null,
1452 "mode": "time",
1453 "name": null,
1454 "show": true,
1455 "values": []
1456 },
1457 "yaxes": [
1458 {
1459 "format": "hertz",
1460 "logBase": 1,
1461 "max": null,
1462 "min": null,
1463 "show": true
1464 },
1465 {
1466 "format": "short",
1467 "logBase": 1,
1468 "max": null,
1469 "min": null,
1470 "show": true
1471 }
1472 ],
1473 "yaxis": {
1474 "align": false,
1475 "alignLevel": null
1476 }
1477 },
1478 {
1479 "aliasColors": {},
1480 "bars": false,
1481 "dashLength": 10,
1482 "dashes": false,
1483 "datasource": "$datasource",
1484 "editable": true,
1485 "error": false,
1486 "fill": 1,
1487 "grid": {},
1488 "gridPos": {
1489 "h": 8,
1490 "w": 12,
1491 "x": 12,
1492 "y": 48
1493 },
1494 "id": 32,
1495 "legend": {
1496 "avg": false,
1497 "current": false,
1498 "max": false,
1499 "min": false,
1500 "show": true,
1501 "total": false,
1502 "values": false
1503 },
1504 "lines": true,
1505 "linewidth": 2,
1506 "links": [],
1507 "nullPointMode": "null",
1508 "percentage": false,
1509 "pointradius": 5,
1510 "points": false,
1511 "renderer": "flot",
1512 "seriesOverrides": [],
1513 "spaceLength": 10,
1514 "stack": false,
1515 "steppedLine": false,
1516 "targets": [
1517 {
1518 "expr": "rate(synapse_http_server_requests_received{instance=\"$instance\",job=~\"$job\",index=~\"$index\",method!=\"OPTIONS\"}[$bucket_size]) and topk(10,synapse_http_server_requests_received{instance=\"$instance\",job=~\"$job\",method!=\"OPTIONS\"})",
1519 "format": "time_series",
1520 "intervalFactor": 2,
1521 "legendFormat": "{{method}} {{servlet}} {{job}}-{{index}}",
1522 "refId": "A",
1523 "step": 20,
1524 "target": ""
1525 }
1526 ],
1527 "thresholds": [],
1528 "timeFrom": null,
1529 "timeShift": null,
1530 "title": "Top 10 Request Counts",
1531 "tooltip": {
1532 "shared": false,
1533 "sort": 0,
1534 "value_type": "cumulative"
1535 },
1536 "type": "graph",
1537 "xaxis": {
1538 "buckets": null,
1539 "mode": "time",
1540 "name": null,
1541 "show": true,
1542 "values": []
1543 },
1544 "yaxes": [
1545 {
1546 "format": "hertz",
1547 "logBase": 1,
1548 "max": null,
1549 "min": null,
1550 "show": true
1551 },
1552 {
1553 "format": "short",
1554 "logBase": 1,
1555 "max": null,
1556 "min": null,
1557 "show": true
1558 }
1559 ],
1560 "yaxis": {
1561 "align": false,
1562 "alignLevel": null
1563 }
1564 },
1565 {
1566 "aliasColors": {},
1567 "bars": false,
1568 "dashLength": 10,
1569 "dashes": false,
1570 "datasource": "$datasource",
1571 "decimals": null,
1572 "editable": true,
1573 "error": false,
1574 "fill": 2,
1575 "grid": {},
1576 "gridPos": {
1577 "h": 8,
1578 "w": 12,
1579 "x": 0,
1580 "y": 56
1581 },
1582 "id": 23,
1583 "legend": {
1584 "alignAsTable": true,
1585 "avg": false,
1586 "current": false,
1587 "hideEmpty": false,
1588 "hideZero": true,
1589 "max": false,
1590 "min": false,
1591 "rightSide": false,
1592 "show": true,
1593 "total": false,
1594 "values": false
1595 },
1596 "lines": true,
1597 "linewidth": 1,
1598 "links": [],
1599 "nullPointMode": "null",
1600 "percentage": false,
1601 "pointradius": 5,
1602 "points": false,
1603 "renderer": "flot",
1604 "seriesOverrides": [],
1605 "spaceLength": 10,
1606 "stack": false,
1607 "steppedLine": false,
1608 "targets": [
1609 {
1610 "expr": "rate(synapse_http_server_response_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])+rate(synapse_http_server_response_ru_stime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
1611 "format": "time_series",
1612 "interval": "",
1613 "intervalFactor": 1,
1614 "legendFormat": "{{job}}-{{index}} {{method}} {{servlet}} {{tag}}",
1615 "refId": "A",
1616 "step": 20
1617 }
1618 ],
1619 "thresholds": [
1620 {
1621 "colorMode": "custom",
1622 "fill": true,
1623 "fillColor": "rgba(216, 200, 27, 0.27)",
1624 "op": "gt",
1625 "value": 100,
1626 "yaxis": "left"
1627 },
1628 {
1629 "colorMode": "custom",
1630 "fill": true,
1631 "fillColor": "rgba(234, 112, 112, 0.22)",
1632 "op": "gt",
1633 "value": 250,
1634 "yaxis": "left"
1635 }
1636 ],
1637 "timeFrom": null,
1638 "timeShift": null,
1639 "title": "Total CPU Usage by Endpoint",
1640 "tooltip": {
1641 "shared": false,
1642 "sort": 0,
1643 "value_type": "individual"
1644 },
1645 "transparent": false,
1646 "type": "graph",
1647 "xaxis": {
1648 "buckets": null,
1649 "mode": "time",
1650 "name": null,
1651 "show": true,
1652 "values": []
1653 },
1654 "yaxes": [
1655 {
1656 "format": "percentunit",
1657 "logBase": 1,
1658 "max": null,
1659 "min": null,
1660 "show": true
1661 },
1662 {
1663 "format": "short",
1664 "logBase": 1,
1665 "max": null,
1666 "min": null,
1667 "show": true
1668 }
1669 ],
1670 "yaxis": {
1671 "align": false,
1672 "alignLevel": null
1673 }
1674 },
1675 {
1676 "aliasColors": {},
1677 "bars": false,
1678 "dashLength": 10,
1679 "dashes": false,
1680 "datasource": "$datasource",
1681 "decimals": null,
1682 "editable": true,
1683 "error": false,
1684 "fill": 2,
1685 "grid": {},
1686 "gridPos": {
1687 "h": 8,
1688 "w": 12,
1689 "x": 12,
1690 "y": 56
1691 },
1692 "id": 52,
1693 "legend": {
1694 "alignAsTable": true,
1695 "avg": false,
1696 "current": false,
1697 "hideEmpty": false,
1698 "hideZero": true,
1699 "max": false,
1700 "min": false,
1701 "rightSide": false,
1702 "show": true,
1703 "total": false,
1704 "values": false
1705 },
1706 "lines": true,
1707 "linewidth": 1,
1708 "links": [],
1709 "nullPointMode": "null",
1710 "percentage": false,
1711 "pointradius": 5,
1712 "points": false,
1713 "renderer": "flot",
1714 "seriesOverrides": [],
1715 "spaceLength": 10,
1716 "stack": false,
1717 "steppedLine": false,
1718 "targets": [
1719 {
1720 "expr": "(rate(synapse_http_server_response_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])+rate(synapse_http_server_response_ru_stime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])) / rate(synapse_http_server_response_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
1721 "format": "time_series",
1722 "interval": "",
1723 "intervalFactor": 2,
1724 "legendFormat": "{{job}}-{{index}} {{method}} {{servlet}} {{tag}}",
1725 "refId": "A",
1726 "step": 20
1727 }
1728 ],
1729 "thresholds": [
1730 {
1731 "colorMode": "custom",
1732 "fill": true,
1733 "fillColor": "rgba(216, 200, 27, 0.27)",
1734 "op": "gt",
1735 "value": 100
1736 },
1737 {
1738 "colorMode": "custom",
1739 "fill": true,
1740 "fillColor": "rgba(234, 112, 112, 0.22)",
1741 "op": "gt",
1742 "value": 250
1743 }
1744 ],
1745 "timeFrom": null,
1746 "timeShift": null,
1747 "title": "Average CPU Usage by Endpoint",
1748 "tooltip": {
1749 "shared": false,
1750 "sort": 0,
1751 "value_type": "individual"
1752 },
1753 "transparent": false,
1754 "type": "graph",
1755 "xaxis": {
1756 "buckets": null,
1757 "mode": "time",
1758 "name": null,
1759 "show": true,
1760 "values": []
1761 },
1762 "yaxes": [
1763 {
1764 "format": "s",
1765 "logBase": 1,
1766 "max": null,
1767 "min": null,
1768 "show": true
1769 },
1770 {
1771 "format": "short",
1772 "logBase": 1,
1773 "max": null,
1774 "min": null,
1775 "show": true
1776 }
1777 ],
1778 "yaxis": {
1779 "align": false,
1780 "alignLevel": null
1781 }
1782 },
1783 {
1784 "aliasColors": {},
1785 "bars": false,
1786 "dashLength": 10,
1787 "dashes": false,
1788 "datasource": "$datasource",
1789 "editable": true,
1790 "error": false,
1791 "fill": 1,
1792 "grid": {},
1793 "gridPos": {
1794 "h": 8,
1795 "w": 12,
1796 "x": 0,
1797 "y": 64
1798 },
1799 "id": 7,
1800 "legend": {
1801 "alignAsTable": true,
1802 "avg": false,
1803 "current": false,
1804 "hideEmpty": true,
1805 "hideZero": true,
1806 "max": false,
1807 "min": false,
1808 "show": true,
1809 "total": false,
1810 "values": false
1811 },
1812 "lines": true,
1813 "linewidth": 1,
1814 "links": [],
1815 "nullPointMode": "null",
1816 "percentage": false,
1817 "pointradius": 5,
1818 "points": false,
1819 "renderer": "flot",
1820 "seriesOverrides": [],
1821 "spaceLength": 10,
1822 "stack": false,
1823 "steppedLine": false,
1824 "targets": [
1825 {
1826 "expr": "rate(synapse_http_server_response_db_txn_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
1827 "format": "time_series",
1828 "interval": "",
1829 "intervalFactor": 2,
1830 "legendFormat": "{{job}}-{{index}} {{method}} {{servlet}} {{tag}}",
1831 "refId": "A",
1832 "step": 20
1833 }
1834 ],
1835 "thresholds": [],
1836 "timeFrom": null,
1837 "timeShift": null,
1838 "title": "DB Usage by endpoint",
1839 "tooltip": {
1840 "shared": false,
1841 "sort": 0,
1842 "value_type": "cumulative"
1843 },
1844 "type": "graph",
1845 "xaxis": {
1846 "buckets": null,
1847 "mode": "time",
1848 "name": null,
1849 "show": true,
1850 "values": []
1851 },
1852 "yaxes": [
1853 {
1854 "format": "percentunit",
1855 "logBase": 1,
1856 "max": null,
1857 "min": null,
1858 "show": true
1859 },
1860 {
1861 "format": "short",
1862 "logBase": 1,
1863 "max": null,
1864 "min": null,
1865 "show": true
1866 }
1867 ],
1868 "yaxis": {
1869 "align": false,
1870 "alignLevel": null
1871 }
1872 },
1873 {
1874 "aliasColors": {},
1875 "bars": false,
1876 "dashLength": 10,
1877 "dashes": false,
1878 "datasource": "$datasource",
1879 "decimals": null,
1880 "editable": true,
1881 "error": false,
1882 "fill": 2,
1883 "grid": {},
1884 "gridPos": {
1885 "h": 8,
1886 "w": 12,
1887 "x": 12,
1888 "y": 64
1889 },
1890 "id": 47,
1891 "legend": {
1892 "alignAsTable": true,
1893 "avg": true,
1894 "current": false,
1895 "hideEmpty": false,
1896 "hideZero": true,
1897 "max": true,
1898 "min": false,
1899 "rightSide": false,
1900 "show": true,
1901 "total": false,
1902 "values": true
1903 },
1904 "lines": true,
1905 "linewidth": 1,
1906 "links": [],
1907 "nullPointMode": "null",
1908 "percentage": false,
1909 "pointradius": 5,
1910 "points": false,
1911 "renderer": "flot",
1912 "seriesOverrides": [],
1913 "spaceLength": 10,
1914 "stack": false,
1915 "steppedLine": false,
1916 "targets": [
1917 {
1918 "expr": "rate(synapse_http_server_response_time_seconds_sum{instance=\"$instance\",job=~\"$job\",index=~\"$index\",tag!=\"incremental_sync\"}[$bucket_size])/rate(synapse_http_server_response_time_seconds_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\",tag!=\"incremental_sync\"}[$bucket_size])",
1919 "format": "time_series",
1920 "interval": "",
1921 "intervalFactor": 2,
1922 "legendFormat": "{{job}}-{{index}} {{method}} {{servlet}} {{tag}}",
1923 "refId": "A",
1924 "step": 20
1925 }
1926 ],
1927 "thresholds": [],
1928 "timeFrom": null,
1929 "timeShift": null,
1930 "title": "Non-sync avg response time",
1931 "tooltip": {
1932 "shared": false,
1933 "sort": 0,
1934 "value_type": "individual"
1935 },
1936 "transparent": false,
1937 "type": "graph",
1938 "xaxis": {
1939 "buckets": null,
1940 "mode": "time",
1941 "name": null,
1942 "show": true,
1943 "values": []
1944 },
1945 "yaxes": [
1946 {
1947 "format": "s",
1948 "logBase": 1,
1949 "max": null,
1950 "min": null,
1951 "show": true
1952 },
1953 {
1954 "format": "short",
1955 "logBase": 1,
1956 "max": null,
1957 "min": null,
1958 "show": false
1959 }
1960 ],
1961 "yaxis": {
1962 "align": false,
1963 "alignLevel": null
1964 }
1965 },
1966 {
1967 "aliasColors": {},
1968 "bars": false,
1969 "dashLength": 10,
1970 "dashes": false,
1971 "datasource": "${DS_PROMETHEUS}",
1972 "fill": 1,
1973 "gridPos": {
1974 "h": 9,
1975 "w": 12,
1976 "x": 0,
1977 "y": 72
1978 },
1979 "id": 103,
1980 "legend": {
1981 "avg": false,
1982 "current": false,
1983 "max": false,
1984 "min": false,
1985 "show": true,
1986 "total": false,
1987 "values": false
1988 },
1989 "lines": true,
1990 "linewidth": 1,
1991 "links": [],
1992 "nullPointMode": "null",
1993 "percentage": false,
1994 "pointradius": 5,
1995 "points": false,
1996 "renderer": "flot",
1997 "seriesOverrides": [],
1998 "spaceLength": 10,
1999 "stack": false,
2000 "steppedLine": false,
2001 "targets": [
2002 {
2003 "expr": "topk(10,synapse_http_server_in_flight_requests_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"})",
2004 "format": "time_series",
2005 "interval": "",
2006 "intervalFactor": 1,
2007 "legendFormat": "{{job}}-{{index}} {{method}} {{servlet}}",
2008 "refId": "A"
2009 }
2010 ],
2011 "thresholds": [],
2012 "timeFrom": null,
2013 "timeShift": null,
2014 "title": "Requests in flight",
2015 "tooltip": {
2016 "shared": false,
2017 "sort": 0,
2018 "value_type": "individual"
2019 },
2020 "type": "graph",
2021 "xaxis": {
2022 "buckets": null,
2023 "mode": "time",
2024 "name": null,
2025 "show": true,
2026 "values": []
2027 },
2028 "yaxes": [
2029 {
2030 "format": "short",
2031 "label": null,
2032 "logBase": 1,
2033 "max": null,
2034 "min": null,
2035 "show": true
2036 },
2037 {
2038 "format": "short",
2039 "label": null,
2040 "logBase": 1,
2041 "max": null,
2042 "min": null,
2043 "show": true
2044 }
2045 ],
2046 "yaxis": {
2047 "align": false,
2048 "alignLevel": null
2049 }
2050 }
2051 ],
2052 "repeat": null,
2053 "title": "Requests",
2054 "type": "row"
2055 },
2056 {
2057 "collapsed": true,
2058 "gridPos": {
2059 "h": 1,
2060 "w": 24,
2061 "x": 0,
2062 "y": 20
2063 },
2064 "id": 97,
2065 "panels": [
2066 {
2067 "aliasColors": {},
2068 "bars": false,
2069 "dashLength": 10,
2070 "dashes": false,
2071 "datasource": "${DS_PROMETHEUS}",
2072 "fill": 1,
2073 "gridPos": {
2074 "h": 9,
2075 "w": 12,
2076 "x": 0,
2077 "y": 23
2078 },
2079 "id": 99,
2080 "legend": {
2081 "avg": false,
2082 "current": false,
2083 "max": false,
2084 "min": false,
2085 "show": true,
2086 "total": false,
2087 "values": false
2088 },
2089 "lines": true,
2090 "linewidth": 1,
2091 "links": [],
2092 "nullPointMode": "null",
2093 "percentage": false,
2094 "pointradius": 5,
2095 "points": false,
2096 "renderer": "flot",
2097 "seriesOverrides": [],
2098 "spaceLength": 10,
2099 "stack": false,
2100 "steppedLine": false,
2101 "targets": [
2102 {
2103 "expr": "rate(synapse_background_process_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])+rate(synapse_background_process_ru_stime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
2104 "format": "time_series",
2105 "interval": "",
2106 "intervalFactor": 1,
2107 "legendFormat": "{{job}}-{{index}} {{name}}",
2108 "refId": "A"
2109 }
2110 ],
2111 "thresholds": [],
2112 "timeFrom": null,
2113 "timeShift": null,
2114 "title": "CPU usage by background jobs",
2115 "tooltip": {
2116 "shared": true,
2117 "sort": 0,
2118 "value_type": "individual"
2119 },
2120 "type": "graph",
2121 "xaxis": {
2122 "buckets": null,
2123 "mode": "time",
2124 "name": null,
2125 "show": true,
2126 "values": []
2127 },
2128 "yaxes": [
2129 {
2130 "format": "percentunit",
2131 "label": null,
2132 "logBase": 1,
2133 "max": null,
2134 "min": null,
2135 "show": true
2136 },
2137 {
2138 "format": "short",
2139 "label": null,
2140 "logBase": 1,
2141 "max": null,
2142 "min": null,
2143 "show": true
2144 }
2145 ],
2146 "yaxis": {
2147 "align": false,
2148 "alignLevel": null
2149 }
2150 },
2151 {
2152 "aliasColors": {},
2153 "bars": false,
2154 "dashLength": 10,
2155 "dashes": false,
2156 "datasource": "${DS_PROMETHEUS}",
2157 "fill": 1,
2158 "gridPos": {
2159 "h": 9,
2160 "w": 12,
2161 "x": 12,
2162 "y": 23
2163 },
2164 "id": 101,
2165 "legend": {
2166 "avg": false,
2167 "current": false,
2168 "max": false,
2169 "min": false,
2170 "show": true,
2171 "total": false,
2172 "values": false
2173 },
2174 "lines": true,
2175 "linewidth": 1,
2176 "links": [],
2177 "nullPointMode": "null",
2178 "percentage": false,
2179 "pointradius": 5,
2180 "points": false,
2181 "renderer": "flot",
2182 "seriesOverrides": [],
2183 "spaceLength": 10,
2184 "stack": false,
2185 "steppedLine": false,
2186 "targets": [
2187 {
2188 "expr": "rate(synapse_background_process_db_txn_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
2189 "format": "time_series",
2190 "intervalFactor": 1,
2191 "legendFormat": "{{job}}-{{index}} {{name}}",
2192 "refId": "A"
2193 }
2194 ],
2195 "thresholds": [],
2196 "timeFrom": null,
2197 "timeShift": null,
2198 "title": "DB usage by background jobs",
2199 "tooltip": {
2200 "shared": true,
2201 "sort": 0,
2202 "value_type": "individual"
2203 },
2204 "type": "graph",
2205 "xaxis": {
2206 "buckets": null,
2207 "mode": "time",
2208 "name": null,
2209 "show": true,
2210 "values": []
2211 },
2212 "yaxes": [
2213 {
2214 "format": "percentunit",
2215 "label": null,
2216 "logBase": 1,
2217 "max": null,
2218 "min": null,
2219 "show": true
2220 },
2221 {
2222 "format": "short",
2223 "label": null,
2224 "logBase": 1,
2225 "max": null,
2226 "min": null,
2227 "show": true
2228 }
2229 ],
2230 "yaxis": {
2231 "align": false,
2232 "alignLevel": null
2233 }
2234 }
2235 ],
2236 "title": "Background jobs",
2237 "type": "row"
2238 },
2239 {
2240 "collapsed": true,
2241 "gridPos": {
2242 "h": 1,
2243 "w": 24,
2244 "x": 0,
2245 "y": 21
2246 },
2247 "id": 81,
2248 "panels": [
2249 {
2250 "aliasColors": {},
2251 "bars": false,
2252 "dashLength": 10,
2253 "dashes": false,
2254 "datasource": "${DS_PROMETHEUS}",
2255 "fill": 1,
2256 "gridPos": {
2257 "h": 9,
2258 "w": 12,
2259 "x": 0,
2260 "y": 25
2261 },
2262 "id": 79,
2263 "legend": {
2264 "avg": false,
2265 "current": false,
2266 "max": false,
2267 "min": false,
2268 "show": true,
2269 "total": false,
2270 "values": false
2271 },
2272 "lines": true,
2273 "linewidth": 1,
2274 "links": [],
2275 "nullPointMode": "null",
2276 "percentage": false,
2277 "pointradius": 5,
2278 "points": false,
2279 "renderer": "flot",
2280 "seriesOverrides": [],
2281 "spaceLength": 10,
2282 "stack": false,
2283 "steppedLine": false,
2284 "targets": [
2285 {
2286 "expr": "rate(synapse_federation_client_sent_transactions{instance=\"$instance\", job=~\"$job\", index=~\"$index\"}[$bucket_size])",
2287 "format": "time_series",
2288 "intervalFactor": 1,
2289 "legendFormat": "txn rate",
2290 "refId": "A"
2291 }
2292 ],
2293 "thresholds": [],
2294 "timeFrom": null,
2295 "timeShift": null,
2296 "title": "Outgoing federation transaction rate",
2297 "tooltip": {
2298 "shared": true,
2299 "sort": 0,
2300 "value_type": "individual"
2301 },
2302 "type": "graph",
2303 "xaxis": {
2304 "buckets": null,
2305 "mode": "time",
2306 "name": null,
2307 "show": true,
2308 "values": []
2309 },
2310 "yaxes": [
2311 {
2312 "format": "hertz",
2313 "label": null,
2314 "logBase": 1,
2315 "max": null,
2316 "min": null,
2317 "show": true
2318 },
2319 {
2320 "format": "short",
2321 "label": null,
2322 "logBase": 1,
2323 "max": null,
2324 "min": null,
2325 "show": true
2326 }
2327 ],
2328 "yaxis": {
2329 "align": false,
2330 "alignLevel": null
2331 }
2332 },
2333 {
2334 "aliasColors": {},
2335 "bars": false,
2336 "dashLength": 10,
2337 "dashes": false,
2338 "datasource": "${DS_PROMETHEUS}",
2339 "fill": 1,
2340 "gridPos": {
2341 "h": 9,
2342 "w": 12,
2343 "x": 12,
2344 "y": 25
2345 },
2346 "id": 83,
2347 "legend": {
2348 "avg": false,
2349 "current": false,
2350 "max": false,
2351 "min": false,
2352 "show": true,
2353 "total": false,
2354 "values": false
2355 },
2356 "lines": true,
2357 "linewidth": 1,
2358 "links": [],
2359 "nullPointMode": "null",
2360 "percentage": false,
2361 "pointradius": 5,
2362 "points": false,
2363 "renderer": "flot",
2364 "seriesOverrides": [],
2365 "spaceLength": 10,
2366 "stack": false,
2367 "steppedLine": false,
2368 "targets": [
2369 {
2370 "expr": "rate(synapse_federation_server_received_pdus{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
2371 "format": "time_series",
2372 "intervalFactor": 1,
2373 "legendFormat": "pdus",
2374 "refId": "A"
2375 },
2376 {
2377 "expr": "rate(synapse_federation_server_received_edus{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
2378 "format": "time_series",
2379 "intervalFactor": 1,
2380 "legendFormat": "edus",
2381 "refId": "B"
2382 }
2383 ],
2384 "thresholds": [],
2385 "timeFrom": null,
2386 "timeShift": null,
2387 "title": "Incoming PDU/EDU rate",
2388 "tooltip": {
2389 "shared": true,
2390 "sort": 0,
2391 "value_type": "individual"
2392 },
2393 "type": "graph",
2394 "xaxis": {
2395 "buckets": null,
2396 "mode": "time",
2397 "name": null,
2398 "show": true,
2399 "values": []
2400 },
2401 "yaxes": [
2402 {
2403 "format": "hertz",
2404 "label": null,
2405 "logBase": 1,
2406 "max": null,
2407 "min": null,
2408 "show": true
2409 },
2410 {
2411 "format": "short",
2412 "label": null,
2413 "logBase": 1,
2414 "max": null,
2415 "min": null,
2416 "show": true
2417 }
2418 ],
2419 "yaxis": {
2420 "align": false,
2421 "alignLevel": null
2422 }
2423 }
2424 ],
2425 "title": "Federation",
2426 "type": "row"
2427 },
2428 {
2429 "collapsed": true,
2430 "gridPos": {
2431 "h": 1,
2432 "w": 24,
2433 "x": 0,
2434 "y": 22
2435 },
2436 "id": 60,
2437 "panels": [
2438 {
2439 "aliasColors": {},
2440 "bars": false,
2441 "dashLength": 10,
2442 "dashes": false,
2443 "datasource": "$datasource",
2444 "fill": 1,
2445 "gridPos": {
2446 "h": 7,
2447 "w": 12,
2448 "x": 0,
2449 "y": 23
2450 },
2451 "id": 51,
2452 "legend": {
2453 "avg": false,
2454 "current": false,
2455 "max": false,
2456 "min": false,
2457 "show": true,
2458 "total": false,
2459 "values": false
2460 },
2461 "lines": true,
2462 "linewidth": 1,
2463 "links": [],
2464 "nullPointMode": "null",
2465 "percentage": false,
2466 "pointradius": 5,
2467 "points": false,
2468 "renderer": "flot",
2469 "seriesOverrides": [],
2470 "spaceLength": 10,
2471 "stack": false,
2472 "steppedLine": false,
2473 "targets": [
2474 {
2475 "expr": "rate(synapse_push_httppusher_http_pushes_processed{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
2476 "format": "time_series",
2477 "interval": "",
2478 "intervalFactor": 2,
2479 "legendFormat": "processed {{job}}",
2480 "refId": "A",
2481 "step": 20
2482 },
2483 {
2484 "expr": "rate(synapse_push_httppusher_http_pushes_failed{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
2485 "format": "time_series",
2486 "intervalFactor": 2,
2487 "legendFormat": "failed {{job}}",
2488 "refId": "B",
2489 "step": 20
2490 }
2491 ],
2492 "thresholds": [],
2493 "timeFrom": null,
2494 "timeShift": null,
2495 "title": "HTTP Push rate",
2496 "tooltip": {
2497 "shared": true,
2498 "sort": 0,
2499 "value_type": "individual"
2500 },
2501 "type": "graph",
2502 "xaxis": {
2503 "buckets": null,
2504 "mode": "time",
2505 "name": null,
2506 "show": true,
2507 "values": []
2508 },
2509 "yaxes": [
2510 {
2511 "format": "hertz",
2512 "label": null,
2513 "logBase": 1,
2514 "max": null,
2515 "min": null,
2516 "show": true
2517 },
2518 {
2519 "format": "short",
2520 "label": null,
2521 "logBase": 1,
2522 "max": null,
2523 "min": null,
2524 "show": true
2525 }
2526 ],
2527 "yaxis": {
2528 "align": false,
2529 "alignLevel": null
2530 }
2531 }
2532 ],
2533 "repeat": null,
2534 "title": "Pushes",
2535 "type": "row"
2536 },
2537 {
2538 "collapsed": true,
2539 "gridPos": {
2540 "h": 1,
2541 "w": 24,
2542 "x": 0,
2543 "y": 23
2544 },
2545 "id": 58,
2546 "panels": [
2547 {
2548 "aliasColors": {},
2549 "bars": false,
2550 "dashLength": 10,
2551 "dashes": false,
2552 "datasource": "$datasource",
2553 "editable": true,
2554 "error": false,
2555 "fill": 0,
2556 "grid": {},
2557 "gridPos": {
2558 "h": 7,
2559 "w": 12,
2560 "x": 0,
2561 "y": 25
2562 },
2563 "id": 10,
2564 "legend": {
2565 "avg": false,
2566 "current": false,
2567 "hideEmpty": true,
2568 "hideZero": true,
2569 "max": false,
2570 "min": false,
2571 "show": true,
2572 "total": false,
2573 "values": false
2574 },
2575 "lines": true,
2576 "linewidth": 2,
2577 "links": [],
2578 "nullPointMode": "null",
2579 "percentage": false,
2580 "pointradius": 5,
2581 "points": false,
2582 "renderer": "flot",
2583 "seriesOverrides": [],
2584 "spaceLength": 10,
2585 "stack": false,
2586 "steppedLine": false,
2587 "targets": [
2588 {
2589 "expr": "topk(10, rate(synapse_storage_transaction_time_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
2590 "format": "time_series",
2591 "interval": "",
2592 "intervalFactor": 2,
2593 "legendFormat": "{{job}}-{{index}} {{desc}}",
2594 "refId": "A",
2595 "step": 20
2596 }
2597 ],
2598 "thresholds": [],
2599 "timeFrom": null,
2600 "timeShift": null,
2601 "title": "Top DB transactions by txn rate",
2602 "tooltip": {
2603 "shared": false,
2604 "sort": 0,
2605 "value_type": "cumulative"
2606 },
2607 "type": "graph",
2608 "xaxis": {
2609 "buckets": null,
2610 "mode": "time",
2611 "name": null,
2612 "show": true,
2613 "values": []
2614 },
2615 "yaxes": [
2616 {
2617 "format": "hertz",
2618 "logBase": 1,
2619 "max": null,
2620 "min": 0,
2621 "show": true
2622 },
2623 {
2624 "format": "short",
2625 "logBase": 1,
2626 "max": null,
2627 "min": null,
2628 "show": true
2629 }
2630 ],
2631 "yaxis": {
2632 "align": false,
2633 "alignLevel": null
2634 }
2635 },
2636 {
2637 "aliasColors": {},
2638 "bars": false,
2639 "dashLength": 10,
2640 "dashes": false,
2641 "datasource": "$datasource",
2642 "editable": true,
2643 "error": false,
2644 "fill": 1,
2645 "grid": {},
2646 "gridPos": {
2647 "h": 7,
2648 "w": 12,
2649 "x": 12,
2650 "y": 25
2651 },
2652 "id": 11,
2653 "legend": {
2654 "avg": false,
2655 "current": false,
2656 "hideEmpty": true,
2657 "hideZero": true,
2658 "max": false,
2659 "min": false,
2660 "show": true,
2661 "total": false,
2662 "values": false
2663 },
2664 "lines": true,
2665 "linewidth": 1,
2666 "links": [],
2667 "nullPointMode": "null",
2668 "percentage": false,
2669 "pointradius": 5,
2670 "points": false,
2671 "renderer": "flot",
2672 "seriesOverrides": [],
2673 "spaceLength": 10,
2674 "stack": false,
2675 "steppedLine": true,
2676 "targets": [
2677 {
2678 "expr": "topk(5, rate(synapse_storage_transaction_time_sum{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
2679 "format": "time_series",
2680 "instant": false,
2681 "interval": "",
2682 "intervalFactor": 1,
2683 "legendFormat": "{{job}}-{{index}} {{desc}}",
2684 "refId": "A",
2685 "step": 20
2686 }
2687 ],
2688 "thresholds": [],
2689 "timeFrom": null,
2690 "timeShift": null,
2691 "title": "Top DB transactions by total txn time",
2692 "tooltip": {
2693 "shared": false,
2694 "sort": 0,
2695 "value_type": "cumulative"
2696 },
2697 "type": "graph",
2698 "xaxis": {
2699 "buckets": null,
2700 "mode": "time",
2701 "name": null,
2702 "show": true,
2703 "values": []
2704 },
2705 "yaxes": [
2706 {
2707 "format": "percentunit",
2708 "logBase": 1,
2709 "max": null,
2710 "min": null,
2711 "show": true
2712 },
2713 {
2714 "format": "short",
2715 "logBase": 1,
2716 "max": null,
2717 "min": null,
2718 "show": true
2719 }
2720 ],
2721 "yaxis": {
2722 "align": false,
2723 "alignLevel": null
2724 }
2725 }
2726 ],
2727 "repeat": null,
2728 "title": "Database",
2729 "type": "row"
2730 },
2731 {
2732 "collapsed": true,
2733 "gridPos": {
2734 "h": 1,
2735 "w": 24,
2736 "x": 0,
2737 "y": 24
2738 },
2739 "id": 59,
2740 "panels": [
2741 {
2742 "aliasColors": {},
2743 "bars": false,
2744 "dashLength": 10,
2745 "dashes": false,
2746 "datasource": "$datasource",
2747 "editable": true,
2748 "error": false,
2749 "fill": 1,
2750 "grid": {},
2751 "gridPos": {
2752 "h": 13,
2753 "w": 12,
2754 "x": 0,
2755 "y": 17
2756 },
2757 "id": 12,
2758 "legend": {
2759 "alignAsTable": true,
2760 "avg": false,
2761 "current": false,
2762 "max": false,
2763 "min": false,
2764 "show": true,
2765 "total": false,
2766 "values": false
2767 },
2768 "lines": true,
2769 "linewidth": 2,
2770 "links": [],
2771 "nullPointMode": "null",
2772 "percentage": false,
2773 "pointradius": 5,
2774 "points": false,
2775 "renderer": "flot",
2776 "seriesOverrides": [],
2777 "spaceLength": 10,
2778 "stack": false,
2779 "steppedLine": false,
2780 "targets": [
2781 {
2782 "expr": "rate(synapse_util_metrics_block_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\",block_name!=\"wrapped_request_handler\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds[$bucket_size])",
2783 "format": "time_series",
2784 "interval": "",
2785 "intervalFactor": 2,
2786 "legendFormat": "{{job}}-{{index}} {{block_name}}",
2787 "refId": "A",
2788 "step": 20
2789 }
2790 ],
2791 "thresholds": [],
2792 "timeFrom": null,
2793 "timeShift": null,
2794 "title": "Total CPU Usage by Block",
2795 "tooltip": {
2796 "shared": false,
2797 "sort": 0,
2798 "value_type": "cumulative"
2799 },
2800 "type": "graph",
2801 "xaxis": {
2802 "buckets": null,
2803 "mode": "time",
2804 "name": null,
2805 "show": true,
2806 "values": []
2807 },
2808 "yaxes": [
2809 {
2810 "format": "percentunit",
2811 "logBase": 1,
2812 "max": null,
2813 "min": null,
2814 "show": true
2815 },
2816 {
2817 "format": "short",
2818 "logBase": 1,
2819 "max": null,
2820 "min": null,
2821 "show": true
2822 }
2823 ],
2824 "yaxis": {
2825 "align": false,
2826 "alignLevel": null
2827 }
2828 },
2829 {
2830 "aliasColors": {},
2831 "bars": false,
2832 "dashLength": 10,
2833 "dashes": false,
2834 "datasource": "$datasource",
2835 "editable": true,
2836 "error": false,
2837 "fill": 1,
2838 "grid": {},
2839 "gridPos": {
2840 "h": 13,
2841 "w": 12,
2842 "x": 12,
2843 "y": 17
2844 },
2845 "id": 26,
2846 "legend": {
2847 "alignAsTable": true,
2848 "avg": false,
2849 "current": false,
2850 "max": false,
2851 "min": false,
2852 "show": true,
2853 "total": false,
2854 "values": false
2855 },
2856 "lines": true,
2857 "linewidth": 2,
2858 "links": [],
2859 "nullPointMode": "null",
2860 "percentage": false,
2861 "pointradius": 5,
2862 "points": false,
2863 "renderer": "flot",
2864 "seriesOverrides": [],
2865 "spaceLength": 10,
2866 "stack": false,
2867 "steppedLine": false,
2868 "targets": [
2869 {
2870 "expr": "(rate(synapse_util_metrics_block_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds[$bucket_size])) / rate(synapse_util_metrics_block_count[$bucket_size])",
2871 "format": "time_series",
2872 "interval": "",
2873 "intervalFactor": 2,
2874 "legendFormat": "{{job}}-{{index}} {{block_name}}",
2875 "refId": "A",
2876 "step": 20
2877 }
2878 ],
2879 "thresholds": [],
2880 "timeFrom": null,
2881 "timeShift": null,
2882 "title": "Average CPU Time per Block",
2883 "tooltip": {
2884 "shared": false,
2885 "sort": 0,
2886 "value_type": "cumulative"
2887 },
2888 "type": "graph",
2889 "xaxis": {
2890 "buckets": null,
2891 "mode": "time",
2892 "name": null,
2893 "show": true,
2894 "values": []
2895 },
2896 "yaxes": [
2897 {
2898 "format": "ms",
2899 "logBase": 1,
2900 "max": null,
2901 "min": null,
2902 "show": true
2903 },
2904 {
2905 "format": "short",
2906 "logBase": 1,
2907 "max": null,
2908 "min": null,
2909 "show": true
2910 }
2911 ],
2912 "yaxis": {
2913 "align": false,
2914 "alignLevel": null
2915 }
2916 },
2917 {
2918 "aliasColors": {},
2919 "bars": false,
2920 "dashLength": 10,
2921 "dashes": false,
2922 "datasource": "$datasource",
2923 "editable": true,
2924 "error": false,
2925 "fill": 1,
2926 "grid": {},
2927 "gridPos": {
2928 "h": 13,
2929 "w": 12,
2930 "x": 0,
2931 "y": 30
2932 },
2933 "id": 13,
2934 "legend": {
2935 "alignAsTable": true,
2936 "avg": false,
2937 "current": false,
2938 "max": false,
2939 "min": false,
2940 "show": true,
2941 "total": false,
2942 "values": false
2943 },
2944 "lines": true,
2945 "linewidth": 2,
2946 "links": [],
2947 "nullPointMode": "null",
2948 "percentage": false,
2949 "pointradius": 5,
2950 "points": false,
2951 "renderer": "flot",
2952 "seriesOverrides": [],
2953 "spaceLength": 10,
2954 "stack": false,
2955 "steppedLine": false,
2956 "targets": [
2957 {
2958 "expr": "rate(synapse_util_metrics_block_db_txn_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\",block_name!=\"wrapped_request_handler\"}[$bucket_size])",
2959 "format": "time_series",
2960 "interval": "",
2961 "intervalFactor": 2,
2962 "legendFormat": "{{job}} {{block_name}}",
2963 "refId": "A",
2964 "step": 20
2965 }
2966 ],
2967 "thresholds": [],
2968 "timeFrom": null,
2969 "timeShift": null,
2970 "title": "Total DB Usage by Block",
2971 "tooltip": {
2972 "shared": false,
2973 "sort": 0,
2974 "value_type": "cumulative"
2975 },
2976 "type": "graph",
2977 "xaxis": {
2978 "buckets": null,
2979 "mode": "time",
2980 "name": null,
2981 "show": true,
2982 "values": []
2983 },
2984 "yaxes": [
2985 {
2986 "format": "percentunit",
2987 "logBase": 1,
2988 "max": null,
2989 "min": 0,
2990 "show": true
2991 },
2992 {
2993 "format": "short",
2994 "logBase": 1,
2995 "max": null,
2996 "min": null,
2997 "show": true
2998 }
2999 ],
3000 "yaxis": {
3001 "align": false,
3002 "alignLevel": null
3003 }
3004 },
3005 {
3006 "aliasColors": {},
3007 "bars": false,
3008 "dashLength": 10,
3009 "dashes": false,
3010 "datasource": "$datasource",
3011 "editable": true,
3012 "error": false,
3013 "fill": 1,
3014 "grid": {},
3015 "gridPos": {
3016 "h": 13,
3017 "w": 12,
3018 "x": 12,
3019 "y": 30
3020 },
3021 "id": 27,
3022 "legend": {
3023 "alignAsTable": true,
3024 "avg": false,
3025 "current": false,
3026 "max": false,
3027 "min": false,
3028 "show": true,
3029 "total": false,
3030 "values": false
3031 },
3032 "lines": true,
3033 "linewidth": 2,
3034 "links": [],
3035 "nullPointMode": "null",
3036 "percentage": false,
3037 "pointradius": 5,
3038 "points": false,
3039 "renderer": "flot",
3040 "seriesOverrides": [],
3041 "spaceLength": 10,
3042 "stack": false,
3043 "steppedLine": false,
3044 "targets": [
3045 {
3046 "expr": "rate(synapse_util_metrics_block_db_txn_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) / rate(synapse_util_metrics_block_db_txn_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
3047 "format": "time_series",
3048 "interval": "",
3049 "intervalFactor": 2,
3050 "legendFormat": "{{job}}-{{index}} {{block_name}}",
3051 "refId": "A",
3052 "step": 20
3053 }
3054 ],
3055 "thresholds": [],
3056 "timeFrom": null,
3057 "timeShift": null,
3058 "title": "Average Database Time per Block",
3059 "tooltip": {
3060 "shared": false,
3061 "sort": 0,
3062 "value_type": "cumulative"
3063 },
3064 "type": "graph",
3065 "xaxis": {
3066 "buckets": null,
3067 "mode": "time",
3068 "name": null,
3069 "show": true,
3070 "values": []
3071 },
3072 "yaxes": [
3073 {
3074 "format": "s",
3075 "logBase": 1,
3076 "max": null,
3077 "min": null,
3078 "show": true
3079 },
3080 {
3081 "format": "short",
3082 "logBase": 1,
3083 "max": null,
3084 "min": null,
3085 "show": true
3086 }
3087 ],
3088 "yaxis": {
3089 "align": false,
3090 "alignLevel": null
3091 }
3092 },
3093 {
3094 "aliasColors": {},
3095 "bars": false,
3096 "dashLength": 10,
3097 "dashes": false,
3098 "datasource": "$datasource",
3099 "editable": true,
3100 "error": false,
3101 "fill": 1,
3102 "grid": {},
3103 "gridPos": {
3104 "h": 13,
3105 "w": 12,
3106 "x": 0,
3107 "y": 43
3108 },
3109 "id": 28,
3110 "legend": {
3111 "avg": false,
3112 "current": false,
3113 "max": false,
3114 "min": false,
3115 "show": false,
3116 "total": false,
3117 "values": false
3118 },
3119 "lines": true,
3120 "linewidth": 2,
3121 "links": [],
3122 "nullPointMode": "null",
3123 "percentage": false,
3124 "pointradius": 5,
3125 "points": false,
3126 "renderer": "flot",
3127 "seriesOverrides": [],
3128 "spaceLength": 10,
3129 "stack": false,
3130 "steppedLine": false,
3131 "targets": [
3132 {
3133 "expr": "rate(synapse_util_metrics_block_db_txn_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) / rate(synapse_util_metrics_block_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
3134 "format": "time_series",
3135 "interval": "",
3136 "intervalFactor": 2,
3137 "legendFormat": "{{job}}-{{index}} {{block_name}}",
3138 "refId": "A",
3139 "step": 20
3140 }
3141 ],
3142 "thresholds": [],
3143 "timeFrom": null,
3144 "timeShift": null,
3145 "title": "Average Transactions per Block",
3146 "tooltip": {
3147 "shared": false,
3148 "sort": 0,
3149 "value_type": "cumulative"
3150 },
3151 "type": "graph",
3152 "xaxis": {
3153 "buckets": null,
3154 "mode": "time",
3155 "name": null,
3156 "show": true,
3157 "values": []
3158 },
3159 "yaxes": [
3160 {
3161 "format": "none",
3162 "logBase": 1,
3163 "max": null,
3164 "min": null,
3165 "show": true
3166 },
3167 {
3168 "format": "short",
3169 "logBase": 1,
3170 "max": null,
3171 "min": null,
3172 "show": true
3173 }
3174 ],
3175 "yaxis": {
3176 "align": false,
3177 "alignLevel": null
3178 }
3179 },
3180 {
3181 "aliasColors": {},
3182 "bars": false,
3183 "dashLength": 10,
3184 "dashes": false,
3185 "datasource": "$datasource",
3186 "editable": true,
3187 "error": false,
3188 "fill": 1,
3189 "grid": {},
3190 "gridPos": {
3191 "h": 13,
3192 "w": 12,
3193 "x": 12,
3194 "y": 43
3195 },
3196 "id": 25,
3197 "legend": {
3198 "avg": false,
3199 "current": false,
3200 "max": false,
3201 "min": false,
3202 "show": false,
3203 "total": false,
3204 "values": false
3205 },
3206 "lines": true,
3207 "linewidth": 2,
3208 "links": [],
3209 "nullPointMode": "null",
3210 "percentage": false,
3211 "pointradius": 5,
3212 "points": false,
3213 "renderer": "flot",
3214 "seriesOverrides": [],
3215 "spaceLength": 10,
3216 "stack": false,
3217 "steppedLine": false,
3218 "targets": [
3219 {
3220 "expr": "rate(synapse_util_metrics_block_time_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) / rate(synapse_util_metrics_block_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
3221 "format": "time_series",
3222 "interval": "",
3223 "intervalFactor": 2,
3224 "legendFormat": "{{job}}-{{index}} {{block_name}}",
3225 "refId": "A",
3226 "step": 20
3227 }
3228 ],
3229 "thresholds": [],
3230 "timeFrom": null,
3231 "timeShift": null,
3232 "title": "Average Wallclock Time per Block",
3233 "tooltip": {
3234 "shared": false,
3235 "sort": 0,
3236 "value_type": "cumulative"
3237 },
3238 "type": "graph",
3239 "xaxis": {
3240 "buckets": null,
3241 "mode": "time",
3242 "name": null,
3243 "show": true,
3244 "values": []
3245 },
3246 "yaxes": [
3247 {
3248 "format": "s",
3249 "logBase": 1,
3250 "max": null,
3251 "min": null,
3252 "show": true
3253 },
3254 {
3255 "format": "short",
3256 "logBase": 1,
3257 "max": null,
3258 "min": null,
3259 "show": true
3260 }
3261 ],
3262 "yaxis": {
3263 "align": false,
3264 "alignLevel": null
3265 }
3266 }
3267 ],
3268 "repeat": null,
3269 "title": "Per-block metrics",
3270 "type": "row"
3271 },
3272 {
3273 "collapsed": true,
3274 "gridPos": {
3275 "h": 1,
3276 "w": 24,
3277 "x": 0,
3278 "y": 25
3279 },
3280 "id": 61,
3281 "panels": [
3282 {
3283 "aliasColors": {},
3284 "bars": false,
3285 "dashLength": 10,
3286 "dashes": false,
3287 "datasource": "$datasource",
3288 "decimals": 2,
3289 "editable": true,
3290 "error": false,
3291 "fill": 0,
3292 "grid": {},
3293 "gridPos": {
3294 "h": 10,
3295 "w": 12,
3296 "x": 0,
3297 "y": 55
3298 },
3299 "id": 1,
3300 "legend": {
3301 "alignAsTable": true,
3302 "avg": false,
3303 "current": false,
3304 "hideEmpty": true,
3305 "hideZero": false,
3306 "max": false,
3307 "min": false,
3308 "show": true,
3309 "total": false,
3310 "values": false
3311 },
3312 "lines": true,
3313 "linewidth": 2,
3314 "links": [],
3315 "nullPointMode": "null",
3316 "percentage": false,
3317 "pointradius": 5,
3318 "points": false,
3319 "renderer": "flot",
3320 "seriesOverrides": [],
3321 "spaceLength": 10,
3322 "stack": false,
3323 "steppedLine": false,
3324 "targets": [
3325 {
3326 "expr": "rate(synapse_util_caches_cache:hits{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])/rate(synapse_util_caches_cache:total{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
3327 "format": "time_series",
3328 "intervalFactor": 2,
3329 "legendFormat": "{{name}} {{job}}-{{index}}",
3330 "refId": "A",
3331 "step": 20
3332 }
3333 ],
3334 "thresholds": [],
3335 "timeFrom": null,
3336 "timeShift": null,
3337 "title": "Cache Hit Ratio",
3338 "tooltip": {
3339 "msResolution": true,
3340 "shared": false,
3341 "sort": 0,
3342 "value_type": "cumulative"
3343 },
3344 "type": "graph",
3345 "xaxis": {
3346 "buckets": null,
3347 "mode": "time",
3348 "name": null,
3349 "show": true,
3350 "values": []
3351 },
3352 "yaxes": [
3353 {
3354 "decimals": null,
3355 "format": "percentunit",
3356 "label": "",
3357 "logBase": 1,
3358 "max": "1",
3359 "min": 0,
3360 "show": true
3361 },
3362 {
3363 "format": "short",
3364 "logBase": 1,
3365 "max": null,
3366 "min": null,
3367 "show": false
3368 }
3369 ],
3370 "yaxis": {
3371 "align": false,
3372 "alignLevel": null
3373 }
3374 },
3375 {
3376 "aliasColors": {},
3377 "bars": false,
3378 "dashLength": 10,
3379 "dashes": false,
3380 "datasource": "$datasource",
3381 "editable": true,
3382 "error": false,
3383 "fill": 1,
3384 "grid": {},
3385 "gridPos": {
3386 "h": 10,
3387 "w": 12,
3388 "x": 12,
3389 "y": 55
3390 },
3391 "id": 8,
3392 "legend": {
3393 "alignAsTable": true,
3394 "avg": false,
3395 "current": false,
3396 "hideZero": false,
3397 "max": false,
3398 "min": false,
3399 "show": true,
3400 "total": false,
3401 "values": false
3402 },
3403 "lines": true,
3404 "linewidth": 2,
3405 "links": [],
3406 "nullPointMode": "connected",
3407 "percentage": false,
3408 "pointradius": 5,
3409 "points": false,
3410 "renderer": "flot",
3411 "seriesOverrides": [],
3412 "spaceLength": 10,
3413 "stack": false,
3414 "steppedLine": false,
3415 "targets": [
3416 {
3417 "expr": "synapse_util_caches_cache:size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
3418 "format": "time_series",
3419 "hide": false,
3420 "interval": "",
3421 "intervalFactor": 2,
3422 "legendFormat": "{{name}} {{job}}-{{index}}",
3423 "refId": "A",
3424 "step": 20
3425 }
3426 ],
3427 "thresholds": [],
3428 "timeFrom": null,
3429 "timeShift": null,
3430 "title": "Cache Size",
3431 "tooltip": {
3432 "shared": false,
3433 "sort": 0,
3434 "value_type": "cumulative"
3435 },
3436 "type": "graph",
3437 "xaxis": {
3438 "buckets": null,
3439 "mode": "time",
3440 "name": null,
3441 "show": true,
3442 "values": []
3443 },
3444 "yaxes": [
3445 {
3446 "format": "short",
3447 "logBase": 1,
3448 "max": null,
3449 "min": 0,
3450 "show": true
3451 },
3452 {
3453 "format": "short",
3454 "logBase": 1,
3455 "max": null,
3456 "min": null,
3457 "show": true
3458 }
3459 ],
3460 "yaxis": {
3461 "align": false,
3462 "alignLevel": null
3463 }
3464 },
3465 {
3466 "aliasColors": {},
3467 "bars": false,
3468 "dashLength": 10,
3469 "dashes": false,
3470 "datasource": "$datasource",
3471 "editable": true,
3472 "error": false,
3473 "fill": 1,
3474 "grid": {},
3475 "gridPos": {
3476 "h": 10,
3477 "w": 12,
3478 "x": 0,
3479 "y": 65
3480 },
3481 "id": 38,
3482 "legend": {
3483 "alignAsTable": true,
3484 "avg": false,
3485 "current": false,
3486 "hideZero": false,
3487 "max": false,
3488 "min": false,
3489 "show": true,
3490 "total": false,
3491 "values": false
3492 },
3493 "lines": true,
3494 "linewidth": 2,
3495 "links": [],
3496 "nullPointMode": "connected",
3497 "percentage": false,
3498 "pointradius": 5,
3499 "points": false,
3500 "renderer": "flot",
3501 "seriesOverrides": [],
3502 "spaceLength": 10,
3503 "stack": false,
3504 "steppedLine": false,
3505 "targets": [
3506 {
3507 "expr": "rate(synapse_util_caches_cache:total{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
3508 "format": "time_series",
3509 "interval": "",
3510 "intervalFactor": 2,
3511 "legendFormat": "{{name}} {{job}}-{{index}}",
3512 "refId": "A",
3513 "step": 20
3514 }
3515 ],
3516 "thresholds": [],
3517 "timeFrom": null,
3518 "timeShift": null,
3519 "title": "Cache request rate",
3520 "tooltip": {
3521 "shared": false,
3522 "sort": 0,
3523 "value_type": "cumulative"
3524 },
3525 "type": "graph",
3526 "xaxis": {
3527 "buckets": null,
3528 "mode": "time",
3529 "name": null,
3530 "show": true,
3531 "values": []
3532 },
3533 "yaxes": [
3534 {
3535 "format": "rps",
3536 "logBase": 1,
3537 "max": null,
3538 "min": 0,
3539 "show": true
3540 },
3541 {
3542 "format": "short",
3543 "logBase": 1,
3544 "max": null,
3545 "min": null,
3546 "show": true
3547 }
3548 ],
3549 "yaxis": {
3550 "align": false,
3551 "alignLevel": null
3552 }
3553 },
3554 {
3555 "aliasColors": {},
3556 "bars": false,
3557 "dashLength": 10,
3558 "dashes": false,
3559 "datasource": "$datasource",
3560 "fill": 1,
3561 "gridPos": {
3562 "h": 10,
3563 "w": 12,
3564 "x": 12,
3565 "y": 65
3566 },
3567 "id": 39,
3568 "legend": {
3569 "alignAsTable": true,
3570 "avg": false,
3571 "current": false,
3572 "max": false,
3573 "min": false,
3574 "show": true,
3575 "total": false,
3576 "values": false
3577 },
3578 "lines": true,
3579 "linewidth": 1,
3580 "links": [],
3581 "nullPointMode": "null",
3582 "percentage": false,
3583 "pointradius": 5,
3584 "points": false,
3585 "renderer": "flot",
3586 "seriesOverrides": [],
3587 "spaceLength": 10,
3588 "stack": false,
3589 "steppedLine": false,
3590 "targets": [
3591 {
3592 "expr": "topk(10, rate(synapse_util_caches_cache:total{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]) - rate(synapse_util_caches_cache:hits{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]))",
3593 "format": "time_series",
3594 "intervalFactor": 2,
3595 "legendFormat": "{{name}} {{job}}-{{index}}",
3596 "refId": "A",
3597 "step": 20
3598 }
3599 ],
3600 "thresholds": [],
3601 "timeFrom": null,
3602 "timeShift": null,
3603 "title": "Top 10 cache misses",
3604 "tooltip": {
3605 "shared": true,
3606 "sort": 0,
3607 "value_type": "individual"
3608 },
3609 "type": "graph",
3610 "xaxis": {
3611 "buckets": null,
3612 "mode": "time",
3613 "name": null,
3614 "show": true,
3615 "values": []
3616 },
3617 "yaxes": [
3618 {
3619 "format": "rps",
3620 "label": null,
3621 "logBase": 1,
3622 "max": null,
3623 "min": null,
3624 "show": true
3625 },
3626 {
3627 "format": "short",
3628 "label": null,
3629 "logBase": 1,
3630 "max": null,
3631 "min": null,
3632 "show": true
3633 }
3634 ],
3635 "yaxis": {
3636 "align": false,
3637 "alignLevel": null
3638 }
3639 },
3640 {
3641 "aliasColors": {},
3642 "bars": false,
3643 "dashLength": 10,
3644 "dashes": false,
3645 "datasource": "${DS_PROMETHEUS}",
3646 "fill": 1,
3647 "gridPos": {
3648 "h": 9,
3649 "w": 12,
3650 "x": 0,
3651 "y": 75
3652 },
3653 "id": 65,
3654 "legend": {
3655 "alignAsTable": true,
3656 "avg": false,
3657 "current": false,
3658 "max": false,
3659 "min": false,
3660 "show": true,
3661 "total": false,
3662 "values": false
3663 },
3664 "lines": true,
3665 "linewidth": 1,
3666 "links": [],
3667 "nullPointMode": "null",
3668 "percentage": false,
3669 "pointradius": 5,
3670 "points": false,
3671 "renderer": "flot",
3672 "seriesOverrides": [],
3673 "spaceLength": 10,
3674 "stack": false,
3675 "steppedLine": false,
3676 "targets": [
3677 {
3678 "expr": "rate(synapse_util_caches_cache:evicted_size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
3679 "format": "time_series",
3680 "intervalFactor": 1,
3681 "legendFormat": "{{name}} {{job}}-{{index}}",
3682 "refId": "A"
3683 }
3684 ],
3685 "thresholds": [],
3686 "timeFrom": null,
3687 "timeShift": null,
3688 "title": "Cache eviction rate",
3689 "tooltip": {
3690 "shared": false,
3691 "sort": 0,
3692 "value_type": "individual"
3693 },
3694 "transparent": false,
3695 "type": "graph",
3696 "xaxis": {
3697 "buckets": null,
3698 "mode": "time",
3699 "name": null,
3700 "show": true,
3701 "values": []
3702 },
3703 "yaxes": [
3704 {
3705 "decimals": null,
3706 "format": "hertz",
3707 "label": "entries / second",
3708 "logBase": 1,
3709 "max": null,
3710 "min": null,
3711 "show": true
3712 },
3713 {
3714 "format": "short",
3715 "label": null,
3716 "logBase": 1,
3717 "max": null,
3718 "min": null,
3719 "show": true
3720 }
3721 ],
3722 "yaxis": {
3723 "align": false,
3724 "alignLevel": null
3725 }
3726 }
3727 ],
3728 "repeat": null,
3729 "title": "Caches",
3730 "type": "row"
3731 },
3732 {
3733 "collapsed": true,
3734 "gridPos": {
3735 "h": 1,
3736 "w": 24,
3737 "x": 0,
3738 "y": 26
3739 },
3740 "id": 62,
3741 "panels": [
3742 {
3743 "aliasColors": {},
3744 "bars": false,
3745 "dashLength": 10,
3746 "dashes": false,
3747 "datasource": "${DS_PROMETHEUS}",
3748 "fill": 1,
3749 "gridPos": {
3750 "h": 9,
3751 "w": 12,
3752 "x": 0,
3753 "y": 90
3754 },
3755 "id": 91,
3756 "legend": {
3757 "avg": false,
3758 "current": false,
3759 "max": false,
3760 "min": false,
3761 "show": true,
3762 "total": false,
3763 "values": false
3764 },
3765 "lines": true,
3766 "linewidth": 1,
3767 "links": [],
3768 "nullPointMode": "null",
3769 "percentage": false,
3770 "pointradius": 5,
3771 "points": false,
3772 "renderer": "flot",
3773 "seriesOverrides": [],
3774 "spaceLength": 10,
3775 "stack": true,
3776 "steppedLine": false,
3777 "targets": [
3778 {
3779 "expr": "rate(python_gc_time_sum{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[10m])",
3780 "format": "time_series",
3781 "instant": false,
3782 "intervalFactor": 1,
3783 "legendFormat": "{{job}}-{{index}} gen {{gen}}",
3784 "refId": "A"
3785 }
3786 ],
3787 "thresholds": [],
3788 "timeFrom": null,
3789 "timeShift": null,
3790 "title": "Total GC time by bucket (10m smoothing)",
3791 "tooltip": {
3792 "shared": true,
3793 "sort": 0,
3794 "value_type": "individual"
3795 },
3796 "type": "graph",
3797 "xaxis": {
3798 "buckets": null,
3799 "mode": "time",
3800 "name": null,
3801 "show": true,
3802 "values": []
3803 },
3804 "yaxes": [
3805 {
3806 "decimals": null,
3807 "format": "percentunit",
3808 "label": null,
3809 "logBase": 1,
3810 "max": null,
3811 "min": "0",
3812 "show": true
3813 },
3814 {
3815 "format": "short",
3816 "label": null,
3817 "logBase": 1,
3818 "max": null,
3819 "min": null,
3820 "show": true
3821 }
3822 ],
3823 "yaxis": {
3824 "align": false,
3825 "alignLevel": null
3826 }
3827 },
3828 {
3829 "aliasColors": {},
3830 "bars": false,
3831 "dashLength": 10,
3832 "dashes": false,
3833 "datasource": "$datasource",
3834 "decimals": 3,
3835 "editable": true,
3836 "error": false,
3837 "fill": 1,
3838 "grid": {},
3839 "gridPos": {
3840 "h": 9,
3841 "w": 12,
3842 "x": 12,
3843 "y": 90
3844 },
3845 "id": 21,
3846 "legend": {
3847 "alignAsTable": true,
3848 "avg": false,
3849 "current": false,
3850 "max": false,
3851 "min": false,
3852 "show": true,
3853 "total": false,
3854 "values": false
3855 },
3856 "lines": true,
3857 "linewidth": 2,
3858 "links": [],
3859 "nullPointMode": "null as zero",
3860 "percentage": false,
3861 "pointradius": 5,
3862 "points": false,
3863 "renderer": "flot",
3864 "seriesOverrides": [],
3865 "spaceLength": 10,
3866 "stack": false,
3867 "steppedLine": false,
3868 "targets": [
3869 {
3870 "expr": "rate(python_gc_time_sum{instance=\"$instance\",job=~\"$job\"}[$bucket_size])/rate(python_gc_time_count{instance=\"$instance\",job=~\"$job\"}[$bucket_size])",
3871 "format": "time_series",
3872 "intervalFactor": 2,
3873 "legendFormat": "{{job}}-{{index}} gen {{gen}}",
3874 "refId": "A",
3875 "step": 20,
3876 "target": ""
3877 }
3878 ],
3879 "thresholds": [],
3880 "timeFrom": null,
3881 "timeShift": null,
3882 "title": "Average GC Time Per Collection",
3883 "tooltip": {
3884 "shared": false,
3885 "sort": 0,
3886 "value_type": "cumulative"
3887 },
3888 "type": "graph",
3889 "xaxis": {
3890 "buckets": null,
3891 "mode": "time",
3892 "name": null,
3893 "show": true,
3894 "values": []
3895 },
3896 "yaxes": [
3897 {
3898 "format": "s",
3899 "logBase": 1,
3900 "max": null,
3901 "min": null,
3902 "show": true
3903 },
3904 {
3905 "format": "short",
3906 "logBase": 1,
3907 "max": null,
3908 "min": null,
3909 "show": true
3910 }
3911 ],
3912 "yaxis": {
3913 "align": false,
3914 "alignLevel": null
3915 }
3916 },
3917 {
3918 "aliasColors": {},
3919 "bars": false,
3920 "dashLength": 10,
3921 "dashes": false,
3922 "datasource": "${DS_PROMETHEUS}",
3923 "fill": 1,
3924 "gridPos": {
3925 "h": 9,
3926 "w": 12,
3927 "x": 0,
3928 "y": 99
3929 },
3930 "id": 89,
3931 "legend": {
3932 "avg": false,
3933 "current": false,
3934 "hideEmpty": true,
3935 "hideZero": false,
3936 "max": false,
3937 "min": false,
3938 "show": true,
3939 "total": false,
3940 "values": false
3941 },
3942 "lines": true,
3943 "linewidth": 1,
3944 "links": [],
3945 "nullPointMode": "null",
3946 "percentage": false,
3947 "pointradius": 5,
3948 "points": false,
3949 "renderer": "flot",
3950 "seriesOverrides": [],
3951 "spaceLength": 10,
3952 "stack": false,
3953 "steppedLine": false,
3954 "targets": [
3955 {
3956 "expr": "python_gc_counts{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}",
3957 "format": "time_series",
3958 "intervalFactor": 1,
3959 "legendFormat": "{{job}}-{{index}} gen {{gen}}",
3960 "refId": "A"
3961 }
3962 ],
3963 "thresholds": [],
3964 "timeFrom": null,
3965 "timeShift": null,
3966 "title": "Currently allocated objects",
3967 "tooltip": {
3968 "shared": false,
3969 "sort": 0,
3970 "value_type": "individual"
3971 },
3972 "type": "graph",
3973 "xaxis": {
3974 "buckets": null,
3975 "mode": "time",
3976 "name": null,
3977 "show": true,
3978 "values": []
3979 },
3980 "yaxes": [
3981 {
3982 "format": "short",
3983 "label": null,
3984 "logBase": 1,
3985 "max": null,
3986 "min": null,
3987 "show": true
3988 },
3989 {
3990 "format": "short",
3991 "label": null,
3992 "logBase": 1,
3993 "max": null,
3994 "min": null,
3995 "show": true
3996 }
3997 ],
3998 "yaxis": {
3999 "align": false,
4000 "alignLevel": null
4001 }
4002 },
4003 {
4004 "aliasColors": {},
4005 "bars": false,
4006 "dashLength": 10,
4007 "dashes": false,
4008 "datasource": "${DS_PROMETHEUS}",
4009 "fill": 1,
4010 "gridPos": {
4011 "h": 9,
4012 "w": 12,
4013 "x": 12,
4014 "y": 99
4015 },
4016 "id": 93,
4017 "legend": {
4018 "avg": false,
4019 "current": false,
4020 "max": false,
4021 "min": false,
4022 "show": true,
4023 "total": false,
4024 "values": false
4025 },
4026 "lines": true,
4027 "linewidth": 1,
4028 "links": [],
4029 "nullPointMode": "null",
4030 "percentage": false,
4031 "pointradius": 5,
4032 "points": false,
4033 "renderer": "flot",
4034 "seriesOverrides": [],
4035 "spaceLength": 10,
4036 "stack": false,
4037 "steppedLine": false,
4038 "targets": [
4039 {
4040 "expr": "rate(python_gc_unreachable_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])/rate(python_gc_time_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
4041 "format": "time_series",
4042 "intervalFactor": 1,
4043 "legendFormat": "{{job}}-{{index}} gen {{gen}}",
4044 "refId": "A"
4045 }
4046 ],
4047 "thresholds": [],
4048 "timeFrom": null,
4049 "timeShift": null,
4050 "title": "Object counts per collection",
4051 "tooltip": {
4052 "shared": true,
4053 "sort": 0,
4054 "value_type": "individual"
4055 },
4056 "type": "graph",
4057 "xaxis": {
4058 "buckets": null,
4059 "mode": "time",
4060 "name": null,
4061 "show": true,
4062 "values": []
4063 },
4064 "yaxes": [
4065 {
4066 "format": "short",
4067 "label": null,
4068 "logBase": 1,
4069 "max": null,
4070 "min": null,
4071 "show": true
4072 },
4073 {
4074 "format": "short",
4075 "label": null,
4076 "logBase": 1,
4077 "max": null,
4078 "min": null,
4079 "show": true
4080 }
4081 ],
4082 "yaxis": {
4083 "align": false,
4084 "alignLevel": null
4085 }
4086 },
4087 {
4088 "aliasColors": {},
4089 "bars": false,
4090 "dashLength": 10,
4091 "dashes": false,
4092 "datasource": "${DS_PROMETHEUS}",
4093 "fill": 1,
4094 "gridPos": {
4095 "h": 9,
4096 "w": 12,
4097 "x": 0,
4098 "y": 108
4099 },
4100 "id": 95,
4101 "legend": {
4102 "avg": false,
4103 "current": false,
4104 "max": false,
4105 "min": false,
4106 "show": true,
4107 "total": false,
4108 "values": false
4109 },
4110 "lines": true,
4111 "linewidth": 1,
4112 "links": [],
4113 "nullPointMode": "null",
4114 "percentage": false,
4115 "pointradius": 5,
4116 "points": false,
4117 "renderer": "flot",
4118 "seriesOverrides": [],
4119 "spaceLength": 10,
4120 "stack": false,
4121 "steppedLine": false,
4122 "targets": [
4123 {
4124 "expr": "rate(python_gc_time_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
4125 "format": "time_series",
4126 "intervalFactor": 1,
4127 "legendFormat": "{{job}}-{{index}} gen {{gen}}",
4128 "refId": "A"
4129 }
4130 ],
4131 "thresholds": [],
4132 "timeFrom": null,
4133 "timeShift": null,
4134 "title": "GC frequency",
4135 "tooltip": {
4136 "shared": true,
4137 "sort": 0,
4138 "value_type": "individual"
4139 },
4140 "type": "graph",
4141 "xaxis": {
4142 "buckets": null,
4143 "mode": "time",
4144 "name": null,
4145 "show": true,
4146 "values": []
4147 },
4148 "yaxes": [
4149 {
4150 "format": "hertz",
4151 "label": null,
4152 "logBase": 1,
4153 "max": null,
4154 "min": null,
4155 "show": true
4156 },
4157 {
4158 "format": "short",
4159 "label": null,
4160 "logBase": 1,
4161 "max": null,
4162 "min": null,
4163 "show": true
4164 }
4165 ],
4166 "yaxis": {
4167 "align": false,
4168 "alignLevel": null
4169 }
4170 },
4171 {
4172 "cards": {
4173 "cardPadding": 0,
4174 "cardRound": null
4175 },
4176 "color": {
4177 "cardColor": "#b4ff00",
4178 "colorScale": "sqrt",
4179 "colorScheme": "interpolateSpectral",
4180 "exponent": 0.5,
4181 "max": null,
4182 "min": 0,
4183 "mode": "spectrum"
4184 },
4185 "dataFormat": "tsbuckets",
4186 "datasource": "${DS_PROMETHEUS}",
4187 "gridPos": {
4188 "h": 9,
4189 "w": 12,
4190 "x": 12,
4191 "y": 108
4192 },
4193 "heatmap": {},
4194 "highlightCards": true,
4195 "id": 87,
4196 "legend": {
4197 "show": true
4198 },
4199 "links": [],
4200 "targets": [
4201 {
4202 "expr": "sum(rate(python_gc_time_bucket[$bucket_size])) by (le)",
4203 "format": "heatmap",
4204 "intervalFactor": 1,
4205 "legendFormat": "{{le}}",
4206 "refId": "A"
4207 }
4208 ],
4209 "title": "GC durations",
4210 "tooltip": {
4211 "show": true,
4212 "showHistogram": false
4213 },
4214 "type": "heatmap",
4215 "xAxis": {
4216 "show": true
4217 },
4218 "xBucketNumber": null,
4219 "xBucketSize": null,
4220 "yAxis": {
4221 "decimals": null,
4222 "format": "s",
4223 "logBase": 1,
4224 "max": null,
4225 "min": null,
4226 "show": true,
4227 "splitFactor": null
4228 },
4229 "yBucketBound": "auto",
4230 "yBucketNumber": null,
4231 "yBucketSize": null
4232 }
4233 ],
4234 "repeat": null,
4235 "title": "GC",
4236 "type": "row"
4237 },
4238 {
4239 "collapsed": true,
4240 "gridPos": {
4241 "h": 1,
4242 "w": 24,
4243 "x": 0,
4244 "y": 27
4245 },
4246 "id": 63,
4247 "panels": [
4248 {
4249 "aliasColors": {},
4250 "bars": false,
4251 "dashLength": 10,
4252 "dashes": false,
4253 "datasource": "${DS_PROMETHEUS}",
4254 "fill": 1,
4255 "gridPos": {
4256 "h": 7,
4257 "w": 12,
4258 "x": 0,
4259 "y": 19
4260 },
4261 "id": 2,
4262 "legend": {
4263 "avg": false,
4264 "current": false,
4265 "max": false,
4266 "min": false,
4267 "show": true,
4268 "total": false,
4269 "values": false
4270 },
4271 "lines": true,
4272 "linewidth": 1,
4273 "links": [],
4274 "nullPointMode": "null",
4275 "percentage": false,
4276 "pointradius": 5,
4277 "points": false,
4278 "renderer": "flot",
4279 "seriesOverrides": [],
4280 "spaceLength": 10,
4281 "stack": false,
4282 "steppedLine": false,
4283 "targets": [
4284 {
4285 "expr": "rate(synapse_replication_tcp_resource_user_sync{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
4286 "format": "time_series",
4287 "intervalFactor": 2,
4288 "legendFormat": "user started/stopped syncing",
4289 "refId": "A",
4290 "step": 20
4291 },
4292 {
4293 "expr": "rate(synapse_replication_tcp_resource_federation_ack{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
4294 "format": "time_series",
4295 "intervalFactor": 2,
4296 "legendFormat": "federation ack",
4297 "refId": "B",
4298 "step": 20
4299 },
4300 {
4301 "expr": "rate(synapse_replication_tcp_resource_remove_pusher{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
4302 "format": "time_series",
4303 "intervalFactor": 2,
4304 "legendFormat": "remove pusher",
4305 "refId": "C",
4306 "step": 20
4307 },
4308 {
4309 "expr": "rate(synapse_replication_tcp_resource_invalidate_cache{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
4310 "format": "time_series",
4311 "intervalFactor": 2,
4312 "legendFormat": "invalidate cache",
4313 "refId": "D",
4314 "step": 20
4315 },
4316 {
4317 "expr": "rate(synapse_replication_tcp_resource_user_ip_cache{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
4318 "format": "time_series",
4319 "intervalFactor": 2,
4320 "legendFormat": "user ip cache",
4321 "refId": "E",
4322 "step": 20
4323 }
4324 ],
4325 "thresholds": [],
4326 "timeFrom": null,
4327 "timeShift": null,
4328 "title": "Rate of events on replication master",
4329 "tooltip": {
4330 "shared": true,
4331 "sort": 0,
4332 "value_type": "individual"
4333 },
4334 "type": "graph",
4335 "xaxis": {
4336 "buckets": null,
4337 "mode": "time",
4338 "name": null,
4339 "show": true,
4340 "values": []
4341 },
4342 "yaxes": [
4343 {
4344 "format": "hertz",
4345 "label": null,
4346 "logBase": 1,
4347 "max": null,
4348 "min": null,
4349 "show": true
4350 },
4351 {
4352 "format": "short",
4353 "label": null,
4354 "logBase": 1,
4355 "max": null,
4356 "min": null,
4357 "show": true
4358 }
4359 ]
4360 },
4361 {
4362 "aliasColors": {},
4363 "bars": false,
4364 "dashLength": 10,
4365 "dashes": false,
4366 "datasource": "${DS_PROMETHEUS}",
4367 "fill": 1,
4368 "gridPos": {
4369 "h": 7,
4370 "w": 12,
4371 "x": 12,
4372 "y": 19
4373 },
4374 "id": 41,
4375 "legend": {
4376 "avg": false,
4377 "current": false,
4378 "max": false,
4379 "min": false,
4380 "show": true,
4381 "total": false,
4382 "values": false
4383 },
4384 "lines": true,
4385 "linewidth": 1,
4386 "links": [],
4387 "nullPointMode": "null",
4388 "percentage": false,
4389 "pointradius": 5,
4390 "points": false,
4391 "renderer": "flot",
4392 "seriesOverrides": [],
4393 "spaceLength": 10,
4394 "stack": false,
4395 "steppedLine": false,
4396 "targets": [
4397 {
4398 "expr": "rate(synapse_replication_tcp_resource_stream_updates{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
4399 "format": "time_series",
4400 "interval": "",
4401 "intervalFactor": 2,
4402 "legendFormat": "{{stream_name}}",
4403 "refId": "A",
4404 "step": 20
4405 }
4406 ],
4407 "thresholds": [],
4408 "timeFrom": null,
4409 "timeShift": null,
4410 "title": "Outgoing stream updates",
4411 "tooltip": {
4412 "shared": true,
4413 "sort": 0,
4414 "value_type": "individual"
4415 },
4416 "type": "graph",
4417 "xaxis": {
4418 "buckets": null,
4419 "mode": "time",
4420 "name": null,
4421 "show": true,
4422 "values": []
4423 },
4424 "yaxes": [
4425 {
4426 "format": "hertz",
4427 "label": null,
4428 "logBase": 1,
4429 "max": null,
4430 "min": null,
4431 "show": true
4432 },
4433 {
4434 "format": "short",
4435 "label": null,
4436 "logBase": 1,
4437 "max": null,
4438 "min": null,
4439 "show": true
4440 }
4441 ]
4442 },
4443 {
4444 "aliasColors": {},
4445 "bars": false,
4446 "dashLength": 10,
4447 "dashes": false,
4448 "datasource": "${DS_PROMETHEUS}",
4449 "fill": 1,
4450 "gridPos": {
4451 "h": 7,
4452 "w": 12,
4453 "x": 0,
4454 "y": 26
4455 },
4456 "id": 42,
4457 "legend": {
4458 "avg": false,
4459 "current": false,
4460 "max": false,
4461 "min": false,
4462 "show": true,
4463 "total": false,
4464 "values": false
4465 },
4466 "lines": true,
4467 "linewidth": 1,
4468 "links": [],
4469 "nullPointMode": "null",
4470 "percentage": false,
4471 "pointradius": 5,
4472 "points": false,
4473 "renderer": "flot",
4474 "seriesOverrides": [],
4475 "spaceLength": 10,
4476 "stack": false,
4477 "steppedLine": false,
4478 "targets": [
4479 {
4480 "expr": "sum (rate(synapse_replication_tcp_protocol_inbound_commands{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])) without (name, conn_id)",
4481 "format": "time_series",
4482 "intervalFactor": 2,
4483 "legendFormat": "{{job}}-{{index}} {{command}}",
4484 "refId": "A",
4485 "step": 20
4486 }
4487 ],
4488 "thresholds": [],
4489 "timeFrom": null,
4490 "timeShift": null,
4491 "title": "Rate of incoming commands",
4492 "tooltip": {
4493 "shared": true,
4494 "sort": 0,
4495 "value_type": "individual"
4496 },
4497 "type": "graph",
4498 "xaxis": {
4499 "buckets": null,
4500 "mode": "time",
4501 "name": null,
4502 "show": true,
4503 "values": []
4504 },
4505 "yaxes": [
4506 {
4507 "format": "hertz",
4508 "label": null,
4509 "logBase": 1,
4510 "max": null,
4511 "min": null,
4512 "show": true
4513 },
4514 {
4515 "format": "short",
4516 "label": null,
4517 "logBase": 1,
4518 "max": null,
4519 "min": null,
4520 "show": true
4521 }
4522 ]
4523 },
4524 {
4525 "aliasColors": {},
4526 "bars": false,
4527 "dashLength": 10,
4528 "dashes": false,
4529 "datasource": "${DS_PROMETHEUS}",
4530 "fill": 1,
4531 "gridPos": {
4532 "h": 7,
4533 "w": 12,
4534 "x": 12,
4535 "y": 26
4536 },
4537 "id": 43,
4538 "legend": {
4539 "avg": false,
4540 "current": false,
4541 "max": false,
4542 "min": false,
4543 "show": true,
4544 "total": false,
4545 "values": false
4546 },
4547 "lines": true,
4548 "linewidth": 1,
4549 "links": [],
4550 "nullPointMode": "null",
4551 "percentage": false,
4552 "pointradius": 5,
4553 "points": false,
4554 "renderer": "flot",
4555 "seriesOverrides": [],
4556 "spaceLength": 10,
4557 "stack": false,
4558 "steppedLine": false,
4559 "targets": [
4560 {
4561 "expr": "sum (rate(synapse_replication_tcp_protocol_outbound_commands{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])) without (name, conn_id)",
4562 "format": "time_series",
4563 "intervalFactor": 2,
4564 "legendFormat": "{{job}}-{{index}} {{command}}",
4565 "refId": "A",
4566 "step": 20
4567 }
4568 ],
4569 "thresholds": [],
4570 "timeFrom": null,
4571 "timeShift": null,
4572 "title": "Rate of outgoing commands",
4573 "tooltip": {
4574 "shared": true,
4575 "sort": 0,
4576 "value_type": "individual"
4577 },
4578 "type": "graph",
4579 "xaxis": {
4580 "buckets": null,
4581 "mode": "time",
4582 "name": null,
4583 "show": true,
4584 "values": []
4585 },
4586 "yaxes": [
4587 {
4588 "format": "hertz",
4589 "label": null,
4590 "logBase": 1,
4591 "max": null,
4592 "min": null,
4593 "show": true
4594 },
4595 {
4596 "format": "short",
4597 "label": null,
4598 "logBase": 1,
4599 "max": null,
4600 "min": null,
4601 "show": true
4602 }
4603 ]
4604 }
4605 ],
4606 "repeat": null,
4607 "title": "Replication",
4608 "type": "row"
4609 },
4610 {
4611 "collapsed": true,
4612 "gridPos": {
4613 "h": 1,
4614 "w": 24,
4615 "x": 0,
4616 "y": 28
4617 },
4618 "id": 69,
4619 "panels": [
4620 {
4621 "aliasColors": {},
4622 "bars": false,
4623 "dashLength": 10,
4624 "dashes": false,
4625 "datasource": "${DS_PROMETHEUS}",
4626 "fill": 1,
4627 "gridPos": {
4628 "h": 9,
4629 "w": 12,
4630 "x": 0,
4631 "y": 11
4632 },
4633 "id": 67,
4634 "legend": {
4635 "avg": false,
4636 "current": false,
4637 "max": false,
4638 "min": false,
4639 "show": true,
4640 "total": false,
4641 "values": false
4642 },
4643 "lines": true,
4644 "linewidth": 1,
4645 "links": [],
4646 "nullPointMode": "null",
4647 "percentage": false,
4648 "pointradius": 5,
4649 "points": false,
4650 "renderer": "flot",
4651 "seriesOverrides": [],
4652 "spaceLength": 10,
4653 "stack": false,
4654 "steppedLine": false,
4655 "targets": [
4656 {
4657 "expr": " synapse_event_persisted_position{instance=\"$instance\"} - ignoring(index, job, name) group_right(instance) synapse_event_processing_positions{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
4658 "format": "time_series",
4659 "interval": "",
4660 "intervalFactor": 1,
4661 "legendFormat": "{{job}}-{{index}}",
4662 "refId": "A"
4663 }
4664 ],
4665 "thresholds": [],
4666 "timeFrom": null,
4667 "timeShift": null,
4668 "title": "Event processing lag",
4669 "tooltip": {
4670 "shared": true,
4671 "sort": 0,
4672 "value_type": "individual"
4673 },
4674 "type": "graph",
4675 "xaxis": {
4676 "buckets": null,
4677 "mode": "time",
4678 "name": null,
4679 "show": true,
4680 "values": []
4681 },
4682 "yaxes": [
4683 {
4684 "format": "short",
4685 "label": "events",
4686 "logBase": 1,
4687 "max": null,
4688 "min": null,
4689 "show": true
4690 },
4691 {
4692 "format": "short",
4693 "label": null,
4694 "logBase": 1,
4695 "max": null,
4696 "min": null,
4697 "show": true
4698 }
4699 ]
4700 },
4701 {
4702 "aliasColors": {},
4703 "bars": false,
4704 "dashLength": 10,
4705 "dashes": false,
4706 "datasource": "${DS_PROMETHEUS}",
4707 "fill": 1,
4708 "gridPos": {
4709 "h": 9,
4710 "w": 12,
4711 "x": 12,
4712 "y": 11
4713 },
4714 "id": 71,
4715 "legend": {
4716 "avg": false,
4717 "current": false,
4718 "max": false,
4719 "min": false,
4720 "show": true,
4721 "total": false,
4722 "values": false
4723 },
4724 "lines": true,
4725 "linewidth": 1,
4726 "links": [],
4727 "nullPointMode": "null",
4728 "percentage": false,
4729 "pointradius": 5,
4730 "points": false,
4731 "renderer": "flot",
4732 "seriesOverrides": [],
4733 "spaceLength": 10,
4734 "stack": false,
4735 "steppedLine": false,
4736 "targets": [
4737 {
4738 "expr": "time()*1000-synapse_event_processing_last_ts{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
4739 "format": "time_series",
4740 "hide": false,
4741 "intervalFactor": 1,
4742 "legendFormat": "{{job}}-{{index}} {{name}}",
4743 "refId": "B"
4744 }
4745 ],
4746 "thresholds": [],
4747 "timeFrom": null,
4748 "timeShift": null,
4749 "title": "Age of last processed event",
4750 "tooltip": {
4751 "shared": true,
4752 "sort": 0,
4753 "value_type": "individual"
4754 },
4755 "type": "graph",
4756 "xaxis": {
4757 "buckets": null,
4758 "mode": "time",
4759 "name": null,
4760 "show": true,
4761 "values": []
4762 },
4763 "yaxes": [
4764 {
4765 "format": "ms",
4766 "label": null,
4767 "logBase": 1,
4768 "max": null,
4769 "min": null,
4770 "show": true
4771 },
4772 {
4773 "format": "short",
4774 "label": null,
4775 "logBase": 1,
4776 "max": null,
4777 "min": null,
4778 "show": true
4779 }
4780 ]
4781 }
4782 ],
4783 "title": "Event processing loop positions",
4784 "type": "row"
4785 }
4786 ],
4787 "refresh": "1m",
4788 "schemaVersion": 16,
4789 "style": "dark",
4790 "tags": [
4791 "matrix"
4792 ],
4793 "templating": {
4794 "list": [
4795 {
4796 "current": {
4797 "text": "Prometheus",
4798 "value": "Prometheus"
4799 },
4800 "hide": 0,
4801 "label": null,
4802 "name": "datasource",
4803 "options": [],
4804 "query": "prometheus",
4805 "refresh": 1,
4806 "regex": "",
4807 "type": "datasource"
4808 },
4809 {
4810 "allFormat": "glob",
4811 "auto": true,
4812 "auto_count": 100,
4813 "auto_min": "30s",
4814 "current": {
4815 "text": "auto",
4816 "value": "$__auto_interval_bucket_size"
4817 },
4818 "datasource": null,
4819 "hide": 0,
4820 "includeAll": false,
4821 "label": "Bucket Size",
4822 "multi": false,
4823 "multiFormat": "glob",
4824 "name": "bucket_size",
4825 "options": [
4826 {
4827 "selected": true,
4828 "text": "auto",
4829 "value": "$__auto_interval_bucket_size"
4830 },
4831 {
4832 "selected": false,
4833 "text": "30s",
4834 "value": "30s"
4835 },
4836 {
4837 "selected": false,
4838 "text": "1m",
4839 "value": "1m"
4840 },
4841 {
4842 "selected": false,
4843 "text": "2m",
4844 "value": "2m"
4845 },
4846 {
4847 "selected": false,
4848 "text": "5m",
4849 "value": "5m"
4850 }
4851 ],
4852 "query": "30s,1m,2m,5m",
4853 "refresh": 2,
4854 "type": "interval"
4855 },
4856 {
4857 "allValue": null,
4858 "current": {},
4859 "datasource": "$datasource",
4860 "hide": 0,
4861 "includeAll": false,
4862 "label": null,
4863 "multi": false,
4864 "name": "instance",
4865 "options": [],
4866 "query": "label_values(process_cpu_user_seconds_total{job=~\"synapse.*\"}, instance)",
4867 "refresh": 2,
4868 "regex": "",
4869 "sort": 0,
4870 "tagValuesQuery": "",
4871 "tags": [],
4872 "tagsQuery": "",
4873 "type": "query",
4874 "useTags": false
4875 },
4876 {
4877 "allFormat": "regex wildcard",
4878 "allValue": "",
4879 "current": {},
4880 "datasource": "$datasource",
4881 "hide": 0,
4882 "hideLabel": false,
4883 "includeAll": true,
4884 "label": "Job",
4885 "multi": true,
4886 "multiFormat": "regex values",
4887 "name": "job",
4888 "options": [],
4889 "query": "label_values(process_cpu_user_seconds_total{job=~\"synapse.*\"}, job)",
4890 "refresh": 2,
4891 "refresh_on_load": false,
4892 "regex": "",
4893 "sort": 1,
4894 "tagValuesQuery": "",
4895 "tags": [],
4896 "tagsQuery": "",
4897 "type": "query",
4898 "useTags": false
4899 },
4900 {
4901 "allFormat": "regex wildcard",
4902 "allValue": ".*",
4903 "current": {},
4904 "datasource": "$datasource",
4905 "hide": 0,
4906 "hideLabel": false,
4907 "includeAll": true,
4908 "label": "",
4909 "multi": true,
4910 "multiFormat": "regex values",
4911 "name": "index",
4912 "options": [],
4913 "query": "label_values(process_cpu_user_seconds_total{job=~\"synapse.*\"}, index)",
4914 "refresh": 2,
4915 "refresh_on_load": false,
4916 "regex": "",
4917 "sort": 3,
4918 "tagValuesQuery": "",
4919 "tags": [],
4920 "tagsQuery": "",
4921 "type": "query",
4922 "useTags": false
4923 }
4924 ]
4925 },
4926 "time": {
4927 "from": "now-1h",
4928 "to": "now"
4929 },
4930 "timepicker": {
4931 "now": true,
4932 "refresh_intervals": [
4933 "5s",
4934 "10s",
4935 "30s",
4936 "1m",
4937 "5m",
4938 "15m",
4939 "30m",
4940 "1h",
4941 "2h",
4942 "1d"
4943 ],
4944 "time_options": [
4945 "5m",
4946 "15m",
4947 "1h",
4948 "6h",
4949 "12h",
4950 "24h",
4951 "2d",
4952 "7d",
4953 "30d"
4954 ]
4955 },
4956 "timezone": "",
4957 "title": "Synapse",
4958 "uid": "000000012",
4959 "version": 125
4960 }
0 FROM docker.io/python:2-alpine3.7
1
2 RUN apk add --no-cache --virtual .nacl_deps \
3 build-base \
4 libffi-dev \
5 libjpeg-turbo-dev \
6 libressl-dev \
7 libxslt-dev \
8 linux-headers \
9 postgresql-dev \
10 su-exec \
11 zlib-dev
12
13 COPY . /synapse
14
15 # A wheel cache may be provided in ./cache for faster builds
16 RUN cd /synapse \
17 && pip install --upgrade \
18 lxml \
19 pip \
20 psycopg2 \
21 setuptools \
22 && mkdir -p /synapse/cache \
23 && pip install -f /synapse/cache --upgrade --process-dependency-links . \
24 && mv /synapse/docker/start.py /synapse/docker/conf / \
25 && rm -rf \
26 setup.cfg \
27 setup.py \
28 synapse
29
30 VOLUME ["/data"]
31
32 EXPOSE 8008/tcp 8448/tcp
33
34 ENTRYPOINT ["/start.py"]
0 # Synapse Docker
1
2 This Docker image will run Synapse as a single process. It does not provide a database
3 server or a TURN server; you should run these separately.
4
5 ## Run
6
7 We do not currently offer a `latest` image, as this has somewhat undefined semantics.
8 We instead release only tagged versions so upgrading between releases is entirely
9 within your control.
10
11 ### Using docker-compose (easier)
12
13 This image is designed to run either with an automatically generated configuration
14 file or with a custom configuration that requires manual editing.
15
16 An easy way to make use of this image is via docker-compose. See the
17 [contrib/docker](../contrib/docker)
18 section of the synapse project for examples.
19
20 ### Without Compose (harder)
21
22 If you do not wish to use Compose, you may still run this image using plain
23 Docker commands. Note that the following is just a guideline, and you may need
24 to add parameters to the `docker run` command to account for the network
25 configuration of your postgres database.
26
27 ```
28 docker run \
29 -d \
30 --name synapse \
31 -v ${DATA_PATH}:/data \
32 -e SYNAPSE_SERVER_NAME=my.matrix.host \
33 -e SYNAPSE_REPORT_STATS=yes \
34 docker.io/matrixdotorg/synapse:latest
35 ```
36
37 ## Volumes
38
39 The image expects a single volume, located at ``/data``, that will hold:
40
41 * temporary files during uploads;
42 * uploaded media and thumbnails;
43 * the SQLite database if you do not configure postgres;
44 * the appservices configuration.
45
46 You are free to use separate volumes depending on the storage endpoints at your
47 disposal. For instance, ``/data/media`` could be stored on large but
48 low-performance HDD storage while other files could be stored on
49 high-performance endpoints.
50
51 To set up an application service, create an ``appservices``
52 directory in the data volume and write the application service YAML
53 configuration file there. Multiple application services are supported.
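The appservice setup described above can be sketched as follows (the host path is illustrative; in a real deployment use the directory you mount at ``/data``):

```
# Host directory that is mounted at /data in the container
DATA_PATH="${DATA_PATH:-$(mktemp -d)}"

# Create the appservices directory; drop one YAML registration
# file per application service into it.
mkdir -p "${DATA_PATH}/appservices"
# e.g.: cp my-bridge-registration.yaml "${DATA_PATH}/appservices/"
```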
54
55 ## Environment
56
57 Unless you specify a custom path for the configuration file, a very generic
58 file will be generated, based on the following environment settings.
59 These are a good starting point for setting up your own deployment.
60
61 Global settings:
62
63 * ``UID``, the user id Synapse will run as [default 991]
64 * ``GID``, the group id Synapse will run as [default 991]
65 * ``SYNAPSE_CONFIG_PATH``, path to a custom config file
66
67 If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file and
68 then customize it manually. No other environment variable is required.
69
70 Otherwise, a dynamic configuration file will be used. The following environment
71 variables are available for configuration:
72
73 * ``SYNAPSE_SERVER_NAME`` (mandatory), the public hostname of the server.
74 * ``SYNAPSE_REPORT_STATS`` (mandatory, ``yes`` or ``no``), enable anonymous
75 statistics reporting back to the Matrix project, which helps us to get funding.
76 * ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
77 you run your own TLS-capable reverse proxy).
78 * ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
79 the Synapse instance.
80 * ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guests to join this server.
81 * ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
82 * ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
83 * ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
84 key in order to enable recaptcha upon registration.
85 * ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
86 key in order to enable recaptcha upon registration.
87 * ``SYNAPSE_TURN_URIS``, set this variable to a comma-separated list of TURN
88 URIs to enable TURN for this homeserver.
89 * ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
90
91 Shared secrets, which will be initialized to random values if not set:
92
93 * ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users when
94 registration is disabled.
95 * ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
96 to the server.
97
98 Database specific values (will use SQLite if not set):
99
100 * `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
101 * `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
102 * `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
103 * `POSTGRES_USER` - The user for the synapse postgres database. [default: `matrix`]
104
105 Mail server specific values (will not send emails if not set):
106
107 * ``SYNAPSE_SMTP_HOST``, hostname of the mail server.
108 * ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
109 * ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server, if required.
110 * ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server, if required.
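Putting the settings above together, a fuller ``docker run`` invocation might look like the following (all values are placeholders; adjust them for your deployment):

```
docker run -d \
    --name synapse \
    -v ${DATA_PATH}:/data \
    -p 8008:8008 -p 8448:8448 \
    -e SYNAPSE_SERVER_NAME=my.matrix.host \
    -e SYNAPSE_REPORT_STATS=yes \
    -e SYNAPSE_ENABLE_REGISTRATION=yes \
    -e POSTGRES_HOST=db \
    -e POSTGRES_PASSWORD=changeme \
    docker.io/matrixdotorg/synapse:latest
```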
111
112 ## Build
113
114 Build the docker image with the `docker build` command from the root of the synapse repository.
115
116 ```
117 docker build -t docker.io/matrixdotorg/synapse . -f docker/Dockerfile
118 ```
119
120 The `-t` option sets the image tag. Official images are tagged `matrixdotorg/synapse:<version>` where `<version>` is the same as the release tag in the synapse git repository.
121
122 You may have a local Python wheel cache available, in which case copy the relevant
123 packages into the ``cache/`` directory at the root of the project.
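One way to populate such a cache (a sketch, assuming a working Python build toolchain) is to pre-build wheels from the repository root before invoking `docker build`:

```
# Build wheels for synapse and its dependencies into ./cache
pip wheel --wheel-dir cache/ .
```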
0 # vim:ft=yaml
1
2 ## TLS ##
3
4 tls_certificate_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.crt"
5 tls_private_key_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.key"
6 tls_dh_params_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.dh"
7 no_tls: {{ "True" if SYNAPSE_NO_TLS else "False" }}
8 tls_fingerprints: []
9
10 ## Server ##
11
12 server_name: "{{ SYNAPSE_SERVER_NAME }}"
13 pid_file: /homeserver.pid
14 web_client: False
15 soft_file_limit: 0
16
17 ## Ports ##
18
19 listeners:
20 {% if not SYNAPSE_NO_TLS %}
21 -
22 port: 8448
23 bind_addresses: ['0.0.0.0']
24 type: http
25 tls: true
26 x_forwarded: false
27 resources:
28 - names: [client]
29 compress: true
30 - names: [federation] # Federation APIs
31 compress: false
32 {% endif %}
33
34 - port: 8008
35 tls: false
36 bind_addresses: ['0.0.0.0']
37 type: http
38 x_forwarded: false
39
40 resources:
41 - names: [client]
42 compress: true
43 - names: [federation]
44 compress: false
45
46 ## Database ##
47
48 {% if POSTGRES_PASSWORD %}
49 database:
50 name: "psycopg2"
51 args:
52 user: "{{ POSTGRES_USER or "synapse" }}"
53 password: "{{ POSTGRES_PASSWORD }}"
54 database: "{{ POSTGRES_DB or "synapse" }}"
55 host: "{{ POSTGRES_HOST or "db" }}"
56 port: "{{ POSTGRES_PORT or "5432" }}"
57 cp_min: 5
58 cp_max: 10
59 {% else %}
60 database:
61 name: "sqlite3"
62 args:
63 database: "/data/homeserver.db"
64 {% endif %}
65
66 ## Performance ##
67
68 event_cache_size: "{{ SYNAPSE_EVENT_CACHE_SIZE or "10K" }}"
69 verbose: 0
70 log_file: "/data/homeserver.log"
71 log_config: "/compiled/log.config"
72
73 ## Ratelimiting ##
74
75 rc_messages_per_second: 0.2
76 rc_message_burst_count: 10.0
77 federation_rc_window_size: 1000
78 federation_rc_sleep_limit: 10
79 federation_rc_sleep_delay: 500
80 federation_rc_reject_limit: 50
81 federation_rc_concurrent: 3
82
83 ## Files ##
84
85 media_store_path: "/data/media"
86 uploads_path: "/data/uploads"
87 max_upload_size: "10M"
88 max_image_pixels: "32M"
89 dynamic_thumbnails: false
90
91 # List of thumbnails to precalculate when an image is uploaded.
92 thumbnail_sizes:
93 - width: 32
94 height: 32
95 method: crop
96 - width: 96
97 height: 96
98 method: crop
99 - width: 320
100 height: 240
101 method: scale
102 - width: 640
103 height: 480
104 method: scale
105 - width: 800
106 height: 600
107 method: scale
108
109 url_preview_enabled: False
110 max_spider_size: "10M"
111
112 ## Captcha ##
113
114 {% if SYNAPSE_RECAPTCHA_PUBLIC_KEY %}
115 recaptcha_public_key: "{{ SYNAPSE_RECAPTCHA_PUBLIC_KEY }}"
116 recaptcha_private_key: "{{ SYNAPSE_RECAPTCHA_PRIVATE_KEY }}"
117 enable_registration_captcha: True
118 recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
119 {% else %}
120 recaptcha_public_key: "YOUR_PUBLIC_KEY"
121 recaptcha_private_key: "YOUR_PRIVATE_KEY"
122 enable_registration_captcha: False
123 recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
124 {% endif %}
125
126 ## Turn ##
127
128 {% if SYNAPSE_TURN_URIS %}
129 turn_uris:
130 {% for uri in SYNAPSE_TURN_URIS.split(',') %} - "{{ uri }}"
131 {% endfor %}
132 turn_shared_secret: "{{ SYNAPSE_TURN_SECRET }}"
133 turn_user_lifetime: "1h"
134 turn_allow_guests: True
135 {% else %}
136 turn_uris: []
137 turn_shared_secret: "YOUR_SHARED_SECRET"
138 turn_user_lifetime: "1h"
139 turn_allow_guests: True
140 {% endif %}
141
142 ## Registration ##
143
144 enable_registration: {{ "True" if SYNAPSE_ENABLE_REGISTRATION else "False" }}
145 registration_shared_secret: "{{ SYNAPSE_REGISTRATION_SHARED_SECRET }}"
146 bcrypt_rounds: 12
147 allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
148 enable_group_creation: true
149
150 # The list of identity servers trusted to verify third party
151 # identifiers by this server.
152 trusted_third_party_id_servers:
153 - matrix.org
154 - vector.im
155 - riot.im
156
157 ## Metrics ##
158
159 {% if SYNAPSE_REPORT_STATS.lower() == "yes" %}
160 enable_metrics: True
161 report_stats: True
162 {% else %}
163 enable_metrics: False
164 report_stats: False
165 {% endif %}
166
167 ## API Configuration ##
168
169 room_invite_state_types:
170 - "m.room.join_rules"
171 - "m.room.canonical_alias"
172 - "m.room.avatar"
173 - "m.room.name"
174
175 {% if SYNAPSE_APPSERVICES %}
176 app_service_config_files:
177 {% for appservice in SYNAPSE_APPSERVICES %} - "{{ appservice }}"
178 {% endfor %}
179 {% else %}
180 app_service_config_files: []
181 {% endif %}
182
183 macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"
184 expire_access_token: False
185
186 ## Signing Keys ##
187
188 signing_key_path: "/data/{{ SYNAPSE_SERVER_NAME }}.signing.key"
189 old_signing_keys: {}
190 key_refresh_interval: "1d" # 1 Day.
191
192 # The trusted servers to download signing keys from.
193 perspectives:
194 servers:
195 "matrix.org":
196 verify_keys:
197 "ed25519:auto":
198 key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"
199
200 password_config:
201 enabled: true
202
203 {% if SYNAPSE_SMTP_HOST %}
204 email:
205 enable_notifs: false
206 smtp_host: "{{ SYNAPSE_SMTP_HOST }}"
207 smtp_port: {{ SYNAPSE_SMTP_PORT or "25" }}
208 smtp_user: "{{ SYNAPSE_SMTP_USER }}"
209 smtp_pass: "{{ SYNAPSE_SMTP_PASSWORD }}"
210 require_transport_security: False
211 notif_from: "{{ SYNAPSE_SMTP_FROM or "hostmaster@" + SYNAPSE_SERVER_NAME }}"
212 app_name: Matrix
213 template_dir: res/templates
214 notif_template_html: notif_mail.html
215 notif_template_text: notif_mail.txt
216 notif_for_new_users: True
217 riot_base_url: "https://{{ SYNAPSE_SERVER_NAME }}"
218 {% endif %}
0 version: 1
1
2 formatters:
3 precise:
4 format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
5
6 filters:
7 context:
8 (): synapse.util.logcontext.LoggingContextFilter
9 request: ""
10
11 handlers:
12 console:
13 class: logging.StreamHandler
14 formatter: precise
15 filters: [context]
16
17 loggers:
18 synapse:
19 level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
20
21 synapse.storage.SQL:
22 # beware: increasing this to DEBUG will make synapse log sensitive
23 # information such as access tokens.
24 level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
25
26 root:
27 level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
28 handlers: [console]
0 #!/usr/local/bin/python
1
2 import jinja2
3 import os
4 import sys
5 import subprocess
6 import glob
7
8 # Utility functions
9 convert = lambda src, dst, environ: open(dst, "w").write(jinja2.Template(open(src).read()).render(**environ))
10
11 def check_arguments(environ, args):
12 for argument in args:
13 if argument not in environ:
14 print("Environment variable %s is mandatory, exiting." % argument)
15 sys.exit(2)
16
17 def generate_secrets(environ, secrets):
18 for name, secret in secrets.items():
19 if secret not in environ:
20 filename = "/data/%s.%s.key" % (environ["SYNAPSE_SERVER_NAME"], name)
21 if os.path.exists(filename):
22 with open(filename) as handle: value = handle.read()
23 else:
24 print("Generating a random secret for {}".format(name))
25 value = os.urandom(32).encode("hex")
26 with open(filename, "w") as handle: handle.write(value)
27 environ[secret] = value
28
29 # Prepare the configuration
30 mode = sys.argv[1] if len(sys.argv) > 1 else None
31 environ = os.environ.copy()
32 ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
33 args = ["python", "-m", "synapse.app.homeserver"]
34
35 # In generate mode, generate a configuration file and any missing keys, then exit
36 if mode == "generate":
37 check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS", "SYNAPSE_CONFIG_PATH"))
38 args += [
39 "--server-name", environ["SYNAPSE_SERVER_NAME"],
40 "--report-stats", environ["SYNAPSE_REPORT_STATS"],
41 "--config-path", environ["SYNAPSE_CONFIG_PATH"],
42 "--generate-config"
43 ]
44 os.execv("/usr/local/bin/python", args)
45
46 # In normal mode, generate missing keys if any, then run synapse
47 else:
48 # Parse the configuration file
49 if "SYNAPSE_CONFIG_PATH" in environ:
50 args += ["--config-path", environ["SYNAPSE_CONFIG_PATH"]]
51 else:
52 check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"))
53 generate_secrets(environ, {
54 "registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
55 "macaroon": "SYNAPSE_MACAROON_SECRET_KEY"
56 })
57 environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
58 if not os.path.exists("/compiled"): os.mkdir("/compiled")
59 convert("/conf/homeserver.yaml", "/compiled/homeserver.yaml", environ)
60 convert("/conf/log.config", "/compiled/log.config", environ)
61 subprocess.check_output(["chown", "-R", ownership, "/data"])
62 args += ["--config-path", "/compiled/homeserver.yaml"]
63 # Generate missing keys and start synapse
64 subprocess.check_output(args + ["--generate-keys"])
65 os.execv("/sbin/su-exec", ["su-exec", ownership] + args)
0 Shared-Secret Registration
1 ==========================
2
3 This API allows for the creation of users in an administrative and
4 non-interactive way. This is generally used for bootstrapping a Synapse
5 instance with administrator accounts.
6
7 To authenticate yourself to the server, you will need both the shared secret
8 (``registration_shared_secret`` in the homeserver configuration), and a
9 one-time nonce. If the registration shared secret is not configured, this API
10 is not enabled.
11
12 To fetch the nonce, you need to request one from the API::
13
14 > GET /_matrix/client/r0/admin/register
15
16 < {"nonce": "thisisanonce"}
17
18 Once you have the nonce, you can make a ``POST`` to the same URL with a JSON
19 body containing the nonce, username, password, whether they are an admin
20 (optional, ``False`` by default), and an HMAC digest of the content.
21
22 As an example::
23
24 > POST /_matrix/client/r0/admin/register
25 > {
26 "nonce": "thisisanonce",
27 "username": "pepper_roni",
28 "password": "pizza",
29 "admin": true,
30 "mac": "mac_digest_here"
31 }
32
33 < {
34 "access_token": "token_here",
35 "user_id": "@pepper_roni@test",
36 "home_server": "test",
37 "device_id": "device_id_here"
38 }
39
40 The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being
41 the shared secret and the content being the nonce, user, password, and either
42 the string "admin" or "notadmin", each separated by NULs. For an example of
43 generation in Python::
44
45 import hmac, hashlib
46
47 def generate_mac(nonce, user, password, admin=False):
48        # shared_secret is the registration_shared_secret from the homeserver config
49 mac = hmac.new(
50 key=shared_secret,
51 digestmod=hashlib.sha1,
52 )
53
54 mac.update(nonce.encode('utf8'))
55 mac.update(b"\x00")
56 mac.update(user.encode('utf8'))
57 mac.update(b"\x00")
58 mac.update(password.encode('utf8'))
59 mac.update(b"\x00")
60 mac.update(b"admin" if admin else b"notadmin")
61
62 return mac.hexdigest()
205205 following regular expressions::
206206
207207 ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
208 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
209 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
210 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
211 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
208212
209213 ``synapse.app.user_dir``
210214 ~~~~~~~~~~~~~~~~~~~~~~~~
00 [tool.towncrier]
11 package = "synapse"
2 filename = "CHANGES.rst"
2 filename = "CHANGES.md"
33 directory = "changelog.d"
4 issue_format = "`#{issue} <https://github.com/matrix-org/synapse/issues/{issue}>`_"
4 issue_format = "[\\#{issue}](https://github.com/matrix-org/synapse/issues/{issue})"
5
6 [[tool.towncrier.type]]
7 directory = "feature"
8 name = "Features"
9 showcontent = true
10
11 [[tool.towncrier.type]]
12 directory = "bugfix"
13 name = "Bugfixes"
14 showcontent = true
15
16 [[tool.towncrier.type]]
17 directory = "doc"
18 name = "Improved Documentation"
19 showcontent = true
20
21 [[tool.towncrier.type]]
22 directory = "removal"
23 name = "Deprecations and Removals"
24 showcontent = true
25
26 [[tool.towncrier.type]]
27 directory = "misc"
28 name = "Internal Changes"
29 showcontent = true
2525
2626
2727 def request_registration(user, password, server_location, shared_secret, admin=False):
28 req = urllib2.Request(
29 "%s/_matrix/client/r0/admin/register" % (server_location,),
30 headers={'Content-Type': 'application/json'}
31 )
32
33 try:
34 if sys.version_info[:3] >= (2, 7, 9):
35 # As of version 2.7.9, urllib2 now checks SSL certs
36 import ssl
37 f = urllib2.urlopen(req, context=ssl.SSLContext(ssl.PROTOCOL_SSLv23))
38 else:
39 f = urllib2.urlopen(req)
40 body = f.read()
41 f.close()
42 nonce = json.loads(body)["nonce"]
43 except urllib2.HTTPError as e:
44 print "ERROR! Received %d %s" % (e.code, e.reason,)
45 if 400 <= e.code < 500:
46 if e.info().type == "application/json":
47 resp = json.load(e)
48 if "error" in resp:
49 print resp["error"]
50 sys.exit(1)
51
2852 mac = hmac.new(
2953 key=shared_secret,
3054 digestmod=hashlib.sha1,
3155 )
3256
57 mac.update(nonce)
58 mac.update("\x00")
3359 mac.update(user)
3460 mac.update("\x00")
3561 mac.update(password)
3965 mac = mac.hexdigest()
4066
4167 data = {
42 "user": user,
68 "nonce": nonce,
69 "username": user,
4370 "password": password,
4471 "mac": mac,
45 "type": "org.matrix.login.shared_secret",
4672 "admin": admin,
4773 }
4874
5177 print "Sending registration request..."
5278
5379 req = urllib2.Request(
54 "%s/_matrix/client/api/v1/register" % (server_location,),
80 "%s/_matrix/client/r0/admin/register" % (server_location,),
5581 data=json.dumps(data),
5682 headers={'Content-Type': 'application/json'}
5783 )
1313 pylint.cfg
1414 tox.ini
1515
16 [pep8]
17 max-line-length = 90
18 # W503 requires that binary operators be at the end, not start, of lines. Erik
19 # doesn't like it. E203 is contrary to PEP8.
20 ignore = W503,E203
21
1622 [flake8]
23 # note that flake8 inherits the "ignore" settings from "pep8" (because it uses
24 # pep8 to do those checks), but not the "max-line-length" setting
1725 max-line-length = 90
18 # W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it.
19 # E203 is contrary to PEP8.
20 ignore = W503,E203
2126
2227 [isort]
2328 line_length = 89
3035 known_twisted=twisted,OpenSSL
3136 multi_line_output=3
3237 include_trailing_comma=true
38 combine_as_imports=true
1616 """ This is a reference implementation of a Matrix home server.
1717 """
1818
19 __version__ = "0.33.1"
19 __version__ = "0.33.2"
6464
6565 @defer.inlineCallbacks
6666 def check_from_context(self, event, context, do_sig_check=True):
67 prev_state_ids = yield context.get_prev_state_ids(self.store)
6768 auth_events_ids = yield self.compute_auth_events(
68 event, context.prev_state_ids, for_verification=True,
69 event, prev_state_ids, for_verification=True,
6970 )
7071 auth_events = yield self.store.get_events(auth_events_ids)
7172 auth_events = {
250251 if ip_address not in app_service.ip_range_whitelist:
251252 defer.returnValue((None, None))
252253
253 if "user_id" not in request.args:
254 if b"user_id" not in request.args:
254255 defer.returnValue((app_service.sender, app_service))
255256
256 user_id = request.args["user_id"][0]
257 user_id = request.args[b"user_id"][0].decode('utf8')
257258 if app_service.sender == user_id:
258259 defer.returnValue((app_service.sender, app_service))
259260
543544
544545 @defer.inlineCallbacks
545546 def add_auth_events(self, builder, context):
546 auth_ids = yield self.compute_auth_events(builder, context.prev_state_ids)
547 prev_state_ids = yield context.get_prev_state_ids(self.store)
548 auth_ids = yield self.compute_auth_events(builder, prev_state_ids)
547549
548550 auth_events_entries = yield self.store.add_event_hashes(
549551 auth_ids
736738 )
737739
738740 return query_params[0]
741
742 @defer.inlineCallbacks
743 def check_in_room_or_world_readable(self, room_id, user_id):
744 """Checks that the user is or was in the room or the room is world
745 readable. If it isn't then an exception is raised.
746
747 Returns:
748 Deferred[tuple[str, str|None]]: Resolves to the current membership of
749 the user in the room and the membership event ID of the user. If
750 the user is not in the room and never has been, then
751 `(Membership.JOIN, None)` is returned.
752 """
753
754 try:
755 # check_user_was_in_room will return the most recent membership
756 # event for the user if:
757 # * The user is a non-guest user, and was ever in the room
758 # * The user is a guest user, and has joined the room
759 # else it will throw.
760 member_event = yield self.check_user_was_in_room(room_id, user_id)
761 defer.returnValue((member_event.membership, member_event.event_id))
762 except AuthError:
763 visibility = yield self.state.get_current_state(
764 room_id, EventTypes.RoomHistoryVisibility, ""
765 )
766 if (
767 visibility and
768 visibility.content["history_visibility"] == "world_readable"
769 ):
770 defer.returnValue((Membership.JOIN, None))
771 return
772 raise AuthError(
773 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
774 )
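The membership-or-world-readable rule documented in the new helper above can be sketched synchronously; the helper name and inputs below are hypothetical stand-ins for what the real code resolves asynchronously from stored room state:

```python
# Illustrative sketch, not Synapse code: check_in_room_or_world_readable in
# the diff resolves these inputs from the database via Deferreds.
def check_in_room_or_world_readable(membership_event, history_visibility):
    """Return (membership, event_id); guests are allowed into rooms whose
    history_visibility is "world_readable", otherwise raise PermissionError."""
    if membership_event is not None:
        # The user is or was in the room: report their latest membership.
        return membership_event["membership"], membership_event["event_id"]
    if history_visibility == "world_readable":
        # Never joined, but the room's history is readable by anyone.
        return "join", None
    raise PermissionError("Guest access not allowed")
```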
5454 SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
5555 CONSENT_NOT_GIVEN = "M_CONSENT_NOT_GIVEN"
5656 CANNOT_LEAVE_SERVER_NOTICE_ROOM = "M_CANNOT_LEAVE_SERVER_NOTICE_ROOM"
57 MAU_LIMIT_EXCEEDED = "M_MAU_LIMIT_EXCEEDED"
5758
5859
5960 class CodeMessageException(RuntimeError):
6869 self.code = code
6970 self.msg = msg
7071
71 def error_dict(self):
72 return cs_error(self.msg)
73
74
75 class MatrixCodeMessageException(CodeMessageException):
76 """An error from a general matrix endpoint, eg. from a proxied Matrix API call.
77
78 Attributes:
79 errcode (str): Matrix error code e.g 'M_FORBIDDEN'
80 """
81 def __init__(self, code, msg, errcode=Codes.UNKNOWN):
82 super(MatrixCodeMessageException, self).__init__(code, msg)
83 self.errcode = errcode
84
8572
8673 class SynapseError(CodeMessageException):
8774 """A base exception type for matrix errors which have an errcode and error
10794 self.errcode,
10895 )
10996
110 @classmethod
111 def from_http_response_exception(cls, err):
112 """Make a SynapseError based on an HTTPResponseException
113
114 This is useful when a proxied request has failed, and we need to
115 decide how to map the failure onto a matrix error to send back to the
116 client.
117
118 An attempt is made to parse the body of the http response as a matrix
119 error. If that succeeds, the errcode and error message from the body
120 are used as the errcode and error message in the new synapse error.
121
122 Otherwise, the errcode is set to M_UNKNOWN, and the error message is
123 set to the reason code from the HTTP response.
124
125 Args:
126 err (HttpResponseException):
127
128 Returns:
129 SynapseError:
130 """
131 # try to parse the body as json, to get better errcode/msg, but
132 # default to M_UNKNOWN with the HTTP status as the error text
133 try:
134 j = json.loads(err.response)
135 except ValueError:
136 j = {}
137 errcode = j.get('errcode', Codes.UNKNOWN)
138 errmsg = j.get('error', err.msg)
139
140 res = SynapseError(err.code, errmsg, errcode)
141 return res
97
98 class ProxiedRequestError(SynapseError):
99 """An error from a general matrix endpoint, eg. from a proxied Matrix API call.
100
101 Attributes:
 102 errcode (str): Matrix error code e.g. 'M_FORBIDDEN'
103 """
104 def __init__(self, code, msg, errcode=Codes.UNKNOWN, additional_fields=None):
105 super(ProxiedRequestError, self).__init__(
106 code, msg, errcode
107 )
108 if additional_fields is None:
109 self._additional_fields = {}
110 else:
111 self._additional_fields = dict(additional_fields)
112
113 def error_dict(self):
114 return cs_error(
115 self.msg,
116 self.errcode,
117 **self._additional_fields
118 )
142119
143120
144121 class ConsentNotGivenError(SynapseError):
307284 )
308285
309286
310 def cs_exception(exception):
311 if isinstance(exception, CodeMessageException):
312 return exception.error_dict()
313 else:
314 logger.error("Unknown exception type: %s", type(exception))
315 return {}
316
317
318287 def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
319288 """ Utility method for constructing an error response for client-server
320289 interactions.
371340 Represents an HTTP-level failure of an outbound request
372341
373342 Attributes:
374 response (str): body of response
343 response (bytes): body of response
375344 """
376345 def __init__(self, code, msg, response):
377346 """
379348 Args:
380349 code (int): HTTP status code
381350 msg (str): reason phrase from HTTP response status line
382 response (str): body of response
351 response (bytes): body of response
383352 """
384353 super(HttpResponseException, self).__init__(code, msg)
385354 self.response = response
355
356 def to_synapse_error(self):
357 """Make a SynapseError based on an HTTPResponseException
358
359 This is useful when a proxied request has failed, and we need to
360 decide how to map the failure onto a matrix error to send back to the
361 client.
362
363 An attempt is made to parse the body of the http response as a matrix
364 error. If that succeeds, the errcode and error message from the body
365 are used as the errcode and error message in the new synapse error.
366
367 Otherwise, the errcode is set to M_UNKNOWN, and the error message is
368 set to the reason code from the HTTP response.
369
370 Returns:
371 SynapseError:
372 """
373 # try to parse the body as json, to get better errcode/msg, but
374 # default to M_UNKNOWN with the HTTP status as the error text
375 try:
376 j = json.loads(self.response)
377 except ValueError:
378 j = {}
379
380 if not isinstance(j, dict):
381 j = {}
382
383 errcode = j.pop('errcode', Codes.UNKNOWN)
384 errmsg = j.pop('error', self.msg)
385
386 return ProxiedRequestError(self.code, errmsg, errcode, j)
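The body-parsing fallback that `to_synapse_error` performs above can be sketched as a standalone function (the helper name is hypothetical, not part of Synapse):

```python
import json

# Standalone sketch of the fallback: try the response body as a Matrix error
# JSON object, else default to M_UNKNOWN and the HTTP reason phrase.
def parse_matrix_error(body, http_reason):
    try:
        j = json.loads(body)
    except ValueError:
        j = {}
    if not isinstance(j, dict):
        j = {}
    errcode = j.pop('errcode', 'M_UNKNOWN')
    errmsg = j.pop('error', http_reason)
    return errcode, errmsg, j  # j now holds any additional fields
```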
112112 },
113113 "contains_url": {
114114 "type": "boolean"
115 }
115 },
116 "lazy_load_members": {
117 "type": "boolean"
118 },
119 "include_redundant_members": {
120 "type": "boolean"
121 },
116122 }
117123 }
118124
259265
260266 def ephemeral_limit(self):
261267 return self._room_ephemeral_filter.limit()
268
269 def lazy_load_members(self):
270 return self._room_state_filter.lazy_load_members()
271
272 def include_redundant_members(self):
273 return self._room_state_filter.include_redundant_members()
262274
263275 def filter_presence(self, events):
264276 return self._presence_filter.filter(events)
416428 def limit(self):
417429 return self.filter_json.get("limit", 10)
418430
431 def lazy_load_members(self):
432 return self.filter_json.get("lazy_load_members", False)
433
434 def include_redundant_members(self):
435 return self.filter_json.get("include_redundant_members", False)
436
419437
420438 def _matches_wildcard(actual_value, filter_value):
421439 if filter_value.endswith("*"):
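A hypothetical client-side sync filter exercising the two new MSC1227 keys accepted by the schema additions above might look like this (illustrative payload, not taken from the diff):

```python
import json

# Example sync filter setting the two new keys; both default to False
# server-side when omitted.
sync_filter = {
    "room": {
        "state": {
            "lazy_load_members": True,
            "include_redundant_members": False,
        },
    },
}

encoded = json.dumps(sync_filter)
```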
3030 from synapse.metrics import RegistryProxy
3131 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
3232 from synapse.replication.slave.storage._base import BaseSlavedStore
33 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
3334 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
3435 from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
3536 from synapse.replication.slave.storage.directory import DirectoryStore
3940 from synapse.replication.slave.storage.room import RoomStore
4041 from synapse.replication.slave.storage.transactions import TransactionStore
4142 from synapse.replication.tcp.client import ReplicationClientHandler
42 from synapse.rest.client.v1.room import PublicRoomListRestServlet
43 from synapse.rest.client.v1.room import (
44 JoinedRoomMemberListRestServlet,
45 PublicRoomListRestServlet,
46 RoomEventContextServlet,
47 RoomMemberListRestServlet,
48 RoomStateRestServlet,
49 )
4350 from synapse.server import HomeServer
4451 from synapse.storage.engines import create_engine
4552 from synapse.util.httpresourcetree import create_resource_tree
5158
5259
5360 class ClientReaderSlavedStore(
61 SlavedAccountDataStore,
5462 SlavedEventStore,
5563 SlavedKeyStore,
5664 RoomStore,
8189 resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
8290 elif name == "client":
8391 resource = JsonResource(self, canonical_json=False)
92
8493 PublicRoomListRestServlet(self).register(resource)
94 RoomMemberListRestServlet(self).register(resource)
95 JoinedRoomMemberListRestServlet(self).register(resource)
96 RoomStateRestServlet(self).register(resource)
97 RoomEventContextServlet(self).register(resource)
98
8599 resources.update({
86100 "/_matrix/client/r0": resource,
87101 "/_matrix/client/unstable": resource,
1616 import logging
1717 import os
1818 import sys
19
20 from six import iteritems
21
22 from prometheus_client import Gauge
1923
2024 from twisted.application import service
2125 from twisted.internet import defer, reactor
4650 from synapse.http.server import RootRedirect
4751 from synapse.http.site import SynapseSite
4852 from synapse.metrics import RegistryProxy
53 from synapse.metrics.background_process_metrics import run_as_background_process
4954 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
5055 from synapse.module_api import ModuleApi
5156 from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, check_requirements
296301 quit_with_error(e.message)
297302
298303
304 # Gauges to expose monthly active user control metrics
305 current_mau_gauge = Gauge("synapse_admin_current_mau", "Current MAU")
306 max_mau_value_gauge = Gauge("synapse_admin_max_mau_value", "MAU Limit")
307
308
299309 def setup(config_options):
300310 """
301311 Args:
424434 # currently either 0 or 1
425435 stats_process = []
426436
437 def start_phone_stats_home():
438 return run_as_background_process("phone_stats_home", phone_stats_home)
439
427440 @defer.inlineCallbacks
428441 def phone_stats_home():
429442 logger.info("Gathering stats for reporting")
441454 stats["total_nonbridged_users"] = total_nonbridged_users
442455
443456 daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
444 for name, count in daily_user_type_results.iteritems():
457 for name, count in iteritems(daily_user_type_results):
445458 stats["daily_user_type_" + name] = count
446459
447460 room_count = yield hs.get_datastore().get_room_count()
452465 stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
453466
454467 r30_results = yield hs.get_datastore().count_r30_users()
455 for name, count in r30_results.iteritems():
468 for name, count in iteritems(r30_results):
456469 stats["r30_users_" + name] = count
457470
458471 daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
495508 )
496509
497510 def generate_user_daily_visit_stats():
498 hs.get_datastore().generate_user_daily_visits()
511 return run_as_background_process(
512 "generate_user_daily_visits",
513 hs.get_datastore().generate_user_daily_visits,
514 )
499515
500516 # Rather than update on per session basis, batch up the requests.
501517 # If you increase the loop period, the accuracy of user_daily_visits
502518 # table will decrease
503519 clock.looping_call(generate_user_daily_visit_stats, 5 * 60 * 1000)
504520
521 @defer.inlineCallbacks
522 def generate_monthly_active_users():
523 count = 0
524 if hs.config.limit_usage_by_mau:
525 count = yield hs.get_datastore().count_monthly_users()
526 current_mau_gauge.set(float(count))
527 max_mau_value_gauge.set(float(hs.config.max_mau_value))
528
529 generate_monthly_active_users()
530 if hs.config.limit_usage_by_mau:
531 clock.looping_call(generate_monthly_active_users, 5 * 60 * 1000)
532
505533 if hs.config.report_stats:
506534 logger.info("Scheduling stats reporting for 3 hour intervals")
507 clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)
535 clock.looping_call(start_phone_stats_home, 3 * 60 * 60 * 1000)
508536
509537 # We need to defer this init for the cases that we daemonize
510538 # otherwise the process ID we get is that of the non-daemon process
512540
513541 # We wait 5 minutes to send the first set of stats as the server can
514542 # be quite busy the first few minutes
515 clock.call_later(5 * 60, phone_stats_home)
543 clock.call_later(5 * 60, start_phone_stats_home)
516544
517545 if hs.config.daemonize and hs.config.print_pidfile:
518546 print (hs.config.pid_file)
5454 from synapse.server import HomeServer
5555 from synapse.storage.engines import create_engine
5656 from synapse.storage.presence import UserPresenceState
57 from synapse.storage.roommember import RoomMemberStore
5857 from synapse.util.httpresourcetree import create_resource_tree
5958 from synapse.util.logcontext import LoggingContext, run_in_background
6059 from synapse.util.manhole import manhole
8079 RoomStore,
8180 BaseSlavedStore,
8281 ):
83 did_forget = (
84 RoomMemberStore.__dict__["did_forget"]
85 )
82 pass
8683
8784
8885 UPDATE_SYNCING_USERS_MS = 10 * 1000
2424 import sys
2525 import time
2626
27 from six import iteritems
28
2729 import yaml
2830
2931 SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
172174 os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
173175
174176 cache_factors = config.get("synctl_cache_factors", {})
175 for cache_name, factor in cache_factors.iteritems():
177 for cache_name, factor in iteritems(cache_factors):
176178 os.environ["SYNAPSE_CACHE_FACTOR_" + cache_name.upper()] = str(factor)
177179
178180 worker_configfiles = []
6666 "block_non_admin_invites", False,
6767 )
6868
69 # Options to control access by tracking MAU
70 self.limit_usage_by_mau = config.get("limit_usage_by_mau", False)
71 if self.limit_usage_by_mau:
72 self.max_mau_value = config.get(
73 "max_mau_value", 0,
74 )
75 else:
76 self.max_mau_value = 0
6977 # FIXME: federation_domain_whitelist needs sytests
7078 self.federation_domain_whitelist = None
7179 federation_domain_whitelist = config.get(
208216 # different cores. See
209217 # https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
210218 #
219 # This setting requires the affinity package to be installed!
220 #
211221 # cpu_affinity: 0xFFFFFFFF
212222
213223 # Whether to serve a web client from the HTTP/HTTPS root resource.
2929 ## Turn ##
3030
3131 # The public URIs of the TURN server to give to clients
32 turn_uris: []
32 #turn_uris: []
3333
3434 # The shared secret used to compute passwords for the TURN server
35 turn_shared_secret: "YOUR_SHARED_SECRET"
35 #turn_shared_secret: "YOUR_SHARED_SECRET"
3636
3737 # The Username and password if the TURN server needs them and
3838 # does not use a token
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 from six import iteritems
16
1517 from frozendict import frozendict
1618
1719 from twisted.internet import defer
1820
21 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
22
1923
2024 class EventContext(object):
2125 """
2226 Attributes:
23 current_state_ids (dict[(str, str), str]):
24 The current state map including the current event.
25 (type, state_key) -> event_id
26
27 prev_state_ids (dict[(str, str), str]):
28 The current state map excluding the current event.
29 (type, state_key) -> event_id
30
3127 state_group (int|None): state group id, if the state has been stored
3228 as a state group. This is usually only None if e.g. the event is
3329 an outlier.
4440
4541 prev_state_events (?): XXX: is this ever set to anything other than
4642 the empty list?
43
44 _current_state_ids (dict[(str, str), str]|None):
45 The current state map including the current event. None if outlier
46 or we haven't fetched the state from DB yet.
47 (type, state_key) -> event_id
48
49 _prev_state_ids (dict[(str, str), str]|None):
50 The current state map excluding the current event. None if outlier
51 or we haven't fetched the state from DB yet.
52 (type, state_key) -> event_id
53
54 _fetching_state_deferred (Deferred|None): Resolves when *_state_ids have
55 been calculated. None if we haven't started calculating yet
56
57 _event_type (str): The type of the event the context is associated with.
58 Only set when state has not been fetched yet.
59
60 _event_state_key (str|None): The state_key of the event the context is
61 associated with. Only set when state has not been fetched yet.
62
63 _prev_state_id (str|None): If the event associated with the context is
64 a state event, then `_prev_state_id` is the event_id of the state
65 that was replaced.
66 Only set when state has not been fetched yet.
4767 """
4868
4969 __slots__ = [
50 "current_state_ids",
51 "prev_state_ids",
5270 "state_group",
5371 "rejected",
5472 "prev_group",
5573 "delta_ids",
5674 "prev_state_events",
5775 "app_service",
76 "_current_state_ids",
77 "_prev_state_ids",
78 "_prev_state_id",
79 "_event_type",
80 "_event_state_key",
81 "_fetching_state_deferred",
5882 ]
5983
6084 def __init__(self):
85 self.prev_state_events = []
86 self.rejected = False
87 self.app_service = None
88
89 @staticmethod
90 def with_state(state_group, current_state_ids, prev_state_ids,
91 prev_group=None, delta_ids=None):
92 context = EventContext()
93
6194 # The current state including the current event
62 self.current_state_ids = None
95 context._current_state_ids = current_state_ids
6396 # The current state excluding the current event
64 self.prev_state_ids = None
65 self.state_group = None
66
67 self.rejected = False
97 context._prev_state_ids = prev_state_ids
98 context.state_group = state_group
99
100 context._prev_state_id = None
101 context._event_type = None
102 context._event_state_key = None
103 context._fetching_state_deferred = defer.succeed(None)
68104
69105 # A previously persisted state group and a delta between that
70106 # and this state.
71 self.prev_group = None
72 self.delta_ids = None
73
74 self.prev_state_events = None
75
76 self.app_service = None
77
78 def serialize(self, event):
107 context.prev_group = prev_group
108 context.delta_ids = delta_ids
109
110 return context
111
112 @defer.inlineCallbacks
113 def serialize(self, event, store):
79114 """Converts self to a type that can be serialized as JSON, and then
80115 deserialized by `deserialize`
81116
91126 # the prev_state_ids, so if we're a state event we include the event
92127 # id that we replaced in the state.
93128 if event.is_state():
94 prev_state_id = self.prev_state_ids.get((event.type, event.state_key))
129 prev_state_ids = yield self.get_prev_state_ids(store)
130 prev_state_id = prev_state_ids.get((event.type, event.state_key))
95131 else:
96132 prev_state_id = None
97133
98 return {
134 defer.returnValue({
99135 "prev_state_id": prev_state_id,
100136 "event_type": event.type,
101137 "event_state_key": event.state_key if event.is_state() else None,
105141 "delta_ids": _encode_state_dict(self.delta_ids),
106142 "prev_state_events": self.prev_state_events,
107143 "app_service_id": self.app_service.id if self.app_service else None
108 }
144 })
109145
110146 @staticmethod
111 @defer.inlineCallbacks
112147 def deserialize(store, input):
113148 """Converts a dict that was produced by `serialize` back into a
114149 EventContext.
121156 EventContext
122157 """
123158 context = EventContext()
159
160 # We use the state_group and prev_state_id stuff to pull the
161 # current_state_ids out of the DB and construct prev_state_ids.
162 context._prev_state_id = input["prev_state_id"]
163 context._event_type = input["event_type"]
164 context._event_state_key = input["event_state_key"]
165
166 context._current_state_ids = None
167 context._prev_state_ids = None
168 context._fetching_state_deferred = None
169
124170 context.state_group = input["state_group"]
125 context.rejected = input["rejected"]
126171 context.prev_group = input["prev_group"]
127172 context.delta_ids = _decode_state_dict(input["delta_ids"])
173
174 context.rejected = input["rejected"]
128175 context.prev_state_events = input["prev_state_events"]
129
130 # We use the state_group and prev_state_id stuff to pull the
131 # current_state_ids out of the DB and construct prev_state_ids.
132 prev_state_id = input["prev_state_id"]
133 event_type = input["event_type"]
134 event_state_key = input["event_state_key"]
135
136 context.current_state_ids = yield store.get_state_ids_for_group(
137 context.state_group,
138 )
139 if prev_state_id and event_state_key:
140 context.prev_state_ids = dict(context.current_state_ids)
141 context.prev_state_ids[(event_type, event_state_key)] = prev_state_id
142 else:
143 context.prev_state_ids = context.current_state_ids
144176
145177 app_service_id = input["app_service_id"]
146178 if app_service_id:
147179 context.app_service = store.get_app_service_by_id(app_service_id)
148180
149 defer.returnValue(context)
181 return context
182
183 @defer.inlineCallbacks
184 def get_current_state_ids(self, store):
185 """Gets the current state IDs
186
187 Returns:
188 Deferred[dict[(str, str), str]|None]: Returns None if state_group
189 is None, which happens when the associated event is an outlier.
190 """
191
192 if not self._fetching_state_deferred:
193 self._fetching_state_deferred = run_in_background(
194 self._fill_out_state, store,
195 )
196
197 yield make_deferred_yieldable(self._fetching_state_deferred)
198
199 defer.returnValue(self._current_state_ids)
200
201 @defer.inlineCallbacks
202 def get_prev_state_ids(self, store):
203 """Gets the prev state IDs
204
205 Returns:
206 Deferred[dict[(str, str), str]|None]: Returns None if state_group
207 is None, which happens when the associated event is an outlier.
208 """
209
210 if not self._fetching_state_deferred:
211 self._fetching_state_deferred = run_in_background(
212 self._fill_out_state, store,
213 )
214
215 yield make_deferred_yieldable(self._fetching_state_deferred)
216
217 defer.returnValue(self._prev_state_ids)
218
219 def get_cached_current_state_ids(self):
220 """Gets the current state IDs if we have them already cached.
221
222 Returns:
223 dict[(str, str), str]|None: Returns None if we haven't cached the
224 state or if state_group is None, which happens when the associated
225 event is an outlier.
226 """
227
228 return self._current_state_ids
229
230 @defer.inlineCallbacks
231 def _fill_out_state(self, store):
232 """Called to populate the _current_state_ids and _prev_state_ids
233 attributes by loading from the database.
234 """
235 if self.state_group is None:
236 return
237
238 self._current_state_ids = yield store.get_state_ids_for_group(
239 self.state_group,
240 )
241 if self._prev_state_id and self._event_state_key is not None:
242 self._prev_state_ids = dict(self._current_state_ids)
243
244 key = (self._event_type, self._event_state_key)
245 self._prev_state_ids[key] = self._prev_state_id
246 else:
247 self._prev_state_ids = self._current_state_ids
248
249 @defer.inlineCallbacks
250 def update_state(self, state_group, prev_state_ids, current_state_ids,
251 prev_group, delta_ids):
252 """Replace the state in the context
253 """
254
255 # We need to make sure we wait for any ongoing fetching of state
256 # to complete so that the updated state doesn't get clobbered
257 if self._fetching_state_deferred:
258 yield make_deferred_yieldable(self._fetching_state_deferred)
259
260 self.state_group = state_group
261 self._prev_state_ids = prev_state_ids
262 self.prev_group = prev_group
263 self._current_state_ids = current_state_ids
264 self.delta_ids = delta_ids
265
 266 # We need to ensure that we've marked the state as having been fetched
267 self._fetching_state_deferred = defer.succeed(None)
150268
151269
152270 def _encode_state_dict(state_dict):
158276
159277 return [
160278 (etype, state_key, v)
161 for (etype, state_key), v in state_dict.iteritems()
279 for (etype, state_key), v in iteritems(state_dict)
162280 ]
163281
164282
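The new lazy `get_current_state_ids`/`get_prev_state_ids` accessors share a single in-flight fetch via `_fetching_state_deferred`. That memoised-fetch pattern can be sketched as runnable code using asyncio in place of Twisted (class and names are stand-ins):

```python
import asyncio

# The first caller starts the load; concurrent callers await the same task
# instead of issuing duplicate loads.
class LazyState:
    def __init__(self, loader):
        self._loader = loader    # coroutine function that loads the state map
        self._fetching = None    # shared in-flight task, once started
        self._state = None

    async def get_state(self):
        if self._fetching is None:
            self._fetching = asyncio.ensure_future(self._fill_out_state())
        await self._fetching
        return self._state

    async def _fill_out_state(self):
        self._state = await self._loader()

async def main():
    calls = []

    async def load():
        calls.append(1)
        return {("m.room.member", "@u:hs"): "$event"}

    ls = LazyState(load)
    a, b = await asyncio.gather(ls.get_state(), ls.get_state())
    assert a == b and len(calls) == 1  # loader ran exactly once
    return a

result = asyncio.run(main())
```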
4747 PDU_RETRY_TIME_MS = 1 * 60 * 1000
4848
4949
50 class InvalidResponseError(RuntimeError):
51 """Helper for _try_destination_list: indicates that the server returned a response
52 we couldn't parse
53 """
54 pass
55
56
5057 class FederationClient(FederationBase):
5158 def __init__(self, hs):
5259 super(FederationClient, self).__init__(hs)
457464 defer.returnValue(signed_auth)
458465
459466 @defer.inlineCallbacks
467 def _try_destination_list(self, description, destinations, callback):
468 """Try an operation on a series of servers, until it succeeds
469
470 Args:
471 description (unicode): description of the operation we're doing, for logging
472
473 destinations (Iterable[unicode]): list of server_names to try
474
475 callback (callable): Function to run for each server. Passed a single
476 argument: the server_name to try. May return a deferred.
477
478 If the callback raises a CodeMessageException with a 300/400 code,
479 attempts to perform the operation stop immediately and the exception is
480 reraised.
481
482 Otherwise, if the callback raises an Exception the error is logged and the
483 next server tried. Normally the stacktrace is logged but this is
484 suppressed if the exception is an InvalidResponseError.
485
486 Returns:
487 The [Deferred] result of callback, if it succeeds
488
489 Raises:
490 SynapseError if the chosen remote server returns a 300/400 code.
491
492 RuntimeError if no servers were reachable.
493 """
494 for destination in destinations:
495 if destination == self.server_name:
496 continue
497
498 try:
499 res = yield callback(destination)
500 defer.returnValue(res)
501 except InvalidResponseError as e:
502 logger.warn(
503 "Failed to %s via %s: %s",
504 description, destination, e,
505 )
506 except HttpResponseException as e:
507 if not 500 <= e.code < 600:
508 raise e.to_synapse_error()
509 else:
510 logger.warn(
511 "Failed to %s via %s: %i %s",
512 description, destination, e.code, e.message,
513 )
514 except Exception:
515 logger.warn(
516 "Failed to %s via %s",
517 description, destination, exc_info=1,
518 )
519
 520 raise RuntimeError("Failed to %s via any server" % (description,))
521
460522 def make_membership_event(self, destinations, room_id, user_id, membership,
461523 content={},):
462524 """
480542 Deferred: resolves to a tuple of (origin (str), event (object))
481543 where origin is the remote homeserver which generated the event.
482544
483 Fails with a ``CodeMessageException`` if the chosen remote server
545 Fails with a ``SynapseError`` if the chosen remote server
484546 returns a 300/400 code.
485547
486548 Fails with a ``RuntimeError`` if no servers were reachable.
491553 "make_membership_event called with membership='%s', must be one of %s" %
492554 (membership, ",".join(valid_memberships))
493555 )
494 for destination in destinations:
495 if destination == self.server_name:
496 continue
497
498 try:
499 ret = yield self.transport_layer.make_membership_event(
500 destination, room_id, user_id, membership
501 )
502
503 pdu_dict = ret["event"]
504
505 logger.debug("Got response to make_%s: %s", membership, pdu_dict)
506
507 pdu_dict["content"].update(content)
508
509 # The protoevent received over the JSON wire may not have all
510 # the required fields. Lets just gloss over that because
511 # there's some we never care about
512 if "prev_state" not in pdu_dict:
513 pdu_dict["prev_state"] = []
514
515 ev = builder.EventBuilder(pdu_dict)
516
517 defer.returnValue(
518 (destination, ev)
519 )
520 break
521 except CodeMessageException as e:
522 if not 500 <= e.code < 600:
523 raise
524 else:
525 logger.warn(
526 "Failed to make_%s via %s: %s",
527 membership, destination, e.message
528 )
529 except Exception as e:
530 logger.warn(
531 "Failed to make_%s via %s: %s",
532 membership, destination, e.message
533 )
534
535 raise RuntimeError("Failed to send to any server.")
536
537 @defer.inlineCallbacks
556
557 @defer.inlineCallbacks
558 def send_request(destination):
559 ret = yield self.transport_layer.make_membership_event(
560 destination, room_id, user_id, membership
561 )
562
563 pdu_dict = ret["event"]
564
565 logger.debug("Got response to make_%s: %s", membership, pdu_dict)
566
567 pdu_dict["content"].update(content)
568
569 # The protoevent received over the JSON wire may not have all
 570 # the required fields. Let's just gloss over that because
 571 # there are some we never care about
572 if "prev_state" not in pdu_dict:
573 pdu_dict["prev_state"] = []
574
575 ev = builder.EventBuilder(pdu_dict)
576
577 defer.returnValue(
578 (destination, ev)
579 )
580
581 return self._try_destination_list(
582 "make_" + membership, destinations, send_request,
583 )
584
538585 def send_join(self, destinations, pdu):
539586 """Sends a join event to one of a list of homeservers.
540587
 551598 giving the server the event was sent to, ``state`` (?) and
552599 ``auth_chain``.
553600
554 Fails with a ``CodeMessageException`` if the chosen remote server
601 Fails with a ``SynapseError`` if the chosen remote server
555602 returns a 300/400 code.
556603
557604 Fails with a ``RuntimeError`` if no servers were reachable.
558605 """
559606
560 for destination in destinations:
561 if destination == self.server_name:
562 continue
563
564 try:
565 time_now = self._clock.time_msec()
566 _, content = yield self.transport_layer.send_join(
567 destination=destination,
568 room_id=pdu.room_id,
569 event_id=pdu.event_id,
570 content=pdu.get_pdu_json(time_now),
571 )
572
573 logger.debug("Got content: %s", content)
574
575 state = [
576 event_from_pdu_json(p, outlier=True)
577 for p in content.get("state", [])
578 ]
579
580 auth_chain = [
581 event_from_pdu_json(p, outlier=True)
582 for p in content.get("auth_chain", [])
583 ]
584
585 pdus = {
586 p.event_id: p
587 for p in itertools.chain(state, auth_chain)
588 }
589
590 valid_pdus = yield self._check_sigs_and_hash_and_fetch(
591 destination, list(pdus.values()),
592 outlier=True,
593 )
594
595 valid_pdus_map = {
596 p.event_id: p
597 for p in valid_pdus
598 }
599
600 # NB: We *need* to copy to ensure that we don't have multiple
601 # references being passed on, as that causes... issues.
602 signed_state = [
603 copy.copy(valid_pdus_map[p.event_id])
604 for p in state
605 if p.event_id in valid_pdus_map
606 ]
607
608 signed_auth = [
609 valid_pdus_map[p.event_id]
610 for p in auth_chain
611 if p.event_id in valid_pdus_map
612 ]
613
614 # NB: We *need* to copy to ensure that we don't have multiple
615 # references being passed on, as that causes... issues.
616 for s in signed_state:
617 s.internal_metadata = copy.deepcopy(s.internal_metadata)
618
619 auth_chain.sort(key=lambda e: e.depth)
620
621 defer.returnValue({
622 "state": signed_state,
623 "auth_chain": signed_auth,
624 "origin": destination,
625 })
626 except CodeMessageException as e:
627 if not 500 <= e.code < 600:
628 raise
629 else:
630 logger.exception(
631 "Failed to send_join via %s: %s",
632 destination, e.message
633 )
634 except Exception as e:
635 logger.exception(
636 "Failed to send_join via %s: %s",
637 destination, e.message
638 )
639
640 raise RuntimeError("Failed to send to any server.")
607 @defer.inlineCallbacks
608 def send_request(destination):
609 time_now = self._clock.time_msec()
610 _, content = yield self.transport_layer.send_join(
611 destination=destination,
612 room_id=pdu.room_id,
613 event_id=pdu.event_id,
614 content=pdu.get_pdu_json(time_now),
615 )
616
617 logger.debug("Got content: %s", content)
618
619 state = [
620 event_from_pdu_json(p, outlier=True)
621 for p in content.get("state", [])
622 ]
623
624 auth_chain = [
625 event_from_pdu_json(p, outlier=True)
626 for p in content.get("auth_chain", [])
627 ]
628
629 pdus = {
630 p.event_id: p
631 for p in itertools.chain(state, auth_chain)
632 }
633
634 valid_pdus = yield self._check_sigs_and_hash_and_fetch(
635 destination, list(pdus.values()),
636 outlier=True,
637 )
638
639 valid_pdus_map = {
640 p.event_id: p
641 for p in valid_pdus
642 }
643
644 # NB: We *need* to copy to ensure that we don't have multiple
645 # references being passed on, as that causes... issues.
646 signed_state = [
647 copy.copy(valid_pdus_map[p.event_id])
648 for p in state
649 if p.event_id in valid_pdus_map
650 ]
651
652 signed_auth = [
653 valid_pdus_map[p.event_id]
654 for p in auth_chain
655 if p.event_id in valid_pdus_map
656 ]
657
658 # NB: We *need* to copy to ensure that we don't have multiple
659 # references being passed on, as that causes... issues.
660 for s in signed_state:
661 s.internal_metadata = copy.deepcopy(s.internal_metadata)
662
663 auth_chain.sort(key=lambda e: e.depth)
664
665 defer.returnValue({
666 "state": signed_state,
667 "auth_chain": signed_auth,
668 "origin": destination,
669 })
670 return self._try_destination_list("send_join", destinations, send_request)
641671
642672 @defer.inlineCallbacks
643673 def send_invite(self, destination, room_id, event_id, pdu):
644674 time_now = self._clock.time_msec()
645 code, content = yield self.transport_layer.send_invite(
646 destination=destination,
647 room_id=room_id,
648 event_id=event_id,
649 content=pdu.get_pdu_json(time_now),
650 )
675 try:
676 code, content = yield self.transport_layer.send_invite(
677 destination=destination,
678 room_id=room_id,
679 event_id=event_id,
680 content=pdu.get_pdu_json(time_now),
681 )
682 except HttpResponseException as e:
683 if e.code == 403:
684 raise e.to_synapse_error()
685 raise
651686
652687 pdu_dict = content["event"]
653688
662697
663698 defer.returnValue(pdu)
664699
665 @defer.inlineCallbacks
666700 def send_leave(self, destinations, pdu):
667701 """Sends a leave event to one of a list of homeservers.
668702
679713 Return:
680714 Deferred: resolves to None.
681715
682 Fails with a ``CodeMessageException`` if the chosen remote server
683 returns a non-200 code.
716 Fails with a ``SynapseError`` if the chosen remote server
717 returns a 300/400 code.
684718
685719 Fails with a ``RuntimeError`` if no servers were reachable.
686720 """
687 for destination in destinations:
688 if destination == self.server_name:
689 continue
690
691 try:
692 time_now = self._clock.time_msec()
693 _, content = yield self.transport_layer.send_leave(
694 destination=destination,
695 room_id=pdu.room_id,
696 event_id=pdu.event_id,
697 content=pdu.get_pdu_json(time_now),
698 )
699
700 logger.debug("Got content: %s", content)
701 defer.returnValue(None)
702 except CodeMessageException:
703 raise
704 except Exception as e:
705 logger.exception(
706 "Failed to send_leave via %s: %s",
707 destination, e.message
708 )
709
710 raise RuntimeError("Failed to send to any server.")
721 @defer.inlineCallbacks
722 def send_request(destination):
723 time_now = self._clock.time_msec()
724 _, content = yield self.transport_layer.send_leave(
725 destination=destination,
726 room_id=pdu.room_id,
727 event_id=pdu.event_id,
728 content=pdu.get_pdu_json(time_now),
729 )
730
731 logger.debug("Got content: %s", content)
732 defer.returnValue(None)
733
734 return self._try_destination_list("send_leave", destinations, send_request)
711735
712736 def get_public_rooms(self, destination, limit=None, since_token=None,
713737 search_filter=None, include_all_networks=False,
2323
2424 from twisted.internet import defer
2525 from twisted.internet.abstract import isIPAddress
26 from twisted.python import failure
2627
2728 from synapse.api.constants import EventTypes
2829 from synapse.api.errors import AuthError, FederationError, NotFoundError, SynapseError
185186 logger.warn("Error handling PDU %s: %s", event_id, e)
186187 pdu_results[event_id] = {"error": str(e)}
187188 except Exception as e:
189 f = failure.Failure()
188190 pdu_results[event_id] = {"error": str(e)}
189 logger.exception("Failed to handle PDU %s", event_id)
191 logger.error(
192 "Failed to handle PDU %s: %s",
193 event_id, f.getTraceback().rstrip(),
194 )
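The hunk above changes the PDU error handler to snapshot the active exception as a `twisted.python.failure.Failure` so the full traceback lands in a single `logger.error` line. With plain stdlib Python the same effect looks like this (an illustrative analogue using `traceback`, not the Twisted API used in the diff):

```python
import logging
import traceback

logger = logging.getLogger(__name__)

def handle_pdu(event_id, func):
    """Run func(); on error, record it and log the full traceback."""
    results = {}
    try:
        func()
        results[event_id] = {}
    except Exception as e:
        # Capture the traceback while the exception is still active,
        # as the diff does with failure.Failure().
        tb = traceback.format_exc().rstrip()
        results[event_id] = {"error": str(e)}
        logger.error("Failed to handle PDU %s: %s", event_id, tb)
    return results
```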
190195
191196 yield async.concurrently_execute(
192197 process_pdus_for_room, pdus_by_room.keys(),
200205 edu.edu_type,
201206 edu.content
202207 )
203
204 pdu_failures = getattr(transaction, "pdu_failures", [])
205 for failure in pdu_failures:
206 logger.info("Got failure %r", failure)
207208
208209 response = {
209210 "pdus": pdu_results,
6161
6262 self.edus = SortedDict() # stream position -> Edu
6363
64 self.failures = SortedDict() # stream position -> (destination, Failure)
65
6664 self.device_messages = SortedDict() # stream position -> destination
6765
6866 self.pos = 1
7876
7977 for queue_name in [
8078 "presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed",
81 "edus", "failures", "device_messages", "pos_time",
79 "edus", "device_messages", "pos_time",
8280 ]:
8381 register(queue_name, getattr(self, queue_name))
8482
148146 for key in keys[:i]:
149147 del self.edus[key]
150148
151 # Delete things out of failure map
152 keys = self.failures.keys()
153 i = self.failures.bisect_left(position_to_delete)
154 for key in keys[:i]:
155 del self.failures[key]
156
157149 # Delete things out of device map
158150 keys = self.device_messages.keys()
159151 i = self.device_messages.bisect_left(position_to_delete)
201193 self.presence_map.update({state.user_id: state for state in local_states})
202194 self.presence_changed[pos] = [state.user_id for state in local_states]
203195
204 self.notifier.on_new_replication_data()
205
206 def send_failure(self, failure, destination):
207 """As per TransactionQueue"""
208 pos = self._next_pos()
209
210 self.failures[pos] = (destination, str(failure))
211196 self.notifier.on_new_replication_data()
212197
213198 def send_device_messages(self, destination):
284269 for (pos, edu) in edus:
285270 rows.append((pos, EduRow(edu)))
286271
287 # Fetch changed failures
288 i = self.failures.bisect_right(from_token)
289 j = self.failures.bisect_right(to_token) + 1
290 failures = self.failures.items()[i:j]
291
292 for (pos, (destination, failure)) in failures:
293 rows.append((pos, FailureRow(
294 destination=destination,
295 failure=failure,
296 )))
297
298272 # Fetch changed device messages
299273 i = self.device_messages.bisect_right(from_token)
300274 j = self.device_messages.bisect_right(to_token) + 1
414388
415389 def add_to_buffer(self, buff):
416390 buff.edus.setdefault(self.edu.destination, []).append(self.edu)
417
418
419 class FailureRow(BaseFederationRow, namedtuple("FailureRow", (
420 "destination", # str
421 "failure",
422 ))):
423 """Streams failures to a remote server. Failures are issued when there was
424 something wrong with a transaction the remote sent us, e.g. it included
425 an event that was invalid.
426 """
427
428 TypeId = "f"
429
430 @staticmethod
431 def from_data(data):
432 return FailureRow(
433 destination=data["destination"],
434 failure=data["failure"],
435 )
436
437 def to_data(self):
438 return {
439 "destination": self.destination,
440 "failure": self.failure,
441 }
442
443 def add_to_buffer(self, buff):
444 buff.failures.setdefault(self.destination, []).append(self.failure)
445391
446392
447393 class DeviceRow(BaseFederationRow, namedtuple("DeviceRow", (
470416 PresenceRow,
471417 KeyedEduRow,
472418 EduRow,
473 FailureRow,
474419 DeviceRow,
475420 )
476421 }
480425 "presence", # list(UserPresenceState)
481426 "keyed_edus", # dict of destination -> { key -> Edu }
482427 "edus", # dict of destination -> [Edu]
483 "failures", # dict of destination -> [failures]
484428 "device_destinations", # set of destinations
485429 ))
486430
502446 presence=[],
503447 keyed_edus={},
504448 edus={},
505 failures={},
506449 device_destinations=set(),
507450 )
508451
531474 edu.destination, edu.edu_type, edu.content, key=None,
532475 )
533476
534 for destination, failure_list in iteritems(buff.failures):
535 for failure in failure_list:
536 transaction_queue.send_failure(destination, failure)
537
538477 for destination in buff.device_destinations:
539478 transaction_queue.send_device_messages(destination)
2929 sent_edus_counter,
3030 sent_transactions_counter,
3131 )
32 from synapse.util import PreserveLoggingContext, logcontext
32 from synapse.metrics.background_process_metrics import run_as_background_process
33 from synapse.util import logcontext
3334 from synapse.util.metrics import measure_func
3435 from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
3536
114115 ),
115116 )
116117
117 # destination -> list of tuple(failure, deferred)
118 self.pending_failures_by_dest = {}
119
120118 # destination -> stream_id of last successfully sent to-device message.
121119 # NB: may be a long or an int.
122120 self.last_device_stream_id_by_dest = {}
164162 if self._is_processing:
165163 return
166164
167 # fire off a processing loop in the background. It's likely it will
168 # outlast the current request, so run it in the sentinel logcontext.
169 with PreserveLoggingContext():
170 self._process_event_queue_loop()
165 # fire off a processing loop in the background
166 run_as_background_process(
167 "process_event_queue_for_federation",
168 self._process_event_queue_loop,
169 )
171170
172171 @defer.inlineCallbacks
173172 def _process_event_queue_loop(self):
379378
380379 self._attempt_new_transaction(destination)
381380
382 def send_failure(self, failure, destination):
383 if destination == self.server_name or destination == "localhost":
384 return
385
386 if not self.can_send_to(destination):
387 return
388
389 self.pending_failures_by_dest.setdefault(
390 destination, []
391 ).append(failure)
392
393 self._attempt_new_transaction(destination)
394
395381 def send_device_messages(self, destination):
396382 if destination == self.server_name or destination == "localhost":
397383 return
431417
432418 logger.debug("TX [%s] Starting transaction loop", destination)
433419
434 # Drop the logcontext before starting the transaction. It doesn't
435 # really make sense to log all the outbound transactions against
436 # whatever path led us to this point: that's pretty arbitrary really.
437 #
438 # (this also means we can fire off _perform_transaction without
439 # yielding)
440 with logcontext.PreserveLoggingContext():
441 self._transaction_transmission_loop(destination)
420 run_as_background_process(
421 "federation_transaction_transmission_loop",
422 self._transaction_transmission_loop,
423 destination,
424 )
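Several call sites in this diff swap `PreserveLoggingContext()` wrappers for `run_as_background_process(name, func, *args)`, which starts `func` outside the request's log context and tracks its resource usage under a named metric. A rough, framework-free stand-in showing only the calling convention (the real helper lives in `synapse.metrics.background_process_metrics`; this counter-based sketch is an assumption for illustration):

```python
running = {}  # background process name -> number of runs started

def run_as_background_process(name, func, *args):
    """Fire off func(*args) as a named 'background process'.

    The real Synapse helper also detaches from the caller's log
    context and exports per-name Prometheus metrics; here we just
    count invocations to show the call shape used in the diff.
    """
    running[name] = running.get(name, 0) + 1
    return func(*args)
```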
442425
443426 @defer.inlineCallbacks
444427 def _transaction_transmission_loop(self, destination):
469452 pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
470453 pending_edus = self.pending_edus_by_dest.pop(destination, [])
471454 pending_presence = self.pending_presence_by_dest.pop(destination, {})
472 pending_failures = self.pending_failures_by_dest.pop(destination, [])
473455
474456 pending_edus.extend(
475457 self.pending_edus_keyed_by_dest.pop(destination, {}).values()
497479 logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
498480 destination, len(pending_pdus))
499481
500 if not pending_pdus and not pending_edus and not pending_failures:
482 if not pending_pdus and not pending_edus:
501483 logger.debug("TX [%s] Nothing to send", destination)
502484 self.last_device_stream_id_by_dest[destination] = (
503485 device_stream_id
507489 # END CRITICAL SECTION
508490
509491 success = yield self._send_new_transaction(
510 destination, pending_pdus, pending_edus, pending_failures,
492 destination, pending_pdus, pending_edus,
511493 )
512494 if success:
513495 sent_transactions_counter.inc()
584566
585567 @measure_func("_send_new_transaction")
586568 @defer.inlineCallbacks
587 def _send_new_transaction(self, destination, pending_pdus, pending_edus,
588 pending_failures):
569 def _send_new_transaction(self, destination, pending_pdus, pending_edus):
589570
590571 # Sort based on the order field
591572 pending_pdus.sort(key=lambda t: t[1])
592573 pdus = [x[0] for x in pending_pdus]
593574 edus = pending_edus
594 failures = [x.get_dict() for x in pending_failures]
595575
596576 success = True
597577
601581
602582 logger.debug(
603583 "TX [%s] {%s} Attempting new transaction"
604 " (pdus: %d, edus: %d, failures: %d)",
584 " (pdus: %d, edus: %d)",
605585 destination, txn_id,
606586 len(pdus),
607587 len(edus),
608 len(failures)
609588 )
610589
611590 logger.debug("TX [%s] Persisting transaction...", destination)
617596 destination=destination,
618597 pdus=pdus,
619598 edus=edus,
620 pdu_failures=failures,
621599 )
622600
623601 self._next_txn_id += 1
627605 logger.debug("TX [%s] Persisted transaction", destination)
628606 logger.info(
629607 "TX [%s] {%s} Sending transaction [%s],"
630 " (PDUs: %d, EDUs: %d, failures: %d)",
608 " (PDUs: %d, EDUs: %d)",
631609 destination, txn_id,
632610 transaction.transaction_id,
633611 len(pdus),
634612 len(edus),
635 len(failures),
636613 )
637614
638615 # Actually send the transaction
164164 param_dict = dict(kv.split("=") for kv in params)
165165
166166 def strip_quotes(value):
167 if value.startswith(b"\""):
167 if value.startswith("\""):
168168 return value[1:-1]
169169 else:
170170 return value
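The `strip_quotes` fix above swaps a `bytes` prefix for a `str` one: under Python 3 the parsed header value is a `str`, and `str.startswith` rejects a `bytes` argument outright rather than silently returning False. The fixed helper and the failure mode it avoids:

```python
def strip_quotes(value):
    # value is a str on Python 3; compare against a str prefix.
    if value.startswith("\""):
        return value[1:-1]
    return value
```

So `strip_quotes('"sha256"')` yields `'sha256'`, while the old `value.startswith(b"\"")` form raises `TypeError` on Python 3 when `value` is a `str`.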
282282 )
283283
284284 logger.info(
285 "Received txn %s from %s. (PDUs: %d, EDUs: %d, failures: %d)",
285 "Received txn %s from %s. (PDUs: %d, EDUs: %d)",
286286 transaction_id, origin,
287287 len(transaction_data.get("pdus", [])),
288288 len(transaction_data.get("edus", [])),
289 len(transaction_data.get("failures", [])),
290289 )
291290
292291 # We should ideally be getting this from the security layer.
403402
404403
405404 class FederationSendLeaveServlet(BaseFederationServlet):
406 PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<txid>[^/]*)"
407
408 @defer.inlineCallbacks
409 def on_PUT(self, origin, content, query, room_id, txid):
405 PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
406
407 @defer.inlineCallbacks
408 def on_PUT(self, origin, content, query, room_id, event_id):
410409 content = yield self.handler.on_send_leave_request(origin, content)
411410 defer.returnValue((200, content))
412411
7272 "previous_ids",
7373 "pdus",
7474 "edus",
75 "pdu_failures",
7675 ]
7776
7877 internal_keys = [
4242 from twisted.internet import defer
4343
4444 from synapse.api.errors import SynapseError
45 from synapse.metrics.background_process_metrics import run_as_background_process
4546 from synapse.types import get_domain_from_id
4647 from synapse.util.logcontext import run_in_background
4748
128129 self.attestations = hs.get_groups_attestation_signing()
129130
130131 self._renew_attestations_loop = self.clock.looping_call(
131 self._renew_attestations, 30 * 60 * 1000,
132 self._start_renew_attestations, 30 * 60 * 1000,
132133 )
133134
134135 @defer.inlineCallbacks
149150 yield self.store.update_remote_attestion(group_id, user_id, attestation)
150151
151152 defer.returnValue({})
153
154 def _start_renew_attestations(self):
155 return run_as_background_process("renew_attestations", self._renew_attestations)
152156
153157 @defer.inlineCallbacks
154158 def _renew_attestations(self):
1616 from .directory import DirectoryHandler
1717 from .federation import FederationHandler
1818 from .identity import IdentityHandler
19 from .message import MessageHandler
2019 from .register import RegistrationHandler
21 from .room import RoomContextHandler
2220 from .search import SearchHandler
2321
2422
4341
4442 def __init__(self, hs):
4543 self.registration_handler = RegistrationHandler(hs)
46 self.message_handler = MessageHandler(hs)
4744 self.federation_handler = FederationHandler(hs)
4845 self.directory_handler = DirectoryHandler(hs)
4946 self.admin_handler = AdminHandler(hs)
5047 self.identity_handler = IdentityHandler(hs)
5148 self.search_handler = SearchHandler(hs)
52 self.room_context_handler = RoomContextHandler(hs)
111111 guest_access = event.content.get("guest_access", "forbidden")
112112 if guest_access != "can_join":
113113 if context:
114 current_state_ids = yield context.get_current_state_ids(self.store)
114115 current_state = yield self.store.get_events(
115 list(context.current_state_ids.values())
116 list(current_state_ids.values())
116117 )
117118 else:
118119 current_state = yield self.state_handler.get_current_state(
2222
2323 import synapse
2424 from synapse.api.constants import EventTypes
25 from synapse.metrics.background_process_metrics import run_as_background_process
2526 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
2627 from synapse.util.metrics import Measure
2728
105106 yield self._check_user_exists(event.state_key)
106107
107108 if not self.started_scheduler:
108 self.scheduler.start().addErrback(log_failure)
109 def start_scheduler():
110 return self.scheduler.start().addErrback(log_failure)
111 run_as_background_process("as_scheduler", start_scheduler)
109112 self.started_scheduler = True
110113
111114 # Fork off pushes to these services
1414 # limitations under the License.
1515
1616 import logging
17 import unicodedata
1718
1819 import attr
1920 import bcrypt
518519 """
519520 logger.info("Logging in user %s on device %s", user_id, device_id)
520521 access_token = yield self.issue_access_token(user_id, device_id)
522 yield self._check_mau_limits()
521523
522524 # the device *should* have been registered before we got here; however,
523525 # it's possible we raced against a DELETE operation. The thing we
625627 # special case to check for "password" for the check_password interface
626628 # for the auth providers
627629 password = login_submission.get("password")
630
628631 if login_type == LoginType.PASSWORD:
629632 if not self._password_enabled:
630633 raise SynapseError(400, "Password login has been disabled.")
706709 multiple inexact matches.
707710
708711 Args:
709 user_id (str): complete @user:id
710 Returns:
711 (str) the canonical_user_id, or None if unknown user / bad password
712 user_id (unicode): complete @user:id
713 password (unicode): the provided password
714 Returns:
715 (unicode) the canonical_user_id, or None if unknown user / bad password
712716 """
713717 lookupres = yield self._find_user_id_and_pwd_hash(user_id)
714718 if not lookupres:
727731 device_id)
728732 defer.returnValue(access_token)
729733
734 @defer.inlineCallbacks
730735 def validate_short_term_login_token_and_get_user_id(self, login_token):
736 yield self._check_mau_limits()
731737 auth_api = self.hs.get_auth()
738 user_id = None
732739 try:
733740 macaroon = pymacaroons.Macaroon.deserialize(login_token)
734741 user_id = auth_api.get_user_id_from_macaroon(macaroon)
735742 auth_api.validate_macaroon(macaroon, "login", True, user_id)
736 return user_id
737743 except Exception:
738744 raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
745 defer.returnValue(user_id)
739746
740747 @defer.inlineCallbacks
741748 def delete_access_token(self, access_token):
848855 """Computes a secure hash of password.
849856
850857 Args:
851 password (str): Password to hash.
852
853 Returns:
854 Deferred(str): Hashed password.
858 password (unicode): Password to hash.
859
860 Returns:
861 Deferred(unicode): Hashed password.
855862 """
856863 def _do_hash():
857 return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
858 bcrypt.gensalt(self.bcrypt_rounds))
864 # Normalise the Unicode in the password
865 pw = unicodedata.normalize("NFKC", password)
866
867 return bcrypt.hashpw(
868 pw.encode('utf8') + self.hs.config.password_pepper.encode("utf8"),
869 bcrypt.gensalt(self.bcrypt_rounds),
870 ).decode('ascii')
859871
860872 return make_deferred_yieldable(
861873 threads.deferToThreadPool(
867879 """Validates that self.hash(password) == stored_hash.
868880
869881 Args:
870 password (str): Password to hash.
871 stored_hash (str): Expected hash value.
882 password (unicode): Password to hash.
883 stored_hash (unicode): Expected hash value.
872884
873885 Returns:
874886 Deferred(bool): Whether self.hash(password) == stored_hash.
875887 """
876888
877889 def _do_validate_hash():
890 # Normalise the Unicode in the password
891 pw = unicodedata.normalize("NFKC", password)
892
878893 return bcrypt.checkpw(
879 password.encode('utf8') + self.hs.config.password_pepper,
894 pw.encode('utf8') + self.hs.config.password_pepper.encode("utf8"),
880895 stored_hash.encode('utf8')
881896 )
882897
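Both `hash` and `validate_hash` now run the password through `unicodedata.normalize("NFKC", ...)` before bcrypt, so that visually identical passwords typed with different input methods (a precomposed code point versus a base letter plus combining accent) hash to the same value. A small demonstration of why:

```python
import unicodedata

# "e-acute" entered as a single precomposed code point...
precomposed = "\u00e9"
# ...versus "e" followed by a combining acute accent.
combining = "e\u0301"

# The raw byte sequences differ, so without normalisation bcrypt
# would treat these as two different passwords.
assert precomposed.encode("utf8") != combining.encode("utf8")

# NFKC folds both spellings into the same canonical form.
assert (unicodedata.normalize("NFKC", precomposed)
        == unicodedata.normalize("NFKC", combining))
```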
890905 )
891906 else:
892907 return defer.succeed(False)
908
909 @defer.inlineCallbacks
910 def _check_mau_limits(self):
911 """
912 Ensure that if mau blocking is enabled that invalid users cannot
913 log in.
914 """
915 if self.hs.config.limit_usage_by_mau is True:
916 current_mau = yield self.store.count_monthly_users()
917 if current_mau >= self.hs.config.max_mau_value:
918 raise AuthError(
919 403, "MAU Limit Exceeded", errcode=Codes.MAU_LIMIT_EXCEEDED
920 )
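The new `_check_mau_limits` gates logins on a monthly-active-user cap. A Twisted-free sketch of the same check — the config attribute names mirror the diff, while the datastore query is stubbed out as a callable for illustration:

```python
class MauLimitError(Exception):
    """Raised with HTTP-403 semantics when the MAU cap is reached."""

def check_mau_limits(limit_usage_by_mau, max_mau_value, count_monthly_users):
    """Refuse logins once monthly active users reach the configured cap.

    count_monthly_users is a callable standing in for the datastore's
    count_monthly_users() query used in the diff above.
    """
    if limit_usage_by_mau is True:
        current_mau = count_monthly_users()
        if current_mau >= max_mau_value:
            raise MauLimitError("MAU Limit Exceeded")
```

Note the check uses `>=`: a login attempt is refused as soon as the current count reaches the cap, not only once it exceeds it.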
893921
894922
895923 @attr.s
2020 import sys
2121
2222 import six
23 from six import iteritems
24 from six.moves import http_client
23 from six import iteritems, itervalues
24 from six.moves import http_client, zip
2525
2626 from signedjson.key import decode_verify_key_bytes
2727 from signedjson.sign import verify_signed_json
7575 self.hs = hs
7676
7777 self.store = hs.get_datastore()
78 self.replication_layer = hs.get_federation_client()
78 self.federation_client = hs.get_federation_client()
7979 self.state_handler = hs.get_state_handler()
8080 self.server_name = hs.hostname
8181 self.keyring = hs.get_keyring()
254254 # know about
255255 for p in prevs - seen:
256256 state, got_auth_chain = (
257 yield self.replication_layer.get_state_for_room(
257 yield self.federation_client.get_state_for_room(
258258 origin, pdu.room_id, p
259259 )
260260 )
337337 #
338338 # see https://github.com/matrix-org/synapse/pull/1744
339339
340 missing_events = yield self.replication_layer.get_missing_events(
340 missing_events = yield self.federation_client.get_missing_events(
341341 origin,
342342 pdu.room_id,
343343 earliest_events_ids=list(latest),
399399 )
400400
401401 try:
402 event_stream_id, max_stream_id = yield self._persist_auth_tree(
402 yield self._persist_auth_tree(
403403 origin, auth_chain, state, event
404404 )
405405 except AuthError as e:
443443 yield self._handle_new_events(origin, event_infos)
444444
445445 try:
446 context, event_stream_id, max_stream_id = yield self._handle_new_event(
446 context = yield self._handle_new_event(
447447 origin,
448448 event,
449449 state=state,
468468 except StoreError:
469469 logger.exception("Failed to store room.")
470470
471 extra_users = []
472 if event.type == EventTypes.Member:
473 target_user_id = event.state_key
474 target_user = UserID.from_string(target_user_id)
475 extra_users.append(target_user)
476
477 self.notifier.on_new_room_event(
478 event, event_stream_id, max_stream_id,
479 extra_users=extra_users
480 )
481
482471 if event.type == EventTypes.Member:
483472 if event.membership == Membership.JOIN:
484473 # Only fire user_joined_room if the user has actually
485474 # joined the room. Don't bother if the user is just
486475 # changing their profile info.
487476 newly_joined = True
488 prev_state_id = context.prev_state_ids.get(
477
478 prev_state_ids = yield context.get_prev_state_ids(self.store)
479
480 prev_state_id = prev_state_ids.get(
489481 (event.type, event.state_key)
490482 )
491483 if prev_state_id:
497489
498490 if newly_joined:
499491 user = UserID.from_string(event.state_key)
500 yield user_joined_room(self.distributor, user, event.room_id)
492 yield self.user_joined_room(user, event.room_id)
501493
502494 @log_function
503495 @defer.inlineCallbacks
518510 if dest == self.server_name:
519511 raise SynapseError(400, "Can't backfill from self.")
520512
521 events = yield self.replication_layer.backfill(
513 events = yield self.federation_client.backfill(
522514 dest,
523515 room_id,
524516 limit=limit,
566558 state_events = {}
567559 events_to_state = {}
568560 for e_id in edges:
569 state, auth = yield self.replication_layer.get_state_for_room(
561 state, auth = yield self.federation_client.get_state_for_room(
570562 destination=dest,
571563 room_id=room_id,
572564 event_id=e_id
608600 results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
609601 [
610602 logcontext.run_in_background(
611 self.replication_layer.get_pdu,
603 self.federation_client.get_pdu,
612604 [dest],
613605 event_id,
614606 outlier=True,
730722 """
731723 joined_users = [
732724 (state_key, int(event.depth))
733 for (e_type, state_key), event in state.iteritems()
725 for (e_type, state_key), event in iteritems(state)
734726 if e_type == EventTypes.Member
735727 and event.membership == Membership.JOIN
736728 ]
747739 except Exception:
748740 pass
749741
750 return sorted(joined_domains.iteritems(), key=lambda d: d[1])
742 return sorted(joined_domains.items(), key=lambda d: d[1])
751743
752744 curr_domains = get_domains_from_state(curr_state)
753745
810802 tried_domains = set(likely_domains)
811803 tried_domains.add(self.server_name)
812804
813 event_ids = list(extremities.iterkeys())
805 event_ids = list(extremities.keys())
814806
815807 logger.debug("calling resolve_state_groups in _maybe_backfill")
816808 resolve = logcontext.preserve_fn(
826818 states = dict(zip(event_ids, [s.state for s in states]))
827819
828820 state_map = yield self.store.get_events(
829 [e_id for ids in states.itervalues() for e_id in ids.itervalues()],
821 [e_id for ids in itervalues(states) for e_id in itervalues(ids)],
830822 get_prev_content=False
831823 )
832824 states = {
833825 key: {
834826 k: state_map[e_id]
835 for k, e_id in state_dict.iteritems()
827 for k, e_id in iteritems(state_dict)
836828 if e_id in state_map
837 } for key, state_dict in states.iteritems()
829 } for key, state_dict in iteritems(states)
838830 }
839831
840832 for e_id, _ in sorted_extremeties_tuple:
889881
890882 Invites must be signed by the invitee's server before distribution.
891883 """
892 pdu = yield self.replication_layer.send_invite(
884 pdu = yield self.federation_client.send_invite(
893885 destination=target_host,
894886 room_id=event.room_id,
895887 event_id=event.event_id,
905897 [auth_id for auth_id, _ in event.auth_events],
906898 include_given=True
907899 )
908
909 for event in auth:
910 event.signatures.update(
911 compute_event_signature(
912 event,
913 self.hs.hostname,
914 self.hs.config.signing_key[0]
915 )
916 )
917
918900 defer.returnValue([e for e in auth])
919901
920902 @log_function
948930
949931 self.room_queues[room_id] = []
950932
951 yield self.store.clean_room_for_join(room_id)
933 yield self._clean_room_for_join(room_id)
952934
953935 handled_events = set()
954936
961943 target_hosts.insert(0, origin)
962944 except ValueError:
963945 pass
964 ret = yield self.replication_layer.send_join(target_hosts, event)
946 ret = yield self.federation_client.send_join(target_hosts, event)
965947
966948 origin = ret["origin"]
967949 state = ret["state"]
987969 # FIXME
988970 pass
989971
990 event_stream_id, max_stream_id = yield self._persist_auth_tree(
972 yield self._persist_auth_tree(
991973 origin, auth_chain, state, event
992 )
993
994 self.notifier.on_new_room_event(
995 event, event_stream_id, max_stream_id,
996 extra_users=[joinee]
997974 )
998975
999976 logger.debug("Finished joining %s to %s", joinee, room_id)
10901067 # would introduce the danger of backwards-compatibility problems.
10911068 event.internal_metadata.send_on_behalf_of = origin
10921069
1093 context, event_stream_id, max_stream_id = yield self._handle_new_event(
1070 context = yield self._handle_new_event(
10941071 origin, event
10951072 )
10961073
11001077 event.signatures,
11011078 )
11021079
1103 extra_users = []
1104 if event.type == EventTypes.Member:
1105 target_user_id = event.state_key
1106 target_user = UserID.from_string(target_user_id)
1107 extra_users.append(target_user)
1108
1109 self.notifier.on_new_room_event(
1110 event, event_stream_id, max_stream_id, extra_users=extra_users
1111 )
1112
11131080 if event.type == EventTypes.Member:
11141081 if event.content["membership"] == Membership.JOIN:
11151082 user = UserID.from_string(event.state_key)
1116 yield user_joined_room(self.distributor, user, event.room_id)
1117
1118 state_ids = list(context.prev_state_ids.values())
1083 yield self.user_joined_room(user, event.room_id)
1084
1085 prev_state_ids = yield context.get_prev_state_ids(self.store)
1086
1087 state_ids = list(prev_state_ids.values())
11191088 auth_chain = yield self.store.get_auth_chain(state_ids)
11201089
1121 state = yield self.store.get_events(list(context.prev_state_ids.values()))
1090 state = yield self.store.get_events(list(prev_state_ids.values()))
11221091
11231092 defer.returnValue({
11241093 "state": list(state.values()),
11801149 )
11811150
11821151 context = yield self.state_handler.compute_event_context(event)
1183
1184 event_stream_id, max_stream_id = yield self.store.persist_event(
1185 event,
1186 context=context,
1187 )
1188
1189 target_user = UserID.from_string(event.state_key)
1190 self.notifier.on_new_room_event(
1191 event, event_stream_id, max_stream_id,
1192 extra_users=[target_user],
1193 )
1152 yield self._persist_events([(event, context)])
11941153
11951154 defer.returnValue(event)
11961155
12151174 except ValueError:
12161175 pass
12171176
1218 yield self.replication_layer.send_leave(
1177 yield self.federation_client.send_leave(
12191178 target_hosts,
12201179 event
12211180 )
12221181
12231182 context = yield self.state_handler.compute_event_context(event)
1224
1225 event_stream_id, max_stream_id = yield self.store.persist_event(
1226 event,
1227 context=context,
1228 )
1229
1230 target_user = UserID.from_string(event.state_key)
1231 self.notifier.on_new_room_event(
1232 event, event_stream_id, max_stream_id,
1233 extra_users=[target_user],
1234 )
1183 yield self._persist_events([(event, context)])
12351184
12361185 defer.returnValue(event)
12371186
12381187 @defer.inlineCallbacks
12391188 def _make_and_verify_event(self, target_hosts, room_id, user_id, membership,
12401189 content={},):
1241 origin, pdu = yield self.replication_layer.make_membership_event(
1190 origin, pdu = yield self.federation_client.make_membership_event(
12421191 target_hosts,
12431192 room_id,
12441193 user_id,
12831232 @log_function
12841233 def on_make_leave_request(self, room_id, user_id):
12851234 """ We've received a /make_leave/ request, so we create a partial
1286 join event for the room and return that. We do *not* persist or
1235 leave event for the room and return that. We do *not* persist or
12871236 process it until the other server has signed it and sent it back.
12881237 """
12891238 builder = self.event_builder_factory.new({
13221271
13231272 event.internal_metadata.outlier = False
13241273
1325 context, event_stream_id, max_stream_id = yield self._handle_new_event(
1274 yield self._handle_new_event(
13261275 origin, event
13271276 )
13281277
13301279 "on_send_leave_request: After _handle_new_event: %s, sigs: %s",
13311280 event.event_id,
13321281 event.signatures,
1333 )
1334
1335 extra_users = []
1336 if event.type == EventTypes.Member:
1337 target_user_id = event.state_key
1338 target_user = UserID.from_string(target_user_id)
1339 extra_users.append(target_user)
1340
1341 self.notifier.on_new_room_event(
1342 event, event_stream_id, max_stream_id, extra_users=extra_users
13431282 )
13441283
13451284 defer.returnValue(None)
13741313 del results[(event.type, event.state_key)]
13751314
13761315 res = list(results.values())
1377 for event in res:
1378 # We sign these again because there was a bug where we
1379 # incorrectly signed things the first time round
1380 if self.is_mine_id(event.event_id):
1381 event.signatures.update(
1382 compute_event_signature(
1383 event,
1384 self.hs.hostname,
1385 self.hs.config.signing_key[0]
1386 )
1387 )
1388
13891316 defer.returnValue(res)
13901317 else:
13911318 defer.returnValue([])
14601387 )
14611388
14621389 if event:
1463 if self.is_mine_id(event.event_id):
1464 # FIXME: This is a temporary work around where we occasionally
1465 # return events slightly differently than when they were
1466 # originally signed
1467 event.signatures.update(
1468 compute_event_signature(
1469 event,
1470 self.hs.hostname,
1471 self.hs.config.signing_key[0]
1472 )
1473 )
1474
14751390 in_room = yield self.auth.check_host_in_room(
14761391 event.room_id,
14771392 origin
15071422 event, context
15081423 )
15091424
1510 event_stream_id, max_stream_id = yield self.store.persist_event(
1511 event,
1512 context=context,
1425 yield self._persist_events(
1426 [(event, context)],
15131427 backfilled=backfilled,
15141428 )
15151429 except: # noqa: E722, as we reraise the exception this is fine.
15221436
15231437 six.reraise(tp, value, tb)
15241438
1525 if not backfilled:
1526 # this intentionally does not yield: we don't care about the result
1527 # and don't need to wait for it.
1528 logcontext.run_in_background(
1529 self.pusher_pool.on_new_notifications,
1530 event_stream_id, max_stream_id,
1531 )
1532
1533 defer.returnValue((context, event_stream_id, max_stream_id))
1439 defer.returnValue(context)
15341440
15351441 @defer.inlineCallbacks
15361442 def _handle_new_events(self, origin, event_infos, backfilled=False):
15381444 should not depend on one another, e.g. this should be used to persist
15391445 a bunch of outliers, but not a chunk of individual events that depend
15401446 on each other for state calculations.
1447
1448 Notifies about the events where appropriate.
15411449 """
15421450 contexts = yield logcontext.make_deferred_yieldable(defer.gatherResults(
15431451 [
15521460 ], consumeErrors=True,
15531461 ))
15541462
1555 yield self.store.persist_events(
1463 yield self._persist_events(
15561464 [
15571465 (ev_info["event"], context)
1558 for ev_info, context in itertools.izip(event_infos, contexts)
1466 for ev_info, context in zip(event_infos, contexts)
15591467 ],
15601468 backfilled=backfilled,
15611469 )
15641472 def _persist_auth_tree(self, origin, auth_events, state, event):
15651473 """Checks the auth chain is valid (and passes auth checks) for the
15661474 state and event. Then persists the auth chain and state atomically.
1567 Persists the event seperately.
1475 Persists the event separately. Notifies about the persisted events
1476 where appropriate.
15681477
15691478 Will attempt to fetch missing auth events.
15701479
15751484 event (Event)
15761485
15771486 Returns:
1578 2-tuple of (event_stream_id, max_stream_id) from the persist_event
1579 call for `event`
1487 Deferred
15801488 """
15811489 events_to_context = {}
15821490 for e in itertools.chain(auth_events, state):
16021510 missing_auth_events.add(e_id)
16031511
16041512 for e_id in missing_auth_events:
1605 m_ev = yield self.replication_layer.get_pdu(
1513 m_ev = yield self.federation_client.get_pdu(
16061514 [origin],
16071515 e_id,
16081516 outlier=True,
16401548 raise
16411549 events_to_context[e.event_id].rejected = RejectedReason.AUTH_ERROR
16421550
1643 yield self.store.persist_events(
1551 yield self._persist_events(
16441552 [
16451553 (e, events_to_context[e.event_id])
16461554 for e in itertools.chain(auth_events, state)
16511559 event, old_state=state
16521560 )
16531561
1654 event_stream_id, max_stream_id = yield self.store.persist_event(
1655 event, new_event_context,
1656 )
1657
1658 defer.returnValue((event_stream_id, max_stream_id))
1562 yield self._persist_events(
1563 [(event, new_event_context)],
1564 )
16591565
16601566 @defer.inlineCallbacks
16611567 def _prep_event(self, origin, event, state=None, auth_events=None):
16751581 )
16761582
16771583 if not auth_events:
1584 prev_state_ids = yield context.get_prev_state_ids(self.store)
16781585 auth_events_ids = yield self.auth.compute_auth_events(
1679 event, context.prev_state_ids, for_verification=True,
1586 event, prev_state_ids, for_verification=True,
16801587 )
16811588 auth_events = yield self.store.get_events(auth_events_ids)
16821589 auth_events = {
17461653 local_auth_chain, remote_auth_chain
17471654 )
17481655
1749 for event in ret["auth_chain"]:
1750 event.signatures.update(
1751 compute_event_signature(
1752 event,
1753 self.hs.hostname,
1754 self.hs.config.signing_key[0]
1755 )
1756 )
1757
17581656 logger.debug("on_query_auth returning: %s", ret)
17591657
17601658 defer.returnValue(ret)
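A recurring change throughout this diff: `context.prev_state_ids` and `context.current_state_ids` stop being plain attributes and become `get_prev_state_ids(store)` / `get_current_state_ids(store)` calls, so a context can be created without loading its state maps up front. A synchronous toy version of that lazy-accessor pattern (illustrative names; the real methods return Deferreds):

```python
class EventContext:
    """Sketch of the lazy accessor: state is fetched from the store on
    first use and cached on the context thereafter."""
    def __init__(self, state_group):
        self._state_group = state_group
        self._prev_state_ids = None

    def get_prev_state_ids(self, store):
        if self._prev_state_ids is None:
            # Only hit the database the first time we're asked.
            self._prev_state_ids = store.load_state_ids(self._state_group)
        return self._prev_state_ids

class FakeStore:
    """Stand-in datastore that counts how often it is queried."""
    def __init__(self):
        self.calls = 0

    def load_state_ids(self, state_group):
        self.calls += 1
        return {("m.room.create", ""): "$create"}
```

Repeated calls return the cached map without another store round-trip.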
18301728 logger.info("Missing auth: %s", missing_auth)
18311729 # If we don't have all the auth events, we need to get them.
18321730 try:
1833 remote_auth_chain = yield self.replication_layer.get_event_auth(
1731 remote_auth_chain = yield self.federation_client.get_event_auth(
18341732 origin, event.room_id, event.event_id
18351733 )
18361734
19351833 break
19361834
19371835 if do_resolution:
1836 prev_state_ids = yield context.get_prev_state_ids(self.store)
19381837 # 1. Get what we think is the auth chain.
19391838 auth_ids = yield self.auth.compute_auth_events(
1940 event, context.prev_state_ids
1839 event, prev_state_ids
19411840 )
19421841 local_auth_chain = yield self.store.get_auth_chain(
19431842 auth_ids, include_given=True
19451844
19461845 try:
19471846 # 2. Get remote difference.
1948 result = yield self.replication_layer.query_auth(
1847 result = yield self.federation_client.query_auth(
19491848 origin,
19501849 event.room_id,
19511850 event.event_id,
20271926 k: a.event_id for k, a in iteritems(auth_events)
20281927 if k != event_key
20291928 }
2030 context.current_state_ids = dict(context.current_state_ids)
2031 context.current_state_ids.update(state_updates)
2032 if context.delta_ids is not None:
2033 context.delta_ids = dict(context.delta_ids)
2034 context.delta_ids.update(state_updates)
2035 context.prev_state_ids = dict(context.prev_state_ids)
2036 context.prev_state_ids.update({
1929 current_state_ids = yield context.get_current_state_ids(self.store)
1930 current_state_ids = dict(current_state_ids)
1931
1932 current_state_ids.update(state_updates)
1933
1934 prev_state_ids = yield context.get_prev_state_ids(self.store)
1935 prev_state_ids = dict(prev_state_ids)
1936
1937 prev_state_ids.update({
20371938 k: a.event_id for k, a in iteritems(auth_events)
20381939 })
2039 context.state_group = yield self.store.store_state_group(
1940
1941 # create a new state group as a delta from the existing one.
1942 prev_group = context.state_group
1943 state_group = yield self.store.store_state_group(
20401944 event.event_id,
20411945 event.room_id,
2042 prev_group=context.prev_group,
2043 delta_ids=context.delta_ids,
2044 current_state_ids=context.current_state_ids,
1946 prev_group=prev_group,
1947 delta_ids=state_updates,
1948 current_state_ids=current_state_ids,
1949 )
1950
1951 yield context.update_state(
1952 state_group=state_group,
1953 current_state_ids=current_state_ids,
1954 prev_state_ids=prev_state_ids,
1955 prev_group=prev_group,
1956 delta_ids=state_updates,
20451957 )
20461958
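The rewritten block above stops mutating the context in place: it builds fresh `current_state_ids` / `prev_state_ids` maps, stores a new state group as a delta from the previous group, and hands everything to `context.update_state(...)` in one call. The map-building step as a plain-Python sketch (simplified: the real code derives the prev-map updates from the auth events):

```python
def apply_auth_state_updates(current_state_ids, prev_state_ids, state_updates):
    """Return new state maps rather than mutating the originals; the new
    state group can then be stored as a delta (prev_group plus
    delta_ids=state_updates) alongside the full new current map."""
    new_current = dict(current_state_ids)
    new_current.update(state_updates)
    new_prev = dict(prev_state_ids)
    new_prev.update(state_updates)
    return new_current, new_prev
```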
20471959 @defer.inlineCallbacks
22312143 yield member_handler.send_membership_event(None, event, context)
22322144 else:
22332145 destinations = set(x.split(":", 1)[-1] for x in (sender_user_id, room_id))
2234 yield self.replication_layer.forward_third_party_invite(
2146 yield self.federation_client.forward_third_party_invite(
22352147 destinations,
22362148 room_id,
22372149 event_dict,
22812193 event.content["third_party_invite"]["signed"]["token"]
22822194 )
22832195 original_invite = None
2284 original_invite_id = context.prev_state_ids.get(key)
2196 prev_state_ids = yield context.get_prev_state_ids(self.store)
2197 original_invite_id = prev_state_ids.get(key)
22852198 if original_invite_id:
22862199 original_invite = yield self.store.get_event(
22872200 original_invite_id, allow_none=True
23232236 signed = event.content["third_party_invite"]["signed"]
23242237 token = signed["token"]
23252238
2326 invite_event_id = context.prev_state_ids.get(
2239 prev_state_ids = yield context.get_prev_state_ids(self.store)
2240 invite_event_id = prev_state_ids.get(
23272241 (EventTypes.ThirdPartyInvite, token,)
23282242 )
23292243
23842298 )
23852299 if "valid" not in response or not response["valid"]:
23862300 raise AuthError(403, "Third party certificate was invalid")
2301
2302 @defer.inlineCallbacks
2303 def _persist_events(self, event_and_contexts, backfilled=False):
2304 """Persists events and tells the notifier/pushers about them, if
2305 necessary.
2306
2307 Args:
2308 event_and_contexts(list[tuple[FrozenEvent, EventContext]])
2309 backfilled (bool): Whether these events are a result of
2310 backfilling or not
2311
2312 Returns:
2313 Deferred
2314 """
2315 max_stream_id = yield self.store.persist_events(
2316 event_and_contexts,
2317 backfilled=backfilled,
2318 )
2319
2320 if not backfilled: # Never notify for backfilled events
2321 for event, _ in event_and_contexts:
2322 self._notify_persisted_event(event, max_stream_id)
2323
2324 def _notify_persisted_event(self, event, max_stream_id):
2325 """Checks to see if notifier/pushers should be notified about the
2326 event or not.
2327
2328 Args:
2329 event (FrozenEvent)
2330 max_stream_id (int): The max_stream_id returned by persist_events
2331 """
2332
2333 extra_users = []
2334 if event.type == EventTypes.Member:
2335 target_user_id = event.state_key
2336
2337 # We notify for memberships if its an invite for one of our
2338 # users
2339 if event.internal_metadata.is_outlier():
2340 if event.membership != Membership.INVITE:
2341 if not self.is_mine_id(target_user_id):
2342 return
2343
2344 target_user = UserID.from_string(target_user_id)
2345 extra_users.append(target_user)
2346 elif event.internal_metadata.is_outlier():
2347 return
2348
2349 event_stream_id = event.internal_metadata.stream_ordering
2350 self.notifier.on_new_room_event(
2351 event, event_stream_id, max_stream_id,
2352 extra_users=extra_users
2353 )
2354
2355 logcontext.run_in_background(
2356 self.pusher_pool.on_new_notifications,
2357 event_stream_id, max_stream_id,
2358 )
2359
2360 def _clean_room_for_join(self, room_id):
2361 return self.store.clean_room_for_join(room_id)
2362
2363 def user_joined_room(self, user, room_id):
2364 """Called when a new user has joined the room
2365 """
2366 return user_joined_room(self.distributor, user, room_id)
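The new `_notify_persisted_event` gates notifications: outlier events are normally silent, except membership events that invite one of our own users; surviving membership events also add their target to the notified users. That decision logic, paraphrased as a pure function (names are illustrative, not Synapse's API):

```python
MEMBER = "m.room.member"
INVITE = "invite"

def notify_decision(event_type, membership, is_outlier, target_is_local):
    """Return (should_notify, include_target): whether the notifier and
    pushers should hear about the event, and whether the membership
    target belongs in extra_users."""
    if event_type == MEMBER:
        # Outlier memberships are only interesting if they are an
        # invite or target one of our own users.
        if is_outlier and membership != INVITE and not target_is_local:
            return (False, False)
        return (True, True)
    # Any other outlier is never notified.
    if is_outlier:
        return (False, False)
    return (True, False)
```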
2525 from synapse.api.errors import (
2626 CodeMessageException,
2727 Codes,
28 MatrixCodeMessageException,
28 HttpResponseException,
2929 SynapseError,
3030 )
3131
8484 )
8585 defer.returnValue(None)
8686
87 data = {}
8887 try:
8988 data = yield self.http_client.get_json(
9089 "https://%s%s" % (
9392 ),
9493 {'sid': creds['sid'], 'client_secret': client_secret}
9594 )
96 except MatrixCodeMessageException as e:
95 except HttpResponseException as e:
9796 logger.info("getValidated3pid failed with Matrix error: %r", e)
98 raise SynapseError(e.code, e.msg, e.errcode)
99 except CodeMessageException as e:
100 data = json.loads(e.msg)
97 raise e.to_synapse_error()
10198
10299 if 'medium' in data:
103100 defer.returnValue(data)
135132 )
136133 logger.debug("bound threepid %r to %s", creds, mxid)
137134 except CodeMessageException as e:
138 data = json.loads(e.msg)
135 data = json.loads(e.msg) # XXX WAT?
139136 defer.returnValue(data)
140137
141138 @defer.inlineCallbacks
208205 params
209206 )
210207 defer.returnValue(data)
211 except MatrixCodeMessageException as e:
212 logger.info("Proxied requestToken failed with Matrix error: %r", e)
213 raise SynapseError(e.code, e.msg, e.errcode)
214 except CodeMessageException as e:
208 except HttpResponseException as e:
215209 logger.info("Proxied requestToken failed: %r", e)
216 raise e
210 raise e.to_synapse_error()
217211
218212 @defer.inlineCallbacks
219213 def requestMsisdnToken(
243237 params
244238 )
245239 defer.returnValue(data)
246 except MatrixCodeMessageException as e:
247 logger.info("Proxied requestToken failed with Matrix error: %r", e)
248 raise SynapseError(e.code, e.msg, e.errcode)
249 except CodeMessageException as e:
240 except HttpResponseException as e:
250241 logger.info("Proxied requestToken failed: %r", e)
251 raise e
242 raise e.to_synapse_error()
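Both handlers above now catch a single `HttpResponseException` and call `e.to_synapse_error()`, instead of parsing `MatrixCodeMessageException` / `CodeMessageException` separately. The idea, sketched with stand-in classes (these are not the real `synapse.api.errors` types):

```python
import json

class AppError(Exception):
    """Structured application error, analogous to SynapseError."""
    def __init__(self, code, msg, errcode):
        super().__init__(msg)
        self.code, self.msg, self.errcode = code, msg, errcode

class HttpResponseError(Exception):
    """HTTP-layer error that keeps the raw response body."""
    def __init__(self, code, body):
        super().__init__(body)
        self.code, self.body = code, body

    def to_app_error(self):
        # Translate a Matrix-style JSON error body into a structured
        # error; fall back to a generic errcode for non-JSON bodies.
        try:
            payload = json.loads(self.body)
            return AppError(
                self.code,
                payload.get("error", self.body),
                payload.get("errcode", "M_UNKNOWN"),
            )
        except ValueError:
            return AppError(self.code, self.body, "M_UNKNOWN")
```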
147147 try:
148148 if event.membership == Membership.JOIN:
149149 room_end_token = now_token.room_key
150 deferred_room_state = self.state_handler.get_current_state(
151 event.room_id
150 deferred_room_state = run_in_background(
151 self.state_handler.get_current_state,
152 event.room_id,
152153 )
153154 elif event.membership == Membership.LEAVE:
154155 room_end_token = "s%d" % (event.stream_ordering,)
155 deferred_room_state = self.store.get_state_for_events(
156 [event.event_id], None
156 deferred_room_state = run_in_background(
157 self.store.get_state_for_events,
158 [event.event_id], None,
157159 )
158160 deferred_room_state.addCallback(
159161 lambda states: states[event.event_id]
386388 receipts = []
387389 defer.returnValue(receipts)
388390
389 presence, receipts, (messages, token) = yield defer.gatherResults(
390 [
391 run_in_background(get_presence),
392 run_in_background(get_receipts),
393 run_in_background(
394 self.store.get_recent_events_for_room,
395 room_id,
396 limit=limit,
397 end_token=now_token.room_key,
398 )
399 ],
400 consumeErrors=True,
401 ).addErrback(unwrapFirstError)
391 presence, receipts, (messages, token) = yield make_deferred_yieldable(
392 defer.gatherResults(
393 [
394 run_in_background(get_presence),
395 run_in_background(get_receipts),
396 run_in_background(
397 self.store.get_recent_events_for_room,
398 room_id,
399 limit=limit,
400 end_token=now_token.room_key,
401 )
402 ],
403 consumeErrors=True,
404 ).addErrback(unwrapFirstError),
405 )
402406
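The change above still runs the three fetches in parallel with `run_in_background`, but now wraps `defer.gatherResults` in `make_deferred_yieldable` so the Twisted log context is handed off correctly while waiting. As a loose modern analogy only (asyncio rather than Twisted, with made-up fetchers standing in for the store/handler calls), the concurrency pattern is:

```python
import asyncio

async def get_presence():
    await asyncio.sleep(0)          # stand-in for a real handler call
    return []

async def get_receipts():
    await asyncio.sleep(0)
    return []

async def get_recent_events():
    await asyncio.sleep(0)
    return (["$event1"], "s42")

async def room_initial_sync_parts():
    # Start all three concurrently and wait for every result: the
    # asyncio counterpart of gatherResults(..., consumeErrors=True)
    # awaited behind make_deferred_yieldable.
    presence, receipts, (messages, token) = await asyncio.gather(
        get_presence(), get_receipts(), get_recent_events(),
    )
    return presence, receipts, messages, token
```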
403407 messages = yield filter_events_for_client(
404408 self.store, user_id, messages, is_peeking=is_peeking,
2222
2323 from twisted.internet import defer
2424 from twisted.internet.defer import succeed
25 from twisted.python.failure import Failure
2625
2726 from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
2827 from synapse.api.errors import AuthError, Codes, ConsentNotGivenError, SynapseError
3130 from synapse.events.utils import serialize_event
3231 from synapse.events.validator import EventValidator
3332 from synapse.replication.http.send_event import send_event_to_master
34 from synapse.types import RoomAlias, RoomStreamToken, UserID
35 from synapse.util.async import Limiter, ReadWriteLock
33 from synapse.types import RoomAlias, UserID
34 from synapse.util.async import Linearizer
3635 from synapse.util.frozenutils import frozendict_json_encoder
3736 from synapse.util.logcontext import run_in_background
3837 from synapse.util.metrics import measure_func
39 from synapse.util.stringutils import random_string
40 from synapse.visibility import filter_events_for_client
4138
4239 from ._base import BaseHandler
4340
4441 logger = logging.getLogger(__name__)
4542
4643
47 class PurgeStatus(object):
48 """Object tracking the status of a purge request
49
50 This class contains information on the progress of a purge request, for
51 return by get_purge_status.
52
53 Attributes:
54 status (int): Tracks whether this request has completed. One of
55 STATUS_{ACTIVE,COMPLETE,FAILED}
44 class MessageHandler(object):
45 """Contains some read only APIs to get state about a room
5646 """
5747
58 STATUS_ACTIVE = 0
59 STATUS_COMPLETE = 1
60 STATUS_FAILED = 2
61
62 STATUS_TEXT = {
63 STATUS_ACTIVE: "active",
64 STATUS_COMPLETE: "complete",
65 STATUS_FAILED: "failed",
66 }
67
68 def __init__(self):
69 self.status = PurgeStatus.STATUS_ACTIVE
70
71 def asdict(self):
72 return {
73 "status": PurgeStatus.STATUS_TEXT[self.status]
74 }
75
76
77 class MessageHandler(BaseHandler):
78
7948 def __init__(self, hs):
80 super(MessageHandler, self).__init__(hs)
81 self.hs = hs
49 self.auth = hs.get_auth()
50 self.clock = hs.get_clock()
8251 self.state = hs.get_state_handler()
83 self.clock = hs.get_clock()
84
85 self.pagination_lock = ReadWriteLock()
86 self._purges_in_progress_by_room = set()
87 # map from purge id to PurgeStatus
88 self._purges_by_id = {}
89
90 def start_purge_history(self, room_id, token,
91 delete_local_events=False):
92 """Start off a history purge on a room.
93
94 Args:
95 room_id (str): The room to purge from
96
97 token (str): topological token to delete events before
98 delete_local_events (bool): True to delete local events as well as
99 remote ones
100
101 Returns:
102 str: unique ID for this purge transaction.
103 """
104 if room_id in self._purges_in_progress_by_room:
105 raise SynapseError(
106 400,
107 "History purge already in progress for %s" % (room_id, ),
108 )
109
110 purge_id = random_string(16)
111
112 # we log the purge_id here so that it can be tied back to the
113 # request id in the log lines.
114 logger.info("[purge] starting purge_id %s", purge_id)
115
116 self._purges_by_id[purge_id] = PurgeStatus()
117 run_in_background(
118 self._purge_history,
119 purge_id, room_id, token, delete_local_events,
120 )
121 return purge_id
122
123 @defer.inlineCallbacks
124 def _purge_history(self, purge_id, room_id, token,
125 delete_local_events):
126 """Carry out a history purge on a room.
127
128 Args:
129 purge_id (str): The id for this purge
130 room_id (str): The room to purge from
131 token (str): topological token to delete events before
132 delete_local_events (bool): True to delete local events as well as
133 remote ones
134
135 Returns:
136 Deferred
137 """
138 self._purges_in_progress_by_room.add(room_id)
139 try:
140 with (yield self.pagination_lock.write(room_id)):
141 yield self.store.purge_history(
142 room_id, token, delete_local_events,
143 )
144 logger.info("[purge] complete")
145 self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
146 except Exception:
147 logger.error("[purge] failed: %s", Failure().getTraceback().rstrip())
148 self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
149 finally:
150 self._purges_in_progress_by_room.discard(room_id)
151
152 # remove the purge from the list 24 hours after it completes
153 def clear_purge():
154 del self._purges_by_id[purge_id]
155 self.hs.get_reactor().callLater(24 * 3600, clear_purge)
156
157 def get_purge_status(self, purge_id):
158 """Get the current status of an active purge
159
160 Args:
161 purge_id (str): purge_id returned by start_purge_history
162
163 Returns:
164 PurgeStatus|None
165 """
166 return self._purges_by_id.get(purge_id)
167
168 @defer.inlineCallbacks
169 def get_messages(self, requester, room_id=None, pagin_config=None,
170 as_client_event=True, event_filter=None):
171 """Get messages in a room.
172
173 Args:
174 requester (Requester): The user requesting messages.
175 room_id (str): The room they want messages from.
176 pagin_config (synapse.api.streams.PaginationConfig): The pagination
177 config rules to apply, if any.
178 as_client_event (bool): True to get events in client-server format.
179 event_filter (Filter): Filter to apply to results or None
180 Returns:
181 dict: Pagination API results
182 """
183 user_id = requester.user.to_string()
184
185 if pagin_config.from_token:
186 room_token = pagin_config.from_token.room_key
187 else:
188 pagin_config.from_token = (
189 yield self.hs.get_event_sources().get_current_token_for_room(
190 room_id=room_id
191 )
192 )
193 room_token = pagin_config.from_token.room_key
194
195 room_token = RoomStreamToken.parse(room_token)
196
197 pagin_config.from_token = pagin_config.from_token.copy_and_replace(
198 "room_key", str(room_token)
199 )
200
201 source_config = pagin_config.get_source_config("room")
202
203 with (yield self.pagination_lock.read(room_id)):
204 membership, member_event_id = yield self._check_in_room_or_world_readable(
205 room_id, user_id
206 )
207
208 if source_config.direction == 'b':
209 # if we're going backwards, we might need to backfill. This
210 # requires that we have a topo token.
211 if room_token.topological:
212 max_topo = room_token.topological
213 else:
214 max_topo = yield self.store.get_max_topological_token(
215 room_id, room_token.stream
216 )
217
218 if membership == Membership.LEAVE:
219 # If they have left the room then clamp the token to be before
220 # they left the room, to save the effort of loading from the
221 # database.
222 leave_token = yield self.store.get_topological_token_for_event(
223 member_event_id
224 )
225 leave_token = RoomStreamToken.parse(leave_token)
226 if leave_token.topological < max_topo:
227 source_config.from_key = str(leave_token)
228
229 yield self.hs.get_handlers().federation_handler.maybe_backfill(
230 room_id, max_topo
231 )
232
233 events, next_key = yield self.store.paginate_room_events(
234 room_id=room_id,
235 from_key=source_config.from_key,
236 to_key=source_config.to_key,
237 direction=source_config.direction,
238 limit=source_config.limit,
239 event_filter=event_filter,
240 )
241
242 next_token = pagin_config.from_token.copy_and_replace(
243 "room_key", next_key
244 )
245
246 if not events:
247 defer.returnValue({
248 "chunk": [],
249 "start": pagin_config.from_token.to_string(),
250 "end": next_token.to_string(),
251 })
252
253 if event_filter:
254 events = event_filter.filter(events)
255
256 events = yield filter_events_for_client(
257 self.store,
258 user_id,
259 events,
260 is_peeking=(member_event_id is None),
261 )
262
263 time_now = self.clock.time_msec()
264
265 chunk = {
266 "chunk": [
267 serialize_event(e, time_now, as_client_event)
268 for e in events
269 ],
270 "start": pagin_config.from_token.to_string(),
271 "end": next_token.to_string(),
272 }
273
274 defer.returnValue(chunk)
52 self.store = hs.get_datastore()
27553
27654 @defer.inlineCallbacks
27755 def get_room_data(self, user_id=None, room_id=None,
28563 Raises:
28664 SynapseError if something went wrong.
28765 """
288 membership, membership_event_id = yield self._check_in_room_or_world_readable(
66 membership, membership_event_id = yield self.auth.check_in_room_or_world_readable(
28967 room_id, user_id
29068 )
29169
29270 if membership == Membership.JOIN:
293 data = yield self.state_handler.get_current_state(
71 data = yield self.state.get_current_state(
29472 room_id, event_type, state_key
29573 )
29674 elif membership == Membership.LEAVE:
30381 defer.returnValue(data)
30482
30583 @defer.inlineCallbacks
306 def _check_in_room_or_world_readable(self, room_id, user_id):
307 try:
308 # check_user_was_in_room will return the most recent membership
309 # event for the user if:
310 # * The user is a non-guest user, and was ever in the room
311 # * The user is a guest user, and has joined the room
312 # else it will throw.
313 member_event = yield self.auth.check_user_was_in_room(room_id, user_id)
314 defer.returnValue((member_event.membership, member_event.event_id))
315 return
316 except AuthError:
317 visibility = yield self.state_handler.get_current_state(
318 room_id, EventTypes.RoomHistoryVisibility, ""
319 )
320 if (
321 visibility and
322 visibility.content["history_visibility"] == "world_readable"
323 ):
324 defer.returnValue((Membership.JOIN, None))
325 return
326 raise AuthError(
327 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
328 )
329
330 @defer.inlineCallbacks
33184 def get_state_events(self, user_id, room_id, is_guest=False):
33285 """Retrieve all state events for a given room. If the user is
33386 joined to the room then return the current state. If the user has
33992 Returns:
34093 A list of dicts representing state events. [{}, {}, {}]
34194 """
342 membership, membership_event_id = yield self._check_in_room_or_world_readable(
95 membership, membership_event_id = yield self.auth.check_in_room_or_world_readable(
34396 room_id, user_id
34497 )
34598
34699 if membership == Membership.JOIN:
347 room_state = yield self.state_handler.get_current_state(room_id)
100 room_state = yield self.state.get_current_state(room_id)
348101 elif membership == Membership.LEAVE:
349102 room_state = yield self.store.get_state_for_events(
350103 [membership_event_id], None
372125 if not requester.app_service:
373126 # We check AS auth after fetching the room membership, as it
374127 # requires us to pull out all joined members anyway.
375 membership, _ = yield self._check_in_room_or_world_readable(
128 membership, _ = yield self.auth.check_in_room_or_world_readable(
376129 room_id, user_id
377130 )
378131 if membership != Membership.JOIN:
426179
427180 # We arbitrarily limit concurrent event creation for a room to 5.
428181 # This is to stop us from diverging history *too* much.
429 self.limiter = Limiter(max_count=5)
182 self.limiter = Linearizer(max_count=5, name="room_event_creation_limit")
430183
431184 self.action_generator = hs.get_action_generator()
432185
629382 If so, returns the version of the event in context.
630383 Otherwise, returns None.
631384 """
632 prev_event_id = context.prev_state_ids.get((event.type, event.state_key))
385 prev_state_ids = yield context.get_prev_state_ids(self.store)
386 prev_event_id = prev_state_ids.get((event.type, event.state_key))
633387 prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
634388 if not prev_event:
635389 return
751505 event = builder.build()
752506
753507 logger.debug(
754 "Created event %s with state: %s",
755 event.event_id, context.prev_state_ids,
508 "Created event %s",
509 event.event_id,
756510 )
757511
758512 defer.returnValue(
805559 # If we're a worker we need to hit out to the master.
806560 if self.config.worker_app:
807561 yield send_event_to_master(
808 self.hs.get_clock(),
809 self.http_client,
562 clock=self.hs.get_clock(),
563 store=self.store,
564 client=self.http_client,
810565 host=self.config.worker_replication_host,
811566 port=self.config.worker_replication_http_port,
812567 requester=requester,
883638 e.sender == event.sender
884639 )
885640
641 current_state_ids = yield context.get_current_state_ids(self.store)
642
886643 state_to_include_ids = [
887644 e_id
888 for k, e_id in iteritems(context.current_state_ids)
645 for k, e_id in iteritems(current_state_ids)
889646 if k[0] in self.hs.config.room_invite_state_types
890647 or k == (EventTypes.Member, event.sender)
891648 ]
921678 )
922679
923680 if event.type == EventTypes.Redaction:
681 prev_state_ids = yield context.get_prev_state_ids(self.store)
924682 auth_events_ids = yield self.auth.compute_auth_events(
925 event, context.prev_state_ids, for_verification=True,
683 event, prev_state_ids, for_verification=True,
926684 )
927685 auth_events = yield self.store.get_events(auth_events_ids)
928686 auth_events = {
942700 "You don't have permission to redact events"
943701 )
944702
945 if event.type == EventTypes.Create and context.prev_state_ids:
946 raise AuthError(
947 403,
948 "Changing the room create event is forbidden",
949 )
703 if event.type == EventTypes.Create:
704 prev_state_ids = yield context.get_prev_state_ids(self.store)
705 if prev_state_ids:
706 raise AuthError(
707 403,
708 "Changing the room create event is forbidden",
709 )
950710
951711 (event_stream_id, max_stream_id) = yield self.store.persist_event(
952712 event, context=context
0 # -*- coding: utf-8 -*-
1 # Copyright 2014 - 2016 OpenMarket Ltd
2 # Copyright 2017 - 2018 New Vector Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 import logging
16
17 from twisted.internet import defer
18 from twisted.python.failure import Failure
19
20 from synapse.api.constants import Membership
21 from synapse.api.errors import SynapseError
22 from synapse.events.utils import serialize_event
23 from synapse.types import RoomStreamToken
24 from synapse.util.async import ReadWriteLock
25 from synapse.util.logcontext import run_in_background
26 from synapse.util.stringutils import random_string
27 from synapse.visibility import filter_events_for_client
28
29 logger = logging.getLogger(__name__)
30
31
32 class PurgeStatus(object):
33 """Object tracking the status of a purge request
34
35 This class contains information on the progress of a purge request, for
36 return by get_purge_status.
37
38 Attributes:
39 status (int): Tracks whether this request has completed. One of
40 STATUS_{ACTIVE,COMPLETE,FAILED}
41 """
42
43 STATUS_ACTIVE = 0
44 STATUS_COMPLETE = 1
45 STATUS_FAILED = 2
46
47 STATUS_TEXT = {
48 STATUS_ACTIVE: "active",
49 STATUS_COMPLETE: "complete",
50 STATUS_FAILED: "failed",
51 }
52
53 def __init__(self):
54 self.status = PurgeStatus.STATUS_ACTIVE
55
56 def asdict(self):
57 return {
58 "status": PurgeStatus.STATUS_TEXT[self.status]
59 }
60
61
62 class PaginationHandler(object):
63 """Handles pagination and purge history requests.
64
65 These are in the same handler due to the fact we need to block clients
66 paginating during a purge.
67 """
68
69 def __init__(self, hs):
70 self.hs = hs
71 self.auth = hs.get_auth()
72 self.store = hs.get_datastore()
73 self.clock = hs.get_clock()
74
75 self.pagination_lock = ReadWriteLock()
76 self._purges_in_progress_by_room = set()
77 # map from purge id to PurgeStatus
78 self._purges_by_id = {}
79
80 def start_purge_history(self, room_id, token,
81 delete_local_events=False):
82 """Start off a history purge on a room.
83
84 Args:
85 room_id (str): The room to purge from
86
87 token (str): topological token to delete events before
88 delete_local_events (bool): True to delete local events as well as
89 remote ones
90
91 Returns:
92 str: unique ID for this purge transaction.
93 """
94 if room_id in self._purges_in_progress_by_room:
95 raise SynapseError(
96 400,
97 "History purge already in progress for %s" % (room_id, ),
98 )
99
100 purge_id = random_string(16)
101
102 # we log the purge_id here so that it can be tied back to the
103 # request id in the log lines.
104 logger.info("[purge] starting purge_id %s", purge_id)
105
106 self._purges_by_id[purge_id] = PurgeStatus()
107 run_in_background(
108 self._purge_history,
109 purge_id, room_id, token, delete_local_events,
110 )
111 return purge_id
112
113 @defer.inlineCallbacks
114 def _purge_history(self, purge_id, room_id, token,
115 delete_local_events):
116 """Carry out a history purge on a room.
117
118 Args:
119 purge_id (str): The id for this purge
120 room_id (str): The room to purge from
121 token (str): topological token to delete events before
122 delete_local_events (bool): True to delete local events as well as
123 remote ones
124
125 Returns:
126 Deferred
127 """
128 self._purges_in_progress_by_room.add(room_id)
129 try:
130 with (yield self.pagination_lock.write(room_id)):
131 yield self.store.purge_history(
132 room_id, token, delete_local_events,
133 )
134 logger.info("[purge] complete")
135 self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
136 except Exception:
137 logger.error("[purge] failed: %s", Failure().getTraceback().rstrip())
138 self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
139 finally:
140 self._purges_in_progress_by_room.discard(room_id)
141
142 # remove the purge from the list 24 hours after it completes
143 def clear_purge():
144 del self._purges_by_id[purge_id]
145 self.hs.get_reactor().callLater(24 * 3600, clear_purge)
146
147 def get_purge_status(self, purge_id):
148 """Get the current status of an active purge
149
150 Args:
151 purge_id (str): purge_id returned by start_purge_history
152
153 Returns:
154 PurgeStatus|None
155 """
156 return self._purges_by_id.get(purge_id)
157
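`start_purge_history` enforces one purge per room at a time, tracks each purge under a random id, and `_purge_history` flips the status and later drops the record. The bookkeeping on its own, as a compact sketch (illustrative names, no Twisted machinery or delayed cleanup):

```python
import secrets

class PurgeTracker:
    def __init__(self):
        self._in_progress_rooms = set()
        self._status_by_id = {}

    def start(self, room_id):
        # Refuse overlapping purges of the same room, as the handler does.
        if room_id in self._in_progress_rooms:
            raise ValueError(
                "History purge already in progress for %s" % (room_id,))
        purge_id = secrets.token_hex(8)
        self._in_progress_rooms.add(room_id)
        self._status_by_id[purge_id] = "active"
        return purge_id

    def finish(self, room_id, purge_id, failed=False):
        self._status_by_id[purge_id] = "failed" if failed else "complete"
        self._in_progress_rooms.discard(room_id)

    def get_status(self, purge_id):
        return self._status_by_id.get(purge_id)
```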
158 @defer.inlineCallbacks
159 def get_messages(self, requester, room_id=None, pagin_config=None,
160 as_client_event=True, event_filter=None):
161 """Get messages in a room.
162
163 Args:
164 requester (Requester): The user requesting messages.
165 room_id (str): The room they want messages from.
166 pagin_config (synapse.api.streams.PaginationConfig): The pagination
167 config rules to apply, if any.
168 as_client_event (bool): True to get events in client-server format.
169 event_filter (Filter): Filter to apply to results or None
170 Returns:
171 dict: Pagination API results
172 """
173 user_id = requester.user.to_string()
174
175 if pagin_config.from_token:
176 room_token = pagin_config.from_token.room_key
177 else:
178 pagin_config.from_token = (
179 yield self.hs.get_event_sources().get_current_token_for_room(
180 room_id=room_id
181 )
182 )
183 room_token = pagin_config.from_token.room_key
184
185 room_token = RoomStreamToken.parse(room_token)
186
187 pagin_config.from_token = pagin_config.from_token.copy_and_replace(
188 "room_key", str(room_token)
189 )
190
191 source_config = pagin_config.get_source_config("room")
192
193 with (yield self.pagination_lock.read(room_id)):
194 membership, member_event_id = yield self.auth.check_in_room_or_world_readable(
195 room_id, user_id
196 )
197
198 if source_config.direction == 'b':
199 # if we're going backwards, we might need to backfill. This
200 # requires that we have a topo token.
201 if room_token.topological:
202 max_topo = room_token.topological
203 else:
204 max_topo = yield self.store.get_max_topological_token(
205 room_id, room_token.stream
206 )
207
208 if membership == Membership.LEAVE:
209 # If they have left the room then clamp the token to be before
210 # they left the room, to save the effort of loading from the
211 # database.
212 leave_token = yield self.store.get_topological_token_for_event(
213 member_event_id
214 )
215 leave_token = RoomStreamToken.parse(leave_token)
216 if leave_token.topological < max_topo:
217 source_config.from_key = str(leave_token)
218
219 yield self.hs.get_handlers().federation_handler.maybe_backfill(
220 room_id, max_topo
221 )
222
223 events, next_key = yield self.store.paginate_room_events(
224 room_id=room_id,
225 from_key=source_config.from_key,
226 to_key=source_config.to_key,
227 direction=source_config.direction,
228 limit=source_config.limit,
229 event_filter=event_filter,
230 )
231
232 next_token = pagin_config.from_token.copy_and_replace(
233 "room_key", next_key
234 )
235
236 if not events:
237 defer.returnValue({
238 "chunk": [],
239 "start": pagin_config.from_token.to_string(),
240 "end": next_token.to_string(),
241 })
242
243 if event_filter:
244 events = event_filter.filter(events)
245
246 events = yield filter_events_for_client(
247 self.store,
248 user_id,
249 events,
250 is_peeking=(member_event_id is None),
251 )
252
253 time_now = self.clock.time_msec()
254
255 chunk = {
256 "chunk": [
257 serialize_event(e, time_now, as_client_event)
258 for e in events
259 ],
260 "start": pagin_config.from_token.to_string(),
261 "end": next_token.to_string(),
262 }
263
264 defer.returnValue(chunk)
1616
1717 from twisted.internet import defer
1818
19 from synapse.api.errors import AuthError, CodeMessageException, SynapseError
19 from synapse.api.errors import (
20 AuthError,
21 CodeMessageException,
22 Codes,
23 StoreError,
24 SynapseError,
25 )
26 from synapse.metrics.background_process_metrics import run_as_background_process
2027 from synapse.types import UserID, get_domain_from_id
2128
2229 from ._base import BaseHandler
4047
4148 if hs.config.worker_app is None:
4249 self.clock.looping_call(
43 self._update_remote_profile_cache, self.PROFILE_UPDATE_MS,
50 self._start_update_remote_profile_cache, self.PROFILE_UPDATE_MS,
4451 )
4552
4653 @defer.inlineCallbacks
4754 def get_profile(self, user_id):
4855 target_user = UserID.from_string(user_id)
4956 if self.hs.is_mine(target_user):
50 displayname = yield self.store.get_profile_displayname(
51 target_user.localpart
52 )
53 avatar_url = yield self.store.get_profile_avatar_url(
54 target_user.localpart
55 )
57 try:
58 displayname = yield self.store.get_profile_displayname(
59 target_user.localpart
60 )
61 avatar_url = yield self.store.get_profile_avatar_url(
62 target_user.localpart
63 )
64 except StoreError as e:
65 if e.code == 404:
66 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
67 raise
5668
5769 defer.returnValue({
5870 "displayname": displayname,
7284 except CodeMessageException as e:
7385 if e.code != 404:
7486 logger.exception("Failed to get displayname")
75
7687 raise
7788
7889 @defer.inlineCallbacks
8394 """
8495 target_user = UserID.from_string(user_id)
8596 if self.hs.is_mine(target_user):
86 displayname = yield self.store.get_profile_displayname(
87 target_user.localpart
88 )
89 avatar_url = yield self.store.get_profile_avatar_url(
90 target_user.localpart
91 )
97 try:
98 displayname = yield self.store.get_profile_displayname(
99 target_user.localpart
100 )
101 avatar_url = yield self.store.get_profile_avatar_url(
102 target_user.localpart
103 )
104 except StoreError as e:
105 if e.code == 404:
106 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
107 raise
92108
93109 defer.returnValue({
94110 "displayname": displayname,
101117 @defer.inlineCallbacks
102118 def get_displayname(self, target_user):
103119 if self.hs.is_mine(target_user):
104 displayname = yield self.store.get_profile_displayname(
105 target_user.localpart
106 )
120 try:
121 displayname = yield self.store.get_profile_displayname(
122 target_user.localpart
123 )
124 except StoreError as e:
125 if e.code == 404:
126 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
127 raise
107128
108129 defer.returnValue(displayname)
109130 else:
120141 except CodeMessageException as e:
121142 if e.code != 404:
122143 logger.exception("Failed to get displayname")
123
124144 raise
125145 except Exception:
126146 logger.exception("Failed to get displayname")
155175 @defer.inlineCallbacks
156176 def get_avatar_url(self, target_user):
157177 if self.hs.is_mine(target_user):
158 avatar_url = yield self.store.get_profile_avatar_url(
159 target_user.localpart
160 )
161
178 try:
179 avatar_url = yield self.store.get_profile_avatar_url(
180 target_user.localpart
181 )
182 except StoreError as e:
183 if e.code == 404:
184 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
185 raise
162186 defer.returnValue(avatar_url)
163187 else:
164188 try:
211235 just_field = args.get("field", None)
212236
213237 response = {}
214
215 if just_field is None or just_field == "displayname":
216 response["displayname"] = yield self.store.get_profile_displayname(
217 user.localpart
218 )
219
220 if just_field is None or just_field == "avatar_url":
221 response["avatar_url"] = yield self.store.get_profile_avatar_url(
222 user.localpart
223 )
238 try:
239 if just_field is None or just_field == "displayname":
240 response["displayname"] = yield self.store.get_profile_displayname(
241 user.localpart
242 )
243
244 if just_field is None or just_field == "avatar_url":
245 response["avatar_url"] = yield self.store.get_profile_avatar_url(
246 user.localpart
247 )
248 except StoreError as e:
249 if e.code == 404:
250 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
251 raise
224252
225253 defer.returnValue(response)
226254
253281 room_id, str(e.message)
254282 )
255283
284 def _start_update_remote_profile_cache(self):
285 return run_as_background_process(
286 "Update remote profile", self._update_remote_profile_cache,
287 )
288
289 @defer.inlineCallbacks
256290 def _update_remote_profile_cache(self):
257291 """Called periodically to check profiles of remote users we haven't
258292 checked in a while.
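The `StoreError` → `SynapseError` mapping that recurs throughout the profile hunks above (a store-level 404 becomes a client-visible "Profile was not found") can be sketched standalone. The exception classes here are simplified stand-ins, not Synapse's real ones:

```python
# Simplified stand-ins for Synapse's error classes (illustrative only).
class StoreError(Exception):
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code

class SynapseError(Exception):
    def __init__(self, code, msg, errcode):
        super().__init__(msg)
        self.code = code
        self.errcode = errcode

def get_displayname(store_lookup, localpart):
    # A store-level 404 becomes a client-visible "Profile was not found";
    # any other StoreError propagates unchanged.
    try:
        return store_lookup(localpart)
    except StoreError as e:
        if e.code == 404:
            raise SynapseError(404, "Profile was not found", "M_NOT_FOUND")
        raise
```

In the actual handler the lookup is `self.store.get_profile_displayname(...)` and the errcode constant is `Codes.NOT_FOUND`; the shape of the try/except is the same in every hunk.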
4444 hs (synapse.server.HomeServer):
4545 """
4646 super(RegistrationHandler, self).__init__(hs)
47
47 self.hs = hs
4848 self.auth = hs.get_auth()
4949 self._auth_handler = hs.get_auth_handler()
5050 self.profile_handler = hs.get_profile_handler()
130130 Args:
131131 localpart : The local part of the user ID to register. If None,
132132 one will be generated.
133 password (str) : The password to assign to this user so they can
133 password (unicode) : The password to assign to this user so they can
134134 login again. This can be None which means they cannot login again
135135 via a password (e.g. the user is an application service user).
136136 generate_token (bool): Whether a new access token should be
143143 Raises:
144144 RegistrationError if there was a problem registering.
145145 """
146 yield self._check_mau_limits()
146147 password_hash = None
147148 if password:
148149 password_hash = yield self.auth_handler().hash(password)
287288 400,
288289 "User ID can only contain characters a-z, 0-9, or '=_-./'",
289290 )
291 yield self._check_mau_limits()
290292 user = UserID(localpart, self.hs.hostname)
291293 user_id = user.to_string()
292294
436438 """
437439 if localpart is None:
438440 raise SynapseError(400, "Request must include user id")
439
441 yield self._check_mau_limits()
440442 need_register = True
441443
442444 try:
530532 remote_room_hosts=remote_room_hosts,
531533 action="join",
532534 )
535
536 @defer.inlineCallbacks
537 def _check_mau_limits(self):
538 """
539 Do not accept registrations if monthly active user limits exceeded
540 and limiting is enabled
541 """
542 if self.hs.config.limit_usage_by_mau is True:
543 current_mau = yield self.store.count_monthly_users()
544 if current_mau >= self.hs.config.max_mau_value:
545 raise RegistrationError(
546 403, "MAU Limit Exceeded", Codes.MAU_LIMIT_EXCEEDED
547 )
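The registration paths above now call `_check_mau_limits` before doing any work. The guard itself reduces to a small check; this sketch uses illustrative names rather than Synapse's actual API:

```python
class MauLimitExceeded(Exception):
    """Raised when the monthly-active-user cap has been reached."""

def check_mau_limits(limit_usage_by_mau, current_mau, max_mau_value):
    # Refuse registration once the current MAU count reaches the cap,
    # but only when limiting is enabled in config.
    if limit_usage_by_mau and current_mau >= max_mau_value:
        raise MauLimitExceeded("MAU Limit Exceeded")
```

In Synapse proper the count comes from `store.count_monthly_users()` and the failure is a `RegistrationError` carrying `Codes.MAU_LIMIT_EXCEEDED`.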
1414 # limitations under the License.
1515
1616 """Contains functions for performing events on rooms."""
17 import itertools
1718 import logging
1819 import math
1920 import string
2324
2425 from synapse.api.constants import EventTypes, JoinRules, RoomCreationPreset
2526 from synapse.api.errors import AuthError, Codes, StoreError, SynapseError
26 from synapse.types import RoomAlias, RoomID, RoomStreamToken, UserID
27 from synapse.types import RoomAlias, RoomID, RoomStreamToken, StreamToken, UserID
2728 from synapse.util import stringutils
2829 from synapse.visibility import filter_events_for_client
2930
394395 )
395396
396397
397 class RoomContextHandler(BaseHandler):
398 class RoomContextHandler(object):
399 def __init__(self, hs):
400 self.hs = hs
401 self.store = hs.get_datastore()
402
398403 @defer.inlineCallbacks
399 def get_event_context(self, user, room_id, event_id, limit):
404 def get_event_context(self, user, room_id, event_id, limit, event_filter):
400405 """Retrieves events, pagination tokens and state around a given event
401406 in a room.
402407
406411 event_id (str)
407412 limit (int): The maximum number of events to return in total
408413 (excluding state).
414 event_filter (Filter|None): the filter to apply to the events returned
415 (excluding the target event_id)
409416
410417 Returns:
411418 dict, or None if the event isn't found
412419 """
413420 before_limit = math.floor(limit / 2.)
414421 after_limit = limit - before_limit
415
416 now_token = yield self.hs.get_event_sources().get_current_token()
417422
418423 users = yield self.store.get_users_in_room(room_id)
419424 is_peeking = user.to_string() not in users
440445 )
441446
442447 results = yield self.store.get_events_around(
443 room_id, event_id, before_limit, after_limit
448 room_id, event_id, before_limit, after_limit, event_filter
444449 )
445450
446451 results["events_before"] = yield filter_evts(results["events_before"])
452457 else:
453458 last_event_id = event_id
454459
460 types = None
461 filtered_types = None
462 if event_filter and event_filter.lazy_load_members():
463 members = set(ev.sender for ev in itertools.chain(
464 results["events_before"],
465 (results["event"],),
466 results["events_after"],
467 ))
468 filtered_types = [EventTypes.Member]
469 types = [(EventTypes.Member, member) for member in members]
470
471 # XXX: why do we return the state as of the last event rather than the
472 # first? Shouldn't we be consistent with /sync?
473 # https://github.com/matrix-org/matrix-doc/issues/687
474
455475 state = yield self.store.get_state_for_events(
456 [last_event_id], None
476 [last_event_id], types, filtered_types=filtered_types,
457477 )
458478 results["state"] = list(state[last_event_id].values())
459479
460 results["start"] = now_token.copy_and_replace(
480 # We use a dummy token here as we only care about the room portion of
481 # the token, which we replace.
482 token = StreamToken.START
483
484 results["start"] = token.copy_and_replace(
461485 "room_key", results["start"]
462486 ).to_string()
463487
464 results["end"] = now_token.copy_and_replace(
488 results["end"] = token.copy_and_replace(
465489 "room_key", results["end"]
466490 ).to_string()
467491
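The lazy-load branch in `get_event_context` above derives a per-member state filter from the senders visible in the context window. A minimal sketch of that derivation, using plain dicts in place of event objects and the literal `"m.room.member"` in place of `EventTypes.Member`:

```python
import itertools

def member_types_for_context(events_before, event, events_after):
    # Collect every sender visible in the window, including the target
    # event itself, and request only their m.room.member state.
    members = set(ev["sender"] for ev in itertools.chain(
        events_before, (event,), events_after,
    ))
    return [("m.room.member", member) for member in members]
```

Passing the resulting `types` together with `filtered_types=["m.room.member"]` to the state store means membership state is restricted to those senders while all other state event types are returned unfiltered.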
200200 ratelimit=ratelimit,
201201 )
202202
203 prev_member_event_id = context.prev_state_ids.get(
203 prev_state_ids = yield context.get_prev_state_ids(self.store)
204
205 prev_member_event_id = prev_state_ids.get(
204206 (EventTypes.Member, target.to_string()),
205207 None
206208 )
495497 if prev_event is not None:
496498 return
497499
500 prev_state_ids = yield context.get_prev_state_ids(self.store)
498501 if event.membership == Membership.JOIN:
499502 if requester.is_guest:
500 guest_can_join = yield self._can_guest_join(context.prev_state_ids)
503 guest_can_join = yield self._can_guest_join(prev_state_ids)
501504 if not guest_can_join:
502505 # This should be an auth check, but guests are a local concept,
503506 # so don't really fit into the general auth process.
516519 ratelimit=ratelimit,
517520 )
518521
519 prev_member_event_id = context.prev_state_ids.get(
522 prev_member_event_id = prev_state_ids.get(
520523 (EventTypes.Member, event.state_key),
521524 None
522525 )
704707 inviter_display_name = member_event.content.get("displayname", "")
705708 inviter_avatar_url = member_event.content.get("avatar_url", "")
706709
710 # if user has no display name, default to their MXID
711 if not inviter_display_name:
712 inviter_display_name = user.to_string()
713
707714 canonical_room_alias = ""
708715 canonical_alias_event = room_state.get((EventTypes.CanonicalAlias, ""))
709716 if canonical_alias_event:
286286 contexts = {}
287287 for event in allowed_events:
288288 res = yield self.store.get_events_around(
289 event.room_id, event.event_id, before_limit, after_limit
289 event.room_id, event.event_id, before_limit, after_limit,
290290 )
291291
292292 logger.info(
00 # -*- coding: utf-8 -*-
1 # Copyright 2015 - 2016 OpenMarket Ltd
1 # Copyright 2015, 2016 OpenMarket Ltd
2 # Copyright 2018 New Vector Ltd
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
2425 from synapse.push.clientformat import format_push_rules_for_user
2526 from synapse.types import RoomStreamToken
2627 from synapse.util.async import concurrently_execute
28 from synapse.util.caches.expiringcache import ExpiringCache
29 from synapse.util.caches.lrucache import LruCache
2730 from synapse.util.caches.response_cache import ResponseCache
2831 from synapse.util.logcontext import LoggingContext
2932 from synapse.util.metrics import Measure, measure_func
3033 from synapse.visibility import filter_events_for_client
3134
3235 logger = logging.getLogger(__name__)
36
37 # Store the cache that tracks which lazy-loaded members have been sent to a given
38 # client for no more than 30 minutes.
39 LAZY_LOADED_MEMBERS_CACHE_MAX_AGE = 30 * 60 * 1000
40
41 # Remember the last 100 members we sent to a client for the purposes of
42 # avoiding redundantly sending the same lazy-loaded members to the client
43 LAZY_LOADED_MEMBERS_CACHE_MAX_SIZE = 100
3344
3445
3546 SyncConfig = collections.namedtuple("SyncConfig", [
179190 self.clock = hs.get_clock()
180191 self.response_cache = ResponseCache(hs, "sync")
181192 self.state = hs.get_state_handler()
193
194 # ExpiringCache((User, Device)) -> LruCache(state_key => event_id)
195 self.lazy_loaded_members_cache = ExpiringCache(
196 "lazy_loaded_members_cache", self.clock,
197 max_len=0, expiry_ms=LAZY_LOADED_MEMBERS_CACHE_MAX_AGE,
198 )
182199
183200 def wait_for_sync_for_user(self, sync_config, since_token=None, timeout=0,
184201 full_state=False):
415432 ))
416433
417434 @defer.inlineCallbacks
418 def get_state_after_event(self, event):
435 def get_state_after_event(self, event, types=None, filtered_types=None):
419436 """
420437 Get the room state after the given event
421438
422439 Args:
423440 event(synapse.events.EventBase): event of interest
441 types(list[(str, str|None)]|None): List of (type, state_key) tuples
442 which are used to filter the state fetched. If `state_key` is None,
443 all events are returned of the given type.
444 May be None, which matches any key.
445 filtered_types(list[str]|None): Only apply filtering via `types` to this
446 list of event types. Other types of events are returned unfiltered.
447 If None, `types` filtering is applied to all events.
424448
425449 Returns:
426450 A Deferred map from ((type, state_key)->Event)
427451 """
428 state_ids = yield self.store.get_state_ids_for_event(event.event_id)
452 state_ids = yield self.store.get_state_ids_for_event(
453 event.event_id, types, filtered_types=filtered_types,
454 )
429455 if event.is_state():
430456 state_ids = state_ids.copy()
431457 state_ids[(event.type, event.state_key)] = event.event_id
432458 defer.returnValue(state_ids)
433459
434460 @defer.inlineCallbacks
435 def get_state_at(self, room_id, stream_position):
461 def get_state_at(self, room_id, stream_position, types=None, filtered_types=None):
436462 """ Get the room state at a particular stream position
437463
438464 Args:
439465 room_id(str): room for which to get state
440466 stream_position(StreamToken): point at which to get state
467 types(list[(str, str|None)]|None): List of (type, state_key) tuples
468 which are used to filter the state fetched. If `state_key` is None,
469 all events are returned of the given type.
470 filtered_types(list[str]|None): Only apply filtering via `types` to this
471 list of event types. Other types of events are returned unfiltered.
472 If None, `types` filtering is applied to all events.
441473
442474 Returns:
443475 A Deferred map from ((type, state_key)->Event)
452484
453485 if last_events:
454486 last_event = last_events[-1]
455 state = yield self.get_state_after_event(last_event)
487 state = yield self.get_state_after_event(
488 last_event, types, filtered_types=filtered_types,
489 )
456490
457491 else:
458492 # no events in this room - so presumably no state
484518 # TODO(mjark) Check for new redactions in the state events.
485519
486520 with Measure(self.clock, "compute_state_delta"):
521
522 types = None
523 filtered_types = None
524
525 lazy_load_members = sync_config.filter_collection.lazy_load_members()
526 include_redundant_members = (
527 sync_config.filter_collection.include_redundant_members()
528 )
529
530 if lazy_load_members:
531 # We only request state for the members needed to display the
532 # timeline:
533
534 types = [
535 (EventTypes.Member, state_key)
536 for state_key in set(
537 event.sender # FIXME: we also care about invite targets etc.
538 for event in batch.events
539 )
540 ]
541
542 # only apply the filtering to room members
543 filtered_types = [EventTypes.Member]
544
545 timeline_state = {
546 (event.type, event.state_key): event.event_id
547 for event in batch.events if event.is_state()
548 }
549
487550 if full_state:
488551 if batch:
489552 current_state_ids = yield self.store.get_state_ids_for_event(
490 batch.events[-1].event_id
553 batch.events[-1].event_id, types=types,
554 filtered_types=filtered_types,
491555 )
492556
493557 state_ids = yield self.store.get_state_ids_for_event(
494 batch.events[0].event_id
558 batch.events[0].event_id, types=types,
559 filtered_types=filtered_types,
495560 )
561
496562 else:
497563 current_state_ids = yield self.get_state_at(
498 room_id, stream_position=now_token
564 room_id, stream_position=now_token, types=types,
565 filtered_types=filtered_types,
499566 )
500567
501568 state_ids = current_state_ids
502
503 timeline_state = {
504 (event.type, event.state_key): event.event_id
505 for event in batch.events if event.is_state()
506 }
507569
508570 state_ids = _calculate_state(
509571 timeline_contains=timeline_state,
510572 timeline_start=state_ids,
511573 previous={},
512574 current=current_state_ids,
575 lazy_load_members=lazy_load_members,
513576 )
514577 elif batch.limited:
515578 state_at_previous_sync = yield self.get_state_at(
516 room_id, stream_position=since_token
579 room_id, stream_position=since_token, types=types,
580 filtered_types=filtered_types,
517581 )
518582
519583 current_state_ids = yield self.store.get_state_ids_for_event(
520 batch.events[-1].event_id
584 batch.events[-1].event_id, types=types,
585 filtered_types=filtered_types,
521586 )
522587
523588 state_at_timeline_start = yield self.store.get_state_ids_for_event(
524 batch.events[0].event_id
589 batch.events[0].event_id, types=types,
590 filtered_types=filtered_types,
525591 )
526
527 timeline_state = {
528 (event.type, event.state_key): event.event_id
529 for event in batch.events if event.is_state()
530 }
531592
532593 state_ids = _calculate_state(
533594 timeline_contains=timeline_state,
534595 timeline_start=state_at_timeline_start,
535596 previous=state_at_previous_sync,
536597 current=current_state_ids,
598 lazy_load_members=lazy_load_members,
537599 )
538600 else:
539601 state_ids = {}
602 if lazy_load_members:
603 if types:
604 state_ids = yield self.store.get_state_ids_for_event(
605 batch.events[0].event_id, types=types,
606 filtered_types=filtered_types,
607 )
608
609 if lazy_load_members and not include_redundant_members:
610 cache_key = (sync_config.user.to_string(), sync_config.device_id)
611 cache = self.lazy_loaded_members_cache.get(cache_key)
612 if cache is None:
613 logger.debug("creating LruCache for %r", cache_key)
614 cache = LruCache(LAZY_LOADED_MEMBERS_CACHE_MAX_SIZE)
615 self.lazy_loaded_members_cache[cache_key] = cache
616 else:
617 logger.debug("found LruCache for %r", cache_key)
618
619 # if it's a new sync sequence, then assume the client has had
620 # amnesia and doesn't want any recent lazy-loaded members
621 # de-duplicated.
622 if since_token is None:
623 logger.debug("clearing LruCache for %r", cache_key)
624 cache.clear()
625 else:
626 # only send members which aren't in our LruCache (either
627 # because they're new to this client or have been pushed out
628 # of the cache)
629 logger.debug("filtering state from %r...", state_ids)
630 state_ids = {
631 t: event_id
632 for t, event_id in state_ids.iteritems()
633 if cache.get(t[1]) != event_id
634 }
635 logger.debug("...to %r", state_ids)
636
637 # add any member IDs we are about to send into our LruCache
638 for t, event_id in itertools.chain(
639 state_ids.items(),
640 timeline_state.items(),
641 ):
642 if t[0] == EventTypes.Member:
643 cache.set(t[1], event_id)
540644
541645 state = {}
542646 if state_ids:
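The de-duplication logic above keeps a small per-client LRU mapping `state_key -> event_id` and only sends members whose cached entry differs from what is about to be sent. A self-contained sketch (Synapse's `LruCache`/`ExpiringCache` are richer; this uses `OrderedDict` for the eviction order):

```python
from collections import OrderedDict

class LruCache:
    """Tiny LRU: evicts the least-recently-used entry past max_size."""
    def __init__(self, max_size):
        self.max_size = max_size
        self.data = OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)
            return self.data[key]
        return None

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.max_size:
            self.data.popitem(last=False)

def filter_redundant_members(state_ids, cache):
    # Drop members the client has already been sent (same event_id cached
    # under the same state_key), then remember what we are sending now.
    fresh = {t: eid for t, eid in state_ids.items() if cache.get(t[1]) != eid}
    for (etype, state_key), eid in fresh.items():
        cache.set(state_key, eid)
    return fresh
```

A changed membership event for a known member still gets through, because the cached `event_id` no longer matches; only exact repeats are suppressed.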
14471551 return False
14481552
14491553
1450 def _calculate_state(timeline_contains, timeline_start, previous, current):
1554 def _calculate_state(
1555 timeline_contains, timeline_start, previous, current, lazy_load_members,
1556 ):
14511557 """Works out what state to include in a sync response.
14521558
14531559 Args:
14561562 previous (dict): state at the end of the previous sync (or empty dict
14571563 if this is an initial sync)
14581564 current (dict): state at the end of the timeline
1565 lazy_load_members (bool): whether to return members from timeline_start
1566 or not. Assumes that timeline_start has already been filtered to
1567 include only the members the client needs to know about.
14591568
14601569 Returns:
14611570 dict
14711580 }
14721581
14731582 c_ids = set(e for e in current.values())
1583 ts_ids = set(e for e in timeline_start.values())
1584 p_ids = set(e for e in previous.values())
14741585 tc_ids = set(e for e in timeline_contains.values())
1475 p_ids = set(e for e in previous.values())
1476 ts_ids = set(e for e in timeline_start.values())
1586
1587 # If we are lazyloading room members, we explicitly add the membership events
1588 # for the senders in the timeline into the state block returned by /sync,
1589 # as we may not have sent them to the client before. We find these membership
1590 # events by filtering them out of timeline_start, which has already been filtered
1591 # to only include membership events for the senders in the timeline.
1592 # In practice, we can do this by removing them from the p_ids list,
1593 # which is the list of relevant state we know we have already sent to the client.
1594 # see https://github.com/matrix-org/synapse/pull/2970
1595 # /files/efcdacad7d1b7f52f879179701c7e0d9b763511f#r204732809
1596
1597 if lazy_load_members:
1598 p_ids.difference_update(
1599 e for t, e in timeline_start.iteritems()
1600 if t[0] == EventTypes.Member
1601 )
14771602
14781603 state_ids = ((c_ids | ts_ids) - p_ids) - tc_ids
14791604
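The set arithmetic in `_calculate_state` above, including the new lazy-load adjustment, can be exercised standalone. This sketch maps `(type, state_key) -> event_id` dicts just like the real function, with `"m.room.member"` standing in for `EventTypes.Member`:

```python
def calculate_state(timeline_contains, timeline_start, previous, current,
                    lazy_load_members):
    # Recover a key for each event id so we can rebuild the result dict.
    event_id_to_key = {
        e: k
        for d in (timeline_contains, timeline_start, previous, current)
        for k, e in d.items()
    }

    c_ids = set(current.values())
    ts_ids = set(timeline_start.values())
    p_ids = set(previous.values())
    tc_ids = set(timeline_contains.values())

    if lazy_load_members:
        # Re-send membership events present at timeline_start even if the
        # client was already sent them in a previous sync.
        p_ids.difference_update(
            e for t, e in timeline_start.items() if t[0] == "m.room.member"
        )

    state_ids = ((c_ids | ts_ids) - p_ids) - tc_ids
    return {event_id_to_key[e]: e for e in state_ids}
```

Without lazy loading, a membership event already delivered in the previous sync is excluded; with it, the member is included again so the client can render the timeline's senders.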
2525 from twisted.internet import defer, protocol, reactor, ssl, task
2626 from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
2727 from twisted.web._newclient import ResponseDone
28 from twisted.web.client import Agent, BrowserLikeRedirectAgent, ContentDecoderAgent
29 from twisted.web.client import FileBodyProducer as TwistedFileBodyProducer
3028 from twisted.web.client import (
29 Agent,
30 BrowserLikeRedirectAgent,
31 ContentDecoderAgent,
32 FileBodyProducer as TwistedFileBodyProducer,
3133 GzipDecoder,
3234 HTTPConnectionPool,
3335 PartialDownloadError,
3638 from twisted.web.http import PotentialDataLoss
3739 from twisted.web.http_headers import Headers
3840
39 from synapse.api.errors import (
40 CodeMessageException,
41 Codes,
42 MatrixCodeMessageException,
43 SynapseError,
44 )
41 from synapse.api.errors import Codes, HttpResponseException, SynapseError
4542 from synapse.http import cancelled_to_request_timed_out_error, redact_uri
4643 from synapse.http.endpoint import SpiderEndpoint
4744 from synapse.util.async import add_timeout_to_deferred
129126
130127 Returns:
131128 Deferred[object]: parsed json
129
130 Raises:
131 HttpResponseException: On a non-2xx HTTP response.
132
133 ValueError: if the response was not JSON
132134 """
133135
134136 # TODO: Do we ever want to log message contents?
152154
153155 body = yield make_deferred_yieldable(readBody(response))
154156
155 defer.returnValue(json.loads(body))
157 if 200 <= response.code < 300:
158 defer.returnValue(json.loads(body))
159 else:
160 raise HttpResponseException(response.code, response.phrase, body)
156161
157162 @defer.inlineCallbacks
158163 def post_json_get_json(self, uri, post_json, headers=None):
166171
167172 Returns:
168173 Deferred[object]: parsed json
174
175 Raises:
176 HttpResponseException: On a non-2xx HTTP response.
177
178 ValueError: if the response was not JSON
169179 """
170180 json_str = encode_canonical_json(post_json)
171181
190200 if 200 <= response.code < 300:
191201 defer.returnValue(json.loads(body))
192202 else:
193 raise self._exceptionFromFailedRequest(response, body)
194
195 defer.returnValue(json.loads(body))
203 raise HttpResponseException(response.code, response.phrase, body)
196204
197205 @defer.inlineCallbacks
198206 def get_json(self, uri, args={}, headers=None):
210218 Deferred: Succeeds when we get *any* 2xx HTTP response, with the
211219 HTTP body as JSON.
212220 Raises:
213 On a non-2xx HTTP response. The response body will be used as the
214 error message.
215 """
216 try:
217 body = yield self.get_raw(uri, args, headers=headers)
218 defer.returnValue(json.loads(body))
219 except CodeMessageException as e:
220 raise self._exceptionFromFailedRequest(e.code, e.msg)
221 HttpResponseException: On a non-2xx HTTP response.
222
223 ValueError: if the response was not JSON
224 """
225 body = yield self.get_raw(uri, args, headers=headers)
226 defer.returnValue(json.loads(body))
221227
222228 @defer.inlineCallbacks
223229 def put_json(self, uri, json_body, args={}, headers=None):
236242 Deferred: Succeeds when we get *any* 2xx HTTP response, with the
237243 HTTP body as JSON.
238244 Raises:
239 On a non-2xx HTTP response.
245 HttpResponseException: On a non-2xx HTTP response.
246
247 ValueError: if the response was not JSON
240248 """
241249 if len(args):
242250 query_bytes = urllib.urlencode(args, True)
263271 if 200 <= response.code < 300:
264272 defer.returnValue(json.loads(body))
265273 else:
266 # NB: This is explicitly not json.loads(body)'d because the contract
267 # of CodeMessageException is a *string* message. Callers can always
268 # load it into JSON if they want.
269 raise CodeMessageException(response.code, body)
274 raise HttpResponseException(response.code, response.phrase, body)
270275
271276 @defer.inlineCallbacks
272277 def get_raw(self, uri, args={}, headers=None):
284289 Deferred: Succeeds when we get *any* 2xx HTTP response, with the
285290 HTTP body as text.
286291 Raises:
287 On a non-2xx HTTP response. The response body will be used as the
288 error message.
292 HttpResponseException: On a non-2xx HTTP response.
289293 """
290294 if len(args):
291295 query_bytes = urllib.urlencode(args, True)
308312 if 200 <= response.code < 300:
309313 defer.returnValue(body)
310314 else:
311 raise CodeMessageException(response.code, body)
312
313 def _exceptionFromFailedRequest(self, response, body):
314 try:
315 jsonBody = json.loads(body)
316 errcode = jsonBody['errcode']
317 error = jsonBody['error']
318 return MatrixCodeMessageException(response.code, error, errcode)
319 except (ValueError, KeyError):
320 return CodeMessageException(response.code, body)
315 raise HttpResponseException(response.code, response.phrase, body)
321316
322317 # XXX: FIXME: This is horribly copy-pasted from matrixfederationclient.
323318 # The two should be factored out.
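The change above replaces `CodeMessageException`/`MatrixCodeMessageException` with a single `HttpResponseException` raised on any non-2xx status, keeping the raw body for the caller. The pattern each method now follows, sketched with a simplified exception class:

```python
import json

class HttpResponseException(Exception):
    """Carries the status code, reason phrase, and raw response body."""
    def __init__(self, code, msg, response):
        super().__init__("%d: %s" % (code, msg))
        self.code = code
        self.msg = msg
        self.response = response

def parse_json_response(code, phrase, body):
    # 2xx: parse the body as JSON (ValueError if it isn't JSON);
    # anything else: raise with the full undecoded body attached.
    if 200 <= code < 300:
        return json.loads(body)
    raise HttpResponseException(code, phrase, body)
```

Keeping the body as a string, rather than eagerly `json.loads`-ing it as the old `_exceptionFromFailedRequest` did, lets callers decide whether the error payload is worth decoding.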
3737 )
3838
3939 response_timer = Histogram(
40 "synapse_http_server_response_time_seconds", "sec", ["method", "servlet", "tag"]
40 "synapse_http_server_response_time_seconds", "sec",
41 ["method", "servlet", "tag", "code"],
4142 )
4243
4344 response_ru_utime = Counter(
170171 )
171172 return
172173
173 outgoing_responses_counter.labels(request.method, str(request.code)).inc()
174 response_code = str(request.code)
175
176 outgoing_responses_counter.labels(request.method, response_code).inc()
174177
175178 response_count.labels(request.method, self.name, tag).inc()
176179
177 response_timer.labels(request.method, self.name, tag).observe(
180 response_timer.labels(request.method, self.name, tag, response_code).observe(
178181 time_sec - self.start
179182 )
180183
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15
1516 import cgi
1617 import collections
1718 import logging
18 import urllib
19
20 from six.moves import http_client
19
20 from six import PY3
21 from six.moves import http_client, urllib
2122
2223 from canonicaljson import encode_canonical_json, encode_pretty_printed_json, json
2324
3435 Codes,
3536 SynapseError,
3637 UnrecognizedRequestError,
37 cs_exception,
3838 )
3939 from synapse.http.request_metrics import requests_counter
4040 from synapse.util.caches import intern_dict
7575 def wrapped_request_handler(self, request):
7676 try:
7777 yield h(self, request)
78 except CodeMessageException as e:
78 except SynapseError as e:
7979 code = e.code
80 if isinstance(e, SynapseError):
81 logger.info(
82 "%s SynapseError: %s - %s", request, code, e.msg
83 )
84 else:
85 logger.exception(e)
80 logger.info(
81 "%s SynapseError: %s - %s", request, code, e.msg
82 )
8683 respond_with_json(
87 request, code, cs_exception(e), send_cors=True,
84 request, code, e.error_dict(), send_cors=True,
8885 pretty_print=_request_user_agent_is_curl(request),
8986 )
9087
263260 self.hs = hs
264261
265262 def register_paths(self, method, path_patterns, callback):
263 method = method.encode("utf-8") # method is bytes on py3
266264 for path_pattern in path_patterns:
267265 logger.debug("Registering for %s %s", method, path_pattern.pattern)
268266 self.path_regexs.setdefault(method, []).append(
295293 # here. If it throws an exception, that is handled by the wrapper
296294 # installed by @request_handler.
297295
296 def _unquote(s):
297 if PY3:
298 # On Python 3, unquote is unicode -> unicode
299 return urllib.parse.unquote(s)
300 else:
301 # On Python 2, unquote is bytes -> bytes. We need to encode the
302 # URL again (as it was decoded by _get_handler_for request), as
303 # ASCII because it's a URL, and then decode it to get the UTF-8
304 # characters that were quoted.
305 return urllib.parse.unquote(s.encode('ascii')).decode('utf8')
306
298307 kwargs = intern_dict({
299 name: urllib.unquote(value).decode("UTF-8") if value else value
308 name: _unquote(value) if value else value
300309 for name, value in group_dict.items()
301310 })
302311
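The Python 2 branch above exists because py2's `unquote` operated on bytes, so the result had to be re-encoded and decoded by hand; on Python 3, `urllib.parse.unquote` is str → str and decodes percent-escapes as UTF-8 directly:

```python
from urllib.parse import unquote

# Python 3: percent-escapes decode straight to UTF-8 characters.
assert unquote("hello%20world") == "hello world"
assert unquote("%E2%9C%93") == "\u2713"  # U+2713 CHECK MARK
```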
312321 request (twisted.web.http.Request):
313322
314323 Returns:
315 Tuple[Callable, dict[str, str]]: callback method, and the dict
316 mapping keys to path components as specified in the handler's
317 path match regexp.
324 Tuple[Callable, dict[unicode, unicode]]: callback method, and the
325 dict mapping keys to path components as specified in the
326 handler's path match regexp.
318327
319328 The callback will normally be a method registered via
320329 register_paths, so will return (possibly via Deferred) either
326335 # Loop through all the registered callbacks to check if the method
327336 # and path regex match
328337 for path_entry in self.path_regexs.get(request.method, []):
329 m = path_entry.pattern.match(request.path)
338 m = path_entry.pattern.match(request.path.decode('ascii'))
330339 if m:
331340 # We found a match!
332341 return path_entry.callback, m.groupdict()
382391 self.url = path
383392
384393 def render_GET(self, request):
385 return redirectTo(self.url, request)
394 return redirectTo(self.url.encode('ascii'), request)
386395
387396 def getChild(self, name, request):
388397 if len(name) == 0:
403412 return
404413
405414 if pretty_print:
406 json_bytes = encode_pretty_printed_json(json_object) + "\n"
415 json_bytes = (encode_pretty_printed_json(json_object) + "\n"
416 ).encode("utf-8")
407417 else:
408418 if canonical_json or synapse.events.USE_FROZEN_DICTS:
419 # canonicaljson already encodes to bytes
409420 json_bytes = encode_canonical_json(json_object)
410421 else:
411 json_bytes = json.dumps(json_object)
422 json_bytes = json.dumps(json_object).encode("utf-8")
412423
413424 return respond_with_json_bytes(
414425 request, code, json_bytes,
170170 if not content_bytes and allow_empty_body:
171171 return None
172172
173 # Decode to Unicode so that simplejson will return Unicode strings on
174 # Python 2
173175 try:
174 content = json.loads(content_bytes)
176 content_unicode = content_bytes.decode('utf8')
177 except UnicodeDecodeError:
178 logger.warn("Unable to decode UTF-8")
179 raise SynapseError(400, "Content not JSON.", errcode=Codes.NOT_JSON)
180
181 try:
182 content = json.loads(content_unicode)
175183 except Exception as e:
176184 logger.warn("Unable to parse JSON: %s", e)
177185 raise SynapseError(400, "Content not JSON.", errcode=Codes.NOT_JSON)
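The decode-then-parse split above means a body that is not valid UTF-8 is rejected before `json.loads` ever sees it, and both failure modes surface as the same client-visible error. A minimal self-contained sketch of that pattern (the helper name is hypothetical):

```python
import json

def parse_json_bytes(content_bytes):
    # Decode explicitly first so that bad UTF-8 and bad JSON
    # raise at distinct, identifiable points.
    try:
        content_unicode = content_bytes.decode("utf8")
    except UnicodeDecodeError:
        raise ValueError("Content not JSON.")
    try:
        return json.loads(content_unicode)
    except Exception:
        raise ValueError("Content not JSON.")
```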
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import six
16
17 from prometheus_client.core import REGISTRY, Counter, GaugeMetricFamily
18
19 from twisted.internet import defer
20
21 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
22
23 _background_process_start_count = Counter(
24 "synapse_background_process_start_count",
25 "Number of background processes started",
26 ["name"],
27 )
28
29 # we set registry=None in all of these to stop them getting registered with
30 # the default registry. Instead we collect them all via the CustomCollector,
31 # which ensures that we can update them before they are collected.
32 #
33 _background_process_ru_utime = Counter(
34 "synapse_background_process_ru_utime_seconds",
35 "User CPU time used by background processes, in seconds",
36 ["name"],
37 registry=None,
38 )
39
40 _background_process_ru_stime = Counter(
41 "synapse_background_process_ru_stime_seconds",
42 "System CPU time used by background processes, in seconds",
43 ["name"],
44 registry=None,
45 )
46
47 _background_process_db_txn_count = Counter(
48 "synapse_background_process_db_txn_count",
49 "Number of database transactions done by background processes",
50 ["name"],
51 registry=None,
52 )
53
54 _background_process_db_txn_duration = Counter(
55 "synapse_background_process_db_txn_duration_seconds",
56 ("Seconds spent by background processes waiting for database "
57 "transactions, excluding scheduling time"),
58 ["name"],
59 registry=None,
60 )
61
62 _background_process_db_sched_duration = Counter(
63 "synapse_background_process_db_sched_duration_seconds",
64 "Seconds spent by background processes waiting for database connections",
65 ["name"],
66 registry=None,
67 )
68
69 # map from description to a counter, so that we can name our logcontexts
70 # incrementally. (It actually duplicates _background_process_start_count, but
71 # it's much simpler to do so than to try to combine them.)
72 _background_process_counts = dict() # type: dict[str, int]
73
74 # map from description to the currently running background processes.
75 #
76 # it's kept as a dict of sets rather than a big set so that we can keep track
77 # of process descriptions that no longer have any active processes.
78 _background_processes = dict() # type: dict[str, set[_BackgroundProcess]]
79
80
81 class _Collector(object):
82 """A custom metrics collector for the background process metrics.
83
84 Ensures that all of the metrics are up-to-date with any in-flight processes
85 before they are returned.
86 """
87 def collect(self):
88 background_process_in_flight_count = GaugeMetricFamily(
89 "synapse_background_process_in_flight_count",
90 "Number of background processes in flight",
91 labels=["name"],
92 )
93
94 for desc, processes in six.iteritems(_background_processes):
95 background_process_in_flight_count.add_metric(
96 (desc,), len(processes),
97 )
98 for process in processes:
99 process.update_metrics()
100
101 yield background_process_in_flight_count
102
103 # now we need to run collect() over each of the static Counters, and
104 # yield each metric they return.
105 for m in (
106 _background_process_ru_utime,
107 _background_process_ru_stime,
108 _background_process_db_txn_count,
109 _background_process_db_txn_duration,
110 _background_process_db_sched_duration,
111 ):
112 for r in m.collect():
113 yield r
114
115
116 REGISTRY.register(_Collector())
117
118
119 class _BackgroundProcess(object):
120 def __init__(self, desc, ctx):
121 self.desc = desc
122 self._context = ctx
123 self._reported_stats = None
124
125 def update_metrics(self):
126 """Updates the metrics with values from this process."""
127 new_stats = self._context.get_resource_usage()
128 if self._reported_stats is None:
129 diff = new_stats
130 else:
131 diff = new_stats - self._reported_stats
132 self._reported_stats = new_stats
133
134 _background_process_ru_utime.labels(self.desc).inc(diff.ru_utime)
135 _background_process_ru_stime.labels(self.desc).inc(diff.ru_stime)
136 _background_process_db_txn_count.labels(self.desc).inc(
137 diff.db_txn_count,
138 )
139 _background_process_db_txn_duration.labels(self.desc).inc(
140 diff.db_txn_duration_sec,
141 )
142 _background_process_db_sched_duration.labels(self.desc).inc(
143 diff.db_sched_duration_sec,
144 )
145
146
147 def run_as_background_process(desc, func, *args, **kwargs):
148 """Run the given function in its own logcontext, with resource metrics
149
150 This should be used to wrap processes which are fired off to run in the
151 background, instead of being associated with a particular request.
152
153 It returns a Deferred which completes when the function completes, but it doesn't
154 follow the synapse logcontext rules, which makes it appropriate for passing to
155 clock.looping_call and friends (or for firing-and-forgetting in the middle of a
156 normal synapse inlineCallbacks function).
157
158 Args:
159 desc (str): a description for this background process type
160 func: a function, which may return a Deferred
161 args: positional args for func
162 kwargs: keyword args for func
163
164 Returns: Deferred which returns the result of func, but note that it does not
165 follow the synapse logcontext rules.
166 """
167 @defer.inlineCallbacks
168 def run():
169 count = _background_process_counts.get(desc, 0)
170 _background_process_counts[desc] = count + 1
171 _background_process_start_count.labels(desc).inc()
172
173 with LoggingContext(desc) as context:
174 context.request = "%s-%i" % (desc, count)
175 proc = _BackgroundProcess(desc, context)
176 _background_processes.setdefault(desc, set()).add(proc)
177 try:
178 yield func(*args, **kwargs)
179 finally:
180 proc.update_metrics()
181 _background_processes[desc].remove(proc)
182
183 with PreserveLoggingContext():
184 return run()
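`_BackgroundProcess.update_metrics` reports only the increase in resource usage since the previous report, so the Prometheus counters stay monotonic even while a process is still in flight. A minimal sketch of that delta-reporting pattern (the class name is hypothetical):

```python
class DiffReporter(object):
    """Tracks a running total and returns only the increase since
    the last report, keeping downstream counters monotonic."""

    def __init__(self):
        self._reported = None

    def update(self, total):
        # First report: the whole total is new.
        if self._reported is None:
            diff = total
        else:
            diff = total - self._reported
        self._reported = total
        return diff
```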
273273 logger.exception("Error notifying application services of event")
274274
275275 def on_new_event(self, stream_key, new_token, users=[], rooms=[]):
276 """ Used to inform listeners that something has happend event wise.
276 """ Used to inform listeners that something has happened event wise.
277277
278278 Will wake up all listeners for the given users and rooms.
279279 """
111111
112112 @defer.inlineCallbacks
113113 def _get_power_levels_and_sender_level(self, event, context):
114 pl_event_id = context.prev_state_ids.get(POWER_KEY)
114 prev_state_ids = yield context.get_prev_state_ids(self.store)
115 pl_event_id = prev_state_ids.get(POWER_KEY)
115116 if pl_event_id:
116117 # fastpath: if there's a power level event, that's all we need, and
117118 # not having a power level event is an extreme edge case
119120 auth_events = {POWER_KEY: pl_event}
120121 else:
121122 auth_events_ids = yield self.auth.compute_auth_events(
122 event, context.prev_state_ids, for_verification=False,
123 event, prev_state_ids, for_verification=False,
123124 )
124125 auth_events = yield self.store.get_events(auth_events_ids)
125126 auth_events = {
303304
304305 push_rules_delta_state_cache_metric.inc_hits()
305306 else:
306 current_state_ids = context.current_state_ids
307 current_state_ids = yield context.get_current_state_ids(self.store)
307308 push_rules_delta_state_cache_metric.inc_misses()
308309
309310 push_rules_state_size_counter.inc(len(current_state_ids))
1717
1818 from twisted.internet import defer
1919
20 from synapse.api.errors import MatrixCodeMessageException, SynapseError
20 from synapse.api.errors import HttpResponseException
2121 from synapse.http.servlet import RestServlet, parse_json_object_from_request
2222 from synapse.types import Requester, UserID
2323 from synapse.util.distributor import user_joined_room, user_left_room
5555
5656 try:
5757 result = yield client.post_json_get_json(uri, payload)
58 except MatrixCodeMessageException as e:
58 except HttpResponseException as e:
5959 # We convert to SynapseError as we know that it was a SynapseError
6060 # on the master process that we should send to the client. (And
6161 # importantly, not stack traces everywhere)
62 raise SynapseError(e.code, e.msg, e.errcode)
62 raise e.to_synapse_error()
6363 defer.returnValue(result)
6464
6565
9191
9292 try:
9393 result = yield client.post_json_get_json(uri, payload)
94 except MatrixCodeMessageException as e:
94 except HttpResponseException as e:
9595 # We convert to SynapseError as we know that it was a SynapseError
9696 # on the master process that we should send to the client. (And
9797 # importantly, not stack traces everywhere)
98 raise SynapseError(e.code, e.msg, e.errcode)
98 raise e.to_synapse_error()
9999 defer.returnValue(result)
100100
101101
130130
131131 try:
132132 result = yield client.post_json_get_json(uri, payload)
133 except MatrixCodeMessageException as e:
133 except HttpResponseException as e:
134134 # We convert to SynapseError as we know that it was a SynapseError
135135 # on the master process that we should send to the client. (And
136136 # importantly, not stack traces everywhere)
137 raise SynapseError(e.code, e.msg, e.errcode)
137 raise e.to_synapse_error()
138138 defer.returnValue(result)
139139
140140
164164
165165 try:
166166 result = yield client.post_json_get_json(uri, payload)
167 except MatrixCodeMessageException as e:
167 except HttpResponseException as e:
168168 # We convert to SynapseError as we know that it was a SynapseError
169169 # on the master process that we should send to the client. (And
170170 # importantly, not stack traces everywhere)
171 raise SynapseError(e.code, e.msg, e.errcode)
171 raise e.to_synapse_error()
172172 defer.returnValue(result)
173173
174174
1717
1818 from twisted.internet import defer
1919
20 from synapse.api.errors import (
21 CodeMessageException,
22 MatrixCodeMessageException,
23 SynapseError,
24 )
20 from synapse.api.errors import CodeMessageException, HttpResponseException
2521 from synapse.events import FrozenEvent
2622 from synapse.events.snapshot import EventContext
2723 from synapse.http.servlet import RestServlet, parse_json_object_from_request
3329
3430
3531 @defer.inlineCallbacks
36 def send_event_to_master(clock, client, host, port, requester, event, context,
32 def send_event_to_master(clock, store, client, host, port, requester, event, context,
3733 ratelimit, extra_users):
3834 """Send event to be handled on the master
3935
4036 Args:
4137 clock (synapse.util.Clock)
38 store (DataStore)
4239 client (SimpleHttpClient)
4340 host (str): host of master
4441 port (int): port on master listening for HTTP replication
5249 host, port, event.event_id,
5350 )
5451
52 serialized_context = yield context.serialize(event, store)
53
5554 payload = {
5655 "event": event.get_pdu_json(),
5756 "internal_metadata": event.internal_metadata.get_dict(),
5857 "rejected_reason": event.rejected_reason,
59 "context": context.serialize(event),
58 "context": serialized_context,
6059 "requester": requester.serialize(),
6160 "ratelimit": ratelimit,
6261 "extra_users": [u.to_string() for u in extra_users],
7978 # If we timed out we probably don't need to worry about backing
8079 # off too much, but lets just wait a little anyway.
8180 yield clock.sleep(1)
82 except MatrixCodeMessageException as e:
81 except HttpResponseException as e:
8382 # We convert to SynapseError as we know that it was a SynapseError
8483 # on the master process that we should send to the client. (And
8584 # importantly, not stack traces everywhere)
86 raise SynapseError(e.code, e.msg, e.errcode)
85 raise e.to_synapse_error()
8786 defer.returnValue(result)
8887
8988
191191 """Returns a deferred that is resolved when we receive a SYNC command
192192 with given data.
193193
194 Used by tests.
194 [Not currently] used by tests.
195195 """
196196 return self.awaiting_syncs.setdefault(data, defer.Deferred())
197197
2424 from twisted.internet.protocol import Factory
2525
2626 from synapse.metrics import LaterGauge
27 from synapse.metrics.background_process_metrics import run_as_background_process
2728 from synapse.util.metrics import Measure, measure_func
2829
2930 from .protocol import ServerReplicationStreamProtocol
116117 for conn in self.connections:
117118 conn.send_error("server shutting down")
118119
119 @defer.inlineCallbacks
120120 def on_notifier_poke(self):
121121 """Checks if there is actually any new data and sends it to the
122122 connections if there are.
131131 stream.discard_updates_and_advance()
132132 return
133133
134 # If we're in the process of checking for new updates, mark that fact
135 # and return
134 self.pending_updates = True
135
136136 if self.is_looping:
137 logger.debug("Noitifier poke loop already running")
138 self.pending_updates = True
137 logger.debug("Notifier poke loop already running")
139138 return
140139
141 self.pending_updates = True
140 run_as_background_process("replication_notifier", self._run_notifier_loop)
141
142 @defer.inlineCallbacks
143 def _run_notifier_loop(self):
142144 self.is_looping = True
143145
144146 try:
00 # -*- coding: utf-8 -*-
11 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2018 New Vector Ltd
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
1213 # See the License for the specific language governing permissions and
1314 # limitations under the License.
1415
16 from six import PY3
17
1518 from synapse.http.server import JsonResource
1619 from synapse.rest.client import versions
17 from synapse.rest.client.v1 import admin, directory, events, initial_sync
18 from synapse.rest.client.v1 import login as v1_login
19 from synapse.rest.client.v1 import logout, presence, profile, push_rule, pusher
20 from synapse.rest.client.v1 import register as v1_register
21 from synapse.rest.client.v1 import room, voip
20 from synapse.rest.client.v1 import (
21 admin,
22 directory,
23 events,
24 initial_sync,
25 login as v1_login,
26 logout,
27 presence,
28 profile,
29 push_rule,
30 pusher,
31 room,
32 voip,
33 )
2234 from synapse.rest.client.v2_alpha import (
2335 account,
2436 account_data,
4153 user_directory,
4254 )
4355
56 if not PY3:
57 from synapse.rest.client.v1_only import (
58 register as v1_register,
59 )
60
4461
4562 class ClientRestResource(JsonResource):
4663 """A resource for version 1 of the matrix client API."""
5370 def register_servlets(client_resource, hs):
5471 versions.register_servlets(client_resource)
5572
56 # "v1"
73 if not PY3:
74 # "v1" (Python 2 only)
75 v1_register.register_servlets(hs, client_resource)
76
77 # Deprecated in r0
78 initial_sync.register_servlets(hs, client_resource)
79 room.register_deprecated_servlets(hs, client_resource)
80
81 # Partially deprecated in r0
82 events.register_servlets(hs, client_resource)
83
84 # "v1" + "r0"
5785 room.register_servlets(hs, client_resource)
58 events.register_servlets(hs, client_resource)
59 v1_register.register_servlets(hs, client_resource)
6086 v1_login.register_servlets(hs, client_resource)
6187 profile.register_servlets(hs, client_resource)
6288 presence.register_servlets(hs, client_resource)
63 initial_sync.register_servlets(hs, client_resource)
6489 directory.register_servlets(hs, client_resource)
6590 voip.register_servlets(hs, client_resource)
6691 admin.register_servlets(hs, client_resource)
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515
16 import hashlib
17 import hmac
1618 import logging
1719
20 from six import text_type
1821 from six.moves import http_client
1922
2023 from twisted.internet import defer
6265 defer.returnValue((200, ret))
6366
6467
68 class UserRegisterServlet(ClientV1RestServlet):
69 """
70 Attributes:
71 NONCE_TIMEOUT (int): Seconds until a generated nonce won't be accepted
72 nonces (dict[str, int]): The nonces that we will accept. A dict of
73 nonce to the time it was generated, in int seconds.
74 """
75 PATTERNS = client_path_patterns("/admin/register")
76 NONCE_TIMEOUT = 60
77
78 def __init__(self, hs):
79 super(UserRegisterServlet, self).__init__(hs)
80 self.handlers = hs.get_handlers()
81 self.reactor = hs.get_reactor()
82 self.nonces = {}
83 self.hs = hs
84
85 def _clear_old_nonces(self):
86 """
87 Clear out old nonces that are older than NONCE_TIMEOUT.
88 """
89 now = int(self.reactor.seconds())
90
91 for k, v in list(self.nonces.items()):
92 if now - v > self.NONCE_TIMEOUT:
93 del self.nonces[k]
94
95 def on_GET(self, request):
96 """
97 Generate a new nonce.
98 """
99 self._clear_old_nonces()
100
101 nonce = self.hs.get_secrets().token_hex(64)
102 self.nonces[nonce] = int(self.reactor.seconds())
103 return (200, {"nonce": nonce.encode('ascii')})
104
105 @defer.inlineCallbacks
106 def on_POST(self, request):
107 self._clear_old_nonces()
108
109 if not self.hs.config.registration_shared_secret:
110 raise SynapseError(400, "Shared secret registration is not enabled")
111
112 body = parse_json_object_from_request(request)
113
114 if "nonce" not in body:
115 raise SynapseError(
116 400, "nonce must be specified", errcode=Codes.BAD_JSON,
117 )
118
119 nonce = body["nonce"]
120
121 if nonce not in self.nonces:
122 raise SynapseError(
123 400, "unrecognised nonce",
124 )
125
126 # Delete the nonce, so it can't be reused, even if it's invalid
127 del self.nonces[nonce]
128
129 if "username" not in body:
130 raise SynapseError(
131 400, "username must be specified", errcode=Codes.BAD_JSON,
132 )
133 else:
134 if (
135 not isinstance(body['username'], text_type)
136 or len(body['username']) > 512
137 ):
138 raise SynapseError(400, "Invalid username")
139
140 username = body["username"].encode("utf-8")
141 if b"\x00" in username:
142 raise SynapseError(400, "Invalid username")
143
144 if "password" not in body:
145 raise SynapseError(
146 400, "password must be specified", errcode=Codes.BAD_JSON,
147 )
148 else:
149 if (
150 not isinstance(body['password'], text_type)
151 or len(body['password']) > 512
152 ):
153 raise SynapseError(400, "Invalid password")
154
155 password = body["password"].encode("utf-8")
156 if b"\x00" in password:
157 raise SynapseError(400, "Invalid password")
158
159 admin = body.get("admin", None)
160 got_mac = body["mac"]
161
162 want_mac = hmac.new(
163 key=self.hs.config.registration_shared_secret.encode(),
164 digestmod=hashlib.sha1,
165 )
166 want_mac.update(nonce)
167 want_mac.update(b"\x00")
168 want_mac.update(username)
169 want_mac.update(b"\x00")
170 want_mac.update(password)
171 want_mac.update(b"\x00")
172 want_mac.update(b"admin" if admin else b"notadmin")
173 want_mac = want_mac.hexdigest()
174
175 if not hmac.compare_digest(want_mac, got_mac.encode('ascii')):
176 raise SynapseError(403, "HMAC incorrect")
177
178 # Reuse the parts of RegisterRestServlet to reduce code duplication
179 from synapse.rest.client.v2_alpha.register import RegisterRestServlet
180
181 register = RegisterRestServlet(self.hs)
182
183 (user_id, _) = yield register.registration_handler.register(
184 localpart=body['username'].lower(),
185 password=body["password"],
186 admin=bool(admin),
187 generate_token=False,
188 )
189
190 result = yield register._create_registration_details(user_id, body)
191 defer.returnValue((200, result))
192
193
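A client driving the new `/admin/register` endpoint has to reproduce the `want_mac` computation in `on_POST` above: an HMAC-SHA1, keyed with the registration shared secret, over the NUL-separated nonce, username, password and admin flag. A hedged client-side sketch (the function name is hypothetical):

```python
import hashlib
import hmac

def generate_registration_mac(shared_secret, nonce, username, password,
                              admin=False):
    # Must match the server-side want_mac field for field:
    # nonce \x00 username \x00 password \x00 "admin"/"notadmin".
    mac = hmac.new(
        key=shared_secret.encode("utf-8"),
        digestmod=hashlib.sha1,
    )
    mac.update(nonce.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(username.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()
```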
65194 class WhoisRestServlet(ClientV1RestServlet):
66195 PATTERNS = client_path_patterns("/admin/whois/(?P<user_id>[^/]*)")
67196
122251 hs (synapse.server.HomeServer)
123252 """
124253 super(PurgeHistoryRestServlet, self).__init__(hs)
125 self.handlers = hs.get_handlers()
254 self.pagination_handler = hs.get_pagination_handler()
126255 self.store = hs.get_datastore()
127256
128257 @defer.inlineCallbacks
197326 errcode=Codes.BAD_JSON,
198327 )
199328
200 purge_id = yield self.handlers.message_handler.start_purge_history(
329 purge_id = yield self.pagination_handler.start_purge_history(
201330 room_id, token,
202331 delete_local_events=delete_local_events,
203332 )
219348 hs (synapse.server.HomeServer)
220349 """
221350 super(PurgeHistoryStatusRestServlet, self).__init__(hs)
222 self.handlers = hs.get_handlers()
351 self.pagination_handler = hs.get_pagination_handler()
223352
224353 @defer.inlineCallbacks
225354 def on_GET(self, request, purge_id):
229358 if not is_admin:
230359 raise AuthError(403, "You are not a server admin")
231360
232 purge_status = self.handlers.message_handler.get_purge_status(purge_id)
361 purge_status = self.pagination_handler.get_purge_status(purge_id)
233362 if purge_status is None:
234363 raise NotFoundError("purge id '%s' not found" % purge_id)
235364
613742 ShutdownRoomRestServlet(hs).register(http_server)
614743 QuarantineMediaInRoom(hs).register(http_server)
615744 ListMediaInRoom(hs).register(http_server)
745 UserRegisterServlet(hs).register(http_server)
1717
1818 from twisted.internet import defer
1919
20 from synapse.api.errors import AuthError, Codes, SynapseError
20 from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
2121 from synapse.http.servlet import parse_json_object_from_request
2222 from synapse.types import RoomAlias
2323
158158 def on_GET(self, request, room_id):
159159 room = yield self.store.get_room(room_id)
160160 if room is None:
161 raise SynapseError(400, "Unknown room")
161 raise NotFoundError("Unknown room")
162162
163163 defer.returnValue((200, {
164164 "visibility": "public" if room["is_public"] else "private"
+0
-436
synapse/rest/client/v1/register.py
0 # -*- coding: utf-8 -*-
1 # Copyright 2014-2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """This module contains REST servlets to do with registration: /register"""
16 import hmac
17 import logging
18 from hashlib import sha1
19
20 from twisted.internet import defer
21
22 import synapse.util.stringutils as stringutils
23 from synapse.api.constants import LoginType
24 from synapse.api.errors import Codes, SynapseError
25 from synapse.http.servlet import assert_params_in_dict, parse_json_object_from_request
26 from synapse.types import create_requester
27
28 from .base import ClientV1RestServlet, client_path_patterns
29
30 logger = logging.getLogger(__name__)
31
32
33 # We ought to be using hmac.compare_digest() but on older pythons it doesn't
34 # exist. It's a _really minor_ security flaw to use plain string comparison
35 # because the timing attack is so obscured by all the other code here it's
36 # unlikely to make much difference
37 if hasattr(hmac, "compare_digest"):
38 compare_digest = hmac.compare_digest
39 else:
40 def compare_digest(a, b):
41 return a == b
42
43
44 class RegisterRestServlet(ClientV1RestServlet):
45 """Handles registration with the home server.
46
47 This servlet is in control of the registration flow; the registration
48 handler doesn't have a concept of multi-stages or sessions.
49 """
50
51 PATTERNS = client_path_patterns("/register$", releases=(), include_in_unstable=False)
52
53 def __init__(self, hs):
54 """
55 Args:
56 hs (synapse.server.HomeServer): server
57 """
58 super(RegisterRestServlet, self).__init__(hs)
59 # sessions are stored as:
60 # self.sessions = {
61 # "session_id" : { __session_dict__ }
62 # }
63 # TODO: persistent storage
64 self.sessions = {}
65 self.enable_registration = hs.config.enable_registration
66 self.auth = hs.get_auth()
67 self.auth_handler = hs.get_auth_handler()
68 self.handlers = hs.get_handlers()
69
70 def on_GET(self, request):
71
72 require_email = 'email' in self.hs.config.registrations_require_3pid
73 require_msisdn = 'msisdn' in self.hs.config.registrations_require_3pid
74
75 flows = []
76 if self.hs.config.enable_registration_captcha:
77 # only support the email-only flow if we don't require MSISDN 3PIDs
78 if not require_msisdn:
79 flows.extend([
80 {
81 "type": LoginType.RECAPTCHA,
82 "stages": [
83 LoginType.RECAPTCHA,
84 LoginType.EMAIL_IDENTITY,
85 LoginType.PASSWORD
86 ]
87 },
88 ])
89 # only support 3PIDless registration if no 3PIDs are required
90 if not require_email and not require_msisdn:
91 flows.extend([
92 {
93 "type": LoginType.RECAPTCHA,
94 "stages": [LoginType.RECAPTCHA, LoginType.PASSWORD]
95 }
96 ])
97 else:
98 # only support the email-only flow if we don't require MSISDN 3PIDs
99 if require_email or not require_msisdn:
100 flows.extend([
101 {
102 "type": LoginType.EMAIL_IDENTITY,
103 "stages": [
104 LoginType.EMAIL_IDENTITY, LoginType.PASSWORD
105 ]
106 }
107 ])
108 # only support 3PIDless registration if no 3PIDs are required
109 if not require_email and not require_msisdn:
110 flows.extend([
111 {
112 "type": LoginType.PASSWORD
113 }
114 ])
115 return (200, {"flows": flows})
116
117 @defer.inlineCallbacks
118 def on_POST(self, request):
119 register_json = parse_json_object_from_request(request)
120
121 session = (register_json["session"]
122 if "session" in register_json else None)
123 login_type = None
124 assert_params_in_dict(register_json, ["type"])
125
126 try:
127 login_type = register_json["type"]
128
129 is_application_server = login_type == LoginType.APPLICATION_SERVICE
130 is_using_shared_secret = login_type == LoginType.SHARED_SECRET
131
132 can_register = (
133 self.enable_registration
134 or is_application_server
135 or is_using_shared_secret
136 )
137 if not can_register:
138 raise SynapseError(403, "Registration has been disabled")
139
140 stages = {
141 LoginType.RECAPTCHA: self._do_recaptcha,
142 LoginType.PASSWORD: self._do_password,
143 LoginType.EMAIL_IDENTITY: self._do_email_identity,
144 LoginType.APPLICATION_SERVICE: self._do_app_service,
145 LoginType.SHARED_SECRET: self._do_shared_secret,
146 }
147
148 session_info = self._get_session_info(request, session)
149 logger.debug("%s : session info %s request info %s",
150 login_type, session_info, register_json)
151 response = yield stages[login_type](
152 request,
153 register_json,
154 session_info
155 )
156
157 if "access_token" not in response:
158 # isn't a final response
159 response["session"] = session_info["id"]
160
161 defer.returnValue((200, response))
162 except KeyError as e:
163 logger.exception(e)
164 raise SynapseError(400, "Missing JSON keys for login type %s." % (
165 login_type,
166 ))
167
168 def on_OPTIONS(self, request):
169 return (200, {})
170
171 def _get_session_info(self, request, session_id):
172 if not session_id:
173 # create a new session
174 while session_id is None or session_id in self.sessions:
175 session_id = stringutils.random_string(24)
176 self.sessions[session_id] = {
177 "id": session_id,
178 LoginType.EMAIL_IDENTITY: False,
179 LoginType.RECAPTCHA: False
180 }
181
182 return self.sessions[session_id]
183
184 def _save_session(self, session):
185 # TODO: Persistent storage
186 logger.debug("Saving session %s", session)
187 self.sessions[session["id"]] = session
188
189 def _remove_session(self, session):
190 logger.debug("Removing session %s", session)
191 self.sessions.pop(session["id"])
192
193 @defer.inlineCallbacks
194 def _do_recaptcha(self, request, register_json, session):
195 if not self.hs.config.enable_registration_captcha:
196 raise SynapseError(400, "Captcha not required.")
197
198 yield self._check_recaptcha(request, register_json, session)
199
200 session[LoginType.RECAPTCHA] = True # mark captcha as done
201 self._save_session(session)
202 defer.returnValue({
203 "next": [LoginType.PASSWORD, LoginType.EMAIL_IDENTITY]
204 })
205
206 @defer.inlineCallbacks
207 def _check_recaptcha(self, request, register_json, session):
208 if ("captcha_bypass_hmac" in register_json and
209 self.hs.config.captcha_bypass_secret):
210 if "user" not in register_json:
211 raise SynapseError(400, "Captcha bypass needs 'user'")
212
213 want = hmac.new(
214 key=self.hs.config.captcha_bypass_secret,
215 msg=register_json["user"],
216 digestmod=sha1,
217 ).hexdigest()
218
219 # str() because otherwise hmac complains that 'unicode' does not
220 # have the buffer interface
221 got = str(register_json["captcha_bypass_hmac"])
222
223 if compare_digest(want, got):
224 session["user"] = register_json["user"]
225 defer.returnValue(None)
226 else:
227 raise SynapseError(
228 400, "Captcha bypass HMAC incorrect",
229 errcode=Codes.CAPTCHA_NEEDED
230 )
231
232 challenge = None
233 user_response = None
234 try:
235 challenge = register_json["challenge"]
236 user_response = register_json["response"]
237 except KeyError:
238 raise SynapseError(400, "Captcha response is required",
239 errcode=Codes.CAPTCHA_NEEDED)
240
241 ip_addr = self.hs.get_ip_from_request(request)
242
243 handler = self.handlers.registration_handler
244 yield handler.check_recaptcha(
245 ip_addr,
246 self.hs.config.recaptcha_private_key,
247 challenge,
248 user_response
249 )
250
251 @defer.inlineCallbacks
252 def _do_email_identity(self, request, register_json, session):
253 if (self.hs.config.enable_registration_captcha and
254 not session[LoginType.RECAPTCHA]):
255 raise SynapseError(400, "Captcha is required.")
256
257 threepidCreds = register_json['threepidCreds']
258 handler = self.handlers.registration_handler
259 logger.debug("Registering email. threepidcreds: %s" % (threepidCreds))
260 yield handler.register_email(threepidCreds)
261 session["threepidCreds"] = threepidCreds # store creds for next stage
262 session[LoginType.EMAIL_IDENTITY] = True # mark email as done
263 self._save_session(session)
264 defer.returnValue({
265 "next": LoginType.PASSWORD
266 })
267
268 @defer.inlineCallbacks
269 def _do_password(self, request, register_json, session):
270 if (self.hs.config.enable_registration_captcha and
271 not session[LoginType.RECAPTCHA]):
272 # captcha should've been done by this stage!
273 raise SynapseError(400, "Captcha is required.")
274
275 if ("user" in session and "user" in register_json and
276 session["user"] != register_json["user"]):
277 raise SynapseError(
278 400, "Cannot change user ID during registration"
279 )
280
281 password = register_json["password"].encode("utf-8")
282 desired_user_id = (
283 register_json["user"].encode("utf-8")
284 if "user" in register_json else None
285 )
286
287 handler = self.handlers.registration_handler
288 (user_id, token) = yield handler.register(
289 localpart=desired_user_id,
290 password=password
291 )
292
293 if session[LoginType.EMAIL_IDENTITY]:
294 logger.debug("Binding emails %s to %s" % (
295 session["threepidCreds"], user_id)
296 )
297 yield handler.bind_emails(user_id, session["threepidCreds"])
298
299 result = {
300 "user_id": user_id,
301 "access_token": token,
302 "home_server": self.hs.hostname,
303 }
304 self._remove_session(session)
305 defer.returnValue(result)
306
307 @defer.inlineCallbacks
308 def _do_app_service(self, request, register_json, session):
309 as_token = self.auth.get_access_token_from_request(request)
310
311 assert_params_in_dict(register_json, ["user"])
312 user_localpart = register_json["user"].encode("utf-8")
313
314 handler = self.handlers.registration_handler
315 user_id = yield handler.appservice_register(
316 user_localpart, as_token
317 )
318 token = yield self.auth_handler.issue_access_token(user_id)
319 self._remove_session(session)
320 defer.returnValue({
321 "user_id": user_id,
322 "access_token": token,
323 "home_server": self.hs.hostname,
324 })
325
326 @defer.inlineCallbacks
327 def _do_shared_secret(self, request, register_json, session):
328 assert_params_in_dict(register_json, ["mac", "user", "password"])
329
330 if not self.hs.config.registration_shared_secret:
331 raise SynapseError(400, "Shared secret registration is not enabled")
332
333 user = register_json["user"].encode("utf-8")
334 password = register_json["password"].encode("utf-8")
335 admin = register_json.get("admin", None)
336
337 # It's important to check, as we use null bytes as HMAC field separators
338 if b"\x00" in user:
339 raise SynapseError(400, "Invalid user")
340 if b"\x00" in password:
341 raise SynapseError(400, "Invalid password")
342
343 # str() because otherwise hmac complains that 'unicode' does not
344 # have the buffer interface
345 got_mac = str(register_json["mac"])
346
347 want_mac = hmac.new(
348 key=self.hs.config.registration_shared_secret.encode(),
349 digestmod=sha1,
350 )
351 want_mac.update(user)
352 want_mac.update(b"\x00")
353 want_mac.update(password)
354 want_mac.update(b"\x00")
355 want_mac.update(b"admin" if admin else b"notadmin")
356 want_mac = want_mac.hexdigest()
357
358 if compare_digest(want_mac, got_mac):
359 handler = self.handlers.registration_handler
360 user_id, token = yield handler.register(
361 localpart=user.lower(),
362 password=password,
363 admin=bool(admin),
364 )
365 self._remove_session(session)
366 defer.returnValue({
367 "user_id": user_id,
368 "access_token": token,
369 "home_server": self.hs.hostname,
370 })
371 else:
372 raise SynapseError(
373 403, "HMAC incorrect",
374 )
375
376
377 class CreateUserRestServlet(ClientV1RestServlet):
378 """Handles user creation via a server-to-server interface
379 """
380
381 PATTERNS = client_path_patterns("/createUser$", releases=())
382
383 def __init__(self, hs):
384 super(CreateUserRestServlet, self).__init__(hs)
385 self.store = hs.get_datastore()
386 self.handlers = hs.get_handlers()
387
388 @defer.inlineCallbacks
389 def on_POST(self, request):
390 user_json = parse_json_object_from_request(request)
391
392 access_token = self.auth.get_access_token_from_request(request)
393 app_service = self.store.get_app_service_by_token(
394 access_token
395 )
396 if not app_service:
397 raise SynapseError(403, "Invalid application service token.")
398
399 requester = create_requester(app_service.sender)
400
401 logger.debug("creating user: %s", user_json)
402 response = yield self._do_create(requester, user_json)
403
404 defer.returnValue((200, response))
405
406 def on_OPTIONS(self, request):
407 return 403, {}
408
409 @defer.inlineCallbacks
410 def _do_create(self, requester, user_json):
411 assert_params_in_dict(user_json, ["localpart", "displayname"])
412
413 localpart = user_json["localpart"].encode("utf-8")
414 displayname = user_json["displayname"].encode("utf-8")
415 password_hash = user_json["password_hash"].encode("utf-8") \
416 if user_json.get("password_hash") else None
417
418 handler = self.handlers.registration_handler
419 user_id, token = yield handler.get_or_create_user(
420 requester=requester,
421 localpart=localpart,
422 displayname=displayname,
423 password_hash=password_hash
424 )
425
426 defer.returnValue({
427 "user_id": user_id,
428 "access_token": token,
429 "home_server": self.hs.hostname,
430 })
431
432
433 def register_servlets(hs, http_server):
434 RegisterRestServlet(hs).register(http_server)
435 CreateUserRestServlet(hs).register(http_server)
8989 self.handlers = hs.get_handlers()
9090 self.event_creation_hander = hs.get_event_creation_handler()
9191 self.room_member_handler = hs.get_room_member_handler()
92 self.message_handler = hs.get_message_handler()
9293
9394 def register(self, http_server):
9495 # /room/$roomid/state/$eventtype
123124 format = parse_string(request, "format", default="content",
124125 allowed_values=["content", "event"])
125126
126 msg_handler = self.handlers.message_handler
127 msg_handler = self.message_handler
127128 data = yield msg_handler.get_room_data(
128129 user_id=requester.user.to_string(),
129130 room_id=room_id,
376377
377378 def __init__(self, hs):
378379 super(RoomMemberListRestServlet, self).__init__(hs)
379 self.handlers = hs.get_handlers()
380 self.message_handler = hs.get_message_handler()
380381
381382 @defer.inlineCallbacks
382383 def on_GET(self, request, room_id):
383384 # TODO support Pagination stream API (limit/tokens)
384385 requester = yield self.auth.get_user_by_req(request)
385 handler = self.handlers.message_handler
386 events = yield handler.get_state_events(
386 events = yield self.message_handler.get_state_events(
387387 room_id=room_id,
388388 user_id=requester.user.to_string(),
389389 )
405405
406406 def __init__(self, hs):
407407 super(JoinedRoomMemberListRestServlet, self).__init__(hs)
408 self.message_handler = hs.get_handlers().message_handler
408 self.message_handler = hs.get_message_handler()
409409
410410 @defer.inlineCallbacks
411411 def on_GET(self, request, room_id):
426426
427427 def __init__(self, hs):
428428 super(RoomMessageListRestServlet, self).__init__(hs)
429 self.handlers = hs.get_handlers()
429 self.pagination_handler = hs.get_pagination_handler()
430430
431431 @defer.inlineCallbacks
432432 def on_GET(self, request, room_id):
441441 event_filter = Filter(json.loads(filter_json))
442442 else:
443443 event_filter = None
444 handler = self.handlers.message_handler
445 msgs = yield handler.get_messages(
444 msgs = yield self.pagination_handler.get_messages(
446445 room_id=room_id,
447446 requester=requester,
448447 pagin_config=pagination_config,
459458
460459 def __init__(self, hs):
461460 super(RoomStateRestServlet, self).__init__(hs)
462 self.handlers = hs.get_handlers()
461 self.message_handler = hs.get_message_handler()
463462
464463 @defer.inlineCallbacks
465464 def on_GET(self, request, room_id):
466465 requester = yield self.auth.get_user_by_req(request, allow_guest=True)
467 handler = self.handlers.message_handler
468466 # Get all the current state for this room
469 events = yield handler.get_state_events(
467 events = yield self.message_handler.get_state_events(
470468 room_id=room_id,
471469 user_id=requester.user.to_string(),
472470 is_guest=requester.is_guest,
524522 def __init__(self, hs):
525523 super(RoomEventContextServlet, self).__init__(hs)
526524 self.clock = hs.get_clock()
527 self.handlers = hs.get_handlers()
525 self.room_context_handler = hs.get_room_context_handler()
528526
529527 @defer.inlineCallbacks
530528 def on_GET(self, request, room_id, event_id):
532530
533531 limit = parse_integer(request, "limit", default=10)
534532
535 results = yield self.handlers.room_context_handler.get_event_context(
533 # picking the API shape for symmetry with /messages
534 filter_bytes = parse_string(request, "filter")
535 if filter_bytes:
536 filter_json = urlparse.unquote(filter_bytes).decode("UTF-8")
537 event_filter = Filter(json.loads(filter_json))
538 else:
539 event_filter = None
540
541 results = yield self.room_context_handler.get_event_context(
536542 requester.user,
537543 room_id,
538544 event_id,
539545 limit,
546 event_filter,
540547 )
541548
542549 if not results:
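The new `filter` query parameter on the /context endpoint mirrors /messages: the client URL-encodes a JSON filter, and the server unquotes and `json.loads` it as in the hunk above. A hedged sketch of the round trip (the filter fields and the path shown are illustrative, not taken from this diff):

```python
import json
from urllib.parse import quote, unquote

# An illustrative event filter; the real filter fields are defined by the
# Matrix client-server spec.
event_filter = {"types": ["m.room.message"], "limit": 5}

# Client side: serialise and percent-encode the filter into the query string.
query = quote(json.dumps(event_filter))
path = "/_matrix/client/r0/rooms/!room:example.org/context/$event?filter=" + query

# Server side, matching the servlet above: unquote, then json.loads.
assert json.loads(unquote(query)) == event_filter
```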
831838 RoomSendEventRestServlet(hs).register(http_server)
832839 PublicRoomListRestServlet(hs).register(http_server)
833840 RoomStateRestServlet(hs).register(http_server)
834 RoomInitialSyncRestServlet(hs).register(http_server)
835841 RoomRedactEventRestServlet(hs).register(http_server)
836842 RoomTypingRestServlet(hs).register(http_server)
837843 SearchRestServlet(hs).register(http_server)
838844 JoinedRoomsRestServlet(hs).register(http_server)
839845 RoomEventServlet(hs).register(http_server)
840846 RoomEventContextServlet(hs).register(http_server)
847
848
849 def register_deprecated_servlets(hs, http_server):
850 RoomInitialSyncRestServlet(hs).register(http_server)
0 """
1 REST APIs that are only used in v1 (the legacy API).
2 """
0 # -*- coding: utf-8 -*-
1 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2018 New Vector Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """This module contains base REST classes for constructing client v1 servlets.
17 """
18
19 import re
20
21 from synapse.api.urls import CLIENT_PREFIX
22
23
24 def v1_only_client_path_patterns(path_regex, include_in_unstable=True):
25 """Creates a regex compiled client path with the correct client path
26 prefix.
27
28 Args:
29 path_regex (str): The regex string to match. This should NOT have a ^
30 as this will be prefixed.
31 Returns:
32 list of SRE_Pattern
33 """
34 patterns = [re.compile("^" + CLIENT_PREFIX + path_regex)]
35 if include_in_unstable:
36 unstable_prefix = CLIENT_PREFIX.replace("/api/v1", "/unstable")
37 patterns.append(re.compile("^" + unstable_prefix + path_regex))
38 return patterns
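The helper above can be exercised in isolation; a minimal sketch, assuming `synapse.api.urls.CLIENT_PREFIX` resolves to the v1 prefix shown below:

```python
import re

# Assumed value of synapse.api.urls.CLIENT_PREFIX, for illustration only.
CLIENT_PREFIX = "/_matrix/client/api/v1"

def v1_only_client_path_patterns(path_regex, include_in_unstable=True):
    # Compile the v1 pattern, plus an /unstable alias unless disabled.
    patterns = [re.compile("^" + CLIENT_PREFIX + path_regex)]
    if include_in_unstable:
        unstable_prefix = CLIENT_PREFIX.replace("/api/v1", "/unstable")
        patterns.append(re.compile("^" + unstable_prefix + path_regex))
    return patterns

patterns = v1_only_client_path_patterns("/register$")
assert patterns[0].match("/_matrix/client/api/v1/register")
assert patterns[1].match("/_matrix/client/unstable/register")
```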
0 # -*- coding: utf-8 -*-
1 # Copyright 2014-2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """This module contains REST servlets to do with registration: /register"""
16 import hmac
17 import logging
18 from hashlib import sha1
19
20 from twisted.internet import defer
21
22 import synapse.util.stringutils as stringutils
23 from synapse.api.constants import LoginType
24 from synapse.api.errors import Codes, SynapseError
25 from synapse.http.servlet import assert_params_in_dict, parse_json_object_from_request
26 from synapse.rest.client.v1.base import ClientV1RestServlet
27 from synapse.types import create_requester
28
29 from .base import v1_only_client_path_patterns
30
31 logger = logging.getLogger(__name__)
32
33
34 # We ought to be using hmac.compare_digest() but on older Pythons it doesn't
35 # exist. It's a _really minor_ security flaw to use plain string comparison,
36 # because the timing attack is so obscured by all the other code here that
37 # it's unlikely to make much difference
38 if hasattr(hmac, "compare_digest"):
39 compare_digest = hmac.compare_digest
40 else:
41 def compare_digest(a, b):
42 return a == b
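The fallback degrades to ordinary `==`; on any interpreter with `hmac.compare_digest` (Python 2.7.7+/3.3+) the constant-time path is taken. A quick sketch of what the preferred call buys you:

```python
import hmac

# compare_digest runs in time independent of where the inputs first differ,
# which defeats timing attacks against MAC checks; plain == short-circuits.
want = "deadbeef" * 5
assert hmac.compare_digest(want, "deadbeef" * 5)
assert not hmac.compare_digest(want, "deadbeee" + "deadbeef" * 4)
```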
43
44
45 class RegisterRestServlet(ClientV1RestServlet):
46 """Handles registration with the home server.
47
48 This servlet is in control of the registration flow; the registration
49 handler doesn't have a concept of multi-stages or sessions.
50 """
51
52 PATTERNS = v1_only_client_path_patterns("/register$", include_in_unstable=False)
53
54 def __init__(self, hs):
55 """
56 Args:
57 hs (synapse.server.HomeServer): server
58 """
59 super(RegisterRestServlet, self).__init__(hs)
60 # sessions are stored as:
61 # self.sessions = {
62 # "session_id" : { __session_dict__ }
63 # }
64 # TODO: persistent storage
65 self.sessions = {}
66 self.enable_registration = hs.config.enable_registration
67 self.auth = hs.get_auth()
68 self.auth_handler = hs.get_auth_handler()
69 self.handlers = hs.get_handlers()
70
71 def on_GET(self, request):
72
73 require_email = 'email' in self.hs.config.registrations_require_3pid
74 require_msisdn = 'msisdn' in self.hs.config.registrations_require_3pid
75
76 flows = []
77 if self.hs.config.enable_registration_captcha:
78 # only support the email-only flow if we don't require MSISDN 3PIDs
79 if not require_msisdn:
80 flows.extend([
81 {
82 "type": LoginType.RECAPTCHA,
83 "stages": [
84 LoginType.RECAPTCHA,
85 LoginType.EMAIL_IDENTITY,
86 LoginType.PASSWORD
87 ]
88 },
89 ])
90 # only support 3PIDless registration if no 3PIDs are required
91 if not require_email and not require_msisdn:
92 flows.extend([
93 {
94 "type": LoginType.RECAPTCHA,
95 "stages": [LoginType.RECAPTCHA, LoginType.PASSWORD]
96 }
97 ])
98 else:
99 # only support the email-only flow if we don't require MSISDN 3PIDs
100 if require_email or not require_msisdn:
101 flows.extend([
102 {
103 "type": LoginType.EMAIL_IDENTITY,
104 "stages": [
105 LoginType.EMAIL_IDENTITY, LoginType.PASSWORD
106 ]
107 }
108 ])
109 # only support 3PIDless registration if no 3PIDs are required
110 if not require_email and not require_msisdn:
111 flows.extend([
112 {
113 "type": LoginType.PASSWORD
114 }
115 ])
116 return (200, {"flows": flows})
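The advertised flows depend on three config flags. A condensed sketch of the decision table, mirroring the branching in `on_GET` above (the `m.login.*` stage names are the standard Matrix `LoginType` constants; flows are shortened to their stage lists):

```python
def advertised_flows(captcha, require_email, require_msisdn):
    # Returns the stage lists a client would be offered, mirroring on_GET.
    flows = []
    if captcha:
        # only support the email flow if we don't require MSISDN 3PIDs
        if not require_msisdn:
            flows.append(["m.login.recaptcha", "m.login.email.identity",
                          "m.login.password"])
        # only support 3PIDless registration if no 3PIDs are required
        if not require_email and not require_msisdn:
            flows.append(["m.login.recaptcha", "m.login.password"])
    else:
        if require_email or not require_msisdn:
            flows.append(["m.login.email.identity", "m.login.password"])
        if not require_email and not require_msisdn:
            flows.append(["m.login.password"])
    return flows

assert advertised_flows(False, False, False) == [
    ["m.login.email.identity", "m.login.password"],
    ["m.login.password"],
]
```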
117
118 @defer.inlineCallbacks
119 def on_POST(self, request):
120 register_json = parse_json_object_from_request(request)
121
122 session = (register_json["session"]
123 if "session" in register_json else None)
124 login_type = None
125 assert_params_in_dict(register_json, ["type"])
126
127 try:
128 login_type = register_json["type"]
129
130 is_application_server = login_type == LoginType.APPLICATION_SERVICE
131 is_using_shared_secret = login_type == LoginType.SHARED_SECRET
132
133 can_register = (
134 self.enable_registration
135 or is_application_server
136 or is_using_shared_secret
137 )
138 if not can_register:
139 raise SynapseError(403, "Registration has been disabled")
140
141 stages = {
142 LoginType.RECAPTCHA: self._do_recaptcha,
143 LoginType.PASSWORD: self._do_password,
144 LoginType.EMAIL_IDENTITY: self._do_email_identity,
145 LoginType.APPLICATION_SERVICE: self._do_app_service,
146 LoginType.SHARED_SECRET: self._do_shared_secret,
147 }
148
149 session_info = self._get_session_info(request, session)
150 logger.debug("%s : session info %s request info %s",
151 login_type, session_info, register_json)
152 response = yield stages[login_type](
153 request,
154 register_json,
155 session_info
156 )
157
158 if "access_token" not in response:
159 # isn't a final response
160 response["session"] = session_info["id"]
161
162 defer.returnValue((200, response))
163 except KeyError as e:
164 logger.exception(e)
165 raise SynapseError(400, "Missing JSON keys for login type %s." % (
166 login_type,
167 ))
168
169 def on_OPTIONS(self, request):
170 return (200, {})
171
172 def _get_session_info(self, request, session_id):
173 if not session_id:
174 # create a new session
175 while session_id is None or session_id in self.sessions:
176 session_id = stringutils.random_string(24)
177 self.sessions[session_id] = {
178 "id": session_id,
179 LoginType.EMAIL_IDENTITY: False,
180 LoginType.RECAPTCHA: False
181 }
182
183 return self.sessions[session_id]
184
185 def _save_session(self, session):
186 # TODO: Persistent storage
187 logger.debug("Saving session %s", session)
188 self.sessions[session["id"]] = session
189
190 def _remove_session(self, session):
191 logger.debug("Removing session %s", session)
192 self.sessions.pop(session["id"])
193
194 @defer.inlineCallbacks
195 def _do_recaptcha(self, request, register_json, session):
196 if not self.hs.config.enable_registration_captcha:
197 raise SynapseError(400, "Captcha not required.")
198
199 yield self._check_recaptcha(request, register_json, session)
200
201 session[LoginType.RECAPTCHA] = True # mark captcha as done
202 self._save_session(session)
203 defer.returnValue({
204 "next": [LoginType.PASSWORD, LoginType.EMAIL_IDENTITY]
205 })
206
207 @defer.inlineCallbacks
208 def _check_recaptcha(self, request, register_json, session):
209 if ("captcha_bypass_hmac" in register_json and
210 self.hs.config.captcha_bypass_secret):
211 if "user" not in register_json:
212 raise SynapseError(400, "Captcha bypass needs 'user'")
213
214 want = hmac.new(
215 key=self.hs.config.captcha_bypass_secret,
216 msg=register_json["user"],
217 digestmod=sha1,
218 ).hexdigest()
219
220 # str() because otherwise hmac complains that 'unicode' does not
221 # have the buffer interface
222 got = str(register_json["captcha_bypass_hmac"])
223
224 if compare_digest(want, got):
225 session["user"] = register_json["user"]
226 defer.returnValue(None)
227 else:
228 raise SynapseError(
229 400, "Captcha bypass HMAC incorrect",
230 errcode=Codes.CAPTCHA_NEEDED
231 )
232
233 challenge = None
234 user_response = None
235 try:
236 challenge = register_json["challenge"]
237 user_response = register_json["response"]
238 except KeyError:
239 raise SynapseError(400, "Captcha response is required",
240 errcode=Codes.CAPTCHA_NEEDED)
241
242 ip_addr = self.hs.get_ip_from_request(request)
243
244 handler = self.handlers.registration_handler
245 yield handler.check_recaptcha(
246 ip_addr,
247 self.hs.config.recaptcha_private_key,
248 challenge,
249 user_response
250 )
251
252 @defer.inlineCallbacks
253 def _do_email_identity(self, request, register_json, session):
254 if (self.hs.config.enable_registration_captcha and
255 not session[LoginType.RECAPTCHA]):
256 raise SynapseError(400, "Captcha is required.")
257
258 threepidCreds = register_json['threepidCreds']
259 handler = self.handlers.registration_handler
260 logger.debug("Registering email. threepidcreds: %s" % (threepidCreds))
261 yield handler.register_email(threepidCreds)
262 session["threepidCreds"] = threepidCreds # store creds for next stage
263 session[LoginType.EMAIL_IDENTITY] = True # mark email as done
264 self._save_session(session)
265 defer.returnValue({
266 "next": LoginType.PASSWORD
267 })
268
269 @defer.inlineCallbacks
270 def _do_password(self, request, register_json, session):
271 if (self.hs.config.enable_registration_captcha and
272 not session[LoginType.RECAPTCHA]):
273 # captcha should've been done by this stage!
274 raise SynapseError(400, "Captcha is required.")
275
276 if ("user" in session and "user" in register_json and
277 session["user"] != register_json["user"]):
278 raise SynapseError(
279 400, "Cannot change user ID during registration"
280 )
281
282 password = register_json["password"].encode("utf-8")
283 desired_user_id = (
284 register_json["user"].encode("utf-8")
285 if "user" in register_json else None
286 )
287
288 handler = self.handlers.registration_handler
289 (user_id, token) = yield handler.register(
290 localpart=desired_user_id,
291 password=password
292 )
293
294 if session[LoginType.EMAIL_IDENTITY]:
295 logger.debug("Binding emails %s to %s" % (
296 session["threepidCreds"], user_id)
297 )
298 yield handler.bind_emails(user_id, session["threepidCreds"])
299
300 result = {
301 "user_id": user_id,
302 "access_token": token,
303 "home_server": self.hs.hostname,
304 }
305 self._remove_session(session)
306 defer.returnValue(result)
307
308 @defer.inlineCallbacks
309 def _do_app_service(self, request, register_json, session):
310 as_token = self.auth.get_access_token_from_request(request)
311
312 assert_params_in_dict(register_json, ["user"])
313 user_localpart = register_json["user"].encode("utf-8")
314
315 handler = self.handlers.registration_handler
316 user_id = yield handler.appservice_register(
317 user_localpart, as_token
318 )
319 token = yield self.auth_handler.issue_access_token(user_id)
320 self._remove_session(session)
321 defer.returnValue({
322 "user_id": user_id,
323 "access_token": token,
324 "home_server": self.hs.hostname,
325 })
326
327 @defer.inlineCallbacks
328 def _do_shared_secret(self, request, register_json, session):
329 assert_params_in_dict(register_json, ["mac", "user", "password"])
330
331 if not self.hs.config.registration_shared_secret:
332 raise SynapseError(400, "Shared secret registration is not enabled")
333
334 user = register_json["user"].encode("utf-8")
335 password = register_json["password"].encode("utf-8")
336 admin = register_json.get("admin", None)
337
338 # It's important to check, as we use null bytes as HMAC field separators
339 if b"\x00" in user:
340 raise SynapseError(400, "Invalid user")
341 if b"\x00" in password:
342 raise SynapseError(400, "Invalid password")
343
344 # str() because otherwise hmac complains that 'unicode' does not
345 # have the buffer interface
346 got_mac = str(register_json["mac"])
347
348 want_mac = hmac.new(
349 key=self.hs.config.registration_shared_secret.encode(),
350 digestmod=sha1,
351 )
352 want_mac.update(user)
353 want_mac.update(b"\x00")
354 want_mac.update(password)
355 want_mac.update(b"\x00")
356 want_mac.update(b"admin" if admin else b"notadmin")
357 want_mac = want_mac.hexdigest()
358
359 if compare_digest(want_mac, got_mac):
360 handler = self.handlers.registration_handler
361 user_id, token = yield handler.register(
362 localpart=user.lower(),
363 password=password,
364 admin=bool(admin),
365 )
366 self._remove_session(session)
367 defer.returnValue({
368 "user_id": user_id,
369 "access_token": token,
370 "home_server": self.hs.hostname,
371 })
372 else:
373 raise SynapseError(
374 403, "HMAC incorrect",
375 )
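The MAC verified in `_do_shared_secret` is HMAC-SHA1 over `user NUL password NUL admin-flag`, keyed with the registration shared secret. A client-side sketch of computing it (the secret and credentials below are illustrative):

```python
import hmac
from hashlib import sha1

def generate_registration_mac(shared_secret, user, password, admin=False):
    # NUL bytes separate the fields, matching the server-side check above.
    mac = hmac.new(key=shared_secret.encode(), digestmod=sha1)
    mac.update(user.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()

mac = generate_registration_mac("s3cret", "alice", "hunter2")
assert len(mac) == 40  # hex-encoded SHA-1 digest
```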
376
377
378 class CreateUserRestServlet(ClientV1RestServlet):
379 """Handles user creation via a server-to-server interface
380 """
381
382 PATTERNS = v1_only_client_path_patterns("/createUser$")
383
384 def __init__(self, hs):
385 super(CreateUserRestServlet, self).__init__(hs)
386 self.store = hs.get_datastore()
387 self.handlers = hs.get_handlers()
388
389 @defer.inlineCallbacks
390 def on_POST(self, request):
391 user_json = parse_json_object_from_request(request)
392
393 access_token = self.auth.get_access_token_from_request(request)
394 app_service = self.store.get_app_service_by_token(
395 access_token
396 )
397 if not app_service:
398 raise SynapseError(403, "Invalid application service token.")
399
400 requester = create_requester(app_service.sender)
401
402 logger.debug("creating user: %s", user_json)
403 response = yield self._do_create(requester, user_json)
404
405 defer.returnValue((200, response))
406
407 def on_OPTIONS(self, request):
408 return 403, {}
409
410 @defer.inlineCallbacks
411 def _do_create(self, requester, user_json):
412 assert_params_in_dict(user_json, ["localpart", "displayname"])
413
414 localpart = user_json["localpart"].encode("utf-8")
415 displayname = user_json["displayname"].encode("utf-8")
416 password_hash = user_json["password_hash"].encode("utf-8") \
417 if user_json.get("password_hash") else None
418
419 handler = self.handlers.registration_handler
420 user_id, token = yield handler.get_or_create_user(
421 requester=requester,
422 localpart=localpart,
423 displayname=displayname,
424 password_hash=password_hash
425 )
426
427 defer.returnValue({
428 "user_id": user_id,
429 "access_token": token,
430 "home_server": self.hs.hostname,
431 })
432
433
434 def register_servlets(hs, http_server):
435 RegisterRestServlet(hs).register(http_server)
436 CreateUserRestServlet(hs).register(http_server)
192192 def on_POST(self, request):
193193 body = parse_json_object_from_request(request)
194194
195 kind = "user"
196 if "kind" in request.args:
197 kind = request.args["kind"][0]
198
199 if kind == "guest":
195 kind = b"user"
196 if b"kind" in request.args:
197 kind = request.args[b"kind"][0]
198
199 if kind == b"guest":
200200 ret = yield self._do_guest_registration(body)
201201 defer.returnValue(ret)
202202 return
203 elif kind != "user":
203 elif kind != b"user":
204204 raise UnrecognizedRequestError(
205205 "Do not understand membership kind: %s" % (kind,)
206206 )
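The `b"kind"` change above is Python 3 readiness: Twisted parses query strings into `request.args` with bytes keys and bytes values, so lookups must use byte strings. A sketch of the lookup, assuming an args dict shaped like Twisted's:

```python
# Twisted's request.args maps bytes keys to lists of bytes values,
# e.g. for a request to /register?kind=guest.
args = {b"kind": [b"guest"]}

kind = b"user"
if b"kind" in args:
    kind = args[b"kind"][0]

assert kind == b"guest"
```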
388388 assert_params_in_dict(params, ["password"])
389389
390390 desired_username = params.get("username", None)
391 guest_access_token = params.get("guest_access_token", None)
391392 new_password = params.get("password", None)
392 guest_access_token = params.get("guest_access_token", None)
393393
394394 if desired_username is not None:
395395 desired_username = desired_username.lower()
3434 SynapseError,
3535 )
3636 from synapse.http.matrixfederationclient import MatrixFederationHttpClient
37 from synapse.metrics.background_process_metrics import run_as_background_process
3738 from synapse.util.async import Linearizer
3839 from synapse.util.logcontext import make_deferred_yieldable
3940 from synapse.util.retryutils import NotRetryingDestination
99100 )
100101
101102 self.clock.looping_call(
102 self._update_recently_accessed,
103 self._start_update_recently_accessed,
103104 UPDATE_RECENTLY_ACCESSED_TS,
105 )
106
107 def _start_update_recently_accessed(self):
108 return run_as_background_process(
109 "update_recently_accessed_media", self._update_recently_accessed,
104110 )
105111
106112 @defer.inlineCallbacks
372378 logger.warn("HTTP error fetching remote media %s/%s: %s",
373379 server_name, media_id, e.response)
374380 if e.code == twisted.web.http.NOT_FOUND:
375 raise SynapseError.from_http_response_exception(e)
381 raise e.to_synapse_error()
376382 raise SynapseError(502, "Failed to fetch remote media")
377383
378384 except SynapseError:
176176 if res:
177177 with res:
178178 consumer = BackgroundFileConsumer(
179 open(local_path, "w"), self.hs.get_reactor())
179 open(local_path, "wb"), self.hs.get_reactor())
180180 yield res.write_to_consumer(consumer)
181181 yield consumer.wait()
182182 defer.returnValue(local_path)
4040 wrap_json_request_handler,
4141 )
4242 from synapse.http.servlet import parse_integer, parse_string
43 from synapse.metrics.background_process_metrics import run_as_background_process
4344 from synapse.util.async import ObservableDeferred
4445 from synapse.util.caches.expiringcache import ExpiringCache
4546 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
8081 self._cache.start()
8182
8283 self._cleaner_loop = self.clock.looping_call(
83 self._expire_url_cache_data, 10 * 1000
84 self._start_expire_url_cache_data, 10 * 1000,
8485 )
8586
8687 def render_OPTIONS(self, request):
369370 "expires": 60 * 60 * 1000,
370371 "etag": headers["ETag"][0] if "ETag" in headers else None,
371372 })
373
374 def _start_expire_url_cache_data(self):
375 return run_as_background_process(
376 "expire_url_cache_data", self._expire_url_cache_data,
377 )
372378
373379 @defer.inlineCallbacks
374380 def _expire_url_cache_data(self):
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Injectable secrets module for Synapse.
17
18 See https://docs.python.org/3/library/secrets.html#module-secrets for the API
19 used in Python 3.6, and the API emulated in Python 2.7.
20 """
21
22 import sys
23
24 # secrets is available since python 3.6
25 if sys.version_info[0:2] >= (3, 6):
26 import secrets
27
28 def Secrets():
29 return secrets
30
31 else:
32 import os
33 import binascii
34
35 class Secrets(object):
36 def token_bytes(self, nbytes=32):
37 return os.urandom(nbytes)
38
39 def token_hex(self, nbytes=32):
40 return binascii.hexlify(self.token_bytes(nbytes))
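Either branch exposes the same two calls; note that the Python 2 emulation's `token_hex` returns bytes (from `binascii.hexlify`) where Python 3.6's `secrets.token_hex` returns str. A quick check of the shapes, reproducing the shim:

```python
import sys

if sys.version_info[0:2] >= (3, 6):
    import secrets

    def Secrets():
        return secrets
else:
    import binascii
    import os

    class Secrets(object):
        def token_bytes(self, nbytes=32):
            return os.urandom(nbytes)

        def token_hex(self, nbytes=32):
            return binascii.hexlify(self.token_bytes(nbytes))

s = Secrets()
assert len(s.token_bytes(16)) == 16
# hex encoding doubles the length, whichever branch is taken
assert len(s.token_hex(16)) == 32
```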
5151 from synapse.handlers.events import EventHandler, EventStreamHandler
5252 from synapse.handlers.groups_local import GroupsLocalHandler
5353 from synapse.handlers.initial_sync import InitialSyncHandler
54 from synapse.handlers.message import EventCreationHandler
54 from synapse.handlers.message import EventCreationHandler, MessageHandler
55 from synapse.handlers.pagination import PaginationHandler
5556 from synapse.handlers.presence import PresenceHandler
5657 from synapse.handlers.profile import ProfileHandler
5758 from synapse.handlers.read_marker import ReadMarkerHandler
5859 from synapse.handlers.receipts import ReceiptsHandler
59 from synapse.handlers.room import RoomCreationHandler
60 from synapse.handlers.room import RoomContextHandler, RoomCreationHandler
6061 from synapse.handlers.room_list import RoomListHandler
6162 from synapse.handlers.room_member import RoomMemberMasterHandler
6263 from synapse.handlers.room_member_worker import RoomMemberWorkerHandler
7374 MediaRepository,
7475 MediaRepositoryResource,
7576 )
77 from synapse.secrets import Secrets
7678 from synapse.server_notices.server_notices_manager import ServerNoticesManager
7779 from synapse.server_notices.server_notices_sender import ServerNoticesSender
7880 from synapse.server_notices.worker_server_notices_sender import WorkerServerNoticesSender
157159 'groups_server_handler',
158160 'groups_attestation_signing',
159161 'groups_attestation_renewer',
162 'secrets',
160163 'spam_checker',
161164 'room_member_handler',
162165 'federation_registry',
163166 'server_notices_manager',
164167 'server_notices_sender',
168 'message_handler',
169 'pagination_handler',
170 'room_context_handler',
165171 ]
166172
167173 def __init__(self, hostname, reactor=None, **kwargs):
404410 def build_groups_attestation_renewer(self):
405411 return GroupAttestionRenewer(self)
406412
413 def build_secrets(self):
414 return Secrets()
415
407416 def build_spam_checker(self):
408417 return SpamChecker(self)
409418
424433 if self.config.worker_app:
425434 return WorkerServerNoticesSender(self)
426435 return ServerNoticesSender(self)
436
437 def build_message_handler(self):
438 return MessageHandler(self)
439
440 def build_pagination_handler(self):
441 return PaginationHandler(self)
442
443 def build_room_context_handler(self):
444 return RoomContextHandler(self)
427445
428446 def remove_pusher(self, app_id, push_key, user_id):
429447 return self.get_pusherpool().remove_pusher(app_id, push_key, user_id)
1717 import logging
1818 from collections import namedtuple
1919
20 from six import iteritems, itervalues
20 from six import iteritems, iterkeys, itervalues
2121
2222 from frozendict import frozendict
2323
202202 # If this is an outlier, then we know it shouldn't have any current
203203 # state. Certainly store.get_current_state won't return any, and
204204 # persisting the event won't store the state group.
205 context = EventContext()
206205 if old_state:
207 context.prev_state_ids = {
206 prev_state_ids = {
208207 (s.type, s.state_key): s.event_id for s in old_state
209208 }
210209 if event.is_state():
211 context.current_state_ids = dict(context.prev_state_ids)
210 current_state_ids = dict(prev_state_ids)
212211 key = (event.type, event.state_key)
213 context.current_state_ids[key] = event.event_id
212 current_state_ids[key] = event.event_id
214213 else:
215 context.current_state_ids = context.prev_state_ids
214 current_state_ids = prev_state_ids
216215 else:
217 context.current_state_ids = {}
218 context.prev_state_ids = {}
219 context.prev_state_events = []
216 current_state_ids = {}
217 prev_state_ids = {}
220218
221219 # We don't store state for outliers, so we don't generate a state
222 # froup for it.
223 context.state_group = None
220 # group for it.
221 context = EventContext.with_state(
222 state_group=None,
223 current_state_ids=current_state_ids,
224 prev_state_ids=prev_state_ids,
225 )
224226
225227 defer.returnValue(context)
226228
229231 # Let's just correctly fill out the context and create a
230232 # new state group for it.
231233
232 context = EventContext()
233 context.prev_state_ids = {
234 prev_state_ids = {
234235 (s.type, s.state_key): s.event_id for s in old_state
235236 }
236237
237238 if event.is_state():
238239 key = (event.type, event.state_key)
239 if key in context.prev_state_ids:
240 replaces = context.prev_state_ids[key]
240 if key in prev_state_ids:
241 replaces = prev_state_ids[key]
241242 if replaces != event.event_id: # Paranoia check
242243 event.unsigned["replaces_state"] = replaces
243 context.current_state_ids = dict(context.prev_state_ids)
244 context.current_state_ids[key] = event.event_id
244 current_state_ids = dict(prev_state_ids)
245 current_state_ids[key] = event.event_id
245246 else:
246 context.current_state_ids = context.prev_state_ids
247
248 context.state_group = yield self.store.store_state_group(
247 current_state_ids = prev_state_ids
248
249 state_group = yield self.store.store_state_group(
249250 event.event_id,
250251 event.room_id,
251252 prev_group=None,
252253 delta_ids=None,
253 current_state_ids=context.current_state_ids,
254 current_state_ids=current_state_ids,
254255 )
255256
256 context.prev_state_events = []
257 context = EventContext.with_state(
258 state_group=state_group,
259 current_state_ids=current_state_ids,
260 prev_state_ids=prev_state_ids,
261 )
262
257263 defer.returnValue(context)
258264
259265 logger.debug("calling resolve_state_groups from compute_event_context")
261267 event.room_id, [e for e, _ in event.prev_events],
262268 )
263269
264 curr_state = entry.state
265
266 context = EventContext()
267 context.prev_state_ids = curr_state
270 prev_state_ids = entry.state
271 prev_group = None
272 delta_ids = None
273
268274 if event.is_state():
269275 # If this is a state event then we need to create a new state
270276 # group for the state after this event.
271277
272278 key = (event.type, event.state_key)
273 if key in context.prev_state_ids:
274 replaces = context.prev_state_ids[key]
279 if key in prev_state_ids:
280 replaces = prev_state_ids[key]
275281 event.unsigned["replaces_state"] = replaces
276282
277 context.current_state_ids = dict(context.prev_state_ids)
278 context.current_state_ids[key] = event.event_id
283 current_state_ids = dict(prev_state_ids)
284 current_state_ids[key] = event.event_id
279285
280286 if entry.state_group:
281287 # If the state at the event has a state group assigned then
282288 # we can use that as the prev group
283 context.prev_group = entry.state_group
284 context.delta_ids = {
289 prev_group = entry.state_group
290 delta_ids = {
285291 key: event.event_id
286292 }
287293 elif entry.prev_group:
288294 # If the state at the event only has a prev group, then we can
289295 # use that as a prev group too.
290 context.prev_group = entry.prev_group
291 context.delta_ids = dict(entry.delta_ids)
292 context.delta_ids[key] = event.event_id
293
294 context.state_group = yield self.store.store_state_group(
296 prev_group = entry.prev_group
297 delta_ids = dict(entry.delta_ids)
298 delta_ids[key] = event.event_id
299
300 state_group = yield self.store.store_state_group(
295301 event.event_id,
296302 event.room_id,
297 prev_group=context.prev_group,
298 delta_ids=context.delta_ids,
299 current_state_ids=context.current_state_ids,
303 prev_group=prev_group,
304 delta_ids=delta_ids,
305 current_state_ids=current_state_ids,
300306 )
301307 else:
302 context.current_state_ids = context.prev_state_ids
303 context.prev_group = entry.prev_group
304 context.delta_ids = entry.delta_ids
308 current_state_ids = prev_state_ids
309 prev_group = entry.prev_group
310 delta_ids = entry.delta_ids
305311
306312 if entry.state_group is None:
307313 entry.state_group = yield self.store.store_state_group(
309315 event.room_id,
310316 prev_group=entry.prev_group,
311317 delta_ids=entry.delta_ids,
312 current_state_ids=context.current_state_ids,
318 current_state_ids=current_state_ids,
313319 )
314320 entry.state_id = entry.state_group
315321
316 context.state_group = entry.state_group
317
318 context.prev_state_events = []
322 state_group = entry.state_group
323
324 context = EventContext.with_state(
325 state_group=state_group,
326 current_state_ids=current_state_ids,
327 prev_state_ids=prev_state_ids,
328 prev_group=prev_group,
329 delta_ids=delta_ids,
330 )
331
319332 defer.returnValue(context)
320333
321334 @defer.inlineCallbacks
457470 "Resolving state for %s with %d groups", room_id, len(state_groups_ids)
458471 )
459472
460 # build a map from state key to the event_ids which set that state.
461 # dict[(str, str), set[str])
462 state = {}
473 # start by assuming we won't have any conflicted state, and build up the new
474 # state map by iterating through the state groups. If we discover a conflict,
475 # we give up and instead use `resolve_events_with_factory`.
476 #
477 # XXX: is this actually worthwhile, or should we just let
478 # resolve_events_with_factory do it?
479 new_state = {}
480 conflicted_state = False
463481 for st in itervalues(state_groups_ids):
464482 for key, e_id in iteritems(st):
465 state.setdefault(key, set()).add(e_id)
466
467 # build a map from state key to the event_ids which set that state,
468 # including only those where there are state keys in conflict.
469 conflicted_state = {
470 k: list(v)
471 for k, v in iteritems(state)
472 if len(v) > 1
473 }
483 if key in new_state:
484 conflicted_state = True
485 break
486 new_state[key] = e_id
487 if conflicted_state:
488 break
474489
475490 if conflicted_state:
476491 logger.info("Resolving conflicted state for %r", room_id)
477492 with Measure(self.clock, "state._resolve_events"):
478493 new_state = yield resolve_events_with_factory(
479 list(state_groups_ids.values()),
494 list(itervalues(state_groups_ids)),
480495 event_map=event_map,
481496 state_map_factory=state_map_factory,
482497 )
483 else:
484 new_state = {
485 key: e_ids.pop() for key, e_ids in iteritems(state)
486 }
498
499 # if the new state matches any of the input state groups, we can
500 # use that state group again. Otherwise we will generate a state_id
501 # which will be used as a cache key for future resolutions, but
502 # not get persisted.
487503
488504 with Measure(self.clock, "state.create_group_ids"):
489 # if the new state matches any of the input state groups, we can
490 # use that state group again. Otherwise we will generate a state_id
491 # which will be used as a cache key for future resolutions, but
492 # not get persisted.
493 state_group = None
494 new_state_event_ids = frozenset(itervalues(new_state))
495 for sg, events in iteritems(state_groups_ids):
496 if new_state_event_ids == frozenset(e_id for e_id in events):
497 state_group = sg
498 break
499
500 # TODO: We want to create a state group for this set of events, to
501 # increase cache hits, but we need to make sure that it doesn't
502 # end up as a prev_group without being added to the database
503
504 prev_group = None
505 delta_ids = None
506 for old_group, old_ids in iteritems(state_groups_ids):
507 if not set(new_state) - set(old_ids):
508 n_delta_ids = {
509 k: v
510 for k, v in iteritems(new_state)
511 if old_ids.get(k) != v
512 }
513 if not delta_ids or len(n_delta_ids) < len(delta_ids):
514 prev_group = old_group
515 delta_ids = n_delta_ids
516
517 cache = _StateCacheEntry(
518 state=new_state,
519 state_group=state_group,
520 prev_group=prev_group,
521 delta_ids=delta_ids,
522 )
505 cache = _make_state_cache_entry(new_state, state_groups_ids)
523506
524507 if self._state_cache is not None:
525508 self._state_cache[group_names] = cache
527510 defer.returnValue(cache)
528511
529512
513 def _make_state_cache_entry(
514 new_state,
515 state_groups_ids,
516 ):
517 """Given a resolved state, and a set of input state groups, pick one to base
518 a new state group on (if any), and return an appropriately-constructed
519 _StateCacheEntry.
520
521 Args:
522 new_state (dict[(str, str), str]): resolved state map (mapping from
523 (type, state_key) to event_id)
524
525 state_groups_ids (dict[int, dict[(str, str), str]]):
526 map from state group id to the state in that state group
527 (where 'state' is a map from state key to event id)
528
529 Returns:
530 _StateCacheEntry
531 """
532 # if the new state matches any of the input state groups, we can
533 # use that state group again. Otherwise we will generate a state_id
534 # which will be used as a cache key for future resolutions, but
535 # not get persisted.
536
537 # first look for exact matches
538 new_state_event_ids = set(itervalues(new_state))
539 for sg, state in iteritems(state_groups_ids):
540 if len(new_state_event_ids) != len(state):
541 continue
542
543 old_state_event_ids = set(itervalues(state))
544 if new_state_event_ids == old_state_event_ids:
545 # got an exact match.
546 return _StateCacheEntry(
547 state=new_state,
548 state_group=sg,
549 )
550
551 # TODO: We want to create a state group for this set of events, to
552 # increase cache hits, but we need to make sure that it doesn't
553 # end up as a prev_group without being added to the database
554
555 # failing that, look for the closest match.
556 prev_group = None
557 delta_ids = None
558
559 for old_group, old_state in iteritems(state_groups_ids):
560 n_delta_ids = {
561 k: v
562 for k, v in iteritems(new_state)
563 if old_state.get(k) != v
564 }
565 if not delta_ids or len(n_delta_ids) < len(delta_ids):
566 prev_group = old_group
567 delta_ids = n_delta_ids
568
569 return _StateCacheEntry(
570 state=new_state,
571 state_group=None,
572 prev_group=prev_group,
573 delta_ids=delta_ids,
574 )
575
576
530577 def _ordered_events(events):
531578 def key_func(e):
532 return -int(e.depth), hashlib.sha1(e.event_id.encode()).hexdigest()
579 return -int(e.depth), hashlib.sha1(e.event_id.encode('ascii')).hexdigest()
533580
534581 return sorted(events, key=key_func)
535582
568615 with them in different state sets.
569616
570617 Args:
571 state_sets(list[dict[(str, str), str]]):
618 state_sets(iterable[dict[(str, str), str]]):
572619 List of dicts of (type, state_key) -> event_id, which are the
573620 different state groups to resolve.
574621
582629 conflicted_state is a dict mapping (type, state_key) to a set of
583630 event ids for conflicted state keys.
584631 """
585 unconflicted_state = dict(state_sets[0])
632 state_set_iterator = iter(state_sets)
633 unconflicted_state = dict(next(state_set_iterator))
586634 conflicted_state = {}
587635
588 for state_set in state_sets[1:]:
636 for state_set in state_set_iterator:
589637 for key, value in iteritems(state_set):
590638 # Check if there is an unconflicted entry for the state key.
591639 unconflicted_value = unconflicted_state.get(key)
646694 for event_id in event_ids
647695 )
648696 if event_map is not None:
649 needed_events -= set(event_map.iterkeys())
697 needed_events -= set(iterkeys(event_map))
650698
651699 logger.info("Asking for %d conflicted events", len(needed_events))
652700
667715 new_needed_events = set(itervalues(auth_events))
668716 new_needed_events -= needed_events
669717 if event_map is not None:
670 new_needed_events -= set(event_map.iterkeys())
718 new_needed_events -= set(iterkeys(event_map))
671719
672720 logger.info("Asking for %d auth events", len(new_needed_events))
673721
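The `_make_state_cache_entry` helper introduced above either reuses an input state group that matches the resolved state exactly, or picks the input group with the smallest delta as a `prev_group`. A simplified, standalone sketch of that selection logic (plain dicts stand in for synapse's `_StateCacheEntry`; the helper name here is illustrative, not synapse's API):

```python
def make_state_cache_entry(new_state, state_groups_ids):
    """Simplified sketch of the matching logic: return either an exact-match
    group id, or the (prev_group, delta_ids) pair with the smallest delta."""
    new_ids = set(new_state.values())
    # first look for an exact match on the set of event ids
    for sg, state in state_groups_ids.items():
        if len(new_ids) == len(state) and new_ids == set(state.values()):
            return {"state_group": sg, "prev_group": None, "delta_ids": None}

    # failing that, pick the old group needing the fewest changes
    prev_group, delta_ids = None, None
    for old_group, old_state in state_groups_ids.items():
        n_delta = {k: v for k, v in new_state.items() if old_state.get(k) != v}
        if delta_ids is None or len(n_delta) < len(delta_ids):
            prev_group, delta_ids = old_group, n_delta
    return {"state_group": None, "prev_group": prev_group, "delta_ids": delta_ids}

new = {("m.room.member", "@a:hs"): "$e2", ("m.room.name", ""): "$e3"}
groups = {
    1: {("m.room.member", "@a:hs"): "$e1", ("m.room.name", ""): "$e3"},
    2: {("m.room.member", "@a:hs"): "$e2", ("m.room.name", ""): "$e3"},
}
print(make_state_cache_entry(new, groups)["state_group"])  # exact match -> 2
```

As in the real helper, an exact match on the *set of event ids* is enough, since a state map is keyed uniquely by `(type, state_key)`.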
6565 PresenceStore, TransactionStore,
6666 DirectoryStore, KeyStore, StateStore, SignatureStore,
6767 ApplicationServiceStore,
68 EventsStore,
6869 EventFederationStore,
6970 MediaRepositoryStore,
7071 RejectionsStore,
7273 PusherStore,
7374 PushRuleStore,
7475 ApplicationServiceTransactionStore,
75 EventsStore,
7676 ReceiptsStore,
7777 EndToEndKeyStore,
7878 SearchStore,
9393 self._clock = hs.get_clock()
9494 self.database_engine = hs.database_engine
9595
96 self.db_conn = db_conn
9697 self._stream_id_gen = StreamIdGenerator(
9798 db_conn, "events", "stream_ordering",
9899 extra_tables=[("local_invites", "stream_id")]
264265 return count
265266
266267 return self.runInteraction("count_users", _count_users)
268
269 def count_monthly_users(self):
270 """Counts the number of users who used this homeserver in the last 30 days
271
272 This method should be refactored together with count_daily_users; the
273 only reason it has not been is that we are waiting on a definition of MAU
274
275 Returns:
276 Deferred[int]
277 """
278 def _count_monthly_users(txn):
279 thirty_days_ago = int(self._clock.time_msec()) - (1000 * 60 * 60 * 24 * 30)
280 sql = """
281 SELECT COALESCE(count(*), 0) FROM (
282 SELECT user_id FROM user_ips
283 WHERE last_seen > ?
284 GROUP BY user_id
285 ) u
286 """
287
288 txn.execute(sql, (thirty_days_ago,))
289 count, = txn.fetchone()
290 return count
291
292 return self.runInteraction("count_monthly_users", _count_monthly_users)
267293
268294 def count_r30_users(self):
269295 """
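The new `count_monthly_users` query counts distinct users by grouping `user_ips` rows per user before counting. A standalone sketch against an in-memory SQLite database (table shape inferred from the hunk above; the two columns shown are the only ones the query touches) behaves like:

```python
import sqlite3
import time

def count_monthly_users(conn, now_ms):
    """Count distinct users seen in the last 30 days, mirroring the
    COALESCE/GROUP BY query from the hunk above."""
    thirty_days_ago = now_ms - (1000 * 60 * 60 * 24 * 30)
    sql = """
        SELECT COALESCE(count(*), 0) FROM (
            SELECT user_id FROM user_ips
            WHERE last_seen > ?
            GROUP BY user_id
        ) u
    """
    (count,) = conn.execute(sql, (thirty_days_ago,)).fetchone()
    return count

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_ips (user_id TEXT, last_seen INTEGER)")
now = int(time.time() * 1000)
# two recent rows for @alice, one 40-day-old row for @bob
conn.executemany(
    "INSERT INTO user_ips VALUES (?, ?)",
    [("@alice:hs", now), ("@alice:hs", now - 1000),
     ("@bob:hs", now - 40 * 24 * 60 * 60 * 1000)],
)
print(count_monthly_users(conn, now))  # @alice counted once, @bob too old -> 1
```

The `GROUP BY user_id` collapses repeated sightings of the same user, so the outer `count(*)` counts users, not rows.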
310310 after_callbacks = []
311311 exception_callbacks = []
312312
313 if LoggingContext.current_context() == LoggingContext.sentinel:
314 logger.warn(
315 "Starting db txn '%s' from sentinel context",
316 desc,
317 )
318
313319 try:
314320 result = yield self.runWithConnection(
315321 self._new_transaction,
342348 """
343349 parent_context = LoggingContext.current_context()
344350 if parent_context == LoggingContext.sentinel:
345 # warning disabled for 0.33.0 release; proper fixes will land imminently.
346 # logger.warn(
347 # "Running db txn from sentinel context: metrics will be lost",
348 # )
351 logger.warn(
352 "Starting db connection from sentinel context: metrics will be lost",
353 )
349354 parent_context = None
350355
351356 start_time = time.time()
2121
2222 from synapse.appservice import AppServiceTransaction
2323 from synapse.config.appservice import load_appservices
24 from synapse.storage.events import EventsWorkerStore
24 from synapse.storage.events_worker import EventsWorkerStore
2525
2626 from ._base import SQLBaseStore
2727
1818
1919 from twisted.internet import defer
2020
21 from synapse.metrics.background_process_metrics import run_as_background_process
22
2123 from . import engines
2224 from ._base import SQLBaseStore
2325
8688 self._background_update_handlers = {}
8789 self._all_done = False
8890
91 def start_doing_background_updates(self):
92 run_as_background_process(
93 "background_updates", self._run_background_updates,
94 )
95
8996 @defer.inlineCallbacks
90 def start_doing_background_updates(self):
97 def _run_background_updates(self):
9198 logger.info("Starting background schema updates")
92
9399 while True:
94100 yield self.hs.get_clock().sleep(
95101 self.BACKGROUND_UPDATE_INTERVAL_MS / 1000.)
1818
1919 from twisted.internet import defer
2020
21 from synapse.metrics.background_process_metrics import run_as_background_process
2122 from synapse.util.caches import CACHE_SIZE_FACTOR
2223
2324 from . import background_updates
9293 self._batch_row_update[key] = (user_agent, device_id, now)
9394
9495 def _update_client_ips_batch(self):
95 to_update = self._batch_row_update
96 self._batch_row_update = {}
97 return self.runInteraction(
98 "_update_client_ips_batch", self._update_client_ips_batch_txn, to_update
96 def update():
97 to_update = self._batch_row_update
98 self._batch_row_update = {}
99 return self.runInteraction(
100 "_update_client_ips_batch", self._update_client_ips_batch_txn,
101 to_update,
102 )
103
104 return run_as_background_process(
105 "update_client_ips", update,
99106 )
100107
101108 def _update_client_ips_batch_txn(self, txn, to_update):
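A recurring refactor in this diff is wrapping existing work in `run_as_background_process` so that resource usage is attributed to a named background process rather than lost. A toy stand-in for the helper (the real synapse version creates a fresh logcontext and records Prometheus metrics; here we only log around the callable) shows the shape of the `_update_client_ips_batch` change:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("background")

def run_as_background_process(desc, func, *args, **kwargs):
    """Toy stand-in: the real helper tracks the work under `desc` in
    metrics; here we just log entry/exit around the wrapped callable."""
    logger.info("Starting background process %s", desc)
    try:
        return func(*args, **kwargs)
    finally:
        logger.info("Background process %s finished", desc)

# the diff's pattern: the batch method becomes a thin wrapper that hands
# the real work (a closure) to run_as_background_process
class ClientIpStore:
    def __init__(self):
        self._batch_row_update = {"k": ("agent", "dev", 123)}
        self.flushed = None

    def _update_client_ips_batch(self):
        def update():
            to_update = self._batch_row_update
            self._batch_row_update = {}
            self.flushed = to_update
            return len(to_update)

        return run_as_background_process("update_client_ips", update)

store = ClientIpStore()
print(store._update_client_ips_batch())  # -> 1, and the batch dict is reset
```

Swapping the batch dict out *inside* the wrapped closure keeps the read-and-reset atomic with respect to the background-process boundary, matching the structure of the real change.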
2020 from twisted.internet import defer
2121
2222 from synapse.api.errors import StoreError
23 from synapse.metrics.background_process_metrics import run_as_background_process
2324 from synapse.util.caches.descriptors import cached, cachedInlineCallbacks, cachedList
2425
2526 from ._base import Cache, SQLBaseStore
247248
248249 def _update_remote_device_list_cache_entry_txn(self, txn, user_id, device_id,
249250 content, stream_id):
250 self._simple_upsert_txn(
251 txn,
252 table="device_lists_remote_cache",
253 keyvalues={
254 "user_id": user_id,
255 "device_id": device_id,
256 },
257 values={
258 "content": json.dumps(content),
259 }
260 )
251 if content.get("deleted"):
252 self._simple_delete_txn(
253 txn,
254 table="device_lists_remote_cache",
255 keyvalues={
256 "user_id": user_id,
257 "device_id": device_id,
258 },
259 )
260
261 txn.call_after(
262 self.device_id_exists_cache.invalidate, (user_id, device_id,)
263 )
264 else:
265 self._simple_upsert_txn(
266 txn,
267 table="device_lists_remote_cache",
268 keyvalues={
269 "user_id": user_id,
270 "device_id": device_id,
271 },
272 values={
273 "content": json.dumps(content),
274 }
275 )
261276
262277 txn.call_after(self._get_cached_user_device.invalidate, (user_id, device_id,))
263278 txn.call_after(self._get_cached_devices_for_user.invalidate, (user_id,))
365380 now_stream_id = max(stream_id for stream_id in itervalues(query_map))
366381
367382 devices = self._get_e2e_device_keys_txn(
368 txn, query_map.keys(), include_all_devices=True
383 txn, query_map.keys(), include_all_devices=True, include_deleted_devices=True
369384 )
370385
371386 prev_sent_id_sql = """
392407
393408 prev_id = stream_id
394409
395 key_json = device.get("key_json", None)
396 if key_json:
397 result["keys"] = json.loads(key_json)
398 device_display_name = device.get("device_display_name", None)
399 if device_display_name:
400 result["device_display_name"] = device_display_name
410 if device is not None:
411 key_json = device.get("key_json", None)
412 if key_json:
413 result["keys"] = json.loads(key_json)
414 device_display_name = device.get("device_display_name", None)
415 if device_display_name:
416 result["device_display_name"] = device_display_name
417 else:
418 result["deleted"] = True
401419
402420 results.append(result)
403421
693711
694712 logger.info("Pruned %d device list outbound pokes", txn.rowcount)
695713
696 return self.runInteraction(
697 "_prune_old_outbound_device_pokes", _prune_txn
698 )
714 return run_as_background_process(
715 "prune_old_outbound_device_pokes",
716 self.runInteraction,
717 "_prune_old_outbound_device_pokes",
718 _prune_txn,
719 )
6363 )
6464
6565 @defer.inlineCallbacks
66 def get_e2e_device_keys(self, query_list, include_all_devices=False):
66 def get_e2e_device_keys(
67 self, query_list, include_all_devices=False,
68 include_deleted_devices=False,
69 ):
6770 """Fetch a list of device keys.
6871 Args:
6972 query_list(list): List of pairs of user_ids and device_ids.
7073 include_all_devices (bool): whether to include entries for devices
7174 that don't have device keys
75 include_deleted_devices (bool): whether to include null entries for
76 devices which no longer exist (but were in the query_list).
77 This option only takes effect if include_all_devices is true.
7278 Returns:
7379 Dict mapping from user-id to dict mapping from device_id to
7480 dict containing "key_json", "device_display_name".
7884
7985 results = yield self.runInteraction(
8086 "get_e2e_device_keys", self._get_e2e_device_keys_txn,
81 query_list, include_all_devices,
87 query_list, include_all_devices, include_deleted_devices,
8288 )
8389
8490 for user_id, device_keys in iteritems(results):
8793
8894 defer.returnValue(results)
8995
90 def _get_e2e_device_keys_txn(self, txn, query_list, include_all_devices):
96 def _get_e2e_device_keys_txn(
97 self, txn, query_list, include_all_devices=False,
98 include_deleted_devices=False,
99 ):
91100 query_clauses = []
92101 query_params = []
102
103 if include_all_devices is False:
104 include_deleted_devices = False
105
106 if include_deleted_devices:
107 deleted_devices = set(query_list)
93108
94109 for (user_id, device_id) in query_list:
95110 query_clause = "user_id = ?"
118133
119134 result = {}
120135 for row in rows:
136 if include_deleted_devices:
137 deleted_devices.remove((row["user_id"], row["device_id"]))
121138 result.setdefault(row["user_id"], {})[row["device_id"]] = row
139
140 if include_deleted_devices:
141 for user_id, device_id in deleted_devices:
142 result.setdefault(user_id, {})[device_id] = None
122143
123144 return result
124145
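The `include_deleted_devices` bookkeeping above starts from the full query list as a set, discards every pair a database row accounts for, and reports the remainder as `None` entries. A self-contained sketch of that set-difference pattern (function name and row shape are illustrative; the real code works over DB cursor rows):

```python
def fill_in_deleted_devices(query_list, rows, include_deleted_devices=True):
    """Sketch of the set-difference bookkeeping: any (user_id, device_id)
    pair from the query that no row matched is reported with a None value,
    standing in for a deleted device."""
    deleted = set(query_list) if include_deleted_devices else set()
    result = {}
    for user_id, device_id, key_json in rows:
        # discard() rather than remove(), so unqueried rows are harmless
        deleted.discard((user_id, device_id))
        result.setdefault(user_id, {})[device_id] = key_json
    for user_id, device_id in deleted:
        result.setdefault(user_id, {})[device_id] = None
    return result

rows = [("@a:hs", "DEV1", '{"keys": {}}')]
out = fill_in_deleted_devices([("@a:hs", "DEV1"), ("@a:hs", "DEV2")], rows)
print(out["@a:hs"]["DEV2"])  # no row matched -> None
```

Downstream, a `None` device entry is what lets the sender emit `"deleted": True` in the outbound device-list update, as the `get_devices_by_remote` hunk above shows.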
2222 from twisted.internet import defer
2323
2424 from synapse.api.errors import StoreError
25 from synapse.metrics.background_process_metrics import run_as_background_process
2526 from synapse.storage._base import SQLBaseStore
26 from synapse.storage.events import EventsWorkerStore
27 from synapse.storage.events_worker import EventsWorkerStore
2728 from synapse.storage.signatures import SignatureWorkerStore
2829 from synapse.util.caches.descriptors import cached
2930
112113 sql = (
113114 "SELECT b.event_id, MAX(e.depth) FROM events as e"
114115 " INNER JOIN event_edges as g"
115 " ON g.event_id = e.event_id AND g.room_id = e.room_id"
116 " ON g.event_id = e.event_id"
116117 " INNER JOIN event_backward_extremities as b"
117 " ON g.prev_event_id = b.event_id AND g.room_id = b.room_id"
118 " ON g.prev_event_id = b.event_id"
118119 " WHERE b.room_id = ? AND g.is_state is ?"
119120 " GROUP BY b.event_id"
120121 )
328329 "SELECT depth, prev_event_id FROM event_edges"
329330 " INNER JOIN events"
330331 " ON prev_event_id = events.event_id"
331 " AND event_edges.room_id = events.room_id"
332 " WHERE event_edges.room_id = ? AND event_edges.event_id = ?"
332 " WHERE event_edges.event_id = ?"
333333 " AND event_edges.is_state = ?"
334334 " LIMIT ?"
335335 )
364364
365365 txn.execute(
366366 query,
367 (room_id, event_id, False, limit - len(event_results))
367 (event_id, False, limit - len(event_results))
368368 )
369369
370370 for row in txn:
401401
402402 query = (
403403 "SELECT prev_event_id FROM event_edges "
404 "WHERE room_id = ? AND event_id = ? AND is_state = ? "
404 "WHERE event_id = ? AND is_state = ? "
405405 "LIMIT ?"
406406 )
407407
410410 for event_id in front:
411411 txn.execute(
412412 query,
413 (room_id, event_id, False, limit - len(event_results))
413 (event_id, False, limit - len(event_results))
414414 )
415415
416416 for e_id, in txn:
446446 )
447447
448448 hs.get_clock().looping_call(
449 self._delete_old_forward_extrem_cache, 60 * 60 * 1000
449 self._delete_old_forward_extrem_cache, 60 * 60 * 1000,
450450 )
451451
452452 def _update_min_depth_for_room_txn(self, txn, room_id, depth):
548548 sql,
549549 (self.stream_ordering_month_ago, self.stream_ordering_month_ago,)
550550 )
551 return self.runInteraction(
551 return run_as_background_process(
552 "delete_old_forward_extrem_cache",
553 self.runInteraction,
552554 "_delete_old_forward_extrem_cache",
553 _delete_old_forward_extrem_cache_txn
555 _delete_old_forward_extrem_cache_txn,
554556 )
555557
556558 def clean_room_for_join(self, room_id):
2121
2222 from twisted.internet import defer
2323
24 from synapse.metrics.background_process_metrics import run_as_background_process
2425 from synapse.storage._base import LoggingTransaction, SQLBaseStore
2526 from synapse.util.caches.descriptors import cachedInlineCallbacks
2627
457458 "Error removing push actions after event persistence failure",
458459 )
459460
460 @defer.inlineCallbacks
461461 def _find_stream_orderings_for_times(self):
462 yield self.runInteraction(
462 return run_as_background_process(
463 "event_push_action_stream_orderings",
464 self.runInteraction,
463465 "_find_stream_orderings_for_times",
464 self._find_stream_orderings_for_times_txn
466 self._find_stream_orderings_for_times_txn,
465467 )
466468
467469 def _find_stream_orderings_for_times_txn(self, txn):
603605
604606 self._doing_notif_rotation = False
605607 self._rotate_notif_loop = self._clock.looping_call(
606 self._rotate_notifs, 30 * 60 * 1000
608 self._start_rotate_notifs, 30 * 60 * 1000,
607609 )
608610
609611 def _set_push_actions_for_event_and_users_txn(self, txn, events_and_contexts,
785787 DELETE FROM event_push_summary
786788 WHERE room_id = ? AND user_id = ? AND stream_ordering <= ?
787789 """, (room_id, user_id, stream_ordering))
790
791 def _start_rotate_notifs(self):
792 return run_as_background_process("rotate_notifs", self._rotate_notifs)
788793
789794 @defer.inlineCallbacks
790795 def _rotate_notifs(self):
1818 from collections import OrderedDict, deque, namedtuple
1919 from functools import wraps
2020
21 from six import iteritems, itervalues
21 from six import iteritems
2222 from six.moves import range
2323
2424 from canonicaljson import json
3232 # these are only included to make the type annotations work
3333 from synapse.events import EventBase # noqa: F401
3434 from synapse.events.snapshot import EventContext # noqa: F401
35 from synapse.metrics.background_process_metrics import run_as_background_process
36 from synapse.storage.background_updates import BackgroundUpdateStore
37 from synapse.storage.event_federation import EventFederationStore
3538 from synapse.storage.events_worker import EventsWorkerStore
3639 from synapse.types import RoomStreamToken, get_domain_from_id
3740 from synapse.util.async import ObservableDeferred
6366
6467
6568 def encode_json(json_object):
66 return frozendict_json_encoder.encode(json_object)
69 """
70 Encode a Python object as JSON and return it in a Unicode string.
71 """
72 out = frozendict_json_encoder.encode(json_object)
73 if isinstance(out, bytes):
74 out = out.decode('utf8')
75 return out
6776
6877
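The `encode_json` change guards against encoder configurations that hand back `bytes` rather than text (notably on Python 2). A minimal sketch with the stdlib encoder in place of synapse's frozendict-aware one:

```python
import json

def encode_json(json_object):
    """Encode a Python object as JSON, normalising to a text string:
    some encoder configurations can return bytes instead of str."""
    out = json.dumps(json_object)
    if isinstance(out, bytes):
        out = out.decode('utf8')
    return out

print(encode_json({"msgtype": "m.text"}))  # -> {"msgtype": "m.text"}
```

On Python 3 the `isinstance` branch is a no-op, so the guard is cheap where it is not needed.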
6978 class _EventPeristenceQueue(object):
140149 try:
141150 queue = self._get_drainining_queue(room_id)
142151 for item in queue:
143 # handle_queue_loop runs in the sentinel logcontext, so
144 # there is no need to preserve_fn when running the
145 # callbacks on the deferred.
146152 try:
147153 ret = yield per_item_callback(item)
148 item.deferred.callback(ret)
149154 except Exception:
150 item.deferred.errback()
155 with PreserveLoggingContext():
156 item.deferred.errback()
157 else:
158 with PreserveLoggingContext():
159 item.deferred.callback(ret)
151160 finally:
152161 queue = self._event_persist_queues.pop(room_id, None)
153162 if queue:
154163 self._event_persist_queues[room_id] = queue
155164 self._currently_persisting_rooms.discard(room_id)
156165
157 # set handle_queue_loop off on the background. We don't want to
158 # attribute work done in it to the current request, so we drop the
159 # logcontext altogether.
160 with PreserveLoggingContext():
161 handle_queue_loop()
166 # set handle_queue_loop off in the background
167 run_as_background_process("persist_events", handle_queue_loop)
162168
163169 def _get_drainining_queue(self, room_id):
164170 queue = self._event_persist_queues.setdefault(room_id, deque())
194200 return f
195201
196202
197 class EventsStore(EventsWorkerStore):
203 # inherits from EventFederationStore so that we can call _update_backward_extremities
204 # and _handle_mult_prev_events (though arguably those could both be moved in here)
205 class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore):
198206 EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
199207 EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
200208
232240
233241 self._state_resolution_handler = hs.get_state_resolution_handler()
234242
243 @defer.inlineCallbacks
235244 def persist_events(self, events_and_contexts, backfilled=False):
236245 """
237246 Write events to the database
238247 Args:
239248 events_and_contexts: list of tuples of (event, context)
240 backfilled: ?
249 backfilled (bool): Whether the results are retrieved from federation
250 via backfill or not. Used to determine if they're "new" events
251 which might update the current state etc.
252
253 Returns:
254 Deferred[int]: the stream ordering of the latest persisted event
241255 """
242256 partitioned = {}
243257 for event, ctx in events_and_contexts:
254268 for room_id in partitioned:
255269 self._maybe_start_persisting(room_id)
256270
257 return make_deferred_yieldable(
271 yield make_deferred_yieldable(
258272 defer.gatherResults(deferreds, consumeErrors=True)
259273 )
274
275 max_persisted_id = yield self._stream_id_gen.get_current_token()
276
277 defer.returnValue(max_persisted_id)
260278
261279 @defer.inlineCallbacks
262280 @log_function
344362 new_forward_extremeties = {}
345363
346364 # map room_id->(type,state_key)->event_id tracking the full
347 # state in each room after adding these events
365 # state in each room after adding these events.
366 # This is simply used to prefill the get_current_state_ids
367 # cache
348368 current_state_for_room = {}
349369
350 # map room_id->(to_delete, to_insert) where each entry is
351 # a map (type,key)->event_id giving the state delta in each
370 # map room_id->(to_delete, to_insert) where to_delete is a list
371 # of type/state keys to remove from current state, and to_insert
372 # is a map (type,key)->event_id giving the state delta in each
352373 # room
353374 state_delta_for_room = {}
354375
418439 logger.info(
419440 "Calculating state delta for room %s", room_id,
420441 )
421 current_state = yield self._get_new_state_after_events(
422 room_id,
423 ev_ctx_rm,
424 latest_event_ids,
425 new_latest_event_ids,
426 )
442 with Measure(
443 self._clock,
444 "persist_events.get_new_state_after_events",
445 ):
446 res = yield self._get_new_state_after_events(
447 room_id,
448 ev_ctx_rm,
449 latest_event_ids,
450 new_latest_event_ids,
451 )
452 current_state, delta_ids = res
453
454 # If either are not None then there has been a change,
455 # and we need to work out the delta (or use that
456 # given)
457 if delta_ids is not None:
458 # If there is a delta we know that we've
459 # only added or replaced state, never
460 # removed keys entirely.
461 state_delta_for_room[room_id] = ([], delta_ids)
462 elif current_state is not None:
463 with Measure(
464 self._clock,
465 "persist_events.calculate_state_delta",
466 ):
467 delta = yield self._calculate_state_delta(
468 room_id, current_state,
469 )
470 state_delta_for_room[room_id] = delta
471
472 # If we have the current_state then lets prefill
473 # the cache with it.
427474 if current_state is not None:
428475 current_state_for_room[room_id] = current_state
429 delta = yield self._calculate_state_delta(
430 room_id, current_state,
431 )
432 if delta is not None:
433 state_delta_for_room[room_id] = delta
434476
435477 yield self.runInteraction(
436478 "persist_events",
497539 iterable=list(new_latest_event_ids),
498540 retcols=["prev_event_id"],
499541 keyvalues={
500 "room_id": room_id,
501542 "is_state": False,
502543 },
503544 desc="_calculate_new_extremeties",
529570 the new forward extremities for the room.
530571
531572 Returns:
532 Deferred[dict[(str,str), str]|None]:
533 None if there are no changes to the room state, or
534 a dict of (type, state_key) -> event_id].
573 Deferred[tuple[dict[(str,str), str]|None, dict[(str,str), str]|None]]:
574 Returns a tuple of two state maps, the first being the full new current
575 state and the second being the delta to the existing current state.
576 If both are None then there has been no change.
577
578 If there has been a change then we only return the delta if it's
579 already been calculated. Conversely, if we do know the delta then
580 the new current state is only returned if we've already calculated
581 it.
535582 """
536583
537584 if not new_latest_event_ids:
539586
540587 # map from state_group to ((type, key) -> event_id) state map
541588 state_groups_map = {}
589
590 # Map from (prev state group, new state group) -> delta state dict
591 state_group_deltas = {}
592
542593 for ev, ctx in events_context:
543594 if ctx.state_group is None:
544 # I don't think this can happen, but let's double-check
545 raise Exception(
546 "Context for new extremity event %s has no state "
547 "group" % (ev.event_id, ),
548 )
595 # This should only happen for outlier events.
596 if not ev.internal_metadata.is_outlier():
597 raise Exception(
598 "Context for new event %s has no state "
599 "group" % (ev.event_id, ),
600 )
601 continue
549602
550603 if ctx.state_group in state_groups_map:
551604 continue
552605
553 state_groups_map[ctx.state_group] = ctx.current_state_ids
606 # We're only interested in pulling out state that has already
607 # been cached in the context. We'll pull stuff out of the DB later
608 # if necessary.
609 current_state_ids = ctx.get_cached_current_state_ids()
610 if current_state_ids is not None:
611 state_groups_map[ctx.state_group] = current_state_ids
612
613 if ctx.prev_group:
614 state_group_deltas[(ctx.prev_group, ctx.state_group)] = ctx.delta_ids
554615
555616 # We need to map the event_ids to their state groups. First, let's
556617 # check if the event is one we're persisting, in which case we can
565626 for event_id in new_latest_event_ids:
566627 # First search in the list of new events we're adding.
567628 for ev, ctx in events_context:
568 if event_id == ev.event_id:
629 if event_id == ev.event_id and ctx.state_group is not None:
569630 event_id_to_state_group[event_id] = ctx.state_group
570631 break
571632 else:
593654 # If they old and new groups are the same then we don't need to do
594655 # anything.
595656 if old_state_groups == new_state_groups:
596 return
657 defer.returnValue((None, None))
658
659 if len(new_state_groups) == 1 and len(old_state_groups) == 1:
660 # If we're going from one state group to another, lets check if
661 # we have a delta for that transition. If we do then we can just
662 # return that.
663
664 new_state_group = next(iter(new_state_groups))
665 old_state_group = next(iter(old_state_groups))
666
667 delta_ids = state_group_deltas.get(
668 (old_state_group, new_state_group,), None
669 )
670 if delta_ids is not None:
671 # We have a delta from the existing to new current state,
672 # so lets just return that. If we happen to already have
673 # the current state in memory then lets also return that,
674 # but it doesn't matter if we don't.
675 new_state = state_groups_map.get(new_state_group)
676 defer.returnValue((new_state, delta_ids))
597677
598678 # Now that we have calculated new_state_groups we need to get
599679 # their state IDs so we can resolve to a single state set.
605685 if len(new_state_groups) == 1:
606686 # If there is only one state group, then we know what the current
607687 # state is.
608 defer.returnValue(state_groups_map[new_state_groups.pop()])
688 defer.returnValue((state_groups_map[new_state_groups.pop()], None))
609689
610690 # Ok, we need to defer to the state handler to resolve our state sets.
611691
624704 room_id, state_groups, events_map, get_events
625705 )
626706
627 defer.returnValue(res.state)
707 defer.returnValue((res.state, None))
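The decision tree in `_get_new_state_after_events` can be sketched standalone. This is a plain-Python sketch mirroring the local names (`state_group_deltas`, `state_groups_map`); full state resolution is stubbed out since it needs the state handler, and the example IDs are made up:

```python
def new_state_after_events(old_groups, new_groups,
                           state_group_deltas, state_groups_map):
    if old_groups == new_groups:
        # extremity state groups unchanged: no change to current state
        return None, None
    if len(new_groups) == 1 and len(old_groups) == 1:
        new_g = next(iter(new_groups))
        old_g = next(iter(old_groups))
        delta_ids = state_group_deltas.get((old_g, new_g))
        if delta_ids is not None:
            # we already know the delta; the full state is optional
            return state_groups_map.get(new_g), delta_ids
    if len(new_groups) == 1:
        # a single new state group *is* the new current state
        # (assumes its state is already in the map; the real code
        # fetches it from the DB otherwise)
        return state_groups_map[next(iter(new_groups))], None
    raise NotImplementedError("multiple groups: needs state resolution")

# old group 1 -> new group 2, with a cached delta for that transition
deltas = {(1, 2): {("m.room.name", ""): "$name2"}}
result = new_state_after_events({1}, {2}, deltas, {})
# -> (None, {('m.room.name', ''): '$name2'})
```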
628708
629709 @defer.inlineCallbacks
630710 def _calculate_state_delta(self, room_id, current_state):
633713 Assumes that we are only persisting events for one room at a time.
634714
635715 Returns:
636 2-tuple (to_delete, to_insert) where both are state dicts,
637 i.e. (type, state_key) -> event_id. `to_delete` are the entries to
638 first be deleted from current_state_events, `to_insert` are entries
639 to insert.
716 tuple[list, dict] (to_delete, to_insert): where `to_delete` are the
717 type/state_keys to remove from current_state_events and `to_insert`
718 are the updates to current_state_events.
640719 """
641720 existing_state = yield self.get_current_state_ids(room_id)
642721
643 existing_events = set(itervalues(existing_state))
644 new_events = set(ev_id for ev_id in itervalues(current_state))
645 changed_events = existing_events ^ new_events
646
647 if not changed_events:
648 return
649
650 to_delete = {
651 key: ev_id for key, ev_id in iteritems(existing_state)
652 if ev_id in changed_events
653 }
654 events_to_insert = (new_events - existing_events)
722 to_delete = [
723 key for key in existing_state
724 if key not in current_state
725 ]
726
655727 to_insert = {
656728 key: ev_id for key, ev_id in iteritems(current_state)
657 if ev_id in events_to_insert
729 if ev_id != existing_state.get(key)
658730 }
659731
660732 defer.returnValue((to_delete, to_insert))
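The rewritten delta computation above can be sketched with plain dicts of `(type, state_key) -> event_id` (no database or Twisted; the example room state is made up):

```python
def calculate_state_delta(existing_state, current_state):
    # keys present in the old current state but absent from the new one
    to_delete = [key for key in existing_state if key not in current_state]
    # entries that are new, or whose event_id has changed
    to_insert = {
        key: ev_id
        for key, ev_id in current_state.items()
        if ev_id != existing_state.get(key)
    }
    return to_delete, to_insert

existing = {("m.room.member", "@a:hs"): "$e1", ("m.room.name", ""): "$e2"}
current = {("m.room.member", "@a:hs"): "$e3", ("m.room.topic", ""): "$e4"}
to_delete, to_insert = calculate_state_delta(existing, current)
# the room name was removed, the membership changed, the topic is new
```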
677749 delete_existing (bool): True to purge existing table rows for the
678750 events from the database. This is useful when retrying due to
679751 IntegrityError.
680 state_delta_for_room (dict[str, (list[str], list[str])]):
752 state_delta_for_room (dict[str, (list, dict)]):
681753 The current-state delta for each room. For each room, a tuple
682 (to_delete, to_insert), being a list of event ids to be removed
683 from the current state, and a list of event ids to be added to
754 (to_delete, to_insert), being a list of type/state keys to be
755 removed from the current state, and a state set to be added to
684756 the current state.
685757 new_forward_extremeties (dict[str, list[str]]):
686758 The new forward extremities for each room. For each room, a
758830 def _update_current_state_txn(self, txn, state_delta_by_room, max_stream_order):
759831 for room_id, current_state_tuple in iteritems(state_delta_by_room):
760832 to_delete, to_insert = current_state_tuple
833
834 # First we add entries to the current_state_delta_stream. We
835 # do this before updating the current_state_events table so
836 # that we can use it to calculate the `prev_event_id`. (This
837 # allows us to not have to pull out the existing state
838 # unnecessarily).
839 sql = """
840 INSERT INTO current_state_delta_stream
841 (stream_id, room_id, type, state_key, event_id, prev_event_id)
842 SELECT ?, ?, ?, ?, ?, (
843 SELECT event_id FROM current_state_events
844 WHERE room_id = ? AND type = ? AND state_key = ?
845 )
846 """
847 txn.executemany(sql, (
848 (
849 max_stream_order, room_id, etype, state_key, None,
850 room_id, etype, state_key,
851 )
852 for etype, state_key in to_delete
853 # We sanity check that we're deleting rather than updating
854 if (etype, state_key) not in to_insert
855 ))
856 txn.executemany(sql, (
857 (
858 max_stream_order, room_id, etype, state_key, ev_id,
859 room_id, etype, state_key,
860 )
861 for (etype, state_key), ev_id in iteritems(to_insert)
862 ))
863
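The correlated sub-select above fills in each delta-stream row's `prev_event_id` by reading `current_state_events` before that table is updated. A minimal standalone sketch with an in-memory SQLite database (table shapes trimmed to the columns the query touches; the IDs are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE current_state_events "
    "(room_id TEXT, type TEXT, state_key TEXT, event_id TEXT)"
)
conn.execute(
    "CREATE TABLE current_state_delta_stream "
    "(stream_id INTEGER, room_id TEXT, type TEXT, state_key TEXT, "
    "event_id TEXT, prev_event_id TEXT)"
)
# the state row as it stands *before* the update
conn.execute(
    "INSERT INTO current_state_events VALUES ('!r', 'm.room.name', '', '$old')"
)

sql = """
    INSERT INTO current_state_delta_stream
    (stream_id, room_id, type, state_key, event_id, prev_event_id)
    SELECT ?, ?, ?, ?, ?, (
        SELECT event_id FROM current_state_events
        WHERE room_id = ? AND type = ? AND state_key = ?
    )
"""
conn.execute(sql, (1, "!r", "m.room.name", "", "$new", "!r", "m.room.name", ""))

row = conn.execute(
    "SELECT event_id, prev_event_id FROM current_state_delta_stream"
).fetchone()
# prev_event_id comes from the pre-update current_state_events row
print(row)  # ('$new', '$old')
```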
864 # Now we actually update the current_state_events table
865
761866 txn.executemany(
762 "DELETE FROM current_state_events WHERE event_id = ?",
763 [(ev_id,) for ev_id in itervalues(to_delete)],
867 "DELETE FROM current_state_events"
868 " WHERE room_id = ? AND type = ? AND state_key = ?",
869 (
870 (room_id, etype, state_key)
871 for etype, state_key in itertools.chain(to_delete, to_insert)
872 ),
764873 )
765874
766875 self._simple_insert_many_txn(
777886 ],
778887 )
779888
780 state_deltas = {key: None for key in to_delete}
781 state_deltas.update(to_insert)
782
783 self._simple_insert_many_txn(
784 txn,
785 table="current_state_delta_stream",
786 values=[
787 {
788 "stream_id": max_stream_order,
789 "room_id": room_id,
790 "type": key[0],
791 "state_key": key[1],
792 "event_id": ev_id,
793 "prev_event_id": to_delete.get(key, None),
794 }
795 for key, ev_id in iteritems(state_deltas)
796 ]
797 )
798
799889 txn.call_after(
800890 self._curr_state_delta_stream_cache.entity_has_changed,
801891 room_id, max_stream_order,
809899 # and which we have added, then we invalidate the caches for all
810900 # those users.
811901 members_changed = set(
812 state_key for ev_type, state_key in state_deltas
902 state_key
903 for ev_type, state_key in itertools.chain(to_delete, to_insert)
813904 if ev_type == EventTypes.Member
814905 )
815906
9821073
9831074 metadata_json = encode_json(
9841075 event.internal_metadata.get_dict()
985 ).decode("UTF-8")
1076 )
9861077
9871078 sql = (
9881079 "UPDATE event_json SET internal_metadata = ?"
10651156 ):
10661157 txn.executemany(
10671158 "DELETE FROM %s WHERE room_id = ? AND event_id = ?" % (table,),
1068 [(ev.event_id,) for ev, _ in events_and_contexts]
1159 [(ev.room_id, ev.event_id) for ev, _ in events_and_contexts]
10691160 )
10701161
10711162 def _store_event_txn(self, txn, events_and_contexts):
10961187 "room_id": event.room_id,
10971188 "internal_metadata": encode_json(
10981189 event.internal_metadata.get_dict()
1099 ).decode("UTF-8"),
1100 "json": encode_json(event_dict(event)).decode("UTF-8"),
1190 ),
1191 "json": encode_json(event_dict(event)),
11011192 }
11021193 for event, _ in events_and_contexts
11031194 ],
11161207 "type": event.type,
11171208 "processed": True,
11181209 "outlier": event.internal_metadata.is_outlier(),
1119 "content": encode_json(event.content).decode("UTF-8"),
11201210 "origin_server_ts": int(event.origin_server_ts),
11211211 "received_ts": self._clock.time_msec(),
11221212 "sender": event.sender,
2424 from synapse.events import FrozenEvent
2525 from synapse.events.snapshot import EventContext # noqa: F401
2626 from synapse.events.utils import prune_event
27 from synapse.metrics.background_process_metrics import run_as_background_process
2728 from synapse.util.logcontext import (
2829 LoggingContext,
2930 PreserveLoggingContext,
329330 should_start = False
330331
331332 if should_start:
332 with PreserveLoggingContext():
333 self.runWithConnection(
334 self._do_fetch
335 )
333 run_as_background_process(
334 "fetch_events",
335 self.runWithConnection,
336 self._do_fetch,
337 )
336338
337339 logger.debug("Loading %d events", len(events))
338340 with PreserveLoggingContext():
2020
2121 from twisted.internet import defer
2222
23 from synapse.api.constants import EventTypes
2423 from synapse.push.baserules import list_with_base_rules
2524 from synapse.storage.appservice import ApplicationServiceWorkerStore
2625 from synapse.storage.pusher import PusherWorkerStore
185184
186185 defer.returnValue(results)
187186
187 @defer.inlineCallbacks
188188 def bulk_get_push_rules_for_room(self, event, context):
189189 state_group = context.state_group
190190 if not state_group:
194194 # To do this we set the state_group to a new object as object() != object()
195195 state_group = object()
196196
197 return self._bulk_get_push_rules_for_room(
198 event.room_id, state_group, context.current_state_ids, event=event
199 )
197 current_state_ids = yield context.get_current_state_ids(self)
198 result = yield self._bulk_get_push_rules_for_room(
199 event.room_id, state_group, current_state_ids, event=event
200 )
201 defer.returnValue(result)
200202
201203 @cachedInlineCallbacks(num_args=2, cache_context=True)
202204 def _bulk_get_push_rules_for_room(self, room_id, state_group, current_state_ids,
245247 for uid in users_with_receipts:
246248 if uid in local_users_in_room:
247249 user_ids.add(uid)
248
249 forgotten = yield self.who_forgot_in_room(
250 event.room_id, on_invalidate=cache_context.invalidate,
251 )
252
253 for row in forgotten:
254 user_id = row["user_id"]
255 event_id = row["event_id"]
256
257 mem_id = current_state_ids.get((EventTypes.Member, user_id), None)
258 if event_id == mem_id:
259 user_ids.discard(user_id)
260250
261251 rules_by_user = yield self.bulk_get_push_rules(
262252 user_ids, on_invalidate=cache_context.invalidate,
232232 )
233233
234234 if newly_inserted:
235 self.runInteraction(
235 yield self.runInteraction(
236236 "add_pusher",
237237 self._invalidate_cache_and_stream,
238238 self.get_if_user_has_pusher, (user_id,)
2323 from twisted.internet import defer
2424
2525 from synapse.api.constants import EventTypes, Membership
26 from synapse.storage.events import EventsWorkerStore
26 from synapse.storage.events_worker import EventsWorkerStore
2727 from synapse.types import get_domain_from_id
2828 from synapse.util.async import Linearizer
2929 from synapse.util.caches import intern_string
231231
232232 defer.returnValue(user_who_share_room)
233233
234 @defer.inlineCallbacks
234235 def get_joined_users_from_context(self, event, context):
235236 state_group = context.state_group
236237 if not state_group:
240241 # To do this we set the state_group to a new object as object() != object()
241242 state_group = object()
242243
243 return self._get_joined_users_from_context(
244 event.room_id, state_group, context.current_state_ids,
244 current_state_ids = yield context.get_current_state_ids(self)
245 result = yield self._get_joined_users_from_context(
246 event.room_id, state_group, current_state_ids,
245247 event=event,
246248 context=context,
247249 )
250 defer.returnValue(result)
248251
249252 def get_joined_users_from_state(self, room_id, state_entry):
250253 state_group = state_entry.state_group
457460 def _get_joined_hosts_cache(self, room_id):
458461 return _JoinedHostsCache(self, room_id)
459462
460 @cached()
461 def who_forgot_in_room(self, room_id):
462 return self._simple_select_list(
463 table="room_memberships",
464 retcols=("user_id", "event_id"),
465 keyvalues={
466 "room_id": room_id,
467 "forgotten": 1,
468 },
469 desc="who_forgot"
470 )
463 @cachedInlineCallbacks(num_args=2)
464 def did_forget(self, user_id, room_id):
465 """Returns whether user_id has elected to discard history for room_id.
466
467 Returns False if they have since re-joined."""
468 def f(txn):
469 sql = (
470 "SELECT"
471 " COUNT(*)"
472 " FROM"
473 " room_memberships"
474 " WHERE"
475 " user_id = ?"
476 " AND"
477 " room_id = ?"
478 " AND"
479 " forgotten = 0"
480 )
481 txn.execute(sql, (user_id, room_id))
482 rows = txn.fetchall()
483 return rows[0][0]
484 count = yield self.runInteraction("did_forget_membership", f)
485 defer.returnValue(count == 0)
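The semantics of `did_forget` (a user has forgotten a room only while no membership row has `forgotten = 0`) can be illustrated with an in-memory SQLite database; the table is trimmed to the columns the query uses and the IDs are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE room_memberships (user_id TEXT, room_id TEXT, forgotten INTEGER)"
)
conn.execute("INSERT INTO room_memberships VALUES ('@u:hs', '!r', 1)")

def did_forget(conn, user_id, room_id):
    # count membership rows that have *not* been forgotten
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM room_memberships"
        " WHERE user_id = ? AND room_id = ? AND forgotten = 0",
        (user_id, room_id),
    ).fetchone()
    return count == 0

print(did_forget(conn, "@u:hs", "!r"))  # True: only forgotten rows exist
# a re-join adds a fresh row with forgotten = 0
conn.execute("INSERT INTO room_memberships VALUES ('@u:hs', '!r', 0)")
print(did_forget(conn, "@u:hs", "!r"))  # False: user has since re-joined
```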
471486
472487
473488 class RoomMemberStore(RoomMemberWorkerStore):
576591 )
577592 txn.execute(sql, (user_id, room_id))
578593
579 txn.call_after(self.did_forget.invalidate, (user_id, room_id))
580594 self._invalidate_cache_and_stream(
581 txn, self.who_forgot_in_room, (room_id,)
595 txn, self.did_forget, (user_id, room_id,),
582596 )
583597 return self.runInteraction("forget_membership", f)
584
585 @cachedInlineCallbacks(num_args=2)
586 def did_forget(self, user_id, room_id):
587 """Returns whether user_id has elected to discard history for room_id.
588
589 Returns False if they have since re-joined."""
590 def f(txn):
591 sql = (
592 "SELECT"
593 " COUNT(*)"
594 " FROM"
595 " room_memberships"
596 " WHERE"
597 " user_id = ?"
598 " AND"
599 " room_id = ?"
600 " AND"
601 " forgotten = 0"
602 )
603 txn.execute(sql, (user_id, room_id))
604 rows = txn.fetchall()
605 return rows[0][0]
606 count = yield self.runInteraction("did_forget_membership", f)
607 defer.returnValue(count == 0)
608598
609599 @defer.inlineCallbacks
610600 def _background_add_membership_profile(self, progress, batch_size):
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 We want to stop populating 'event.content', so we need to make it nullable.
17
18 If this has to be rolled back, then the following should populate the missing data:
19
20 Postgres:
21
22 UPDATE events SET content=(ej.json::json)->'content' FROM event_json ej
23 WHERE ej.event_id = events.event_id AND
24 stream_ordering < (
25 SELECT stream_ordering FROM events WHERE content IS NOT NULL
26 ORDER BY stream_ordering LIMIT 1
27 );
28
29 UPDATE events SET content=(ej.json::json)->'content' FROM event_json ej
30 WHERE ej.event_id = events.event_id AND
31 stream_ordering > (
32 SELECT stream_ordering FROM events WHERE content IS NOT NULL
33 ORDER BY stream_ordering DESC LIMIT 1
34 );
35
36 SQLite:
37
38 UPDATE events SET content=(
39 SELECT json_extract(json,'$.content') FROM event_json ej
40 WHERE ej.event_id = events.event_id
41 )
42 WHERE
43 stream_ordering < (
44 SELECT stream_ordering FROM events WHERE content IS NOT NULL
45 ORDER BY stream_ordering LIMIT 1
46 )
47 OR stream_ordering > (
48 SELECT stream_ordering FROM events WHERE content IS NOT NULL
49 ORDER BY stream_ordering DESC LIMIT 1
50 );
51
52 """
53
54 import logging
55
56 from synapse.storage.engines import PostgresEngine
57
58 logger = logging.getLogger(__name__)
59
60
61 def run_create(cur, database_engine, *args, **kwargs):
62 pass
63
64
65 def run_upgrade(cur, database_engine, *args, **kwargs):
66 if isinstance(database_engine, PostgresEngine):
67 cur.execute("""
68 ALTER TABLE events ALTER COLUMN content DROP NOT NULL;
69 """)
70 return
71
72 # sqlite is an arse about this. ref: https://www.sqlite.org/lang_altertable.html
73
74 cur.execute("SELECT sql FROM sqlite_master WHERE tbl_name='events' AND type='table'")
75 (oldsql,) = cur.fetchone()
76
77 sql = oldsql.replace("content TEXT NOT NULL", "content TEXT")
78 if sql == oldsql:
79 raise Exception("Couldn't find null constraint to drop in %s" % oldsql)
80
81 logger.info("Replacing definition of 'events' with: %s", sql)
82
83 cur.execute("PRAGMA schema_version")
84 (oldver,) = cur.fetchone()
85 cur.execute("PRAGMA writable_schema=ON")
86 cur.execute(
87 "UPDATE sqlite_master SET sql=? WHERE tbl_name='events' AND type='table'",
88 (sql, ),
89 )
90 cur.execute("PRAGMA schema_version=%i" % (oldver + 1,))
91 cur.execute("PRAGMA writable_schema=OFF")
3636 event_id TEXT NOT NULL,
3737 prev_event_id TEXT NOT NULL,
3838 room_id TEXT NOT NULL,
39 is_state BOOL NOT NULL,
39 is_state BOOL NOT NULL, -- true if this is a prev_state edge rather than a regular
40 -- event dag edge.
4041 UNIQUE (event_id, prev_event_id, room_id, is_state)
4142 );
4243
1818 event_id TEXT NOT NULL,
1919 type TEXT NOT NULL,
2020 room_id TEXT NOT NULL,
21 content TEXT NOT NULL,
21
22 -- 'content' used to be created NOT NULL, but as of delta 50 we drop that constraint.
23 -- the hack we use to drop the constraint doesn't work for an in-memory sqlite
24 -- database, which breaks the sytests. Hence, we create it as nullable from the start.
25 content TEXT,
26
2227 unrecognized_keys TEXT,
2328 processed BOOL NOT NULL,
2429 outlier BOOL NOT NULL,
7373 txn (cursor):
7474 event_id (str): Id for the Event.
7575 Returns:
76 A dict of algorithm -> hash.
76 A dict[unicode, bytes] of algorithm -> hash.
7777 """
7878 query = (
7979 "SELECT algorithm, hash"
185185
186186 @defer.inlineCallbacks
187187 def _get_state_groups_from_groups(self, groups, types):
188 """Returns dictionary state_group -> (dict of (type, state_key) -> event id)
188 """Returns the state groups for a given set of groups, filtering on
189 types of state events.
190
191 Args:
192 groups(list[int]): list of state group IDs to query
193 types (Iterable[str, str|None]|None): list of 2-tuples of the form
194 (`type`, `state_key`), where a `state_key` of `None` matches all
195 state_keys for the `type`. If None, all types are returned.
196
197 Returns:
198 dictionary state_group -> (dict of (type, state_key) -> event id)
189199 """
190200 results = {}
191201
199209
200210 defer.returnValue(results)
201211
202 def _get_state_groups_from_groups_txn(self, txn, groups, types=None):
212 def _get_state_groups_from_groups_txn(
213 self, txn, groups, types=None,
214 ):
203215 results = {group: {} for group in groups}
216
204217 if types is not None:
205218 types = list(set(types)) # deduplicate types list
206219
238251 # Turns out that postgres doesn't like doing a list of OR's and
239252 # is about 1000x slower, so we just issue a query for each specific
240253 # type separately.
241 if types:
254 if types is not None:
242255 clause_to_args = [
243256 (
244257 "AND type = ? AND state_key = ?",
277290 else:
278291 where_clauses.append("(type = ? AND state_key = ?)")
279292 where_args.extend([typ[0], typ[1]])
293
280294 where_clause = "AND (%s)" % (" OR ".join(where_clauses))
281295 else:
282296 where_clause = ""
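The clause-building loop above (the non-Postgres path) can be sketched standalone; a `(type, None)` entry is a wildcard over `state_key`, so it pins only the `type` column:

```python
def build_types_clause(types):
    # returns (where_clause, where_args) in the same shape as above
    if types is None:
        return "", []
    where_clauses, where_args = [], []
    for typ, state_key in types:
        if state_key is None:
            where_clauses.append("(type = ?)")
            where_args.append(typ)
        else:
            where_clauses.append("(type = ? AND state_key = ?)")
            where_args.extend([typ, state_key])
    return "AND (%s)" % (" OR ".join(where_clauses)), where_args

clause, args = build_types_clause([("m.room.member", "@a:hs"),
                                   ("m.room.name", None)])
print(clause)  # AND ((type = ? AND state_key = ?) OR (type = ?))
print(args)    # ['m.room.member', '@a:hs', 'm.room.name']
```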
331345 return results
332346
333347 @defer.inlineCallbacks
334 def get_state_for_events(self, event_ids, types):
348 def get_state_for_events(self, event_ids, types, filtered_types=None):
335349 """Given a list of event_ids and type tuples, return a list of state
336350 dicts for each event. The state dicts will only have the type/state_keys
337351 that are in the `types` list.
338352
339353 Args:
340 event_ids (list)
341 types (list): List of (type, state_key) tuples which are used to
342 filter the state fetched. `state_key` may be None, which matches
343 any `state_key`
354 event_ids (list[string])
355 types (list[(str, str|None)]|None): List of (type, state_key) tuples
356 which are used to filter the state fetched. If `state_key` is None,
357 all events of the given type are returned.
358 May be None, which matches any key.
359 filtered_types(list[str]|None): Only apply filtering via `types` to this
360 list of event types. Other types of events are returned unfiltered.
361 If None, `types` filtering is applied to all events.
344362
345363 Returns:
346364 deferred: A dict from event_id to its state dict, for each of the event_ids given.
351369 )
352370
353371 groups = set(itervalues(event_to_groups))
354 group_to_state = yield self._get_state_for_groups(groups, types)
372 group_to_state = yield self._get_state_for_groups(groups, types, filtered_types)
355373
356374 state_event_map = yield self.get_events(
357375 [ev_id for sd in itervalues(group_to_state) for ev_id in itervalues(sd)],
370388 defer.returnValue({event: event_to_state[event] for event in event_ids})
371389
372390 @defer.inlineCallbacks
373 def get_state_ids_for_events(self, event_ids, types=None):
391 def get_state_ids_for_events(self, event_ids, types=None, filtered_types=None):
374392 """
375393 Get the state dicts corresponding to a list of events
376394
377395 Args:
378396 event_ids(list(str)): events whose state should be returned
379 types(list[(str, str)]|None): List of (type, state_key) tuples
380 which are used to filter the state fetched. May be None, which
381 matches any key
397 types(list[(str, str|None)]|None): List of (type, state_key) tuples
398 which are used to filter the state fetched. If `state_key` is None,
399 all events of the given type are returned.
400 May be None, which matches any key.
401 filtered_types(list[str]|None): Only apply filtering via `types` to this
402 list of event types. Other types of events are returned unfiltered.
403 If None, `types` filtering is applied to all events.
382404
383405 Returns:
384406 A deferred dict from event_id -> (type, state_key) -> event_id
388410 )
389411
390412 groups = set(itervalues(event_to_groups))
391 group_to_state = yield self._get_state_for_groups(groups, types)
413 group_to_state = yield self._get_state_for_groups(groups, types, filtered_types)
392414
393415 event_to_state = {
394416 event_id: group_to_state[group]
398420 defer.returnValue({event: event_to_state[event] for event in event_ids})
399421
400422 @defer.inlineCallbacks
401 def get_state_for_event(self, event_id, types=None):
423 def get_state_for_event(self, event_id, types=None, filtered_types=None):
402424 """
403425 Get the state dict corresponding to a particular event
404426
405427 Args:
406428 event_id(str): event whose state should be returned
407 types(list[(str, str)]|None): List of (type, state_key) tuples
408 which are used to filter the state fetched. May be None, which
409 matches any key
429 types(list[(str, str|None)]|None): List of (type, state_key) tuples
430 which are used to filter the state fetched. If `state_key` is None,
431 all events of the given type are returned.
432 May be None, which matches any key.
433 filtered_types(list[str]|None): Only apply filtering via `types` to this
434 list of event types. Other types of events are returned unfiltered.
435 If None, `types` filtering is applied to all events.
410436
411437 Returns:
412438 A deferred dict from (type, state_key) -> state_event
413439 """
414 state_map = yield self.get_state_for_events([event_id], types)
440 state_map = yield self.get_state_for_events([event_id], types, filtered_types)
415441 defer.returnValue(state_map[event_id])
416442
417443 @defer.inlineCallbacks
418 def get_state_ids_for_event(self, event_id, types=None):
444 def get_state_ids_for_event(self, event_id, types=None, filtered_types=None):
419445 """
420446 Get the state dict corresponding to a particular event
421447
422448 Args:
423449 event_id(str): event whose state should be returned
424 types(list[(str, str)]|None): List of (type, state_key) tuples
425 which are used to filter the state fetched. May be None, which
426 matches any key
450 types(list[(str, str|None)]|None): List of (type, state_key) tuples
451 which are used to filter the state fetched. If `state_key` is None,
452 all events of the given type are returned.
453 May be None, which matches any key.
454 filtered_types(list[str]|None): Only apply filtering via `types` to this
455 list of event types. Other types of events are returned unfiltered.
456 If None, `types` filtering is applied to all events.
427457
428458 Returns:
429459 A deferred dict from (type, state_key) -> event_id
430460 """
431 state_map = yield self.get_state_ids_for_events([event_id], types)
461 state_map = yield self.get_state_ids_for_events([event_id], types, filtered_types)
432462 defer.returnValue(state_map[event_id])
433463
434464 @cached(max_entries=50000)
459489
460490 defer.returnValue({row["event_id"]: row["state_group"] for row in rows})
461491
462 def _get_some_state_from_cache(self, group, types):
492 def _get_some_state_from_cache(self, group, types, filtered_types=None):
463493 """Checks if group is in cache. See `_get_state_for_groups`
464494
465 Returns 3-tuple (`state_dict`, `missing_types`, `got_all`).
466 `missing_types` is the list of types that aren't in the cache for that
467 group. `got_all` is a bool indicating if we successfully retrieved all
495 Args:
496 group(int): The state group to lookup
497 types(list[str, str|None]): List of 2-tuples of the form
498 (`type`, `state_key`), where a `state_key` of `None` matches all
499 state_keys for the `type`.
500 filtered_types(list[str]|None): Only apply filtering via `types` to this
501 list of event types. Other types of events are returned unfiltered.
502 If None, `types` filtering is applied to all events.
503
504 Returns 2-tuple (`state_dict`, `got_all`).
505 `got_all` is a bool indicating if we successfully retrieved all
468506 requests state from the cache, if False we need to query the DB for the
469507 missing state.
470
471 Args:
472 group: The state group to lookup
473 types (list): List of 2-tuples of the form (`type`, `state_key`),
474 where a `state_key` of `None` matches all state_keys for the
475 `type`.
476508 """
477509 is_all, known_absent, state_dict_ids = self._state_group_cache.get(group)
478510
479511 type_to_key = {}
480 missing_types = set()
512
513 # tracks whether any of our requested types are missing from the cache
514 missing_types = False
481515
482516 for typ, state_key in types:
483517 key = (typ, state_key)
484 if state_key is None:
518
519 if (
520 state_key is None or
521 (filtered_types is not None and typ not in filtered_types)
522 ):
485523 type_to_key[typ] = None
486 missing_types.add(key)
524 # we mark the type as missing from the cache because
525 # when the cache was populated it might have been done with a
526 # restricted set of state_keys, so the wildcard will not work
527 # and the cache may be incomplete.
528 missing_types = True
487529 else:
488530 if type_to_key.get(typ, object()) is not None:
489531 type_to_key.setdefault(typ, set()).add(state_key)
490532
491533 if key not in state_dict_ids and key not in known_absent:
492 missing_types.add(key)
534 missing_types = True
493535
494536 sentinel = object()
495537
496538 def include(typ, state_key):
497539 valid_state_keys = type_to_key.get(typ, sentinel)
498540 if valid_state_keys is sentinel:
499 return False
541 return filtered_types is not None and typ not in filtered_types
500542 if valid_state_keys is None:
501543 return True
502544 if state_key in valid_state_keys:
503545 return True
504546 return False
505547
506 got_all = is_all or not missing_types
548 got_all = is_all
549 if not got_all:
550 # the cache is incomplete. We may still have got all the results we need, if
551 # we don't have any wildcards in the match list.
552 if not missing_types and filtered_types is None:
553 got_all = True
507554
508555 return {
509556 k: v for k, v in iteritems(state_dict_ids)
510557 if include(k[0], k[1])
511 }, missing_types, got_all
558 }, got_all
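The `type_to_key` construction and `include` predicate above can be exercised as a plain-Python sketch (mirroring the logic, minus the cache machinery):

```python
def make_include(types, filtered_types=None):
    # types: list of (type, state_key) pairs; state_key None is a wildcard.
    # Events whose type is outside filtered_types bypass the filter entirely.
    sentinel = object()
    type_to_key = {}
    for typ, state_key in types:
        if state_key is None or (
            filtered_types is not None and typ not in filtered_types
        ):
            type_to_key[typ] = None
        else:
            if type_to_key.get(typ, object()) is not None:
                type_to_key.setdefault(typ, set()).add(state_key)

    def include(typ, state_key):
        valid_state_keys = type_to_key.get(typ, sentinel)
        if valid_state_keys is sentinel:
            # unrequested type: kept only if it is exempt from filtering
            return filtered_types is not None and typ not in filtered_types
        if valid_state_keys is None:
            return True
        return state_key in valid_state_keys

    return include

inc = make_include([("m.room.member", "@a:hs")],
                   filtered_types=["m.room.member"])
# member events are filtered to the requested state_key;
# any other type passes through unfiltered
```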
512559
513560 def _get_all_state_from_cache(self, group):
514561 """Checks if group is in cache. See `_get_state_for_groups`
525572 return state_dict_ids, is_all
526573
527574 @defer.inlineCallbacks
528 def _get_state_for_groups(self, groups, types=None):
575 def _get_state_for_groups(self, groups, types=None, filtered_types=None):
529576 """Gets the state at each of a list of state groups, optionally
530577 filtering by type/state_key
531578
539586 Otherwise, each entry should be a `(type, state_key)` tuple to
540587 include in the response. A `state_key` of None is a wildcard
541588 meaning that we require all state with that type.
589 filtered_types(list[str]|None): Only apply filtering via `types` to this
590 list of event types. Other types of events are returned unfiltered.
591 If None, `types` filtering is applied to all events.
542592
543593 Returns:
544594 Deferred[dict[int, dict[(type, state_key), EventBase]]]
550600 missing_groups = []
551601 if types is not None:
552602 for group in set(groups):
553 state_dict_ids, _, got_all = self._get_some_state_from_cache(
554 group, types,
603 state_dict_ids, got_all = self._get_some_state_from_cache(
604 group, types, filtered_types
555605 )
556606 results[group] = state_dict_ids
557607
578628 # cache. Hence, if we are doing a wildcard lookup, populate the
579629 # cache fully so that we can do an efficient lookup next time.
580630
581 if types and any(k is None for (t, k) in types):
631 if filtered_types or (types and any(k is None for (t, k) in types)):
582632 types_to_fetch = None
583633 else:
584634 types_to_fetch = types
585635
586636 group_to_state_dict = yield self._get_state_groups_from_groups(
587 missing_groups, types_to_fetch,
637 missing_groups, types_to_fetch
588638 )
589639
590640 for group, group_state_dict in iteritems(group_to_state_dict):
594644 if types:
595645 for k, v in iteritems(group_state_dict):
596646 (typ, _) = k
597 if k in types or (typ, None) in types:
647 if (
648 (k in types or (typ, None) in types) or
649 (filtered_types and typ not in filtered_types)
650 ):
598651 state_dict[k] = v
599652 else:
600653 state_dict.update(group_state_dict)
4242
4343 from synapse.storage._base import SQLBaseStore
4444 from synapse.storage.engines import PostgresEngine
45 from synapse.storage.events import EventsWorkerStore
45 from synapse.storage.events_worker import EventsWorkerStore
4646 from synapse.types import RoomStreamToken
4747 from synapse.util.caches.stream_change_cache import StreamChangeCache
4848 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
526526 )
527527
528528 @defer.inlineCallbacks
529 def get_events_around(self, room_id, event_id, before_limit, after_limit):
529 def get_events_around(
530 self, room_id, event_id, before_limit, after_limit, event_filter=None,
531 ):
530532 """Retrieve events and pagination tokens around a given event in a
531533 room.
532534
535537 event_id (str)
536538 before_limit (int)
537539 after_limit (int)
540 event_filter (Filter|None)
538541
539542 Returns:
540543 dict
542545
543546 results = yield self.runInteraction(
544547 "get_events_around", self._get_events_around_txn,
545 room_id, event_id, before_limit, after_limit
548 room_id, event_id, before_limit, after_limit, event_filter,
546549 )
547550
548551 events_before = yield self._get_events(
562565 "end": results["after"]["token"],
563566 })
564567
565 def _get_events_around_txn(self, txn, room_id, event_id, before_limit, after_limit):
568 def _get_events_around_txn(
569 self, txn, room_id, event_id, before_limit, after_limit, event_filter,
570 ):
566571 """Retrieves event_ids and pagination tokens around a given event in a
567572 room.
568573
571576 event_id (str)
572577 before_limit (int)
573578 after_limit (int)
579 event_filter (Filter|None)
574580
575581 Returns:
576582 dict
600606
601607 rows, start_token = self._paginate_room_events_txn(
602608 txn, room_id, before_token, direction='b', limit=before_limit,
609 event_filter=event_filter,
603610 )
604611 events_before = [r.event_id for r in rows]
605612
606613 rows, end_token = self._paginate_room_events_txn(
607614 txn, room_id, after_token, direction='f', limit=after_limit,
615 event_filter=event_filter,
608616 )
609617 events_after = [r.event_id for r in rows]
610618
2121
2222 from twisted.internet import defer
2323
24 from synapse.metrics.background_process_metrics import run_as_background_process
2425 from synapse.util.caches.descriptors import cached
2526
2627 from ._base import SQLBaseStore
5657 def __init__(self, db_conn, hs):
5758 super(TransactionStore, self).__init__(db_conn, hs)
5859
59 self._clock.looping_call(self._cleanup_transactions, 30 * 60 * 1000)
60 self._clock.looping_call(self._start_cleanup_transactions, 30 * 60 * 1000)
6061
6162 def get_received_txn_response(self, transaction_id, origin):
6263 """For an incoming transaction from a given origin, check if we have
270271 txn.execute(query, (self._clock.time_msec(),))
271272 return self.cursor_to_dict(txn)
272273
274 def _start_cleanup_transactions(self):
275 return run_as_background_process(
276 "cleanup_transactions", self._cleanup_transactions,
277 )
278
273279 def _cleanup_transactions(self):
274280 now = self._clock.time_msec()
275281 month_ago = now - 30 * 24 * 60 * 60 * 1000
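The hunk above interposes a small `_start_cleanup_transactions` shim so the looping call runs under `run_as_background_process`, which gives the periodic job its own metrics and log context. A generic, non-Twisted sketch of that wrapping pattern (the decorator and registry here are illustrative, not Synapse's implementation):

```python
import functools

def as_background_process(desc, registry):
    """Illustrative decorator: counts each run of a periodic task
    under `desc` before delegating to the wrapped function, standing
    in for the metrics bookkeeping run_as_background_process does."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            registry[desc] = registry.get(desc, 0) + 1  # record the run
            return f(*args, **kwargs)
        return wrapper
    return decorator

runs = {}

@as_background_process("cleanup_transactions", runs)
def cleanup_transactions():
    # stand-in for the real cleanup work
    return "cleaned"

assert cleanup_transactions() == "cleaned"
assert cleanup_transactions() == "cleaned"
assert runs["cleanup_transactions"] == 2
```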
136136 @classmethod
137137 def from_string(cls, s):
138138 """Parse the string given by 's' into a structure object."""
139 if len(s) < 1 or s[0] != cls.SIGIL:
139 if len(s) < 1 or s[0:1] != cls.SIGIL:
140140 raise SynapseError(400, "Expected %s string to start with '%s'" % (
141141 cls.__name__, cls.SIGIL,
142142 ))
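The switch from `s[0]` to `s[0:1]` above matters on Python 3: indexing a `bytes` object yields an `int`, while slicing yields `bytes`, so slicing keeps the sigil comparison type-stable for both text and byte strings. A minimal illustration (the helper name is made up):

```python
def starts_with_sigil(s, sigil):
    # Slicing (s[0:1]) returns the same type as s for both str and bytes,
    # whereas s[0] on bytes returns an int on Python 3, so an equality
    # check against a one-character sigil would always fail.
    return len(s) >= 1 and s[0:1] == sigil

assert b"@user"[0] == 64          # indexing bytes gives an int code point
assert b"@user"[0:1] == b"@"      # slicing gives bytes back
assert starts_with_sigil(b"@user", b"@")
assert starts_with_sigil("@user", "@")
assert not starts_with_sigil("", "@")  # empty input is handled too
```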
00 # -*- coding: utf-8 -*-
11 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2018 New Vector Ltd.
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
1112 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1213 # See the License for the specific language governing permissions and
1314 # limitations under the License.
14
15 import collections
1516 import logging
1617 from contextlib import contextmanager
1718
155156
156157
157158 class Linearizer(object):
158 """Linearizes access to resources based on a key. Useful to ensure only one
159 thing is happening at a time on a given resource.
159 """Limits concurrent access to resources based on a key. Useful to ensure
160 only a few things happen at a time on a given resource.
160161
161162 Example:
162163
163 with (yield linearizer.queue("test_key")):
164 with (yield linearizer.queue("test_key")):
164165 # do some work.
165166
166167 """
167 def __init__(self, name=None, clock=None):
168 def __init__(self, name=None, max_count=1, clock=None):
169 """
170 Args:
171 max_count(int): The maximum number of concurrent accesses
172 """
168173 if name is None:
169174 self.name = id(self)
170175 else:
171176 self.name = name
172 self.key_to_defer = {}
173177
174178 if not clock:
175179 from twisted.internet import reactor
176180 clock = Clock(reactor)
177181 self._clock = clock
182 self.max_count = max_count
183
184 # key_to_defer is a map from the key to a 2 element list where
185 # the first element is the number of things executing, and
186 # the second element is an OrderedDict, where the keys are deferreds for the
187 # things blocked from executing.
188 self.key_to_defer = {}
178189
179190 @defer.inlineCallbacks
180191 def queue(self, key):
181 # If there is already a deferred in the queue, we pull it out so that
182 # we can wait on it later.
183 # Then we replace it with a deferred that we resolve *after* the
184 # context manager has exited.
185 # We only return the context manager after the previous deferred has
186 # resolved.
187 # This all has the net effect of creating a chain of deferreds that
188 # wait for the previous deferred before starting their work.
189 current_defer = self.key_to_defer.get(key)
190
191 new_defer = defer.Deferred()
192 self.key_to_defer[key] = new_defer
193
194 if current_defer:
192 entry = self.key_to_defer.setdefault(key, [0, collections.OrderedDict()])
193
194 # If the number of things executing has reached the maximum,
195 # then add a deferred to the list of blocked items.
196 # When one of the things currently executing finishes, it will call back
197 # this item so that it can continue executing.
198 if entry[0] >= self.max_count:
199 new_defer = defer.Deferred()
200 entry[1][new_defer] = 1
201
195202 logger.info(
196 "Waiting to acquire linearizer lock %r for key %r", self.name, key
203 "Waiting to acquire linearizer lock %r for key %r", self.name, key,
197204 )
198205 try:
199 with PreserveLoggingContext():
200 yield current_defer
201 except Exception:
202 logger.exception("Unexpected exception in Linearizer")
203
204 logger.info("Acquired linearizer lock %r for key %r", self.name,
205 key)
206 yield make_deferred_yieldable(new_defer)
207 except Exception as e:
208 if isinstance(e, CancelledError):
209 logger.info(
210 "Cancelling wait for linearizer lock %r for key %r",
211 self.name, key,
212 )
213 else:
214 logger.warn(
215 "Unexpected exception waiting for linearizer lock %r for key %r",
216 self.name, key,
217 )
218
219 # we just have to take ourselves back out of the queue.
220 del entry[1][new_defer]
221 raise
222
223 logger.info("Acquired linearizer lock %r for key %r", self.name, key)
224 entry[0] += 1
206225
207226 # if the code holding the lock completes synchronously, then it
208227 # will recursively run the next claimant on the list. That can
212231 # In order to break the cycle, we add a cheeky sleep(0) here to
213232 # ensure that we fall back to the reactor between each iteration.
214233 #
215 # (There's no particular need for it to happen before we return
216 # the context manager, but it needs to happen while we hold the
217 # lock, and the context manager's exit code must be synchronous,
218 # so actually this is the only sensible place.
234 # (This needs to happen while we hold the lock, and the context manager's exit
235 # code must be synchronous, so this is the only sensible place.)
219236 yield self._clock.sleep(0)
220237
221238 else:
222 logger.info("Acquired uncontended linearizer lock %r for key %r",
223 self.name, key)
239 logger.info(
240 "Acquired uncontended linearizer lock %r for key %r", self.name, key,
241 )
242 entry[0] += 1
224243
225244 @contextmanager
226245 def _ctx_manager():
228247 yield
229248 finally:
230249 logger.info("Releasing linearizer lock %r for key %r", self.name, key)
231 with PreserveLoggingContext():
232 new_defer.callback(None)
233 current_d = self.key_to_defer.get(key)
234 if current_d is new_defer:
235 self.key_to_defer.pop(key, None)
236
237 defer.returnValue(_ctx_manager())
238
239
240 class Limiter(object):
241 """Limits concurrent access to resources based on a key. Useful to ensure
242 only a few thing happen at a time on a given resource.
243
244 Example:
245
246 with (yield limiter.queue("test_key")):
247 # do some work.
248
249 """
250 def __init__(self, max_count):
251 """
252 Args:
253 max_count(int): The maximum number of concurrent access
254 """
255 self.max_count = max_count
256
257 # key_to_defer is a map from the key to a 2 element list where
258 # the first element is the number of things executing
259 # the second element is a list of deferreds for the things blocked from
260 # executing.
261 self.key_to_defer = {}
262
263 @defer.inlineCallbacks
264 def queue(self, key):
265 entry = self.key_to_defer.setdefault(key, [0, []])
266
267 # If the number of things executing is greater than the maximum
268 # then add a deferred to the list of blocked items
269 # When on of the things currently executing finishes it will callback
270 # this item so that it can continue executing.
271 if entry[0] >= self.max_count:
272 new_defer = defer.Deferred()
273 entry[1].append(new_defer)
274
275 logger.info("Waiting to acquire limiter lock for key %r", key)
276 with PreserveLoggingContext():
277 yield new_defer
278 logger.info("Acquired limiter lock for key %r", key)
279 else:
280 logger.info("Acquired uncontended limiter lock for key %r", key)
281
282 entry[0] += 1
283
284 @contextmanager
285 def _ctx_manager():
286 try:
287 yield
288 finally:
289 logger.info("Releasing limiter lock for key %r", key)
290250
291251 # We've finished executing so check if there are any things
292252 # blocked waiting to execute and start one of them
293253 entry[0] -= 1
294254
295255 if entry[1]:
296 next_def = entry[1].pop(0)
297
256 (next_def, _) = entry[1].popitem(last=False)
257
258 # we need to run the next thing in the sentinel context.
298259 with PreserveLoggingContext():
299260 next_def.callback(None)
300261 elif entry[0] == 0:
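The merged Linearizer above is, in effect, a keyed counting semaphore: each key maps to `[count, OrderedDict-of-waiters]`, and `popitem(last=False)` wakes waiters in FIFO order. A simplified, synchronous sketch of the same bookkeeping, without Twisted (names are illustrative, not Synapse's API):

```python
import collections

class KeyedCounter:
    """Tracks concurrent entries per key, queueing the overflow FIFO."""

    def __init__(self, max_count=1):
        self.max_count = max_count
        # key -> [number executing, OrderedDict of queued waiters]
        self.key_to_entry = {}

    def acquire(self, key, waiter):
        entry = self.key_to_entry.setdefault(
            key, [0, collections.OrderedDict()]
        )
        if entry[0] >= self.max_count:
            entry[1][waiter] = 1  # at capacity: queue the waiter
            return False
        entry[0] += 1  # below capacity: run immediately
        return True

    def release(self, key):
        entry = self.key_to_entry[key]
        entry[0] -= 1
        if entry[1]:
            # wake the oldest waiter, FIFO, as popitem(last=False) does
            waiter, _ = entry[1].popitem(last=False)
            entry[0] += 1
            return waiter
        if entry[0] == 0:
            del self.key_to_entry[key]  # drop empty entries, as above
        return None

lim = KeyedCounter(max_count=1)
assert lim.acquire("k", "a") is True
assert lim.acquire("k", "b") is False   # queued behind "a"
assert lim.release("k") == "b"          # "b" is woken FIFO
assert lim.release("k") is None         # nothing left; key cleaned up
assert "k" not in lim.key_to_entry
```

The real implementation additionally has to remove a cancelled waiter from the OrderedDict and bounce through `sleep(0)` to avoid recursing through the reactor, which is what the surrounding hunks handle.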
472472
473473 @functools.wraps(self.orig)
474474 def wrapped(*args, **kwargs):
475 # If we're passed a cache_context then we'll want to call its invalidate()
476 # whenever we are invalidated
475 # If we're passed a cache_context then we'll want to call its
476 # invalidate() whenever we are invalidated
477477 invalidate_callback = kwargs.pop("on_invalidate", None)
478478
479479 arg_dict = inspect.getcallargs(self.orig, obj, *args, **kwargs)
480480 keyargs = [arg_dict[arg_nm] for arg_nm in self.arg_names]
481481 list_args = arg_dict[self.list_name]
482482
483 # cached is a dict arg -> deferred, where deferred results in a
484 # 2-tuple (`arg`, `result`)
485483 results = {}
486 cached_defers = {}
487 missing = []
484
485 def update_results_dict(res, arg):
486 results[arg] = res
487
488 # list of deferreds to wait for
489 cached_defers = []
490
491 missing = set()
488492
489493 # If the cache takes a single arg then that is used as the key,
490494 # otherwise a tuple is used.
491495 if num_args == 1:
492 def cache_get(arg):
493 return cache.get(arg, callback=invalidate_callback)
496 def arg_to_cache_key(arg):
497 return arg
494498 else:
495 key = list(keyargs)
496
497 def cache_get(arg):
498 key[self.list_pos] = arg
499 return cache.get(tuple(key), callback=invalidate_callback)
499 keylist = list(keyargs)
500
501 def arg_to_cache_key(arg):
502 keylist[self.list_pos] = arg
503 return tuple(keylist)
500504
501505 for arg in list_args:
502506 try:
503 res = cache_get(arg)
504
507 res = cache.get(arg_to_cache_key(arg),
508 callback=invalidate_callback)
505509 if not isinstance(res, ObservableDeferred):
506510 results[arg] = res
507511 elif not res.has_succeeded():
508512 res = res.observe()
509 res.addCallback(lambda r, arg: (arg, r), arg)
510 cached_defers[arg] = res
513 res.addCallback(update_results_dict, arg)
514 cached_defers.append(res)
511515 else:
512516 results[arg] = res.get_result()
513517 except KeyError:
514 missing.append(arg)
518 missing.add(arg)
515519
516520 if missing:
521 # we need an observable deferred for each entry in the list,
522 # which we put in the cache. Each deferred resolves with the
523 # relevant result for that key.
524 deferreds_map = {}
525 for arg in missing:
526 deferred = defer.Deferred()
527 deferreds_map[arg] = deferred
528 key = arg_to_cache_key(arg)
529 observable = ObservableDeferred(deferred)
530 cache.set(key, observable, callback=invalidate_callback)
531
532 def complete_all(res):
533 # the wrapped function has completed. It returns a
534 # dict. We can now resolve the observable deferreds in
535 # the cache and update our own result map.
536 for e in missing:
537 val = res.get(e, None)
538 deferreds_map[e].callback(val)
539 results[e] = val
540
541 def errback(f):
542 # the wrapped function has failed. Invalidate any cache
543 # entries we're supposed to be populating, and fail
544 # their deferreds.
545 for e in missing:
546 key = arg_to_cache_key(e)
547 cache.invalidate(key)
548 deferreds_map[e].errback(f)
549
550 # return the failure, to propagate to our caller.
551 return f
552
517553 args_to_call = dict(arg_dict)
518 args_to_call[self.list_name] = missing
519
520 ret_d = defer.maybeDeferred(
554 args_to_call[self.list_name] = list(missing)
555
556 cached_defers.append(defer.maybeDeferred(
521557 logcontext.preserve_fn(self.function_to_call),
522558 **args_to_call
559 ).addCallbacks(complete_all, errback))
560
561 if cached_defers:
562 d = defer.gatherResults(
563 cached_defers,
564 consumeErrors=True,
565 ).addCallbacks(
566 lambda _: results,
567 unwrapFirstError
523568 )
524
525 ret_d = ObservableDeferred(ret_d)
526
527 # We need to create deferreds for each arg in the list so that
528 # we can insert the new deferred into the cache.
529 for arg in missing:
530 observer = ret_d.observe()
531 observer.addCallback(lambda r, arg: r.get(arg, None), arg)
532
533 observer = ObservableDeferred(observer)
534
535 if num_args == 1:
536 cache.set(
537 arg, observer,
538 callback=invalidate_callback
539 )
540
541 def invalidate(f, key):
542 cache.invalidate(key)
543 return f
544 observer.addErrback(invalidate, arg)
545 else:
546 key = list(keyargs)
547 key[self.list_pos] = arg
548 cache.set(
549 tuple(key), observer,
550 callback=invalidate_callback
551 )
552
553 def invalidate(f, key):
554 cache.invalidate(key)
555 return f
556 observer.addErrback(invalidate, tuple(key))
557
558 res = observer.observe()
559 res.addCallback(lambda r, arg: (arg, r), arg)
560
561 cached_defers[arg] = res
562
563 if cached_defers:
564 def update_results_dict(res):
565 results.update(res)
566 return results
567
568 return logcontext.make_deferred_yieldable(defer.gatherResults(
569 list(cached_defers.values()),
570 consumeErrors=True,
571 ).addCallback(update_results_dict).addErrback(
572 unwrapFirstError
573 ))
569 return logcontext.make_deferred_yieldable(d)
574570 else:
575571 return results
576572
624620 cache.
625621
626622 Args:
627 cache (Cache): The underlying cache to use.
623 cached_method_name (str): The name of the single-item lookup method.
624 This is only used to find the cache to use.
628625 list_name (str): The name of the argument that is the list to use to
629626 do batch lookups in the cache.
630627 num_args (int): Number of arguments to use as the key in the cache
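The reworked `cachedList` descriptor above pre-populates the cache with one pending deferred per missing key, then resolves them all when the single batch lookup returns (and invalidates them on failure). A synchronous sketch of that fill-per-key idea (illustrative helper, no Twisted):

```python
def batch_lookup(cache, keys, fetch_many):
    """Return {key: value}, consulting `cache` first and fetching all
    the misses in one call to `fetch_many(missing_keys)` -> dict."""
    results = {}
    missing = set()
    for key in keys:
        if key in cache:
            results[key] = cache[key]
        else:
            missing.add(key)
    if missing:
        # A failure here leaves the cache untouched, mirroring the
        # errback path above, which invalidates the pending entries.
        fetched = fetch_many(list(missing))
        for key in missing:
            val = fetched.get(key)  # keys absent from the result map to None
            cache[key] = val
            results[key] = val
    return results

cache = {"a": 1}
calls = []

def fetch_many(keys):
    calls.append(sorted(keys))
    return {"b": 2}  # note: "c" is absent from the result

out = batch_lookup(cache, ["a", "b", "c"], fetch_many)
assert out == {"a": 1, "b": 2, "c": None}
assert cache == {"a": 1, "b": 2, "c": None}
assert calls == [["b", "c"]]  # one batched call for both misses
```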
1515 import logging
1616 from collections import OrderedDict
1717
18 from synapse.metrics.background_process_metrics import run_as_background_process
1819 from synapse.util.caches import register_cache
1920
2021 logger = logging.getLogger(__name__)
6263 return
6364
6465 def f():
65 self._prune_cache()
66 return run_as_background_process(
67 "prune_cache_%s" % self._cache_name,
68 self._prune_cache,
69 )
6670
6771 self._clock.looping_call(f, self._expiry_ms / 2)
6872
1616
1717 from twisted.internet import defer
1818
19 from synapse.util import unwrapFirstError
20 from synapse.util.logcontext import PreserveLoggingContext
19 from synapse.metrics.background_process_metrics import run_as_background_process
20 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
2121
2222 logger = logging.getLogger(__name__)
2323
2424
2525 def user_left_room(distributor, user, room_id):
26 with PreserveLoggingContext():
27 distributor.fire("user_left_room", user=user, room_id=room_id)
26 distributor.fire("user_left_room", user=user, room_id=room_id)
2827
2928
3029 def user_joined_room(distributor, user, room_id):
31 with PreserveLoggingContext():
32 distributor.fire("user_joined_room", user=user, room_id=room_id)
30 distributor.fire("user_joined_room", user=user, room_id=room_id)
3331
3432
3533 class Distributor(object):
4341 model will do for today.
4442 """
4543
46 def __init__(self, suppress_failures=True):
47 self.suppress_failures = suppress_failures
48
44 def __init__(self):
4945 self.signals = {}
5046 self.pre_registration = {}
5147
5551
5652 self.signals[name] = Signal(
5753 name,
58 suppress_failures=self.suppress_failures,
5954 )
6055
6156 if name in self.pre_registration:
7469 self.pre_registration[name].append(observer)
7570
7671 def fire(self, name, *args, **kwargs):
72 """Dispatches the given signal to the registered observers.
73
74 Runs the observers as a background process. Does not return a deferred.
75 """
7776 if name not in self.signals:
7877 raise KeyError("%r does not have a signal named %s" % (self, name))
7978
80 return self.signals[name].fire(*args, **kwargs)
79 run_as_background_process(
80 name,
81 self.signals[name].fire,
82 *args, **kwargs
83 )
8184
8285
8386 class Signal(object):
9093 method into all of the observers.
9194 """
9295
93 def __init__(self, name, suppress_failures):
96 def __init__(self, name):
9497 self.name = name
95 self.suppress_failures = suppress_failures
9698 self.observers = []
9799
98100 def observe(self, observer):
102104 Each observer callable may return a Deferred."""
103105 self.observers.append(observer)
104106
105 @defer.inlineCallbacks
106107 def fire(self, *args, **kwargs):
107108 """Invokes every callable in the observer list, passing in the args and
108109 kwargs. Exceptions thrown by observers are logged but ignored. It is
120121 failure.type,
121122 failure.value,
122123 failure.getTracebackObject()))
123 if not self.suppress_failures:
124 return failure
125124
126125 return defer.maybeDeferred(observer, *args, **kwargs).addErrback(eb)
127126
128 with PreserveLoggingContext():
129 deferreds = [
130 do(observer)
131 for observer in self.observers
132 ]
127 deferreds = [
128 run_in_background(do, o)
129 for o in self.observers
130 ]
133131
134 res = yield defer.gatherResults(
135 deferreds, consumeErrors=True
136 ).addErrback(unwrapFirstError)
137
138 defer.returnValue(res)
132 return make_deferred_yieldable(defer.gatherResults(
133 deferreds, consumeErrors=True,
134 ))
139135
140136 def __repr__(self):
141137 return "<Signal name=%r>" % (self.name,)
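Per the docstring above, `Signal.fire` invokes every observer and logs but ignores their exceptions, so one failing observer cannot break the rest. A minimal synchronous analogue of that dispatch contract (not the Twisted implementation):

```python
import logging

logger = logging.getLogger(__name__)

class Signal:
    """Call every observer in turn; log and swallow exceptions so a
    failing observer cannot prevent the others from running."""

    def __init__(self, name):
        self.name = name
        self.observers = []

    def observe(self, observer):
        self.observers.append(observer)

    def fire(self, *args, **kwargs):
        results = []
        for observer in self.observers:
            try:
                results.append(observer(*args, **kwargs))
            except Exception:
                logger.exception(
                    "%s signal observer %s failed", self.name, observer
                )
                results.append(None)  # failure recorded, dispatch continues
        return results

sig = Signal("user_joined_room")
sig.observe(lambda user, room_id: ("joined", user, room_id))
sig.observe(lambda user, room_id: 1 / 0)  # a deliberately failing observer
res = sig.fire("@alice:hs", "!room:hs")
assert res[0] == ("joined", "@alice:hs", "!room:hs")
assert res[1] is None  # the failure was logged and ignored
```

The real version runs each observer with `run_in_background` and gathers the deferreds with `consumeErrors=True`, which is the asynchronous equivalent of the try/except loop here.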
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 from six import string_types
15 from six import binary_type, text_type
1616
1717 from canonicaljson import json
1818 from frozendict import frozendict
2525 if isinstance(o, frozendict):
2626 return o
2727
28 if isinstance(o, string_types):
28 if isinstance(o, (binary_type, text_type)):
2929 return o
3030
3131 try:
4040 if isinstance(o, (dict, frozendict)):
4141 return dict({k: unfreeze(v) for k, v in o.items()})
4242
43 if isinstance(o, string_types):
43 if isinstance(o, (binary_type, text_type)):
4444 return o
4545
4646 try:
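The `string_types` to `(binary_type, text_type)` change above matters because on Python 3 `six.string_types` is just `(str,)`, so `bytes` values fell through the old check and were handed to the freeze/unfreeze machinery. A tiny illustration of the widened check using the plain Python 3 types (the helper name is made up):

```python
def is_stringlike(o):
    # On Python 3, six.string_types is (str,) only, so bytes objects
    # slipped past the old isinstance check; testing both bytes and
    # text (six's binary_type and text_type) closes that gap.
    return isinstance(o, (bytes, str))

assert is_stringlike(u"text")
assert is_stringlike(b"bytes")   # the case string_types missed on py3
assert not is_stringlike([1, 2])
```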
9797 self.db_txn_duration_sec = 0
9898 self.db_sched_duration_sec = 0
9999 self.evt_db_fetch_count = 0
100
101 def __repr__(self):
102 return ("<ContextResourceUsage ru_stime='%r', ru_utime='%r', "
103 "db_txn_count='%r', db_txn_duration_sec='%r', "
104 "db_sched_duration_sec='%r', evt_db_fetch_count='%r'>") % (
105 self.ru_stime,
106 self.ru_utime,
107 self.db_txn_count,
108 self.db_txn_duration_sec,
109 self.db_sched_duration_sec,
110 self.evt_db_fetch_count,)
100111
101112 def __iadd__(self, other):
102113 """Add another ContextResourceUsage's stats to this one's.
103103 logger.warn("Expected context. (%r)", self.name)
104104 return
105105
106 usage = context.get_resource_usage() - self.start_usage
107 block_ru_utime.labels(self.name).inc(usage.ru_utime)
108 block_ru_stime.labels(self.name).inc(usage.ru_stime)
109 block_db_txn_count.labels(self.name).inc(usage.db_txn_count)
110 block_db_txn_duration.labels(self.name).inc(usage.db_txn_duration_sec)
111 block_db_sched_duration.labels(self.name).inc(usage.db_sched_duration_sec)
106 current = context.get_resource_usage()
107 usage = current - self.start_usage
108 try:
109 block_ru_utime.labels(self.name).inc(usage.ru_utime)
110 block_ru_stime.labels(self.name).inc(usage.ru_stime)
111 block_db_txn_count.labels(self.name).inc(usage.db_txn_count)
112 block_db_txn_duration.labels(self.name).inc(usage.db_txn_duration_sec)
113 block_db_sched_duration.labels(self.name).inc(usage.db_sched_duration_sec)
114 except ValueError:
115 logger.warn(
116 "Failed to save metrics! OLD: %r, NEW: %r",
117 self.start_usage, current
118 )
112119
113120 if self.created_context:
114121 self.start_context.__exit__(exc_type, exc_val, exc_tb)
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 import itertools
14
1515 import logging
1616 import operator
1717
18 import six
18 from six import iteritems, itervalues
19 from six.moves import map
1920
2021 from twisted.internet import defer
2122
2223 from synapse.api.constants import EventTypes, Membership
2324 from synapse.events.utils import prune_event
2425 from synapse.types import get_domain_from_id
25 from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
2626
2727 logger = logging.getLogger(__name__)
2828
7474 types=types,
7575 )
7676
77 forgotten = yield make_deferred_yieldable(defer.gatherResults([
78 defer.maybeDeferred(
79 preserve_fn(store.who_forgot_in_room),
80 room_id,
81 )
82 for room_id in frozenset(e.room_id for e in events)
83 ], consumeErrors=True))
84
85 # Set of membership event_ids that have been forgotten
86 event_id_forgotten = frozenset(
87 row["event_id"] for rows in forgotten for row in rows
88 )
89
9077 ignore_dict_content = yield store.get_global_account_data_by_type_for_user(
9178 "m.ignored_user_list", user_id,
9279 )
175162 if membership is None:
176163 membership_event = state.get((EventTypes.Member, user_id), None)
177164 if membership_event:
178 # XXX why do we do this?
179 # https://github.com/matrix-org/synapse/issues/3350
180 if membership_event.event_id not in event_id_forgotten:
181 membership = membership_event.membership
165 membership = membership_event.membership
182166
183167 # if the user was a member of the room at the time of the event,
184168 # they can see it.
220204 return event
221205
222206 # check each event: gives an iterable[None|EventBase]
223 filtered_events = itertools.imap(allowed, events)
207 filtered_events = map(allowed, events)
224208
225209 # remove the None entries
226210 filtered_events = filter(operator.truth, filtered_events)
260244 # membership states for the requesting server to determine
261245 # if the server is either in the room or has been invited
262246 # into the room.
263 for ev in state.itervalues():
247 for ev in itervalues(state):
264248 if ev.type != EventTypes.Member:
265249 continue
266250 try:
294278 )
295279
296280 visibility_ids = set()
297 for sids in event_to_state_ids.itervalues():
281 for sids in itervalues(event_to_state_ids):
298282 hist = sids.get((EventTypes.RoomHistoryVisibility, ""))
299283 if hist:
300284 visibility_ids.add(hist)
307291 event_map = yield store.get_events(visibility_ids)
308292 all_open = all(
309293 e.content.get("history_visibility") in (None, "shared", "world_readable")
310 for e in event_map.itervalues()
294 for e in itervalues(event_map)
311295 )
312296
313297 if all_open:
345329 #
346330 state_key_to_event_id_set = {
347331 e
348 for key_to_eid in six.itervalues(event_to_state_ids)
332 for key_to_eid in itervalues(event_to_state_ids)
349333 for e in key_to_eid.items()
350334 }
351335
368352 event_to_state = {
369353 e_id: {
370354 key: event_map[inner_e_id]
371 for key, inner_e_id in key_to_eid.iteritems()
355 for key, inner_e_id in iteritems(key_to_eid)
372356 if inner_e_id in event_map
373357 }
374 for e_id, key_to_eid in event_to_state_ids.iteritems()
358 for e_id, key_to_eid in iteritems(event_to_state_ids)
375359 }
376360
377361 defer.returnValue([
4545 self.auth = Auth(self.hs)
4646
4747 self.test_user = "@foo:bar"
48 self.test_token = "_test_token_"
48 self.test_token = b"_test_token_"
4949
5050 # this is overridden for the appservice tests
5151 self.store.get_app_service_by_token = Mock(return_value=None)
6060 self.store.get_user_by_access_token = Mock(return_value=user_info)
6161
6262 request = Mock(args={})
63 request.args["access_token"] = [self.test_token]
63 request.args[b"access_token"] = [self.test_token]
6464 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
6565 requester = yield self.auth.get_user_by_req(request)
6666 self.assertEquals(requester.user.to_string(), self.test_user)
6969 self.store.get_user_by_access_token = Mock(return_value=None)
7070
7171 request = Mock(args={})
72 request.args["access_token"] = [self.test_token]
72 request.args[b"access_token"] = [self.test_token]
7373 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
7474 d = self.auth.get_user_by_req(request)
7575 self.failureResultOf(d, AuthError)
9797
9898 request = Mock(args={})
9999 request.getClientIP.return_value = "127.0.0.1"
100 request.args["access_token"] = [self.test_token]
100 request.args[b"access_token"] = [self.test_token]
101101 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
102102 requester = yield self.auth.get_user_by_req(request)
103103 self.assertEquals(requester.user.to_string(), self.test_user)
114114
115115 request = Mock(args={})
116116 request.getClientIP.return_value = "192.168.10.10"
117 request.args["access_token"] = [self.test_token]
117 request.args[b"access_token"] = [self.test_token]
118118 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
119119 requester = yield self.auth.get_user_by_req(request)
120120 self.assertEquals(requester.user.to_string(), self.test_user)
130130
131131 request = Mock(args={})
132132 request.getClientIP.return_value = "131.111.8.42"
133 request.args["access_token"] = [self.test_token]
133 request.args[b"access_token"] = [self.test_token]
134134 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
135135 d = self.auth.get_user_by_req(request)
136136 self.failureResultOf(d, AuthError)
140140 self.store.get_user_by_access_token = Mock(return_value=None)
141141
142142 request = Mock(args={})
143 request.args["access_token"] = [self.test_token]
143 request.args[b"access_token"] = [self.test_token]
144144 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
145145 d = self.auth.get_user_by_req(request)
146146 self.failureResultOf(d, AuthError)
157157
158158 @defer.inlineCallbacks
159159 def test_get_user_by_req_appservice_valid_token_valid_user_id(self):
160 masquerading_user_id = "@doppelganger:matrix.org"
160 masquerading_user_id = b"@doppelganger:matrix.org"
161161 app_service = Mock(
162162 token="foobar", url="a_url", sender=self.test_user,
163163 ip_range_whitelist=None,
168168
169169 request = Mock(args={})
170170 request.getClientIP.return_value = "127.0.0.1"
171 request.args["access_token"] = [self.test_token]
172 request.args["user_id"] = [masquerading_user_id]
171 request.args[b"access_token"] = [self.test_token]
172 request.args[b"user_id"] = [masquerading_user_id]
173173 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
174174 requester = yield self.auth.get_user_by_req(request)
175 self.assertEquals(requester.user.to_string(), masquerading_user_id)
175 self.assertEquals(
176 requester.user.to_string(),
177 masquerading_user_id.decode('utf8')
178 )
176179
177180 def test_get_user_by_req_appservice_valid_token_bad_user_id(self):
178 masquerading_user_id = "@doppelganger:matrix.org"
181 masquerading_user_id = b"@doppelganger:matrix.org"
179182 app_service = Mock(
180183 token="foobar", url="a_url", sender=self.test_user,
181184 ip_range_whitelist=None,
186189
187190 request = Mock(args={})
188191 request.getClientIP.return_value = "127.0.0.1"
189 request.args["access_token"] = [self.test_token]
190 request.args["user_id"] = [masquerading_user_id]
192 request.args[b"access_token"] = [self.test_token]
193 request.args[b"user_id"] = [masquerading_user_id]
191194 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
192195 d = self.auth.get_user_by_req(request)
193196 self.failureResultOf(d, AuthError)
417420
418421 # check the token works
419422 request = Mock(args={})
420 request.args["access_token"] = [token]
423 request.args[b"access_token"] = [token.encode('ascii')]
421424 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
422425 requester = yield self.auth.get_user_by_req(request, allow_guest=True)
423426 self.assertEqual(UserID.from_string(USER_ID), requester.user)
430433
431434 # the token should *not* work now
432435 request = Mock(args={})
433 request.args["access_token"] = [guest_tok]
436 request.args[b"access_token"] = [guest_tok.encode('ascii')]
434437 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
435438
436439 with self.assertRaises(AuthError) as cm:
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 from mock import Mock
1415
1516 import pymacaroons
1617
1819
1920 import synapse
2021 import synapse.api.errors
22 from synapse.api.errors import AuthError
2123 from synapse.handlers.auth import AuthHandler
2224
2325 from tests import unittest
3638 self.hs.handlers = AuthHandlers(self.hs)
3739 self.auth_handler = self.hs.handlers.auth_handler
3840 self.macaroon_generator = self.hs.get_macaroon_generator()
41 # MAU tests
42 self.hs.config.max_mau_value = 50
43 self.small_number_of_users = 1
44 self.large_number_of_users = 100
3945
4046 def test_token_is_a_macaroon(self):
4147 token = self.macaroon_generator.generate_access_token("some_user")
7076 v.satisfy_general(verify_nonce)
7177 v.verify(macaroon, self.hs.config.macaroon_secret_key)
7278
79 @defer.inlineCallbacks
7380 def test_short_term_login_token_gives_user_id(self):
7481 self.hs.clock.now = 1000
7582
7683 token = self.macaroon_generator.generate_short_term_login_token(
7784 "a_user", 5000
7885 )
79
80 self.assertEqual(
81 "a_user",
82 self.auth_handler.validate_short_term_login_token_and_get_user_id(
83 token
84 )
86 user_id = yield self.auth_handler.validate_short_term_login_token_and_get_user_id(
87 token
8588 )
89 self.assertEqual("a_user", user_id)
8690
8791 # when we advance the clock, the token should be rejected
8892 self.hs.clock.now = 6000
8993 with self.assertRaises(synapse.api.errors.AuthError):
90 self.auth_handler.validate_short_term_login_token_and_get_user_id(
94 yield self.auth_handler.validate_short_term_login_token_and_get_user_id(
9195 token
9296 )
9397
98 @defer.inlineCallbacks
9499 def test_short_term_login_token_cannot_replace_user_id(self):
95100 token = self.macaroon_generator.generate_short_term_login_token(
96101 "a_user", 5000
97102 )
98103 macaroon = pymacaroons.Macaroon.deserialize(token)
99104
105 user_id = yield self.auth_handler.validate_short_term_login_token_and_get_user_id(
106 macaroon.serialize()
107 )
100108 self.assertEqual(
101 "a_user",
102 self.auth_handler.validate_short_term_login_token_and_get_user_id(
103 macaroon.serialize()
104 )
109 "a_user", user_id
105110 )
106111
107112 # add another "user_id" caveat, which might allow us to override the
109114 macaroon.add_first_party_caveat("user_id = b_user")
110115
111116 with self.assertRaises(synapse.api.errors.AuthError):
112 self.auth_handler.validate_short_term_login_token_and_get_user_id(
117 yield self.auth_handler.validate_short_term_login_token_and_get_user_id(
113118 macaroon.serialize()
114119 )
120
121 @defer.inlineCallbacks
122 def test_mau_limits_disabled(self):
123 self.hs.config.limit_usage_by_mau = False
124 # Ensure does not raise exception
125 yield self.auth_handler.get_access_token_for_user_id('user_a')
126
127 yield self.auth_handler.validate_short_term_login_token_and_get_user_id(
128 self._get_macaroon().serialize()
129 )
130
131 @defer.inlineCallbacks
132 def test_mau_limits_exceeded(self):
133 self.hs.config.limit_usage_by_mau = True
134 self.hs.get_datastore().count_monthly_users = Mock(
135 return_value=defer.succeed(self.large_number_of_users)
136 )
137
138 with self.assertRaises(AuthError):
139 yield self.auth_handler.get_access_token_for_user_id('user_a')
140
141 self.hs.get_datastore().count_monthly_users = Mock(
142 return_value=defer.succeed(self.large_number_of_users)
143 )
144 with self.assertRaises(AuthError):
145 yield self.auth_handler.validate_short_term_login_token_and_get_user_id(
146 self._get_macaroon().serialize()
147 )
148
149 @defer.inlineCallbacks
150 def test_mau_limits_not_exceeded(self):
151 self.hs.config.limit_usage_by_mau = True
152
153 self.hs.get_datastore().count_monthly_users = Mock(
154 return_value=defer.succeed(self.small_number_of_users)
155 )
156 # Ensure does not raise exception
157 yield self.auth_handler.get_access_token_for_user_id('user_a')
158
159 self.hs.get_datastore().count_monthly_users = Mock(
160 return_value=defer.succeed(self.small_number_of_users)
161 )
162 yield self.auth_handler.validate_short_term_login_token_and_get_user_id(
163 self._get_macaroon().serialize()
164 )
165
166 def _get_macaroon(self):
167 token = self.macaroon_generator.generate_short_term_login_token(
168 "user_a", 5000
169 )
170 return pymacaroons.Macaroon.deserialize(token)
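The tamper-resistance these macaroon tests rely on comes from caveat chaining: each caveat is folded into an HMAC keyed by the previous signature, so an existing caveat can never be removed or altered without the root key. A minimal stdlib sketch of that construction (Synapse itself delegates this to pymacaroons; the function names here are illustrative):

```python
import hashlib
import hmac


def _chain(sig, msg):
    # Each link keys an HMAC with the previous signature, so existing
    # caveats can never be removed or altered without the root key.
    return hmac.new(sig, msg.encode("utf8"), hashlib.sha256).digest()


def mint(root_key, identifier, caveats):
    # Hypothetical helper; Synapse builds real macaroons via pymacaroons.
    sig = _chain(root_key, identifier)
    for caveat in caveats:
        sig = _chain(sig, caveat)
    return sig


def verify(root_key, identifier, caveats, sig):
    # The verifier holds the root key, so it can recompute the whole chain.
    return hmac.compare_digest(mint(root_key, identifier, caveats), sig)
```

Note that a token holder can always append a further caveat (and extend the signature accordingly), but that only ever adds a restriction: the verifier still requires the original `user_id = a_user` caveat to hold, which is why the conflicting `user_id = b_user` caveat in `test_short_term_login_token_cannot_replace_user_id` produces an AuthError.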
1616
1717 from twisted.internet import defer
1818
19 from synapse.api.errors import RegistrationError
1920 from synapse.handlers.register import RegistrationHandler
2021 from synapse.types import UserID, create_requester
2122
7677 requester, local_part, display_name)
7778 self.assertEquals(result_user_id, user_id)
7879 self.assertEquals(result_token, 'secret')
80
81 @defer.inlineCallbacks
82 def test_cannot_register_when_mau_limits_exceeded(self):
83 local_part = "someone"
84 display_name = "someone"
85 requester = create_requester("@as:test")
86 store = self.hs.get_datastore()
87 self.hs.config.limit_usage_by_mau = False
88 self.hs.config.max_mau_value = 50
89 lots_of_users = 100
90 small_number_users = 1
91
92 store.count_monthly_users = Mock(return_value=defer.succeed(lots_of_users))
93
94 # Ensure does not raise exception
95 yield self.handler.get_or_create_user(requester, 'a', display_name)
96
97 self.hs.config.limit_usage_by_mau = True
98
99 with self.assertRaises(RegistrationError):
100 yield self.handler.get_or_create_user(requester, 'b', display_name)
101
102 store.count_monthly_users = Mock(return_value=defer.succeed(small_number_users))
103
104 self._macaroon_mock_generator("another_secret")
105
106 # Ensure does not raise exception
107 yield self.handler.get_or_create_user("@neil:matrix.org", 'c', "Neil")
108
109 self._macaroon_mock_generator("another another secret")
110 store.count_monthly_users = Mock(return_value=defer.succeed(lots_of_users))
111
112 with self.assertRaises(RegistrationError):
113 yield self.handler.register(localpart=local_part)
114
115 self._macaroon_mock_generator("another another secret")
116 store.count_monthly_users = Mock(return_value=defer.succeed(lots_of_users))
117
118 with self.assertRaises(RegistrationError):
119 yield self.handler.register_saml2(local_part)
120
121 def _macaroon_mock_generator(self, secret):
122 """
123 Reset macaroon generator in the case where the test creates multiple users
124 """
125 macaroon_generator = Mock(
126 generate_access_token=Mock(return_value=secret))
127 self.hs.get_macaroon_generator = Mock(return_value=macaroon_generator)
128 self.hs.handlers = RegistrationHandlers(self.hs)
129 self.handler = self.hs.get_handlers().registration_handler
4343 "content": content,
4444 }
4545 ],
46 "pdu_failures": [],
4746 }
4847
4948
1010 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1111 # See the License for the specific language governing permissions and
1212 # limitations under the License.
13
1413 import tempfile
1514
1615 from mock import Mock, NonCallableMock
1716
1817 from twisted.internet import defer, reactor
18 from twisted.internet.defer import Deferred
1919
2020 from synapse.replication.tcp.client import (
2121 ReplicationClientFactory,
2222 ReplicationClientHandler,
2323 )
2424 from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
25 from synapse.util.logcontext import PreserveLoggingContext, make_deferred_yieldable
2526
2627 from tests import unittest
2728 from tests.utils import setup_test_homeserver
29
30
31 class TestReplicationClientHandler(ReplicationClientHandler):
32 """Overrides on_rdata so that we can wait for it to happen"""
33 def __init__(self, store):
34 super(TestReplicationClientHandler, self).__init__(store)
35 self._rdata_awaiters = []
36
37 def await_replication(self):
38 d = Deferred()
39 self._rdata_awaiters.append(d)
40 return make_deferred_yieldable(d)
41
42 def on_rdata(self, stream_name, token, rows):
43 awaiters = self._rdata_awaiters
44 self._rdata_awaiters = []
45 super(TestReplicationClientHandler, self).on_rdata(stream_name, token, rows)
46 with PreserveLoggingContext():
47 for a in awaiters:
48 a.callback(None)
2849
2950
3051 class BaseSlavedStoreTestCase(unittest.TestCase):
5172 self.addCleanup(listener.stopListening)
5273 self.streamer = server_factory.streamer
5374
54 self.replication_handler = ReplicationClientHandler(self.slaved_store)
75 self.replication_handler = TestReplicationClientHandler(self.slaved_store)
5576 client_factory = ReplicationClientFactory(
5677 self.hs, "client_name", self.replication_handler
5778 )
5980 self.addCleanup(client_factory.stopTrying)
6081 self.addCleanup(client_connector.disconnect)
6182
62 @defer.inlineCallbacks
6383 def replicate(self):
64 yield self.streamer.on_notifier_poke()
65 d = self.replication_handler.await_sync("replication_test")
66 self.streamer.send_sync_to_all_connections("replication_test")
67 yield d
84 """Tell the master side of replication that something has happened, and then
85 wait for the replication to occur.
86 """
87 # xxx: should we be more specific in what we wait for?
88 d = self.replication_handler.await_replication()
89 self.streamer.on_notifier_poke()
90 return d
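The awaiter pattern used by `TestReplicationClientHandler` above — register a deferred, then resolve every pending one when the next `on_rdata` arrives — can be modelled without Twisted. A simplified sketch with plain callbacks (class and method names are illustrative):

```python
class Notifier:
    """Minimal model of the awaiter pattern: callers register a waiter,
    and the next event resolves every pending waiter at once."""

    def __init__(self):
        self._awaiters = []

    def await_event(self, callback):
        self._awaiters.append(callback)

    def fire(self, value):
        # Swap the list out first so callbacks registered during
        # notification wait for the *next* event, as on_rdata does.
        awaiters, self._awaiters = self._awaiters, []
        for cb in awaiters:
            cb(value)
```

Resetting the list before invoking the callbacks mirrors `on_rdata` clearing `self._rdata_awaiters` before firing, so each waiter is woken exactly once.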
6891
6992 @defer.inlineCallbacks
7093 def check(self, method, args, expected_result=None):
221221 state_ids = {
222222 key: e.event_id for key, e in state.items()
223223 }
224 context = EventContext()
225 context.current_state_ids = state_ids
226 context.prev_state_ids = state_ids
224 context = EventContext.with_state(
225 state_group=None,
226 current_state_ids=state_ids,
227 prev_state_ids=state_ids
228 )
227229 else:
228230 state_handler = self.hs.get_state_handler()
229231 context = yield state_handler.compute_event_context(event)
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import hashlib
16 import hmac
17 import json
18
19 from mock import Mock
20
21 from synapse.http.server import JsonResource
22 from synapse.rest.client.v1.admin import register_servlets
23 from synapse.util import Clock
24
25 from tests import unittest
26 from tests.server import (
27 ThreadedMemoryReactorClock,
28 make_request,
29 render,
30 setup_test_homeserver,
31 )
32
33
34 class UserRegisterTestCase(unittest.TestCase):
35 def setUp(self):
36
37 self.clock = ThreadedMemoryReactorClock()
38 self.hs_clock = Clock(self.clock)
39 self.url = "/_matrix/client/r0/admin/register"
40
41 self.registration_handler = Mock()
42 self.identity_handler = Mock()
43 self.login_handler = Mock()
44 self.device_handler = Mock()
45 self.device_handler.check_device_registered = Mock(return_value="FAKE")
46
47 self.datastore = Mock(return_value=Mock())
48 self.datastore.get_current_state_deltas = Mock(return_value=[])
49
50 self.secrets = Mock()
51
52 self.hs = setup_test_homeserver(
53 http_client=None, clock=self.hs_clock, reactor=self.clock
54 )
55
56 self.hs.config.registration_shared_secret = u"shared"
57
58 self.hs.get_media_repository = Mock()
59 self.hs.get_deactivate_account_handler = Mock()
60
61 self.resource = JsonResource(self.hs)
62 register_servlets(self.hs, self.resource)
63
64 def test_disabled(self):
65 """
66 If there is no shared secret, registration through this method will be
67 prevented.
68 """
69 self.hs.config.registration_shared_secret = None
70
71 request, channel = make_request("POST", self.url, b'{}')
72 render(request, self.resource, self.clock)
73
74 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
75 self.assertEqual(
76 'Shared secret registration is not enabled', channel.json_body["error"]
77 )
78
79 def test_get_nonce(self):
80 """
81 Calling GET on the endpoint will return a randomised nonce, using the
82 homeserver's secrets provider.
83 """
84 secrets = Mock()
85 secrets.token_hex = Mock(return_value="abcd")
86
87 self.hs.get_secrets = Mock(return_value=secrets)
88
89 request, channel = make_request("GET", self.url)
90 render(request, self.resource, self.clock)
91
92 self.assertEqual(channel.json_body, {"nonce": "abcd"})
93
94 def test_expired_nonce(self):
95 """
96 Calling GET on the endpoint will return a randomised nonce, which will
97 only last for SALT_TIMEOUT (60s).
98 """
99 request, channel = make_request("GET", self.url)
100 render(request, self.resource, self.clock)
101 nonce = channel.json_body["nonce"]
102
103 # 59 seconds
104 self.clock.advance(59)
105
106 body = json.dumps({"nonce": nonce})
107 request, channel = make_request("POST", self.url, body.encode('utf8'))
108 render(request, self.resource, self.clock)
109
110 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
111 self.assertEqual('username must be specified', channel.json_body["error"])
112
113 # 61 seconds
114 self.clock.advance(2)
115
116 request, channel = make_request("POST", self.url, body.encode('utf8'))
117 render(request, self.resource, self.clock)
118
119 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
120 self.assertEqual('unrecognised nonce', channel.json_body["error"])
121
122 def test_register_incorrect_nonce(self):
123 """
124 Only the provided nonce can be used, as it's checked in the MAC.
125 """
126 request, channel = make_request("GET", self.url)
127 render(request, self.resource, self.clock)
128 nonce = channel.json_body["nonce"]
129
130 want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
131 want_mac.update(b"notthenonce\x00bob\x00abc123\x00admin")
132 want_mac = want_mac.hexdigest()
133
134 body = json.dumps(
135 {
136 "nonce": nonce,
137 "username": "bob",
138 "password": "abc123",
139 "admin": True,
140 "mac": want_mac,
141 }
142 )
143 request, channel = make_request("POST", self.url, body.encode('utf8'))
144 render(request, self.resource, self.clock)
145
146 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
147 self.assertEqual("HMAC incorrect", channel.json_body["error"])
148
149 def test_register_correct_nonce(self):
150 """
151 When the correct nonce is provided, and the right key is provided, the
152 user is registered.
153 """
154 request, channel = make_request("GET", self.url)
155 render(request, self.resource, self.clock)
156 nonce = channel.json_body["nonce"]
157
158 want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
159 want_mac.update(nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin")
160 want_mac = want_mac.hexdigest()
161
162 body = json.dumps(
163 {
164 "nonce": nonce,
165 "username": "bob",
166 "password": "abc123",
167 "admin": True,
168 "mac": want_mac,
169 }
170 )
171 request, channel = make_request("POST", self.url, body.encode('utf8'))
172 render(request, self.resource, self.clock)
173
174 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
175 self.assertEqual("@bob:test", channel.json_body["user_id"])
176
177 def test_nonce_reuse(self):
178 """
179 A valid nonce can only be used once; reusing it is rejected.
180 """
181 request, channel = make_request("GET", self.url)
182 render(request, self.resource, self.clock)
183 nonce = channel.json_body["nonce"]
184
185 want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
186 want_mac.update(nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin")
187 want_mac = want_mac.hexdigest()
188
189 body = json.dumps(
190 {
191 "nonce": nonce,
192 "username": "bob",
193 "password": "abc123",
194 "admin": True,
195 "mac": want_mac,
196 }
197 )
198 request, channel = make_request("POST", self.url, body.encode('utf8'))
199 render(request, self.resource, self.clock)
200
201 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
202 self.assertEqual("@bob:test", channel.json_body["user_id"])
203
204 # Now, try and reuse it
205 request, channel = make_request("POST", self.url, body.encode('utf8'))
206 render(request, self.resource, self.clock)
207
208 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
209 self.assertEqual('unrecognised nonce', channel.json_body["error"])
210
211 def test_missing_parts(self):
212 """
213 Synapse will complain if you don't give nonce, username, password, and
214 mac. Admin is optional. Additional checks are done for length and
215 type.
216 """
217 def nonce():
218 request, channel = make_request("GET", self.url)
219 render(request, self.resource, self.clock)
220 return channel.json_body["nonce"]
221
222 #
223 # Nonce check
224 #
225
226 # Must be present
227 body = json.dumps({})
228 request, channel = make_request("POST", self.url, body.encode('utf8'))
229 render(request, self.resource, self.clock)
230
231 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
232 self.assertEqual('nonce must be specified', channel.json_body["error"])
233
234 #
235 # Username checks
236 #
237
238 # Must be present
239 body = json.dumps({"nonce": nonce()})
240 request, channel = make_request("POST", self.url, body.encode('utf8'))
241 render(request, self.resource, self.clock)
242
243 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
244 self.assertEqual('username must be specified', channel.json_body["error"])
245
246 # Must be a string
247 body = json.dumps({"nonce": nonce(), "username": 1234})
248 request, channel = make_request("POST", self.url, body.encode('utf8'))
249 render(request, self.resource, self.clock)
250
251 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
252 self.assertEqual('Invalid username', channel.json_body["error"])
253
254 # Must not have null bytes
255 body = json.dumps({"nonce": nonce(), "username": b"abcd\x00"})
256 request, channel = make_request("POST", self.url, body.encode('utf8'))
257 render(request, self.resource, self.clock)
258
259 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
260 self.assertEqual('Invalid username', channel.json_body["error"])
261
262 # Must not be too long
263 body = json.dumps({"nonce": nonce(), "username": "a" * 1000})
264 request, channel = make_request("POST", self.url, body.encode('utf8'))
265 render(request, self.resource, self.clock)
266
267 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
268 self.assertEqual('Invalid username', channel.json_body["error"])
269
270 #
271 # Password checks
272 #
273
274 # Must be present
275 body = json.dumps({"nonce": nonce(), "username": "a"})
276 request, channel = make_request("POST", self.url, body.encode('utf8'))
277 render(request, self.resource, self.clock)
278
279 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
280 self.assertEqual('password must be specified', channel.json_body["error"])
281
282 # Must be a string
283 body = json.dumps({"nonce": nonce(), "username": "a", "password": 1234})
284 request, channel = make_request("POST", self.url, body.encode('utf8'))
285 render(request, self.resource, self.clock)
286
287 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
288 self.assertEqual('Invalid password', channel.json_body["error"])
289
290 # Must not have null bytes
291 body = json.dumps({"nonce": nonce(), "username": "a", "password": b"abcd\x00"})
292 request, channel = make_request("POST", self.url, body.encode('utf8'))
293 render(request, self.resource, self.clock)
294
295 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
296 self.assertEqual('Invalid password', channel.json_body["error"])
297
298 # Super long
299 body = json.dumps({"nonce": nonce(), "username": "a", "password": "A" * 1000})
300 request, channel = make_request("POST", self.url, body.encode('utf8'))
301 render(request, self.resource, self.clock)
302
303 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
304 self.assertEqual('Invalid password', channel.json_body["error"])
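The `want_mac` computations in these tests can be factored into a small client-side helper: the MAC is an HMAC-SHA1 over the nonce, username, password, and admin flag joined by NUL bytes. A hedged sketch (the `notadmin` tag for the non-admin case is an assumption mirroring Synapse's register script, not something these tests exercise):

```python
import hashlib
import hmac


def register_mac(shared_secret, nonce, username, password, admin=False):
    # The MAC covers the nonce, so each MAC is single-use, as
    # test_nonce_reuse checks above.
    mac = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
    mac.update(
        b"\x00".join(
            [
                nonce.encode("ascii"),
                username.encode("utf8"),
                password.encode("utf8"),
                # Assumed tag for the non-admin case.
                b"admin" if admin else b"notadmin",
            ]
        )
    )
    return mac.hexdigest()
```

Because the nonce is bound into the MAC, a stale or reused nonce cannot be swapped in without recomputing the MAC with the shared secret.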
1313 # limitations under the License.
1414
1515 """ Tests REST events for /events paths."""
16
1617 from mock import Mock, NonCallableMock
18 from six import PY3
1719
18 # twisted imports
1920 from twisted.internet import defer
20
21 import synapse.rest.client.v1.events
22 import synapse.rest.client.v1.register
23 import synapse.rest.client.v1.room
24
25 from tests import unittest
2621
2722 from ....utils import MockHttpResource, setup_test_homeserver
2823 from .utils import RestTestCase
3025 PATH_PREFIX = "/_matrix/client/api/v1"
3126
3227
33 class EventStreamPaginationApiTestCase(unittest.TestCase):
34 """ Tests event streaming query parameters and start/end keys used in the
35 Pagination stream API. """
36 user_id = "sid1"
37
38 def setUp(self):
39 # configure stream and inject items
40 pass
41
42 def tearDown(self):
43 pass
44
45 def TODO_test_long_poll(self):
46 # stream from 'end' key, send (self+other) message, expect message.
47
48 # stream from 'END', send (self+other) message, expect message.
49
50 # stream from 'end' key, send (self+other) topic, expect topic.
51
52 # stream from 'END', send (self+other) topic, expect topic.
53
54 # stream from 'end' key, send (self+other) invite, expect invite.
55
56 # stream from 'END', send (self+other) invite, expect invite.
57
58 pass
59
60 def TODO_test_stream_forward(self):
61 # stream from START, expect injected items
62
63 # stream from 'start' key, expect same content
64
65 # stream from 'end' key, expect nothing
66
67 # stream from 'END', expect nothing
68
69 # The following is needed for cases where content is removed e.g. you
70 # left a room, so the token you're streaming from is > the one that
71 # would be returned naturally from START>END.
72 # stream from very new token (higher than end key), expect same token
73 # returned as end key
74 pass
75
76 def TODO_test_limits(self):
77 # stream from a key, expect limit_num items
78
79 # stream from START, expect limit_num items
80
81 pass
82
83 def TODO_test_range(self):
84 # stream from key to key, expect X items
85
86 # stream from key to END, expect X items
87
88 # stream from START to key, expect X items
89
90 # stream from START to END, expect all items
91 pass
92
93 def TODO_test_direction(self):
94 # stream from END to START and fwds, expect newest first
95
96 # stream from END to START and bwds, expect oldest first
97
98 # stream from START to END and fwds, expect oldest first
99
100 # stream from START to END and bwds, expect newest first
101
102 pass
103
104
10528 class EventStreamPermissionsTestCase(RestTestCase):
10629 """ Tests event streaming (GET /events). """
10730
31 if PY3:
32 skip = "Skip on Py3 until ported away from the v1-only register API."
33
10834 @defer.inlineCallbacks
10935 def setUp(self):
36 import synapse.rest.client.v1.events
37 import synapse.rest.client.v1_only.register
38 import synapse.rest.client.v1.room
39
11040 self.mock_resource = MockHttpResource(prefix=PATH_PREFIX)
11141
11242 hs = yield setup_test_homeserver(
12454
12555 hs.get_handlers().federation_handler = Mock()
12656
127 synapse.rest.client.v1.register.register_servlets(hs, self.mock_resource)
57 synapse.rest.client.v1_only.register.register_servlets(hs, self.mock_resource)
12858 synapse.rest.client.v1.events.register_servlets(hs, self.mock_resource)
12959 synapse.rest.client.v1.room.register_servlets(hs, self.mock_resource)
13060
1515 import json
1616
1717 from mock import Mock
18 from six import PY3
1819
1920 from twisted.test.proto_helpers import MemoryReactorClock
2021
2122 from synapse.http.server import JsonResource
22 from synapse.rest.client.v1.register import register_servlets
23 from synapse.rest.client.v1_only.register import register_servlets
2324 from synapse.util import Clock
2425
2526 from tests import unittest
3031 """
3132 Tests for CreateUserRestServlet.
3233 """
34 if PY3:
35 skip = "Not ported to Python 3."
3336
3437 def setUp(self):
3538 self.registration_handler = Mock()
1919 from mock import Mock, NonCallableMock
2020 from six.moves.urllib import parse as urlparse
2121
22 # twisted imports
2322 from twisted.internet import defer
2423
2524 import synapse.rest.client.v1.room
8584
8685 self.resource = JsonResource(self.hs)
8786 synapse.rest.client.v1.room.register_servlets(self.hs, self.resource)
87 synapse.rest.client.v1.room.register_deprecated_servlets(self.hs, self.resource)
8888 self.helper = RestHelper(self.hs, self.resource, self.user_id)
8989
9090
2020 from synapse.util import Clock
2121
2222 from tests import unittest
23 from tests.server import ThreadedMemoryReactorClock as MemoryReactorClock
24 from tests.server import make_request, setup_test_homeserver, wait_until_result
23 from tests.server import (
24 ThreadedMemoryReactorClock as MemoryReactorClock,
25 make_request,
26 setup_test_homeserver,
27 wait_until_result,
28 )
2529
2630 PATH_PREFIX = "/_matrix/client/v2_alpha"
2731
1919 from synapse.util import Clock
2020
2121 from tests import unittest
22 from tests.server import ThreadedMemoryReactorClock as MemoryReactorClock
23 from tests.server import make_request, setup_test_homeserver, wait_until_result
22 from tests.server import (
23 ThreadedMemoryReactorClock as MemoryReactorClock,
24 make_request,
25 setup_test_homeserver,
26 wait_until_result,
27 )
2428
2529 PATH_PREFIX = "/_matrix/client/v2_alpha"
2630
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from twisted.internet import defer
16
17 import tests.utils
18
19
20 class InitTestCase(tests.unittest.TestCase):
21 def __init__(self, *args, **kwargs):
22 super(InitTestCase, self).__init__(*args, **kwargs)
23 self.store = None # type: synapse.storage.DataStore
24
25 @defer.inlineCallbacks
26 def setUp(self):
27 hs = yield tests.utils.setup_test_homeserver()
28
29 hs.config.max_mau_value = 50
30 hs.config.limit_usage_by_mau = True
31 self.store = hs.get_datastore()
32 self.clock = hs.get_clock()
33
34 @defer.inlineCallbacks
35 def test_count_monthly_users(self):
36 count = yield self.store.count_monthly_users()
37 self.assertEqual(0, count)
38
39 yield self._insert_user_ips("@user:server1")
40 yield self._insert_user_ips("@user:server2")
41
42 count = yield self.store.count_monthly_users()
43 self.assertEqual(2, count)
44
45 @defer.inlineCallbacks
46 def _insert_user_ips(self, user):
47 """
48 Helper function to populate user_ips without using the batch insertion infrastructure.
49 Args:
50 user (str): the user ID to insert, e.g. @user:server.com
51 """
52 yield self.store._simple_upsert(
53 table="user_ips",
54 keyvalues={
55 "user_id": user,
56 "access_token": "access_token",
57 "ip": "ip",
58 "user_agent": "user_agent",
59 "device_id": "device_id",
60 },
61 values={
62 "last_seen": self.clock.time_msec(),
63 }
64 )
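Conceptually, `count_monthly_users` counts distinct `user_id`s in `user_ips` seen within the MAU window. A rough sqlite3 model of that query (the table shape is taken from the upsert above; the 30-day window is an assumption for illustration):

```python
import sqlite3
import time

THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000


def count_monthly_users(conn, now_ms):
    # Distinct users whose last_seen falls inside the window.
    row = conn.execute(
        "SELECT COUNT(DISTINCT user_id) FROM user_ips WHERE last_seen > ?",
        (now_ms - THIRTY_DAYS_MS,),
    ).fetchone()
    return row[0]


conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_ips (user_id TEXT, access_token TEXT, ip TEXT,"
    " user_agent TEXT, device_id TEXT, last_seen BIGINT)"
)
now = int(time.time() * 1000)
for user in ("@user:server1", "@user:server2"):
    conn.execute(
        "INSERT INTO user_ips VALUES (?, 'token', 'ip', 'ua', 'dev', ?)",
        (user, now),
    )
print(count_monthly_users(conn, now))  # 2
```

Counting `DISTINCT user_id` is what makes multiple devices or IPs for one user count once, matching the two-user expectation in `test_count_monthly_users`.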
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16
17 from twisted.internet import defer
18
19 from synapse.api.constants import EventTypes, Membership
20 from synapse.types import RoomID, UserID
21
22 import tests.unittest
23 import tests.utils
24
25 logger = logging.getLogger(__name__)
26
27
28 class StateStoreTestCase(tests.unittest.TestCase):
29 def __init__(self, *args, **kwargs):
30 super(StateStoreTestCase, self).__init__(*args, **kwargs)
31 self.store = None # type: synapse.storage.DataStore
32
33 @defer.inlineCallbacks
34 def setUp(self):
35 hs = yield tests.utils.setup_test_homeserver()
36
37 self.store = hs.get_datastore()
38 self.event_builder_factory = hs.get_event_builder_factory()
39 self.event_creation_handler = hs.get_event_creation_handler()
40
41 self.u_alice = UserID.from_string("@alice:test")
42 self.u_bob = UserID.from_string("@bob:test")
43
44 self.room = RoomID.from_string("!abc123:test")
45
46 yield self.store.store_room(
47 self.room.to_string(),
48 room_creator_user_id="@creator:test",
49 is_public=True
50 )
51
52 @defer.inlineCallbacks
53 def inject_state_event(self, room, sender, typ, state_key, content):
54 builder = self.event_builder_factory.new({
55 "type": typ,
56 "sender": sender.to_string(),
57 "state_key": state_key,
58 "room_id": room.to_string(),
59 "content": content,
60 })
61
62 event, context = yield self.event_creation_handler.create_new_client_event(
63 builder
64 )
65
66 yield self.store.persist_event(event, context)
67
68 defer.returnValue(event)
69
70 def assertStateMapEqual(self, s1, s2):
71 for t in s1:
72 # just compare event IDs for simplicity
73 self.assertEqual(s1[t].event_id, s2[t].event_id)
74 self.assertEqual(len(s1), len(s2))
75
76 @defer.inlineCallbacks
77 def test_get_state_for_event(self):
78
79 # this defaults to a linear DAG as each new injection defaults to whatever
80 # forward extremities are currently in the DB for this room.
81 e1 = yield self.inject_state_event(
82 self.room, self.u_alice, EventTypes.Create, '', {},
83 )
84 e2 = yield self.inject_state_event(
85 self.room, self.u_alice, EventTypes.Name, '', {
86 "name": "test room"
87 },
88 )
89 e3 = yield self.inject_state_event(
90 self.room, self.u_alice, EventTypes.Member, self.u_alice.to_string(), {
91 "membership": Membership.JOIN
92 },
93 )
94 e4 = yield self.inject_state_event(
95 self.room, self.u_bob, EventTypes.Member, self.u_bob.to_string(), {
96 "membership": Membership.JOIN
97 },
98 )
99 e5 = yield self.inject_state_event(
100 self.room, self.u_bob, EventTypes.Member, self.u_bob.to_string(), {
101 "membership": Membership.LEAVE
102 },
103 )
104
105 # check we get the full state as of the final event
106 state = yield self.store.get_state_for_event(
107 e5.event_id, None, filtered_types=None
108 )
109
110 self.assertIsNotNone(e4)
111
112 self.assertStateMapEqual({
113 (e1.type, e1.state_key): e1,
114 (e2.type, e2.state_key): e2,
115 (e3.type, e3.state_key): e3,
116 # e4 is overwritten by e5
117 (e5.type, e5.state_key): e5,
118 }, state)
119
120 # check we can filter to the m.room.name event (with a '' state key)
121 state = yield self.store.get_state_for_event(
122 e5.event_id, [(EventTypes.Name, '')], filtered_types=None
123 )
124
125 self.assertStateMapEqual({
126 (e2.type, e2.state_key): e2,
127 }, state)
128
129 # check we can filter to the m.room.name event (with a wildcard None state key)
130 state = yield self.store.get_state_for_event(
131 e5.event_id, [(EventTypes.Name, None)], filtered_types=None
132 )
133
134 self.assertStateMapEqual({
135 (e2.type, e2.state_key): e2,
136 }, state)
137
138 # check we can grab the m.room.member events (with a wildcard None state key)
139 state = yield self.store.get_state_for_event(
140 e5.event_id, [(EventTypes.Member, None)], filtered_types=None
141 )
142
143 self.assertStateMapEqual({
144 (e3.type, e3.state_key): e3,
145 (e5.type, e5.state_key): e5,
146 }, state)
147
148 # check we can use filter_types to grab a specific room member
149 # without filtering out the other event types
150 state = yield self.store.get_state_for_event(
151 e5.event_id, [(EventTypes.Member, self.u_alice.to_string())],
152 filtered_types=[EventTypes.Member],
153 )
154
155 self.assertStateMapEqual({
156 (e1.type, e1.state_key): e1,
157 (e2.type, e2.state_key): e2,
158 (e3.type, e3.state_key): e3,
159 }, state)
160
161 # check that types=[], filtered_types=[EventTypes.Member]
162 # doesn't return all members
163 state = yield self.store.get_state_for_event(
164 e5.event_id, [], filtered_types=[EventTypes.Member],
165 )
166
167 self.assertStateMapEqual({
168 (e1.type, e1.state_key): e1,
169 (e2.type, e2.state_key): e2,
170 }, state)
171
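The `types`/`filtered_types` assertions above all follow one rule: for event types listed in `filtered_types`, only the `(type, state_key)` pairs named in `types` survive (with `None` as a wildcard state key), while any type not in `filtered_types` is returned in full. A pure-Python model of that semantics (the real logic lives in Synapse's state store; this is only an illustration):

```python
def filter_state(state, types, filtered_types=None):
    # state: {(event_type, state_key): event_id}
    # types: list of (event_type, state_key) pairs to fetch;
    #        a state_key of None is a wildcard for that type.
    # filtered_types: event types for which `types` is authoritative;
    #        all other types are returned in full.
    if types is None:
        return dict(state)
    result = {}
    for (typ, skey), ev in state.items():
        if filtered_types is not None and typ not in filtered_types:
            # This type is not subject to filtering: keep everything.
            result[(typ, skey)] = ev
            continue
        for want_type, want_key in types:
            if typ == want_type and (want_key is None or skey == want_key):
                result[(typ, skey)] = ev
                break
    return result
```

With `filtered_types=[EventTypes.Member]` and `types=[(EventTypes.Member, alice)]`, all non-member state survives but only Alice's membership is kept, which is the lazy-loading behaviour these tests pin down.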
172 #######################################################
173 # _get_some_state_from_cache tests against a full cache
174 #######################################################
175
176 room_id = self.room.to_string()
177 group_ids = yield self.store.get_state_groups_ids(room_id, [e5.event_id])
178 group = list(group_ids.keys())[0]
179
180 # test _get_some_state_from_cache correctly filters out members with types=[]
181 (state_dict, is_all) = yield self.store._get_some_state_from_cache(
182 group, [], filtered_types=[EventTypes.Member]
183 )
184
185 self.assertEqual(is_all, True)
186 self.assertDictEqual({
187 (e1.type, e1.state_key): e1.event_id,
188 (e2.type, e2.state_key): e2.event_id,
189 }, state_dict)
190
191 # test _get_some_state_from_cache correctly filters in members with wildcard types
192 (state_dict, is_all) = yield self.store._get_some_state_from_cache(
193 group, [(EventTypes.Member, None)], filtered_types=[EventTypes.Member]
194 )
195
196 self.assertEqual(is_all, True)
197 self.assertDictEqual({
198 (e1.type, e1.state_key): e1.event_id,
199 (e2.type, e2.state_key): e2.event_id,
200 (e3.type, e3.state_key): e3.event_id,
201 # e4 is overwritten by e5
202 (e5.type, e5.state_key): e5.event_id,
203 }, state_dict)
204
205 # test _get_some_state_from_cache correctly filters in members with specific types
206 (state_dict, is_all) = yield self.store._get_some_state_from_cache(
207 group, [(EventTypes.Member, e5.state_key)], filtered_types=[EventTypes.Member]
208 )
209
210 self.assertEqual(is_all, True)
211 self.assertDictEqual({
212 (e1.type, e1.state_key): e1.event_id,
213 (e2.type, e2.state_key): e2.event_id,
214 (e5.type, e5.state_key): e5.event_id,
215 }, state_dict)
216
217 # test _get_some_state_from_cache correctly filters in members with specific types
218 # and no filtered_types
219 (state_dict, is_all) = yield self.store._get_some_state_from_cache(
220 group, [(EventTypes.Member, e5.state_key)], filtered_types=None
221 )
222
223 self.assertEqual(is_all, True)
224 self.assertDictEqual({
225 (e5.type, e5.state_key): e5.event_id,
226 }, state_dict)
227
228 #######################################################
229 # deliberately remove e2 (room name) from the _state_group_cache
230
231 (is_all, known_absent, state_dict_ids) = self.store._state_group_cache.get(group)
232
233 self.assertEqual(is_all, True)
234 self.assertEqual(known_absent, set())
235 self.assertDictEqual(state_dict_ids, {
236 (e1.type, e1.state_key): e1.event_id,
237 (e2.type, e2.state_key): e2.event_id,
238 (e3.type, e3.state_key): e3.event_id,
239 # e4 is overwritten by e5
240 (e5.type, e5.state_key): e5.event_id,
241 })
242
243 state_dict_ids.pop((e2.type, e2.state_key))
244 self.store._state_group_cache.invalidate(group)
245 self.store._state_group_cache.update(
246 sequence=self.store._state_group_cache.sequence,
247 key=group,
248 value=state_dict_ids,
249 # list fetched keys so it knows it's partial
250 fetched_keys=(
251 (e1.type, e1.state_key),
252 (e3.type, e3.state_key),
253 (e5.type, e5.state_key),
254 )
255 )
256
257 (is_all, known_absent, state_dict_ids) = self.store._state_group_cache.get(group)
258
259 self.assertEqual(is_all, False)
260 self.assertEqual(known_absent, set([
261 (e1.type, e1.state_key),
262 (e3.type, e3.state_key),
263 (e5.type, e5.state_key),
264 ]))
265 self.assertDictEqual(state_dict_ids, {
266 (e1.type, e1.state_key): e1.event_id,
267 (e3.type, e3.state_key): e3.event_id,
268 (e5.type, e5.state_key): e5.event_id,
269 })
270
271 ############################################
272 # test that things work with a partial cache
273
274 # test _get_some_state_from_cache correctly filters out members with types=[]
275 room_id = self.room.to_string()
276 (state_dict, is_all) = yield self.store._get_some_state_from_cache(
277 group, [], filtered_types=[EventTypes.Member]
278 )
279
280 self.assertEqual(is_all, False)
281 self.assertDictEqual({
282 (e1.type, e1.state_key): e1.event_id,
283 }, state_dict)
284
285 # test _get_some_state_from_cache correctly filters in members with wildcard types
286 (state_dict, is_all) = yield self.store._get_some_state_from_cache(
287 group, [(EventTypes.Member, None)], filtered_types=[EventTypes.Member]
288 )
289
290 self.assertEqual(is_all, False)
291 self.assertDictEqual({
292 (e1.type, e1.state_key): e1.event_id,
293 (e3.type, e3.state_key): e3.event_id,
294 # e4 is overwritten by e5
295 (e5.type, e5.state_key): e5.event_id,
296 }, state_dict)
297
298 # test _get_some_state_from_cache correctly filters in members with specific types
299 (state_dict, is_all) = yield self.store._get_some_state_from_cache(
300 group, [(EventTypes.Member, e5.state_key)], filtered_types=[EventTypes.Member]
301 )
302
303 self.assertEqual(is_all, False)
304 self.assertDictEqual({
305 (e1.type, e1.state_key): e1.event_id,
306 (e5.type, e5.state_key): e5.event_id,
307 }, state_dict)
308
309 # test _get_some_state_from_cache correctly filters in members with specific types
310 # and no filtered_types
311 (state_dict, is_all) = yield self.store._get_some_state_from_cache(
312 group, [(EventTypes.Member, e5.state_key)], filtered_types=None
313 )
314
315 self.assertEqual(is_all, True)
316 self.assertDictEqual({
317 (e5.type, e5.state_key): e5.event_id,
318 }, state_dict)
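The assertions above all exercise the same `types` / `filtered_types` contract: entries whose event type is *not* in `filtered_types` are always returned, while entries whose type *is* listed must match one of the `(type, state_key)` pairs in `types` (a `None` state key acting as a wildcard). A standalone sketch of that filtering rule (a hypothetical helper, not Synapse's actual implementation) behaves like:

```python
def filter_state(state, types, filtered_types=None):
    """Sketch of the (type, state_key) filtering semantics tested above.

    state: dict mapping (event_type, state_key) -> event_id
    types: list of (event_type, state_key-or-None) pairs to include
    filtered_types: event types subject to filtering; types outside this
        list are always returned.  None means filter on `types` alone.
    """
    def matches(typ, key):
        return any(typ == t and (k is None or key == k) for t, k in types)

    if filtered_types is None:
        # no always-included types: keep only explicit matches
        return {k: v for k, v in state.items() if matches(*k)}

    result = {}
    for (typ, key), event_id in state.items():
        if typ not in filtered_types or matches(typ, key):
            result[(typ, key)] = event_id
    return result
```

With a four-entry state map this reproduces the shapes asserted in the tests: `types=[]` with `filtered_types=[Member]` drops all members but keeps the rest, while `filtered_types=None` keeps only the explicitly requested keys.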
00 # -*- coding: utf-8 -*-
11 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2018 New Vector Ltd
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
1415
1516 from mock import Mock, patch
1617
17 from twisted.internet import defer
18
1918 from synapse.util.distributor import Distributor
2019
2120 from . import unittest
2625 def setUp(self):
2726 self.dist = Distributor()
2827
29 @defer.inlineCallbacks
3028 def test_signal_dispatch(self):
3129 self.dist.declare("alert")
3230
3331 observer = Mock()
3432 self.dist.observe("alert", observer)
3533
36 d = self.dist.fire("alert", 1, 2, 3)
37 yield d
38 self.assertTrue(d.called)
34 self.dist.fire("alert", 1, 2, 3)
3935 observer.assert_called_with(1, 2, 3)
4036
41 @defer.inlineCallbacks
42 def test_signal_dispatch_deferred(self):
43 self.dist.declare("whine")
44
45 d_inner = defer.Deferred()
46
47 def observer():
48 return d_inner
49
50 self.dist.observe("whine", observer)
51
52 d_outer = self.dist.fire("whine")
53
54 self.assertFalse(d_outer.called)
55
56 d_inner.callback(None)
57 yield d_outer
58 self.assertTrue(d_outer.called)
59
60 @defer.inlineCallbacks
6137 def test_signal_catch(self):
6238 self.dist.declare("alarm")
6339
7046 with patch(
7147 "synapse.util.distributor.logger", spec=["warning"]
7248 ) as mock_logger:
73 d = self.dist.fire("alarm", "Go")
74 yield d
75 self.assertTrue(d.called)
49 self.dist.fire("alarm", "Go")
7650
7751 observers[0].assert_called_once_with("Go")
7852 observers[1].assert_called_once_with("Go")
8256 mock_logger.warning.call_args[0][0], str
8357 )
8458
85 @defer.inlineCallbacks
86 def test_signal_catch_no_suppress(self):
87 # Gut-wrenching
88 self.dist.suppress_failures = False
89
90 self.dist.declare("whail")
91
92 class MyException(Exception):
93 pass
94
95 @defer.inlineCallbacks
96 def observer():
97 raise MyException("Oopsie")
98
99 self.dist.observe("whail", observer)
100
101 d = self.dist.fire("whail")
102
103 yield self.assertFailure(d, MyException)
104 self.dist.suppress_failures = True
105
106 @defer.inlineCallbacks
10759 def test_signal_prereg(self):
10860 observer = Mock()
10961 self.dist.observe("flare", observer)
11062
11163 self.dist.declare("flare")
112 yield self.dist.fire("flare", 4, 5)
64 self.dist.fire("flare", 4, 5)
11365
11466 observer.assert_called_with(4, 5)
11567
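The distributor tests above rely on three behaviours: `fire` dispatches positional arguments to every observer, observers may be registered before the signal is declared (the "prereg" case), and observer failures are logged rather than propagated. A minimal synchronous sketch of that pattern (hypothetical; Synapse's `Distributor` is Deferred-aware) looks like:

```python
class Distributor(object):
    """Toy declare/observe/fire dispatcher mirroring the tested contract."""

    def __init__(self):
        self.observers = {}   # signal name -> list of observer callables
        self.declared = set()

    def declare(self, name):
        self.declared.add(name)

    def observe(self, name, observer):
        # observing before declare() is allowed (the "prereg" case)
        self.observers.setdefault(name, []).append(observer)

    def fire(self, name, *args):
        if name not in self.declared:
            raise KeyError("signal %r not declared" % (name,))
        for obs in self.observers.get(name, []):
            try:
                obs(*args)
            except Exception as e:
                # failures are suppressed and reported, not propagated
                print("observer failed: %s" % (e,))
```

Firing an undeclared signal raises, matching the requirement that signals be declared before use.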
136136 )
137137 self.assertEqual(self.successResultOf(extrem)[0], "$join:test.serv")
138138
139 @unittest.DEBUG
140139 def test_cant_hide_past_history(self):
141140 """
142141 If you send a message, you must be able to provide the direct
177176 for x, y in d.items()
178177 if x == ("m.room.member", "@us:test")
179178 ],
180 "auth_chain_ids": d.values(),
179 "auth_chain_ids": list(d.values()),
181180 }
182181 )
183182
203203 self.store.register_event_context(event, context)
204204 context_store[event.event_id] = context
205205
206 self.assertEqual(2, len(context_store["D"].prev_state_ids))
206 prev_state_ids = yield context_store["D"].get_prev_state_ids(self.store)
207 self.assertEqual(2, len(prev_state_ids))
207208
208209 @defer.inlineCallbacks
209210 def test_branch_basic_conflict(self):
254255 self.store.register_event_context(event, context)
255256 context_store[event.event_id] = context
256257
258 prev_state_ids = yield context_store["D"].get_prev_state_ids(self.store)
259
257260 self.assertSetEqual(
258261 {"START", "A", "C"},
259 {e_id for e_id in context_store["D"].prev_state_ids.values()}
262 {e_id for e_id in prev_state_ids.values()}
260263 )
261264
262265 @defer.inlineCallbacks
317320 self.store.register_event_context(event, context)
318321 context_store[event.event_id] = context
319322
323 prev_state_ids = yield context_store["E"].get_prev_state_ids(self.store)
324
320325 self.assertSetEqual(
321326 {"START", "A", "B", "C"},
322 {e for e in context_store["E"].prev_state_ids.values()}
327 {e for e in prev_state_ids.values()}
323328 )
324329
325330 @defer.inlineCallbacks
397402 self.store.register_event_context(event, context)
398403 context_store[event.event_id] = context
399404
405 prev_state_ids = yield context_store["D"].get_prev_state_ids(self.store)
406
400407 self.assertSetEqual(
401408 {"A1", "A2", "A3", "A5", "B"},
402 {e for e in context_store["D"].prev_state_ids.values()}
409 {e for e in prev_state_ids.values()}
403410 )
404411
405412 def _add_depths(self, nodes, edges):
428435 event, old_state=old_state
429436 )
430437
438 current_state_ids = yield context.get_current_state_ids(self.store)
439
431440 self.assertEqual(
432 set(e.event_id for e in old_state), set(context.current_state_ids.values())
441 set(e.event_id for e in old_state), set(current_state_ids.values())
433442 )
434443
435444 self.assertIsNotNone(context.state_group)
448457 event, old_state=old_state
449458 )
450459
460 prev_state_ids = yield context.get_prev_state_ids(self.store)
461
451462 self.assertEqual(
452 set(e.event_id for e in old_state), set(context.prev_state_ids.values())
463 set(e.event_id for e in old_state), set(prev_state_ids.values())
453464 )
454465
455466 @defer.inlineCallbacks
474485
475486 context = yield self.state.compute_event_context(event)
476487
488 current_state_ids = yield context.get_current_state_ids(self.store)
489
477490 self.assertEqual(
478491 set([e.event_id for e in old_state]),
479 set(context.current_state_ids.values())
492 set(current_state_ids.values())
480493 )
481494
482495 self.assertEqual(group_name, context.state_group)
503516
504517 context = yield self.state.compute_event_context(event)
505518
519 prev_state_ids = yield context.get_prev_state_ids(self.store)
520
506521 self.assertEqual(
507522 set([e.event_id for e in old_state]),
508 set(context.prev_state_ids.values())
523 set(prev_state_ids.values())
509524 )
510525
511526 self.assertIsNotNone(context.state_group)
544559 event, prev_event_id1, old_state_1, prev_event_id2, old_state_2,
545560 )
546561
547 self.assertEqual(len(context.current_state_ids), 6)
562 current_state_ids = yield context.get_current_state_ids(self.store)
563
564 self.assertEqual(len(current_state_ids), 6)
548565
549566 self.assertIsNotNone(context.state_group)
550567
584601 event, prev_event_id1, old_state_1, prev_event_id2, old_state_2,
585602 )
586603
587 self.assertEqual(len(context.current_state_ids), 6)
604 current_state_ids = yield context.get_current_state_ids(self.store)
605
606 self.assertEqual(len(current_state_ids), 6)
588607
589608 self.assertIsNotNone(context.state_group)
590609
641660 event, prev_event_id1, old_state_1, prev_event_id2, old_state_2,
642661 )
643662
663 current_state_ids = yield context.get_current_state_ids(self.store)
664
644665 self.assertEqual(
645 old_state_2[3].event_id, context.current_state_ids[("test1", "1")]
666 old_state_2[3].event_id, current_state_ids[("test1", "1")]
646667 )
647668
648669 # Reverse the depth to make sure we are actually using the depths
669690 event, prev_event_id1, old_state_1, prev_event_id2, old_state_2,
670691 )
671692
693 current_state_ids = yield context.get_current_state_ids(self.store)
694
672695 self.assertEqual(
673 old_state_1[3].event_id, context.current_state_ids[("test1", "1")]
696 old_state_1[3].event_id, current_state_ids[("test1", "1")]
674697 )
675698
676699 def _get_context(self, event, prev_event_id_1, old_state_1, prev_event_id_2,
272272 r = yield obj.fn(2, 3)
273273 self.assertEqual(r, 'chips')
274274 obj.mock.assert_not_called()
275
276
277 class CachedListDescriptorTestCase(unittest.TestCase):
278 @defer.inlineCallbacks
279 def test_cache(self):
280 class Cls(object):
281 def __init__(self):
282 self.mock = mock.Mock()
283
284 @descriptors.cached()
285 def fn(self, arg1, arg2):
286 pass
287
288 @descriptors.cachedList("fn", "args1", inlineCallbacks=True)
289 def list_fn(self, args1, arg2):
290 assert (
291 logcontext.LoggingContext.current_context().request == "c1"
292 )
293 # we want this to behave like an asynchronous function
294 yield run_on_reactor()
295 assert (
296 logcontext.LoggingContext.current_context().request == "c1"
297 )
298 defer.returnValue(self.mock(args1, arg2))
299
300 with logcontext.LoggingContext() as c1:
301 c1.request = "c1"
302 obj = Cls()
303 obj.mock.return_value = {10: 'fish', 20: 'chips'}
304 d1 = obj.list_fn([10, 20], 2)
305 self.assertEqual(
306 logcontext.LoggingContext.current_context(),
307 logcontext.LoggingContext.sentinel,
308 )
309 r = yield d1
310 self.assertEqual(
311 logcontext.LoggingContext.current_context(),
312 c1
313 )
314 obj.mock.assert_called_once_with([10, 20], 2)
315 self.assertEqual(r, {10: 'fish', 20: 'chips'})
316 obj.mock.reset_mock()
317
318 # a call with different params should call the mock again
319 obj.mock.return_value = {30: 'peas'}
320 r = yield obj.list_fn([20, 30], 2)
321 obj.mock.assert_called_once_with([30], 2)
322 self.assertEqual(r, {20: 'chips', 30: 'peas'})
323 obj.mock.reset_mock()
324
325 # all the values should now be cached
326 r = yield obj.fn(10, 2)
327 self.assertEqual(r, 'fish')
328 r = yield obj.fn(20, 2)
329 self.assertEqual(r, 'chips')
330 r = yield obj.fn(30, 2)
331 self.assertEqual(r, 'peas')
332 r = yield obj.list_fn([10, 20, 30], 2)
333 obj.mock.assert_not_called()
334 self.assertEqual(r, {10: 'fish', 20: 'chips', 30: 'peas'})
335
336 @defer.inlineCallbacks
337 def test_invalidate(self):
338 """Make sure that invalidation callbacks are called."""
339 class Cls(object):
340 def __init__(self):
341 self.mock = mock.Mock()
342
343 @descriptors.cached()
344 def fn(self, arg1, arg2):
345 pass
346
347 @descriptors.cachedList("fn", "args1", inlineCallbacks=True)
348 def list_fn(self, args1, arg2):
349 # we want this to behave like an asynchronous function
350 yield run_on_reactor()
351 defer.returnValue(self.mock(args1, arg2))
352
353 obj = Cls()
354 invalidate0 = mock.Mock()
355 invalidate1 = mock.Mock()
356
357 # cache miss
358 obj.mock.return_value = {10: 'fish', 20: 'chips'}
359 r1 = yield obj.list_fn([10, 20], 2, on_invalidate=invalidate0)
360 obj.mock.assert_called_once_with([10, 20], 2)
361 self.assertEqual(r1, {10: 'fish', 20: 'chips'})
362 obj.mock.reset_mock()
363
364 # cache hit
365 r2 = yield obj.list_fn([10, 20], 2, on_invalidate=invalidate1)
366 obj.mock.assert_not_called()
367 self.assertEqual(r2, {10: 'fish', 20: 'chips'})
368
369 invalidate0.assert_not_called()
370 invalidate1.assert_not_called()
371
372 # now if we invalidate the keys, both invalidations should get called
373 obj.fn.invalidate((10, 2))
374 invalidate0.assert_called_once()
375 invalidate1.assert_called_once()
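The `cachedList` tests above boil down to a batch lookup backed by a per-key cache, where only cache misses hit the underlying function and each caller may register an invalidation callback. A stripped-down, synchronous sketch of that shape (hypothetical names, not the `descriptors` API) is:

```python
class ListCache(object):
    """Minimal cached-list pattern: batch-fetch misses, cache per key,
    fire per-caller callbacks on invalidation."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn   # returns {key: value} for missing keys
        self.cache = {}            # key -> value
        self.callbacks = {}        # key -> [on_invalidate callables]

    def list_get(self, keys, on_invalidate=None):
        missing = [k for k in keys if k not in self.cache]
        if missing:
            # only the misses reach the backing function
            self.cache.update(self.batch_fn(missing))
        if on_invalidate is not None:
            for k in keys:
                self.callbacks.setdefault(k, []).append(on_invalidate)
        return {k: self.cache[k] for k in keys}

    def invalidate(self, key):
        self.cache.pop(key, None)
        for cb in self.callbacks.pop(key, []):
            cb()
```

As in the tests, a second lookup of `[20, 30]` after caching `[10, 20]` fetches only `[30]`, and invalidating a key fires every callback registered against it.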
tests/util/test_limiter.py (deleted: 0 additions, 70 deletions)
0 # -*- coding: utf-8 -*-
1 # Copyright 2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 from twisted.internet import defer
17
18 from synapse.util.async import Limiter
19
20 from tests import unittest
21
22
23 class LimiterTestCase(unittest.TestCase):
24
25 @defer.inlineCallbacks
26 def test_limiter(self):
27 limiter = Limiter(3)
28
29 key = object()
30
31 d1 = limiter.queue(key)
32 cm1 = yield d1
33
34 d2 = limiter.queue(key)
35 cm2 = yield d2
36
37 d3 = limiter.queue(key)
38 cm3 = yield d3
39
40 d4 = limiter.queue(key)
41 self.assertFalse(d4.called)
42
43 d5 = limiter.queue(key)
44 self.assertFalse(d5.called)
45
46 with cm1:
47 self.assertFalse(d4.called)
48 self.assertFalse(d5.called)
49
50 self.assertTrue(d4.called)
51 self.assertFalse(d5.called)
52
53 with cm3:
54 self.assertFalse(d5.called)
55
56 self.assertTrue(d5.called)
57
58 with cm2:
59 pass
60
61 with (yield d4):
62 pass
63
64 with (yield d5):
65 pass
66
67 d6 = limiter.queue(key)
68 with (yield d6):
69 pass
00 # -*- coding: utf-8 -*-
11 # Copyright 2016 OpenMarket Ltd
2 # Copyright 2018 New Vector Ltd.
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
1516 from six.moves import range
1617
1718 from twisted.internet import defer, reactor
19 from twisted.internet.defer import CancelledError
1820
1921 from synapse.util import Clock, logcontext
2022 from synapse.util.async import Linearizer
6466 func(i)
6567
6668 return func(1000)
69
70 @defer.inlineCallbacks
71 def test_multiple_entries(self):
72 limiter = Linearizer(max_count=3)
73
74 key = object()
75
76 d1 = limiter.queue(key)
77 cm1 = yield d1
78
79 d2 = limiter.queue(key)
80 cm2 = yield d2
81
82 d3 = limiter.queue(key)
83 cm3 = yield d3
84
85 d4 = limiter.queue(key)
86 self.assertFalse(d4.called)
87
88 d5 = limiter.queue(key)
89 self.assertFalse(d5.called)
90
91 with cm1:
92 self.assertFalse(d4.called)
93 self.assertFalse(d5.called)
94
95 cm4 = yield d4
96 self.assertFalse(d5.called)
97
98 with cm3:
99 self.assertFalse(d5.called)
100
101 cm5 = yield d5
102
103 with cm2:
104 pass
105
106 with cm4:
107 pass
108
109 with cm5:
110 pass
111
112 d6 = limiter.queue(key)
113 with (yield d6):
114 pass
115
116 @defer.inlineCallbacks
117 def test_cancellation(self):
118 linearizer = Linearizer()
119
120 key = object()
121
122 d1 = linearizer.queue(key)
123 cm1 = yield d1
124
125 d2 = linearizer.queue(key)
126 self.assertFalse(d2.called)
127
128 d3 = linearizer.queue(key)
129 self.assertFalse(d3.called)
130
131 d2.cancel()
132
133 with cm1:
134 pass
135
136 self.assertTrue(d2.called)
137 try:
138 yield d2
139 self.fail("Expected d2 to raise CancelledError")
140 except CancelledError:
141 pass
142
143 with (yield d3):
144 pass
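The `test_multiple_entries` case above pins down the Linearizer's queuing semantics: at most `max_count` holders per key, with further `queue()` calls parked FIFO until a holder releases. Abstracting away the Deferreds, a toy synchronous model of that admission logic (an illustration, not Synapse's implementation) is:

```python
class ToyLinearizer(object):
    """Synchronous model of per-key admission with a max_count limit:
    callbacks run immediately while slots are free, otherwise wait FIFO."""

    def __init__(self, max_count=1):
        self.max_count = max_count
        self.active = 0
        self.waiting = []   # callbacks parked until a slot frees

    def queue(self, ready_cb):
        # ready_cb fires once the caller holds a slot
        if self.active < self.max_count:
            self.active += 1
            ready_cb()
        else:
            self.waiting.append(ready_cb)

    def release(self):
        if self.waiting:
            # hand the freed slot straight to the next waiter
            self.waiting.pop(0)()
        else:
            self.active -= 1
```

Queuing five callers against `max_count=3` admits the first three at once; each `release()` then admits exactly one waiter, mirroring the `d4`/`d5` behaviour asserted in the test.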
7070 config.user_directory_search_all_users = False
7171 config.user_consent_server_notice_content = None
7272 config.block_events_without_consent_error = None
73 config.media_storage_providers = []
74 config.auto_join_rooms = []
7375
7476 # disable user directory updates, because they get done in the
7577 # background, which upsets the test runner.
135137 database_engine=db_engine,
136138 room_list_handler=object(),
137139 tls_server_context_factory=Mock(),
140 reactor=reactor,
138141 **kargs
139142 )
140143
189192 self.prefix = prefix
190193
191194 def trigger_get(self, path):
192 return self.trigger("GET", path, None)
195 return self.trigger(b"GET", path, None)
193196
194197 @patch('twisted.web.http.Request')
195198 @defer.inlineCallbacks
223226
224227 headers = {}
225228 if federation_auth:
226 headers[b"Authorization"] = ["X-Matrix origin=test,key=,sig="]
229 headers[b"Authorization"] = [b"X-Matrix origin=test,key=,sig="]
227230 mock_request.requestHeaders.getRawHeaders = mock_getRawHeaders(headers)
228231
229232 # return the right path if the event requires it
237240 except Exception:
238241 pass
239242
243 if isinstance(path, bytes):
244 path = path.decode('utf8')
245
240246 for (method, pattern, func) in self.callbacks:
241247 if http_method != method:
242248 continue
245251 if matcher:
246252 try:
247253 args = [
248 urlparse.unquote(u).decode("UTF-8")
254 urlparse.unquote(u)
249255 for u in matcher.groups()
250256 ]
251257