Codebase list matrix-synapse / 6396247
Imported Upstream version 0.20.0 Erik Johnston 7 years ago
100 changed file(s) with 3815 addition(s) and 1446 deletion(s).
0 Changes in synapse v0.20.0 (2017-04-11)
1 =======================================
2
3 Bug fixes:
4
5 * Fix joining rooms over federation where not all servers in the room saw the
6 new server had joined (PR #2094)
7
8
9 Changes in synapse v0.20.0-rc1 (2017-03-30)
10 ===========================================
11
12 Features:
13
14 * Add delete_devices API (PR #1993)
15 * Add phone number registration/login support (PR #1994, #2055)
16
17
18 Changes:
19
20 * Use JSONSchema for validation of filters. Thanks @pik! (PR #1783)
21 * Reread log config on SIGHUP (PR #1982)
22 * Speed up public room list (PR #1989)
23 * Add helpful texts to logger config options (PR #1990)
24 * Minor ``/sync`` performance improvements. (PR #2002, #2013, #2022)
25 * Add some debug to help diagnose weird federation issue (PR #2035)
26 * Correctly limit retries for all federation requests (PR #2050, #2061)
27 * Don't lock table when persisting new one time keys (PR #2053)
28 * Reduce some CPU work on DB threads (PR #2054)
29 * Cache hosts in room (PR #2060)
30 * Batch sending of device list pokes (PR #2063)
31 * Speed up persist event path in certain edge cases (PR #2070)
32
33
34 Bug fixes:
35
36 * Fix bug where current_state_events renamed to current_state_ids (PR #1849)
37 * Fix routing loop when fetching remote media (PR #1992)
38 * Fix current_state_events table to not lie (PR #1996)
39 * Fix CAS login to handle PartialDownloadError (PR #1997)
40 * Fix assertion to stop transaction queue getting wedged (PR #2010)
41 * Fix presence to fall back to last_active_ts if it beats the last sync time.
42 Thanks @Half-Shot! (PR #2014)
43 * Fix bug when federation received a PDU while a room join is in progress (PR
44 #2016)
45 * Fix resetting state on rejected events (PR #2025)
46 * Fix installation issues in readme. Thanks @ricco386 (PR #2037)
47 * Fix caching of remote servers' signature keys (PR #2042)
48 * Fix some leaking log context (PR #2048, #2049, #2057, #2058)
49 * Fix rejection of invites not reaching sync (PR #2056)
50
51
52
053 Changes in synapse v0.19.3 (2017-03-20)
154 =======================================
255
145145
146146 virtualenv -p python2.7 ~/.synapse
147147 source ~/.synapse/bin/activate
148 pip install --upgrade pip
148149 pip install --upgrade setuptools
149150 pip install https://github.com/matrix-org/synapse/tarball/master
150151
227228 New user localpart: erikj
228229 Password:
229230 Confirm password:
231 Make admin [no]:
230232 Success!
231233
232234 This process uses a setting ``registration_shared_secret`` in
807809 Synapse has a number of external dependencies, that are easiest
808810 to install using pip and a virtualenv::
809811
810 virtualenv env
812 virtualenv -p python2.7 env
811813 source env/bin/activate
812814 python synapse/python_dependencies.py | xargs pip install
813815 pip install lxml mock
3838 synapse:
3939 level: INFO
4040
41 synapse.storage:
41 synapse.storage.SQL:
42 # beware: increasing this to DEBUG will make synapse log sensitive
43 # information such as access tokens.
4244 level: INFO
43
45
4446 # example of enabling debugging for a component:
4547 #
4648 # synapse.federation.transport.server:
0 What do I do about "Unexpected logging context" debug log-lines everywhere?
1
2 <Mjark> The logging context lives in thread local storage
3 <Mjark> Sometimes it gets out of sync with what it should actually be, usually because something scheduled something to run on the reactor without preserving the logging context.
4 <Matthew> what is the impact of it getting out of sync? and how and when should we preserve log context?
5 <Mjark> The impact is that some of the CPU and database metrics will be under-reported, and some log lines will be mis-attributed.
6 <Mjark> It should happen auto-magically in all the APIs that do IO or otherwise defer to the reactor.
7 <Erik> Mjark: the other place is if we branch, e.g. using defer.gatherResults
8
9 Unanswered: how and when should we preserve log context?
0 Log contexts
1 ============
2
3 .. contents::
4
5 To help track the processing of individual requests, synapse uses a
6 'log context' to track which request it is handling at any given moment. This
7 is done via a thread-local variable; a ``logging.Filter`` is then used to fish
8 the information back out of the thread-local variable and add it to each log
9 record.
10
11 Logcontexts are also used for CPU and database accounting, so that we can track
12 which requests were responsible for high CPU use or database activity.
13
14 The ``synapse.util.logcontext`` module provides facilities for managing the
15 current log context (as well as providing the ``LoggingContextFilter`` class).
16
17 Deferreds make the whole thing complicated, so this document describes how it
18 all works, and how to write code which follows the rules.
19
20 Logcontexts without Deferreds
21 -----------------------------
22
23 In the absence of any Deferred voodoo, things are simple enough. As with any
24 code of this nature, the rule is that our function should leave things as it
25 found them:
26
27 .. code:: python
28
29 from synapse.util import logcontext # omitted from future snippets
30
31 def handle_request(request_id):
32 request_context = logcontext.LoggingContext()
33
34 calling_context = logcontext.LoggingContext.current_context()
35 logcontext.LoggingContext.set_current_context(request_context)
36 try:
37 request_context.request = request_id
38 do_request_handling()
39 logger.debug("finished")
40 finally:
41 logcontext.LoggingContext.set_current_context(calling_context)
42
43 def do_request_handling():
44 logger.debug("phew") # this will be logged against request_id
45
46
47 LoggingContext implements the context management methods, so the above can be
48 written much more succinctly as:
49
50 .. code:: python
51
52 def handle_request(request_id):
53 with logcontext.LoggingContext() as request_context:
54 request_context.request = request_id
55 do_request_handling()
56 logger.debug("finished")
57
58 def do_request_handling():
59 logger.debug("phew")
60
61
62 Using logcontexts with Deferreds
63 --------------------------------
64
65 Deferreds — and in particular, ``defer.inlineCallbacks`` — break
66 the linear flow of code so that there is no longer a single entry point where
67 we should set the logcontext and a single exit point where we should remove it.
68
69 Consider the example above, where ``do_request_handling`` needs to do some
70 blocking operation, and returns a deferred:
71
72 .. code:: python
73
74 @defer.inlineCallbacks
75 def handle_request(request_id):
76 with logcontext.LoggingContext() as request_context:
77 request_context.request = request_id
78 yield do_request_handling()
79 logger.debug("finished")
80
81
82 In the above flow:
83
84 * The logcontext is set
85 * ``do_request_handling`` is called, and returns a deferred
86 * ``handle_request`` yields the deferred
87 * The ``inlineCallbacks`` wrapper of ``handle_request`` returns a deferred
88
89 So we have stopped processing the request (and will probably go on to start
90 processing the next), without clearing the logcontext.
91
92 To circumvent this problem, synapse code assumes that, wherever you have a
93 deferred, you will want to yield on it. To that end, whereever functions return
94 a deferred, we adopt the following conventions:
95
96 **Rules for functions returning deferreds:**
97
98 * If the deferred is already complete, the function returns with the same
99 logcontext it started with.
100 * If the deferred is incomplete, the function clears the logcontext before
101 returning; when the deferred completes, it restores the logcontext before
102 running any callbacks.
103
104 That sounds complicated, but actually it means a lot of code (including the
105 example above) "just works". There are two cases:
106
107 * If ``do_request_handling`` returns a completed deferred, then the logcontext
108 will still be in place. In this case, execution will continue immediately
109 after the ``yield``; the "finished" line will be logged against the right
110 context, and the ``with`` block restores the original context before we
111 return to the caller.
112
113 * If the returned deferred is incomplete, ``do_request_handling`` clears the
114 logcontext before returning. The logcontext is therefore clear when
115 ``handle_request`` yields the deferred. At that point, the ``inlineCallbacks``
116 wrapper adds a callback to the deferred, and returns another (incomplete)
117 deferred to the caller, and it is safe to begin processing the next request.
118
119 Once ``do_request_handling``'s deferred completes, it will reinstate the
120 logcontext, before running the callback added by the ``inlineCallbacks``
121 wrapper. That callback runs the second half of ``handle_request``, so again
122 the "finished" line will be logged against the right
123 context, and the ``with`` block restores the original context.
124
125 As an aside, it's worth noting that ``handle_request`` follows our rules -
126 though that only matters if the caller has its own logcontext which it cares
127 about.
128
129 The following sections describe pitfalls and helpful patterns when implementing
130 these rules.
131
132 Always yield your deferreds
133 ---------------------------
134
135 Whenever you get a deferred back from a function, you should ``yield`` on it
136 as soon as possible. (Returning it directly to your caller is ok too, if you're
137 not doing ``inlineCallbacks``.) Do not pass go; do not do any logging; do not
138 call any other functions.
139
140 .. code:: python
141
142 @defer.inlineCallbacks
143 def fun():
144 logger.debug("starting")
145 yield do_some_stuff() # just like this
146
147 d = more_stuff()
148 result = yield d # also fine, of course
149
150 defer.returnValue(result)
151
152 def nonInlineCallbacksFun():
153 logger.debug("just a wrapper really")
154 return do_some_stuff() # this is ok too - the caller will yield on
155 # it anyway.
156
157 Provided this pattern is followed all the way back up the call chain to where
158 the logcontext was set, this will make things work out ok: provided
159 ``do_some_stuff`` and ``more_stuff`` follow the rules above, then so will
160 ``fun`` (as wrapped by ``inlineCallbacks``) and ``nonInlineCallbacksFun``.
161
162 It's all too easy to forget to ``yield``: for instance if we forgot that
163 ``do_some_stuff`` returned a deferred, we might plough on regardless. This
164 leads to a mess; it will probably work itself out eventually, but not before
165 a load of stuff has been logged against the wrong context. (Normally, other
166 things will break, more obviously, if you forget to ``yield``, so this tends
167 not to be a major problem in practice.)
168
169 Of course sometimes you need to do something a bit fancier with your Deferreds
170 - not all code follows the linear A-then-B-then-C pattern. Notes on
171 implementing more complex patterns are in later sections.
172
173 Where you create a new Deferred, make it follow the rules
174 ---------------------------------------------------------
175
176 Most of the time, a Deferred comes from another synapse function. Sometimes,
177 though, we need to make up a new Deferred, or we get a Deferred back from
178 external code. We need to make it follow our rules.
179
180 The easy way to do it is with a combination of ``defer.inlineCallbacks``, and
181 ``logcontext.PreserveLoggingContext``. Suppose we want to implement ``sleep``,
182 which returns a deferred which will run its callbacks after a given number of
183 seconds. That might look like:
184
185 .. code:: python
186
187 # not a logcontext-rules-compliant function
188 def get_sleep_deferred(seconds):
189 d = defer.Deferred()
190 reactor.callLater(seconds, d.callback, None)
191 return d
192
193 That doesn't follow the rules, but we can fix it by wrapping it with
194 ``PreserveLoggingContext`` and ``yield`` ing on it:
195
196 .. code:: python
197
198 @defer.inlineCallbacks
199 def sleep(seconds):
200 with PreserveLoggingContext():
201 yield get_sleep_deferred(seconds)
202
203 This technique works equally well for external functions which return deferreds,
204 or deferreds we have made ourselves.
205
206 You can also use ``logcontext.make_deferred_yieldable``, which just does the
207 boilerplate for you, so the above could be written:
208
209 .. code:: python
210
211 def sleep(seconds):
212 return logcontext.make_deferred_yieldable(get_sleep_deferred(seconds))
213
214
215 Fire-and-forget
216 ---------------
217
218 Sometimes you want to fire off a chain of execution, but not wait for its
219 result. That might look a bit like this:
220
221 .. code:: python
222
223 @defer.inlineCallbacks
224 def do_request_handling():
225 yield foreground_operation()
226
227 # *don't* do this
228 background_operation()
229
230 logger.debug("Request handling complete")
231
232 @defer.inlineCallbacks
233 def background_operation():
234 yield first_background_step()
235 logger.debug("Completed first step")
236 yield second_background_step()
237 logger.debug("Completed second step")
238
239 The above code does a couple of steps in the background after
240 ``do_request_handling`` has finished. The log lines are still logged against
241 the ``request_context`` logcontext, which may or may not be desirable. There
242 are two big problems with the above, however. The first problem is that, if
243 ``background_operation`` returns an incomplete Deferred, it will expect its
244 caller to ``yield`` immediately, so will have cleared the logcontext. In this
245 example, that means that 'Request handling complete' will be logged without any
246 context.
247
248 The second problem, which is potentially even worse, is that when the Deferred
249 returned by ``background_operation`` completes, it will restore the original
250 logcontext. There is nothing waiting on that Deferred, so the logcontext will
251 leak into the reactor and possibly get attached to some arbitrary future
252 operation.
253
254 There are two potential solutions to this.
255
256 One option is to surround the call to ``background_operation`` with a
257 ``PreserveLoggingContext`` call. That will reset the logcontext before
258 starting ``background_operation`` (so the context restored when the deferred
259 completes will be the empty logcontext), and will restore the current
260 logcontext before continuing the foreground process:
261
262 .. code:: python
263
264 @defer.inlineCallbacks
265 def do_request_handling():
266 yield foreground_operation()
267
268 # start background_operation off in the empty logcontext, to
269 # avoid leaking the current context into the reactor.
270 with PreserveLoggingContext():
271 background_operation()
272
273 # this will now be logged against the request context
274 logger.debug("Request handling complete")
275
276 Obviously that option means that the operations done in
277 ``background_operation`` would not be logged against a logcontext (though
278 that might be fixed by setting a different logcontext via a ``with
279 LoggingContext(...)`` in ``background_operation``).
280
281 The second option is to use ``logcontext.preserve_fn``, which wraps a function
282 so that it doesn't reset the logcontext even when it returns an incomplete
283 deferred, and adds a callback to the returned deferred to reset the
284 logcontext. In other words, it turns a function that follows the Synapse rules
285 about logcontexts and Deferreds into one which behaves more like an external
286 function — the opposite operation to that described in the previous section.
287 It can be used like this:
288
289 .. code:: python
290
291 @defer.inlineCallbacks
292 def do_request_handling():
293 yield foreground_operation()
294
295 logcontext.preserve_fn(background_operation)()
296
297 # this will now be logged against the request context
298 logger.debug("Request handling complete")
299
300 XXX: I think ``preserve_context_over_fn`` is supposed to do the first option,
301 but the fact that it does ``preserve_context_over_deferred`` on its results
302 means that its use is fraught with difficulty.
303
304 Passing synapse deferreds into third-party functions
305 ----------------------------------------------------
306
307 A typical example of this is where we want to collect together two or more
308 deferreds via ``defer.gatherResults``:
309
310 .. code:: python
311
312 d1 = operation1()
313 d2 = operation2()
314 d3 = defer.gatherResults([d1, d2])
315
316 This is really a variation of the fire-and-forget problem above, in that we are
317 firing off ``d1`` and ``d2`` without yielding on them. The difference
318 is that we now have third-party code attached to their callbacks. Either
319 technique given in the `Fire-and-forget`_ section will work.
320
321 Of course, the new Deferred returned by ``gatherResults`` needs to be wrapped
322 in order to make it follow the logcontext rules before we can yield it, as
323 described in `Where you create a new Deferred, make it follow the rules`_.
324
325 So, option one: reset the logcontext before starting the operations to be
326 gathered:
327
328 .. code:: python
329
330 @defer.inlineCallbacks
331 def do_request_handling():
332 with PreserveLoggingContext():
333 d1 = operation1()
334 d2 = operation2()
335 result = yield defer.gatherResults([d1, d2])
336
337 In this case particularly, though, option two, of using
338 ``logcontext.preserve_fn`` almost certainly makes more sense, so that
339 ``operation1`` and ``operation2`` are both logged against the original
340 logcontext. This looks like:
341
342 .. code:: python
343
344 @defer.inlineCallbacks
345 def do_request_handling():
346 d1 = logcontext.preserve_fn(operation1)()
347 d2 = logcontext.preserve_fn(operation2)()
348
349 with PreserveLoggingContext():
350 result = yield defer.gatherResults([d1, d2])
351
352
353 Was all this really necessary?
354 ------------------------------
355
356 The conventions used work fine for a linear flow where everything happens in
357 series via ``defer.inlineCallbacks`` and ``yield``, but are certainly tricky to
358 follow for any more exotic flows. It's hard not to wonder if we could have done
359 something else.
360
361 We're not going to rewrite Synapse now, so the following is entirely of
362 academic interest, but I'd like to record some thoughts on an alternative
363 approach.
364
365 I briefly prototyped some code following an alternative set of rules. I think
366 it would work, but I certainly didn't get as far as thinking how it would
367 interact with concepts as complicated as the cache descriptors.
368
369 My alternative rules were:
370
371 * functions always preserve the logcontext of their caller, whether or not they
372 are returning a Deferred.
373
374 * Deferreds returned by synapse functions run their callbacks in the same
375 context as the function was originally called in.
376
377 The main point of this scheme is that everywhere that sets the logcontext is
378 responsible for clearing it before returning control to the reactor.
379
380 So, for example, if you were the function which started a ``with
381 LoggingContext`` block, you wouldn't ``yield`` within it — instead you'd start
382 off the background process, and then leave the ``with`` block to wait for it:
383
384 .. code:: python
385
386 def handle_request(request_id):
387 with logcontext.LoggingContext() as request_context:
388 request_context.request = request_id
389 d = do_request_handling()
390
391 def cb(r):
392 logger.debug("finished")
393
394 d.addCallback(cb)
395 return d
396
397 (in general, mixing ``with LoggingContext`` blocks and
398 ``defer.inlineCallbacks`` in the same function leads to slightly
399 counter-intuitive code, under this scheme).
400
401 Because we leave the original ``with`` block as soon as the Deferred is
402 returned (as opposed to waiting for it to be resolved, as we do today), the
403 logcontext is cleared before control passes back to the reactor; so if there is
404 some code within ``do_request_handling`` which needs to wait for a Deferred to
405 complete, there is no need for it to worry about clearing the logcontext before
406 doing so:
407
408 .. code:: python
409
410 def handle_request():
411 r = do_some_stuff()
412 r.addCallback(do_some_more_stuff)
413 return r
414
415 — and provided ``do_some_stuff`` follows the rules of returning a Deferred which
416 runs its callbacks in the original logcontext, all is happy.
417
418 The business of a Deferred which runs its callbacks in the original logcontext
419 isn't hard to achieve — we have it today, in the shape of
420 ``logcontext._PreservingContextDeferred``:
421
422 .. code:: python
423
424 def do_some_stuff():
425 deferred = do_some_io()
426 pcd = _PreservingContextDeferred(LoggingContext.current_context())
427 deferred.chainDeferred(pcd)
428 return pcd
429
430 It turns out that, thanks to the way that Deferreds chain together, we
431 automatically get the property of a context-preserving deferred with
432 ``defer.inlineCallbacks``, provided the final Deferred the function yields
433 on has that property. So we can just write:
434
435 .. code:: python
436
437 @defer.inlineCallbacks
438 def handle_request():
439 yield do_some_stuff()
440 yield do_some_more_stuff()
441
442 To conclude: I think this scheme would have worked equally well, with less
443 danger of messing it up, and probably made some more esoteric code easier to
444 write. But again — changing the conventions of the entire Synapse codebase is
445 not a sensible option for the marginal improvement offered.
1515 """ This is a reference implementation of a Matrix home server.
1616 """
1717
18 __version__ = "0.19.3"
18 __version__ = "0.20.0"
2222 from synapse.api.constants import EventTypes, Membership, JoinRules
2323 from synapse.api.errors import AuthError, Codes
2424 from synapse.types import UserID
25 from synapse.util.logcontext import preserve_context_over_fn
25 from synapse.util import logcontext
2626 from synapse.util.metrics import Measure
2727
2828 logger = logging.getLogger(__name__)
208208 default=[""]
209209 )[0]
210210 if user and access_token and ip_addr:
211 preserve_context_over_fn(
212 self.store.insert_client_ip,
211 logcontext.preserve_fn(self.store.insert_client_ip)(
213212 user=user,
214213 access_token=access_token,
215214 ip=ip_addr,
00 # -*- coding: utf-8 -*-
11 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2017 Vector Creations Ltd
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
4344 class LoginType(object):
4445 PASSWORD = u"m.login.password"
4546 EMAIL_IDENTITY = u"m.login.email.identity"
47 MSISDN = u"m.login.msisdn"
4648 RECAPTCHA = u"m.login.recaptcha"
4749 DUMMY = u"m.login.dummy"
4850
1414
1515 """Contains exceptions and error codes."""
1616
17 import json
1718 import logging
1819
1920 logger = logging.getLogger(__name__)
4950
5051
5152 class CodeMessageException(RuntimeError):
52 """An exception with integer code and message string attributes."""
53
53 """An exception with integer code and message string attributes.
54
55 Attributes:
56 code (int): HTTP error code
57 msg (str): string describing the error
58 """
5459 def __init__(self, code, msg):
5560 super(CodeMessageException, self).__init__("%d: %s" % (code, msg))
5661 self.code = code
5762 self.msg = msg
58 self.response_code_message = None
5963
6064 def error_dict(self):
6165 return cs_error(self.msg)
6266
6367
6468 class SynapseError(CodeMessageException):
65 """A base error which can be caught for all synapse events."""
69 """A base exception type for matrix errors which have an errcode and error
70 message (as well as an HTTP status code).
71
72 Attributes:
73 errcode (str): Matrix error code e.g 'M_FORBIDDEN'
74 """
6675 def __init__(self, code, msg, errcode=Codes.UNKNOWN):
6776 """Constructs a synapse error.
6877
6978 Args:
7079 code (int): The integer error code (an HTTP response code)
7180 msg (str): The human-readable error message.
72 err (str): The error code e.g 'M_FORBIDDEN'
81 errcode (str): The matrix error code e.g 'M_FORBIDDEN'
7382 """
7483 super(SynapseError, self).__init__(code, msg)
7584 self.errcode = errcode
7988 self.msg,
8089 self.errcode,
8190 )
91
92 @classmethod
93 def from_http_response_exception(cls, err):
94 """Make a SynapseError based on an HTTPResponseException
95
96 This is useful when a proxied request has failed, and we need to
97 decide how to map the failure onto a matrix error to send back to the
98 client.
99
100 An attempt is made to parse the body of the http response as a matrix
101 error. If that succeeds, the errcode and error message from the body
102 are used as the errcode and error message in the new synapse error.
103
104 Otherwise, the errcode is set to M_UNKNOWN, and the error message is
105 set to the reason code from the HTTP response.
106
107 Args:
108 err (HttpResponseException):
109
110 Returns:
111 SynapseError:
112 """
113 # try to parse the body as json, to get better errcode/msg, but
114 # default to M_UNKNOWN with the HTTP status as the error text
115 try:
116 j = json.loads(err.response)
117 except ValueError:
118 j = {}
119 errcode = j.get('errcode', Codes.UNKNOWN)
120 errmsg = j.get('error', err.msg)
121
122 res = SynapseError(err.code, errmsg, errcode)
123 return res
82124
83125
84126 class RegistrationError(SynapseError):
105147
106148 class NotFoundError(SynapseError):
107149 """An error indicating we can't find the thing you asked for"""
108 def __init__(self, *args, **kwargs):
109 if "errcode" not in kwargs:
110 kwargs["errcode"] = Codes.NOT_FOUND
150 def __init__(self, msg="Not found", errcode=Codes.NOT_FOUND):
111151 super(NotFoundError, self).__init__(
112152 404,
113 "Not found",
114 **kwargs
153 msg,
154 errcode=errcode
115155 )
116156
117157
172212 errcode=Codes.LIMIT_EXCEEDED):
173213 super(LimitExceededError, self).__init__(code, msg, errcode)
174214 self.retry_after_ms = retry_after_ms
175 self.response_code_message = "Too Many Requests"
176215
177216 def error_dict(self):
178217 return cs_error(
242281
243282
244283 class HttpResponseException(CodeMessageException):
284 """
285 Represents an HTTP-level failure of an outbound request
286
287 Attributes:
288 response (str): body of response
289 """
245290 def __init__(self, code, msg, response):
291 """
292
293 Args:
294 code (int): HTTP status code
295 msg (str): reason phrase from HTTP response status line
296 response (str): body of response
297 """
298 super(HttpResponseException, self).__init__(code, msg)
246299 self.response = response
247 super(HttpResponseException, self).__init__(code, msg)
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 from synapse.api.errors import SynapseError
15 from synapse.storage.presence import UserPresenceState
1516 from synapse.types import UserID, RoomID
16
1717 from twisted.internet import defer
1818
1919 import ujson as json
20 import jsonschema
21 from jsonschema import FormatChecker
22
23 FILTER_SCHEMA = {
24 "additionalProperties": False,
25 "type": "object",
26 "properties": {
27 "limit": {
28 "type": "number"
29 },
30 "senders": {
31 "$ref": "#/definitions/user_id_array"
32 },
33 "not_senders": {
34 "$ref": "#/definitions/user_id_array"
35 },
36 # TODO: We don't limit event type values but we probably should...
37 # check types are valid event types
38 "types": {
39 "type": "array",
40 "items": {
41 "type": "string"
42 }
43 },
44 "not_types": {
45 "type": "array",
46 "items": {
47 "type": "string"
48 }
49 }
50 }
51 }
52
53 ROOM_FILTER_SCHEMA = {
54 "additionalProperties": False,
55 "type": "object",
56 "properties": {
57 "not_rooms": {
58 "$ref": "#/definitions/room_id_array"
59 },
60 "rooms": {
61 "$ref": "#/definitions/room_id_array"
62 },
63 "ephemeral": {
64 "$ref": "#/definitions/room_event_filter"
65 },
66 "include_leave": {
67 "type": "boolean"
68 },
69 "state": {
70 "$ref": "#/definitions/room_event_filter"
71 },
72 "timeline": {
73 "$ref": "#/definitions/room_event_filter"
74 },
75 "account_data": {
76 "$ref": "#/definitions/room_event_filter"
77 },
78 }
79 }
80
81 ROOM_EVENT_FILTER_SCHEMA = {
82 "additionalProperties": False,
83 "type": "object",
84 "properties": {
85 "limit": {
86 "type": "number"
87 },
88 "senders": {
89 "$ref": "#/definitions/user_id_array"
90 },
91 "not_senders": {
92 "$ref": "#/definitions/user_id_array"
93 },
94 "types": {
95 "type": "array",
96 "items": {
97 "type": "string"
98 }
99 },
100 "not_types": {
101 "type": "array",
102 "items": {
103 "type": "string"
104 }
105 },
106 "rooms": {
107 "$ref": "#/definitions/room_id_array"
108 },
109 "not_rooms": {
110 "$ref": "#/definitions/room_id_array"
111 },
112 "contains_url": {
113 "type": "boolean"
114 }
115 }
116 }
117
118 USER_ID_ARRAY_SCHEMA = {
119 "type": "array",
120 "items": {
121 "type": "string",
122 "format": "matrix_user_id"
123 }
124 }
125
126 ROOM_ID_ARRAY_SCHEMA = {
127 "type": "array",
128 "items": {
129 "type": "string",
130 "format": "matrix_room_id"
131 }
132 }
133
134 USER_FILTER_SCHEMA = {
135 "$schema": "http://json-schema.org/draft-04/schema#",
136 "description": "schema for a Sync filter",
137 "type": "object",
138 "definitions": {
139 "room_id_array": ROOM_ID_ARRAY_SCHEMA,
140 "user_id_array": USER_ID_ARRAY_SCHEMA,
141 "filter": FILTER_SCHEMA,
142 "room_filter": ROOM_FILTER_SCHEMA,
143 "room_event_filter": ROOM_EVENT_FILTER_SCHEMA
144 },
145 "properties": {
146 "presence": {
147 "$ref": "#/definitions/filter"
148 },
149 "account_data": {
150 "$ref": "#/definitions/filter"
151 },
152 "room": {
153 "$ref": "#/definitions/room_filter"
154 },
155 "event_format": {
156 "type": "string",
157 "enum": ["client", "federation"]
158 },
159 "event_fields": {
160 "type": "array",
161 "items": {
162 "type": "string",
163 # Don't allow '\\' in event field filters. This makes matching
164 # events a lot easier as we can then use a negative lookbehind
165 # assertion to split '\.' If we allowed \\ then it would
166 # incorrectly split '\\.' See synapse.events.utils.serialize_event
167 "pattern": "^((?!\\\).)*$"
168 }
169 }
170 },
171 "additionalProperties": False
172 }
173
174
175 @FormatChecker.cls_checks('matrix_room_id')
176 def matrix_room_id_validator(room_id_str):
177 return RoomID.from_string(room_id_str)
178
179
180 @FormatChecker.cls_checks('matrix_user_id')
181 def matrix_user_id_validator(user_id_str):
182 return UserID.from_string(user_id_str)
20183
21184
22185 class Filtering(object):
51214 # NB: Filters are the complete json blobs. "Definitions" are an
52215 # individual top-level key e.g. public_user_data. Filters are made of
53216 # many definitions.
54
55 top_level_definitions = [
56 "presence", "account_data"
57 ]
58
59 room_level_definitions = [
60 "state", "timeline", "ephemeral", "account_data"
61 ]
62
63 for key in top_level_definitions:
64 if key in user_filter_json:
65 self._check_definition(user_filter_json[key])
66
67 if "room" in user_filter_json:
68 self._check_definition_room_lists(user_filter_json["room"])
69 for key in room_level_definitions:
70 if key in user_filter_json["room"]:
71 self._check_definition(user_filter_json["room"][key])
72
73 if "event_fields" in user_filter_json:
74 if type(user_filter_json["event_fields"]) != list:
75 raise SynapseError(400, "event_fields must be a list of strings")
76 for field in user_filter_json["event_fields"]:
77 if not isinstance(field, basestring):
78 raise SynapseError(400, "Event field must be a string")
79 # Don't allow '\\' in event field filters. This makes matching
80 # events a lot easier as we can then use a negative lookbehind
81 # assertion to split '\.' If we allowed \\ then it would
82 # incorrectly split '\\.' See synapse.events.utils.serialize_event
83 if r'\\' in field:
84 raise SynapseError(
85 400, r'The escape character \ cannot itself be escaped'
86 )
87
88 def _check_definition_room_lists(self, definition):
89 """Check that "rooms" and "not_rooms" are lists of room ids if they
90 are present
91
92 Args:
93 definition(dict): The filter definition
94 Raises:
95 SynapseError: If there was a problem with this definition.
96 """
97 # check rooms are valid room IDs
98 room_id_keys = ["rooms", "not_rooms"]
99 for key in room_id_keys:
100 if key in definition:
101 if type(definition[key]) != list:
102 raise SynapseError(400, "Expected %s to be a list." % key)
103 for room_id in definition[key]:
104 RoomID.from_string(room_id)
105
106 def _check_definition(self, definition):
107 """Check if the provided definition is valid.
108
109 This inspects not only the types but also the values to make sure they
110 make sense.
111
112 Args:
113 definition(dict): The filter definition
114 Raises:
115 SynapseError: If there was a problem with this definition.
116 """
117 # NB: Filters are the complete json blobs. "Definitions" are an
118 # individual top-level key e.g. public_user_data. Filters are made of
119 # many definitions.
120 if type(definition) != dict:
121 raise SynapseError(
122 400, "Expected JSON object, not %s" % (definition,)
123 )
124
125 self._check_definition_room_lists(definition)
126
127 # check senders are valid user IDs
128 user_id_keys = ["senders", "not_senders"]
129 for key in user_id_keys:
130 if key in definition:
131 if type(definition[key]) != list:
132 raise SynapseError(400, "Expected %s to be a list." % key)
133 for user_id in definition[key]:
134 UserID.from_string(user_id)
135
136 # TODO: We don't limit event type values but we probably should...
137 # check types are valid event types
138 event_keys = ["types", "not_types"]
139 for key in event_keys:
140 if key in definition:
141 if type(definition[key]) != list:
142 raise SynapseError(400, "Expected %s to be a list." % key)
143 for event_type in definition[key]:
144 if not isinstance(event_type, basestring):
145 raise SynapseError(400, "Event type should be a string")
217 try:
218 jsonschema.validate(user_filter_json, USER_FILTER_SCHEMA,
219 format_checker=FormatChecker())
220 except jsonschema.ValidationError as e:
221 raise SynapseError(400, e.message)
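The `@FormatChecker.cls_checks(...)` validators earlier in the file register custom `format` names that this `jsonschema.validate` call then enforces. A self-contained sketch of the same mechanism; the `matrix_user_id` check here is a deliberately crude stand-in, not Synapse's real `UserID.from_string` parser:

```python
import jsonschema
from jsonschema import FormatChecker

checker = FormatChecker()

@checker.checks("matrix_user_id")
def is_user_id(value):
    # Crude illustrative check; the real code delegates to UserID.from_string
    return isinstance(value, str) and value.startswith("@") and ":" in value

SCHEMA = {"type": "string", "format": "matrix_user_id"}

def validate_user_id(value):
    # Format checks only run when a format_checker is passed to validate()
    try:
        jsonschema.validate(value, SCHEMA, format_checker=checker)
        return True
    except jsonschema.ValidationError:
        return False
```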
146222
147223
148224 class FilterCollection(object):
252328 Returns:
253329 bool: True if the event matches
254330 """
255 sender = event.get("sender", None)
256 if not sender:
257 # Presence events have their 'sender' in content.user_id
258 content = event.get("content")
259 # account_data has been allowed to have non-dict content, so check type first
260 if isinstance(content, dict):
261 sender = content.get("user_id")
331 # We usually get the full "events" as dictionaries coming through,
332 # except for presence which actually gets passed around as its own
333 # namedtuple type.
334 if isinstance(event, UserPresenceState):
335 sender = event.user_id
336 room_id = None
337 ev_type = "m.presence"
338 is_url = False
339 else:
340 sender = event.get("sender", None)
341 if not sender:
342 # Presence events had their 'sender' in content.user_id, but are
343 # now handled above. We don't know if anything else uses this
344 # form. TODO: Check this and probably remove it.
345 content = event.get("content")
346 # account_data has been allowed to have non-dict content, so
347 # check type first
348 if isinstance(content, dict):
349 sender = content.get("user_id")
350
351 room_id = event.get("room_id", None)
352 ev_type = event.get("type", None)
353 is_url = "url" in event.get("content", {})
262354
263355 return self.check_fields(
264 event.get("room_id", None),
356 room_id,
265357 sender,
266 event.get("type", None),
267 "url" in event.get("content", {})
358 ev_type,
359 is_url,
268360 )
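The branch above distinguishes presence updates (a namedtuple) from ordinary dict events before handing the extracted fields to `check_fields`. A toy, self-contained mirror of that dispatch; the namedtuple here is a stand-in for `synapse.storage.presence.UserPresenceState`:

```python
from collections import namedtuple

# Stand-in for synapse.storage.presence.UserPresenceState
UserPresenceState = namedtuple("UserPresenceState", ["user_id", "state"])

def extract_check_fields(event):
    """Return (room_id, sender, ev_type, is_url) as check_fields expects."""
    if isinstance(event, UserPresenceState):
        # Presence is passed around as its own namedtuple type
        return None, event.user_id, "m.presence", False
    sender = event.get("sender")
    if not sender:
        # Legacy form: some events carried the sender in content.user_id
        content = event.get("content")
        if isinstance(content, dict):
            sender = content.get("user_id")
    return (
        event.get("room_id"),
        sender,
        event.get("type"),
        "url" in event.get("content", {}),
    )
```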
269361
270362 def check_fields(self, room_id, sender, event_type, contains_url):
2828 from synapse.storage.engines import create_engine
2929 from synapse.util.async import sleep
3030 from synapse.util.httpresourcetree import create_resource_tree
31 from synapse.util.logcontext import LoggingContext
31 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
3232 from synapse.util.manhole import manhole
3333 from synapse.util.rlimit import change_resource_limit
3434 from synapse.util.versionstring import get_version_string
156156
157157 assert config.worker_app == "synapse.app.appservice"
158158
159 setup_logging(config.worker_log_config, config.worker_log_file)
159 setup_logging(config, use_worker_options=True)
160160
161161 events.USE_FROZEN_DICTS = config.use_frozen_dicts
162162
186186 ps.start_listening(config.worker_listeners)
187187
188188 def run():
189 with LoggingContext("run"):
189 # make sure that we run the reactor with the sentinel log context,
190 # otherwise other PreserveLoggingContext instances will get confused
191 # and complain when they see the logcontext arbitrarily swapping
192 # between the sentinel and `run` logcontexts.
193 with PreserveLoggingContext():
190194 logger.info("Running")
191195 change_resource_limit(config.soft_file_limit)
192196 if config.gc_thresholds:
2828 from synapse.replication.slave.storage.room import RoomStore
2929 from synapse.replication.slave.storage.directory import DirectoryStore
3030 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
31 from synapse.replication.slave.storage.transactions import TransactionStore
3132 from synapse.rest.client.v1.room import PublicRoomListRestServlet
3233 from synapse.server import HomeServer
3334 from synapse.storage.client_ips import ClientIpStore
3435 from synapse.storage.engines import create_engine
3536 from synapse.util.async import sleep
3637 from synapse.util.httpresourcetree import create_resource_tree
37 from synapse.util.logcontext import LoggingContext
38 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
3839 from synapse.util.manhole import manhole
3940 from synapse.util.rlimit import change_resource_limit
4041 from synapse.util.versionstring import get_version_string
6263 DirectoryStore,
6364 SlavedApplicationServiceStore,
6465 SlavedRegistrationStore,
66 TransactionStore,
6567 BaseSlavedStore,
6668 ClientIpStore, # After BaseSlavedStore because the constructor is different
6769 ):
170172
171173 assert config.worker_app == "synapse.app.client_reader"
172174
173 setup_logging(config.worker_log_config, config.worker_log_file)
175 setup_logging(config, use_worker_options=True)
174176
175177 events.USE_FROZEN_DICTS = config.use_frozen_dicts
176178
192194 ss.start_listening(config.worker_listeners)
193195
194196 def run():
195 with LoggingContext("run"):
197 # make sure that we run the reactor with the sentinel log context,
198 # otherwise other PreserveLoggingContext instances will get confused
199 # and complain when they see the logcontext arbitrarily swapping
200 # between the sentinel and `run` logcontexts.
201 with PreserveLoggingContext():
196202 logger.info("Running")
197203 change_resource_limit(config.soft_file_limit)
198204 if config.gc_thresholds:
3030 from synapse.storage.engines import create_engine
3131 from synapse.util.async import sleep
3232 from synapse.util.httpresourcetree import create_resource_tree
33 from synapse.util.logcontext import LoggingContext
33 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
3434 from synapse.util.manhole import manhole
3535 from synapse.util.rlimit import change_resource_limit
3636 from synapse.util.versionstring import get_version_string
161161
162162 assert config.worker_app == "synapse.app.federation_reader"
163163
164 setup_logging(config.worker_log_config, config.worker_log_file)
164 setup_logging(config, use_worker_options=True)
165165
166166 events.USE_FROZEN_DICTS = config.use_frozen_dicts
167167
183183 ss.start_listening(config.worker_listeners)
184184
185185 def run():
186 with LoggingContext("run"):
186 # make sure that we run the reactor with the sentinel log context,
187 # otherwise other PreserveLoggingContext instances will get confused
188 # and complain when they see the logcontext arbitrarily swapping
189 # between the sentinel and `run` logcontexts.
190 with PreserveLoggingContext():
187191 logger.info("Running")
188192 change_resource_limit(config.soft_file_limit)
189193 if config.gc_thresholds:
3434 from synapse.storage.presence import UserPresenceState
3535 from synapse.util.async import sleep
3636 from synapse.util.httpresourcetree import create_resource_tree
37 from synapse.util.logcontext import LoggingContext
37 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
3838 from synapse.util.manhole import manhole
3939 from synapse.util.rlimit import change_resource_limit
4040 from synapse.util.versionstring import get_version_string
159159
160160 assert config.worker_app == "synapse.app.federation_sender"
161161
162 setup_logging(config.worker_log_config, config.worker_log_file)
162 setup_logging(config, use_worker_options=True)
163163
164164 events.USE_FROZEN_DICTS = config.use_frozen_dicts
165165
192192 ps.start_listening(config.worker_listeners)
193193
194194 def run():
195 with LoggingContext("run"):
195 # make sure that we run the reactor with the sentinel log context,
196 # otherwise other PreserveLoggingContext instances will get confused
197 # and complain when they see the logcontext arbitrarily swapping
198 # between the sentinel and `run` logcontexts.
199 with PreserveLoggingContext():
196200 logger.info("Running")
197201 change_resource_limit(config.soft_file_limit)
198202 if config.gc_thresholds:
1919 import logging
2020 import os
2121 import sys
22
23 import synapse.config.logger
2224 from synapse.config._base import ConfigError
2325
2426 from synapse.python_dependencies import (
4951 )
5052 from synapse.config.homeserver import HomeServerConfig
5153 from synapse.crypto import context_factory
52 from synapse.util.logcontext import LoggingContext
54 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
5355 from synapse.metrics import register_memory_metrics, get_metrics_for
5456 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
5557 from synapse.replication.resource import ReplicationResource, REPLICATION_PREFIX
285287 # generating config files and shouldn't try to continue.
286288 sys.exit(0)
287289
288 config.setup_logging()
290 synapse.config.logger.setup_logging(config, use_worker_options=False)
289291
290292 # check any extra requirements we have now we have a config
291293 check_requirements(config)
453455 def in_thread():
454456 # Uncomment to enable tracing of log context changes.
455457 # sys.settrace(logcontext_tracer)
456 with LoggingContext("run"):
458
459 # make sure that we run the reactor with the sentinel log context,
460 # otherwise other PreserveLoggingContext instances will get confused
461 # and complain when they see the logcontext arbitrarily swapping
462 # between the sentinel and `run` logcontexts.
463 with PreserveLoggingContext():
457464 change_resource_limit(hs.config.soft_file_limit)
458465 if hs.config.gc_thresholds:
459466 gc.set_threshold(*hs.config.gc_thresholds)
2323 from synapse.replication.slave.storage._base import BaseSlavedStore
2424 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
2525 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
26 from synapse.replication.slave.storage.transactions import TransactionStore
2627 from synapse.rest.media.v0.content_repository import ContentRepoResource
2728 from synapse.rest.media.v1.media_repository import MediaRepositoryResource
2829 from synapse.server import HomeServer
3132 from synapse.storage.media_repository import MediaRepositoryStore
3233 from synapse.util.async import sleep
3334 from synapse.util.httpresourcetree import create_resource_tree
34 from synapse.util.logcontext import LoggingContext
35 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
3536 from synapse.util.manhole import manhole
3637 from synapse.util.rlimit import change_resource_limit
3738 from synapse.util.versionstring import get_version_string
5859 class MediaRepositorySlavedStore(
5960 SlavedApplicationServiceStore,
6061 SlavedRegistrationStore,
62 TransactionStore,
6163 BaseSlavedStore,
6264 MediaRepositoryStore,
6365 ClientIpStore,
167169
168170 assert config.worker_app == "synapse.app.media_repository"
169171
170 setup_logging(config.worker_log_config, config.worker_log_file)
172 setup_logging(config, use_worker_options=True)
171173
172174 events.USE_FROZEN_DICTS = config.use_frozen_dicts
173175
189191 ss.start_listening(config.worker_listeners)
190192
191193 def run():
192 with LoggingContext("run"):
194 # make sure that we run the reactor with the sentinel log context,
195 # otherwise other PreserveLoggingContext instances will get confused
196 # and complain when they see the logcontext arbitrarily swapping
197 # between the sentinel and `run` logcontexts.
198 with PreserveLoggingContext():
193199 logger.info("Running")
194200 change_resource_limit(config.soft_file_limit)
195201 if config.gc_thresholds:
3030 from synapse.storage import DataStore
3131 from synapse.util.async import sleep
3232 from synapse.util.httpresourcetree import create_resource_tree
33 from synapse.util.logcontext import LoggingContext, preserve_fn
33 from synapse.util.logcontext import LoggingContext, preserve_fn, \
34 PreserveLoggingContext
3435 from synapse.util.manhole import manhole
3536 from synapse.util.rlimit import change_resource_limit
3637 from synapse.util.versionstring import get_version_string
244245
245246 assert config.worker_app == "synapse.app.pusher"
246247
247 setup_logging(config.worker_log_config, config.worker_log_file)
248 setup_logging(config, use_worker_options=True)
248249
249250 events.USE_FROZEN_DICTS = config.use_frozen_dicts
250251
274275 ps.start_listening(config.worker_listeners)
275276
276277 def run():
277 with LoggingContext("run"):
278 # make sure that we run the reactor with the sentinel log context,
279 # otherwise other PreserveLoggingContext instances will get confused
280 # and complain when they see the logcontext arbitrarily swapping
281 # between the sentinel and `run` logcontexts.
282 with PreserveLoggingContext():
278283 logger.info("Running")
279284 change_resource_limit(config.soft_file_limit)
280285 if config.gc_thresholds:
1919 from synapse.config._base import ConfigError
2020 from synapse.config.homeserver import HomeServerConfig
2121 from synapse.config.logger import setup_logging
22 from synapse.events import FrozenEvent
2322 from synapse.handlers.presence import PresenceHandler
2423 from synapse.http.site import SynapseSite
2524 from synapse.http.server import JsonResource
4746 from synapse.storage.roommember import RoomMemberStore
4847 from synapse.util.async import sleep
4948 from synapse.util.httpresourcetree import create_resource_tree
50 from synapse.util.logcontext import LoggingContext, preserve_fn
49 from synapse.util.logcontext import LoggingContext, preserve_fn, \
50 PreserveLoggingContext
5151 from synapse.util.manhole import manhole
5252 from synapse.util.rlimit import change_resource_limit
5353 from synapse.util.stringutils import random_string
398398 position = row[position_index]
399399 user_id = row[user_index]
400400
401 rooms = yield store.get_rooms_for_user(user_id)
402 room_ids = [r.room_id for r in rooms]
401 room_ids = yield store.get_rooms_for_user(user_id)
403402
404403 notifier.on_new_event(
405404 "device_list_key", position, rooms=room_ids,
410409 stream = result.get("events")
411410 if stream:
412411 max_position = stream["position"]
412
413 event_map = yield store.get_events([row[1] for row in stream["rows"]])
414
413415 for row in stream["rows"]:
414416 position = row[0]
415 internal = json.loads(row[1])
416 event_json = json.loads(row[2])
417 event = FrozenEvent(event_json, internal_metadata_dict=internal)
417 event_id = row[1]
418 event = event_map.get(event_id, None)
419 if not event:
420 continue
421
418422 extra_users = ()
419423 if event.type == EventTypes.Member:
420424 extra_users = (event.state_key,)
477481
478482 assert config.worker_app == "synapse.app.synchrotron"
479483
480 setup_logging(config.worker_log_config, config.worker_log_file)
484 setup_logging(config, use_worker_options=True)
481485
482486 synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
483487
496500 ss.start_listening(config.worker_listeners)
497501
498502 def run():
499 with LoggingContext("run"):
503 # make sure that we run the reactor with the sentinel log context,
504 # otherwise other PreserveLoggingContext instances will get confused
505 # and complain when they see the logcontext arbitrarily swapping
506 # between the sentinel and `run` logcontexts.
507 with PreserveLoggingContext():
500508 logger.info("Running")
501509 change_resource_limit(config.soft_file_limit)
502510 if config.gc_thresholds:
2222 import subprocess
2323 import sys
2424 import yaml
25 import errno
26 import time
2527
2628 SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
2729
2830 GREEN = "\x1b[1;32m"
31 YELLOW = "\x1b[1;33m"
2932 RED = "\x1b[1;31m"
3033 NORMAL = "\x1b[m"
34
35
36 def pid_running(pid):
37 try:
38 os.kill(pid, 0)
39 return True
40 except OSError, err:
41 if err.errno == errno.EPERM:
42 return True
43 return False
3144
3245
3346 def write(message, colour=NORMAL, stream=sys.stdout):
3750 stream.write(colour + message + NORMAL + "\n")
3851
3952
53 def abort(message, colour=RED, stream=sys.stderr):
54 write(message, colour, stream)
55 sys.exit(1)
56
57
4058 def start(configfile):
4159 write("Starting ...")
4260 args = SYNAPSE
4462
4563 try:
4664 subprocess.check_call(args)
47 write("started synapse.app.homeserver(%r)" % (configfile,), colour=GREEN)
65 write("started synapse.app.homeserver(%r)" %
66 (configfile,), colour=GREEN)
4867 except subprocess.CalledProcessError as e:
4968 write(
5069 "error starting (exit code: %d); see above for logs" % e.returncode,
7594 def stop(pidfile, app):
7695 if os.path.exists(pidfile):
7796 pid = int(open(pidfile).read())
78 os.kill(pid, signal.SIGTERM)
79 write("stopped %s" % (app,), colour=GREEN)
97 try:
98 os.kill(pid, signal.SIGTERM)
99 write("stopped %s" % (app,), colour=GREEN)
100 except OSError, err:
101 if err.errno == errno.ESRCH:
102 write("%s not running" % (app,), colour=YELLOW)
103 elif err.errno == errno.EPERM:
104 abort("Cannot stop %s: Operation not permitted" % (app,))
105 else:
106 abort("Cannot stop %s: Unknown error" % (app,))
80107
81108
82109 Worker = collections.namedtuple("Worker", [
189216 if start_stop_synapse:
190217 stop(pidfile, "synapse.app.homeserver")
191218
192 # TODO: Wait for synapse to actually shutdown before starting it again
219 # Wait for synapse to actually shutdown before starting it again
220 if action == "restart":
221 running_pids = []
222 if start_stop_synapse and os.path.exists(pidfile):
223 running_pids.append(int(open(pidfile).read()))
224 for worker in workers:
225 if os.path.exists(worker.pidfile):
226 running_pids.append(int(open(worker.pidfile).read()))
227 if len(running_pids) > 0:
228 write("Waiting for process to exit before restarting...")
229 for running_pid in running_pids:
230 while pid_running(running_pid):
231 time.sleep(0.2)
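The restart wait above combines the `pid_running` probe (signal 0, which delivers nothing but still performs the existence and permission checks) with a polling loop. A self-contained sketch, POSIX assumed; note that a zombie child still counts as "running" until it is reaped, which does not matter for synctl since the daemons are not its children:

```python
import errno
import os
import time

def pid_running(pid):
    try:
        os.kill(pid, 0)  # signal 0 sends nothing; just probes the pid
        return True
    except OSError as err:
        # EPERM: the process exists but belongs to another user
        return err.errno == errno.EPERM

def wait_for_exit(pids, poll_interval=0.2):
    # Poll each old pid until it has gone away
    for pid in pids:
        while pid_running(pid):
            time.sleep(poll_interval)
```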
193232
194233 if action == "start" or action == "restart":
195234 if start_stop_synapse:
4444 maxBytes: 104857600
4545 backupCount: 10
4646 filters: [context]
47 level: INFO
4847 console:
4948 class: logging.StreamHandler
5049 formatter: precise
5554 level: INFO
5655
5756 synapse.storage.SQL:
57 # beware: increasing this to DEBUG will make synapse log sensitive
58 # information such as access tokens.
5859 level: INFO
5960
6061 root:
6768
6869 def read_config(self, config):
6970 self.verbosity = config.get("verbose", 0)
71 self.no_redirect_stdio = config.get("no_redirect_stdio", False)
7072 self.log_config = self.abspath(config.get("log_config"))
7173 self.log_file = self.abspath(config.get("log_file"))
7274
7678 os.path.join(config_dir_path, server_name + ".log.config")
7779 )
7880 return """
79 # Logging verbosity level.
81 # Logging verbosity level. Ignored if log_config is specified.
8082 verbose: 0
8183
82 # File to write logging to
84 # File to write logging to. Ignored if log_config is specified.
8385 log_file: "%(log_file)s"
8486
8587 # A yaml python logging config file
8991 def read_arguments(self, args):
9092 if args.verbose is not None:
9193 self.verbosity = args.verbose
94 if args.no_redirect_stdio is not None:
95 self.no_redirect_stdio = args.no_redirect_stdio
9296 if args.log_config is not None:
9397 self.log_config = args.log_config
9498 if args.log_file is not None:
98102 logging_group = parser.add_argument_group("logging")
99103 logging_group.add_argument(
100104 '-v', '--verbose', dest="verbose", action='count',
101 help="The verbosity level."
105 help="The verbosity level. Specify multiple times to increase "
106 "verbosity. (Ignored if --log-config is specified.)"
102107 )
103108 logging_group.add_argument(
104109 '-f', '--log-file', dest="log_file",
105 help="File to log to."
110 help="File to log to. (Ignored if --log-config is specified.)"
106111 )
107112 logging_group.add_argument(
108113 '--log-config', dest="log_config", default=None,
109114 help="Python logging config file"
115 )
116 logging_group.add_argument(
117 '-n', '--no-redirect-stdio',
118 action='store_true', default=None,
119 help="Do not redirect stdout/stderr to the log"
110120 )
111121
112122 def generate_files(self, config):
117127 DEFAULT_LOG_CONFIG.substitute(log_file=config["log_file"])
118128 )
119129
120 def setup_logging(self):
121 setup_logging(self.log_config, self.log_file, self.verbosity)
122
123
124 def setup_logging(log_config=None, log_file=None, verbosity=None):
130
131 def setup_logging(config, use_worker_options=False):
132 """ Set up python logging
133
134 Args:
135 config (LoggingConfig | synapse.config.workers.WorkerConfig):
136 configuration data
137
138 use_worker_options (bool): True to use 'worker_log_config' and
139 'worker_log_file' options instead of 'log_config' and 'log_file'.
140 """
141 log_config = (config.worker_log_config if use_worker_options
142 else config.log_config)
143 log_file = (config.worker_log_file if use_worker_options
144 else config.log_file)
145
125146 log_format = (
126147 "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
127148 " - %(message)s"
130151
131152 level = logging.INFO
132153 level_for_storage = logging.INFO
133 if verbosity:
154 if config.verbosity:
134155 level = logging.DEBUG
135 if verbosity > 1:
156 if config.verbosity > 1:
136157 level_for_storage = logging.DEBUG
137158
138159 # FIXME: we need a logging.WARN for a -q quiet option
152173 logger.info("Closing log file due to SIGHUP")
153174 handler.doRollover()
154175 logger.info("Opened new log file due to SIGHUP")
155
156 # TODO(paul): obviously this is a terrible mechanism for
157 # stealing SIGHUP, because it means no other part of synapse
158 # can use it instead. If we want to catch SIGHUP anywhere
159 # else as well, I'd suggest we find a nicer way to broadcast
160 # it around.
161 if getattr(signal, "SIGHUP"):
162 signal.signal(signal.SIGHUP, sighup)
163176 else:
164177 handler = logging.StreamHandler()
165178 handler.setFormatter(formatter)
168181
169182 logger.addHandler(handler)
170183 else:
171 with open(log_config, 'r') as f:
172 logging.config.dictConfig(yaml.load(f))
184 def load_log_config():
185 with open(log_config, 'r') as f:
186 logging.config.dictConfig(yaml.load(f))
187
188 def sighup(signum, stack):
189 # it might be better to use a file watcher or something for this.
190 logging.info("Reloading log config from %s due to SIGHUP",
191 log_config)
192 load_log_config()
193
194 load_log_config()
195
196 # TODO(paul): obviously this is a terrible mechanism for
197 # stealing SIGHUP, because it means no other part of synapse
198 # can use it instead. If we want to catch SIGHUP anywhere
199 # else as well, I'd suggest we find a nicer way to broadcast
200 # it around.
201 if getattr(signal, "SIGHUP"):
202 signal.signal(signal.SIGHUP, sighup)
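The reload-on-SIGHUP hook above can be sketched independently of YAML. Here the config comes from a callable returning a `dictConfig` dict, to keep the example self-contained; Synapse itself re-reads the YAML file:

```python
import logging
import logging.config
import signal

def install_sighup_reload(get_config_dict):
    def load_log_config():
        logging.config.dictConfig(get_config_dict())

    def sighup(signum, frame):
        logging.info("Reloading log config due to SIGHUP")
        load_log_config()

    # Apply the config once at startup
    load_log_config()
    # SIGHUP does not exist on all platforms (e.g. Windows), hence the guard
    if hasattr(signal, "SIGHUP"):
        signal.signal(signal.SIGHUP, sighup)
```

One detail worth noting: the sketch uses `hasattr`, because `getattr(signal, "SIGHUP")` with no default, as in the source above, would raise `AttributeError` on a platform lacking SIGHUP rather than skipping the handler.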
173203
174204 # It's critical to point twisted's internal logging somewhere, otherwise it
175205 # stacks up and leaks up to 64K objects;
182212 #
183213 # However this may not be too much of a problem if we are just writing to a file.
184214 observer = STDLibLogObserver()
185 globalLogBeginner.beginLoggingTo([observer])
215 globalLogBeginner.beginLoggingTo(
216 [observer],
217 redirectStandardIO=not config.no_redirect_stdio,
218 )
1414
1515 from synapse.crypto.keyclient import fetch_server_key
1616 from synapse.api.errors import SynapseError, Codes
17 from synapse.util.retryutils import get_retry_limiter
1817 from synapse.util import unwrapFirstError
1918 from synapse.util.async import ObservableDeferred
2019 from synapse.util.logcontext import (
9594 verify_requests = []
9695
9796 for server_name, json_object in server_and_json:
98 logger.debug("Verifying for %s", server_name)
9997
10098 key_ids = signature_ids(json_object, server_name)
10199 if not key_ids:
100 logger.warn("Request from %s: no supported signature keys",
101 server_name)
102102 deferred = defer.fail(SynapseError(
103103 400,
104104 "Not signed with a supported algorithm",
106106 ))
107107 else:
108108 deferred = defer.Deferred()
109
110 logger.debug("Verifying for %s with key_ids %s",
111 server_name, key_ids)
109112
110113 verify_request = VerifyKeyRequest(
111114 server_name, key_ids, json_object, deferred
141144
142145 json_object = verify_request.json_object
143146
147 logger.debug("Got key %s %s:%s for server %s, verifying" % (
148 key_id, verify_key.alg, verify_key.version, server_name,
149 ))
144150 try:
145151 verify_signed_json(json_object, server_name, verify_key)
146152 except:
230236 d.addBoth(rm, server_name)
231237
232238 def get_server_verify_keys(self, verify_requests):
233 """Takes a dict of KeyGroups and tries to find at least one key for
234 each group.
239 """Tries to find at least one key for each verify request
240
241 For each verify_request, verify_request.deferred is called back with
242 params (server_name, key_id, VerifyKey) if a key is found, or errbacked
243 with a SynapseError if none of the keys are found.
244
245 Args:
246 verify_requests (list[VerifyKeyRequest]): list of verify requests
235247 """
236248
237249 # These are functions that produce keys given a list of key ids
244256 @defer.inlineCallbacks
245257 def do_iterations():
246258 with Measure(self.clock, "get_server_verify_keys"):
259 # dict[str, dict[str, VerifyKey]]: results so far.
260 # map server_name -> key_id -> VerifyKey
247261 merged_results = {}
248262
263 # dict[str, set(str)]: keys to fetch for each server
249264 missing_keys = {}
250265 for verify_request in verify_requests:
251266 missing_keys.setdefault(verify_request.server_name, set()).update(
307322
308323 @defer.inlineCallbacks
309324 def get_keys_from_store(self, server_name_and_key_ids):
325 """
326
327 Args:
328 server_name_and_key_ids (list[(str, iterable[str])]):
329 list of (server_name, iterable[key_id]) tuples to fetch keys for
330
331 Returns:
332 Deferred: resolves to dict[str, dict[str, VerifyKey]]: map from
333 server_name -> key_id -> VerifyKey
334 """
310335 res = yield preserve_context_over_deferred(defer.gatherResults(
311336 [
312337 preserve_fn(self.store.get_server_verify_keys)(
355380 def get_keys_from_server(self, server_name_and_key_ids):
356381 @defer.inlineCallbacks
357382 def get_key(server_name, key_ids):
358 limiter = yield get_retry_limiter(
359 server_name,
360 self.clock,
361 self.store,
362 )
363 with limiter:
364 keys = None
365 try:
366 keys = yield self.get_server_verify_key_v2_direct(
367 server_name, key_ids
368 )
369 except Exception as e:
370 logger.info(
371 "Unable to get key %r for %r directly: %s %s",
372 key_ids, server_name,
373 type(e).__name__, str(e.message),
374 )
375
376 if not keys:
377 keys = yield self.get_server_verify_key_v1_direct(
378 server_name, key_ids
379 )
380
381 keys = {server_name: keys}
383 keys = None
384 try:
385 keys = yield self.get_server_verify_key_v2_direct(
386 server_name, key_ids
387 )
388 except Exception as e:
389 logger.info(
390 "Unable to get key %r for %r directly: %s %s",
391 key_ids, server_name,
392 type(e).__name__, str(e.message),
393 )
394
395 if not keys:
396 keys = yield self.get_server_verify_key_v1_direct(
397 server_name, key_ids
398 )
399
400 keys = {server_name: keys}
382401
383402 defer.returnValue(keys)
384403
1414
1515
1616 class EventContext(object):
17 """
18 Attributes:
19 current_state_ids (dict[(str, str), str]):
20 The current state map including the current event.
21 (type, state_key) -> event_id
22
23 prev_state_ids (dict[(str, str), str]):
24 The current state map excluding the current event.
25 (type, state_key) -> event_id
26
27 state_group (int): state group id
28 rejected (bool|str): A rejection reason if the event was rejected, else
29 False
30
31 push_actions (list[(str, list[object])]): list of (user_id, actions)
32 tuples
33
34 prev_group (int): Previously persisted state group. ``None`` for an
35 outlier.
36 delta_ids (dict[(str, str), str]): Delta from ``prev_group``.
37 (type, state_key) -> event_id. ``None`` for an outlier.
38
39 prev_state_events (?): XXX: is this ever set to anything other than
40 the empty list?
41 """
42
1743 __slots__ = [
1844 "current_state_ids",
1945 "prev_state_ids",
2828 from synapse.events import FrozenEvent, builder
2929 import synapse.metrics
3030
31 from synapse.util.retryutils import get_retry_limiter, NotRetryingDestination
31 from synapse.util.retryutils import NotRetryingDestination
3232
3333 import copy
3434 import itertools
8787
8888 @log_function
8989 def make_query(self, destination, query_type, args,
90 retry_on_dns_fail=False):
90 retry_on_dns_fail=False, ignore_backoff=False):
9191 """Sends a federation Query to a remote homeserver of the given type
9292 and arguments.
9393
9797 handler name used in register_query_handler().
9898 args (dict): Mapping of strings to strings containing the details
9999 of the query request.
100 ignore_backoff (bool): true to ignore the historical backoff data
101 and try the request anyway.
100102
101103 Returns:
102104 a Deferred which will eventually yield a JSON object from the
105107 sent_queries_counter.inc(query_type)
106108
107109 return self.transport_layer.make_query(
108 destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail
110 destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail,
111 ignore_backoff=ignore_backoff,
109112 )
110113
111114 @log_function
233236 continue
234237
235238 try:
236 limiter = yield get_retry_limiter(
237 destination,
238 self._clock,
239 self.store,
240 )
241
242 with limiter:
243 transaction_data = yield self.transport_layer.get_event(
244 destination, event_id, timeout=timeout,
245 )
246
247 logger.debug("transaction_data %r", transaction_data)
248
249 pdu_list = [
250 self.event_from_pdu_json(p, outlier=outlier)
251 for p in transaction_data["pdus"]
252 ]
253
254 if pdu_list and pdu_list[0]:
255 pdu = pdu_list[0]
256
257 # Check signatures are correct.
258 signed_pdu = yield self._check_sigs_and_hashes([pdu])[0]
259
260 break
239 transaction_data = yield self.transport_layer.get_event(
240 destination, event_id, timeout=timeout,
241 )
242
243 logger.debug("transaction_data %r", transaction_data)
244
245 pdu_list = [
246 self.event_from_pdu_json(p, outlier=outlier)
247 for p in transaction_data["pdus"]
248 ]
249
250 if pdu_list and pdu_list[0]:
251 pdu = pdu_list[0]
252
253 # Check signatures are correct.
254 signed_pdu = yield self._check_sigs_and_hashes([pdu])[0]
255
256 break
261257
262258 pdu_attempts[destination] = now
263259
5151
5252 self.auth = hs.get_auth()
5353
54 self._room_pdu_linearizer = Linearizer("fed_room_pdu")
5554 self._server_linearizer = Linearizer("fed_server")
5655
5756 # We cache responses to state queries, as they take a while and often
146145 # check that it's actually being sent from a valid destination to
147146 # workaround bug #1753 in 0.18.5 and 0.18.6
148147 if transaction.origin != get_domain_from_id(pdu.event_id):
148 # We continue to accept join events from any server; this is
149 # necessary for the federation join dance to work correctly.
150 # (When we join over federation, the "helper" server is
151 # responsible for sending out the join event, rather than the
152 # origin. See bug #1893).
149153 if not (
150154 pdu.type == 'm.room.member' and
151155 pdu.content and
152 pdu.content.get("membership", None) == 'join' and
153 self.hs.is_mine_id(pdu.state_key)
156 pdu.content.get("membership", None) == 'join'
154157 ):
155158 logger.info(
156159 "Discarding PDU %s from invalid origin %s",
164167 )
165168
166169 try:
167 yield self._handle_new_pdu(transaction.origin, pdu)
170 yield self._handle_received_pdu(transaction.origin, pdu)
168171 results.append({})
169172 except FederationError as e:
170173 self.send_failure(e, transaction.origin)
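The relaxed origin check above reads as a small predicate. A sketch (hypothetical function; the real code works on FrozenEvent objects and a Transaction):

```python
def accept_pdu_origin(txn_origin, event_domain, pdu_type, content):
    # PDUs whose event_id domain differs from the transaction origin are
    # normally discarded (workaround for bug #1753 in 0.18.5/0.18.6),
    # but join membership events are exempt: during a federation join
    # the "helper" server relays the join on the joining server's
    # behalf (bug #1893).
    if txn_origin == event_domain:
        return True
    return (
        pdu_type == 'm.room.member'
        and bool(content)
        and content.get("membership", None) == 'join'
    )
```

Note the change deliberately drops the `is_mine_id(pdu.state_key)` restriction: joins are now accepted for any user, not just local ones.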
496499 )
497500
498501 @defer.inlineCallbacks
499 @log_function
500 def _handle_new_pdu(self, origin, pdu, get_missing=True):
501
502 # We reprocess pdus when we have seen them only as outliers
503 existing = yield self._get_persisted_pdu(
504 origin, pdu.event_id, do_auth=False
505 )
506
507 # FIXME: Currently we fetch an event again when we already have it
508 # if it has been marked as an outlier.
509
510 already_seen = (
511 existing and (
512 not existing.internal_metadata.is_outlier()
513 or pdu.internal_metadata.is_outlier()
514 )
515 )
516 if already_seen:
517 logger.debug("Already seen pdu %s", pdu.event_id)
518 return
519
502 def _handle_received_pdu(self, origin, pdu):
503 """ Process a PDU received in a federation /send/ transaction.
504
505 Args:
506 origin (str): server which sent the pdu
507 pdu (FrozenEvent): received pdu
508
509 Returns (Deferred): completes with None
510 Raises: FederationError if the signatures / hash do not match
511 """
520512 # Check signature.
521513 try:
522514 pdu = yield self._check_sigs_and_hash(pdu)
528520 affected=pdu.event_id,
529521 )
530522
531 state = None
532
533 auth_chain = []
534
535 have_seen = yield self.store.have_events(
536 [ev for ev, _ in pdu.prev_events]
537 )
538
539 fetch_state = False
540
541 # Get missing pdus if necessary.
542 if not pdu.internal_metadata.is_outlier():
543 # We only backfill backwards to the min depth.
544 min_depth = yield self.handler.get_min_depth_for_context(
545 pdu.room_id
546 )
547
548 logger.debug(
549 "_handle_new_pdu min_depth for %s: %d",
550 pdu.room_id, min_depth
551 )
552
553 prevs = {e_id for e_id, _ in pdu.prev_events}
554 seen = set(have_seen.keys())
555
556 if min_depth and pdu.depth < min_depth:
557 # This is so that we don't notify the user about this
558 # message, to work around the fact that some events will
559 # reference really really old events we really don't want to
560 # send to the clients.
561 pdu.internal_metadata.outlier = True
562 elif min_depth and pdu.depth > min_depth:
563 if get_missing and prevs - seen:
564 # If we're missing stuff, ensure we only fetch stuff one
565 # at a time.
566 logger.info(
567 "Acquiring lock for room %r to fetch %d missing events: %r...",
568 pdu.room_id, len(prevs - seen), list(prevs - seen)[:5],
569 )
570 with (yield self._room_pdu_linearizer.queue(pdu.room_id)):
571 logger.info(
572 "Acquired lock for room %r to fetch %d missing events",
573 pdu.room_id, len(prevs - seen),
574 )
575
576 # We recalculate seen, since it may have changed.
577 have_seen = yield self.store.have_events(prevs)
578 seen = set(have_seen.keys())
579
580 if prevs - seen:
581 latest = yield self.store.get_latest_event_ids_in_room(
582 pdu.room_id
583 )
584
585 # We add the prev events that we have seen to the latest
586 # list to ensure the remote server doesn't give them to us
587 latest = set(latest)
588 latest |= seen
589
590 logger.info(
591 "Missing %d events for room %r: %r...",
592 len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
593 )
594
595 # XXX: we set timeout to 10s to help workaround
596 # https://github.com/matrix-org/synapse/issues/1733.
597 # The reason is to avoid holding the linearizer lock
598 # whilst processing inbound /send transactions, causing
599 # FDs to stack up and block other inbound transactions
600 # which empirically can currently take up to 30 minutes.
601 #
602 # N.B. this explicitly disables retry attempts.
603 #
604 # N.B. this also increases our chances of falling back to
605 # fetching fresh state for the room if the missing event
606 # can't be found, which slightly reduces our security.
607 # it may also increase our DAG extremity count for the room,
608 # causing additional state resolution? See #1760.
609 # However, fetching state doesn't hold the linearizer lock
610 # apparently.
611 #
612 # see https://github.com/matrix-org/synapse/pull/1744
613
614 missing_events = yield self.get_missing_events(
615 origin,
616 pdu.room_id,
617 earliest_events_ids=list(latest),
618 latest_events=[pdu],
619 limit=10,
620 min_depth=min_depth,
621 timeout=10000,
622 )
623
624 # We want to sort these by depth so we process them and
625 # tell clients about them in order.
626 missing_events.sort(key=lambda x: x.depth)
627
628 for e in missing_events:
629 yield self._handle_new_pdu(
630 origin,
631 e,
632 get_missing=False
633 )
634
635 have_seen = yield self.store.have_events(
636 [ev for ev, _ in pdu.prev_events]
637 )
638
639 prevs = {e_id for e_id, _ in pdu.prev_events}
640 seen = set(have_seen.keys())
641 if prevs - seen:
642 logger.info(
643 "Still missing %d events for room %r: %r...",
644 len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
645 )
646 fetch_state = True
647
648 if fetch_state:
649 # We need to get the state at this event, since we haven't
650 # processed all the prev events.
651 logger.debug(
652 "_handle_new_pdu getting state for %s",
653 pdu.room_id
654 )
655 try:
656 state, auth_chain = yield self.get_state_for_room(
657 origin, pdu.room_id, pdu.event_id,
658 )
659 except:
660 logger.exception("Failed to get state for event: %s", pdu.event_id)
661
662 yield self.handler.on_receive_pdu(
663 origin,
664 pdu,
665 state=state,
666 auth_chain=auth_chain,
667 )
523 yield self.handler.on_receive_pdu(origin, pdu, get_missing=True)
668524
669525 def __str__(self):
670526 return "<ReplicationLayer(%s)>" % self.server_name
5353 def __init__(self, hs):
5454 self.server_name = hs.hostname
5555 self.clock = hs.get_clock()
56 self.notifier = hs.get_notifier()
5657
5758 self.presence_map = {}
5859 self.presence_changed = sorteddict()
185186 else:
186187 self.edus[pos] = edu
187188
189 self.notifier.on_new_replication_data()
190
188191 def send_presence(self, destination, states):
189192 """As per TransactionQueue"""
190193 pos = self._next_pos()
198201 (destination, state.user_id) for state in states
199202 ]
200203
204 self.notifier.on_new_replication_data()
205
201206 def send_failure(self, failure, destination):
202207 """As per TransactionQueue"""
203208 pos = self._next_pos()
204209
205210 self.failures[pos] = (destination, str(failure))
211 self.notifier.on_new_replication_data()
206212
207213 def send_device_messages(self, destination):
208214 """As per TransactionQueue"""
209215 pos = self._next_pos()
210216 self.device_messages[pos] = destination
217 self.notifier.on_new_replication_data()
211218
212219 def get_current_token(self):
213220 return self.pos - 1
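Each mutation of the outbound federation queue now pokes the notifier so replication consumers pick up new data promptly, instead of waiting for the next poll. The pattern, reduced to a runnable sketch (hypothetical class; the real queue keeps several typed maps for presence, EDUs, failures, and device messages):

```python
class ReplicationQueue(object):
    def __init__(self, on_new_data):
        self.pos = 1
        self.items = {}
        # Stands in for notifier.on_new_replication_data() upstream.
        self.on_new_data = on_new_data

    def _next_pos(self):
        pos = self.pos
        self.pos += 1
        return pos

    def send(self, item):
        # Every enqueue advances the stream position and wakes listeners.
        self.items[self._next_pos()] = item
        self.on_new_data()

    def get_current_token(self):
        return self.pos - 1
```

Without the poke, a worker replicating this stream would only see new entries on its next periodic fetch, adding latency to outbound federation.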
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
14 import datetime
1515
1616 from twisted.internet import defer
1717
2121 from synapse.api.errors import HttpResponseException
2222 from synapse.util.async import run_on_reactor
2323 from synapse.util.logcontext import preserve_context_over_fn
24 from synapse.util.retryutils import (
25 get_retry_limiter, NotRetryingDestination,
26 )
24 from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
2725 from synapse.util.metrics import measure_func
2826 from synapse.types import get_domain_from_id
2927 from synapse.handlers.presence import format_user_presence_state
9896 # destination -> list of tuple(failure, deferred)
9997 self.pending_failures_by_dest = {}
10098
99 # destination -> stream_id of last successfully sent to-device message.
100 # NB: may be a long or an int.
101101 self.last_device_stream_id_by_dest = {}
102
103 # destination -> stream_id of last successfully sent device list
104 # update.
102105 self.last_device_list_stream_id_by_dest = {}
103106
104107 # HACK to get unique tx id
299302 )
300303 return
301304
305 pending_pdus = []
302306 try:
303307 self.pending_transactions[destination] = 1
304308
309 # This will throw if we wouldn't retry. We do this here so we fail
310 # quickly, but we will later check this again in the http client,
311 # which is why we discard the result.
312 yield get_retry_limiter(destination, self.clock, self.store)
313
305314 # XXX: what's this for?
306315 yield run_on_reactor()
307316
317 pending_pdus = []
308318 while True:
309 limiter = yield get_retry_limiter(
310 destination,
311 self.clock,
312 self.store,
313 backoff_on_404=True, # If we get a 404 the other side has gone
314 )
315
316319 device_message_edus, device_stream_id, dev_list_id = (
317320 yield self._get_new_device_messages(destination)
318321 )
368371
369372 success = yield self._send_new_transaction(
370373 destination, pending_pdus, pending_edus, pending_failures,
371 limiter=limiter,
372374 )
373375 if success:
374376 # Remove the acknowledged device messages from the database
386388 self.last_device_list_stream_id_by_dest[destination] = dev_list_id
387389 else:
388390 break
389 except NotRetryingDestination:
391 except NotRetryingDestination as e:
390392 logger.debug(
391 "TX [%s] not ready for retry yet - "
393 "TX [%s] not ready for retry yet (next retry at %s) - "
392394 "dropping transaction for now",
393395 destination,
394 )
396 datetime.datetime.fromtimestamp(
397 (e.retry_last_ts + e.retry_interval) / 1000.0
398 ),
399 )
400 except Exception as e:
401 logger.warn(
402 "TX [%s] Failed to send transaction: %s",
403 destination,
404 e,
405 )
406 for p, _ in pending_pdus:
407 logger.info("Failed to send event %s to %s", p.event_id,
408 destination)
395409 finally:
396410 # We want to be *very* sure we delete this after we stop processing
397411 self.pending_transactions.pop(destination, None)
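The improved log line computes when the destination next becomes retryable. Since `retry_last_ts` and `retry_interval` are stored in milliseconds, the conversion is just:

```python
import datetime


def next_retry_at(retry_last_ts_ms, retry_interval_ms):
    # Timestamps are in milliseconds; datetime.fromtimestamp expects
    # seconds, hence the / 1000.0.
    return datetime.datetime.fromtimestamp(
        (retry_last_ts_ms + retry_interval_ms) / 1000.0
    )
```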
431445 @measure_func("_send_new_transaction")
432446 @defer.inlineCallbacks
433447 def _send_new_transaction(self, destination, pending_pdus, pending_edus,
434 pending_failures, limiter):
448 pending_failures):
435449
436450 # Sort based on the order field
437451 pending_pdus.sort(key=lambda t: t[1])
441455
442456 success = True
443457
458 logger.debug("TX [%s] _attempt_new_transaction", destination)
459
460 txn_id = str(self._next_txn_id)
461
462 logger.debug(
463 "TX [%s] {%s} Attempting new transaction"
464 " (pdus: %d, edus: %d, failures: %d)",
465 destination, txn_id,
466 len(pdus),
467 len(edus),
468 len(failures)
469 )
470
471 logger.debug("TX [%s] Persisting transaction...", destination)
472
473 transaction = Transaction.create_new(
474 origin_server_ts=int(self.clock.time_msec()),
475 transaction_id=txn_id,
476 origin=self.server_name,
477 destination=destination,
478 pdus=pdus,
479 edus=edus,
480 pdu_failures=failures,
481 )
482
483 self._next_txn_id += 1
484
485 yield self.transaction_actions.prepare_to_send(transaction)
486
487 logger.debug("TX [%s] Persisted transaction", destination)
488 logger.info(
489 "TX [%s] {%s} Sending transaction [%s],"
490 " (PDUs: %d, EDUs: %d, failures: %d)",
491 destination, txn_id,
492 transaction.transaction_id,
493 len(pdus),
494 len(edus),
495 len(failures),
496 )
497
498 # Actually send the transaction
499
500 # FIXME (erikj): This is a bit of a hack to make the Pdu age
501 # keys work
502 def json_data_cb():
503 data = transaction.get_dict()
504 now = int(self.clock.time_msec())
505 if "pdus" in data:
506 for p in data["pdus"]:
507 if "age_ts" in p:
508 unsigned = p.setdefault("unsigned", {})
509 unsigned["age"] = now - int(p["age_ts"])
510 del p["age_ts"]
511 return data
512
444513 try:
445 logger.debug("TX [%s] _attempt_new_transaction", destination)
446
447 txn_id = str(self._next_txn_id)
448
449 logger.debug(
450 "TX [%s] {%s} Attempting new transaction"
451 " (pdus: %d, edus: %d, failures: %d)",
452 destination, txn_id,
453 len(pdus),
454 len(edus),
455 len(failures)
456 )
457
458 logger.debug("TX [%s] Persisting transaction...", destination)
459
460 transaction = Transaction.create_new(
461 origin_server_ts=int(self.clock.time_msec()),
462 transaction_id=txn_id,
463 origin=self.server_name,
464 destination=destination,
465 pdus=pdus,
466 edus=edus,
467 pdu_failures=failures,
468 )
469
470 self._next_txn_id += 1
471
472 yield self.transaction_actions.prepare_to_send(transaction)
473
474 logger.debug("TX [%s] Persisted transaction", destination)
475 logger.info(
476 "TX [%s] {%s} Sending transaction [%s],"
477 " (PDUs: %d, EDUs: %d, failures: %d)",
478 destination, txn_id,
479 transaction.transaction_id,
480 len(pdus),
481 len(edus),
482 len(failures),
483 )
484
485 with limiter:
486 # Actually send the transaction
487
488 # FIXME (erikj): This is a bit of a hack to make the Pdu age
489 # keys work
490 def json_data_cb():
491 data = transaction.get_dict()
492 now = int(self.clock.time_msec())
493 if "pdus" in data:
494 for p in data["pdus"]:
495 if "age_ts" in p:
496 unsigned = p.setdefault("unsigned", {})
497 unsigned["age"] = now - int(p["age_ts"])
498 del p["age_ts"]
499 return data
500
501 try:
502 response = yield self.transport_layer.send_transaction(
503 transaction, json_data_cb
504 )
505 code = 200
506
507 if response:
508 for e_id, r in response.get("pdus", {}).items():
509 if "error" in r:
510 logger.warn(
511 "Transaction returned error for %s: %s",
512 e_id, r,
513 )
514 except HttpResponseException as e:
515 code = e.code
516 response = e.response
517
518 if e.code in (401, 404, 429) or 500 <= e.code:
519 logger.info(
520 "TX [%s] {%s} got %d response",
521 destination, txn_id, code
514 response = yield self.transport_layer.send_transaction(
515 transaction, json_data_cb
516 )
517 code = 200
518
519 if response:
520 for e_id, r in response.get("pdus", {}).items():
521 if "error" in r:
522 logger.warn(
523 "Transaction returned error for %s: %s",
524 e_id, r,
522525 )
523 raise e
524
526 except HttpResponseException as e:
527 code = e.code
528 response = e.response
529
530 if e.code in (401, 404, 429) or 500 <= e.code:
525531 logger.info(
526532 "TX [%s] {%s} got %d response",
527533 destination, txn_id, code
528534 )
529
530 logger.debug("TX [%s] Sent transaction", destination)
531 logger.debug("TX [%s] Marking as delivered...", destination)
532
533 yield self.transaction_actions.delivered(
534 transaction, code, response
535 )
536
537 logger.debug("TX [%s] Marked as delivered", destination)
538
539 if code != 200:
540 for p in pdus:
541 logger.info(
542 "Failed to send event %s to %s", p.event_id, destination
543 )
544 success = False
545 except RuntimeError as e:
546 # We capture this here as there as nothing actually listens
547 # for this finishing functions deferred.
548 logger.warn(
549 "TX [%s] Problem in _attempt_transaction: %s",
550 destination,
551 e,
552 )
553
535 raise e
536
537 logger.info(
538 "TX [%s] {%s} got %d response",
539 destination, txn_id, code
540 )
541
542 logger.debug("TX [%s] Sent transaction", destination)
543 logger.debug("TX [%s] Marking as delivered...", destination)
544
545 yield self.transaction_actions.delivered(
546 transaction, code, response
547 )
548
549 logger.debug("TX [%s] Marked as delivered", destination)
550
551 if code != 200:
552 for p in pdus:
553 logger.info(
554 "Failed to send event %s to %s", p.event_id, destination
555 )
554556 success = False
555557
556 for p in pdus:
557 logger.info("Failed to send event %s to %s", p.event_id, destination)
558 except Exception as e:
559 # We capture this here as there is nothing that actually listens
560 # for this function's deferred to finish.
561 logger.warn(
562 "TX [%s] Problem in _attempt_transaction: %s",
563 destination,
564 e,
565 )
566
567 success = False
568
569 for p in pdus:
570 logger.info("Failed to send event %s to %s", p.event_id, destination)
571
572558 defer.returnValue(success)
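`json_data_cb` above converts each PDU's absolute `age_ts` into a relative `age` at the moment the transaction is serialised. The transformation in isolation (sketch; operating on plain dicts rather than the real Transaction object):

```python
def stamp_pdu_ages(data, now_ms):
    # Replace absolute age_ts with a relative "age" under unsigned, so
    # the receiver sees how old each event was when the transaction
    # actually left this server (it may have waited in the queue).
    for p in data.get("pdus", []):
        if "age_ts" in p:
            unsigned = p.setdefault("unsigned", {})
            unsigned["age"] = now_ms - int(p["age_ts"])
            del p["age_ts"]
    return data
```

Doing this in a callback at send time, rather than when the transaction is built, is what the "bit of a hack" comment refers to: retries re-invoke the callback and get fresh ages.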
162162 data=json_data,
163163 json_data_callback=json_data_callback,
164164 long_retries=True,
165 backoff_on_404=True, # If we get a 404 the other side has gone
165166 )
166167
167168 logger.debug(
173174
174175 @defer.inlineCallbacks
175176 @log_function
176 def make_query(self, destination, query_type, args, retry_on_dns_fail):
177 def make_query(self, destination, query_type, args, retry_on_dns_fail,
178 ignore_backoff=False):
177179 path = PREFIX + "/query/%s" % query_type
178180
179181 content = yield self.client.get_json(
182184 args=args,
183185 retry_on_dns_fail=retry_on_dns_fail,
184186 timeout=10000,
187 ignore_backoff=ignore_backoff,
185188 )
186189
187190 defer.returnValue(content)
241244 destination=destination,
242245 path=path,
243246 data=content,
247 ignore_backoff=True,
244248 )
245249
246250 defer.returnValue(response)
268272 destination=remote_server,
269273 path=path,
270274 args=args,
275 ignore_backoff=True,
271276 )
272277
273278 defer.returnValue(response)
00 # -*- coding: utf-8 -*-
11 # Copyright 2014 - 2016 OpenMarket Ltd
2 # Copyright 2017 Vector Creations Ltd
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
4647 LoginType.PASSWORD: self._check_password_auth,
4748 LoginType.RECAPTCHA: self._check_recaptcha,
4849 LoginType.EMAIL_IDENTITY: self._check_email_identity,
50 LoginType.MSISDN: self._check_msisdn,
4951 LoginType.DUMMY: self._check_dummy_auth,
5052 }
5153 self.bcrypt_rounds = hs.config.bcrypt_rounds
306308 defer.returnValue(True)
307309 raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)
308310
309 @defer.inlineCallbacks
310311 def _check_email_identity(self, authdict, _):
311 yield run_on_reactor()
312
313 if 'threepid_creds' not in authdict:
314 raise LoginError(400, "Missing threepid_creds", Codes.MISSING_PARAM)
315
316 threepid_creds = authdict['threepid_creds']
317 identity_handler = self.hs.get_handlers().identity_handler
318
319 logger.info("Getting validated threepid. threepidcreds: %r" % (threepid_creds,))
320 threepid = yield identity_handler.threepid_from_creds(threepid_creds)
321
322 if not threepid:
323 raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)
324
325 threepid['threepid_creds'] = authdict['threepid_creds']
326
327 defer.returnValue(threepid)
312 return self._check_threepid('email', authdict)
313
314 def _check_msisdn(self, authdict, _):
315 return self._check_threepid('msisdn', authdict)
328316
329317 @defer.inlineCallbacks
330318 def _check_dummy_auth(self, authdict, _):
331319 yield run_on_reactor()
332320 defer.returnValue(True)
321
322 @defer.inlineCallbacks
323 def _check_threepid(self, medium, authdict):
324 yield run_on_reactor()
325
326 if 'threepid_creds' not in authdict:
327 raise LoginError(400, "Missing threepid_creds", Codes.MISSING_PARAM)
328
329 threepid_creds = authdict['threepid_creds']
330
331 identity_handler = self.hs.get_handlers().identity_handler
332
333 logger.info("Getting validated threepid. threepidcreds: %r", threepid_creds)
334 threepid = yield identity_handler.threepid_from_creds(threepid_creds)
335
336 if not threepid:
337 raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)
338
339 if threepid['medium'] != medium:
340 raise LoginError(
341 401,
342 "Expecting threepid of type '%s', got '%s'" % (
343 medium, threepid['medium'],
344 ),
345 errcode=Codes.UNAUTHORIZED
346 )
347
348 threepid['threepid_creds'] = authdict['threepid_creds']
349
350 defer.returnValue(threepid)
333351
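Email and MSISDN validation now share one `_check_threepid` path, parameterised only by the expected medium. The medium check in isolation (sketch using a generic exception; the real code raises LoginError with Codes.UNAUTHORIZED):

```python
def check_threepid_medium(threepid, expected_medium):
    # A validated threepid must exist and must match the medium the
    # auth stage asked for (e.g. 'email' vs 'msisdn').
    if not threepid:
        raise ValueError("threepid validation failed")
    if threepid["medium"] != expected_medium:
        raise ValueError(
            "Expecting threepid of type '%s', got '%s'"
            % (expected_medium, threepid["medium"])
        )
    return threepid
```

The medium comparison matters because a single identity server round-trip validates the credentials; without it, an email validation could satisfy an MSISDN auth stage.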
334352 def _get_params_recaptcha(self):
335353 return {"public_key": self.hs.config.recaptcha_public_key}
169169 yield self.notify_device_update(user_id, [device_id])
170170
171171 @defer.inlineCallbacks
172 def delete_devices(self, user_id, device_ids):
173 """ Delete several devices
174
175 Args:
176 user_id (str):
177 device_ids (list[str]): The list of device IDs to delete
178
179 Returns:
180 defer.Deferred:
181 """
182
183 try:
184 yield self.store.delete_devices(user_id, device_ids)
185 except errors.StoreError as e:
186 if e.code == 404:
187 # no match
188 pass
189 else:
190 raise
191
192 # Delete access tokens and e2e keys for each device. Not optimised as it is not
193 # considered part of a critical path.
194 for device_id in device_ids:
195 yield self.store.user_delete_access_tokens(
196 user_id, device_id=device_id,
197 delete_refresh_tokens=True,
198 )
199 yield self.store.delete_e2e_keys_by_device(
200 user_id=user_id, device_id=device_id
201 )
202
203 yield self.notify_device_update(user_id, device_ids)
204
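The new `delete_devices` does one bulk row delete, then loops over per-device cleanup (deliberately unoptimised, as the comment notes), and fires a single batched notification. A synchronous sketch against a hypothetical store interface (upstream this is all Deferred-based, and the ignored error is a StoreError with code 404):

```python
def delete_devices(store, user_id, device_ids):
    # Bulk-delete the device rows first; "no such device" is ignored
    # (KeyError here stands in for StoreError 404 upstream).
    try:
        store.delete_devices(user_id, device_ids)
    except KeyError:
        pass
    # Per-device cleanup of access tokens and e2e keys.
    for device_id in device_ids:
        store.user_delete_access_tokens(user_id, device_id)
        store.delete_e2e_keys_by_device(user_id, device_id)
    # One notification covering the whole batch.
    store.notify_device_update(user_id, device_ids)
```

Batching the notification is the point: peers receive one device-list poke for the whole deletion rather than one per device.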
205 @defer.inlineCallbacks
172206 def update_device(self, user_id, device_id, content):
173207 """ Update the given device
174208
213247 user_id, device_ids, list(hosts)
214248 )
215249
216 rooms = yield self.store.get_rooms_for_user(user_id)
217 room_ids = [r.room_id for r in rooms]
250 room_ids = yield self.store.get_rooms_for_user(user_id)
218251
219252 yield self.notifier.on_new_event(
220253 "device_list_key", position, rooms=room_ids,
235268 user_id (str)
236269 from_token (StreamToken)
237270 """
238 rooms = yield self.store.get_rooms_for_user(user_id)
239 room_ids = set(r.room_id for r in rooms)
271 room_ids = yield self.store.get_rooms_for_user(user_id)
240272
241273 # First we check if any devices have changed
242274 changed = yield self.store.get_user_whose_devices_changed(
261293 # ordering: treat it the same as a new room
262294 event_ids = []
263295
264 current_state_ids = yield self.state.get_current_state_ids(room_id)
296 current_state_ids = yield self.store.get_current_state_ids(room_id)
265297
266298 # special-case for an empty prev state: include all members
267299 # in the changed list
312344 @defer.inlineCallbacks
313345 def user_left_room(self, user, room_id):
314346 user_id = user.to_string()
315 rooms = yield self.store.get_rooms_for_user(user_id)
316 if not rooms:
347 room_ids = yield self.store.get_rooms_for_user(user_id)
348 if not room_ids:
317349 # We no longer share rooms with this user, so we'll no longer
318350 # receive device updates. Mark this in DB.
319351 yield self.store.mark_remote_user_device_list_as_unsubscribed(user_id)
369401 logger.warning("Got device list update edu for %r from %r", user_id, origin)
370402 return
371403
372 rooms = yield self.store.get_rooms_for_user(user_id)
373 if not rooms:
404 room_ids = yield self.store.get_rooms_for_user(user_id)
405 if not room_ids:
374406 # We don't share any rooms with this user. Ignore update, as we
375407 # probably won't get any further updates.
376408 return
174174 "room_alias": room_alias.to_string(),
175175 },
176176 retry_on_dns_fail=False,
177 ignore_backoff=True,
177178 )
178179 except CodeMessageException as e:
179180 logging.warn("Error retrieving alias")
2121 from synapse.api.errors import SynapseError, CodeMessageException
2222 from synapse.types import get_domain_from_id
2323 from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
24 from synapse.util.retryutils import get_retry_limiter, NotRetryingDestination
24 from synapse.util.retryutils import NotRetryingDestination
2525
2626 logger = logging.getLogger(__name__)
2727
120120 def do_remote_query(destination):
121121 destination_query = remote_queries_not_in_cache[destination]
122122 try:
123 limiter = yield get_retry_limiter(
124 destination, self.clock, self.store
123 remote_result = yield self.federation.query_client_keys(
124 destination,
125 {"device_keys": destination_query},
126 timeout=timeout
125127 )
126 with limiter:
127 remote_result = yield self.federation.query_client_keys(
128 destination,
129 {"device_keys": destination_query},
130 timeout=timeout
131 )
132128
133129 for user_id, keys in remote_result["device_keys"].items():
134130 if user_id in destination_query:
238234 def claim_client_keys(destination):
239235 device_keys = remote_queries[destination]
240236 try:
241 limiter = yield get_retry_limiter(
242 destination, self.clock, self.store
237 remote_result = yield self.federation.claim_client_keys(
238 destination,
239 {"one_time_keys": device_keys},
240 timeout=timeout
243241 )
244 with limiter:
245 remote_result = yield self.federation.claim_client_keys(
246 destination,
247 {"one_time_keys": device_keys},
248 timeout=timeout
249 )
250 for user_id, keys in remote_result["one_time_keys"].items():
251 if user_id in device_keys:
252 json_result[user_id] = keys
242 for user_id, keys in remote_result["one_time_keys"].items():
243 if user_id in device_keys:
244 json_result[user_id] = keys
253245 except CodeMessageException as e:
254246 failures[destination] = {
255247 "status": e.code, "message": e.message
315307 # old access_token without an associated device_id. Either way, we
316308 # need to double-check the device is registered to avoid ending up with
317309 # keys without a corresponding device.
318 self.device_handler.check_device_registered(user_id, device_id)
310 yield self.device_handler.check_device_registered(user_id, device_id)
319311
320312 result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
321313
1313 # limitations under the License.
1414
1515 """Contains handlers for federation events."""
16 import synapse.util.logcontext
1617 from signedjson.key import decode_verify_key_bytes
1718 from signedjson.sign import verify_signed_json
1819 from unpaddedbase64 import decode_base64
3031 )
3132 from synapse.util.metrics import measure_func
3233 from synapse.util.logutils import log_function
33 from synapse.util.async import run_on_reactor
34 from synapse.util.async import run_on_reactor, Linearizer
3435 from synapse.util.frozenutils import unfreeze
3536 from synapse.crypto.event_signing import (
3637 compute_event_signature, add_hashes_and_signatures,
7879
7980 # When joining a room we need to queue any events for that room up
8081 self.room_queues = {}
81
82 self._room_pdu_linearizer = Linearizer("fed_room_pdu")
83
84 @defer.inlineCallbacks
8285 @log_function
83 @defer.inlineCallbacks
84 def on_receive_pdu(self, origin, pdu, state=None, auth_chain=None):
85 """ Called by the ReplicationLayer when we have a new pdu. We need to
86 do auth checks and put it through the StateHandler.
87
88 auth_chain and state are None if we already have the necessary state
89 and prev_events in the db
90 """
91 event = pdu
92
93 logger.debug("Got event: %s", event.event_id)
86 def on_receive_pdu(self, origin, pdu, get_missing=True):
87 """ Process a PDU received via a federation /send/ transaction, or
88 via backfill of missing prev_events
89
90 Args:
91 origin (str): server which initiated the /send/ transaction. Will
92 be used to fetch missing events or state.
93 pdu (FrozenEvent): received PDU
94 get_missing (bool): True if we should fetch missing prev_events
95
96 Returns (Deferred): completes with None
97 """
98
99 # We reprocess pdus when we have seen them only as outliers
100 existing = yield self.get_persisted_pdu(
101 origin, pdu.event_id, do_auth=False
102 )
103
104 # FIXME: Currently we fetch an event again when we already have it
105 # if it has been marked as an outlier.
106
107 already_seen = (
108 existing and (
109 not existing.internal_metadata.is_outlier()
110 or pdu.internal_metadata.is_outlier()
111 )
112 )
113 if already_seen:
114 logger.debug("Already seen pdu %s", pdu.event_id)
115 return
94116
95117 # If we are currently in the process of joining this room, then we
96118 # queue up events for later processing.
97 if event.room_id in self.room_queues:
98 self.room_queues[event.room_id].append((pdu, origin))
119 if pdu.room_id in self.room_queues:
120 logger.info("Ignoring PDU %s for room %s from %s for now; join "
121 "in progress", pdu.event_id, pdu.room_id, origin)
122 self.room_queues[pdu.room_id].append((pdu, origin))
99123 return
100124
101 logger.debug("Processing event: %s", event.event_id)
102
103 logger.debug("Event: %s", event)
125 state = None
126
127 auth_chain = []
128
129 have_seen = yield self.store.have_events(
130 [ev for ev, _ in pdu.prev_events]
131 )
132
133 fetch_state = False
134
135 # Get missing pdus if necessary.
136 if not pdu.internal_metadata.is_outlier():
137 # We only backfill backwards to the min depth.
138 min_depth = yield self.get_min_depth_for_context(
139 pdu.room_id
140 )
141
142 logger.debug(
143 "_handle_new_pdu min_depth for %s: %d",
144 pdu.room_id, min_depth
145 )
146
147 prevs = {e_id for e_id, _ in pdu.prev_events}
148 seen = set(have_seen.keys())
149
150 if min_depth and pdu.depth < min_depth:
151 # This is so that we don't notify the user about this
152 # message, to work around the fact that some events will
153 # reference really really old events we really don't want to
154 # send to the clients.
155 pdu.internal_metadata.outlier = True
156 elif min_depth and pdu.depth > min_depth:
157 if get_missing and prevs - seen:
158 # If we're missing stuff, ensure we only fetch stuff one
159 # at a time.
160 logger.info(
161 "Acquiring lock for room %r to fetch %d missing events: %r...",
162 pdu.room_id, len(prevs - seen), list(prevs - seen)[:5],
163 )
164 with (yield self._room_pdu_linearizer.queue(pdu.room_id)):
165 logger.info(
166 "Acquired lock for room %r to fetch %d missing events",
167 pdu.room_id, len(prevs - seen),
168 )
169
170 yield self._get_missing_events_for_pdu(
171 origin, pdu, prevs, min_depth
172 )
173
174 prevs = {e_id for e_id, _ in pdu.prev_events}
175 seen = set(have_seen.keys())
176 if prevs - seen:
177 logger.info(
178 "Still missing %d events for room %r: %r...",
179 len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
180 )
181 fetch_state = True
182
183 if fetch_state:
184 # We need to get the state at this event, since we haven't
185 # processed all the prev events.
186 logger.debug(
187 "_handle_new_pdu getting state for %s",
188 pdu.room_id
189 )
190 try:
191 state, auth_chain = yield self.replication_layer.get_state_for_room(
192 origin, pdu.room_id, pdu.event_id,
193 )
194 except:
195 logger.exception("Failed to get state for event: %s", pdu.event_id)
196
197 yield self._process_received_pdu(
198 origin,
199 pdu,
200 state=state,
201 auth_chain=auth_chain,
202 )
203
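`on_receive_pdu` now owns the depth/outlier decision moved out of the federation server. The branches above can be summarised as a pure function (hypothetical helper; `prevs` and `seen` are sets of event IDs):

```python
def classify_incoming_pdu(depth, min_depth, prevs, seen, is_outlier, get_missing):
    # Events below min_depth are marked as outliers so clients are not
    # notified about references to ancient history; events above it
    # with missing prev_events trigger a backfill attempt first.
    if is_outlier:
        return "process"          # outliers skip the backfill logic
    if min_depth and depth < min_depth:
        return "mark_outlier"
    if min_depth and depth > min_depth and get_missing and prevs - seen:
        return "fetch_missing"
    return "process"
```

`get_missing=False` on recursive calls (i.e. when processing backfilled events) is what prevents unbounded recursion into older and older gaps.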
204 @defer.inlineCallbacks
205 def _get_missing_events_for_pdu(self, origin, pdu, prevs, min_depth):
206 """
207 Args:
208 origin (str): Origin of the pdu. We will ask this server for the missing events
209 pdu: received pdu
210 prevs (set(str)): Set of event IDs which we are missing
211 min_depth (int): Minimum depth of events to return.
212
213 Returns:
214 Deferred<dict(str, str?)>: updated have_seen dictionary
215 """
216 # We recalculate seen, since it may have changed.
217 have_seen = yield self.store.have_events(prevs)
218 seen = set(have_seen.keys())
219
220 if not prevs - seen:
221 # nothing left to do
222 defer.returnValue(have_seen)
223
224 latest = yield self.store.get_latest_event_ids_in_room(
225 pdu.room_id
226 )
227
228 # We add the prev events that we have seen to the latest
229 # list to ensure the remote server doesn't give them to us
230 latest = set(latest)
231 latest |= seen
232
233 logger.info(
234 "Missing %d events for room %r: %r...",
235 len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
236 )
237
238 # XXX: we set timeout to 10s to help workaround
239 # https://github.com/matrix-org/synapse/issues/1733.
240 # The reason is to avoid holding the linearizer lock
241 # whilst processing inbound /send transactions, causing
242 # FDs to stack up and block other inbound transactions
243 # which empirically can currently take up to 30 minutes.
244 #
245 # N.B. this explicitly disables retry attempts.
246 #
247 # N.B. this also increases our chances of falling back to
248 # fetching fresh state for the room if the missing event
249 # can't be found, which slightly reduces our security.
250 # it may also increase our DAG extremity count for the room,
251 # causing additional state resolution? See #1760.
252 # However, fetching state doesn't hold the linearizer lock
253 # apparently.
254 #
255 # see https://github.com/matrix-org/synapse/pull/1744
256
257 missing_events = yield self.replication_layer.get_missing_events(
258 origin,
259 pdu.room_id,
260 earliest_events_ids=list(latest),
261 latest_events=[pdu],
262 limit=10,
263 min_depth=min_depth,
264 timeout=10000,
265 )
266
267 # We want to sort these by depth so we process them and
268 # tell clients about them in order.
269 missing_events.sort(key=lambda x: x.depth)
270
271 for e in missing_events:
272 yield self.on_receive_pdu(
273 origin,
274 e,
275 get_missing=False
276 )
277
278 have_seen = yield self.store.have_events(
279 [ev for ev, _ in pdu.prev_events]
280 )
281 defer.returnValue(have_seen)
282
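The handler above keeps recomputing `have_seen` from the store and comparing it against the event's `prev_events`: if `prevs - seen` is non-empty, ancestors are missing and must be fetched (or full room state fetched instead). A minimal, synapse-free sketch of that set arithmetic (`missing_prevs` is a hypothetical helper, not part of Synapse):

```python
def missing_prevs(prevs, have_seen):
    """Return the prev-event ids that the store does not yet have.

    `have_seen` mirrors store.have_events(): a dict keyed by the event
    ids that are already persisted.
    """
    seen = set(have_seen)          # dict iteration yields its keys
    return set(prevs) - seen

# Two of three ancestors are missing:
assert missing_prevs({"$a", "$b", "$c"}, {"$a": True}) == {"$b", "$c"}
# Nothing left to do once everything is seen:
assert not missing_prevs({"$a"}, {"$a": True})
```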
283 @log_function
284 @defer.inlineCallbacks
285 def _process_received_pdu(self, origin, pdu, state, auth_chain):
286 """ Called when we have a new pdu. We need to do auth checks and put it
287 through the StateHandler.
288 """
289 event = pdu
290
291 logger.debug("Processing event: %s", event)
104292
105293 # FIXME (erikj): Awful hack to make the case where we are not currently
106294 # in the room work
669857 """
670858 logger.debug("Joining %s to %s", joinee, room_id)
671859
672 yield self.store.clean_room_for_join(room_id)
673
674860 origin, event = yield self._make_and_verify_event(
675861 target_hosts,
676862 room_id,
679865 content,
680866 )
681867
868 # This shouldn't happen, because the RoomMemberHandler has a
869 # linearizer lock which only allows one operation per user per room
870 # at a time - so this is just paranoia.
871 assert (room_id not in self.room_queues)
872
682873 self.room_queues[room_id] = []
874
875 yield self.store.clean_room_for_join(room_id)
876
683877 handled_events = set()
684878
685879 try:
732926 room_queue = self.room_queues[room_id]
733927 del self.room_queues[room_id]
734928
735 for p, origin in room_queue:
736 if p.event_id in handled_events:
737 continue
738
739 try:
740 self.on_receive_pdu(origin, p)
741 except:
742 logger.exception("Couldn't handle pdu")
929 # we don't need to wait for the queued events to be processed -
930 # it's just a best-effort thing at this point. We do want to do
931 # them roughly in order, though, otherwise we'll end up making
932 # lots of requests for missing prev_events which we do actually
933 # have. Hence we fire off the deferred, but don't wait for it.
934
935 synapse.util.logcontext.preserve_fn(self._handle_queued_pdus)(
936 room_queue
937 )
743938
744939 defer.returnValue(True)
940
941 @defer.inlineCallbacks
942 def _handle_queued_pdus(self, room_queue):
943 """Process PDUs which got queued up while we were busy send_joining.
944
945 Args:
946 room_queue (list[FrozenEvent, str]): list of PDUs to be processed
947 and the servers that sent them
948 """
949 for p, origin in room_queue:
950 try:
951 logger.info("Processing queued PDU %s which was received "
952 "while we were joining %s", p.event_id, p.room_id)
953 yield self.on_receive_pdu(origin, p)
954 except Exception as e:
955 logger.warn(
956 "Error handling queued PDU %s from %s: %s",
957 p.event_id, origin, e)
745958
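`_handle_queued_pdus` above drains the queue in arrival order and logs, rather than propagates, per-PDU failures, so one bad PDU cannot block the rest. A minimal synchronous analogue of that best-effort drain (the names `drain_best_effort` and `handler` are illustrative, not Synapse APIs):

```python
import logging

logger = logging.getLogger("drain")

def drain_best_effort(queue, handle):
    """Process every (item, origin) pair; a failure on one item is
    logged and never stops the remaining items."""
    handled = []
    for item, origin in queue:
        try:
            handle(origin, item)
            handled.append(item)
        except Exception as e:
            logger.warning("Error handling queued item %s from %s: %s",
                           item, origin, e)
    return handled

def handler(origin, item):
    if item == "pdu2":
        raise ValueError("rejected")   # simulate one bad PDU

queue = [("pdu1", "a.example"), ("pdu2", "b.example"), ("pdu3", "a.example")]
assert drain_best_effort(queue, handler) == ["pdu1", "pdu3"]
```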
746959 @defer.inlineCallbacks
747960 @log_function
7901003 )
7911004
7921005 event.internal_metadata.outlier = False
793 # Send this event on behalf of the origin server since they may not
794 # have an up to data view of the state of the room at this event so
795 # will not know which servers to send the event to.
1006 # Send this event on behalf of the origin server.
1007 #
1008 # The reasons we have the destination server rather than the origin
1009 # server send it are slightly mysterious: the origin server should have
1010 # all the necessary state once it gets the response to the send_join,
1011 # so it could send the event itself if it wanted to. It may be that
1012 # doing it this way reduces failure modes, or avoids certain attacks
1013 # where a new server selectively tells a subset of the federation that
1014 # it has joined.
1015 #
1016 # The fact is that, as of the current writing, Synapse doesn't send out
1017 # the join event over federation after joining, and changing it now
1018 # would introduce the danger of backwards-compatibility problems.
7961019 event.internal_metadata.send_on_behalf_of = origin
7971020
7981021 context, event_stream_id, max_stream_id = yield self._handle_new_event(
8771100 user_id,
8781101 "leave"
8791102 )
880 signed_event = self._sign_event(event)
1103 event = self._sign_event(event)
8811104 except SynapseError:
8821105 raise
8831106 except CodeMessageException as e:
8841107 logger.warn("Failed to reject invite: %s", e)
8851108 raise SynapseError(500, "Failed to reject invite")
8861109
887 # Try the host we successfully got a response to /make_join/
888 # request first.
1110 # Try the host that we successfully called /make_leave/ on first for
1111 # the /send_leave/ request.
8891112 try:
8901113 target_hosts.remove(origin)
8911114 target_hosts.insert(0, origin)
8951118 try:
8961119 yield self.replication_layer.send_leave(
8971120 target_hosts,
898 signed_event
1121 event
8991122 )
9001123 except SynapseError:
9011124 raise
13241547
13251548 @defer.inlineCallbacks
13261549 def _prep_event(self, origin, event, state=None, auth_events=None):
1327
1550 """
1551
1552 Args:
1553 origin:
1554 event:
1555 state:
1556 auth_events:
1557
1558 Returns:
1559 Deferred, which resolves to synapse.events.snapshot.EventContext
1560 """
13281561 context = yield self.state_handler.compute_event_context(
13291562 event, old_state=state,
13301563 )
00 # -*- coding: utf-8 -*-
11 # Copyright 2015, 2016 OpenMarket Ltd
2 # Copyright 2017 Vector Creations Ltd
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
149150 params.update(kwargs)
150151
151152 try:
152 data = yield self.http_client.post_urlencoded_get_json(
153 data = yield self.http_client.post_json_get_json(
153154 "https://%s%s" % (
154155 id_server,
155156 "/_matrix/identity/api/v1/validate/email/requestToken"
160161 except CodeMessageException as e:
161162 logger.info("Proxied requestToken failed: %r", e)
162163 raise e
164
165 @defer.inlineCallbacks
166 def requestMsisdnToken(
167 self, id_server, country, phone_number,
168 client_secret, send_attempt, **kwargs
169 ):
170 yield run_on_reactor()
171
172 if not self._should_trust_id_server(id_server):
173 raise SynapseError(
174 400, "Untrusted ID server '%s'" % id_server,
175 Codes.SERVER_NOT_TRUSTED
176 )
177
178 params = {
179 'country': country,
180 'phone_number': phone_number,
181 'client_secret': client_secret,
182 'send_attempt': send_attempt,
183 }
184 params.update(kwargs)
185
186 try:
187 data = yield self.http_client.post_json_get_json(
188 "https://%s%s" % (
189 id_server,
190 "/_matrix/identity/api/v1/validate/msisdn/requestToken"
191 ),
192 params
193 )
194 defer.returnValue(data)
195 except CodeMessageException as e:
196 logger.info("Proxied requestToken failed: %r", e)
197 raise e
1818 from synapse.api.errors import AuthError, Codes
1919 from synapse.events.utils import serialize_event
2020 from synapse.events.validator import EventValidator
21 from synapse.handlers.presence import format_user_presence_state
2122 from synapse.streams.config import PaginationConfig
2223 from synapse.types import (
2324 UserID, StreamToken,
224225 "content": content,
225226 })
226227
228 now = self.clock.time_msec()
229
227230 ret = {
228231 "rooms": rooms_ret,
229 "presence": presence,
232 "presence": [
233 {
234 "type": "m.presence",
235 "content": format_user_presence_state(event, now),
236 }
237 for event in presence
238 ],
230239 "account_data": account_data_events,
231240 "receipts": receipt,
232241 "end": now_token.to_string(),
2828 from synapse.api.constants import PresenceState
2929 from synapse.storage.presence import UserPresenceState
3030
31 from synapse.util.caches.descriptors import cachedInlineCallbacks
3132 from synapse.util.logcontext import preserve_fn
3233 from synapse.util.logutils import log_function
3334 from synapse.util.metrics import Measure
555556 room_ids_to_states = {}
556557 users_to_states = {}
557558 for state in states:
558 events = yield self.store.get_rooms_for_user(state.user_id)
559 for e in events:
560 room_ids_to_states.setdefault(e.room_id, []).append(state)
559 room_ids = yield self.store.get_rooms_for_user(state.user_id)
560 for room_id in room_ids:
561 room_ids_to_states.setdefault(room_id, []).append(state)
561562
562563 plist = yield self.store.get_presence_list_observers_accepted(state.user_id)
563564 for u in plist:
573574 if not local_states:
574575 continue
575576
576 users = yield self.store.get_users_in_room(room_id)
577 hosts = set(get_domain_from_id(u) for u in users)
577 hosts = yield self.store.get_hosts_in_room(room_id)
578578
579579 for host in hosts:
580580 hosts_to_states.setdefault(host, []).extend(local_states)
718718 for state in updates
719719 ])
720720 else:
721 defer.returnValue([
722 format_user_presence_state(state, now) for state in updates
723 ])
721 defer.returnValue(updates)
724722
725723 @defer.inlineCallbacks
726724 def set_state(self, target_user, state, ignore_status_msg=False):
793791 target_user_ids=[row["observed_user_id"] for row in presence_list],
794792 as_event=False,
795793 )
794
795 now = self.clock.time_msec()
796 results[:] = [format_user_presence_state(r, now) for r in results]
796797
797798 is_accepted = {
798799 row["observed_user_id"]: row["accepted"] for row in presence_list
846847 )
847848
848849 state_dict = yield self.get_state(observed_user, as_event=False)
850 state_dict = format_user_presence_state(state_dict, self.clock.time_msec())
849851
850852 self.federation.send_edu(
851853 destination=observer_user.domain,
909911 def is_visible(self, observed_user, observer_user):
910912 """Returns whether a user can see another user's presence.
911913 """
912 observer_rooms = yield self.store.get_rooms_for_user(observer_user.to_string())
913 observed_rooms = yield self.store.get_rooms_for_user(observed_user.to_string())
914
915 observer_room_ids = set(r.room_id for r in observer_rooms)
916 observed_room_ids = set(r.room_id for r in observed_rooms)
914 observer_room_ids = yield self.store.get_rooms_for_user(
915 observer_user.to_string()
916 )
917 observed_room_ids = yield self.store.get_rooms_for_user(
918 observed_user.to_string()
919 )
917920
918921 if observer_room_ids & observed_room_ids:
919922 defer.returnValue(True)
978981 return False
979982
980983
981 def format_user_presence_state(state, now):
984 def format_user_presence_state(state, now, include_user_id=True):
982985 """Convert UserPresenceState to a format that can be sent down to clients
983986 and to other servers.
987
988 The "user_id" is optional so that this function can be used to format presence
989 updates for client /sync responses and for federation /send requests.
984990 """
985991 content = {
986992 "presence": state.state,
987 "user_id": state.user_id,
988993 }
994 if include_user_id:
995 content["user_id"] = state.user_id
989996 if state.last_active_ts:
990997 content["last_active_ago"] = now - state.last_active_ts
991998 if state.status_msg and state.state != PresenceState.OFFLINE:
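The `include_user_id` flag added above lets one formatter serve two call sites: client `/sync` responses omit the `user_id` (the sync response structure already implies it), while federation `/send` EDUs include it. A simplified, dict-based sketch of that optional-field pattern (`format_presence` here is an illustrative stand-in, not the real `format_user_presence_state`):

```python
def format_presence(state, now_ms, include_user_id=True):
    """Serialise a presence state dict, optionally including user_id."""
    content = {"presence": state["state"]}
    if include_user_id:
        content["user_id"] = state["user_id"]
    if state.get("last_active_ts"):
        content["last_active_ago"] = now_ms - state["last_active_ts"]
    return content

state = {"user_id": "@alice:example.org", "state": "online",
         "last_active_ts": 1000}
# Client /sync shape: no user_id.
assert "user_id" not in format_presence(state, 4000, include_user_id=False)
# Federation shape: user_id present, last_active_ago computed from "now".
assert format_presence(state, 4000)["last_active_ago"] == 3000
```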
10241031 # sending down the rare duplicate is not a concern.
10251032
10261033 with Measure(self.clock, "presence.get_new_events"):
1027 user_id = user.to_string()
10281034 if from_key is not None:
10291035 from_key = int(from_key)
10301036
10331039
10341040 max_token = self.store.get_current_presence_token()
10351041
1036 plist = yield self.store.get_presence_list_accepted(user.localpart)
1037 users_interested_in = set(row["observed_user_id"] for row in plist)
1038 users_interested_in.add(user_id) # So that we receive our own presence
1039
1040 users_who_share_room = yield self.store.get_users_who_share_room_with_user(
1041 user_id
1042 )
1043 users_interested_in.update(users_who_share_room)
1044
1045 if explicit_room_id:
1046 user_ids = yield self.store.get_users_in_room(explicit_room_id)
1047 users_interested_in.update(user_ids)
1042 users_interested_in = yield self._get_interested_in(user, explicit_room_id)
10481043
10491044 user_ids_changed = set()
10501045 changed = None
10721067
10731068 updates = yield presence.current_state_for_users(user_ids_changed)
10741069
1075 now = self.clock.time_msec()
1076
1077 defer.returnValue(([
1078 {
1079 "type": "m.presence",
1080 "content": format_user_presence_state(s, now),
1081 }
1082 for s in updates.values()
1083 if include_offline or s.state != PresenceState.OFFLINE
1084 ], max_token))
1070 if include_offline:
1071 defer.returnValue((updates.values(), max_token))
1072 else:
1073 defer.returnValue(([
1074 s for s in updates.itervalues()
1075 if s.state != PresenceState.OFFLINE
1076 ], max_token))
10851077
10861078 def get_current_key(self):
10871079 return self.store.get_current_presence_token()
10881080
10891081 def get_pagination_rows(self, user, pagination_config, key):
10901082 return self.get_new_events(user, from_key=None, include_offline=False)
1083
1084 @cachedInlineCallbacks(num_args=2, cache_context=True)
1085 def _get_interested_in(self, user, explicit_room_id, cache_context):
1086 """Returns the set of users that the given user should see presence
1087 updates for
1088 """
1089 user_id = user.to_string()
1090 plist = yield self.store.get_presence_list_accepted(
1091 user.localpart, on_invalidate=cache_context.invalidate,
1092 )
1093 users_interested_in = set(row["observed_user_id"] for row in plist)
1094 users_interested_in.add(user_id) # So that we receive our own presence
1095
1096 users_who_share_room = yield self.store.get_users_who_share_room_with_user(
1097 user_id, on_invalidate=cache_context.invalidate,
1098 )
1099 users_interested_in.update(users_who_share_room)
1100
1101 if explicit_room_id:
1102 user_ids = yield self.store.get_users_in_room(
1103 explicit_room_id, on_invalidate=cache_context.invalidate,
1104 )
1105 users_interested_in.update(user_ids)
1106
1107 defer.returnValue(users_interested_in)
10911108
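`_get_interested_in` above memoises the "which users should this user see presence for" set, keyed on `(user, explicit_room_id)`, with `cache_context` wiring so the entry is invalidated when the underlying store caches change. A minimal analogue using stdlib `functools.lru_cache` (the static lookup tables stand in for the store; real Synapse invalidation is more involved):

```python
from functools import lru_cache

# Illustrative stand-ins for store lookups:
PRESENCE_LIST = {"@alice:hs": {"@bob:hs"}}
SHARED_ROOM_USERS = {"@alice:hs": {"@carol:hs"}}
ROOM_MEMBERS = {"!room:hs": {"@dave:hs"}}

@lru_cache(maxsize=None)
def get_interested_in(user_id, explicit_room_id=None):
    interested = set(PRESENCE_LIST.get(user_id, ()))
    interested.add(user_id)              # we always see our own presence
    interested |= SHARED_ROOM_USERS.get(user_id, set())
    if explicit_room_id:
        interested |= ROOM_MEMBERS.get(explicit_room_id, set())
    return frozenset(interested)

assert get_interested_in("@alice:hs") == {"@alice:hs", "@bob:hs", "@carol:hs"}
assert "@dave:hs" in get_interested_in("@alice:hs", "!room:hs")
```

Note that `lru_cache` never invalidates; Synapse's `cachedInlineCallbacks` with `cache_context` exists precisely to evict entries when a dependent cache (e.g. room membership) changes.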
10921109
10931110 def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now):
11561173 # If there have been no syncs for a while (and none ongoing),
11571174 # set presence to offline
11581175 if user_id not in syncing_user_ids:
1159 if now - state.last_user_sync_ts > SYNC_ONLINE_TIMEOUT:
1176 # If the user has done something recently but hasn't synced,
1177 # don't set them as offline.
1178 sync_or_active = max(state.last_user_sync_ts, state.last_active_ts)
1179 if now - sync_or_active > SYNC_ONLINE_TIMEOUT:
11601180 state = state.copy_and_replace(
11611181 state=PresenceState.OFFLINE,
11621182 status_msg=None,
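The timeout change above makes a user go offline only when *both* their last sync and their last activity are older than `SYNC_ONLINE_TIMEOUT`, so someone who has done something recently but not synced is not prematurely marked offline. A small sketch of that rule (`should_go_offline` and the timeout value are illustrative):

```python
SYNC_ONLINE_TIMEOUT = 30_000  # ms; illustrative value, not Synapse's

def should_go_offline(now, last_user_sync_ts, last_active_ts):
    # Take whichever of "last sync" / "last activity" is more recent.
    sync_or_active = max(last_user_sync_ts, last_active_ts)
    return now - sync_or_active > SYNC_ONLINE_TIMEOUT

# Synced long ago but active recently: stays online.
assert not should_go_offline(now=100_000, last_user_sync_ts=0,
                             last_active_ts=90_000)
# Both sync and activity are stale: goes offline.
assert should_go_offline(now=100_000, last_user_sync_ts=0,
                         last_active_ts=50_000)
```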
5151 args={
5252 "user_id": target_user.to_string(),
5353 "field": "displayname",
54 }
54 },
55 ignore_backoff=True,
5556 )
5657 except CodeMessageException as e:
5758 if e.code != 404:
9899 args={
99100 "user_id": target_user.to_string(),
100101 "field": "avatar_url",
101 }
102 },
103 ignore_backoff=True,
102104 )
103105 except CodeMessageException as e:
104106 if e.code != 404:
155157
156158 self.ratelimit(requester)
157159
158 joins = yield self.store.get_rooms_for_user(
160 room_ids = yield self.store.get_rooms_for_user(
159161 user.to_string(),
160162 )
161163
162 for j in joins:
164 for room_id in room_ids:
163165 handler = self.hs.get_handlers().room_member_handler
164166 try:
165167 # Assume the user isn't a guest because we don't let guests set
170172 yield handler.update_membership(
171173 requester,
172174 user,
173 j.room_id,
175 room_id,
174176 "join", # We treat a profile update like a join.
175177 ratelimit=False, # Try to hide that these events aren't atomic.
176178 )
177179 except Exception as e:
178180 logger.warn(
179181 "Failed to update join event for room %s - %s",
180 j.room_id, str(e.message)
182 room_id, str(e.message)
181183 )
209209 else:
210210 from_key = None
211211
212 rooms = yield self.store.get_rooms_for_user(user.to_string())
213 rooms = [room.room_id for room in rooms]
212 room_ids = yield self.store.get_rooms_for_user(user.to_string())
214213 events = yield self.store.get_linearized_receipts_for_rooms(
215 rooms,
214 room_ids,
216215 from_key=from_key,
217216 to_key=to_key,
218217 )
2020 EventTypes, JoinRules,
2121 )
2222 from synapse.util.async import concurrently_execute
23 from synapse.util.caches.descriptors import cachedInlineCallbacks
2324 from synapse.util.caches.response_cache import ResponseCache
2425 from synapse.types import ThirdPartyInstanceID
2526
6162 appservice and network id to use an appservice specific one.
6263 Setting to None returns all public rooms across all lists.
6364 """
65 logger.info(
66 "Getting public room list: limit=%r, since=%r, search=%r, network=%r",
67 limit, since_token, bool(search_filter), network_tuple,
68 )
6469 if search_filter:
6570 # We explicitly don't bother caching searches or requests for
6671 # appservice specific lists.
9095
9196 rooms_to_order_value = {}
9297 rooms_to_num_joined = {}
93 rooms_to_latest_event_ids = {}
9498
9599 newly_visible = []
96100 newly_unpublished = []
115119
116120 @defer.inlineCallbacks
117121 def get_order_for_room(room_id):
118 latest_event_ids = rooms_to_latest_event_ids.get(room_id, None)
119 if not latest_event_ids:
122 # Most of the rooms won't have changed between the since token and
123 # now (especially if the since token is "now"). So, we can ask what
124 # the current users are in a room (that will hit a cache) and then
125 # check if the room has changed since the since token. (We have to
126 # do it in that order to avoid races).
127 # If things have changed then fall back to getting the current state
128 # at the since token.
129 joined_users = yield self.store.get_users_in_room(room_id)
130 if self.store.has_room_changed_since(room_id, stream_token):
120131 latest_event_ids = yield self.store.get_forward_extremeties_for_room(
121132 room_id, stream_token
122133 )
123 rooms_to_latest_event_ids[room_id] = latest_event_ids
124
125 if not latest_event_ids:
126 return
127
128 joined_users = yield self.state_handler.get_current_user_in_room(
129 room_id, latest_event_ids,
130 )
134
135 if not latest_event_ids:
136 return
137
138 joined_users = yield self.state_handler.get_current_user_in_room(
139 room_id, latest_event_ids,
140 )
141
131142 num_joined_users = len(joined_users)
132143 rooms_to_num_joined[room_id] = num_joined_users
133144
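The comment above describes a cheap-check-then-expensive-fallback pattern: answer from the cached "current members" first, and only recompute membership at the stream token if the room has changed since that token, checking in that order to avoid races. A compact sketch with a fake store (`FakeStore` and `num_joined_at` are illustrative, not Synapse APIs):

```python
class FakeStore:
    """Stand-in for the datastore used by the ordering code."""
    def __init__(self, members_now, changed, members_at_token):
        self._now = members_now
        self._changed = changed
        self._at = members_at_token

    def current_members(self, room_id):
        return self._now[room_id]          # cheap, cached lookup

    def changed_since(self, room_id, token):
        return self._changed.get(room_id, False)

    def members_at(self, room_id, token):
        return self._at[room_id]           # expensive historical lookup

def num_joined_at(room_id, token, store):
    joined = store.current_members(room_id)    # optimistic cheap answer
    if store.changed_since(room_id, token):    # checked after, to dodge races
        joined = store.members_at(room_id, token)
    return len(joined)

store = FakeStore(
    members_now={"!r:hs": {"@a", "@b", "@c"}},
    changed={"!r:hs": True},
    members_at_token={"!r:hs": {"@a", "@b"}},
)
# Room changed since the token, so the historical count (2) wins:
assert num_joined_at("!r:hs", token=5, store=store) == 2
```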
164175 rooms_to_scan = rooms_to_scan[:since_token.current_limit]
165176 rooms_to_scan.reverse()
166177
167 # Actually generate the entries. _generate_room_entry will append to
178 # Actually generate the entries. _append_room_entry_to_chunk will append to
168179 # chunk but will stop if len(chunk) > limit
169180 chunk = []
170181 if limit and not search_filter:
171182 step = limit + 1
172183 for i in xrange(0, len(rooms_to_scan), step):
173184 # We iterate here because the vast majority of cases we'll stop
174 # at first iteration, but occasionally _generate_room_entry
185 # at first iteration, but occasionally _append_room_entry_to_chunk
175186 # won't append to the chunk and so we need to loop again.
176187 # We don't want to scan over the entire range either as that
177188 # would potentially waste a lot of work.
178189 yield concurrently_execute(
179 lambda r: self._generate_room_entry(
190 lambda r: self._append_room_entry_to_chunk(
180191 r, rooms_to_num_joined[r],
181192 chunk, limit, search_filter
182193 ),
186197 break
187198 else:
188199 yield concurrently_execute(
189 lambda r: self._generate_room_entry(
200 lambda r: self._append_room_entry_to_chunk(
190201 r, rooms_to_num_joined[r],
191202 chunk, limit, search_filter
192203 ),
255266 defer.returnValue(results)
256267
257268 @defer.inlineCallbacks
258 def _generate_room_entry(self, room_id, num_joined_users, chunk, limit,
259 search_filter):
269 def _append_room_entry_to_chunk(self, room_id, num_joined_users, chunk, limit,
270 search_filter):
271 """Generate the entry for a room in the public room list and append it
272 to the `chunk` if it matches the search filter
273 """
260274 if limit and len(chunk) > limit + 1:
261275 # We've already got enough, so lets just drop it.
262276 return
263277
278 result = yield self._generate_room_entry(room_id, num_joined_users)
279
280 if result and _matches_room_entry(result, search_filter):
281 chunk.append(result)
282
283 @cachedInlineCallbacks(num_args=1, cache_context=True)
284 def _generate_room_entry(self, room_id, num_joined_users, cache_context):
285 """Returns the entry for a room
286 """
264287 result = {
265288 "room_id": room_id,
266289 "num_joined_members": num_joined_users,
267290 }
268291
269 current_state_ids = yield self.state_handler.get_current_state_ids(room_id)
292 current_state_ids = yield self.store.get_current_state_ids(
293 room_id, on_invalidate=cache_context.invalidate,
294 )
270295
271296 event_map = yield self.store.get_events([
272 event_id for key, event_id in current_state_ids.items()
297 event_id for key, event_id in current_state_ids.iteritems()
273298 if key[0] in (
274299 EventTypes.JoinRules,
275300 EventTypes.Name,
293318 if join_rule and join_rule != JoinRules.PUBLIC:
294319 defer.returnValue(None)
295320
296 aliases = yield self.store.get_aliases_for_room(room_id)
321 aliases = yield self.store.get_aliases_for_room(
322 room_id, on_invalidate=cache_context.invalidate
323 )
297324 if aliases:
298325 result["aliases"] = aliases
299326
333360 if avatar_url:
334361 result["avatar_url"] = avatar_url
335362
336 if _matches_room_entry(result, search_filter):
337 chunk.append(result)
363 defer.returnValue(result)
338364
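The refactor above splits the per-room entry generation (cacheable, independent of the request) from the per-request search filtering and limit handling, which is what lets `_generate_room_entry` be wrapped in `cachedInlineCallbacks`. A simplified sketch of that split, using stdlib `lru_cache` in place of Synapse's cache (`matches` stands in for the search-filter check):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def generate_room_entry(room_id, num_joined):
    """Cacheable part: build the public-list entry for one room."""
    return {"room_id": room_id, "num_joined_members": num_joined}

def append_room_entry_to_chunk(room_id, num_joined, chunk, limit, matches):
    """Per-request part: filter and append, stopping once full."""
    if limit and len(chunk) > limit + 1:
        return                      # already have enough entries
    entry = generate_room_entry(room_id, num_joined)
    if entry and matches(entry):
        chunk.append(entry)

chunk = []
append_room_entry_to_chunk("!a:hs", 3, chunk, limit=10,
                           matches=lambda e: True)
assert chunk[0]["num_joined_members"] == 3
```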
339365 @defer.inlineCallbacks
340366 def get_remote_public_room_list(self, server_name, limit=None, since_token=None,
1919 from synapse.util.caches.response_cache import ResponseCache
2020 from synapse.push.clientformat import format_push_rules_for_user
2121 from synapse.visibility import filter_events_for_client
22 from synapse.types import RoomStreamToken
2223
2324 from twisted.internet import defer
2425
224225 with Measure(self.clock, "ephemeral_by_room"):
225226 typing_key = since_token.typing_key if since_token else "0"
226227
227 rooms = yield self.store.get_rooms_for_user(sync_config.user.to_string())
228 room_ids = [room.room_id for room in rooms]
228 room_ids = yield self.store.get_rooms_for_user(sync_config.user.to_string())
229229
230230 typing_source = self.event_sources.sources["typing"]
231231 typing, typing_key = yield typing_source.get_new_events(
567567 since_token = sync_result_builder.since_token
568568
569569 if since_token and since_token.device_list_key:
570 rooms = yield self.store.get_rooms_for_user(user_id)
571 room_ids = set(r.room_id for r in rooms)
570 room_ids = yield self.store.get_rooms_for_user(user_id)
572571
573572 user_ids_changed = set()
574573 changed = yield self.store.get_user_whose_devices_changed(
575574 since_token.device_list_key
576575 )
577576 for other_user_id in changed:
578 other_rooms = yield self.store.get_rooms_for_user(other_user_id)
579 if room_ids.intersection(e.room_id for e in other_rooms):
577 other_room_ids = yield self.store.get_rooms_for_user(other_user_id)
578 if room_ids.intersection(other_room_ids):
580579 user_ids_changed.add(other_user_id)
581580
582581 defer.returnValue(user_ids_changed)
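The device-list filter above reports only users whose devices changed *and* who share at least one room with the syncing user, now intersecting plain room-id sets instead of unpacking row objects. A self-contained sketch of that filter (`changed_users_to_report` and `rooms_of` are illustrative names):

```python
def changed_users_to_report(my_room_ids, changed, rooms_of):
    """Keep only changed users who share a room with us."""
    my_rooms = set(my_room_ids)
    return {
        u for u in changed
        if my_rooms.intersection(rooms_of(u))
    }

rooms = {"@bob:hs": {"!a:hs"}, "@eve:hs": {"!z:hs"}}
out = changed_users_to_report(
    {"!a:hs", "!b:hs"},            # rooms the syncing user is in
    {"@bob:hs", "@eve:hs"},        # users whose devices changed
    lambda u: rooms[u],
)
assert out == {"@bob:hs"}          # @eve:hs shares no room, so dropped
```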
720719 extra_users_ids.update(users)
721720 extra_users_ids.discard(user.to_string())
722721
723 states = yield self.presence_handler.get_states(
724 extra_users_ids,
725 as_event=True,
726 )
727 presence.extend(states)
728
729 # Deduplicate the presence entries so that there's at most one per user
730 presence = {p["content"]["user_id"]: p for p in presence}.values()
722 if extra_users_ids:
723 states = yield self.presence_handler.get_states(
724 extra_users_ids,
725 )
726 presence.extend(states)
727
728 # Deduplicate the presence entries so that there's at most one per user
729 presence = {p.user_id: p for p in presence}.values()
731730
732731 presence = sync_config.filter_collection.filter_presence(
733732 presence
764763 )
765764 sync_result_builder.now_token = now_token
766765
766 # We check up front if anything has changed, if it hasn't then there is
767 # no point in going further.
768 since_token = sync_result_builder.since_token
769 if not sync_result_builder.full_state:
770 if since_token and not ephemeral_by_room and not account_data_by_room:
771 have_changed = yield self._have_rooms_changed(sync_result_builder)
772 if not have_changed:
773 tags_by_room = yield self.store.get_updated_tags(
774 user_id,
775 since_token.account_data_key,
776 )
777 if not tags_by_room:
778 logger.debug("no-oping sync")
779 defer.returnValue(([], []))
780
767781 ignored_account_data = yield self.store.get_global_account_data_by_type_for_user(
768782 "m.ignored_user_list", user_id=user_id,
769783 )
773787 else:
774788 ignored_users = frozenset()
775789
776 if sync_result_builder.since_token:
790 if since_token:
777791 res = yield self._get_rooms_changed(sync_result_builder, ignored_users)
778792 room_entries, invited, newly_joined_rooms = res
779793
780794 tags_by_room = yield self.store.get_updated_tags(
781 user_id,
782 sync_result_builder.since_token.account_data_key,
795 user_id, since_token.account_data_key,
783796 )
784797 else:
785798 res = yield self._get_all_rooms(sync_result_builder, ignored_users)
804817
805818 # Now we want to get any newly joined users
806819 newly_joined_users = set()
807 if sync_result_builder.since_token:
820 if since_token:
808821 for joined_sync in sync_result_builder.joined:
809822 it = itertools.chain(
810823 joined_sync.timeline.events, joined_sync.state.values()
817830 defer.returnValue((newly_joined_rooms, newly_joined_users))
818831
819832 @defer.inlineCallbacks
820 def _get_rooms_changed(self, sync_result_builder, ignored_users):
821 """Gets the changes that have happened since the last sync.
822
823 Args:
824 sync_result_builder(SyncResultBuilder)
825 ignored_users(set(str)): Set of users ignored by user.
826
827 Returns:
828 Deferred(tuple): Returns a tuple of the form:
829 `([RoomSyncResultBuilder], [InvitedSyncResult], newly_joined_rooms)`
833 def _have_rooms_changed(self, sync_result_builder):
834 """Returns whether there may be any new events that should be sent down
835 the sync. Returns True if there are.
830836 """
831837 user_id = sync_result_builder.sync_config.user.to_string()
832838 since_token = sync_result_builder.since_token
833839 now_token = sync_result_builder.now_token
834 sync_config = sync_result_builder.sync_config
835840
836841 assert since_token
842
843 # Get a list of membership change events that have happened.
844 rooms_changed = yield self.store.get_membership_changes_for_user(
845 user_id, since_token.room_key, now_token.room_key
846 )
847
848 if rooms_changed:
849 defer.returnValue(True)
837850
838851 app_service = self.store.get_app_service_by_user_id(user_id)
839852 if app_service:
840853 rooms = yield self.store.get_app_service_rooms(app_service)
841854 joined_room_ids = set(r.room_id for r in rooms)
842855 else:
843 rooms = yield self.store.get_rooms_for_user(user_id)
856 joined_room_ids = yield self.store.get_rooms_for_user(user_id)
857
858 stream_id = RoomStreamToken.parse_stream_token(since_token.room_key).stream
859 for room_id in joined_room_ids:
860 if self.store.has_room_changed_since(room_id, stream_id):
861 defer.returnValue(True)
862 defer.returnValue(False)
863
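`_have_rooms_changed` above is a fast path: before building a full sync response, it checks cheap signals (any membership changes since the token, then a per-room "has the stream moved" flag) and lets the caller no-op if nothing happened. A condensed sketch of that short-circuit (`have_rooms_changed` and `changed_since` are illustrative stand-ins):

```python
def have_rooms_changed(joined_room_ids, membership_changes,
                       changed_since, stream_id):
    """Cheap pre-check: True if a sync response might contain anything."""
    if membership_changes:               # any membership event at all
        return True
    # Otherwise ask the per-room stream cache, room by room.
    return any(changed_since(r, stream_id) for r in joined_room_ids)

changed_rooms = {"!busy:hs"}
check = lambda room_id, stream_id: room_id in changed_rooms

assert have_rooms_changed({"!quiet:hs"}, [], check, 10) is False
assert have_rooms_changed({"!busy:hs", "!quiet:hs"}, [], check, 10) is True
```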
864 @defer.inlineCallbacks
865 def _get_rooms_changed(self, sync_result_builder, ignored_users):
866 """Gets the changes that have happened since the last sync.
867
868 Args:
869 sync_result_builder(SyncResultBuilder)
870 ignored_users(set(str)): Set of users ignored by user.
871
872 Returns:
873 Deferred(tuple): Returns a tuple of the form:
874 `([RoomSyncResultBuilder], [InvitedSyncResult], newly_joined_rooms)`
875 """
876 user_id = sync_result_builder.sync_config.user.to_string()
877 since_token = sync_result_builder.since_token
878 now_token = sync_result_builder.now_token
879 sync_config = sync_result_builder.sync_config
880
881 assert since_token
882
883 app_service = self.store.get_app_service_by_user_id(user_id)
884 if app_service:
885 rooms = yield self.store.get_app_service_rooms(app_service)
844886 joined_room_ids = set(r.room_id for r in rooms)
887 else:
888 joined_room_ids = yield self.store.get_rooms_for_user(user_id)
845889
846890 # Get a list of membership change events that have happened.
847891 rooms_changed = yield self.store.get_membership_changes_for_user(
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
15
14 import synapse.util.retryutils
1615 from twisted.internet import defer, reactor, protocol
1716 from twisted.internet.error import DNSLookupError
1817 from twisted.web.client import readBody, HTTPConnectionPool, Agent
2120
2221 from synapse.http.endpoint import matrix_federation_endpoint
2322 from synapse.util.async import sleep
24 from synapse.util.logcontext import preserve_context_over_fn
23 from synapse.util import logcontext
2524 import synapse.metrics
2625
2726 from canonicaljson import encode_canonical_json
9392 reactor, MatrixFederationEndpointFactory(hs), pool=pool
9493 )
9594 self.clock = hs.get_clock()
95 self._store = hs.get_datastore()
9696 self.version_string = hs.version_string
9797 self._next_id = 1
9898
102102 )
103103
104104 @defer.inlineCallbacks
105 def _create_request(self, destination, method, path_bytes,
106 body_callback, headers_dict={}, param_bytes=b"",
107 query_bytes=b"", retry_on_dns_fail=True,
108 timeout=None, long_retries=False):
109 """ Creates and sends a request to the given url
105 def _request(self, destination, method, path,
106 body_callback, headers_dict={}, param_bytes=b"",
107 query_bytes=b"", retry_on_dns_fail=True,
108 timeout=None, long_retries=False,
109 ignore_backoff=False,
110 backoff_on_404=False):
111 """ Creates and sends a request to the given server
112 Args:
113 destination (str): The remote server to send the HTTP request to.
114 method (str): HTTP method
115 path (str): The HTTP path
116 ignore_backoff (bool): true to ignore the historical backoff data
117 and try the request anyway.
118 backoff_on_404 (bool): Back off if we get a 404
119
120 Returns:
121 Deferred: resolves with the http response object on success.
122
123 Fails with ``HTTPRequestException``: if we get an HTTP response
124 code >= 300.
125 Fails with ``NotRetryingDestination`` if we are not yet ready
126 to retry this server.
110127 """
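The `ignore_backoff` and `backoff_on_404` parameters documented above plug into a per-destination retry limiter: a host that recently failed is refused until its backoff expires, unless the caller explicitly opts out. A minimal sketch of such a limiter (`RetryLimiter` is an illustrative stand-in for `synapse.util.retryutils.get_retry_limiter`):

```python
import time

class NotRetryingDestination(Exception):
    """Raised when a destination is still inside its backoff window."""

class RetryLimiter:
    def __init__(self):
        self._retry_at = {}   # destination -> monotonic deadline

    def check(self, destination, ignore_backoff=False):
        deadline = self._retry_at.get(destination, 0)
        if not ignore_backoff and time.monotonic() < deadline:
            raise NotRetryingDestination(destination)

    def record_failure(self, destination, backoff_s):
        self._retry_at[destination] = time.monotonic() + backoff_s

limiter = RetryLimiter()
limiter.record_failure("bad.example", backoff_s=60)
try:
    limiter.check("bad.example")
    raise AssertionError("expected backoff to apply")
except NotRetryingDestination:
    pass
limiter.check("bad.example", ignore_backoff=True)  # explicit bypass, no raise
```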
111 headers_dict[b"User-Agent"] = [self.version_string]
112 headers_dict[b"Host"] = [destination]
113
114 url_bytes = self._create_url(
115 destination, path_bytes, param_bytes, query_bytes
116 )
117
118 txn_id = "%s-O-%s" % (method, self._next_id)
119 self._next_id = (self._next_id + 1) % (sys.maxint - 1)
120
121 outbound_logger.info(
122 "{%s} [%s] Sending request: %s %s",
123 txn_id, destination, method, url_bytes
124 )
125
126 # XXX: Would be much nicer to retry only at the transaction-layer
127 # (once we have reliable transactions in place)
128 if long_retries:
129 retries_left = MAX_LONG_RETRIES
130 else:
131 retries_left = MAX_SHORT_RETRIES
132
133 http_url_bytes = urlparse.urlunparse(
134 ("", "", path_bytes, param_bytes, query_bytes, "")
135 )
136
137 log_result = None
138 try:
139 while True:
140 producer = None
141 if body_callback:
142 producer = body_callback(method, http_url_bytes, headers_dict)
143
144 try:
145 def send_request():
146 request_deferred = preserve_context_over_fn(
147 self.agent.request,
128 limiter = yield synapse.util.retryutils.get_retry_limiter(
129 destination,
130 self.clock,
131 self._store,
132 backoff_on_404=backoff_on_404,
133 ignore_backoff=ignore_backoff,
134 )
135
136 destination = destination.encode("ascii")
137 path_bytes = path.encode("ascii")
138 with limiter:
139 headers_dict[b"User-Agent"] = [self.version_string]
140 headers_dict[b"Host"] = [destination]
141
142 url_bytes = self._create_url(
143 destination, path_bytes, param_bytes, query_bytes
144 )
145
146 txn_id = "%s-O-%s" % (method, self._next_id)
147 self._next_id = (self._next_id + 1) % (sys.maxint - 1)
148
149 outbound_logger.info(
150 "{%s} [%s] Sending request: %s %s",
151 txn_id, destination, method, url_bytes
152 )
153
154 # XXX: Would be much nicer to retry only at the transaction-layer
155 # (once we have reliable transactions in place)
156 if long_retries:
157 retries_left = MAX_LONG_RETRIES
158 else:
159 retries_left = MAX_SHORT_RETRIES
160
161 http_url_bytes = urlparse.urlunparse(
162 ("", "", path_bytes, param_bytes, query_bytes, "")
163 )
164
165 log_result = None
166 try:
167 while True:
168 producer = None
169 if body_callback:
170 producer = body_callback(method, http_url_bytes, headers_dict)
171
172 try:
173 def send_request():
174 request_deferred = self.agent.request(
175 method,
176 url_bytes,
177 Headers(headers_dict),
178 producer
179 )
180
181 return self.clock.time_bound_deferred(
182 request_deferred,
183 time_out=timeout / 1000. if timeout else 60,
184 )
185
186 with logcontext.PreserveLoggingContext():
187 response = yield send_request()
188
189 log_result = "%d %s" % (response.code, response.phrase,)
190 break
191 except Exception as e:
192 if not retry_on_dns_fail and isinstance(e, DNSLookupError):
193 logger.warn(
194 "DNS Lookup failed to %s with %s",
195 destination,
196 e
197 )
198 log_result = "DNS Lookup failed to %s with %s" % (
199 destination, e
200 )
201 raise
202
203 logger.warn(
204 "{%s} Sending request failed to %s: %s %s: %s - %s",
205 txn_id,
206 destination,
148207 method,
149208 url_bytes,
150 Headers(headers_dict),
151 producer
209 type(e).__name__,
210 _flatten_response_never_received(e),
152211 )
153212
154 return self.clock.time_bound_deferred(
155 request_deferred,
156 time_out=timeout / 1000. if timeout else 60,
213 log_result = "%s - %s" % (
214 type(e).__name__, _flatten_response_never_received(e),
157215 )
158216
159 response = yield preserve_context_over_fn(send_request)
160
161 log_result = "%d %s" % (response.code, response.phrase,)
162 break
163 except Exception as e:
164 if not retry_on_dns_fail and isinstance(e, DNSLookupError):
165 logger.warn(
166 "DNS Lookup failed to %s with %s",
167 destination,
168 e
169 )
170 log_result = "DNS Lookup failed to %s with %s" % (
171 destination, e
172 )
173 raise
174
175 logger.warn(
176 "{%s} Sending request failed to %s: %s %s: %s - %s",
177 txn_id,
178 destination,
179 method,
180 url_bytes,
181 type(e).__name__,
182 _flatten_response_never_received(e),
183 )
184
185 log_result = "%s - %s" % (
186 type(e).__name__, _flatten_response_never_received(e),
187 )
188
189 if retries_left and not timeout:
190 if long_retries:
191 delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left)
192 delay = min(delay, 60)
193 delay *= random.uniform(0.8, 1.4)
217 if retries_left and not timeout:
218 if long_retries:
219 delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left)
220 delay = min(delay, 60)
221 delay *= random.uniform(0.8, 1.4)
222 else:
223 delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left)
224 delay = min(delay, 2)
225 delay *= random.uniform(0.8, 1.4)
226
227 yield sleep(delay)
228 retries_left -= 1
194229 else:
195 delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left)
196 delay = min(delay, 2)
197 delay *= random.uniform(0.8, 1.4)
198
199 yield sleep(delay)
200 retries_left -= 1
201 else:
202 raise
203 finally:
204 outbound_logger.info(
205 "{%s} [%s] Result: %s",
206 txn_id,
207 destination,
208 log_result,
209 )
210
211 if 200 <= response.code < 300:
212 pass
213 else:
214 # :'(
215 # Update transactions table?
216 body = yield preserve_context_over_fn(readBody, response)
217 raise HttpResponseException(
218 response.code, response.phrase, body
219 )
220
221 defer.returnValue(response)
230 raise
231 finally:
232 outbound_logger.info(
233 "{%s} [%s] Result: %s",
234 txn_id,
235 destination,
236 log_result,
237 )
238
239 if 200 <= response.code < 300:
240 pass
241 else:
242 # :'(
243 # Update transactions table?
244 with logcontext.PreserveLoggingContext():
245 body = yield readBody(response)
246 raise HttpResponseException(
247 response.code, response.phrase, body
248 )
249
250 defer.returnValue(response)
222251
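The retry loop above sleeps between attempts using capped exponential backoff with jitter. A standalone sketch of that delay schedule (the `MAX_*` constants are assumed values for illustration; the real ones live elsewhere in the module):

```python
import random

# Assumed constants for illustration.
MAX_LONG_RETRIES = 10
MAX_SHORT_RETRIES = 3


def retry_delay(retries_left, long_retries=False):
    """Mirror of the delay schedule in _request above: exponential
    backoff, capped (60s long / 2s short), with 0.8x-1.4x jitter."""
    if long_retries:
        delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left)
        delay = min(delay, 60)
    else:
        delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left)
        delay = min(delay, 2)
    return delay * random.uniform(0.8, 1.4)
```

The jitter spreads retries out so that many servers backing off from the same failure do not all retry in lockstep.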
223252 def sign_request(self, destination, method, url_bytes, headers_dict,
224253 content=None):
247276
248277 @defer.inlineCallbacks
249278 def put_json(self, destination, path, data={}, json_data_callback=None,
250 long_retries=False, timeout=None):
279 long_retries=False, timeout=None,
280 ignore_backoff=False,
281 backoff_on_404=False):
251282 """ Sends the specified json data using PUT
252283
253284 Args:
262293 retry for a short or long time.
263294 timeout(int): How long to try (in ms) the destination for before
264295 giving up. None indicates no timeout.
296 ignore_backoff (bool): true to ignore the historical backoff data
297 and try the request anyway.
298 backoff_on_404 (bool): True if we should count a 404 response as
299 a failure of the server (and should therefore back off future
300 requests)
265301
266302 Returns:
267303 Deferred: Succeeds when we get a 2xx HTTP response. The result
268304 will be the decoded JSON body. On a 4xx or 5xx error response a
269305 CodeMessageException is raised.
306
307 Fails with ``NotRetryingDestination`` if we are not yet ready
308 to retry this server.
270309 """
271310
272311 if not json_data_callback:
281320 producer = _JsonProducer(json_data)
282321 return producer
283322
284 response = yield self._create_request(
285 destination.encode("ascii"),
323 response = yield self._request(
324 destination,
286325 "PUT",
287 path.encode("ascii"),
326 path,
288327 body_callback=body_callback,
289328 headers_dict={"Content-Type": ["application/json"]},
290329 long_retries=long_retries,
291330 timeout=timeout,
331 ignore_backoff=ignore_backoff,
332 backoff_on_404=backoff_on_404,
292333 )
293334
294335 if 200 <= response.code < 300:
295336 # We need to update the transactions table to say it was sent?
296337 check_content_type_is_json(response.headers)
297338
298 body = yield preserve_context_over_fn(readBody, response)
339 with logcontext.PreserveLoggingContext():
340 body = yield readBody(response)
299341 defer.returnValue(json.loads(body))
300342
301343 @defer.inlineCallbacks
302344 def post_json(self, destination, path, data={}, long_retries=False,
303 timeout=None):
345 timeout=None, ignore_backoff=False):
304346 """ Sends the specified json data using POST
305347
306348 Args:
313355 retry for a short or long time.
314356 timeout(int): How long to try (in ms) the destination for before
315357 giving up. None indicates no timeout.
316
358 ignore_backoff (bool): true to ignore the historical backoff data and
359 try the request anyway.
317360 Returns:
318361 Deferred: Succeeds when we get a 2xx HTTP response. The result
319362 will be the decoded JSON body. On a 4xx or 5xx error response a
320363 CodeMessageException is raised.
364
365 Fails with ``NotRetryingDestination`` if we are not yet ready
366 to retry this server.
321367 """
322368
323369 def body_callback(method, url_bytes, headers_dict):
326372 )
327373 return _JsonProducer(data)
328374
329 response = yield self._create_request(
330 destination.encode("ascii"),
375 response = yield self._request(
376 destination,
331377 "POST",
332 path.encode("ascii"),
378 path,
333379 body_callback=body_callback,
334380 headers_dict={"Content-Type": ["application/json"]},
335381 long_retries=long_retries,
336382 timeout=timeout,
383 ignore_backoff=ignore_backoff,
337384 )
338385
339386 if 200 <= response.code < 300:
340387 # We need to update the transactions table to say it was sent?
341388 check_content_type_is_json(response.headers)
342389
343 body = yield preserve_context_over_fn(readBody, response)
390 with logcontext.PreserveLoggingContext():
391 body = yield readBody(response)
344392
345393 defer.returnValue(json.loads(body))
346394
347395 @defer.inlineCallbacks
348396 def get_json(self, destination, path, args={}, retry_on_dns_fail=True,
349 timeout=None):
397 timeout=None, ignore_backoff=False):
350398 """ GETs some json from the given host homeserver and path
351399
352400 Args:
358406 timeout (int): How long to try (in ms) the destination for before
359407 giving up. None indicates no timeout and that the request will
360408 be retried.
409 ignore_backoff (bool): true to ignore the historical backoff data
410 and try the request anyway.
361411 Returns:
362412 Deferred: Succeeds when we get *any* HTTP response.
363413
364414 The result of the deferred is a tuple of `(code, response)`,
365415 where `response` is a dict representing the decoded JSON body.
416
417 Fails with ``NotRetryingDestination`` if we are not yet ready
418 to retry this server.
366419 """
367420 logger.debug("get_json args: %s", args)
368421
379432 self.sign_request(destination, method, url_bytes, headers_dict)
380433 return None
381434
382 response = yield self._create_request(
383 destination.encode("ascii"),
435 response = yield self._request(
436 destination,
384437 "GET",
385 path.encode("ascii"),
438 path,
386439 query_bytes=query_bytes,
387440 body_callback=body_callback,
388441 retry_on_dns_fail=retry_on_dns_fail,
389442 timeout=timeout,
443 ignore_backoff=ignore_backoff,
390444 )
391445
392446 if 200 <= response.code < 300:
393447 # We need to update the transactions table to say it was sent?
394448 check_content_type_is_json(response.headers)
395449
396 body = yield preserve_context_over_fn(readBody, response)
450 with logcontext.PreserveLoggingContext():
451 body = yield readBody(response)
397452
398453 defer.returnValue(json.loads(body))
399454
400455 @defer.inlineCallbacks
401456 def get_file(self, destination, path, output_stream, args={},
402 retry_on_dns_fail=True, max_size=None):
457 retry_on_dns_fail=True, max_size=None,
458 ignore_backoff=False):
403459 """GETs a file from a given homeserver
404460 Args:
405461 destination (str): The remote server to send the HTTP request to.
406462 path (str): The HTTP path to GET.
407463 output_stream (file): File to write the response body to.
408464 args (dict): Optional dictionary used to create the query string.
465 ignore_backoff (bool): true to ignore the historical backoff data
466 and try the request anyway.
409467 Returns:
410 A (int,dict) tuple of the file length and a dict of the response
411 headers.
468 Deferred: resolves with an (int,dict) tuple of the file length and
469 a dict of the response headers.
470
471 Fails with ``HttpResponseException`` if we get an HTTP response code
472 >= 300
473
474 Fails with ``NotRetryingDestination`` if we are not yet ready
475 to retry this server.
412476 """
413477
414478 encoded_args = {}
418482 encoded_args[k] = [v.encode("UTF-8") for v in vs]
419483
420484 query_bytes = urllib.urlencode(encoded_args, True)
421 logger.debug("Query bytes: %s Retry DNS: %s", args, retry_on_dns_fail)
485 logger.debug("Query bytes: %s Retry DNS: %s", query_bytes, retry_on_dns_fail)
422486
423487 def body_callback(method, url_bytes, headers_dict):
424488 self.sign_request(destination, method, url_bytes, headers_dict)
425489 return None
426490
427 response = yield self._create_request(
428 destination.encode("ascii"),
491 response = yield self._request(
492 destination,
429493 "GET",
430 path.encode("ascii"),
494 path,
431495 query_bytes=query_bytes,
432496 body_callback=body_callback,
433 retry_on_dns_fail=retry_on_dns_fail
497 retry_on_dns_fail=retry_on_dns_fail,
498 ignore_backoff=ignore_backoff,
434499 )
435500
436501 headers = dict(response.headers.getAllRawHeaders())
437502
438503 try:
439 length = yield preserve_context_over_fn(
440 _readBodyToFile,
441 response, output_stream, max_size
442 )
504 with logcontext.PreserveLoggingContext():
505 length = yield _readBodyToFile(
506 response, output_stream, max_size
507 )
443508 except:
444509 logger.exception("Failed to download body")
445510 raise
191191 return content
192192
193193
194 def assert_params_in_request(body, required):
195 absent = []
196 for k in required:
197 if k not in body:
198 absent.append(k)
199
200 if len(absent) > 0:
201 raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM)
202
203
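The new `assert_params_in_request` helper can be exercised standalone. A minimal sketch with a stubbed `SynapseError` and error code (the real ones live in `synapse.api.errors`):

```python
class SynapseError(Exception):
    """Minimal stand-in for synapse.api.errors.SynapseError."""
    def __init__(self, code, msg, errcode=None):
        super().__init__(msg)
        self.code = code
        self.errcode = errcode


M_MISSING_PARAM = "M_MISSING_PARAM"  # stand-in for Codes.MISSING_PARAM


def assert_params_in_request(body, required):
    # Same logic as the helper above: collect all missing keys so the
    # client sees every absent parameter in one error.
    absent = [k for k in required if k not in body]
    if absent:
        raise SynapseError(400, "Missing params: %r" % absent, M_MISSING_PARAM)
```

Reporting all missing keys at once, rather than failing on the first, saves the client a round trip per mistake.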
194204 class RestServlet(object):
195205
196206 """ A Synapse REST Servlet.
1515 from twisted.internet import defer
1616 from synapse.api.constants import EventTypes, Membership
1717 from synapse.api.errors import AuthError
18 from synapse.handlers.presence import format_user_presence_state
1819
1920 from synapse.util import DeferredTimedOutError
2021 from synapse.util.logutils import log_function
3637
3738 notified_events_counter = metrics.register_counter("notified_events")
3839
40 users_woken_by_stream_counter = metrics.register_counter(
41 "users_woken_by_stream", labels=["stream"]
42 )
43
3944
4045 # TODO(paul): Should be shared somewhere
4146 def count(func, l):
7277 self.user_id = user_id
7378 self.rooms = set(rooms)
7479 self.current_token = current_token
80
81 # The last token for which we should wake up any streams that have a
82 # token that comes before it. This gets updated every time we get poked.
83 # We start it at the current token, since if we get any streams with a
84 # token from before then we have no idea whether they should be woken up
85 # or not, so let's just wake them up.
86 self.last_notified_token = current_token
7587 self.last_notified_ms = time_now_ms
7688
7789 with PreserveLoggingContext():
88100 self.current_token = self.current_token.copy_and_advance(
89101 stream_key, stream_id
90102 )
103 self.last_notified_token = self.current_token
91104 self.last_notified_ms = time_now_ms
92105 noify_deferred = self.notify_deferred
106
107 users_woken_by_stream_counter.inc(stream_key)
93108
94109 with PreserveLoggingContext():
95110 self.notify_deferred = ObservableDeferred(defer.Deferred())
112127 def new_listener(self, token):
113128 """Returns a deferred that is resolved when there is a new token
114129 greater than the given token.
115 """
116 if self.current_token.is_after(token):
130
131 Args:
132 token: The token from which we are streaming, i.e. we shouldn't
133 notify for things that happened before this.
134 """
135 # Immediately wake up the stream if something has already happened
136 # since its last token.
137 if self.last_notified_token.is_after(token):
117138 return _NotificationListener(defer.succeed(self.current_token))
118139 else:
119140 return _NotificationListener(self.notify_deferred.observe())
282303 if user_stream is None:
283304 current_token = yield self.event_sources.get_current_token()
284305 if room_ids is None:
285 rooms = yield self.store.get_rooms_for_user(user_id)
286 room_ids = [room.room_id for room in rooms]
306 room_ids = yield self.store.get_rooms_for_user(user_id)
287307 user_stream = _NotifierUserStream(
288308 user_id=user_id,
289309 rooms=room_ids,
293313 self._register_with_keys(user_stream)
294314
295315 result = None
316 prev_token = from_token
296317 if timeout:
297318 end_time = self.clock.time_msec() + timeout
298319
299 prev_token = from_token
300320 while not result:
301321 try:
302 current_token = user_stream.current_token
303
304 result = yield callback(prev_token, current_token)
305 if result:
306 break
307
308322 now = self.clock.time_msec()
309323 if end_time <= now:
310324 break
311325
312326 # Now we wait for the _NotifierUserStream to be told there
313327 # is a new token.
314 # We need to supply the token we supplied to callback so
315 # that we don't miss any current_token updates.
316 prev_token = current_token
317328 listener = user_stream.new_listener(prev_token)
318329 with PreserveLoggingContext():
319330 yield self.clock.time_bound_deferred(
320331 listener.deferred,
321332 time_out=(end_time - now) / 1000.
322333 )
334
335 current_token = user_stream.current_token
336
337 result = yield callback(prev_token, current_token)
338 if result:
339 break
340
341 # Update the prev_token to the current_token since nothing
342 # has happened between the old prev_token and the current_token
343 prev_token = current_token
323344 except DeferredTimedOutError:
324345 break
325346 except defer.CancelledError:
326347 break
327 else:
348
349 if result is None:
350 # This happens if there was no timeout, or if the timeout has
351 # already expired.
328352 current_token = user_stream.current_token
329 result = yield callback(from_token, current_token)
353 result = yield callback(prev_token, current_token)
330354
331355 defer.returnValue(result)
332356
387411 new_events,
388412 is_peeking=is_peeking,
389413 )
414 elif name == "presence":
415 now = self.clock.time_msec()
416 new_events[:] = [
417 {
418 "type": "m.presence",
419 "content": format_user_presence_state(event, now),
420 }
421 for event in new_events
422 ]
390423
391424 events.extend(new_events)
392425 end_token = end_token.copy_and_replace(keyname, new_key)
419452
420453 @defer.inlineCallbacks
421454 def _get_room_ids(self, user, explicit_room_id):
422 joined_rooms = yield self.store.get_rooms_for_user(user.to_string())
423 joined_room_ids = map(lambda r: r.room_id, joined_rooms)
455 joined_room_ids = yield self.store.get_rooms_for_user(user.to_string())
424456 if explicit_room_id:
425457 if explicit_room_id in joined_room_ids:
426458 defer.returnValue(([explicit_room_id], True))
138138
139139 @defer.inlineCallbacks
140140 def _fetch_room_state(room_id):
141 room_state = yield self.state_handler.get_current_state_ids(room_id)
141 room_state = yield self.store.get_current_state_ids(room_id)
142142 state_by_room[room_id] = room_state
143143
144144 # Run at most 3 of these at once: sync does 10 at a time but email
1616 import re
1717
1818 from synapse.types import UserID
19 from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
1920 from synapse.util.caches.lrucache import LruCache
2021
2122 logger = logging.getLogger(__name__)
124125 return self._value_cache.get(dotted_key, None)
125126
126127
128 # Caches (glob, word_boundary) -> regex for push. See _glob_matches
129 regex_cache = LruCache(50000 * CACHE_SIZE_FACTOR)
130 register_cache("regex_push_cache", regex_cache)
131
132
127133 def _glob_matches(glob, value, word_boundary=False):
128134 """Tests if value matches glob.
129135
136142 Returns:
137143 bool
138144 """
145
139146 try:
140 if IS_GLOB.search(glob):
141 r = re.escape(glob)
142
143 r = r.replace(r'\*', '.*?')
144 r = r.replace(r'\?', '.')
145
146 # handle [abc], [a-z] and [!a-z] style ranges.
147 r = GLOB_REGEX.sub(
148 lambda x: (
149 '[%s%s]' % (
150 x.group(1) and '^' or '',
151 x.group(2).replace(r'\\\-', '-')
152 )
153 ),
154 r,
155 )
156 if word_boundary:
157 r = r"\b%s\b" % (r,)
158 r = _compile_regex(r)
159
160 return r.search(value)
161 else:
162 r = r + "$"
163 r = _compile_regex(r)
164
165 return r.match(value)
166 elif word_boundary:
167 r = re.escape(glob)
168 r = r"\b%s\b" % (r,)
169 r = _compile_regex(r)
170
171 return r.search(value)
172 else:
173 return value.lower() == glob.lower()
147 r = regex_cache.get((glob, word_boundary), None)
148 if not r:
149 r = _glob_to_re(glob, word_boundary)
150 regex_cache[(glob, word_boundary)] = r
151 return r.search(value)
174152 except re.error:
175153 logger.warn("Failed to parse glob to regex: %r", glob)
176154 return False
155
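The refactor above replaces per-pattern `_compile_regex` caching with a single `(glob, word_boundary) -> compiled regex` cache. A cut-down sketch of that memoisation pattern, using a plain dict in place of `LruCache` and handling only `*` globs:

```python
import re

# Plain dict standing in for synapse's LruCache; same get/setitem
# usage as the regex_cache above.
regex_cache = {}


def glob_matches(glob, value):
    """Cut-down _glob_matches: '*' wildcards only, case-insensitive."""
    r = regex_cache.get((glob, False), None)
    if not r:
        # Compile once per (glob, word_boundary) key, then reuse.
        pattern = "^" + re.escape(glob).replace(r"\*", ".*?") + "$"
        r = re.compile(pattern, flags=re.IGNORECASE)
        regex_cache[(glob, False)] = r
    return bool(r.search(value))
```

Keying the cache on the glob itself (not the generated regex string) means the translation step is skipped entirely on a cache hit.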
156
157 def _glob_to_re(glob, word_boundary):
158 """Generates regex for a given glob.
159
160 Args:
161 glob (string)
162 word_boundary (bool): Whether to match against word boundaries or entire
163 string. Defaults to False.
164
165 Returns:
166 regex object
167 """
168 if IS_GLOB.search(glob):
169 r = re.escape(glob)
170
171 r = r.replace(r'\*', '.*?')
172 r = r.replace(r'\?', '.')
173
174 # handle [abc], [a-z] and [!a-z] style ranges.
175 r = GLOB_REGEX.sub(
176 lambda x: (
177 '[%s%s]' % (
178 x.group(1) and '^' or '',
179 x.group(2).replace(r'\\\-', '-')
180 )
181 ),
182 r,
183 )
184 if word_boundary:
185 r = r"\b%s\b" % (r,)
186
187 return re.compile(r, flags=re.IGNORECASE)
188 else:
189 r = "^" + r + "$"
190
191 return re.compile(r, flags=re.IGNORECASE)
192 elif word_boundary:
193 r = re.escape(glob)
194 r = r"\b%s\b" % (r,)
195
196 return re.compile(r, flags=re.IGNORECASE)
197 else:
198 r = "^" + re.escape(glob) + "$"
199 return re.compile(r, flags=re.IGNORECASE)
177200
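The glob-to-regex translation in `_glob_to_re` can be sketched standalone. A simplified version that handles `*` and `?` but omits the `[a-z]`-range rewriting of the full helper:

```python
import re


def glob_to_re(glob, word_boundary=False):
    """Simplified _glob_to_re: '*' matches any run of characters,
    '?' matches a single character; no [a-z]-style ranges."""
    r = re.escape(glob).replace(r"\*", ".*?").replace(r"\?", ".")
    if word_boundary:
        # Match anywhere in the value, but only at word boundaries.
        r = r"\b%s\b" % (r,)
    else:
        # Anchor so the glob must match the entire value.
        r = "^" + r + "$"
    return re.compile(r, flags=re.IGNORECASE)
```

For example, `glob_to_re("ali?e*")` matches "Alice Smith", while the word-boundary form lets `cake` match inside "like cake?" without matching "cupcake".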
178201
179202 def _flatten_dict(d, prefix=[], result={}):
184207 _flatten_dict(value, prefix=(prefix + [key]), result=result)
185208
186209 return result
187
188
189 regex_cache = LruCache(5000)
190
191
192 def _compile_regex(regex_str):
193 r = regex_cache.get(regex_str, None)
194 if r:
195 return r
196
197 r = re.compile(regex_str, flags=re.IGNORECASE)
198 regex_cache[regex_str] = r
199 return r
3232
3333 badge = len(invites)
3434
35 for r in joins:
36 if r.room_id in my_receipts_by_room:
37 last_unread_event_id = my_receipts_by_room[r.room_id]
35 for room_id in joins:
36 if room_id in my_receipts_by_room:
37 last_unread_event_id = my_receipts_by_room[room_id]
3838
3939 notifs = yield (
4040 store.get_unread_event_push_actions_by_room_for_user(
41 r.room_id, user_id, last_unread_event_id
41 room_id, user_id, last_unread_event_id
4242 )
4343 )
4444 # return one badge count per conversation, as count per
00 # Copyright 2015, 2016 OpenMarket Ltd
1 # Copyright 2017 Vector Creations Ltd
12 #
23 # Licensed under the Apache License, Version 2.0 (the "License");
34 # you may not use this file except in compliance with the License.
1718 logger = logging.getLogger(__name__)
1819
1920 REQUIREMENTS = {
21 "jsonschema>=2.5.1": ["jsonschema>=2.5.1"],
2022 "frozendict>=0.4": ["frozendict"],
2123 "unpaddedbase64>=1.1.0": ["unpaddedbase64>=1.1.0"],
2224 "canonicaljson>=1.0.0": ["canonicaljson>=1.0.0"],
3638 "pysaml2>=3.0.0,<4.0.0": ["saml2>=3.0.0,<4.0.0"],
3739 "pymacaroons-pynacl": ["pymacaroons"],
3840 "msgpack-python>=0.3.0": ["msgpack"],
41 "phonenumbers>=8.2.0": ["phonenumbers"],
3942 }
4043 CONDITIONAL_REQUIREMENTS = {
4144 "web_client": {
282282
283283 if request_events != upto_events_token:
284284 writer.write_header_and_rows("events", res.new_forward_events, (
285 "position", "internal", "json", "state_group"
285 "position", "event_id", "room_id", "type", "state_key",
286286 ), position=upto_events_token)
287287
288288 if request_backfill != upto_backfill_token:
289289 writer.write_header_and_rows("backfill", res.new_backfill_events, (
290 "position", "internal", "json", "state_group",
290 "position", "event_id", "room_id", "type", "state_key", "redacts",
291291 ), position=upto_backfill_token)
292292
293293 writer.write_header_and_rows(
2626 self._current = (max if self.step > 0 else min)(self._current, new_id)
2727
2828 def get_current_token(self):
29 """Returns the current token.
30
31 Returns:
32 int
33 """
2934 return self._current
1515 from ._slaved_id_tracker import SlavedIdTracker
1616
1717 from synapse.api.constants import EventTypes
18 from synapse.events import FrozenEvent
1918 from synapse.storage import DataStore
2019 from synapse.storage.roommember import RoomMemberStore
2120 from synapse.storage.event_federation import EventFederationStore
2423 from synapse.storage.stream import StreamStore
2524 from synapse.util.caches.stream_change_cache import StreamChangeCache
2625
27 import ujson as json
2826 import logging
2927
3028
108106 get_recent_event_ids_for_room = (
109107 StreamStore.__dict__["get_recent_event_ids_for_room"]
110108 )
109 get_current_state_ids = (
110 StateStore.__dict__["get_current_state_ids"]
111 )
112 has_room_changed_since = DataStore.has_room_changed_since.__func__
111113
112114 get_unread_push_actions_for_user_in_range_for_http = (
113115 DataStore.get_unread_push_actions_for_user_in_range_for_http.__func__
164166 _get_rooms_for_user_where_membership_is_txn = (
165167 DataStore._get_rooms_for_user_where_membership_is_txn.__func__
166168 )
167 _get_members_rows_txn = DataStore._get_members_rows_txn.__func__
168169 _get_state_for_groups = DataStore._get_state_for_groups.__func__
169170 _get_all_state_from_cache = DataStore._get_all_state_from_cache.__func__
170171 _get_events_around_txn = DataStore._get_events_around_txn.__func__
237238 return super(SlavedEventStore, self).process_replication(result)
238239
239240 def _process_replication_row(self, row, backfilled):
240 internal = json.loads(row[1])
241 event_json = json.loads(row[2])
242 event = FrozenEvent(event_json, internal_metadata_dict=internal)
241 stream_ordering = row[0] if not backfilled else -row[0]
243242 self.invalidate_caches_for_event(
244 event, backfilled,
245 )
246
247 def invalidate_caches_for_event(self, event, backfilled):
248 self._invalidate_get_event_cache(event.event_id)
249
250 self.get_latest_event_ids_in_room.invalidate((event.room_id,))
243 stream_ordering, row[1], row[2], row[3], row[4], row[5],
244 backfilled=backfilled,
245 )
246
247 def invalidate_caches_for_event(self, stream_ordering, event_id, room_id,
248 etype, state_key, redacts, backfilled):
249 self._invalidate_get_event_cache(event_id)
250
251 self.get_latest_event_ids_in_room.invalidate((room_id,))
251252
252253 self.get_unread_event_push_actions_by_room_for_user.invalidate_many(
253 (event.room_id,)
254 (room_id,)
254255 )
255256
256257 if not backfilled:
257258 self._events_stream_cache.entity_has_changed(
258 event.room_id, event.internal_metadata.stream_ordering
259 room_id, stream_ordering
259260 )
260261
261 # self.get_unread_event_push_actions_by_room_for_user.invalidate_many(
262 # (event.room_id,)
263 # )
264
265 if event.type == EventTypes.Redaction:
266 self._invalidate_get_event_cache(event.redacts)
267
268 if event.type == EventTypes.Member:
262 if redacts:
263 self._invalidate_get_event_cache(redacts)
264
265 if etype == EventTypes.Member:
269266 self._membership_stream_cache.entity_has_changed(
270 event.state_key, event.internal_metadata.stream_ordering
267 state_key, stream_ordering
271268 )
272 self.get_invited_rooms_for_user.invalidate((event.state_key,))
273
274 if not event.is_state():
275 return
276
277 if backfilled:
278 return
279
280 if (not event.internal_metadata.is_invite_from_remote()
281 and event.internal_metadata.is_outlier()):
282 return
269 self.get_invited_rooms_for_user.invalidate((state_key,))
5656 self.presence_stream_cache.entity_has_changed(
5757 user_id, position
5858 )
59 self._get_presence_for_user.invalidate((user_id,))
5960
6061 return super(SlavedPresenceStore, self).process_replication(result)
1818 from synapse.types import UserID
1919 from synapse.http.server import finish_request
2020 from synapse.http.servlet import parse_json_object_from_request
21 from synapse.util.msisdn import phone_number_to_msisdn
2122
2223 from .base import ClientV1RestServlet, client_path_patterns
2324
3233
3334 import xml.etree.ElementTree as ET
3435
36 from twisted.web.client import PartialDownloadError
37
3538
3639 logger = logging.getLogger(__name__)
40
41
42 def login_submission_legacy_convert(submission):
43 """
44 If the input login submission is an old style object
45 (ie. with top-level user / medium / address) convert it
46 to a typed object.
47 """
48 if "user" in submission:
49 submission["identifier"] = {
50 "type": "m.id.user",
51 "user": submission["user"],
52 }
53 del submission["user"]
54
55 if "medium" in submission and "address" in submission:
56 submission["identifier"] = {
57 "type": "m.id.thirdparty",
58 "medium": submission["medium"],
59 "address": submission["address"],
60 }
61 del submission["medium"]
62 del submission["address"]
63
64
65 def login_id_thirdparty_from_phone(identifier):
66 """
67 Convert a phone login identifier type to a generic threepid identifier
68 Args:
69 identifier(dict): Login identifier dict of type 'm.id.phone'
70
71 Returns: Login identifier dict of type 'm.id.thirdparty'
72 """
73 if "country" not in identifier or "number" not in identifier:
74 raise SynapseError(400, "Invalid phone-type identifier")
75
76 msisdn = phone_number_to_msisdn(identifier["country"], identifier["number"])
77
78 return {
79 "type": "m.id.thirdparty",
80 "medium": "msisdn",
81 "address": msisdn,
82 }
3783
3884
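`login_submission_legacy_convert` is pure dict manipulation, so its behaviour can be shown directly. A self-contained copy with example old-style submissions:

```python
def login_submission_legacy_convert(submission):
    # Same logic as the helper above: rewrite old-style top-level
    # user / medium+address fields into a typed "identifier" dict.
    if "user" in submission:
        submission["identifier"] = {
            "type": "m.id.user",
            "user": submission["user"],
        }
        del submission["user"]

    if "medium" in submission and "address" in submission:
        submission["identifier"] = {
            "type": "m.id.thirdparty",
            "medium": submission["medium"],
            "address": submission["address"],
        }
        del submission["medium"]
        del submission["address"]


# Old-style user login becomes an m.id.user identifier:
sub = {"type": "m.login.password", "user": "alice", "password": "p"}
login_submission_legacy_convert(sub)
```

Converting in place means the rest of `do_password_login` only ever has to reason about the typed `identifier` form.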
3985 class LoginRestServlet(ClientV1RestServlet):
116162
117163 @defer.inlineCallbacks
118164 def do_password_login(self, login_submission):
119 if 'medium' in login_submission and 'address' in login_submission:
120 address = login_submission['address']
121 if login_submission['medium'] == 'email':
165 if "password" not in login_submission:
166 raise SynapseError(400, "Missing parameter: password")
167
168 login_submission_legacy_convert(login_submission)
169
170 if "identifier" not in login_submission:
171 raise SynapseError(400, "Missing param: identifier")
172
173 identifier = login_submission["identifier"]
174 if "type" not in identifier:
175 raise SynapseError(400, "Login identifier has no type")
176
177 # convert phone type identifiers to generic threepids
178 if identifier["type"] == "m.id.phone":
179 identifier = login_id_thirdparty_from_phone(identifier)
180
181 # convert threepid identifiers to user IDs
182 if identifier["type"] == "m.id.thirdparty":
183 if 'medium' not in identifier or 'address' not in identifier:
184 raise SynapseError(400, "Invalid thirdparty identifier")
185
186 address = identifier['address']
187 if identifier['medium'] == 'email':
122188 # For emails, transform the address to lowercase.
123189 # We store all email addresses as lowercase in the DB.
124190 # (See add_threepid in synapse/handlers/auth.py)
125191 address = address.lower()
126192 user_id = yield self.hs.get_datastore().get_user_id_by_threepid(
127 login_submission['medium'], address
193 identifier['medium'], address
128194 )
129195 if not user_id:
130196 raise LoginError(403, "", errcode=Codes.FORBIDDEN)
131 else:
132 user_id = login_submission['user']
197
198 identifier = {
199 "type": "m.id.user",
200 "user": user_id,
201 }
202
203 # by this point, the identifier should be an m.id.user: if it's anything
204 # else, we haven't understood it.
205 if identifier["type"] != "m.id.user":
206 raise SynapseError(400, "Unknown login identifier type")
207 if "user" not in identifier:
208 raise SynapseError(400, "User identifier is missing 'user' key")
209
210 user_id = identifier["user"]
133211
134212 if not user_id.startswith('@'):
135213 user_id = UserID.create(
340418 "ticket": request.args["ticket"],
341419 "service": self.cas_service_url
342420 }
343 body = yield http_client.get_raw(uri, args)
421 try:
422 body = yield http_client.get_raw(uri, args)
423 except PartialDownloadError as pde:
424 # Twisted raises this error when the connection is closed, even
425 # if the server closed it old-HTTP style to signal end-of-data.
426 body = pde.response
344427 result = yield self.handle_cas_response(request, body, client_redirect_url)
345428 defer.returnValue(result)
346429
1818
1919 from synapse.api.errors import SynapseError, AuthError
2020 from synapse.types import UserID
21 from synapse.handlers.presence import format_user_presence_state
2122 from synapse.http.servlet import parse_json_object_from_request
2223 from .base import ClientV1RestServlet, client_path_patterns
2324
3233 def __init__(self, hs):
3334 super(PresenceStatusRestServlet, self).__init__(hs)
3435 self.presence_handler = hs.get_presence_handler()
36 self.clock = hs.get_clock()
3537
3638 @defer.inlineCallbacks
3739 def on_GET(self, request, user_id):
4749 raise AuthError(403, "You are not allowed to see their presence.")
4850
4951 state = yield self.presence_handler.get_state(target_user=user)
52 state = format_user_presence_state(state, self.clock.time_msec())
5053
5154 defer.returnValue((200, state))
5255
747747 def on_GET(self, request):
748748 requester = yield self.auth.get_user_by_req(request, allow_guest=True)
749749
750 rooms = yield self.store.get_rooms_for_user(requester.user.to_string())
751 room_ids = set(r.room_id for r in rooms) # Ensure they're unique.
750 room_ids = yield self.store.get_rooms_for_user(requester.user.to_string())
752751 defer.returnValue((200, {"joined_rooms": list(room_ids)}))
753752
754753
00 # -*- coding: utf-8 -*-
11 # Copyright 2015, 2016 OpenMarket Ltd
2 # Copyright 2017 Vector Creations Ltd
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
1617
1718 from synapse.api.constants import LoginType
1819 from synapse.api.errors import LoginError, SynapseError, Codes
19 from synapse.http.servlet import RestServlet, parse_json_object_from_request
20 from synapse.http.servlet import (
21 RestServlet, parse_json_object_from_request, assert_params_in_request
22 )
2023 from synapse.util.async import run_on_reactor
24 from synapse.util.msisdn import phone_number_to_msisdn
2125
2226 from ._base import client_v2_patterns
2327
2731 logger = logging.getLogger(__name__)
2832
2933
30 class PasswordRequestTokenRestServlet(RestServlet):
34 class EmailPasswordRequestTokenRestServlet(RestServlet):
3135 PATTERNS = client_v2_patterns("/account/password/email/requestToken$")
3236
3337 def __init__(self, hs):
34 super(PasswordRequestTokenRestServlet, self).__init__()
35 self.hs = hs
36 self.identity_handler = hs.get_handlers().identity_handler
37
38 @defer.inlineCallbacks
39 def on_POST(self, request):
40 body = parse_json_object_from_request(request)
41
42 required = ['id_server', 'client_secret', 'email', 'send_attempt']
43 absent = []
44 for k in required:
45 if k not in body:
46 absent.append(k)
47
48 if absent:
49 raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM)
38 super(EmailPasswordRequestTokenRestServlet, self).__init__()
39 self.hs = hs
40 self.identity_handler = hs.get_handlers().identity_handler
41
42 @defer.inlineCallbacks
43 def on_POST(self, request):
44 body = parse_json_object_from_request(request)
45
46 assert_params_in_request(body, [
47 'id_server', 'client_secret', 'email', 'send_attempt'
48 ])
5049
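The `assert_params_in_request` helper imported above replaces the repeated missing-param loops. A minimal sketch of its behaviour, with an illustrative exception class (the real helper raises `SynapseError(400, ..., Codes.MISSING_PARAM)`):

```python
class MissingParamError(Exception):
    """Illustrative stand-in for SynapseError with Codes.MISSING_PARAM."""


def assert_params_in_request(body, required):
    # Collect every absent key so the client learns about all missing
    # parameters in a single error, not one at a time.
    absent = [k for k in required if k not in body]
    if absent:
        raise MissingParamError("Missing params: %r" % (absent,))
```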
5150 existingUid = yield self.hs.get_datastore().get_user_id_by_threepid(
5251 'email', body['email']
5958 defer.returnValue((200, ret))
6059
6160
61 class MsisdnPasswordRequestTokenRestServlet(RestServlet):
62 PATTERNS = client_v2_patterns("/account/password/msisdn/requestToken$")
63
64 def __init__(self, hs):
65 super(MsisdnPasswordRequestTokenRestServlet, self).__init__()
66 self.hs = hs
67 self.datastore = self.hs.get_datastore()
68 self.identity_handler = hs.get_handlers().identity_handler
69
70 @defer.inlineCallbacks
71 def on_POST(self, request):
72 body = parse_json_object_from_request(request)
73
74 assert_params_in_request(body, [
75 'id_server', 'client_secret',
76 'country', 'phone_number', 'send_attempt',
77 ])
78
79 msisdn = phone_number_to_msisdn(body['country'], body['phone_number'])
80
81 existingUid = yield self.datastore.get_user_id_by_threepid(
82 'msisdn', msisdn
83 )
84
85 if existingUid is None:
86 raise SynapseError(400, "MSISDN not found", Codes.THREEPID_NOT_FOUND)
87
88 ret = yield self.identity_handler.requestMsisdnToken(**body)
89 defer.returnValue((200, ret))
90
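`phone_number_to_msisdn` (from the new `synapse.util.msisdn` module) normalises a country code plus national number into E.164-without-plus form, which is how identity servers key msisdn threepids. The real helper parses via the `phonenumbers` library; the sketch below is a deliberately simplified stand-in with a hard-coded calling-code table:

```python
# Illustrative subset only; the real implementation uses phonenumbers.
COUNTRY_CALLING_CODES = {"GB": "44", "US": "1"}


def phone_number_to_msisdn(country, number):
    # Strip formatting, then either keep an already-international
    # number or prepend the country calling code to the national
    # significant number (leading zeros dropped).
    digits = "".join(ch for ch in number if ch.isdigit())
    if number.strip().startswith("+"):
        return digits
    return COUNTRY_CALLING_CODES[country] + digits.lstrip("0")
```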
91
6292 class PasswordRestServlet(RestServlet):
6393 PATTERNS = client_v2_patterns("/account/password$")
6494
6797 self.hs = hs
6898 self.auth = hs.get_auth()
6999 self.auth_handler = hs.get_auth_handler()
100 self.datastore = self.hs.get_datastore()
70101
71102 @defer.inlineCallbacks
72103 def on_POST(self, request):
76107
77108 authed, result, params, _ = yield self.auth_handler.check_auth([
78109 [LoginType.PASSWORD],
79 [LoginType.EMAIL_IDENTITY]
110 [LoginType.EMAIL_IDENTITY],
111 [LoginType.MSISDN],
80112 ], body, self.hs.get_ip_from_request(request))
81113
82114 if not authed:
101133 # (See add_threepid in synapse/handlers/auth.py)
102134 threepid['address'] = threepid['address'].lower()
103135 # if using email, we must know about the email they're authing with!
104 threepid_user_id = yield self.hs.get_datastore().get_user_id_by_threepid(
136 threepid_user_id = yield self.datastore.get_user_id_by_threepid(
105137 threepid['medium'], threepid['address']
106138 )
107139 if not threepid_user_id:
168200 defer.returnValue((200, {}))
169201
170202
171 class ThreepidRequestTokenRestServlet(RestServlet):
203 class EmailThreepidRequestTokenRestServlet(RestServlet):
172204 PATTERNS = client_v2_patterns("/account/3pid/email/requestToken$")
173205
174206 def __init__(self, hs):
175207 self.hs = hs
176 super(ThreepidRequestTokenRestServlet, self).__init__()
177 self.identity_handler = hs.get_handlers().identity_handler
208 super(EmailThreepidRequestTokenRestServlet, self).__init__()
209 self.identity_handler = hs.get_handlers().identity_handler
210 self.datastore = self.hs.get_datastore()
178211
179212 @defer.inlineCallbacks
180213 def on_POST(self, request):
189222 if absent:
190223 raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM)
191224
192 existingUid = yield self.hs.get_datastore().get_user_id_by_threepid(
225 existingUid = yield self.datastore.get_user_id_by_threepid(
193226 'email', body['email']
194227 )
195228
200233 defer.returnValue((200, ret))
201234
202235
236 class MsisdnThreepidRequestTokenRestServlet(RestServlet):
237 PATTERNS = client_v2_patterns("/account/3pid/msisdn/requestToken$")
238
239 def __init__(self, hs):
240 self.hs = hs
241 super(MsisdnThreepidRequestTokenRestServlet, self).__init__()
242 self.identity_handler = hs.get_handlers().identity_handler
243 self.datastore = self.hs.get_datastore()
244
245 @defer.inlineCallbacks
246 def on_POST(self, request):
247 body = parse_json_object_from_request(request)
248
249 required = [
250 'id_server', 'client_secret',
251 'country', 'phone_number', 'send_attempt',
252 ]
253 absent = []
254 for k in required:
255 if k not in body:
256 absent.append(k)
257
258 if absent:
259 raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM)
260
261 msisdn = phone_number_to_msisdn(body['country'], body['phone_number'])
262
263 existingUid = yield self.datastore.get_user_id_by_threepid(
264 'msisdn', msisdn
265 )
266
267 if existingUid is not None:
268 raise SynapseError(400, "MSISDN is already in use", Codes.THREEPID_IN_USE)
269
270 ret = yield self.identity_handler.requestMsisdnToken(**body)
271 defer.returnValue((200, ret))
272
273
203274 class ThreepidRestServlet(RestServlet):
204275 PATTERNS = client_v2_patterns("/account/3pid$")
205276
209280 self.identity_handler = hs.get_handlers().identity_handler
210281 self.auth = hs.get_auth()
211282 self.auth_handler = hs.get_auth_handler()
283 self.datastore = self.hs.get_datastore()
212284
213285 @defer.inlineCallbacks
214286 def on_GET(self, request):
216288
217289 requester = yield self.auth.get_user_by_req(request)
218290
219 threepids = yield self.hs.get_datastore().user_get_threepids(
291 threepids = yield self.datastore.user_get_threepids(
220292 requester.user.to_string()
221293 )
222294
257329
258330 if 'bind' in body and body['bind']:
259331 logger.debug(
260 "Binding emails %s to %s",
332 "Binding threepid %s to %s",
261333 threepid, user_id
262334 )
263335 yield self.identity_handler.bind_threepid(
301373
302374
303375 def register_servlets(hs, http_server):
304 PasswordRequestTokenRestServlet(hs).register(http_server)
376 EmailPasswordRequestTokenRestServlet(hs).register(http_server)
377 MsisdnPasswordRequestTokenRestServlet(hs).register(http_server)
305378 PasswordRestServlet(hs).register(http_server)
306379 DeactivateAccountRestServlet(hs).register(http_server)
307 ThreepidRequestTokenRestServlet(hs).register(http_server)
380 EmailThreepidRequestTokenRestServlet(hs).register(http_server)
381 MsisdnThreepidRequestTokenRestServlet(hs).register(http_server)
308382 ThreepidRestServlet(hs).register(http_server)
309383 ThreepidDeleteRestServlet(hs).register(http_server)
4343 requester.user.to_string()
4444 )
4545 defer.returnValue((200, {"devices": devices}))
46
47
48 class DeleteDevicesRestServlet(servlet.RestServlet):
49 """
50 API for bulk deletion of devices. Accepts a JSON object with a devices
51 key which lists the device_ids to delete. Requires user interactive auth.
52 """
53 PATTERNS = client_v2_patterns("/delete_devices", releases=[], v2_alpha=False)
54
55 def __init__(self, hs):
56 super(DeleteDevicesRestServlet, self).__init__()
57 self.hs = hs
58 self.auth = hs.get_auth()
59 self.device_handler = hs.get_device_handler()
60 self.auth_handler = hs.get_auth_handler()
61
62 @defer.inlineCallbacks
63 def on_POST(self, request):
64 try:
65 body = servlet.parse_json_object_from_request(request)
66 except errors.SynapseError as e:
67 if e.errcode == errors.Codes.NOT_JSON:
68 # deal with older clients which didn't pass a JSON dict
69 # the same as those that pass an empty dict
70 body = {}
71 else:
72 raise e
73
74 if 'devices' not in body:
75 raise errors.SynapseError(
76 400, "No devices supplied", errcode=errors.Codes.MISSING_PARAM
77 )
78
79 authed, result, params, _ = yield self.auth_handler.check_auth([
80 [constants.LoginType.PASSWORD],
81 ], body, self.hs.get_ip_from_request(request))
82
83 if not authed:
84 defer.returnValue((401, result))
85
86 requester = yield self.auth.get_user_by_req(request)
87 yield self.device_handler.delete_devices(
88 requester.user.to_string(),
89 body['devices'],
90 )
91 defer.returnValue((200, {}))
4692
4793
4894 class DeviceRestServlet(servlet.RestServlet):
110156
111157
112158 def register_servlets(hs, http_server):
159 DeleteDevicesRestServlet(hs).register(http_server)
113160 DevicesRestServlet(hs).register(http_server)
114161 DeviceRestServlet(hs).register(http_server)
00 # -*- coding: utf-8 -*-
11 # Copyright 2015 - 2016 OpenMarket Ltd
2 # Copyright 2017 Vector Creations Ltd
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
1819 from synapse.api.auth import get_access_token_from_request, has_access_token
1920 from synapse.api.constants import LoginType
2021 from synapse.api.errors import SynapseError, Codes, UnrecognizedRequestError
21 from synapse.http.servlet import RestServlet, parse_json_object_from_request
22 from synapse.http.servlet import (
23 RestServlet, parse_json_object_from_request, assert_params_in_request
24 )
25 from synapse.util.msisdn import phone_number_to_msisdn
2226
2327 from ._base import client_v2_patterns
2428
4246 logger = logging.getLogger(__name__)
4347
4448
45 class RegisterRequestTokenRestServlet(RestServlet):
49 class EmailRegisterRequestTokenRestServlet(RestServlet):
4650 PATTERNS = client_v2_patterns("/register/email/requestToken$")
4751
4852 def __init__(self, hs):
5054 Args:
5155 hs (synapse.server.HomeServer): server
5256 """
53 super(RegisterRequestTokenRestServlet, self).__init__()
57 super(EmailRegisterRequestTokenRestServlet, self).__init__()
5458 self.hs = hs
5559 self.identity_handler = hs.get_handlers().identity_handler
5660
5862 def on_POST(self, request):
5963 body = parse_json_object_from_request(request)
6064
61 required = ['id_server', 'client_secret', 'email', 'send_attempt']
62 absent = []
63 for k in required:
64 if k not in body:
65 absent.append(k)
66
67 if len(absent) > 0:
68 raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM)
65 assert_params_in_request(body, [
66 'id_server', 'client_secret', 'email', 'send_attempt'
67 ])
6968
7069 existingUid = yield self.hs.get_datastore().get_user_id_by_threepid(
7170 'email', body['email']
7574 raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE)
7675
7776 ret = yield self.identity_handler.requestEmailToken(**body)
77 defer.returnValue((200, ret))
78
79
80 class MsisdnRegisterRequestTokenRestServlet(RestServlet):
81 PATTERNS = client_v2_patterns("/register/msisdn/requestToken$")
82
83 def __init__(self, hs):
84 """
85 Args:
86 hs (synapse.server.HomeServer): server
87 """
88 super(MsisdnRegisterRequestTokenRestServlet, self).__init__()
89 self.hs = hs
90 self.identity_handler = hs.get_handlers().identity_handler
91
92 @defer.inlineCallbacks
93 def on_POST(self, request):
94 body = parse_json_object_from_request(request)
95
96 assert_params_in_request(body, [
97 'id_server', 'client_secret',
98 'country', 'phone_number',
99 'send_attempt',
100 ])
101
102 msisdn = phone_number_to_msisdn(body['country'], body['phone_number'])
103
104 existingUid = yield self.hs.get_datastore().get_user_id_by_threepid(
105 'msisdn', msisdn
106 )
107
108 if existingUid is not None:
109 raise SynapseError(
110 400, "Phone number is already in use", Codes.THREEPID_IN_USE
111 )
112
113 ret = yield self.identity_handler.requestMsisdnToken(**body)
78114 defer.returnValue((200, ret))
79115
80116
199235 assigned_user_id=registered_user_id,
200236 )
201237
238 # Only give msisdn flows if the x_show_msisdn flag is given:
239 # this is a hack to work around the fact that clients were shipped
240 # that use fallback registration if they see any flows that they don't
241 # recognise, which means we break registration for these clients if we
242 # advertise msisdn flows. Once usage of Riot iOS <=0.3.9 and Riot
243 # Android <=0.6.9 have fallen below an acceptable threshold, this
244 # parameter should go away and we should always advertise msisdn flows.
245 show_msisdn = False
246 if 'x_show_msisdn' in body and body['x_show_msisdn']:
247 show_msisdn = True
248
202249 if self.hs.config.enable_registration_captcha:
203250 flows = [
204251 [LoginType.RECAPTCHA],
205 [LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA]
252 [LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA],
206253 ]
254 if show_msisdn:
255 flows.extend([
256 [LoginType.MSISDN, LoginType.RECAPTCHA],
257 [LoginType.MSISDN, LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA],
258 ])
207259 else:
208260 flows = [
209261 [LoginType.DUMMY],
210 [LoginType.EMAIL_IDENTITY]
262 [LoginType.EMAIL_IDENTITY],
211263 ]
264 if show_msisdn:
265 flows.extend([
266 [LoginType.MSISDN],
267 [LoginType.MSISDN, LoginType.EMAIL_IDENTITY],
268 ])
212269
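The flow construction above can be sketched as a small function: msisdn flows are appended only when the client opted in via `x_show_msisdn`, in both the captcha and non-captcha branches. The login-type strings are the standard Matrix identifiers:

```python
def registration_flows(captcha_enabled, show_msisdn):
    recaptcha = "m.login.recaptcha"
    email = "m.login.email.identity"
    msisdn = "m.login.msisdn"
    dummy = "m.login.dummy"
    if captcha_enabled:
        flows = [[recaptcha], [email, recaptcha]]
        if show_msisdn:
            # Only advertised when the client sent x_show_msisdn, to
            # avoid tripping old clients into fallback registration.
            flows.extend([[msisdn, recaptcha], [msisdn, email, recaptcha]])
    else:
        flows = [[dummy], [email]]
        if show_msisdn:
            flows.extend([[msisdn], [msisdn, email]])
    return flows
```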
213270 authed, auth_result, params, session_id = yield self.auth_handler.check_auth(
214271 flows, body, self.hs.get_ip_from_request(request)
223280 "Already registered user ID %r for this session",
224281 registered_user_id
225282 )
226 # don't re-register the email address
283 # don't re-register the threepids
227284 add_email = False
285 add_msisdn = False
228286 else:
229287 # NB: This may be from the auth handler and NOT from the POST
230288 if 'password' not in params:
249307 )
250308
251309 add_email = True
310 add_msisdn = True
252311
253312 return_dict = yield self._create_registration_details(
254313 registered_user_id, params
259318 yield self._register_email_threepid(
260319 registered_user_id, threepid, return_dict["access_token"],
261320 params.get("bind_email")
321 )
322
323 if add_msisdn and auth_result and LoginType.MSISDN in auth_result:
324 threepid = auth_result[LoginType.MSISDN]
325 yield self._register_msisdn_threepid(
326 registered_user_id, threepid, return_dict["access_token"],
327 params.get("bind_msisdn")
262328 )
263329
264330 defer.returnValue((200, return_dict))
322388 """
323389 reqd = ('medium', 'address', 'validated_at')
324390 if any(x not in threepid for x in reqd):
391 # This will only happen if the ID server returns a malformed response
325392 logger.info("Can't add incomplete 3pid")
326 defer.returnValue()
393 return
327394
328395 yield self.auth_handler.add_threepid(
329396 user_id,
371438 logger.info("bind_email not specified: not binding email")
372439
373440 @defer.inlineCallbacks
441 def _register_msisdn_threepid(self, user_id, threepid, token, bind_msisdn):
442 """Add a phone number as a 3pid identifier
443
444 Also optionally binds msisdn to the given user_id on the identity server
445
446 Args:
447 user_id (str): id of user
448 threepid (object): m.login.msisdn auth response
449 token (str): access_token for the user
450 bind_msisdn (bool): true if the client requested the msisdn to be
451 bound at the identity server
452 Returns:
453 defer.Deferred:
454 """
455 reqd = ('medium', 'address', 'validated_at')
456 if any(x not in threepid for x in reqd):
457 # This will only happen if the ID server returns a malformed response
458 logger.info("Can't add incomplete 3pid")
459 return
460
461 yield self.auth_handler.add_threepid(
462 user_id,
463 threepid['medium'],
464 threepid['address'],
465 threepid['validated_at'],
466 )
467
468 if bind_msisdn:
469 logger.info("bind_msisdn specified: binding")
470 logger.debug("Binding msisdn %s to %s", threepid, user_id)
471 yield self.identity_handler.bind_threepid(
472 threepid['threepid_creds'], user_id
473 )
474 else:
475 logger.info("bind_msisdn not specified: not binding msisdn")
476
477 @defer.inlineCallbacks
374478 def _create_registration_details(self, user_id, params):
375479 """Complete registration of newly-registered user
376480
432536 # we have nowhere to store it.
433537 device_id = synapse.api.auth.GUEST_DEVICE_ID
434538 initial_display_name = params.get("initial_device_display_name")
435 self.device_handler.check_device_registered(
539 yield self.device_handler.check_device_registered(
436540 user_id, device_id, initial_display_name
437541 )
438542
448552
449553
450554 def register_servlets(hs, http_server):
451 RegisterRequestTokenRestServlet(hs).register(http_server)
555 EmailRegisterRequestTokenRestServlet(hs).register(http_server)
556 MsisdnRegisterRequestTokenRestServlet(hs).register(http_server)
452557 RegisterRestServlet(hs).register(http_server)
1717 from synapse.http.servlet import (
1818 RestServlet, parse_string, parse_integer, parse_boolean
1919 )
20 from synapse.handlers.presence import format_user_presence_state
2021 from synapse.handlers.sync import SyncConfig
2122 from synapse.types import StreamToken
2223 from synapse.events.utils import (
2728 from synapse.api.constants import PresenceState
2829 from ._base import client_v2_patterns
2930
30 import copy
3131 import itertools
3232 import logging
3333
193193 defer.returnValue((200, response_content))
194194
195195 def encode_presence(self, events, time_now):
196 formatted = []
197 for event in events:
198 event = copy.deepcopy(event)
199 event['sender'] = event['content'].pop('user_id')
200 formatted.append(event)
201 return {"events": formatted}
196 return {
197 "events": [
198 {
199 "type": "m.presence",
200 "sender": event.user_id,
201 "content": format_user_presence_state(
202 event, time_now, include_user_id=False
203 ),
204 }
205 for event in events
206 ]
207 }
202208
203209 def encode_joined(self, rooms, time_now, token_id, event_fields):
204210 """
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 import synapse.http.servlet
1415
1516 from ._base import parse_media_id, respond_with_file, respond_404
1617 from twisted.web.resource import Resource
8081
8182 @defer.inlineCallbacks
8283 def _respond_remote_file(self, request, server_name, media_id, name):
84 # don't forward requests for remote media if allow_remote is false
85 allow_remote = synapse.http.servlet.parse_boolean(
86 request, "allow_remote", default=True)
87 if not allow_remote:
88 logger.info(
89 "Rejecting request for remote media %s/%s due to allow_remote",
90 server_name, media_id,
91 )
92 respond_404(request)
93 return
94
8395 media_info = yield self.media_repo.get_remote_media(server_name, media_id)
8496
8597 media_type = media_info["media_type"]
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 from twisted.internet import defer, threads
16 import twisted.internet.error
17 import twisted.web.http
18 from twisted.web.resource import Resource
19
1520 from .upload_resource import UploadResource
1621 from .download_resource import DownloadResource
1722 from .thumbnail_resource import ThumbnailResource
1823 from .identicon_resource import IdenticonResource
1924 from .preview_url_resource import PreviewUrlResource
2025 from .filepath import MediaFilePaths
21
22 from twisted.web.resource import Resource
23
2426 from .thumbnailer import Thumbnailer
2527
2628 from synapse.http.matrixfederationclient import MatrixFederationHttpClient
2729 from synapse.util.stringutils import random_string
28 from synapse.api.errors import SynapseError
29
30 from twisted.internet import defer, threads
30 from synapse.api.errors import SynapseError, HttpResponseException, \
31 NotFoundError
3132
3233 from synapse.util.async import Linearizer
3334 from synapse.util.stringutils import is_ascii
156157 try:
157158 length, headers = yield self.client.get_file(
158159 server_name, request_path, output_stream=f,
159 max_size=self.max_upload_size,
160 max_size=self.max_upload_size, args={
161 # tell the remote server to 404 if it doesn't
162 # recognise the server_name, to make sure we don't
163 # end up with a routing loop.
164 "allow_remote": "false",
165 }
160166 )
161 except Exception as e:
162 logger.warn("Failed to fetch remoted media %r", e)
163 raise SynapseError(502, "Failed to fetch remoted media")
167 except twisted.internet.error.DNSLookupError as e:
168 logger.warn("DNS error fetching remote media %s/%s: %r",
169 server_name, media_id, e)
170 raise NotFoundError()
171
172 except HttpResponseException as e:
173 logger.warn("HTTP error fetching remote media %s/%s: %s",
174 server_name, media_id, e.response)
175 if e.code == twisted.web.http.NOT_FOUND:
176 raise SynapseError.from_http_response_exception(e)
177 raise SynapseError(502, "Failed to fetch remote media")
178
179 except SynapseError:
180 logger.exception("Failed to fetch remote media %s/%s",
181 server_name, media_id)
182 raise
183
184 except Exception:
185 logger.exception("Failed to fetch remote media %s/%s",
186 server_name, media_id)
187 raise SynapseError(502, "Failed to fetch remote media")
164188
165189 media_type = headers["Content-Type"][0]
166190 time_now_ms = self.clock.time_msec()
176176
177177 @defer.inlineCallbacks
178178 def compute_event_context(self, event, old_state=None):
179 """ Fills out the context with the `current state` of the graph. The
180 `current state` here is defined to be the state of the event graph
181 just before the event - i.e. it never includes `event`
182
183 If `event` has `auth_events` then this will also fill out the
184 `auth_events` field on `context` from the `current_state`.
179 """Build an EventContext structure for the event.
185180
186181 Args:
187 event (EventBase)
182 event (synapse.events.EventBase):
188183 Returns:
189 an EventContext
184 synapse.events.snapshot.EventContext:
190185 """
191186 context = EventContext()
192187
199194 (s.type, s.state_key): s.event_id for s in old_state
200195 }
201196 if event.is_state():
202 context.current_state_events = dict(context.prev_state_ids)
197 context.current_state_ids = dict(context.prev_state_ids)
203198 key = (event.type, event.state_key)
204 context.current_state_events[key] = event.event_id
199 context.current_state_ids[key] = event.event_id
205200 else:
206 context.current_state_events = context.prev_state_ids
201 context.current_state_ids = context.prev_state_ids
207202 else:
208203 context.current_state_ids = {}
209204 context.prev_state_ids = {}
7272 def __setattr__(self, name, value):
7373 setattr(self.txn, name, value)
7474
75 def __iter__(self):
76 return self.txn.__iter__()
77
7578 def execute(self, sql, *args):
7679 self._do_execute(self.txn.execute, sql, *args)
7780
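Delegating `__iter__` to the underlying cursor is what lets the later hunks replace `txn.fetchall()` with direct iteration over `txn`: DB-API cursors stream rows one at a time, so large result sets are not materialised into a list first. The same pattern with a plain `sqlite3` cursor:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (entity TEXT, stream_id INTEGER)")
cur.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", 1), ("a", 3), ("b", 2)],
)
cur.execute("SELECT entity, max(stream_id) FROM events GROUP BY entity")
# Iterating the cursor consumes rows lazily, instead of building the
# whole result list with fetchall() before processing it.
cache = {row[0]: int(row[1]) for row in cur}
cur.close()
conn.close()
```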
131134
132135 def interval(self, interval_duration, limit=3):
133136 counters = []
134 for name, (count, cum_time) in self.current_counters.items():
137 for name, (count, cum_time) in self.current_counters.iteritems():
135138 prev_count, prev_time = self.previous_counters.get(name, (0, 0))
136139 counters.append((
137140 (cum_time - prev_time) / interval_duration,
356359 """
357360 col_headers = list(intern(column[0]) for column in cursor.description)
358361 results = list(
359 dict(zip(col_headers, row)) for row in cursor.fetchall()
362 dict(zip(col_headers, row)) for row in cursor
360363 )
361364 return results
362365
564567 @staticmethod
565568 def _simple_select_onecol_txn(txn, table, keyvalues, retcol):
566569 if keyvalues:
567 where = "WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.keys())
570 where = "WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.iterkeys())
568571 else:
569572 where = ""
570573
578581
579582 txn.execute(sql, keyvalues.values())
580583
581 return [r[0] for r in txn.fetchall()]
584 return [r[0] for r in txn]
582585
583586 def _simple_select_onecol(self, table, keyvalues, retcol,
584587 desc="_simple_select_onecol"):
711714 )
712715 values.extend(iterable)
713716
714 for key, value in keyvalues.items():
717 for key, value in keyvalues.iteritems():
715718 clauses.append("%s = ?" % (key,))
716719 values.append(value)
717720
752755 @staticmethod
753756 def _simple_update_one_txn(txn, table, keyvalues, updatevalues):
754757 if keyvalues:
755 where = "WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.keys())
758 where = "WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.iterkeys())
756759 else:
757760 where = ""
758761
838841 )
839842
840843 return txn.execute(sql, keyvalues.values())
844
845 def _simple_delete_many(self, table, column, iterable, keyvalues, desc):
846 return self.runInteraction(
847 desc, self._simple_delete_many_txn, table, column, iterable, keyvalues
848 )
849
850 @staticmethod
851 def _simple_delete_many_txn(txn, table, column, iterable, keyvalues):
852 """Executes a DELETE query on the named table.
853
854 Deletes rows whose `column` value is in `iterable`.
855
856 Args:
857 txn : Transaction object
858 table : string giving the table name
859 column : column name to test for inclusion against `iterable`
860 iterable : list
861 keyvalues : dict of column names and values to select the rows with
862 """
863 if not iterable:
864 return
865
866 sql = "DELETE FROM %s" % table
867
868 clauses = []
869 values = []
870 clauses.append(
871 "%s IN (%s)" % (column, ",".join("?" for _ in iterable))
872 )
873 values.extend(iterable)
874
875 for key, value in keyvalues.iteritems():
876 clauses.append("%s = ?" % (key,))
877 values.append(value)
878
879 if clauses:
880 sql = "%s WHERE %s" % (
881 sql,
882 " AND ".join(clauses),
883 )
884 return txn.execute(sql, values)
841885
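The SQL built by `_simple_delete_many_txn` above uses one `?` placeholder per member of `iterable` plus an equality clause per keyvalue, so no value is ever interpolated into the SQL string itself. The string-building part in isolation:

```python
def build_delete_many_sql(table, column, iterable, keyvalues):
    # One placeholder per IN-list member, then equality clauses for
    # each keyvalue; values travel separately for parameterisation.
    clauses = ["%s IN (%s)" % (column, ",".join("?" for _ in iterable))]
    values = list(iterable)
    for key, value in keyvalues.items():
        clauses.append("%s = ?" % (key,))
        values.append(value)
    sql = "DELETE FROM %s WHERE %s" % (table, " AND ".join(clauses))
    return sql, values
```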
842886 def _get_cache_dict(self, db_conn, table, entity_column, stream_column,
843887 max_value, limit=100000):
859903
860904 txn = db_conn.cursor()
861905 txn.execute(sql, (int(max_value),))
862 rows = txn.fetchall()
863 txn.close()
864906
865907 cache = {
866908 row[0]: int(row[1])
867 for row in rows
909 for row in txn
868910 }
869911
912 txn.close()
913
870914 if cache:
871 min_val = min(cache.values())
915 min_val = min(cache.itervalues())
872916 else:
873917 min_val = max_value
874918
181181 txn.execute(sql, (user_id, stream_id))
182182
183183 global_account_data = {
184 row[0]: json.loads(row[1]) for row in txn.fetchall()
184 row[0]: json.loads(row[1]) for row in txn
185185 }
186186
187187 sql = (
192192 txn.execute(sql, (user_id, stream_id))
193193
194194 account_data_by_room = {}
195 for row in txn.fetchall():
195 for row in txn:
196196 room_account_data = account_data_by_room.setdefault(row[0], {})
197197 room_account_data[row[1]] = json.loads(row[2])
198198
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 import synapse.util.async
1415
1516 from ._base import SQLBaseStore
1617 from . import engines
8384 self._background_update_performance = {}
8485 self._background_update_queue = []
8586 self._background_update_handlers = {}
86 self._background_update_timer = None
8787
8888 @defer.inlineCallbacks
8989 def start_doing_background_updates(self):
90 assert self._background_update_timer is None, \
91 "background updates already running"
92
9390 logger.info("Starting background schema updates")
9491
9592 while True:
96 sleep = defer.Deferred()
97 self._background_update_timer = self._clock.call_later(
98 self.BACKGROUND_UPDATE_INTERVAL_MS / 1000., sleep.callback, None
99 )
100 try:
101 yield sleep
102 finally:
103 self._background_update_timer = None
93 yield synapse.util.async.sleep(
94 self.BACKGROUND_UPDATE_INTERVAL_MS / 1000.)
10495
10596 try:
10697 result = yield self.do_next_background_update(
177177 )
178178 txn.execute(sql, (user_id,))
179179 message_json = ujson.dumps(messages_by_device["*"])
180 for row in txn.fetchall():
180 for row in txn:
181181 # Add the message for all devices for this user on this
182182 # server.
183183 device = row[0]
194194 # TODO: Maybe this needs to be done in batches if there are
195195 # too many local devices for a given user.
196196 txn.execute(sql, [user_id] + devices)
197 for row in txn.fetchall():
197 for row in txn:
198198 # Only insert into the local inbox if the device exists on
199199 # this server
200200 device = row[0]
250250 user_id, device_id, last_stream_id, current_stream_id, limit
251251 ))
252252 messages = []
253 for row in txn.fetchall():
253 for row in txn:
254254 stream_pos = row[0]
255255 messages.append(ujson.loads(row[1]))
256256 if len(messages) < limit:
339339 " ORDER BY stream_id ASC"
340340 )
341341 txn.execute(sql, (last_pos, upper_pos))
342 rows.extend(txn.fetchall())
342 rows.extend(txn)
343343
344344 return rows
345345
356356 """
357357 Args:
358358 destination(str): The name of the remote server.
359 last_stream_id(int): The last position of the device message stream
359 last_stream_id(int|long): The last position of the device message stream
360360 that the server sent up to.
361 current_stream_id(int): The current position of the device
361 current_stream_id(int|long): The current position of the device
362362 message stream.
363363 Returns:
364 Deferred ([dict], int): List of messages for the device and where
364 Deferred ([dict], int|long): List of messages for the device and where
365365 in the stream the messages got to.
366366 """
367367
383383 destination, last_stream_id, current_stream_id, limit
384384 ))
385385 messages = []
386 for row in txn.fetchall():
386 for row in txn:
387387 stream_pos = row[0]
388388 messages.append(ujson.loads(row[1]))
389389 if len(messages) < limit:
107107 desc="delete_device",
108108 )
109109
110 def delete_devices(self, user_id, device_ids):
111 """Deletes several devices.
112
113 Args:
114 user_id (str): The ID of the user which owns the devices
115 device_ids (list): The IDs of the devices to delete
116 Returns:
117 defer.Deferred
118 """
119 return self._simple_delete_many(
120 table="devices",
121 column="device_id",
122 iterable=device_ids,
123 keyvalues={"user_id": user_id},
124 desc="delete_devices",
125 )
126
110127 def update_device(self, user_id, device_id, new_display_name=None):
111128 """Update a device.
112129
290307 """Get stream of updates to send to remote servers
291308
292309 Returns:
293 (now_stream_id, [ { updates }, .. ])
310 (int, list[dict]): current stream id and list of updates
294311 """
295312 now_stream_id = self._device_list_id_gen.get_current_token()
296313
311328 SELECT user_id, device_id, max(stream_id) FROM device_lists_outbound_pokes
312329 WHERE destination = ? AND ? < stream_id AND stream_id <= ? AND sent = ?
313330 GROUP BY user_id, device_id
331 LIMIT 20
314332 """
315333 txn.execute(
316334 sql, (destination, from_stream_id, now_stream_id, False)
317335 )
318 rows = txn.fetchall()
319
320 if not rows:
336
337 # maps (user_id, device_id) -> stream_id
338 query_map = {(r[0], r[1]): r[2] for r in txn}
339 if not query_map:
321340 return (now_stream_id, [])
322341
323 # maps (user_id, device_id) -> stream_id
324 query_map = {(r[0], r[1]): r[2] for r in rows}
342 if len(query_map) >= 20:
343 now_stream_id = max(stream_id for stream_id in query_map.itervalues())
344
325345 devices = self._get_e2e_device_keys_txn(
326346 txn, query_map.keys(), include_all_devices=True
327347 )
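The hunk above caps each batch of device-list pokes at 20 rows and, when the cap is hit, reports the highest stream id actually included so the next call resumes where this one stopped. A small Python 3 sketch of that clamping logic (function name and row shapes are illustrative; the real code does the limiting with SQL `LIMIT 20`):

```python
def batch_pokes(pokes, from_stream_id, now_stream_id, limit=20):
    """pokes: (user_id, device_id, stream_id) rows already filtered to
    from_stream_id < stream_id <= now_stream_id and deduplicated.

    Returns (stream_id_to_report, batch). When the batch is full we
    report the highest stream id in the batch so the caller resumes from
    there rather than skipping unsent pokes.
    """
    batch = pokes[:limit]
    if len(batch) >= limit:
        now_stream_id = max(stream_id for _, _, stream_id in batch)
    return now_stream_id, batch

pokes = [("@a:hs", "D1", 5), ("@a:hs", "D2", 7), ("@b:hs", "D1", 9)]
clamped_id, batch = batch_pokes(pokes, from_stream_id=4, now_stream_id=9, limit=2)
```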
1313 # limitations under the License.
1414 from twisted.internet import defer
1515
16 from synapse.api.errors import SynapseError
17
1618 from canonicaljson import encode_canonical_json
1719 import ujson as json
1820
119121
120122 return result
121123
124 @defer.inlineCallbacks
122125 def add_e2e_one_time_keys(self, user_id, device_id, time_now, key_list):
126 """Insert some new one time keys for a device.
127
128 Checks if any of the keys have already been inserted; if so, verifies
129 that they match. If an existing key differs, we raise an error.
130 """
131
132 # First we check if we have already persisted any of the keys.
133 rows = yield self._simple_select_many_batch(
134 table="e2e_one_time_keys_json",
135 column="key_id",
136 iterable=[key_id for _, key_id, _ in key_list],
137 retcols=("algorithm", "key_id", "key_json",),
138 keyvalues={
139 "user_id": user_id,
140 "device_id": device_id,
141 },
142 desc="add_e2e_one_time_keys_check",
143 )
144
145 existing_key_map = {
146 (row["algorithm"], row["key_id"]): row["key_json"] for row in rows
147 }
148
149 new_keys = [] # Keys that we need to insert
150 for algorithm, key_id, json_bytes in key_list:
151 ex_bytes = existing_key_map.get((algorithm, key_id), None)
152 if ex_bytes:
153 if json_bytes != ex_bytes:
154 raise SynapseError(
155 400, "One time key with key_id %r already exists" % (key_id,)
156 )
157 else:
158 new_keys.append((algorithm, key_id, json_bytes))
159
123160 def _add_e2e_one_time_keys(txn):
124 for (algorithm, key_id, json_bytes) in key_list:
125 self._simple_upsert_txn(
126 txn, table="e2e_one_time_keys_json",
127 keyvalues={
161 # We are protected from race between lookup and insertion due to
162 # a unique constraint. If there is a race of two calls to
163 # `add_e2e_one_time_keys` then they'll conflict and we will only
164 # insert one set.
165 self._simple_insert_many_txn(
166 txn, table="e2e_one_time_keys_json",
167 values=[
168 {
128169 "user_id": user_id,
129170 "device_id": device_id,
130171 "algorithm": algorithm,
131172 "key_id": key_id,
132 },
133 values={
134173 "ts_added_ms": time_now,
135174 "key_json": json_bytes,
136175 }
137 )
138 return self.runInteraction(
139 "add_e2e_one_time_keys", _add_e2e_one_time_keys
176 for algorithm, key_id, json_bytes in new_keys
177 ],
178 )
179 yield self.runInteraction(
180 "add_e2e_one_time_keys_insert", _add_e2e_one_time_keys
140181 )
141182
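The one-time-key change above replaces a blind upsert with check-then-insert: identical re-uploads become no-ops, conflicting re-uploads raise, and a unique constraint guards the race window between the check and the insert. A self-contained Python 3 sketch using sqlite3 (table and names are illustrative, and the lookup is simplified to select the whole device's keys):

```python
import sqlite3

class OneTimeKeyError(Exception):
    pass

def add_one_time_keys(conn, user_id, device_id, key_list):
    """key_list: [(algorithm, key_id, key_json)]."""
    rows = conn.execute(
        "SELECT algorithm, key_id, key_json FROM otk"
        " WHERE user_id = ? AND device_id = ?",
        (user_id, device_id),
    ).fetchall()
    existing = {(a, k): j for a, k, j in rows}
    new_keys = []
    for algorithm, key_id, key_json in key_list:
        ex = existing.get((algorithm, key_id))
        if ex is not None:
            if ex != key_json:
                raise OneTimeKeyError("key_id %r already exists" % key_id)
        else:
            new_keys.append((user_id, device_id, algorithm, key_id, key_json))
    # The UNIQUE constraint makes concurrent inserts conflict rather
    # than silently duplicating keys.
    conn.executemany("INSERT INTO otk VALUES (?, ?, ?, ?, ?)", new_keys)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE otk (user_id, device_id, algorithm, key_id, key_json,"
    " UNIQUE (user_id, device_id, algorithm, key_id))"
)
add_one_time_keys(conn, "@a:hs", "D1", [("curve25519", "AAAA", "{}")])
add_one_time_keys(conn, "@a:hs", "D1", [("curve25519", "AAAA", "{}")])  # no-op
```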
142183 def count_e2e_one_time_keys(self, user_id, device_id):
152193 )
153194 txn.execute(sql, (user_id, device_id))
154195 result = {}
155 for algorithm, key_count in txn.fetchall():
196 for algorithm, key_count in txn:
156197 result[algorithm] = key_count
157198 return result
158199 return self.runInteraction(
173214 user_result = result.setdefault(user_id, {})
174215 device_result = user_result.setdefault(device_id, {})
175216 txn.execute(sql, (user_id, device_id, algorithm))
176 for key_id, key_json in txn.fetchall():
217 for key_id, key_json in txn:
177218 device_result[algorithm + ":" + key_id] = key_json
178219 delete.append((user_id, device_id, algorithm, key_id))
179220 sql = (
7373 base_sql % (",".join(["?"] * len(chunk)),),
7474 chunk
7575 )
76 new_front.update([r[0] for r in txn.fetchall()])
76 new_front.update([r[0] for r in txn])
7777
7878 new_front -= results
7979
109109
110110 txn.execute(sql, (room_id, False,))
111111
112 return dict(txn.fetchall())
112 return dict(txn)
113113
114114 def _get_oldest_events_in_room_txn(self, txn, room_id):
115115 return self._simple_select_onecol_txn(
200200 def _update_min_depth_for_room_txn(self, txn, room_id, depth):
201201 min_depth = self._get_min_depth_interaction(txn, room_id)
202202
203 do_insert = depth < min_depth if min_depth else True
204
205 if do_insert:
206 self._simple_upsert_txn(
207 txn,
208 table="room_depth",
209 keyvalues={
210 "room_id": room_id,
211 },
212 values={
213 "min_depth": depth,
214 },
215 )
203 if min_depth and depth >= min_depth:
204 return
205
206 self._simple_upsert_txn(
207 txn,
208 table="room_depth",
209 keyvalues={
210 "room_id": room_id,
211 },
212 values={
213 "min_depth": depth,
214 },
215 )
216216
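The refactored `_update_min_depth_for_room_txn` inverts the old `do_insert` expression into an early return: only write when there is no recorded min_depth or the new depth is strictly lower. A dict-backed Python 3 sketch of the same guard (the dict stands in for the `room_depth` table):

```python
def update_min_depth(room_depth, room_id, depth):
    """Record `depth` as the room's min_depth only if it lowers it."""
    min_depth = room_depth.get(room_id)
    if min_depth and depth >= min_depth:
        return  # existing min_depth is already <= depth; nothing to do
    room_depth[room_id] = depth
```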
217217 def _handle_mult_prev_events(self, txn, events):
218218 """
333333
334334 def get_forward_extremeties_for_room_txn(txn):
335335 txn.execute(sql, (stream_ordering, room_id))
336 rows = txn.fetchall()
337 return [event_id for event_id, in rows]
336 return [event_id for event_id, in txn]
338337
339338 return self.runInteraction(
340339 "get_forward_extremeties_for_room",
435434 (room_id, event_id, False, limit - len(event_results))
436435 )
437436
438 for row in txn.fetchall():
437 for row in txn:
439438 if row[1] not in event_results:
440439 queue.put((-row[0], row[1]))
441440
481480 (room_id, event_id, False, limit - len(event_results))
482481 )
483482
484 for e_id, in txn.fetchall():
483 for e_id, in txn:
485484 new_front.add(e_id)
486485
487486 new_front -= earliest_events
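The loops above walk the event graph breadth-first: look up the prev-events of the current frontier, subtract what has already been seen, and repeat until the limit is reached. A minimal in-memory Python 3 sketch of that traversal (the real code pages through the `event_edges` table in SQL chunks):

```python
def walk_back(prev_events, seeds, limit):
    """prev_events: dict of event_id -> list of prev event_ids (roughly
    the event_edges table). Collects up to `limit` ancestors of `seeds`."""
    results = set()
    front = set(seeds)
    while front and len(results) < limit:
        new_front = set()
        for event_id in front:
            for prev_id in prev_events.get(event_id, ()):
                new_front.add(prev_id)
        new_front -= results  # don't revisit events we've already seen
        results |= new_front
        front = new_front
    return results

edges = {"$a": ["$b", "$d"], "$b": ["$c"]}
ancestors = walk_back(edges, seeds=["$a"], limit=10)
```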
205205 " stream_ordering >= ? AND stream_ordering <= ?"
206206 )
207207 txn.execute(sql, (min_stream_ordering, max_stream_ordering))
208 return [r[0] for r in txn.fetchall()]
208 return [r[0] for r in txn]
209209 ret = yield self.runInteraction("get_push_action_users_in_range", f)
210210 defer.returnValue(ret)
211211
3333 from collections import deque, namedtuple, OrderedDict
3434 from functools import wraps
3535
36 import synapse
3736 import synapse.metrics
38
3937
4038 import logging
4139 import math
4240 import ujson as json
41
42 # these are only included to make the type annotations work
43 from synapse.events import EventBase # noqa: F401
44 from synapse.events.snapshot import EventContext # noqa: F401
4345
4446 logger = logging.getLogger(__name__)
4547
8183
8284 def add_to_queue(self, room_id, events_and_contexts, backfilled):
8385 """Add events to the queue, with the given persist_event options.
86
87 Args:
88 room_id (str):
89 events_and_contexts (list[(EventBase, EventContext)]):
90 backfilled (bool):
8491 """
8592 queue = self._event_persist_queues.setdefault(room_id, deque())
8693 if queue:
209216 partitioned.setdefault(event.room_id, []).append((event, ctx))
210217
211218 deferreds = []
212 for room_id, evs_ctxs in partitioned.items():
219 for room_id, evs_ctxs in partitioned.iteritems():
213220 d = preserve_fn(self._event_persist_queue.add_to_queue)(
214221 room_id, evs_ctxs,
215222 backfilled=backfilled,
216223 )
217224 deferreds.append(d)
218225
219 for room_id in partitioned.keys():
226 for room_id in partitioned:
220227 self._maybe_start_persisting(room_id)
221228
222229 return preserve_context_over_deferred(
226233 @defer.inlineCallbacks
227234 @log_function
228235 def persist_event(self, event, context, backfilled=False):
236 """
237
238 Args:
239 event (EventBase):
240 context (EventContext):
241 backfilled (bool):
242
243 Returns:
244 Deferred: resolves to (int, int): the stream ordering of ``event``,
245 and the stream ordering of the latest persisted event
246 """
229247 deferred = self._event_persist_queue.add_to_queue(
230248 event.room_id, [(event, context)],
231249 backfilled=backfilled,
252270 @defer.inlineCallbacks
253271 def _persist_events(self, events_and_contexts, backfilled=False,
254272 delete_existing=False):
273 """Persist events to db
274
275 Args:
276 events_and_contexts (list[(EventBase, EventContext)]):
277 backfilled (bool):
278 delete_existing (bool):
279
280 Returns:
281 Deferred: resolves when the events have been persisted
282 """
255283 if not events_and_contexts:
256284 return
257285
294322 (event, context)
295323 )
296324
297 for room_id, ev_ctx_rm in events_by_room.items():
325 for room_id, ev_ctx_rm in events_by_room.iteritems():
298326 # Work out new extremities by recursively adding and removing
299327 # the new events.
300328 latest_event_ids = yield self.get_latest_event_ids_in_room(
399427 # Now we need to work out the different state sets for
400428 # each state extremities
401429 state_sets = []
430 state_groups = set()
402431 missing_event_ids = []
403432 was_updated = False
404433 for event_id in new_latest_event_ids:
408437 if event_id == ev.event_id:
409438 if ctx.current_state_ids is None:
410439 raise Exception("Unknown current state")
411 state_sets.append(ctx.current_state_ids)
412 if ctx.delta_ids or hasattr(ev, "state_key"):
413 was_updated = True
440
441 # If we've already seen the state group don't bother adding
442 # it to the state sets again
443 if ctx.state_group not in state_groups:
444 state_sets.append(ctx.current_state_ids)
445 if ctx.delta_ids or hasattr(ev, "state_key"):
446 was_updated = True
447 if ctx.state_group:
448 # Add this as a seen state group (if it has a state
449 # group)
450 state_groups.add(ctx.state_group)
414451 break
415452 else:
416453 # If we couldn't find it, then we'll need to pull
424461 missing_event_ids,
425462 )
426463
427 groups = set(event_to_groups.values())
428 group_to_state = yield self._get_state_for_groups(groups)
429
430 state_sets.extend(group_to_state.values())
464 groups = set(event_to_groups.itervalues()) - state_groups
465
466 if groups:
467 group_to_state = yield self._get_state_for_groups(groups)
468 state_sets.extend(group_to_state.itervalues())
431469
432470 if not new_latest_event_ids:
433471 current_state = {}
434472 elif was_updated:
435 current_state = yield resolve_events(
436 state_sets,
437 state_map_factory=lambda ev_ids: self.get_events(
438 ev_ids, get_prev_content=False, check_redacted=False,
439 ),
440 )
473 if len(state_sets) == 1:
474 # If there is only one state set, then we know what the current
475 # state is.
476 current_state = state_sets[0]
477 else:
478 # We work out the current state by passing the state sets to the
479 # state resolution algorithm. It may ask for some events, including
480 # the events we have yet to persist, so we need a slightly more
481 # complicated event lookup function than simply looking the events
482 # up in the db.
483 events_map = {ev.event_id: ev for ev, _ in events_context}
484
485 @defer.inlineCallbacks
486 def get_events(ev_ids):
487 # We get the events by first looking at the list of events we
488 # are trying to persist, and then fetching the rest from the DB.
489 db = []
490 to_return = {}
491 for ev_id in ev_ids:
492 ev = events_map.get(ev_id, None)
493 if ev:
494 to_return[ev_id] = ev
495 else:
496 db.append(ev_id)
497
498 if db:
499 evs = yield self.get_events(
500 db, get_prev_content=False, check_redacted=False,
501 )
502 to_return.update(evs)
503 defer.returnValue(to_return)
504
505 current_state = yield resolve_events(
506 state_sets,
507 state_map_factory=get_events,
508 )
441509 else:
442510 return
443511
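The `get_events` helper above must be able to serve events that are still in flight, since state resolution may ask for events that have not been persisted yet. A Python 3 sketch of the two-tier lookup, hitting the backing store only for ids missing from the in-memory map (names are illustrative, and the sketch is synchronous where the real code uses Deferreds):

```python
def make_event_lookup(events_map, db_fetch):
    """events_map: events about to be persisted, keyed by event_id.
    db_fetch: callable taking a list of ids, returning {id: event}."""
    def get_events(ev_ids):
        to_return = {}
        missing = []
        for ev_id in ev_ids:
            ev = events_map.get(ev_id)
            if ev is not None:
                to_return[ev_id] = ev  # prefer the in-flight copy
            else:
                missing.append(ev_id)
        if missing:
            to_return.update(db_fetch(missing))
        return to_return
    return get_events

calls = []
def db_fetch(ids):
    calls.append(list(ids))
    return {i: "db:" + i for i in ids}

lookup = make_event_lookup({"$a": "mem:$a"}, db_fetch)
result = lookup(["$a", "$b"])
```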
444 existing_state_rows = yield self._simple_select_list(
445 table="current_state_events",
446 keyvalues={"room_id": room_id},
447 retcols=["event_id", "type", "state_key"],
448 desc="_calculate_state_delta",
449 )
450
451 existing_events = set(row["event_id"] for row in existing_state_rows)
512 existing_state = yield self.get_current_state_ids(room_id)
513
514 existing_events = set(existing_state.itervalues())
452515 new_events = set(ev_id for ev_id in current_state.itervalues())
453516 changed_events = existing_events ^ new_events
454517
456519 return
457520
458521 to_delete = {
459 (row["type"], row["state_key"]): row["event_id"]
460 for row in existing_state_rows
461 if row["event_id"] in changed_events
522 key: ev_id for key, ev_id in existing_state.iteritems()
523 if ev_id in changed_events
462524 }
463525 events_to_insert = (new_events - existing_events)
464526 to_insert = {
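The delta computation above compares the existing and new `(type, state_key) -> event_id` maps by their event-id sets and keeps only the changed entries. A standalone Python 3 sketch of that calculation:

```python
def calculate_state_delta(existing_state, current_state):
    """Both args map (type, state_key) -> event_id. Returns (to_delete,
    to_insert): entries to remove from and add to current_state_events."""
    existing_events = set(existing_state.values())
    new_events = set(current_state.values())
    changed_events = existing_events ^ new_events  # symmetric difference
    to_delete = {
        key: ev_id for key, ev_id in existing_state.items()
        if ev_id in changed_events
    }
    events_to_insert = new_events - existing_events
    to_insert = {
        key: ev_id for key, ev_id in current_state.items()
        if ev_id in events_to_insert
    }
    return to_delete, to_insert

existing = {("m.room.member", "@a:hs"): "$1", ("m.room.name", ""): "$2"}
current = {("m.room.member", "@a:hs"): "$1", ("m.room.name", ""): "$3"}
to_delete, to_insert = calculate_state_delta(existing, current)
```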
534596 and the rejections table. Things reading from those table will need to check
535597 whether the event was rejected.
536598
537 If delete_existing is True then existing events will be purged from the
538 database before insertion. This is useful when retrying due to IntegrityError.
539 """
599 Args:
600 txn (twisted.enterprise.adbapi.Connection): db connection
601 events_and_contexts (list[(EventBase, EventContext)]):
602 events to persist
603 backfilled (bool): True if the events were backfilled
604 delete_existing (bool): True to purge existing table rows for the
605 events from the database. This is useful when retrying due to
606 IntegrityError.
607 current_state_for_room (dict[str, (list[str], list[str])]):
608 The current-state delta for each room. For each room, a tuple
609 (to_delete, to_insert), being a list of event ids to be removed
610 from the current state, and a list of event ids to be added to
611 the current state.
612 new_forward_extremeties (dict[str, list[str]]):
613 The new forward extremities for each room. For each room, a
614 list of the event ids which are the forward extremities.
615
616 """
617 self._update_current_state_txn(txn, current_state_for_room)
618
540619 max_stream_order = events_and_contexts[-1][0].internal_metadata.stream_ordering
541 for room_id, current_state_tuple in current_state_for_room.iteritems():
620 self._update_forward_extremities_txn(
621 txn,
622 new_forward_extremities=new_forward_extremeties,
623 max_stream_order=max_stream_order,
624 )
625
626 # Ensure that we don't have the same event twice.
627 events_and_contexts = self._filter_events_and_contexts_for_duplicates(
628 events_and_contexts,
629 )
630
631 self._update_room_depths_txn(
632 txn,
633 events_and_contexts=events_and_contexts,
634 backfilled=backfilled,
635 )
636
637 # _update_outliers_txn filters out any events which have already been
638 # persisted, and returns the filtered list.
639 events_and_contexts = self._update_outliers_txn(
640 txn,
641 events_and_contexts=events_and_contexts,
642 )
643
644 # From this point onwards the events are only events that we haven't
645 # seen before.
646
647 if delete_existing:
648 # For paranoia reasons, we go and delete all the existing entries
649 # for these events so we can reinsert them.
650 # This gets around any problems with some tables already having
651 # entries.
652 self._delete_existing_rows_txn(
653 txn,
654 events_and_contexts=events_and_contexts,
655 )
656
657 self._store_event_txn(
658 txn,
659 events_and_contexts=events_and_contexts,
660 )
661
662 # Insert into the state_groups, state_groups_state, and
663 # event_to_state_groups tables.
664 self._store_mult_state_groups_txn(txn, events_and_contexts)
665
666 # _store_rejected_events_txn filters out any events which were
667 # rejected, and returns the filtered list.
668 events_and_contexts = self._store_rejected_events_txn(
669 txn,
670 events_and_contexts=events_and_contexts,
671 )
672
673 # From this point onwards the events are only ones that weren't
674 # rejected.
675
676 self._update_metadata_tables_txn(
677 txn,
678 events_and_contexts=events_and_contexts,
679 backfilled=backfilled,
680 )
681
682 def _update_current_state_txn(self, txn, state_delta_by_room):
683 for room_id, current_state_tuple in state_delta_by_room.iteritems():
542684 to_delete, to_insert = current_state_tuple
543685 txn.executemany(
544686 "DELETE FROM current_state_events WHERE event_id = ?",
584726 txn, self.get_users_in_room, (room_id,)
585727 )
586728
587 for room_id, new_extrem in new_forward_extremeties.items():
729 self._invalidate_cache_and_stream(
730 txn, self.get_current_state_ids, (room_id,)
731 )
732
733 def _update_forward_extremities_txn(self, txn, new_forward_extremities,
734 max_stream_order):
735 for room_id, new_extrem in new_forward_extremities.iteritems():
588736 self._simple_delete_txn(
589737 txn,
590738 table="event_forward_extremities",
602750 "event_id": ev_id,
603751 "room_id": room_id,
604752 }
605 for room_id, new_extrem in new_forward_extremeties.items()
753 for room_id, new_extrem in new_forward_extremities.iteritems()
606754 for ev_id in new_extrem
607755 ],
608756 )
619767 "event_id": event_id,
620768 "stream_ordering": max_stream_order,
621769 }
622 for room_id, new_extrem in new_forward_extremeties.items()
770 for room_id, new_extrem in new_forward_extremities.iteritems()
623771 for event_id in new_extrem
624772 ]
625773 )
626774
627 # Ensure that we don't have the same event twice.
628 # Pick the earliest non-outlier if there is one, else the earliest one.
775 @classmethod
776 def _filter_events_and_contexts_for_duplicates(cls, events_and_contexts):
777 """Ensure that we don't have the same event twice.
778
779 Pick the earliest non-outlier if there is one, else the earliest one.
780
781 Args:
782 events_and_contexts (list[(EventBase, EventContext)]):
783 Returns:
784 list[(EventBase, EventContext)]: filtered list
785 """
629786 new_events_and_contexts = OrderedDict()
630787 for event, context in events_and_contexts:
631788 prev_event_context = new_events_and_contexts.get(event.event_id)
638795 new_events_and_contexts[event.event_id] = (event, context)
639796 else:
640797 new_events_and_contexts[event.event_id] = (event, context)
641
642 events_and_contexts = new_events_and_contexts.values()
643
798 return new_events_and_contexts.values()
799
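`_filter_events_and_contexts_for_duplicates` keeps one entry per event id, preferring the earliest non-outlier over an earlier outlier copy. A Python 3 sketch where each event is reduced to an `(event_id, is_outlier)` tuple for illustration:

```python
from collections import OrderedDict

def filter_duplicates(events_and_contexts):
    """Keep one (event, context) per event_id: the earliest non-outlier
    if there is one, else the earliest occurrence."""
    new = OrderedDict()
    for event, context in events_and_contexts:
        event_id, is_outlier = event
        prev = new.get(event_id)
        if prev is not None:
            (_, prev_outlier), _ = prev
            # Replace only if we previously kept an outlier and this
            # copy is the real (non-outlier) event.
            if prev_outlier and not is_outlier:
                new[event_id] = (event, context)
        else:
            new[event_id] = (event, context)
    return list(new.values())

evs = [
    (("$x", True), "ctx1"),
    (("$x", False), "ctx2"),
    (("$y", False), "ctx3"),
    (("$y", False), "ctx4"),
]
deduped = filter_duplicates(evs)
```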
800 def _update_room_depths_txn(self, txn, events_and_contexts, backfilled):
801 """Update min_depth for each room
802
803 Args:
804 txn (twisted.enterprise.adbapi.Connection): db connection
805 events_and_contexts (list[(EventBase, EventContext)]): events
806 we are persisting
807 backfilled (bool): True if the events were backfilled
808 """
644809 depth_updates = {}
645810 for event, context in events_and_contexts:
646811 # Remove any existing cache entries for the event_ids
656821 event.depth, depth_updates.get(event.room_id, event.depth)
657822 )
658823
659 for room_id, depth in depth_updates.items():
824 for room_id, depth in depth_updates.iteritems():
660825 self._update_min_depth_for_room_txn(txn, room_id, depth)
661826
827 def _update_outliers_txn(self, txn, events_and_contexts):
828 """Update any outliers with new event info.
829
830 This turns outliers into ex-outliers (unless the new event was
831 rejected).
832
833 Args:
834 txn (twisted.enterprise.adbapi.Connection): db connection
835 events_and_contexts (list[(EventBase, EventContext)]): events
836 we are persisting
837
838 Returns:
839 list[(EventBase, EventContext)] new list, without events which
840 are already in the events table.
841 """
662842 txn.execute(
663843 "SELECT event_id, outlier FROM events WHERE event_id in (%s)" % (
664844 ",".join(["?"] * len(events_and_contexts)),
668848
669849 have_persisted = {
670850 event_id: outlier
671 for event_id, outlier in txn.fetchall()
851 for event_id, outlier in txn
672852 }
673853
674854 to_remove = set()
675855 for event, context in events_and_contexts:
856 if event.event_id not in have_persisted:
857 continue
858
859 to_remove.add(event)
860
676861 if context.rejected:
677862 # If the event is rejected then we don't care if the event
678863 # was an outlier or not.
679 if event.event_id in have_persisted:
680 # If we have already seen the event then ignore it.
681 to_remove.add(event)
682864 continue
683
684 if event.event_id not in have_persisted:
685 continue
686
687 to_remove.add(event)
688865
689866 outlier_persisted = have_persisted[event.event_id]
690867 if not event.internal_metadata.is_outlier() and outlier_persisted:
740917 # event isn't an outlier any more.
741918 self._update_backward_extremeties(txn, [event])
742919
743 events_and_contexts = [
920 return [
744921 ec for ec in events_and_contexts if ec[0] not in to_remove
745922 ]
746923
924 @classmethod
925 def _delete_existing_rows_txn(cls, txn, events_and_contexts):
747926 if not events_and_contexts:
748 # Make sure we don't pass an empty list to functions that expect to
749 # be storing at least one element.
927 # nothing to do here
750928 return
751929
752 # From this point onwards the events are only events that we haven't
753 # seen before.
754
755 def event_dict(event):
756 return {
757 k: v
758 for k, v in event.get_dict().items()
759 if k not in [
760 "redacted",
761 "redacted_because",
762 ]
763 }
764
765 if delete_existing:
766 # For paranoia reasons, we go and delete all the existing entries
767 # for these events so we can reinsert them.
768 # This gets around any problems with some tables already having
769 # entries.
770
771 logger.info("Deleting existing")
772
773 for table in (
930 logger.info("Deleting existing")
931
932 for table in (
774933 "events",
775934 "event_auth",
776935 "event_json",
793952 "redactions",
794953 "room_memberships",
795954 "topics"
796 ):
797 txn.executemany(
798 "DELETE FROM %s WHERE event_id = ?" % (table,),
799 [(ev.event_id,) for ev, _ in events_and_contexts]
800 )
955 ):
956 txn.executemany(
957 "DELETE FROM %s WHERE event_id = ?" % (table,),
958 [(ev.event_id,) for ev, _ in events_and_contexts]
959 )
960
961 def _store_event_txn(self, txn, events_and_contexts):
962 """Insert new events into the event and event_json tables
963
964 Args:
965 txn (twisted.enterprise.adbapi.Connection): db connection
966 events_and_contexts (list[(EventBase, EventContext)]): events
967 we are persisting
968 """
969
970 if not events_and_contexts:
971 # nothing to do here
972 return
973
974 def event_dict(event):
975 d = event.get_dict()
976 d.pop("redacted", None)
977 d.pop("redacted_because", None)
978 return d
801979
802980 self._simple_insert_many_txn(
803981 txn,
8411019 ],
8421020 )
8431021
1022 def _store_rejected_events_txn(self, txn, events_and_contexts):
1023 """Add rows to the 'rejections' table for received events which were
1024 rejected
1025
1026 Args:
1027 txn (twisted.enterprise.adbapi.Connection): db connection
1028 events_and_contexts (list[(EventBase, EventContext)]): events
1029 we are persisting
1030
1031 Returns:
1032 list[(EventBase, EventContext)] new list, without the rejected
1033 events.
1034 """
8441035 # Remove the rejected events from the list now that we've added them
8451036 # to the events table and the events_json table.
8461037 to_remove = set()
8521043 )
8531044 to_remove.add(event)
8541045
855 events_and_contexts = [
1046 return [
8561047 ec for ec in events_and_contexts if ec[0] not in to_remove
8571048 ]
8581049
1050 def _update_metadata_tables_txn(self, txn, events_and_contexts, backfilled):
1051 """Update all the miscellaneous tables for new events
1052
1053 Args:
1054 txn (twisted.enterprise.adbapi.Connection): db connection
1055 events_and_contexts (list[(EventBase, EventContext)]): events
1056 we are persisting
1057 backfilled (bool): True if the events were backfilled
1058 """
1059
8591060 if not events_and_contexts:
860 # Make sure we don't pass an empty list to functions that expect to
861 # be storing at least one element.
1061 # nothing to do here
8621062 return
863
864 # From this point onwards the events are only ones that weren't rejected.
8651063
8661064 for event, context in events_and_contexts:
8671065 # Insert all the push actions into the event_push_actions table.
8901088 for auth_id, _ in event.auth_events
8911089 ],
8921090 )
893
894 # Insert into the state_groups, state_groups_state, and
895 # event_to_state_groups tables.
896 self._store_mult_state_groups_txn(txn, events_and_contexts)
8971091
8981092 # Update the event_forward_extremities, event_backward_extremities and
8991093 # event_edges tables.
9811175 # Prefill the event cache
9821176 self._add_to_cache(txn, events_and_contexts)
9831177
984 if backfilled:
985 # Backfilled events come before the current state so we don't need
986 # to update the current state table
987 return
988
989 return
990
9911178 def _add_to_cache(self, txn, events_and_contexts):
9921179 to_prefill = []
9931180
15961783
15971784 def get_all_new_events_txn(txn):
15981785 sql = (
1599 "SELECT e.stream_ordering, ej.internal_metadata, ej.json, eg.state_group"
1600 " FROM events as e"
1601 " JOIN event_json as ej"
1602 " ON e.event_id = ej.event_id AND e.room_id = ej.room_id"
1603 " LEFT JOIN event_to_state_groups as eg"
1604 " ON e.event_id = eg.event_id"
1605 " WHERE ? < e.stream_ordering AND e.stream_ordering <= ?"
1606 " ORDER BY e.stream_ordering ASC"
1786 "SELECT e.stream_ordering, e.event_id, e.room_id, e.type,"
1787 " state_key, redacts"
1788 " FROM events AS e"
1789 " LEFT JOIN redactions USING (event_id)"
1790 " LEFT JOIN state_events USING (event_id)"
1791 " WHERE ? < stream_ordering AND stream_ordering <= ?"
1792 " ORDER BY stream_ordering ASC"
16071793 " LIMIT ?"
16081794 )
16091795 if have_forward_events:
16291815 forward_ex_outliers = []
16301816
16311817 sql = (
1632 "SELECT -e.stream_ordering, ej.internal_metadata, ej.json,"
1633 " eg.state_group"
1634 " FROM events as e"
1635 " JOIN event_json as ej"
1636 " ON e.event_id = ej.event_id AND e.room_id = ej.room_id"
1637 " LEFT JOIN event_to_state_groups as eg"
1638 " ON e.event_id = eg.event_id"
1639 " WHERE ? > e.stream_ordering AND e.stream_ordering >= ?"
1640 " ORDER BY e.stream_ordering DESC"
1818 "SELECT -e.stream_ordering, e.event_id, e.room_id, e.type,"
1819 " state_key, redacts"
1820 " FROM events AS e"
1821 " LEFT JOIN redactions USING (event_id)"
1822 " LEFT JOIN state_events USING (event_id)"
1823 " WHERE ? > stream_ordering AND stream_ordering >= ?"
1824 " ORDER BY stream_ordering DESC"
16411825 " LIMIT ?"
16421826 )
16431827 if have_backfill_events:
18242008 "state_key": key[1],
18252009 "event_id": state_id,
18262010 }
1827 for key, state_id in curr_state.items()
2011 for key, state_id in curr_state.iteritems()
18282012 ],
18292013 )
18302014
100100 key_ids
101101 Args:
102102 server_name (str): The name of the server.
103 key_ids (list of str): List of key_ids to try and look up.
103 key_ids (iterable[str]): key_ids to try and look up.
104104 Returns:
105 (list of VerifyKey): The verification keys.
105 Deferred: resolves to dict[str, VerifyKey]: map from
106 key_id to verification key.
106107 """
107108 keys = {}
108109 for key_id in key_ids:
355355 ),
356356 (current_version,)
357357 )
358 applied_deltas = [d for d, in txn.fetchall()]
358 applied_deltas = [d for d, in txn]
359359 return current_version, applied_deltas, upgraded
360360
361361 return None
8484 self.presence_stream_cache.entity_has_changed,
8585 state.user_id, stream_id,
8686 )
87 self._invalidate_cache_and_stream(
88 txn, self._get_presence_for_user, (state.user_id,)
87 txn.call_after(
88 self._get_presence_for_user.invalidate, (state.user_id,)
8989 )
9090
9191 # Actually insert new rows
312312 )
313313
314314 txn.execute(sql, (room_id, receipt_type, user_id))
315 results = txn.fetchall()
316
317 if results and topological_ordering:
318 for to, so, _ in results:
315
316 if topological_ordering:
317 for to, so, _ in txn:
319318 if int(to) > topological_ordering:
320319 return False
321320 elif int(to) == topological_ordering and int(so) >= stream_ordering:
208208 " WHERE lower(name) = lower(?)"
209209 )
210210 txn.execute(sql, (user_id,))
211 return dict(txn.fetchall())
211 return dict(txn)
212212
213213 return self.runInteraction("get_users_by_id_case_insensitive", f)
214214
395395 sql % ("AND appservice_id IS NULL",),
396396 (stream_id,)
397397 )
398 return dict(txn.fetchall())
398 return dict(txn)
399399 else:
400400 # We want to get from all lists, so we need to aggregate the results
401401
421421
422422 results = {}
423423 # A room is visible if its visible on any list.
424 for room_id, visibility in txn.fetchall():
424 for room_id, visibility in txn:
425425 results[room_id] = bool(visibility) or results.get(room_id, False)
426426
427427 return results
128128 with self._stream_id_gen.get_next() as stream_ordering:
129129 yield self.runInteraction("locally_reject_invite", f, stream_ordering)
130130
131 @cachedInlineCallbacks(max_entries=100000, iterable=True, cache_context=True)
132 def get_hosts_in_room(self, room_id, cache_context):
133 """Returns the set of all hosts currently in the room
134 """
135 user_ids = yield self.get_users_in_room(
136 room_id, on_invalidate=cache_context.invalidate,
137 )
138 hosts = frozenset(get_domain_from_id(user_id) for user_id in user_ids)
139 defer.returnValue(hosts)
140
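`get_hosts_in_room` derives the set of servers from the joined user ids: a Matrix user id has the form `@localpart:server_name`, so the host is everything after the first colon. A tiny Python 3 sketch of that mapping (synchronous, without the cache machinery):

```python
def get_domain_from_id(user_id):
    """Return the server_name portion of a Matrix user id."""
    return user_id.split(":", 1)[1]

def hosts_in_room(user_ids):
    return frozenset(get_domain_from_id(u) for u in user_ids)

hosts = hosts_in_room(["@a:one.example", "@b:one.example", "@c:two.example"])
```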
131141 @cached(max_entries=500000, iterable=True)
132142 def get_users_in_room(self, room_id):
133143 def f(txn):
134
135 rows = self._get_members_rows_txn(
136 txn,
137 room_id=room_id,
138 membership=Membership.JOIN,
139 )
140
141 return [r["user_id"] for r in rows]
144 sql = (
145 "SELECT m.user_id FROM room_memberships as m"
146 " INNER JOIN current_state_events as c"
147 " ON m.event_id = c.event_id "
148 " AND m.room_id = c.room_id "
149 " AND m.user_id = c.state_key"
150 " WHERE c.type = 'm.room.member' AND c.room_id = ? AND m.membership = ?"
151 )
152
153 txn.execute(sql, (room_id, Membership.JOIN,))
154 return [r[0] for r in txn]
142155 return self.runInteraction("get_users_in_room", f)
143156
144157 @cached()
245258
246259 return results
247260
248 def _get_members_rows_txn(self, txn, room_id, membership=None, user_id=None):
249 where_clause = "c.room_id = ?"
250 where_values = [room_id]
251
252 if membership:
253 where_clause += " AND m.membership = ?"
254 where_values.append(membership)
255
256 if user_id:
257 where_clause += " AND m.user_id = ?"
258 where_values.append(user_id)
259
260 sql = (
261 "SELECT m.* FROM room_memberships as m"
262 " INNER JOIN current_state_events as c"
263 " ON m.event_id = c.event_id "
264 " AND m.room_id = c.room_id "
265 " AND m.user_id = c.state_key"
266 " WHERE c.type = 'm.room.member' AND %(where)s"
267 ) % {
268 "where": where_clause,
269 }
270
271 txn.execute(sql, where_values)
272 rows = self.cursor_to_dict(txn)
273
274 return rows
275
276 @cached(max_entries=500000, iterable=True)
261 @cachedInlineCallbacks(max_entries=500000, iterable=True)
277262 def get_rooms_for_user(self, user_id):
278 return self.get_rooms_for_user_where_membership_is(
263 """Returns a set of room_ids the user is currently joined to
264 """
265 rooms = yield self.get_rooms_for_user_where_membership_is(
279266 user_id, membership_list=[Membership.JOIN],
280267 )
268 defer.returnValue(frozenset(r.room_id for r in rooms))
281269
282270 @cachedInlineCallbacks(max_entries=500000, cache_context=True, iterable=True)
283271 def get_users_who_share_room_with_user(self, user_id, cache_context):
284272 """Returns the set of users who share a room with `user_id`
285273 """
286 rooms = yield self.get_rooms_for_user(
274 room_ids = yield self.get_rooms_for_user(
287275 user_id, on_invalidate=cache_context.invalidate,
288276 )
289277
290278 user_who_share_room = set()
291 for room in rooms:
279 for room_id in room_ids:
292280 user_ids = yield self.get_users_in_room(
293 room.room_id, on_invalidate=cache_context.invalidate,
281 room_id, on_invalidate=cache_context.invalidate,
294282 )
295283 user_who_share_room.update(user_ids)
296284
7171 " WHERE event_id = ?"
7272 )
7373 txn.execute(query, (event_id, ))
74 return {k: v for k, v in txn.fetchall()}
74 return {k: v for k, v in txn}
7575
7676 def _store_event_reference_hashes_txn(self, txn, events):
7777 """Store a hash for a PDU
1313 # limitations under the License.
1414
1515 from ._base import SQLBaseStore
16 from synapse.util.caches.descriptors import cached, cachedList
16 from synapse.util.caches.descriptors import cached, cachedList, cachedInlineCallbacks
1717 from synapse.util.caches import intern_string
1818 from synapse.storage.engines import PostgresEngine
1919
6868 where_clause="type='m.room.member'",
6969 )
7070
71 @cachedInlineCallbacks(max_entries=100000, iterable=True)
72 def get_current_state_ids(self, room_id):
73 rows = yield self._simple_select_list(
74 table="current_state_events",
75 keyvalues={"room_id": room_id},
76 retcols=["event_id", "type", "state_key"],
77 desc="_calculate_state_delta",
78 )
79 defer.returnValue({
80 (r["type"], r["state_key"]): r["event_id"] for r in rows
81 })
82
7183 @defer.inlineCallbacks
7284 def get_state_groups_ids(self, room_id, event_ids):
7385 if not event_ids:
7789 event_ids,
7890 )
7991
80 groups = set(event_to_groups.values())
92 groups = set(event_to_groups.itervalues())
8193 group_to_state = yield self._get_state_for_groups(groups)
8294
8395 defer.returnValue(group_to_state)
95107
96108 state_event_map = yield self.get_events(
97109 [
98 ev_id for group_ids in group_to_ids.values()
99 for ev_id in group_ids.values()
110 ev_id for group_ids in group_to_ids.itervalues()
111 for ev_id in group_ids.itervalues()
100112 ],
101113 get_prev_content=False
102114 )
103115
104116 defer.returnValue({
105117 group: [
106 state_event_map[v] for v in event_id_map.values() if v in state_event_map
118 state_event_map[v] for v in event_id_map.itervalues()
119 if v in state_event_map
107120 ]
108 for group, event_id_map in group_to_ids.items()
121 for group, event_id_map in group_to_ids.iteritems()
109122 })
110123
111124 def _have_persisted_state_group_txn(self, txn, state_group):
123136 continue
124137
125138 if context.current_state_ids is None:
139 # AFAIK, this can never happen
140 logger.error(
141 "Non-outlier event %s had current_state_ids==None",
142 event.event_id)
143 continue
144
145 # if the event was rejected, just give it the same state as its
146 # predecessor.
147 if context.rejected:
148 state_groups[event.event_id] = context.prev_group
126149 continue
127150
128151 state_groups[event.event_id] = context.state_group
167190 "state_key": key[1],
168191 "event_id": state_id,
169192 }
170 for key, state_id in context.delta_ids.items()
193 for key, state_id in context.delta_ids.iteritems()
171194 ],
172195 )
173196 else:
182205 "state_key": key[1],
183206 "event_id": state_id,
184207 }
185 for key, state_id in context.current_state_ids.items()
208 for key, state_id in context.current_state_ids.iteritems()
186209 ],
187210 )
188211
194217 "state_group": state_group_id,
195218 "event_id": event_id,
196219 }
197 for event_id, state_group_id in state_groups.items()
220 for event_id, state_group_id in state_groups.iteritems()
198221 ],
199222 )
200223
318341 args.extend(where_args)
319342
320343 txn.execute(sql % (where_clause,), args)
321 rows = self.cursor_to_dict(txn)
322 for row in rows:
323 key = (row["type"], row["state_key"])
324 results[group][key] = row["event_id"]
344 for row in txn:
345 typ, state_key, event_id = row
346 key = (typ, state_key)
347 results[group][key] = event_id
325348 else:
326349 if types is not None:
327350 where_clause = "AND (%s)" % (
350373 " WHERE state_group = ? %s" % (where_clause,),
351374 args
352375 )
353 rows = txn.fetchall()
354 results[group].update({
355 (typ, state_key): event_id
356 for typ, state_key, event_id in rows
376 results[group].update(
377 ((typ, state_key), event_id)
378 for typ, state_key, event_id in txn
357379 if (typ, state_key) not in results[group]
358 })
380 )
359381
360382 # If the lengths match then we must have all the types,
361383 # so no need to go walk further down the tree.
392414 event_ids,
393415 )
394416
395 groups = set(event_to_groups.values())
417 groups = set(event_to_groups.itervalues())
396418 group_to_state = yield self._get_state_for_groups(groups, types)
397419
398420 state_event_map = yield self.get_events(
399 [ev_id for sd in group_to_state.values() for ev_id in sd.values()],
421 [ev_id for sd in group_to_state.itervalues() for ev_id in sd.itervalues()],
400422 get_prev_content=False
401423 )
402424
403425 event_to_state = {
404426 event_id: {
405427 k: state_event_map[v]
406 for k, v in group_to_state[group].items()
428 for k, v in group_to_state[group].iteritems()
407429 if v in state_event_map
408430 }
409 for event_id, group in event_to_groups.items()
431 for event_id, group in event_to_groups.iteritems()
410432 }
411433
412434 defer.returnValue({event: event_to_state[event] for event in event_ids})
429451 event_ids,
430452 )
431453
432 groups = set(event_to_groups.values())
454 groups = set(event_to_groups.itervalues())
433455 group_to_state = yield self._get_state_for_groups(groups, types)
434456
435457 event_to_state = {
436458 event_id: group_to_state[group]
437 for event_id, group in event_to_groups.items()
459 for event_id, group in event_to_groups.iteritems()
438460 }
439461
440462 defer.returnValue({event: event_to_state[event] for event in event_ids})
473495 state_map = yield self.get_state_ids_for_events([event_id], types)
474496 defer.returnValue(state_map[event_id])
475497
476 @cached(num_args=2, max_entries=10000)
498 @cached(num_args=2, max_entries=100000)
477499 def _get_state_group_for_event(self, room_id, event_id):
478500 return self._simple_select_one_onecol(
479501 table="event_to_state_groups",
546568 got_all = not (missing_types or types is None)
547569
548570 return {
549 k: v for k, v in state_dict_ids.items()
571 k: v for k, v in state_dict_ids.iteritems()
550572 if include(k[0], k[1])
551573 }, missing_types, got_all
552574
605627
606628 # Now we want to update the cache with all the things we fetched
607629 # from the database.
608 for group, group_state_dict in group_to_state_dict.items():
630 for group, group_state_dict in group_to_state_dict.iteritems():
609631 if types:
610632 # We deliberately put key -> None mappings into the cache to
611633 # cache absence of the key, on the assumption that if we've
620642 else:
621643 state_dict = results[group]
622644
623 state_dict.update({
624 (intern_string(k[0]), intern_string(k[1])): v
625 for k, v in group_state_dict.items()
626 })
645 state_dict.update(
646 ((intern_string(k[0]), intern_string(k[1])), v)
647 for k, v in group_state_dict.iteritems()
648 )
627649
628650 self._state_group_cache.update(
629651 cache_seq_num,
634656
635657 # Remove all the entries with None values. The None values were just
636658 # used for bookkeeping in the cache.
637 for group, state_dict in results.items():
659 for group, state_dict in results.iteritems():
638660 results[group] = {
639661 key: event_id
640 for key, event_id in state_dict.items()
662 for key, event_id in state_dict.iteritems()
641663 if event_id
642664 }
643665
726748 # of keys
727749
728750 delta_state = {
729 key: value for key, value in curr_state.items()
751 key: value for key, value in curr_state.iteritems()
730752 if prev_state.get(key, None) != value
731753 }
732754
766788 "state_key": key[1],
767789 "event_id": state_id,
768790 }
769 for key, state_id in delta_state.items()
791 for key, state_id in delta_state.iteritems()
770792 ],
771793 )
772794
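The `delta_state` comprehension above keeps only the `(type, state_key) -> event_id` entries that changed between the previous and current state. The same logic in isolation (note that, like the snippet above, it does not capture keys deleted outright from the current state):

```python
def state_delta(prev_state, curr_state):
    """Return the entries of curr_state that differ from prev_state."""
    return {
        key: value for key, value in curr_state.items()
        if prev_state.get(key, None) != value
    }

prev = {("m.room.member", "@a:hs"): "$ev1", ("m.room.name", ""): "$ev2"}
curr = {("m.room.member", "@a:hs"): "$ev3", ("m.room.name", ""): "$ev2"}

# Only the changed membership entry survives; the unchanged name does not.
delta = state_delta(prev, curr)
```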
828828 updatevalues={"stream_id": stream_id},
829829 desc="update_federation_out_pos",
830830 )
831
832 def has_room_changed_since(self, room_id, stream_id):
833 return self._events_stream_cache.has_entity_changed(room_id, stream_id)
9494 for stream_id, user_id, room_id in tag_ids:
9595 txn.execute(sql, (user_id, room_id))
9696 tags = []
97 for tag, content in txn.fetchall():
97 for tag, content in txn:
9898 tags.append(json.dumps(tag) + ":" + content)
9999 tag_json = "{" + ",".join(tags) + "}"
100100 results.append((stream_id, user_id, room_id, tag_json))
131131 " WHERE user_id = ? AND stream_id > ?"
132132 )
133133 txn.execute(sql, (user_id, stream_id))
134 room_ids = [row[0] for row in txn.fetchall()]
134 room_ids = [row[0] for row in txn]
135135 return room_ids
136136
137137 changed = self._account_data_stream_cache.has_entity_changed(
2929
3030
3131 def _load_current_id(db_conn, table, column, step=1):
32 """
33
34 Args:
35 db_conn (object):
36 table (str):
37 column (str):
38 step (int):
39
40 Returns:
41 int
42 """
3243 cur = db_conn.cursor()
3344 if step == 1:
3445 cur.execute("SELECT MAX(%s) FROM %s" % (column, table,))
130141 def get_current_token(self):
131142 """Returns the maximum stream id such that all stream ids less than or
132143 equal to it have been successfully persisted.
144
145 Returns:
146 int
133147 """
134148 with self._lock:
135149 if self._unfinished_ids:
2525
2626 class DeferredTimedOutError(SynapseError):
2727 def __init__(self):
28 super(SynapseError).__init__(504, "Timed out")
28 super(SynapseError, self).__init__(504, "Timed out")
2929
3030
3131 def unwrapFirstError(failure):
9292 ret_deferred = defer.Deferred()
9393
9494 def timed_out_fn():
95 e = DeferredTimedOutError()
96
9597 try:
96 ret_deferred.errback(DeferredTimedOutError())
98 ret_deferred.errback(e)
9799 except:
98100 pass
99101
113115
114116 ret_deferred.addBoth(cancel)
115117
116 def sucess(res):
118 def success(res):
117119 try:
118120 ret_deferred.callback(res)
119121 except:
127129 except:
128130 pass
129131
130 given_deferred.addCallbacks(callback=sucess, errback=err)
132 given_deferred.addCallbacks(callback=success, errback=err)
131133
132134 timer = self.call_later(time_out, timed_out_fn)
133135
1414 import logging
1515
1616 from synapse.util.async import ObservableDeferred
17 from synapse.util import unwrapFirstError
17 from synapse.util import unwrapFirstError, logcontext
1818 from synapse.util.caches.lrucache import LruCache
1919 from synapse.util.caches.treecache import TreeCache, iterate_tree_cache_entry
20 from synapse.util.logcontext import (
21 PreserveLoggingContext, preserve_context_over_deferred, preserve_context_over_fn
22 )
2320
2421 from . import DEBUG_CACHES, register_cache
2522
188185 self.cache.clear()
189186
190187
191 class CacheDescriptor(object):
188 class _CacheDescriptorBase(object):
189 def __init__(self, orig, num_args, inlineCallbacks, cache_context=False):
190 self.orig = orig
191
192 if inlineCallbacks:
193 self.function_to_call = defer.inlineCallbacks(orig)
194 else:
195 self.function_to_call = orig
196
197 arg_spec = inspect.getargspec(orig)
198 all_args = arg_spec.args
199
200 if "cache_context" in all_args:
201 if not cache_context:
202 raise ValueError(
203 "Cannot have a 'cache_context' arg without setting"
204 " cache_context=True"
205 )
206 elif cache_context:
207 raise ValueError(
208 "Cannot have cache_context=True without having an arg"
209 " named `cache_context`"
210 )
211
212 if num_args is None:
213 num_args = len(all_args) - 1
214 if cache_context:
215 num_args -= 1
216
217 if len(all_args) < num_args + 1:
218 raise Exception(
219 "Not enough explicit positional arguments to key off for %r: "
220 "got %i args, but wanted %i. (@cached cannot key off *args or "
221 "**kwargs)"
222 % (orig.__name__, len(all_args), num_args)
223 )
224
225 self.num_args = num_args
226 self.arg_names = all_args[1:num_args + 1]
227
228 if "cache_context" in self.arg_names:
229 raise Exception(
230 "cache_context arg cannot be included among the cache keys"
231 )
232
233 self.add_cache_context = cache_context
234
235
236 class CacheDescriptor(_CacheDescriptorBase):
192237 """ A method decorator that applies a memoizing cache around the function.
193238
194239 This caches deferreds, rather than the results themselves. Deferreds that
216261 r2 = yield self.bar2(key, on_invalidate=cache_context.invalidate)
217262 defer.returnValue(r1 + r2)
218263
264 Args:
265 num_args (int): number of positional arguments (excluding ``self`` and
266 ``cache_context``) to use as cache keys. Defaults to all named
267 args of the function.
219268 """
220 def __init__(self, orig, max_entries=1000, num_args=1, tree=False,
269 def __init__(self, orig, max_entries=1000, num_args=None, tree=False,
221270 inlineCallbacks=False, cache_context=False, iterable=False):
271
272 super(CacheDescriptor, self).__init__(
273 orig, num_args=num_args, inlineCallbacks=inlineCallbacks,
274 cache_context=cache_context)
275
222276 max_entries = int(max_entries * CACHE_SIZE_FACTOR)
223277
224 self.orig = orig
225
226 if inlineCallbacks:
227 self.function_to_call = defer.inlineCallbacks(orig)
228 else:
229 self.function_to_call = orig
230
231278 self.max_entries = max_entries
232 self.num_args = num_args
233279 self.tree = tree
234
235280 self.iterable = iterable
236
237 all_args = inspect.getargspec(orig)
238 self.arg_names = all_args.args[1:num_args + 1]
239
240 if "cache_context" in all_args.args:
241 if not cache_context:
242 raise ValueError(
243 "Cannot have a 'cache_context' arg without setting"
244 " cache_context=True"
245 )
246 try:
247 self.arg_names.remove("cache_context")
248 except ValueError:
249 pass
250 elif cache_context:
251 raise ValueError(
252 "Cannot have cache_context=True without having an arg"
253 " named `cache_context`"
254 )
255
256 self.add_cache_context = cache_context
257
258 if len(self.arg_names) < self.num_args:
259 raise Exception(
260 "Not enough explicit positional arguments to key off of for %r."
261 " (@cached cannot key off of *args or **kwargs)"
262 % (orig.__name__,)
263 )
264281
265282 def __get__(self, obj, objtype=None):
266283 cache = Cache(
307324 defer.returnValue(cached_result)
308325 observer.addCallback(check_result)
309326
310 return preserve_context_over_deferred(observer)
311327 except KeyError:
312328 ret = defer.maybeDeferred(
313 preserve_context_over_fn,
314 self.function_to_call,
329 logcontext.preserve_fn(self.function_to_call),
315330 obj, *args, **kwargs
316331 )
317332
321336
322337 ret.addErrback(onErr)
323338
324 ret = ObservableDeferred(ret, consumeErrors=True)
325 cache.set(cache_key, ret, callback=invalidate_callback)
326
327 return preserve_context_over_deferred(ret.observe())
339 result_d = ObservableDeferred(ret, consumeErrors=True)
340 cache.set(cache_key, result_d, callback=invalidate_callback)
341 observer = result_d.observe()
342
343 return logcontext.make_deferred_yieldable(observer)
328344
329345 wrapped.invalidate = cache.invalidate
330346 wrapped.invalidate_all = cache.invalidate_all
337353 return wrapped
338354
339355
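Stripped of Twisted deferreds, cache contexts, tree caches and size limits, the core pattern `CacheDescriptor` implements is a memoising decorator keyed on the named positional arguments. A hypothetical, much-simplified sketch of that idea (none of these names are the real Synapse API):

```python
import functools

def simple_cached(func):
    """Memoise func on its positional args (toy version of @cached)."""
    cache = {}

    @functools.wraps(func)
    def wrapped(*args):
        if args in cache:
            return cache[args]
        result = func(*args)
        cache[args] = result
        return result

    # Expose invalidation, as the real descriptor does via wrapped.invalidate.
    wrapped.invalidate = lambda *args: cache.pop(args, None)
    return wrapped

calls = []

@simple_cached
def get_users_in_room(room_id):
    calls.append(room_id)          # track how often the real lookup runs
    return {"@a:hs", "@b:hs"}

first = get_users_in_room("!room:hs")
second = get_users_in_room("!room:hs")   # served from cache; no second lookup
```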
340 class CacheListDescriptor(object):
356 class CacheListDescriptor(_CacheDescriptorBase):
341357 """Wraps an existing cache to support bulk fetching of keys.
342358
343359 Given a list of keys it looks in the cache to find any hits, then passes
344 the list of missing keys to the wrapped fucntion.
360 the list of missing keys to the wrapped function.
361
362 Once wrapped, the function returns either a Deferred which resolves to
363 the list of results, or (if all results were cached), just the list of
364 results.
345365 """
346366
347 def __init__(self, orig, cached_method_name, list_name, num_args=1,
367 def __init__(self, orig, cached_method_name, list_name, num_args=None,
348368 inlineCallbacks=False):
349369 """
350370 Args:
351371 orig (function)
352 method_name (str); The name of the chached method.
372 cached_method_name (str): The name of the cached method.
353373 list_name (str): Name of the argument which is the bulk lookup list
354 num_args (int)
374 num_args (int): number of positional arguments (excluding ``self``,
375 but including list_name) to use as cache keys. Defaults to all
376 named args of the function.
355377 inlineCallbacks (bool): Whether orig is a generator that should
356378 be wrapped by defer.inlineCallbacks
357379 """
358 self.orig = orig
359
360 if inlineCallbacks:
361 self.function_to_call = defer.inlineCallbacks(orig)
362 else:
363 self.function_to_call = orig
364
365 self.num_args = num_args
380 super(CacheListDescriptor, self).__init__(
381 orig, num_args=num_args, inlineCallbacks=inlineCallbacks)
382
366383 self.list_name = list_name
367384
368 self.arg_names = inspect.getargspec(orig).args[1:num_args + 1]
369385 self.list_pos = self.arg_names.index(self.list_name)
370
371386 self.cached_method_name = cached_method_name
372387
373388 self.sentinel = object()
374
375 if len(self.arg_names) < self.num_args:
376 raise Exception(
377 "Not enough explicit positional arguments to key off of for %r."
378 " (@cached cannot key off of *args or **kwars)"
379 % (orig.__name__,)
380 )
381389
382390 if self.list_name not in self.arg_names:
383391 raise Exception(
424432 args_to_call[self.list_name] = missing
425433
426434 ret_d = defer.maybeDeferred(
427 preserve_context_over_fn,
428 self.function_to_call,
435 logcontext.preserve_fn(self.function_to_call),
429436 **args_to_call
430437 )
431438
434441 # We need to create deferreds for each arg in the list so that
435442 # we can insert the new deferred into the cache.
436443 for arg in missing:
437 with PreserveLoggingContext():
438 observer = ret_d.observe()
444 observer = ret_d.observe()
439445 observer.addCallback(lambda r, arg: r.get(arg, None), arg)
440446
441447 observer = ObservableDeferred(observer)
462468 results.update(res)
463469 return results
464470
465 return preserve_context_over_deferred(defer.gatherResults(
471 return logcontext.make_deferred_yieldable(defer.gatherResults(
466472 cached_defers.values(),
467473 consumeErrors=True,
468474 ).addCallback(update_results_dict).addErrback(
486492 self.cache.invalidate(self.key)
487493
488494
489 def cached(max_entries=1000, num_args=1, tree=False, cache_context=False,
495 def cached(max_entries=1000, num_args=None, tree=False, cache_context=False,
490496 iterable=False):
491497 return lambda orig: CacheDescriptor(
492498 orig,
498504 )
499505
500506
501 def cachedInlineCallbacks(max_entries=1000, num_args=1, tree=False, cache_context=False,
502 iterable=False):
507 def cachedInlineCallbacks(max_entries=1000, num_args=None, tree=False,
508 cache_context=False, iterable=False):
503509 return lambda orig: CacheDescriptor(
504510 orig,
505511 max_entries=max_entries,
511517 )
512518
513519
514 def cachedList(cached_method_name, list_name, num_args=1, inlineCallbacks=False):
520 def cachedList(cached_method_name, list_name, num_args=None, inlineCallbacks=False):
515521 """Creates a descriptor that wraps a function in a `CacheListDescriptor`.
516522
517523 Used to do batch lookups for an already created cache. A single argument
524530 cache (Cache): The underlying cache to use.
525531 list_name (str): The name of the argument that is the list to use to
526532 do batch lookups in the cache.
527 num_args (int): Number of arguments to use as the key in the cache.
533 num_args (int): Number of arguments to use as the key in the cache
534 (including list_name). Defaults to all named parameters.
528535 inlineCallbacks (bool): Should the function be wrapped in an
529536 `defer.inlineCallbacks`?
530537
4949 def has_entity_changed(self, entity, stream_pos):
5050 """Returns True if the entity may have been updated since stream_pos
5151 """
52 assert type(stream_pos) is int
52 assert type(stream_pos) is int or type(stream_pos) is long
5353
5454 if stream_pos < self._earliest_known_stream_pos:
5555 self.metrics.inc_misses()
1010 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1111 # See the License for the specific language governing permissions and
1212 # limitations under the License.
13
14 """ Thread-local-alike tracking of log contexts within synapse
15
16 This module provides objects and utilities for tracking contexts through
17 synapse code, so that log lines can include a request identifier, and so that
18 CPU and database activity can be accounted for against the request that caused
19 them.
20
21 See doc/log_contexts.rst for details on how this works.
22 """
1323
1424 from twisted.internet import defer
1525
299309 def preserve_context_over_deferred(deferred, context=None):
300310 """Given a deferred wrap it such that any callbacks added later to it will
301311 be invoked with the current context.
312
313 Deprecated: this almost certainly doesn't do what you want, i.e. make
314 the deferred follow the synapse logcontext rules: try
315 ``make_deferred_yieldable`` instead.
302316 """
303317 if context is None:
304318 context = LoggingContext.current_context()
308322
309323
310324 def preserve_fn(f):
311 """Ensures that function is called with correct context and that context is
312 restored after return. Useful for wrapping functions that return a deferred
313 which you don't yield on.
314 """
325 """Wraps a function, to ensure that the current context is restored after
326 return from the function, and that the sentinel context is set once the
327 deferred returned by the function completes.
328
329 Useful for wrapping functions that return a deferred which you don't yield
330 on.
331 """
332 def reset_context(result):
333 LoggingContext.set_current_context(LoggingContext.sentinel)
334 return result
335
336 # XXX: why is this here rather than inside g? surely we want to preserve
337 # the context from the time the function was called, not when it was
338 # wrapped?
315339 current = LoggingContext.current_context()
316340
317341 def g(*args, **kwargs):
318 with PreserveLoggingContext(current):
319 res = f(*args, **kwargs)
320 if isinstance(res, defer.Deferred):
321 return preserve_context_over_deferred(
322 res, context=LoggingContext.sentinel
323 )
324 else:
325 return res
342 res = f(*args, **kwargs)
343 if isinstance(res, defer.Deferred) and not res.called:
344 # The function will have reset the context before returning, so
345 # we need to restore it now.
346 LoggingContext.set_current_context(current)
347
348 # The original context will be restored when the deferred
349 # completes, but there is nothing waiting for it, so it will
350 # get leaked into the reactor or some other function which
351 # wasn't expecting it. We therefore need to reset the context
352 # here.
353 #
354 # (If this feels asymmetric, consider it this way: we are
355 # effectively forking a new thread of execution. We are
356 # probably currently within a ``with LoggingContext()`` block,
357 # which is supposed to have a single entry and exit point. But
358 # by spawning off another deferred, we are effectively
359 # adding a new exit point.)
360 res.addBoth(reset_context)
361 return res
326362 return g
363
364
365 @defer.inlineCallbacks
366 def make_deferred_yieldable(deferred):
367 """Given a deferred, make it follow the Synapse logcontext rules:
368
369 If the deferred has completed (or is not actually a Deferred), essentially
370 does nothing (just returns another completed deferred with the
371 result/failure).
372
373 If the deferred has not yet completed, resets the logcontext before
374 returning a deferred. Then, when the deferred completes, restores the
375 current logcontext before running callbacks/errbacks.
376
377 (This is more-or-less the opposite operation to preserve_fn.)
378 """
379 with PreserveLoggingContext():
380 r = yield deferred
381 defer.returnValue(r)
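The contract described here can be illustrated without Twisted by standing in a module-level "current context" variable for `LoggingContext`: a `preserve_fn`-style wrapper guarantees the caller's context is restored after the wrapped call returns, even if the callee changed it. This is a loose, hypothetical analogy, not the real logcontext machinery:

```python
# Hypothetical stand-in for LoggingContext.current_context().
current_context = "sentinel"

def preserve_context(f):
    """Run f, then restore whatever context the caller had."""
    def g(*args, **kwargs):
        global current_context
        caller_ctx = current_context
        try:
            return f(*args, **kwargs)
        finally:
            # Mirrors preserve_fn's guarantee: the caller's context is
            # restored even if f replaced it (or raised).
            current_context = caller_ctx
    return g

def handler():
    global current_context
    current_context = "request-1"   # the callee sets its own context...

current_context = "main"
preserve_context(handler)()
restored = current_context          # ...but the caller's is restored
```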
327382
328383
329384 # modules to ignore in `logcontext_tracer`
0 # -*- coding: utf-8 -*-
1 # Copyright 2017 Vector Creations Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import phonenumbers
16 from synapse.api.errors import SynapseError
17
18
19 def phone_number_to_msisdn(country, number):
20 """
21 Takes an ISO-3166-1 2 letter country code and phone number and
22 returns an msisdn representing the canonical version of that
23 phone number.
24 Args:
25 country (str): ISO-3166-1 2 letter country code
26 number (str): Phone number in a national or international format
27
28 Returns:
29 (str) The canonical form of the phone number, as an msisdn
30 Raises:
31 SynapseError if the number could not be parsed.
32 """
33 try:
34 phoneNumber = phonenumbers.parse(number, country)
35 except phonenumbers.NumberParseException:
36 raise SynapseError(400, "Unable to parse phone number")
37 return phonenumbers.format_number(
38 phoneNumber, phonenumbers.PhoneNumberFormat.E164
39 )[1:]
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
14 import synapse.util.logcontext
1515 from twisted.internet import defer
1616
1717 from synapse.api.errors import CodeMessageException
3434
3535
3636 @defer.inlineCallbacks
37 def get_retry_limiter(destination, clock, store, **kwargs):
37 def get_retry_limiter(destination, clock, store, ignore_backoff=False,
38 **kwargs):
3839 """For a given destination check if we have previously failed to
3940 send a request there and are waiting before retrying the destination.
4041 If we are not ready to retry the destination, this will raise a
4142 NotRetryingDestination exception. Otherwise, will return a Context Manager
4243 that will mark the destination as down if an exception is thrown (excluding
4344 CodeMessageException with code < 500)
45
46 Args:
47 destination (str): name of homeserver
48 clock (synapse.util.clock): timing source
49 store (synapse.storage.transactions.TransactionStore): datastore
50 ignore_backoff (bool): true to ignore the historical backoff data and
51 try the request anyway. We will still update the next
52 retry_interval on success/failure.
4453
4554 Example usage:
4655
6574
6675 now = int(clock.time_msec())
6776
68 if retry_last_ts + retry_interval > now:
77 if not ignore_backoff and retry_last_ts + retry_interval > now:
6978 raise NotRetryingDestination(
7079 retry_last_ts=retry_last_ts,
7180 retry_interval=retry_interval,
123132
124133 def __exit__(self, exc_type, exc_val, exc_tb):
125134 valid_err_code = False
126 if exc_type is not None and issubclass(exc_type, CodeMessageException):
135 if exc_type is None:
136 valid_err_code = True
137 elif not issubclass(exc_type, Exception):
138 # avoid treating exceptions which don't derive from Exception as
139 # failures; this is mostly so as not to catch defer._DefGen.
140 valid_err_code = True
141 elif issubclass(exc_type, CodeMessageException):
127142 # Some error codes are perfectly fine for some APIs, whereas other
128143 # APIs may expect to never received e.g. a 404. It's important to
129144 # handle 404 as some remote servers will return a 404 when the HS
141156 else:
142157 valid_err_code = False
143158
144 if exc_type is None or valid_err_code:
159 if valid_err_code:
145160 # We connected successfully.
146161 if not self.retry_interval:
147162 return
148163
164 logger.debug("Connection to %s was successful; clearing backoff",
165 self.destination)
149166 retry_last_ts = 0
150167 self.retry_interval = 0
151168 else:
159176 else:
160177 self.retry_interval = self.min_retry_interval
161178
179 logger.debug(
180 "Connection to %s was unsuccessful (%s(%s)); backoff now %i",
181 self.destination, exc_type, exc_val, self.retry_interval
182 )
162183 retry_last_ts = int(self.clock.time_msec())
163184
164185 @defer.inlineCallbacks
172193 "Failed to store set_destination_retry_timings",
173194 )
174195
175 store_retry_timings()
196 # we deliberately do this in the background.
197 synapse.util.logcontext.preserve_fn(store_retry_timings)()
133133 if prev_membership not in MEMBERSHIP_PRIORITY:
134134 prev_membership = "leave"
135135
136 # Always allow the user to see their own leave events, otherwise
137 # they won't see the room disappear if they reject the invite
138 if membership == "leave" and (
139 prev_membership == "join" or prev_membership == "invite"
140 ):
141 return True
142
136143 new_priority = MEMBERSHIP_PRIORITY.index(membership)
137144 old_priority = MEMBERSHIP_PRIORITY.index(prev_membership)
138145 if old_priority < new_priority:
2222
2323 from synapse.api.filtering import Filter
2424 from synapse.events import FrozenEvent
25 from synapse.api.errors import SynapseError
26
27 import jsonschema
2528
2629 user_localpart = "test_user"
2730
5255 self.filtering = hs.get_filtering()
5356
5457 self.datastore = hs.get_datastore()
58
59 def test_errors_on_invalid_filters(self):
60 invalid_filters = [
61 {"boom": {}},
62 {"account_data": "Hello World"},
63 {"event_fields": ["\\foo"]},
64 {"room": {"timeline": {"limit": 0}, "state": {"not_bars": ["*"]}}},
65 {"event_format": "other"},
66 {"room": {"not_rooms": ["#foo:pik-test"]}},
67 {"presence": {"senders": ["@bar;pik.test.com"]}}
68 ]
69 for filter in invalid_filters:
70 with self.assertRaises(SynapseError) as check_filter_error:
71 self.filtering.check_valid_filter(filter)
72 self.assertIsInstance(check_filter_error.exception, SynapseError)
73
74 def test_valid_filters(self):
75 valid_filters = [
76 {
77 "room": {
78 "timeline": {"limit": 20},
79 "state": {"not_types": ["m.room.member"]},
80 "ephemeral": {"limit": 0, "not_types": ["*"]},
81 "include_leave": False,
82 "rooms": ["!dee:pik-test"],
83 "not_rooms": ["!gee:pik-test"],
84 "account_data": {"limit": 0, "types": ["*"]}
85 }
86 },
87 {
88 "room": {
89 "state": {
90 "types": ["m.room.*"],
91 "not_rooms": ["!726s6s6q:example.com"]
92 },
93 "timeline": {
94 "limit": 10,
95 "types": ["m.room.message"],
96 "not_rooms": ["!726s6s6q:example.com"],
97 "not_senders": ["@spam:example.com"]
98 },
99 "ephemeral": {
100 "types": ["m.receipt", "m.typing"],
101 "not_rooms": ["!726s6s6q:example.com"],
102 "not_senders": ["@spam:example.com"]
103 }
104 },
105 "presence": {
106 "types": ["m.presence"],
107 "not_senders": ["@alice:example.com"]
108 },
109 "event_format": "client",
110 "event_fields": ["type", "content", "sender"]
111 }
112 ]
113 for filter in valid_filters:
114 try:
115 self.filtering.check_valid_filter(filter)
116 except jsonschema.ValidationError as e:
117 self.fail(e)
118
119 def test_limits_are_applied(self):
120 # TODO
121 pass
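The real validation added in PR #1783 uses JSONSchema; a toy, hypothetical top-level check conveys why `{"boom": {}}` is rejected while well-formed filters pass (the actual schema also validates nested structure, types and limits):

```python
ALLOWED_TOP_LEVEL = {
    "room", "presence", "account_data", "event_format", "event_fields",
}

def toy_check_valid_filter(user_filter):
    """Reject filters with unknown top-level keys (toy version only)."""
    if not isinstance(user_filter, dict):
        raise ValueError("filter must be a JSON object")
    for key in user_filter:
        if key not in ALLOWED_TOP_LEVEL:
            raise ValueError("unknown filter key: %s" % key)

error = None
try:
    toy_check_valid_filter({"boom": {}})
except ValueError as e:
    error = str(e)
```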
55122
56123 def test_definition_types_works_with_literals(self):
57124 definition = {
9292 "room_alias": "#another:remote",
9393 },
9494 retry_on_dns_fail=False,
95 ignore_backoff=True,
9596 )
9697
9798 @defer.inlineCallbacks
323323 state = UserPresenceState.default(user_id)
324324 state = state.copy_and_replace(
325325 state=PresenceState.ONLINE,
326 last_active_ts=now,
326 last_active_ts=0,
327327 last_user_sync_ts=now - SYNC_ONLINE_TIMEOUT - 1,
328328 )
329329
118118 self.mock_federation.make_query.assert_called_with(
119119 destination="remote",
120120 query_type="profile",
121 args={"user_id": "@alice:remote", "field": "displayname"}
121 args={"user_id": "@alice:remote", "field": "displayname"},
122 ignore_backoff=True,
122123 )
123124
124125 @defer.inlineCallbacks
191191 ),
192192 json_data_callback=ANY,
193193 long_retries=True,
194 backoff_on_404=True,
194195 ),
195196 defer.succeed((200, "OK"))
196197 )
262263 ),
263264 json_data_callback=ANY,
264265 long_retries=True,
266 backoff_on_404=True,
265267 ),
266268 defer.succeed((200, "OK"))
267269 )
6767 code, body = yield get
6868 self.assertEquals(code, 200)
6969 self.assertEquals(body["events"]["field_names"], [
70 "position", "internal", "json", "state_group"
70 "position", "event_id", "room_id", "type", "state_key",
7171 ])
7272
7373 @defer.inlineCallbacks
3232 class FilterTestCase(unittest.TestCase):
3333
3434 USER_ID = "@apple:test"
35 EXAMPLE_FILTER = {"type": ["m.*"]}
36 EXAMPLE_FILTER_JSON = '{"type": ["m.*"]}'
35 EXAMPLE_FILTER = {"room": {"timeline": {"types": ["m.room.message"]}}}
36 EXAMPLE_FILTER_JSON = '{"room": {"timeline": {"types": ["m.room.message"]}}}'
3737 TO_REGISTER = [filter]
3838
3939 @defer.inlineCallbacks
8888 @defer.inlineCallbacks
8989 def test_select_one_1col(self):
9090 self.mock_txn.rowcount = 1
91 self.mock_txn.fetchall.return_value = [("Value",)]
91 self.mock_txn.__iter__ = Mock(return_value=iter([("Value",)]))
9292
9393 value = yield self.datastore._simple_select_one_onecol(
9494 table="tablename",
135135 @defer.inlineCallbacks
136136 def test_select_list(self):
137137 self.mock_txn.rowcount = 3
138 self.mock_txn.fetchall.return_value = ((1,), (2,), (3,))
138 self.mock_txn.__iter__ = Mock(return_value=iter([(1,), (2,), (3,)]))
139139 self.mock_txn.description = (
140140 ("colA", None, None, None, None, None, None),
141141 )
0 # -*- coding: utf-8 -*-
1 # Copyright 2017 Vector Creations Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import signedjson.key
16 from twisted.internet import defer
17
18 import tests.unittest
19 import tests.utils
20
21
22 class KeyStoreTestCase(tests.unittest.TestCase):
23 def __init__(self, *args, **kwargs):
24 super(KeyStoreTestCase, self).__init__(*args, **kwargs)
25 self.store = None # type: synapse.storage.keys.KeyStore
26
27 @defer.inlineCallbacks
28 def setUp(self):
29 hs = yield tests.utils.setup_test_homeserver()
30 self.store = hs.get_datastore()
31
32 @defer.inlineCallbacks
33 def test_get_server_verify_keys(self):
34 key1 = signedjson.key.decode_verify_key_base64(
35 "ed25519", "key1", "fP5l4JzpZPq/zdbBg5xx6lQGAAOM9/3w94cqiJ5jPrw"
36 )
37 key2 = signedjson.key.decode_verify_key_base64(
38 "ed25519", "key2", "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"
39 )
40 yield self.store.store_server_verify_key(
41 "server1", "from_server", 0, key1
42 )
43 yield self.store.store_server_verify_key(
44 "server1", "from_server", 0, key2
45 )
46
47 res = yield self.store.get_server_verify_keys(
48 "server1", ["ed25519:key1", "ed25519:key2", "ed25519:key3"])
49
50 self.assertEqual(len(res.keys()), 2)
51 self.assertEqual(res["ed25519:key1"].version, "key1")
52 self.assertEqual(res["ed25519:key2"].version, "key2")
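The new `KeyStoreTestCase` stores two signing keys for one server and asserts that a fetch for three ids returns only the two that exist. A toy in-memory analogue of that lookup shape (class and method names mirror the test but the implementation is invented for illustration):

```python
class VerifyKeyStore:
    """In-memory sketch: maps (server, "algorithm:version") -> key bytes."""
    def __init__(self):
        self._keys = {}

    def store_server_verify_key(self, server, key_id, key):
        self._keys[(server, key_id)] = key

    def get_server_verify_keys(self, server, key_ids):
        # Only ids actually stored come back, mirroring the assertion
        # above that the unknown "ed25519:key3" is absent from res.
        return {
            key_id: self._keys[(server, key_id)]
            for key_id in key_ids
            if (server, key_id) in self._keys
        }

store = VerifyKeyStore()
store.store_server_verify_key("server1", "ed25519:key1", b"k1")
store.store_server_verify_key("server1", "ed25519:key2", b"k2")
res = store.get_server_verify_keys(
    "server1", ["ed25519:key1", "ed25519:key2", "ed25519:key3"])
print(sorted(res))  # ['ed25519:key1', 'ed25519:key2']
```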
0 # -*- coding: utf-8 -*-
1 # Copyright 2017 Vector Creations Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
0 # -*- coding: utf-8 -*-
1 # Copyright 2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import logging
15
16 import mock
17 from synapse.api.errors import SynapseError
18 from synapse.util import async
19 from synapse.util import logcontext
20 from twisted.internet import defer
21 from synapse.util.caches import descriptors
22 from tests import unittest
23
24 logger = logging.getLogger(__name__)
25
26
27 class DescriptorTestCase(unittest.TestCase):
28 @defer.inlineCallbacks
29 def test_cache(self):
30 class Cls(object):
31 def __init__(self):
32 self.mock = mock.Mock()
33
34 @descriptors.cached()
35 def fn(self, arg1, arg2):
36 return self.mock(arg1, arg2)
37
38 obj = Cls()
39
40 obj.mock.return_value = 'fish'
41 r = yield obj.fn(1, 2)
42 self.assertEqual(r, 'fish')
43 obj.mock.assert_called_once_with(1, 2)
44 obj.mock.reset_mock()
45
46 # a call with different params should call the mock again
47 obj.mock.return_value = 'chips'
48 r = yield obj.fn(1, 3)
49 self.assertEqual(r, 'chips')
50 obj.mock.assert_called_once_with(1, 3)
51 obj.mock.reset_mock()
52
53 # the two values should now be cached
54 r = yield obj.fn(1, 2)
55 self.assertEqual(r, 'fish')
56 r = yield obj.fn(1, 3)
57 self.assertEqual(r, 'chips')
58 obj.mock.assert_not_called()
59
60 @defer.inlineCallbacks
61 def test_cache_num_args(self):
62 """Only the first num_args arguments should matter to the cache"""
63
64 class Cls(object):
65 def __init__(self):
66 self.mock = mock.Mock()
67
68 @descriptors.cached(num_args=1)
69 def fn(self, arg1, arg2):
70 return self.mock(arg1, arg2)
71
72 obj = Cls()
73 obj.mock.return_value = 'fish'
74 r = yield obj.fn(1, 2)
75 self.assertEqual(r, 'fish')
76 obj.mock.assert_called_once_with(1, 2)
77 obj.mock.reset_mock()
78
79 # a call with different params should call the mock again
80 obj.mock.return_value = 'chips'
81 r = yield obj.fn(2, 3)
82 self.assertEqual(r, 'chips')
83 obj.mock.assert_called_once_with(2, 3)
84 obj.mock.reset_mock()
85
86 # the two values should now be cached; we should be able to vary
87 # the second argument and still get the cached result.
88 r = yield obj.fn(1, 4)
89 self.assertEqual(r, 'fish')
90 r = yield obj.fn(2, 5)
91 self.assertEqual(r, 'chips')
92 obj.mock.assert_not_called()
93
94 def test_cache_logcontexts(self):
95 """Check that logcontexts are set and restored correctly when
96 using the cache."""
97
98 complete_lookup = defer.Deferred()
99
100 class Cls(object):
101 @descriptors.cached()
102 def fn(self, arg1):
103 @defer.inlineCallbacks
104 def inner_fn():
105 with logcontext.PreserveLoggingContext():
106 yield complete_lookup
107 defer.returnValue(1)
108
109 return inner_fn()
110
111 @defer.inlineCallbacks
112 def do_lookup():
113 with logcontext.LoggingContext() as c1:
114 c1.name = "c1"
115 r = yield obj.fn(1)
116 self.assertEqual(logcontext.LoggingContext.current_context(),
117 c1)
118 defer.returnValue(r)
119
120 def check_result(r):
121 self.assertEqual(r, 1)
122
123 obj = Cls()
124
125 # set off a deferred which will do a cache lookup
126 d1 = do_lookup()
127 self.assertEqual(logcontext.LoggingContext.current_context(),
128 logcontext.LoggingContext.sentinel)
129 d1.addCallback(check_result)
130
131 # and another
132 d2 = do_lookup()
133 self.assertEqual(logcontext.LoggingContext.current_context(),
134 logcontext.LoggingContext.sentinel)
135 d2.addCallback(check_result)
136
137 # let the lookup complete
138 complete_lookup.callback(None)
139
140 return defer.gatherResults([d1, d2])
141
142 def test_cache_logcontexts_with_exception(self):
143 """Check that the cache sets and restores logcontexts correctly when
144 the lookup function throws an exception"""
145
146 class Cls(object):
147 @descriptors.cached()
148 def fn(self, arg1):
149 @defer.inlineCallbacks
150 def inner_fn():
151 yield async.run_on_reactor()
152 raise SynapseError(400, "blah")
153
154 return inner_fn()
155
156 @defer.inlineCallbacks
157 def do_lookup():
158 with logcontext.LoggingContext() as c1:
159 c1.name = "c1"
160 try:
161 yield obj.fn(1)
162 self.fail("No exception thrown")
163 except SynapseError:
164 pass
165
166 self.assertEqual(logcontext.LoggingContext.current_context(),
167 c1)
168
169 obj = Cls()
170
171 # set off a deferred which will do a cache lookup
172 d1 = do_lookup()
173 self.assertEqual(logcontext.LoggingContext.current_context(),
174 logcontext.LoggingContext.sentinel)
175
176 return d1
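The descriptor tests above depend on `@descriptors.cached(num_args=1)` keying the cache on only the first argument. A rough stdlib-only reimplementation of that keying behaviour, as a sketch; synapse's real descriptor additionally handles Deferreds, logcontexts, and per-instance caches:

```python
import functools

def cached(num_args=None):
    """Memoize a method, keying on at most the first num_args arguments."""
    def decorator(fn):
        cache = {}

        @functools.wraps(fn)
        def wrapper(self, *args):
            # Trailing arguments beyond num_args do not affect the key.
            key = args if num_args is None else args[:num_args]
            if key not in cache:
                cache[key] = fn(self, *args)
            return cache[key]
        return wrapper
    return decorator

class Cls:
    def __init__(self):
        self.calls = 0

    @cached(num_args=1)
    def fn(self, arg1, arg2):
        self.calls += 1
        return arg1 * 10

obj = Cls()
print(obj.fn(1, 2))  # 10 (computed)
print(obj.fn(1, 4))  # 10 (cached: second argument ignored for the key)
print(obj.calls)     # 1
```

Unlike synapse's version, this sketch keeps one cache per decorated function rather than per instance, which is fine for illustration but wrong for real multi-instance use.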
0 # -*- coding: utf-8 -*-
1 # Copyright 2017 Vector Creations Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from synapse import util
15 from twisted.internet import defer
16 from tests import unittest
17
18
19 class ClockTestCase(unittest.TestCase):
20 @defer.inlineCallbacks
21 def test_time_bound_deferred(self):
22 # just a deferred which never resolves
23 slow_deferred = defer.Deferred()
24
25 clock = util.Clock()
26 time_bound = clock.time_bound_deferred(slow_deferred, 0.001)
27
28 try:
29 yield time_bound
30 self.fail("Expected DeferredTimedOutError, but got nothing")
31 except util.DeferredTimedOutError:
32 pass
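`ClockTestCase` verifies that `time_bound_deferred` errbacks with `DeferredTimedOutError` when the wrapped Deferred never resolves. The same contract in modern stdlib terms is `asyncio.wait_for` over a never-completing future; this is an analogy, not synapse code:

```python
import asyncio

async def main():
    # A future that, like the test's slow_deferred, never completes.
    never = asyncio.get_running_loop().create_future()
    try:
        # Bound the wait; the future cannot resolve in time.
        await asyncio.wait_for(never, timeout=0.001)
        return "resolved"
    except asyncio.TimeoutError:
        return "timed out"

result = asyncio.run(main())
print(result)  # timed out
```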
0 import twisted.python.failure
01 from twisted.internet import defer
12 from twisted.internet import reactor
23 from .. import unittest
34
45 from synapse.util.async import sleep
6 from synapse.util import logcontext
57 from synapse.util.logcontext import LoggingContext
68
79
3234 context_one.test_key = "one"
3335 yield sleep(0)
3436 self._check_test_key("one")
37
38 def _test_preserve_fn(self, function):
39 sentinel_context = LoggingContext.current_context()
40
41 callback_completed = [False]
42
43 @defer.inlineCallbacks
44 def cb():
45 context_one.test_key = "one"
46 yield function()
47 self._check_test_key("one")
48
49 callback_completed[0] = True
50
51 with LoggingContext() as context_one:
52 context_one.test_key = "one"
53
54 # fire off function, but don't wait on it.
55 logcontext.preserve_fn(cb)()
56
57 self._check_test_key("one")
58
59 # now wait for the function under test to have run, and check that
60 # the logcontext is left in a sane state.
61 d2 = defer.Deferred()
62
63 def check_logcontext():
64 if not callback_completed[0]:
65 reactor.callLater(0.01, check_logcontext)
66 return
67
68 # make sure that the context was reset before it got thrown back
69 # into the reactor
70 try:
71 self.assertIs(LoggingContext.current_context(),
72 sentinel_context)
73 d2.callback(None)
74 except BaseException:
75 d2.errback(twisted.python.failure.Failure())
76
77 reactor.callLater(0.01, check_logcontext)
78
79 # test is done once d2 finishes
80 return d2
81
82 def test_preserve_fn_with_blocking_fn(self):
83 @defer.inlineCallbacks
84 def blocking_function():
85 yield sleep(0)
86
87 return self._test_preserve_fn(blocking_function)
88
89 def test_preserve_fn_with_non_blocking_fn(self):
90 @defer.inlineCallbacks
91 def nonblocking_function():
92 with logcontext.PreserveLoggingContext():
93 yield defer.succeed(None)
94
95 return self._test_preserve_fn(nonblocking_function)
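The `_test_preserve_fn` helper above fires a background function and then asserts two invariants: the caller's logcontext is still current immediately afterwards, and the sentinel context has been restored by the time control reaches the reactor. The save/restore discipline underneath can be sketched with a plain context manager; this is a deliberate simplification, since synapse's `LoggingContext` also follows Deferred callbacks across the reactor:

```python
class LoggingContext:
    """Minimal sketch: a process-wide 'current context' with a sentinel."""
    sentinel = None   # "no context" marker, as in the tests above
    _current = None

    @classmethod
    def current_context(cls):
        return cls._current

    def __enter__(self):
        # Save whatever was current so __exit__ can restore it.
        self._previous = LoggingContext._current
        LoggingContext._current = self
        return self

    def __exit__(self, *exc):
        # Restore the previous context even when an exception escapes,
        # which is the invariant test_cache_logcontexts_with_exception
        # asserts after SynapseError is raised.
        LoggingContext._current = self._previous

assert LoggingContext.current_context() is LoggingContext.sentinel
with LoggingContext() as c1:
    assert LoggingContext.current_context() is c1
    with LoggingContext() as c2:
        assert LoggingContext.current_context() is c2
    assert LoggingContext.current_context() is c1
print(LoggingContext.current_context())  # None: back to the sentinel
```

What the real tests add on top of this is the asynchronous half: a function scheduled via `preserve_fn` must reset to the sentinel before yielding to the reactor and reinstate its context when its Deferred fires.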