Imported Upstream version 0.20.0
Erik Johnston
0 | Changes in synapse v0.20.0 (2017-04-11) | |
1 | ======================================= | |
2 | ||
3 | Bug fixes: | |
4 | ||
5 | * Fix joining rooms over federation where not all servers in the room saw | |
6 | that the new server had joined (PR #2094) | |
7 | ||
8 | ||
9 | Changes in synapse v0.20.0-rc1 (2017-03-30) | |
10 | =========================================== | |
11 | ||
12 | Features: | |
13 | ||
14 | * Add delete_devices API (PR #1993) | |
15 | * Add phone number registration/login support (PR #1994, #2055) | |
16 | ||
17 | ||
18 | Changes: | |
19 | ||
20 | * Use JSONSchema for validation of filters. Thanks @pik! (PR #1783) | |
21 | * Reread log config on SIGHUP (PR #1982) | |
22 | * Speed up public room list (PR #1989) | |
23 | * Add helpful texts to logger config options (PR #1990) | |
24 | * Minor ``/sync`` performance improvements. (PR #2002, #2013, #2022) | |
25 | * Add some debug to help diagnose weird federation issue (PR #2035) | |
26 | * Correctly limit retries for all federation requests (PR #2050, #2061) | |
27 | * Don't lock table when persisting new one time keys (PR #2053) | |
28 | * Reduce some CPU work on DB threads (PR #2054) | |
29 | * Cache hosts in room (PR #2060) | |
30 | * Batch sending of device list pokes (PR #2063) | |
31 | * Speed up persist event path in certain edge cases (PR #2070) | |
32 | ||
33 | ||
34 | Bug fixes: | |
35 | ||
36 | * Fix bug where current_state_events renamed to current_state_ids (PR #1849) | |
37 | * Fix routing loop when fetching remote media (PR #1992) | |
38 | * Fix current_state_events table to not lie (PR #1996) | |
39 | * Fix CAS login to handle PartialDownloadError (PR #1997) | |
40 | * Fix assertion to stop transaction queue getting wedged (PR #2010) | |
41 | * Fix presence to fallback to last_active_ts if it beats the last sync time. | |
42 | Thanks @Half-Shot! (PR #2014) | |
43 | * Fix bug when federation received a PDU while a room join is in progress (PR | |
44 | #2016) | |
45 | * Fix resetting state on rejected events (PR #2025) | |
46 | * Fix installation issues in readme. Thanks @ricco386 (PR #2037) | |
47 | * Fix caching of remote servers' signature keys (PR #2042) | |
48 | * Fix some leaking log context (PR #2048, #2049, #2057, #2058) | |
49 | * Fix rejection of invites not reaching sync (PR #2056) | |
50 | ||
51 | ||
52 | ||
0 | 53 | Changes in synapse v0.19.3 (2017-03-20) |
1 | 54 | ======================================= |
2 | 55 |
145 | 145 | |
146 | 146 | virtualenv -p python2.7 ~/.synapse |
147 | 147 | source ~/.synapse/bin/activate |
148 | pip install --upgrade pip | |
148 | 149 | pip install --upgrade setuptools |
149 | 150 | pip install https://github.com/matrix-org/synapse/tarball/master |
150 | 151 | |
227 | 228 | New user localpart: erikj |
228 | 229 | Password: |
229 | 230 | Confirm password: |
231 | Make admin [no]: | |
230 | 232 | Success! |
231 | 233 | |
232 | 234 | This process uses a setting ``registration_shared_secret`` in |
807 | 809 | Synapse has a number of external dependencies that are easiest
808 | 810 | to install using pip and a virtualenv:: |
809 | 811 | |
810 | virtualenv env | |
812 | virtualenv -p python2.7 env | |
811 | 813 | source env/bin/activate |
812 | 814 | python synapse/python_dependencies.py | xargs pip install |
813 | 815 | pip install lxml mock |
38 | 38 | synapse: |
39 | 39 | level: INFO |
40 | 40 | |
41 | synapse.storage: | |
41 | synapse.storage.SQL: | |
42 | # beware: increasing this to DEBUG will make synapse log sensitive | |
43 | # information such as access tokens. | |
42 | 44 | level: INFO |
43 | ||
45 | ||
44 | 46 | # example of enabling debugging for a component: |
45 | 47 | # |
46 | 48 | # synapse.federation.transport.server: |
0 | What do I do about "Unexpected logging context" debug log-lines everywhere? | |
1 | ||
2 | <Mjark> The logging context lives in thread local storage | |
3 | <Mjark> Sometimes it gets out of sync with what it should actually be, usually because something scheduled something to run on the reactor without preserving the logging context. | |
4 | <Matthew> what is the impact of it getting out of sync? and how and when should we preserve log context? | |
5 | <Mjark> The impact is that some of the CPU and database metrics will be under-reported, and some log lines will be mis-attributed. | |
6 | <Mjark> It should happen auto-magically in all the APIs that do IO or otherwise defer to the reactor. | |
7 | <Erik> Mjark: the other place is if we branch, e.g. using defer.gatherResults | |
8 | ||
9 | Unanswered: how and when should we preserve log context? | |
0 | Log contexts | |
1 | ============ | |
2 | ||
3 | .. contents:: | |
4 | ||
5 | To help track the processing of individual requests, synapse uses a | |
6 | 'log context' to track which request it is handling at any given moment. This | |
7 | is done via a thread-local variable; a ``logging.Filter`` is then used to fish | |
8 | the information back out of the thread-local variable and add it to each log | |
9 | record. | |
10 | ||
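A minimal stdlib-only sketch of this mechanism (the names here are illustrative, not Synapse's actual API):

```python
import logging
import threading

# Thread-local slot holding the id of the request currently being handled.
_request_context = threading.local()

class RequestContextFilter(logging.Filter):
    """Fishes the request id back out of the thread-local variable and
    attaches it to every log record, so a formatter can use %(request)s."""
    def filter(self, record):
        record.request = getattr(_request_context, "request", None)
        return True

def set_request(request_id):
    """Record which request this thread is currently handling."""
    _request_context.request = request_id
```

A formatter pattern such as ``%(request)s - %(message)s`` can then show which request produced each log line.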
11 | Logcontexts are also used for CPU and database accounting, so that we can track | |
12 | which requests were responsible for high CPU use or database activity. | |
13 | ||
14 | The ``synapse.util.logcontext`` module provides facilities for managing the | |
15 | current log context (as well as providing the ``LoggingContextFilter`` class). | |
16 | ||
17 | Deferreds make the whole thing complicated, so this document describes how it | |
18 | all works, and how to write code which follows the rules. | |
19 | ||
20 | Logcontexts without Deferreds | |
21 | ----------------------------- | |
22 | ||
23 | In the absence of any Deferred voodoo, things are simple enough. As with any | |
24 | code of this nature, the rule is that each function should leave things as it | |
25 | found them: | |
26 | ||
27 | .. code:: python | |
28 | ||
29 | from synapse.util import logcontext # omitted from future snippets | |
30 | ||
31 | def handle_request(request_id): | |
32 | request_context = logcontext.LoggingContext() | |
33 | ||
34 | calling_context = logcontext.LoggingContext.current_context() | |
35 | logcontext.LoggingContext.set_current_context(request_context) | |
36 | try: | |
37 | request_context.request = request_id | |
38 | do_request_handling() | |
39 | logger.debug("finished") | |
40 | finally: | |
41 | logcontext.LoggingContext.set_current_context(calling_context) | |
42 | ||
43 | def do_request_handling(): | |
44 | logger.debug("phew") # this will be logged against request_id | |
45 | ||
46 | ||
47 | LoggingContext implements the context management methods, so the above can be | |
48 | written much more succinctly as: | |
49 | ||
50 | .. code:: python | |
51 | ||
52 | def handle_request(request_id): | |
53 | with logcontext.LoggingContext() as request_context: | |
54 | request_context.request = request_id | |
55 | do_request_handling() | |
56 | logger.debug("finished") | |
57 | ||
58 | def do_request_handling(): | |
59 | logger.debug("phew") | |
60 | ||
61 | ||
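The ``with`` form works because ``LoggingContext`` implements ``__enter__`` and ``__exit__``. A minimal toy equivalent (``ToyLoggingContext`` is illustrative, not Synapse's actual class) looks like:

```python
import threading

# Thread-local slot for the current context (a toy stand-in for
# Synapse's real LoggingContext machinery).
_current = threading.local()

class ToyLoggingContext(object):
    @classmethod
    def current_context(cls):
        return getattr(_current, "ctx", None)

    @classmethod
    def set_current_context(cls, ctx):
        _current.ctx = ctx

    def __enter__(self):
        # save the caller's context and install ourselves
        self._previous = self.current_context()
        self.set_current_context(self)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # restore the caller's context, even if an exception was raised
        self.set_current_context(self._previous)
        return False
```

``__exit__`` runs even when an exception propagates, which is what makes the ``with`` form equivalent to the explicit ``try``/``finally`` version above.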
62 | Using logcontexts with Deferreds | |
63 | -------------------------------- | |
64 | ||
65 | Deferreds — and in particular, ``defer.inlineCallbacks`` — break | |
66 | the linear flow of code so that there is no longer a single entry point where | |
67 | we should set the logcontext and a single exit point where we should remove it. | |
68 | ||
69 | Consider the example above, where ``do_request_handling`` needs to do some | |
70 | blocking operation, and returns a deferred: | |
71 | ||
72 | .. code:: python | |
73 | ||
74 | @defer.inlineCallbacks | |
75 | def handle_request(request_id): | |
76 | with logcontext.LoggingContext() as request_context: | |
77 | request_context.request = request_id | |
78 | yield do_request_handling() | |
79 | logger.debug("finished") | |
80 | ||
81 | ||
82 | In the above flow: | |
83 | ||
84 | * The logcontext is set | |
85 | * ``do_request_handling`` is called, and returns a deferred | |
86 | * ``handle_request`` yields the deferred | |
87 | * The ``inlineCallbacks`` wrapper of ``handle_request`` returns a deferred | |
88 | ||
89 | So we have stopped processing the request (and will probably go on to start | |
90 | processing the next), without clearing the logcontext. | |
91 | ||
92 | To circumvent this problem, synapse code assumes that, wherever you have a | |
93 | deferred, you will want to yield on it. To that end, wherever functions return | |
94 | a deferred, we adopt the following conventions: | |
95 | ||
96 | **Rules for functions returning deferreds:** | |
97 | ||
98 | * If the deferred is already complete, the function returns with the same | |
99 | logcontext it started with. | |
100 | * If the deferred is incomplete, the function clears the logcontext before | |
101 | returning; when the deferred completes, it restores the logcontext before | |
102 | running any callbacks. | |
103 | ||
104 | That sounds complicated, but actually it means a lot of code (including the | |
105 | example above) "just works". There are two cases: | |
106 | ||
107 | * If ``do_request_handling`` returns a completed deferred, then the logcontext | |
108 | will still be in place. In this case, execution will continue immediately | |
109 | after the ``yield``; the "finished" line will be logged against the right | |
110 | context, and the ``with`` block restores the original context before we | |
111 | return to the caller. | |
112 | ||
113 | * If the returned deferred is incomplete, ``do_request_handling`` clears the | |
114 | logcontext before returning. The logcontext is therefore clear when | |
115 | ``handle_request`` yields the deferred. At that point, the ``inlineCallbacks`` | |
116 | wrapper adds a callback to the deferred, and returns another (incomplete) | |
117 | deferred to the caller, and it is safe to begin processing the next request. | |
118 | ||
119 | Once ``do_request_handling``'s deferred completes, it will reinstate the | |
120 | logcontext, before running the callback added by the ``inlineCallbacks`` | |
121 | wrapper. That callback runs the second half of ``handle_request``, so again | |
122 | the "finished" line will be logged against the right | |
123 | context, and the ``with`` block restores the original context. | |
124 | ||
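The incomplete-deferred rule can be sketched in pure Python, with no Twisted dependency (``ToyDeferred``, ``rule_following_operation`` and the context helpers are hypothetical stand-ins, not Synapse's API):

```python
import threading

_current = threading.local()

def get_context():
    return getattr(_current, "ctx", None)

def set_context(ctx):
    _current.ctx = ctx

class ToyDeferred:
    """A minimal deferred: callbacks added before fire() run when it fires."""
    def __init__(self):
        self._callbacks = []
    def add_callback(self, cb):
        self._callbacks.append(cb)
    def fire(self, result):
        for cb in self._callbacks:
            cb(result)

def rule_following_operation():
    """Returns an incomplete ToyDeferred, following the logcontext rules:
    clear the context before returning; restore it before callbacks run."""
    inner = ToyDeferred()
    out = ToyDeferred()
    caller_ctx = get_context()
    def restore_and_forward(result):
        set_context(caller_ctx)   # restore the caller's context first
        out.fire(result)
    inner.add_callback(restore_and_forward)
    # pretend some I/O will eventually fire `inner`; expose it for the demo
    out.fire_inner = inner.fire
    set_context(None)             # clear the context before returning
    return out
```

Callbacks added to the returned deferred therefore run in the caller's original context, even though the context was empty while the operation was pending.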
125 | As an aside, it's worth noting that ``handle_request`` follows our rules - | |
126 | though that only matters if the caller has its own logcontext which it cares | |
127 | about. | |
128 | ||
129 | The following sections describe pitfalls and helpful patterns when implementing | |
130 | these rules. | |
131 | ||
132 | Always yield your deferreds | |
133 | --------------------------- | |
134 | ||
135 | Whenever you get a deferred back from a function, you should ``yield`` on it | |
136 | as soon as possible. (Returning it directly to your caller is ok too, if you're | |
137 | not doing ``inlineCallbacks``.) Do not pass go; do not do any logging; do not | |
138 | call any other functions. | |
139 | ||
140 | .. code:: python | |
141 | ||
142 | @defer.inlineCallbacks | |
143 | def fun(): | |
144 | logger.debug("starting") | |
145 | yield do_some_stuff() # just like this | |
146 | ||
147 | d = more_stuff() | |
148 | result = yield d # also fine, of course | |
149 | ||
150 | defer.returnValue(result) | |
151 | ||
152 | def nonInlineCallbacksFun(): | |
153 | logger.debug("just a wrapper really") | |
154 | return do_some_stuff() # this is ok too - the caller will yield on | |
155 | # it anyway. | |
156 | ||
157 | Provided this pattern is followed all the way back up the call chain to where | |
158 | the logcontext was set, this will make things work out ok: provided | |
159 | ``do_some_stuff`` and ``more_stuff`` follow the rules above, so will | |
160 | ``fun`` (as wrapped by ``inlineCallbacks``) and ``nonInlineCallbacksFun``. | |
161 | ||
162 | It's all too easy to forget to ``yield``: for instance if we forgot that | |
163 | ``do_some_stuff`` returned a deferred, we might plough on regardless. This | |
164 | leads to a mess; it will probably work itself out eventually, but not before | |
165 | a load of stuff has been logged against the wrong context. (Normally, other | |
166 | things will break, more obviously, if you forget to ``yield``, so this tends | |
167 | not to be a major problem in practice.) | |
168 | ||
169 | Of course sometimes you need to do something a bit fancier with your Deferreds | |
170 | - not all code follows the linear A-then-B-then-C pattern. Notes on | |
171 | implementing more complex patterns are in later sections. | |
172 | ||
173 | Where you create a new Deferred, make it follow the rules | |
174 | --------------------------------------------------------- | |
175 | ||
176 | Most of the time, a Deferred comes from another synapse function. Sometimes, | |
177 | though, we need to make up a new Deferred, or we get a Deferred back from | |
178 | external code. We need to make it follow our rules. | |
179 | ||
180 | The easy way to do it is with a combination of ``defer.inlineCallbacks``, and | |
181 | ``logcontext.PreserveLoggingContext``. Suppose we want to implement ``sleep``, | |
182 | which returns a deferred which will run its callbacks after a given number of | |
183 | seconds. That might look like: | |
184 | ||
185 | .. code:: python | |
186 | ||
187 | # not a logcontext-rules-compliant function | |
188 | def get_sleep_deferred(seconds): | |
189 | d = defer.Deferred() | |
190 | reactor.callLater(seconds, d.callback, None) | |
191 | return d | |
192 | ||
193 | That doesn't follow the rules, but we can fix it by wrapping it with | |
194 | ``PreserveLoggingContext`` and ``yield``-ing on it: | |
195 | ||
196 | .. code:: python | |
197 | ||
198 | @defer.inlineCallbacks | |
199 | def sleep(seconds): | |
200 | with PreserveLoggingContext(): | |
201 | yield get_sleep_deferred(seconds) | |
202 | ||
203 | This technique works equally for external functions which return deferreds, | |
204 | or deferreds we have made ourselves. | |
205 | ||
206 | You can also use ``logcontext.make_deferred_yieldable``, which just does the | |
207 | boilerplate for you, so the above could be written: | |
208 | ||
209 | .. code:: python | |
210 | ||
211 | def sleep(seconds): | |
212 | return logcontext.make_deferred_yieldable(get_sleep_deferred(seconds)) | |
213 | ||
214 | ||
215 | Fire-and-forget | |
216 | --------------- | |
217 | ||
218 | Sometimes you want to fire off a chain of execution, but not wait for its | |
219 | result. That might look a bit like this: | |
220 | ||
221 | .. code:: python | |
222 | ||
223 | @defer.inlineCallbacks | |
224 | def do_request_handling(): | |
225 | yield foreground_operation() | |
226 | ||
227 | # *don't* do this | |
228 | background_operation() | |
229 | ||
230 | logger.debug("Request handling complete") | |
231 | ||
232 | @defer.inlineCallbacks | |
233 | def background_operation(): | |
234 | yield first_background_step() | |
235 | logger.debug("Completed first step") | |
236 | yield second_background_step() | |
237 | logger.debug("Completed second step") | |
238 | ||
239 | The above code does a couple of steps in the background after | |
240 | ``do_request_handling`` has finished. The log lines are still logged against | |
241 | the ``request_context`` logcontext, which may or may not be desirable. There | |
242 | are two big problems with the above, however. The first problem is that, if | |
243 | ``background_operation`` returns an incomplete Deferred, it will expect its | |
244 | caller to ``yield`` immediately, so will have cleared the logcontext. In this | |
245 | example, that means that 'Request handling complete' will be logged without any | |
246 | context. | |
247 | ||
248 | The second problem, which is potentially even worse, is that when the Deferred | |
249 | returned by ``background_operation`` completes, it will restore the original | |
250 | logcontext. There is nothing waiting on that Deferred, so the logcontext will | |
251 | leak into the reactor and possibly get attached to some arbitrary future | |
252 | operation. | |
253 | ||
254 | There are two potential solutions to this. | |
255 | ||
256 | One option is to surround the call to ``background_operation`` with a | |
257 | ``PreserveLoggingContext`` call. That will reset the logcontext before | |
258 | starting ``background_operation`` (so the context restored when the deferred | |
259 | completes will be the empty logcontext), and will restore the current | |
260 | logcontext before continuing the foreground process: | |
261 | ||
262 | .. code:: python | |
263 | ||
264 | @defer.inlineCallbacks | |
265 | def do_request_handling(): | |
266 | yield foreground_operation() | |
267 | ||
268 | # start background_operation off in the empty logcontext, to | |
269 | # avoid leaking the current context into the reactor. | |
270 | with PreserveLoggingContext(): | |
271 | background_operation() | |
272 | ||
273 | # this will now be logged against the request context | |
274 | logger.debug("Request handling complete") | |
275 | ||
276 | Obviously that option means that the operations done in | |
277 | ``background_operation`` would not be logged against a logcontext (though | |
278 | that might be fixed by setting a different logcontext via a ``with | |
279 | LoggingContext(...)`` in ``background_operation``). | |
280 | ||
281 | The second option is to use ``logcontext.preserve_fn``, which wraps a function | |
282 | so that it doesn't reset the logcontext even when it returns an incomplete | |
283 | deferred, and adds a callback to the returned deferred to reset the | |
284 | logcontext. In other words, it turns a function that follows the Synapse rules | |
285 | about logcontexts and Deferreds into one which behaves more like an external | |
286 | function — the opposite operation to that described in the previous section. | |
287 | It can be used like this: | |
288 | ||
289 | .. code:: python | |
290 | ||
291 | @defer.inlineCallbacks | |
292 | def do_request_handling(): | |
293 | yield foreground_operation() | |
294 | ||
295 | logcontext.preserve_fn(background_operation)() | |
296 | ||
297 | # this will now be logged against the request context | |
298 | logger.debug("Request handling complete") | |
299 | ||
300 | XXX: I think ``preserve_context_over_fn`` is supposed to do the first option, | |
301 | but the fact that it does ``preserve_context_over_deferred`` on its results | |
302 | means that its use is fraught with difficulty. | |
303 | ||
304 | Passing synapse deferreds into third-party functions | |
305 | ---------------------------------------------------- | |
306 | ||
307 | A typical example of this is where we want to collect together two or more | |
308 | deferreds via ``defer.gatherResults``: | |
309 | ||
310 | .. code:: python | |
311 | ||
312 | d1 = operation1() | |
313 | d2 = operation2() | |
314 | d3 = defer.gatherResults([d1, d2]) | |
315 | ||
316 | This is really a variation of the fire-and-forget problem above, in that we are | |
317 | firing off ``d1`` and ``d2`` without yielding on them. The difference | |
318 | is that we now have third-party code attached to their callbacks. Either | |
319 | technique given in the `Fire-and-forget`_ section will work. | |
320 | ||
321 | Of course, the new Deferred returned by ``gatherResults`` needs to be wrapped | |
322 | in order to make it follow the logcontext rules before we can yield it, as | |
323 | described in `Where you create a new Deferred, make it follow the rules`_. | |
324 | ||
325 | So, option one: reset the logcontext before starting the operations to be | |
326 | gathered: | |
327 | ||
328 | .. code:: python | |
329 | ||
330 | @defer.inlineCallbacks | |
331 | def do_request_handling(): | |
332 | with PreserveLoggingContext(): | |
333 | d1 = operation1() | |
334 | d2 = operation2() | |
335 | result = yield defer.gatherResults([d1, d2]) | |
336 | ||
337 | In this case particularly, though, option two, of using | |
338 | ``logcontext.preserve_fn`` almost certainly makes more sense, so that | |
339 | ``operation1`` and ``operation2`` are both logged against the original | |
340 | logcontext. This looks like: | |
341 | ||
342 | .. code:: python | |
343 | ||
344 | @defer.inlineCallbacks | |
345 | def do_request_handling(): | |
346 | d1 = logcontext.preserve_fn(operation1)() | |
347 | d2 = logcontext.preserve_fn(operation2)() | |
348 | ||
349 | with PreserveLoggingContext(): | |
350 | result = yield defer.gatherResults([d1, d2]) | |
351 | ||
352 | ||
353 | Was all this really necessary? | |
354 | ------------------------------ | |
355 | ||
356 | The conventions used work fine for a linear flow where everything happens in | |
357 | series via ``defer.inlineCallbacks`` and ``yield``, but are certainly tricky to | |
358 | follow for any more exotic flows. It's hard not to wonder if we could have done | |
359 | something else. | |
360 | ||
361 | We're not going to rewrite Synapse now, so the following is entirely of | |
362 | academic interest, but I'd like to record some thoughts on an alternative | |
363 | approach. | |
364 | ||
365 | I briefly prototyped some code following an alternative set of rules. I think | |
366 | it would work, but I certainly didn't get as far as thinking how it would | |
367 | interact with concepts as complicated as the cache descriptors. | |
368 | ||
369 | My alternative rules were: | |
370 | ||
371 | * functions always preserve the logcontext of their caller, whether or not they | |
372 | are returning a Deferred. | |
373 | ||
374 | * Deferreds returned by synapse functions run their callbacks in the same | |
375 | context as the function was originally called in. | |
376 | ||
377 | The main point of this scheme is that everywhere that sets the logcontext is | |
378 | responsible for clearing it before returning control to the reactor. | |
379 | ||
380 | So, for example, if you were the function which started a ``with | |
381 | LoggingContext`` block, you wouldn't ``yield`` within it — instead you'd start | |
382 | off the background process, and then leave the ``with`` block to wait for it: | |
383 | ||
384 | .. code:: python | |
385 | ||
386 | def handle_request(request_id): | |
387 | with logcontext.LoggingContext() as request_context: | |
388 | request_context.request = request_id | |
389 | d = do_request_handling() | |
390 | ||
391 | def cb(r): | |
392 | logger.debug("finished") | |
393 | ||
394 | d.addCallback(cb) | |
395 | return d | |
396 | ||
397 | (in general, mixing ``with LoggingContext`` blocks and | |
398 | ``defer.inlineCallbacks`` in the same function leads to slightly | |
399 | counter-intuitive code, under this scheme). | |
400 | ||
401 | Because we leave the original ``with`` block as soon as the Deferred is | |
402 | returned (as opposed to waiting for it to be resolved, as we do today), the | |
403 | logcontext is cleared before control passes back to the reactor; so if there is | |
404 | some code within ``do_request_handling`` which needs to wait for a Deferred to | |
405 | complete, there is no need for it to worry about clearing the logcontext before | |
406 | doing so: | |
407 | ||
408 | .. code:: python | |
409 | ||
410 | def handle_request(): | |
411 | r = do_some_stuff() | |
412 | r.addCallback(do_some_more_stuff) | |
413 | return r | |
414 | ||
415 | — and provided ``do_some_stuff`` follows the rules of returning a Deferred which | |
416 | runs its callbacks in the original logcontext, all is happy. | |
417 | ||
418 | The business of a Deferred which runs its callbacks in the original logcontext | |
419 | isn't hard to achieve — we have it today, in the shape of | |
420 | ``logcontext._PreservingContextDeferred``: | |
421 | ||
422 | .. code:: python | |
423 | ||
424 | def do_some_stuff(): | |
425 | deferred = do_some_io() | |
426 | pcd = _PreservingContextDeferred(LoggingContext.current_context()) | |
427 | deferred.chainDeferred(pcd) | |
428 | return pcd | |
429 | ||
430 | It turns out that, thanks to the way that Deferreds chain together, we | |
431 | automatically get the property of a context-preserving deferred with | |
432 | ``defer.inlineCallbacks``, provided the final Deferred the function yields | |
433 | on has that property. So we can just write: | |
434 | ||
435 | .. code:: python | |
436 | ||
437 | @defer.inlineCallbacks | |
438 | def handle_request(): | |
439 | yield do_some_stuff() | |
440 | yield do_some_more_stuff() | |
441 | ||
442 | To conclude: I think this scheme would have worked equally well, with less | |
443 | danger of messing it up, and probably made some more esoteric code easier to | |
444 | write. But again — changing the conventions of the entire Synapse codebase is | |
445 | not a sensible option for the marginal improvement offered. |
15 | 15 | """ This is a reference implementation of a Matrix home server. |
16 | 16 | """ |
17 | 17 | |
18 | __version__ = "0.19.3" | |
18 | __version__ = "0.20.0" |
22 | 22 | from synapse.api.constants import EventTypes, Membership, JoinRules |
23 | 23 | from synapse.api.errors import AuthError, Codes |
24 | 24 | from synapse.types import UserID |
25 | from synapse.util.logcontext import preserve_context_over_fn | |
25 | from synapse.util import logcontext | |
26 | 26 | from synapse.util.metrics import Measure |
27 | 27 | |
28 | 28 | logger = logging.getLogger(__name__) |
208 | 208 | default=[""] |
209 | 209 | )[0] |
210 | 210 | if user and access_token and ip_addr: |
211 | preserve_context_over_fn( | |
212 | self.store.insert_client_ip, | |
211 | logcontext.preserve_fn(self.store.insert_client_ip)( | |
213 | 212 | user=user, |
214 | 213 | access_token=access_token, |
215 | 214 | ip=ip_addr, |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2014-2016 OpenMarket Ltd |
2 | # Copyright 2017 Vector Creations Ltd | |
2 | 3 | # |
3 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 5 | # you may not use this file except in compliance with the License. |
43 | 44 | class LoginType(object): |
44 | 45 | PASSWORD = u"m.login.password" |
45 | 46 | EMAIL_IDENTITY = u"m.login.email.identity" |
47 | MSISDN = u"m.login.msisdn" | |
46 | 48 | RECAPTCHA = u"m.login.recaptcha" |
47 | 49 | DUMMY = u"m.login.dummy" |
48 | 50 |
14 | 14 | |
15 | 15 | """Contains exceptions and error codes.""" |
16 | 16 | |
17 | import json | |
17 | 18 | import logging |
18 | 19 | |
19 | 20 | logger = logging.getLogger(__name__) |
49 | 50 | |
50 | 51 | |
51 | 52 | class CodeMessageException(RuntimeError): |
52 | """An exception with integer code and message string attributes.""" | |
53 | ||
53 | """An exception with integer code and message string attributes. | |
54 | ||
55 | Attributes: | |
56 | code (int): HTTP error code | |
57 | msg (str): string describing the error | |
58 | """ | |
54 | 59 | def __init__(self, code, msg): |
55 | 60 | super(CodeMessageException, self).__init__("%d: %s" % (code, msg)) |
56 | 61 | self.code = code |
57 | 62 | self.msg = msg |
58 | self.response_code_message = None | |
59 | 63 | |
60 | 64 | def error_dict(self): |
61 | 65 | return cs_error(self.msg) |
62 | 66 | |
63 | 67 | |
64 | 68 | class SynapseError(CodeMessageException): |
65 | """A base error which can be caught for all synapse events.""" | |
69 | """A base exception type for matrix errors which have an errcode and error | |
70 | message (as well as an HTTP status code). | |
71 | ||
72 | Attributes: | |
73 | errcode (str): Matrix error code e.g 'M_FORBIDDEN' | |
74 | """ | |
66 | 75 | def __init__(self, code, msg, errcode=Codes.UNKNOWN): |
67 | 76 | """Constructs a synapse error. |
68 | 77 | |
69 | 78 | Args: |
70 | 79 | code (int): The integer error code (an HTTP response code) |
71 | 80 | msg (str): The human-readable error message. |
72 | err (str): The error code e.g 'M_FORBIDDEN' | |
81 | errcode (str): The matrix error code e.g 'M_FORBIDDEN' | |
73 | 82 | """ |
74 | 83 | super(SynapseError, self).__init__(code, msg) |
75 | 84 | self.errcode = errcode |
79 | 88 | self.msg, |
80 | 89 | self.errcode, |
81 | 90 | ) |
91 | ||
92 | @classmethod | |
93 | def from_http_response_exception(cls, err): | |
94 | """Make a SynapseError based on an HTTPResponseException | |
95 | ||
96 | This is useful when a proxied request has failed, and we need to | |
97 | decide how to map the failure onto a matrix error to send back to the | |
98 | client. | |
99 | ||
100 | An attempt is made to parse the body of the http response as a matrix | |
101 | error. If that succeeds, the errcode and error message from the body | |
102 | are used as the errcode and error message in the new synapse error. | |
103 | ||
104 | Otherwise, the errcode is set to M_UNKNOWN, and the error message is | |
105 | set to the reason code from the HTTP response. | |
106 | ||
107 | Args: | |
108 | err (HttpResponseException): | |
109 | ||
110 | Returns: | |
111 | SynapseError: | |
112 | """ | |
113 | # try to parse the body as json, to get better errcode/msg, but | |
114 | # default to M_UNKNOWN with the HTTP status as the error text | |
115 | try: | |
116 | j = json.loads(err.response) | |
117 | except ValueError: | |
118 | j = {} | |
119 | errcode = j.get('errcode', Codes.UNKNOWN) | |
120 | errmsg = j.get('error', err.msg) | |
121 | ||
122 | res = SynapseError(err.code, errmsg, errcode) | |
123 | return res | |
82 | 124 | |
83 | 125 | |
84 | 126 | class RegistrationError(SynapseError): |
105 | 147 | |
106 | 148 | class NotFoundError(SynapseError): |
107 | 149 | """An error indicating we can't find the thing you asked for""" |
108 | def __init__(self, *args, **kwargs): | |
109 | if "errcode" not in kwargs: | |
110 | kwargs["errcode"] = Codes.NOT_FOUND | |
150 | def __init__(self, msg="Not found", errcode=Codes.NOT_FOUND): | |
111 | 151 | super(NotFoundError, self).__init__( |
112 | 152 | 404, |
113 | "Not found", | |
114 | **kwargs | |
153 | msg, | |
154 | errcode=errcode | |
115 | 155 | ) |
116 | 156 | |
117 | 157 | |
172 | 212 | errcode=Codes.LIMIT_EXCEEDED): |
173 | 213 | super(LimitExceededError, self).__init__(code, msg, errcode) |
174 | 214 | self.retry_after_ms = retry_after_ms |
175 | self.response_code_message = "Too Many Requests" | |
176 | 215 | |
177 | 216 | def error_dict(self): |
178 | 217 | return cs_error( |
242 | 281 | |
243 | 282 | |
244 | 283 | class HttpResponseException(CodeMessageException): |
284 | """ | |
285 | Represents an HTTP-level failure of an outbound request | |
286 | ||
287 | Attributes: | |
288 | response (str): body of response | |
289 | """ | |
245 | 290 | def __init__(self, code, msg, response): |
291 | """ | |
292 | ||
293 | Args: | |
294 | code (int): HTTP status code | |
295 | msg (str): reason phrase from HTTP response status line | |
296 | response (str): body of response | |
297 | """ | |
298 | super(HttpResponseException, self).__init__(code, msg) | |
246 | 299 | self.response = response |
247 | super(HttpResponseException, self).__init__(code, msg) |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | from synapse.api.errors import SynapseError |
15 | from synapse.storage.presence import UserPresenceState | |
15 | 16 | from synapse.types import UserID, RoomID |
16 | ||
17 | 17 | from twisted.internet import defer |
18 | 18 | |
19 | 19 | import ujson as json |
20 | import jsonschema | |
21 | from jsonschema import FormatChecker | |
22 | ||
23 | FILTER_SCHEMA = { | |
24 | "additionalProperties": False, | |
25 | "type": "object", | |
26 | "properties": { | |
27 | "limit": { | |
28 | "type": "number" | |
29 | }, | |
30 | "senders": { | |
31 | "$ref": "#/definitions/user_id_array" | |
32 | }, | |
33 | "not_senders": { | |
34 | "$ref": "#/definitions/user_id_array" | |
35 | }, | |
36 | # TODO: We don't limit event type values but we probably should... | |
37 | # check types are valid event types | |
38 | "types": { | |
39 | "type": "array", | |
40 | "items": { | |
41 | "type": "string" | |
42 | } | |
43 | }, | |
44 | "not_types": { | |
45 | "type": "array", | |
46 | "items": { | |
47 | "type": "string" | |
48 | } | |
49 | } | |
50 | } | |
51 | } | |
52 | ||
53 | ROOM_FILTER_SCHEMA = { | |
54 | "additionalProperties": False, | |
55 | "type": "object", | |
56 | "properties": { | |
57 | "not_rooms": { | |
58 | "$ref": "#/definitions/room_id_array" | |
59 | }, | |
60 | "rooms": { | |
61 | "$ref": "#/definitions/room_id_array" | |
62 | }, | |
63 | "ephemeral": { | |
64 | "$ref": "#/definitions/room_event_filter" | |
65 | }, | |
66 | "include_leave": { | |
67 | "type": "boolean" | |
68 | }, | |
69 | "state": { | |
70 | "$ref": "#/definitions/room_event_filter" | |
71 | }, | |
72 | "timeline": { | |
73 | "$ref": "#/definitions/room_event_filter" | |
74 | }, | |
75 | "account_data": { | |
76 | "$ref": "#/definitions/room_event_filter" | |
77 | }, | |
78 | } | |
79 | } | |
80 | ||
81 | ROOM_EVENT_FILTER_SCHEMA = { | |
82 | "additionalProperties": False, | |
83 | "type": "object", | |
84 | "properties": { | |
85 | "limit": { | |
86 | "type": "number" | |
87 | }, | |
88 | "senders": { | |
89 | "$ref": "#/definitions/user_id_array" | |
90 | }, | |
91 | "not_senders": { | |
92 | "$ref": "#/definitions/user_id_array" | |
93 | }, | |
94 | "types": { | |
95 | "type": "array", | |
96 | "items": { | |
97 | "type": "string" | |
98 | } | |
99 | }, | |
100 | "not_types": { | |
101 | "type": "array", | |
102 | "items": { | |
103 | "type": "string" | |
104 | } | |
105 | }, | |
106 | "rooms": { | |
107 | "$ref": "#/definitions/room_id_array" | |
108 | }, | |
109 | "not_rooms": { | |
110 | "$ref": "#/definitions/room_id_array" | |
111 | }, | |
112 | "contains_url": { | |
113 | "type": "boolean" | |
114 | } | |
115 | } | |
116 | } | |
117 | ||
118 | USER_ID_ARRAY_SCHEMA = { | |
119 | "type": "array", | |
120 | "items": { | |
121 | "type": "string", | |
122 | "format": "matrix_user_id" | |
123 | } | |
124 | } | |
125 | ||
126 | ROOM_ID_ARRAY_SCHEMA = { | |
127 | "type": "array", | |
128 | "items": { | |
129 | "type": "string", | |
130 | "format": "matrix_room_id" | |
131 | } | |
132 | } | |
133 | ||
134 | USER_FILTER_SCHEMA = { | |
135 | "$schema": "http://json-schema.org/draft-04/schema#", | |
136 | "description": "schema for a Sync filter", | |
137 | "type": "object", | |
138 | "definitions": { | |
139 | "room_id_array": ROOM_ID_ARRAY_SCHEMA, | |
140 | "user_id_array": USER_ID_ARRAY_SCHEMA, | |
141 | "filter": FILTER_SCHEMA, | |
142 | "room_filter": ROOM_FILTER_SCHEMA, | |
143 | "room_event_filter": ROOM_EVENT_FILTER_SCHEMA | |
144 | }, | |
145 | "properties": { | |
146 | "presence": { | |
147 | "$ref": "#/definitions/filter" | |
148 | }, | |
149 | "account_data": { | |
150 | "$ref": "#/definitions/filter" | |
151 | }, | |
152 | "room": { | |
153 | "$ref": "#/definitions/room_filter" | |
154 | }, | |
155 | "event_format": { | |
156 | "type": "string", | |
157 | "enum": ["client", "federation"] | |
158 | }, | |
159 | "event_fields": { | |
160 | "type": "array", | |
161 | "items": { | |
162 | "type": "string", | |
163 | # Don't allow '\\' in event field filters. This makes matching | |
164 | # events a lot easier as we can then use a negative lookbehind | |
165 | # assertion to split '\.' If we allowed \\ then it would | |
166 | # incorrectly split '\\.' See synapse.events.utils.serialize_event | |
167 | "pattern": "^((?!\\\).)*$" | |
168 | } | |
169 | } | |
170 | }, | |
171 | "additionalProperties": False | |
172 | } | |
173 | ||
174 | ||
175 | @FormatChecker.cls_checks('matrix_room_id') | |
176 | def matrix_room_id_validator(room_id_str): | |
177 | return RoomID.from_string(room_id_str) | |
178 | ||
179 | ||
180 | @FormatChecker.cls_checks('matrix_user_id') | |
181 | def matrix_user_id_validator(user_id_str): | |
182 | return UserID.from_string(user_id_str) | |
20 | 183 | |
21 | 184 | |
22 | 185 | class Filtering(object): |
51 | 214 | # NB: Filters are the complete json blobs. "Definitions" are an |
52 | 215 | # individual top-level key e.g. public_user_data. Filters are made of |
53 | 216 | # many definitions. |
54 | ||
55 | top_level_definitions = [ | |
56 | "presence", "account_data" | |
57 | ] | |
58 | ||
59 | room_level_definitions = [ | |
60 | "state", "timeline", "ephemeral", "account_data" | |
61 | ] | |
62 | ||
63 | for key in top_level_definitions: | |
64 | if key in user_filter_json: | |
65 | self._check_definition(user_filter_json[key]) | |
66 | ||
67 | if "room" in user_filter_json: | |
68 | self._check_definition_room_lists(user_filter_json["room"]) | |
69 | for key in room_level_definitions: | |
70 | if key in user_filter_json["room"]: | |
71 | self._check_definition(user_filter_json["room"][key]) | |
72 | ||
73 | if "event_fields" in user_filter_json: | |
74 | if type(user_filter_json["event_fields"]) != list: | |
75 | raise SynapseError(400, "event_fields must be a list of strings") | |
76 | for field in user_filter_json["event_fields"]: | |
77 | if not isinstance(field, basestring): | |
78 | raise SynapseError(400, "Event field must be a string") | |
79 | # Don't allow '\\' in event field filters. This makes matching | |
80 | # events a lot easier as we can then use a negative lookbehind | |
81 | # assertion to split '\.' If we allowed \\ then it would | |
82 | # incorrectly split '\\.' See synapse.events.utils.serialize_event | |
83 | if r'\\' in field: | |
84 | raise SynapseError( | |
85 | 400, r'The escape character \ cannot itself be escaped' | |
86 | ) | |
87 | ||
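The comments above explain why `\\` is rejected: once it is gone, a dot is escaped iff it is directly preceded by a single backslash, so a negative lookbehind suffices to split a field path on its real separators. An illustrative sketch of that splitting (the helper name is hypothetical; the real logic lives in `synapse.events.utils.serialize_event`):

```python
import re


def split_event_field(field):
    """Split 'content.m\\.relates_to' into ['content', 'm.relates_to'].

    Because '\\\\' is rejected by the validation above, '(?<!\\\\)\\.'
    matches exactly the unescaped dots.
    """
    parts = re.split(r"(?<!\\)\.", field)
    # un-escape the literal dots inside each part
    return [p.replace("\\.", ".") for p in parts]
```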
88 | def _check_definition_room_lists(self, definition): | |
89 | """Check that "rooms" and "not_rooms" are lists of room ids if they | |
90 | are present | |
91 | ||
92 | Args: | |
93 | definition(dict): The filter definition | |
94 | Raises: | |
95 | SynapseError: If there was a problem with this definition. | |
96 | """ | |
97 | # check rooms are valid room IDs | |
98 | room_id_keys = ["rooms", "not_rooms"] | |
99 | for key in room_id_keys: | |
100 | if key in definition: | |
101 | if type(definition[key]) != list: | |
102 | raise SynapseError(400, "Expected %s to be a list." % key) | |
103 | for room_id in definition[key]: | |
104 | RoomID.from_string(room_id) | |
105 | ||
106 | def _check_definition(self, definition): | |
107 | """Check if the provided definition is valid. | |
108 | ||
109 | This inspects not only the types but also the values to make sure they | |
110 | make sense. | |
111 | ||
112 | Args: | |
113 | definition(dict): The filter definition | |
114 | Raises: | |
115 | SynapseError: If there was a problem with this definition. | |
116 | """ | |
117 | # NB: Filters are the complete json blobs. "Definitions" are an | |
118 | # individual top-level key e.g. public_user_data. Filters are made of | |
119 | # many definitions. | |
120 | if type(definition) != dict: | |
121 | raise SynapseError( | |
122 | 400, "Expected JSON object, not %s" % (definition,) | |
123 | ) | |
124 | ||
125 | self._check_definition_room_lists(definition) | |
126 | ||
127 | # check senders are valid user IDs | |
128 | user_id_keys = ["senders", "not_senders"] | |
129 | for key in user_id_keys: | |
130 | if key in definition: | |
131 | if type(definition[key]) != list: | |
132 | raise SynapseError(400, "Expected %s to be a list." % key) | |
133 | for user_id in definition[key]: | |
134 | UserID.from_string(user_id) | |
135 | ||
136 | # TODO: We don't limit event type values but we probably should... | |
137 | # check types are valid event types | |
138 | event_keys = ["types", "not_types"] | |
139 | for key in event_keys: | |
140 | if key in definition: | |
141 | if type(definition[key]) != list: | |
142 | raise SynapseError(400, "Expected %s to be a list." % key) | |
143 | for event_type in definition[key]: | |
144 | if not isinstance(event_type, basestring): | |
145 | raise SynapseError(400, "Event type should be a string") | |
217 | try: | |
218 | jsonschema.validate(user_filter_json, USER_FILTER_SCHEMA, | |
219 | format_checker=FormatChecker()) | |
220 | except jsonschema.ValidationError as e: | |
221 | raise SynapseError(400, e.message) | |
146 | 222 | |
147 | 223 | |
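The hand-rolled `_check_definition` machinery above is replaced by a single `jsonschema.validate` call against `USER_FILTER_SCHEMA` with a `FormatChecker`. A minimal sketch of the same pattern, using an instance-level checker and a toy `matrix_user_id` format rather than synapse's global `cls_checks` registrations:

```python
import jsonschema
from jsonschema import FormatChecker

checker = FormatChecker()


@checker.checks("matrix_user_id")
def looks_like_user_id(value):
    # toy check: the real validator parses the full @localpart:domain grammar
    return isinstance(value, str) and value.startswith("@") and ":" in value


SCHEMA = {
    "type": "object",
    "properties": {
        "senders": {
            "type": "array",
            "items": {"type": "string", "format": "matrix_user_id"},
        }
    },
}


def validate_filter(filter_json):
    """Return an error message on failure, None on success (mirroring the
    SynapseError(400, e.message) translation in the diff)."""
    try:
        jsonschema.validate(filter_json, SCHEMA, format_checker=checker)
    except jsonschema.ValidationError as e:
        return e.message
    return None
```

Note that format checks only run when a `format_checker` is passed to `validate`, which is why the diff constructs one explicitly.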
148 | 224 | class FilterCollection(object): |
252 | 328 | Returns: |
253 | 329 | bool: True if the event matches |
254 | 330 | """ |
255 | sender = event.get("sender", None) | |
256 | if not sender: | |
257 | # Presence events have their 'sender' in content.user_id | |
258 | content = event.get("content") | |
259 | # account_data has been allowed to have non-dict content, so check type first | |
260 | if isinstance(content, dict): | |
261 | sender = content.get("user_id") | |
331 | # We usually get the full "events" as dictionaries coming through, | |
332 | # except for presence which actually gets passed around as its own | |
333 | # namedtuple type. | |
334 | if isinstance(event, UserPresenceState): | |
335 | sender = event.user_id | |
336 | room_id = None | |
337 | ev_type = "m.presence" | |
338 | is_url = False | |
339 | else: | |
340 | sender = event.get("sender", None) | |
341 | if not sender: | |
342 | # Presence events had their 'sender' in content.user_id, but are | |
343 | # now handled above. We don't know if anything else uses this | |
344 | # form. TODO: Check this and probably remove it. | |
345 | content = event.get("content") | |
346 | # account_data has been allowed to have non-dict content, so | |
347 | # check type first | |
348 | if isinstance(content, dict): | |
349 | sender = content.get("user_id") | |
350 | ||
351 | room_id = event.get("room_id", None) | |
352 | ev_type = event.get("type", None) | |
353 | is_url = "url" in event.get("content", {}) | |
262 | 354 | |
263 | 355 | return self.check_fields( |
264 | event.get("room_id", None), | |
356 | room_id, | |
265 | 357 | sender, |
266 | event.get("type", None), | |
267 | "url" in event.get("content", {}) | |
358 | ev_type, | |
359 | is_url, | |
268 | 360 | ) |
269 | 361 | |
270 | 362 | def check_fields(self, room_id, sender, event_type, contains_url): |
28 | 28 | from synapse.storage.engines import create_engine |
29 | 29 | from synapse.util.async import sleep |
30 | 30 | from synapse.util.httpresourcetree import create_resource_tree |
31 | from synapse.util.logcontext import LoggingContext | |
31 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
32 | 32 | from synapse.util.manhole import manhole |
33 | 33 | from synapse.util.rlimit import change_resource_limit |
34 | 34 | from synapse.util.versionstring import get_version_string |
156 | 156 | |
157 | 157 | assert config.worker_app == "synapse.app.appservice" |
158 | 158 | |
159 | setup_logging(config.worker_log_config, config.worker_log_file) | |
159 | setup_logging(config, use_worker_options=True) | |
160 | 160 | |
161 | 161 | events.USE_FROZEN_DICTS = config.use_frozen_dicts |
162 | 162 | |
186 | 186 | ps.start_listening(config.worker_listeners) |
187 | 187 | |
188 | 188 | def run(): |
189 | with LoggingContext("run"): | |
189 | # make sure that we run the reactor with the sentinel log context, | |
190 | # otherwise other PreserveLoggingContext instances will get confused | |
191 | # and complain when they see the logcontext arbitrarily swapping | |
192 | # between the sentinel and `run` logcontexts. | |
193 | with PreserveLoggingContext(): | |
190 | 194 | logger.info("Running") |
191 | 195 | change_resource_limit(config.soft_file_limit) |
192 | 196 | if config.gc_thresholds: |
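Each worker's `run()` now enters `PreserveLoggingContext()` (the sentinel context) rather than a named `LoggingContext("run")`, so the reactor never runs under a context that other `PreserveLoggingContext` instances would see arbitrarily swapped in, as the new comment says. A thread-local sketch of the idea — synapse's real classes do considerably more bookkeeping:

```python
import threading

_tls = threading.local()

SENTINEL = "sentinel"  # "no logcontext": what the reactor should run under


def current_context():
    return getattr(_tls, "context", SENTINEL)


class LoggingContext:
    """Named context for request-scoped logging (simplified)."""
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self._previous = current_context()
        _tls.context = self.name
        return self

    def __exit__(self, *exc):
        _tls.context = self._previous


class PreserveLoggingContext:
    """Switch to the sentinel context, restoring the caller's on exit."""
    def __enter__(self):
        self._previous = current_context()
        _tls.context = SENTINEL

    def __exit__(self, *exc):
        _tls.context = self._previous
```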
28 | 28 | from synapse.replication.slave.storage.room import RoomStore |
29 | 29 | from synapse.replication.slave.storage.directory import DirectoryStore |
30 | 30 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore |
31 | from synapse.replication.slave.storage.transactions import TransactionStore | |
31 | 32 | from synapse.rest.client.v1.room import PublicRoomListRestServlet |
32 | 33 | from synapse.server import HomeServer |
33 | 34 | from synapse.storage.client_ips import ClientIpStore |
34 | 35 | from synapse.storage.engines import create_engine |
35 | 36 | from synapse.util.async import sleep |
36 | 37 | from synapse.util.httpresourcetree import create_resource_tree |
37 | from synapse.util.logcontext import LoggingContext | |
38 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
38 | 39 | from synapse.util.manhole import manhole |
39 | 40 | from synapse.util.rlimit import change_resource_limit |
40 | 41 | from synapse.util.versionstring import get_version_string |
62 | 63 | DirectoryStore, |
63 | 64 | SlavedApplicationServiceStore, |
64 | 65 | SlavedRegistrationStore, |
66 | TransactionStore, | |
65 | 67 | BaseSlavedStore, |
66 | 68 | ClientIpStore, # After BaseSlavedStore because the constructor is different |
67 | 69 | ): |
170 | 172 | |
171 | 173 | assert config.worker_app == "synapse.app.client_reader" |
172 | 174 | |
173 | setup_logging(config.worker_log_config, config.worker_log_file) | |
175 | setup_logging(config, use_worker_options=True) | |
174 | 176 | |
175 | 177 | events.USE_FROZEN_DICTS = config.use_frozen_dicts |
176 | 178 | |
192 | 194 | ss.start_listening(config.worker_listeners) |
193 | 195 | |
194 | 196 | def run(): |
195 | with LoggingContext("run"): | |
197 | # make sure that we run the reactor with the sentinel log context, | |
198 | # otherwise other PreserveLoggingContext instances will get confused | |
199 | # and complain when they see the logcontext arbitrarily swapping | |
200 | # between the sentinel and `run` logcontexts. | |
201 | with PreserveLoggingContext(): | |
196 | 202 | logger.info("Running") |
197 | 203 | change_resource_limit(config.soft_file_limit) |
198 | 204 | if config.gc_thresholds: |
30 | 30 | from synapse.storage.engines import create_engine |
31 | 31 | from synapse.util.async import sleep |
32 | 32 | from synapse.util.httpresourcetree import create_resource_tree |
33 | from synapse.util.logcontext import LoggingContext | |
33 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
34 | 34 | from synapse.util.manhole import manhole |
35 | 35 | from synapse.util.rlimit import change_resource_limit |
36 | 36 | from synapse.util.versionstring import get_version_string |
161 | 161 | |
162 | 162 | assert config.worker_app == "synapse.app.federation_reader" |
163 | 163 | |
164 | setup_logging(config.worker_log_config, config.worker_log_file) | |
164 | setup_logging(config, use_worker_options=True) | |
165 | 165 | |
166 | 166 | events.USE_FROZEN_DICTS = config.use_frozen_dicts |
167 | 167 | |
183 | 183 | ss.start_listening(config.worker_listeners) |
184 | 184 | |
185 | 185 | def run(): |
186 | with LoggingContext("run"): | |
186 | # make sure that we run the reactor with the sentinel log context, | |
187 | # otherwise other PreserveLoggingContext instances will get confused | |
188 | # and complain when they see the logcontext arbitrarily swapping | |
189 | # between the sentinel and `run` logcontexts. | |
190 | with PreserveLoggingContext(): | |
187 | 191 | logger.info("Running") |
188 | 192 | change_resource_limit(config.soft_file_limit) |
189 | 193 | if config.gc_thresholds: |
34 | 34 | from synapse.storage.presence import UserPresenceState |
35 | 35 | from synapse.util.async import sleep |
36 | 36 | from synapse.util.httpresourcetree import create_resource_tree |
37 | from synapse.util.logcontext import LoggingContext | |
37 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
38 | 38 | from synapse.util.manhole import manhole |
39 | 39 | from synapse.util.rlimit import change_resource_limit |
40 | 40 | from synapse.util.versionstring import get_version_string |
159 | 159 | |
160 | 160 | assert config.worker_app == "synapse.app.federation_sender" |
161 | 161 | |
162 | setup_logging(config.worker_log_config, config.worker_log_file) | |
162 | setup_logging(config, use_worker_options=True) | |
163 | 163 | |
164 | 164 | events.USE_FROZEN_DICTS = config.use_frozen_dicts |
165 | 165 | |
192 | 192 | ps.start_listening(config.worker_listeners) |
193 | 193 | |
194 | 194 | def run(): |
195 | with LoggingContext("run"): | |
195 | # make sure that we run the reactor with the sentinel log context, | |
196 | # otherwise other PreserveLoggingContext instances will get confused | |
197 | # and complain when they see the logcontext arbitrarily swapping | |
198 | # between the sentinel and `run` logcontexts. | |
199 | with PreserveLoggingContext(): | |
196 | 200 | logger.info("Running") |
197 | 201 | change_resource_limit(config.soft_file_limit) |
198 | 202 | if config.gc_thresholds: |
19 | 19 | import logging |
20 | 20 | import os |
21 | 21 | import sys |
22 | ||
23 | import synapse.config.logger | |
22 | 24 | from synapse.config._base import ConfigError |
23 | 25 | |
24 | 26 | from synapse.python_dependencies import ( |
49 | 51 | ) |
50 | 52 | from synapse.config.homeserver import HomeServerConfig |
51 | 53 | from synapse.crypto import context_factory |
52 | from synapse.util.logcontext import LoggingContext | |
54 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
53 | 55 | from synapse.metrics import register_memory_metrics, get_metrics_for |
54 | 56 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX |
55 | 57 | from synapse.replication.resource import ReplicationResource, REPLICATION_PREFIX |
285 | 287 | # generating config files and shouldn't try to continue. |
286 | 288 | sys.exit(0) |
287 | 289 | |
288 | config.setup_logging() | |
290 | synapse.config.logger.setup_logging(config, use_worker_options=False) | |
289 | 291 | |
290 | 292 | # check any extra requirements we have now we have a config |
291 | 293 | check_requirements(config) |
453 | 455 | def in_thread(): |
454 | 456 | # Uncomment to enable tracing of log context changes. |
455 | 457 | # sys.settrace(logcontext_tracer) |
456 | with LoggingContext("run"): | |
458 | ||
459 | # make sure that we run the reactor with the sentinel log context, | |
460 | # otherwise other PreserveLoggingContext instances will get confused | |
461 | # and complain when they see the logcontext arbitrarily swapping | |
462 | # between the sentinel and `run` logcontexts. | |
463 | with PreserveLoggingContext(): | |
457 | 464 | change_resource_limit(hs.config.soft_file_limit) |
458 | 465 | if hs.config.gc_thresholds: |
459 | 466 | gc.set_threshold(*hs.config.gc_thresholds) |
23 | 23 | from synapse.replication.slave.storage._base import BaseSlavedStore |
24 | 24 | from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore |
25 | 25 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore |
26 | from synapse.replication.slave.storage.transactions import TransactionStore | |
26 | 27 | from synapse.rest.media.v0.content_repository import ContentRepoResource |
27 | 28 | from synapse.rest.media.v1.media_repository import MediaRepositoryResource |
28 | 29 | from synapse.server import HomeServer |
31 | 32 | from synapse.storage.media_repository import MediaRepositoryStore |
32 | 33 | from synapse.util.async import sleep |
33 | 34 | from synapse.util.httpresourcetree import create_resource_tree |
34 | from synapse.util.logcontext import LoggingContext | |
35 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
35 | 36 | from synapse.util.manhole import manhole |
36 | 37 | from synapse.util.rlimit import change_resource_limit |
37 | 38 | from synapse.util.versionstring import get_version_string |
58 | 59 | class MediaRepositorySlavedStore( |
59 | 60 | SlavedApplicationServiceStore, |
60 | 61 | SlavedRegistrationStore, |
62 | TransactionStore, | |
61 | 63 | BaseSlavedStore, |
62 | 64 | MediaRepositoryStore, |
63 | 65 | ClientIpStore, |
167 | 169 | |
168 | 170 | assert config.worker_app == "synapse.app.media_repository" |
169 | 171 | |
170 | setup_logging(config.worker_log_config, config.worker_log_file) | |
172 | setup_logging(config, use_worker_options=True) | |
171 | 173 | |
172 | 174 | events.USE_FROZEN_DICTS = config.use_frozen_dicts |
173 | 175 | |
189 | 191 | ss.start_listening(config.worker_listeners) |
190 | 192 | |
191 | 193 | def run(): |
192 | with LoggingContext("run"): | |
194 | # make sure that we run the reactor with the sentinel log context, | |
195 | # otherwise other PreserveLoggingContext instances will get confused | |
196 | # and complain when they see the logcontext arbitrarily swapping | |
197 | # between the sentinel and `run` logcontexts. | |
198 | with PreserveLoggingContext(): | |
193 | 199 | logger.info("Running") |
194 | 200 | change_resource_limit(config.soft_file_limit) |
195 | 201 | if config.gc_thresholds: |
30 | 30 | from synapse.storage import DataStore |
31 | 31 | from synapse.util.async import sleep |
32 | 32 | from synapse.util.httpresourcetree import create_resource_tree |
33 | from synapse.util.logcontext import LoggingContext, preserve_fn | |
33 | from synapse.util.logcontext import LoggingContext, preserve_fn, \ | |
34 | PreserveLoggingContext | |
34 | 35 | from synapse.util.manhole import manhole |
35 | 36 | from synapse.util.rlimit import change_resource_limit |
36 | 37 | from synapse.util.versionstring import get_version_string |
244 | 245 | |
245 | 246 | assert config.worker_app == "synapse.app.pusher" |
246 | 247 | |
247 | setup_logging(config.worker_log_config, config.worker_log_file) | |
248 | setup_logging(config, use_worker_options=True) | |
248 | 249 | |
249 | 250 | events.USE_FROZEN_DICTS = config.use_frozen_dicts |
250 | 251 | |
274 | 275 | ps.start_listening(config.worker_listeners) |
275 | 276 | |
276 | 277 | def run(): |
277 | with LoggingContext("run"): | |
278 | # make sure that we run the reactor with the sentinel log context, | |
279 | # otherwise other PreserveLoggingContext instances will get confused | |
280 | # and complain when they see the logcontext arbitrarily swapping | |
281 | # between the sentinel and `run` logcontexts. | |
282 | with PreserveLoggingContext(): | |
278 | 283 | logger.info("Running") |
279 | 284 | change_resource_limit(config.soft_file_limit) |
280 | 285 | if config.gc_thresholds: |
19 | 19 | from synapse.config._base import ConfigError |
20 | 20 | from synapse.config.homeserver import HomeServerConfig |
21 | 21 | from synapse.config.logger import setup_logging |
22 | from synapse.events import FrozenEvent | |
23 | 22 | from synapse.handlers.presence import PresenceHandler |
24 | 23 | from synapse.http.site import SynapseSite |
25 | 24 | from synapse.http.server import JsonResource |
47 | 46 | from synapse.storage.roommember import RoomMemberStore |
48 | 47 | from synapse.util.async import sleep |
49 | 48 | from synapse.util.httpresourcetree import create_resource_tree |
50 | from synapse.util.logcontext import LoggingContext, preserve_fn | |
49 | from synapse.util.logcontext import LoggingContext, preserve_fn, \ | |
50 | PreserveLoggingContext | |
51 | 51 | from synapse.util.manhole import manhole |
52 | 52 | from synapse.util.rlimit import change_resource_limit |
53 | 53 | from synapse.util.stringutils import random_string |
398 | 398 | position = row[position_index] |
399 | 399 | user_id = row[user_index] |
400 | 400 | |
401 | rooms = yield store.get_rooms_for_user(user_id) | |
402 | room_ids = [r.room_id for r in rooms] | |
401 | room_ids = yield store.get_rooms_for_user(user_id) | |
403 | 402 | |
404 | 403 | notifier.on_new_event( |
405 | 404 | "device_list_key", position, rooms=room_ids, |
410 | 409 | stream = result.get("events") |
411 | 410 | if stream: |
412 | 411 | max_position = stream["position"] |
412 | ||
413 | event_map = yield store.get_events([row[1] for row in stream["rows"]]) | |
414 | ||
413 | 415 | for row in stream["rows"]: |
414 | 416 | position = row[0] |
415 | internal = json.loads(row[1]) | |
416 | event_json = json.loads(row[2]) | |
417 | event = FrozenEvent(event_json, internal_metadata_dict=internal) | |
417 | event_id = row[1] | |
418 | event = event_map.get(event_id, None) | |
419 | if not event: | |
420 | continue | |
421 | ||
418 | 422 | extra_users = () |
419 | 423 | if event.type == EventTypes.Member: |
420 | 424 | extra_users = (event.state_key,) |
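The synchrotron hunk above stops parsing each event row's JSON individually and instead batch-fetches every event for the stream update through one `store.get_events` call, skipping rows whose event is missing. The shape of that change, sketched with a plain callable standing in for the store:

```python
def stream_events(rows, get_events):
    """Yield (position, event) pairs for a replication stream update.

    ``rows`` are (position, event_id, ...) tuples; ``get_events`` maps a
    list of ids to an {event_id: event} dict in a single call, which is
    cheaper than deserialising each row separately.
    """
    event_map = get_events([row[1] for row in rows])
    for row in rows:
        event = event_map.get(row[1])
        if event is None:
            continue  # unknown event: skip rather than crash the stream
        yield row[0], event
```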
477 | 481 | |
478 | 482 | assert config.worker_app == "synapse.app.synchrotron" |
479 | 483 | |
480 | setup_logging(config.worker_log_config, config.worker_log_file) | |
484 | setup_logging(config, use_worker_options=True) | |
481 | 485 | |
482 | 486 | synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts |
483 | 487 | |
496 | 500 | ss.start_listening(config.worker_listeners) |
497 | 501 | |
498 | 502 | def run(): |
499 | with LoggingContext("run"): | |
503 | # make sure that we run the reactor with the sentinel log context, | |
504 | # otherwise other PreserveLoggingContext instances will get confused | |
505 | # and complain when they see the logcontext arbitrarily swapping | |
506 | # between the sentinel and `run` logcontexts. | |
507 | with PreserveLoggingContext(): | |
500 | 508 | logger.info("Running") |
501 | 509 | change_resource_limit(config.soft_file_limit) |
502 | 510 | if config.gc_thresholds: |
22 | 22 | import subprocess |
23 | 23 | import sys |
24 | 24 | import yaml |
25 | import errno | |
26 | import time | |
25 | 27 | |
26 | 28 | SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"] |
27 | 29 | |
28 | 30 | GREEN = "\x1b[1;32m" |
31 | YELLOW = "\x1b[1;33m" | |
29 | 32 | RED = "\x1b[1;31m" |
30 | 33 | NORMAL = "\x1b[m" |
34 | ||
35 | ||
36 | def pid_running(pid): | |
37 | try: | |
38 | os.kill(pid, 0) | |
39 | return True | |
40 | except OSError, err: | |
41 | if err.errno == errno.EPERM: | |
42 | return True | |
43 | return False | |
31 | 44 | |
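The new `pid_running` uses the classic trick of sending signal 0, which performs the existence and permission checks without delivering any signal; `EPERM` means the process exists but belongs to another user. The diff is Python 2; the same function in Python 3 syntax:

```python
import errno
import os


def pid_running(pid):
    """Return True if a process with this pid exists."""
    try:
        os.kill(pid, 0)  # signal 0: nothing is sent, only checks are done
        return True
    except OSError as err:
        if err.errno == errno.EPERM:
            # Process exists but we lack permission to signal it.
            return True
        # ESRCH ("no such process") or anything else: not running.
        return False
```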
32 | 45 | |
33 | 46 | def write(message, colour=NORMAL, stream=sys.stdout): |
37 | 50 | stream.write(colour + message + NORMAL + "\n") |
38 | 51 | |
39 | 52 | |
53 | def abort(message, colour=RED, stream=sys.stderr): | |
54 | write(message, colour, stream) | |
55 | sys.exit(1) | |
56 | ||
57 | ||
40 | 58 | def start(configfile): |
41 | 59 | write("Starting ...") |
42 | 60 | args = SYNAPSE |
44 | 62 | |
45 | 63 | try: |
46 | 64 | subprocess.check_call(args) |
47 | write("started synapse.app.homeserver(%r)" % (configfile,), colour=GREEN) | |
65 | write("started synapse.app.homeserver(%r)" % | |
66 | (configfile,), colour=GREEN) | |
48 | 67 | except subprocess.CalledProcessError as e: |
49 | 68 | write( |
50 | 69 | "error starting (exit code: %d); see above for logs" % e.returncode, |
75 | 94 | def stop(pidfile, app): |
76 | 95 | if os.path.exists(pidfile): |
77 | 96 | pid = int(open(pidfile).read()) |
78 | os.kill(pid, signal.SIGTERM) | |
79 | write("stopped %s" % (app,), colour=GREEN) | |
97 | try: | |
98 | os.kill(pid, signal.SIGTERM) | |
99 | write("stopped %s" % (app,), colour=GREEN) | |
100 | except OSError, err: | |
101 | if err.errno == errno.ESRCH: | |
102 | write("%s not running" % (app,), colour=YELLOW) | |
103 | elif err.errno == errno.EPERM: | |
104 | abort("Cannot stop %s: Operation not permitted" % (app,)) | |
105 | else: | |
106 | abort("Cannot stop %s: Unknown error" % (app,)) | |
80 | 107 | |
81 | 108 | |
82 | 109 | Worker = collections.namedtuple("Worker", [ |
189 | 216 | if start_stop_synapse: |
190 | 217 | stop(pidfile, "synapse.app.homeserver") |
191 | 218 | |
192 | # TODO: Wait for synapse to actually shutdown before starting it again | |
219 | # Wait for synapse to actually shutdown before starting it again | |
220 | if action == "restart": | |
221 | running_pids = [] | |
222 | if start_stop_synapse and os.path.exists(pidfile): | |
223 | running_pids.append(int(open(pidfile).read())) | |
224 | for worker in workers: | |
225 | if os.path.exists(worker.pidfile): | |
226 | running_pids.append(int(open(worker.pidfile).read())) | |
227 | if len(running_pids) > 0: | |
228 | write("Waiting for process to exit before restarting...") | |
229 | for running_pid in running_pids: | |
230 | while pid_running(running_pid): | |
231 | time.sleep(0.2) | |
193 | 232 | |
194 | 233 | if action == "start" or action == "restart": |
195 | 234 | if start_stop_synapse: |
44 | 44 | maxBytes: 104857600 |
45 | 45 | backupCount: 10 |
46 | 46 | filters: [context] |
47 | level: INFO | |
48 | 47 | console: |
49 | 48 | class: logging.StreamHandler |
50 | 49 | formatter: precise |
55 | 54 | level: INFO |
56 | 55 | |
57 | 56 | synapse.storage.SQL: |
57 | # beware: increasing this to DEBUG will make synapse log sensitive | |
58 | # information such as access tokens. | |
58 | 59 | level: INFO |
59 | 60 | |
60 | 61 | root: |
67 | 68 | |
68 | 69 | def read_config(self, config): |
69 | 70 | self.verbosity = config.get("verbose", 0) |
71 | self.no_redirect_stdio = config.get("no_redirect_stdio", False) | |
70 | 72 | self.log_config = self.abspath(config.get("log_config")) |
71 | 73 | self.log_file = self.abspath(config.get("log_file")) |
72 | 74 | |
76 | 78 | os.path.join(config_dir_path, server_name + ".log.config") |
77 | 79 | ) |
78 | 80 | return """ |
79 | # Logging verbosity level. | |
81 | # Logging verbosity level. Ignored if log_config is specified. | |
80 | 82 | verbose: 0 |
81 | 83 | |
82 | # File to write logging to | |
84 | # File to write logging to. Ignored if log_config is specified. | |
83 | 85 | log_file: "%(log_file)s" |
84 | 86 | |
85 | 87 | # A yaml python logging config file |
89 | 91 | def read_arguments(self, args): |
90 | 92 | if args.verbose is not None: |
91 | 93 | self.verbosity = args.verbose |
94 | if args.no_redirect_stdio is not None: | |
95 | self.no_redirect_stdio = args.no_redirect_stdio | |
92 | 96 | if args.log_config is not None: |
93 | 97 | self.log_config = args.log_config |
94 | 98 | if args.log_file is not None: |
98 | 102 | logging_group = parser.add_argument_group("logging") |
99 | 103 | logging_group.add_argument( |
100 | 104 | '-v', '--verbose', dest="verbose", action='count', |
101 | help="The verbosity level." | |
105 | help="The verbosity level. Specify multiple times to increase " | |
106 | "verbosity. (Ignored if --log-config is specified.)" | |
102 | 107 | ) |
103 | 108 | logging_group.add_argument( |
104 | 109 | '-f', '--log-file', dest="log_file", |
105 | help="File to log to." | |
110 | help="File to log to. (Ignored if --log-config is specified.)" | |
106 | 111 | ) |
107 | 112 | logging_group.add_argument( |
108 | 113 | '--log-config', dest="log_config", default=None, |
109 | 114 | help="Python logging config file" |
115 | ) | |
116 | logging_group.add_argument( | |
117 | '-n', '--no-redirect-stdio', | |
118 | action='store_true', default=None, | |
119 | help="Do not redirect stdout/stderr to the log" | |
110 | 120 | ) |
111 | 121 | |
112 | 122 | def generate_files(self, config): |
117 | 127 | DEFAULT_LOG_CONFIG.substitute(log_file=config["log_file"]) |
118 | 128 | ) |
119 | 129 | |
120 | def setup_logging(self): | |
121 | setup_logging(self.log_config, self.log_file, self.verbosity) | |
122 | ||
123 | ||
124 | def setup_logging(log_config=None, log_file=None, verbosity=None): | |
130 | ||
131 | def setup_logging(config, use_worker_options=False): | |
132 | """ Set up python logging | |
133 | ||
134 | Args: | |
135 | config (LoggingConfig | synapse.config.workers.WorkerConfig): | |
136 | configuration data | |
137 | ||
138 | use_worker_options (bool): True to use 'worker_log_config' and | |
139 | 'worker_log_file' options instead of 'log_config' and 'log_file'. | |
140 | """ | |
141 | log_config = (config.worker_log_config if use_worker_options | |
142 | else config.log_config) | |
143 | log_file = (config.worker_log_file if use_worker_options | |
144 | else config.log_file) | |
145 | ||
125 | 146 | log_format = ( |
126 | 147 | "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s" |
127 | 148 | " - %(message)s" |
130 | 151 | |
131 | 152 | level = logging.INFO |
132 | 153 | level_for_storage = logging.INFO |
133 | if verbosity: | |
154 | if config.verbosity: | |
134 | 155 | level = logging.DEBUG |
135 | if verbosity > 1: | |
156 | if config.verbosity > 1: | |
136 | 157 | level_for_storage = logging.DEBUG |
137 | 158 | |
138 | 159 | # FIXME: we need a logging.WARN for a -q quiet option |
152 | 173 | logger.info("Closing log file due to SIGHUP") |
153 | 174 | handler.doRollover() |
154 | 175 | logger.info("Opened new log file due to SIGHUP") |
155 | ||
156 | # TODO(paul): obviously this is a terrible mechanism for | |
157 | # stealing SIGHUP, because it means no other part of synapse | |
158 | # can use it instead. If we want to catch SIGHUP anywhere | |
159 | # else as well, I'd suggest we find a nicer way to broadcast | |
160 | # it around. | |
161 | if getattr(signal, "SIGHUP"): | |
162 | signal.signal(signal.SIGHUP, sighup) | |
163 | 176 | else: |
164 | 177 | handler = logging.StreamHandler() |
165 | 178 | handler.setFormatter(formatter) |
168 | 181 | |
169 | 182 | logger.addHandler(handler) |
170 | 183 | else: |
171 | with open(log_config, 'r') as f: | |
172 | logging.config.dictConfig(yaml.load(f)) | |
184 | def load_log_config(): | |
185 | with open(log_config, 'r') as f: | |
186 | logging.config.dictConfig(yaml.load(f)) | |
187 | ||
188 | def sighup(signum, stack): | |
189 | # it might be better to use a file watcher or something for this. | |
190 | logging.info("Reloading log config from %s due to SIGHUP", | |
191 | log_config) | |
192 | load_log_config() | |
193 | ||
194 | load_log_config() | |
195 | ||
196 | # TODO(paul): obviously this is a terrible mechanism for | |
197 | # stealing SIGHUP, because it means no other part of synapse | |
198 | # can use it instead. If we want to catch SIGHUP anywhere | |
199 | # else as well, I'd suggest we find a nicer way to broadcast | |
200 | # it around. | |
201 | if getattr(signal, "SIGHUP"): | |
202 | signal.signal(signal.SIGHUP, sighup) | |
173 | 203 | |
174 | 204 | # It's critical to point twisted's internal logging somewhere, otherwise it |
175 | 205 | # stacks up and leaks up to 64K objects;
182 | 212 | # |
183 | 213 | # However this may not be too much of a problem if we are just writing to a file. |
184 | 214 | observer = STDLibLogObserver() |
185 | globalLogBeginner.beginLoggingTo([observer]) | |
215 | globalLogBeginner.beginLoggingTo( | |
216 | [observer], | |
217 | redirectStandardIO=not config.no_redirect_stdio, | |
218 | ) |
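The hunk above replaces the one-shot `dictConfig` call with a `load_log_config` helper plus a SIGHUP handler that re-runs it, so `kill -HUP` reloads the log config without a restart. A minimal sketch of that pattern (the `install_sighup_reload` name and the `ValueError` guard are illustrative, not Synapse's API; Synapse's real handler also logs the config path):

```python
import signal

def install_sighup_reload(reload_fn):
    """Register reload_fn to run on SIGHUP, mirroring the reload hook above.

    Returns True if a handler was installed, False where SIGHUP is
    unavailable (e.g. Windows) or handlers can't be set from here.
    """
    if getattr(signal, "SIGHUP", None) is None:
        return False

    def _handler(signum, stack):
        reload_fn()

    try:
        signal.signal(signal.SIGHUP, _handler)
    except ValueError:
        # signal.signal only works in the main thread
        return False
    return True
```

Note that the diff guards with `getattr(signal, "SIGHUP")`, which would raise `AttributeError` rather than skip on platforms without SIGHUP; the sketch uses the three-argument form instead.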
14 | 14 | |
15 | 15 | from synapse.crypto.keyclient import fetch_server_key |
16 | 16 | from synapse.api.errors import SynapseError, Codes |
17 | from synapse.util.retryutils import get_retry_limiter | |
18 | 17 | from synapse.util import unwrapFirstError |
19 | 18 | from synapse.util.async import ObservableDeferred |
20 | 19 | from synapse.util.logcontext import ( |
95 | 94 | verify_requests = [] |
96 | 95 | |
97 | 96 | for server_name, json_object in server_and_json: |
98 | logger.debug("Verifying for %s", server_name) | |
99 | 97 | |
100 | 98 | key_ids = signature_ids(json_object, server_name) |
101 | 99 | if not key_ids: |
100 | logger.warn("Request from %s: no supported signature keys", | |
101 | server_name) | |
102 | 102 | deferred = defer.fail(SynapseError( |
103 | 103 | 400, |
104 | 104 | "Not signed with a supported algorithm", |
106 | 106 | )) |
107 | 107 | else: |
108 | 108 | deferred = defer.Deferred() |
109 | ||
110 | logger.debug("Verifying for %s with key_ids %s", | |
111 | server_name, key_ids) | |
109 | 112 | |
110 | 113 | verify_request = VerifyKeyRequest( |
111 | 114 | server_name, key_ids, json_object, deferred |
141 | 144 | |
142 | 145 | json_object = verify_request.json_object |
143 | 146 | |
147 | logger.debug("Got key %s %s:%s for server %s, verifying" % ( | |
148 | key_id, verify_key.alg, verify_key.version, server_name, | |
149 | )) | |
144 | 150 | try: |
145 | 151 | verify_signed_json(json_object, server_name, verify_key) |
146 | 152 | except: |
230 | 236 | d.addBoth(rm, server_name) |
231 | 237 | |
232 | 238 | def get_server_verify_keys(self, verify_requests): |
233 | """Takes a dict of KeyGroups and tries to find at least one key for | |
234 | each group. | |
239 | """Tries to find at least one key for each verify request | |
240 | ||
241 | For each verify_request, verify_request.deferred is called back with | |
242 | params (server_name, key_id, VerifyKey) if a key is found, or errbacked | |
243 | with a SynapseError if none of the keys are found. | |
244 | ||
245 | Args: | |
246 | verify_requests (list[VerifyKeyRequest]): list of verify requests | |
235 | 247 | """ |
236 | 248 | |
237 | 249 | # These are functions that produce keys given a list of key ids |
244 | 256 | @defer.inlineCallbacks |
245 | 257 | def do_iterations(): |
246 | 258 | with Measure(self.clock, "get_server_verify_keys"): |
259 | # dict[str, dict[str, VerifyKey]]: results so far. | |
260 | # map server_name -> key_id -> VerifyKey | |
247 | 261 | merged_results = {} |
248 | 262 | |
263 | # dict[str, set(str)]: keys to fetch for each server | |
249 | 264 | missing_keys = {} |
250 | 265 | for verify_request in verify_requests: |
251 | 266 | missing_keys.setdefault(verify_request.server_name, set()).update( |
307 | 322 | |
308 | 323 | @defer.inlineCallbacks |
309 | 324 | def get_keys_from_store(self, server_name_and_key_ids): |
325 | """ | |
326 | ||
327 | Args: | |
328 | server_name_and_key_ids (list[(str, iterable[str])]): | |
329 | list of (server_name, iterable[key_id]) tuples to fetch keys for | |
330 | ||
331 | Returns: | |
332 | Deferred: resolves to dict[str, dict[str, VerifyKey]]: map from | |
333 | server_name -> key_id -> VerifyKey | |
334 | """ | |
310 | 335 | res = yield preserve_context_over_deferred(defer.gatherResults( |
311 | 336 | [ |
312 | 337 | preserve_fn(self.store.get_server_verify_keys)( |
355 | 380 | def get_keys_from_server(self, server_name_and_key_ids): |
356 | 381 | @defer.inlineCallbacks |
357 | 382 | def get_key(server_name, key_ids): |
358 | limiter = yield get_retry_limiter( | |
359 | server_name, | |
360 | self.clock, | |
361 | self.store, | |
362 | ) | |
363 | with limiter: | |
364 | keys = None | |
365 | try: | |
366 | keys = yield self.get_server_verify_key_v2_direct( | |
367 | server_name, key_ids | |
368 | ) | |
369 | except Exception as e: | |
370 | logger.info( | |
371 | "Unable to get key %r for %r directly: %s %s", | |
372 | key_ids, server_name, | |
373 | type(e).__name__, str(e.message), | |
374 | ) | |
375 | ||
376 | if not keys: | |
377 | keys = yield self.get_server_verify_key_v1_direct( | |
378 | server_name, key_ids | |
379 | ) | |
380 | ||
381 | keys = {server_name: keys} | |
383 | keys = None | |
384 | try: | |
385 | keys = yield self.get_server_verify_key_v2_direct( | |
386 | server_name, key_ids | |
387 | ) | |
388 | except Exception as e: | |
389 | logger.info( | |
390 | "Unable to get key %r for %r directly: %s %s", | |
391 | key_ids, server_name, | |
392 | type(e).__name__, str(e.message), | |
393 | ) | |
394 | ||
395 | if not keys: | |
396 | keys = yield self.get_server_verify_key_v1_direct( | |
397 | server_name, key_ids | |
398 | ) | |
399 | ||
400 | keys = {server_name: keys} | |
382 | 401 | |
383 | 402 | defer.returnValue(keys) |
384 | 403 |
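The `get_key` change above drops the retry limiter but keeps the fetch strategy: try the v2 key endpoint, and fall back to v1 if v2 raises or returns nothing. A sketch of that control flow with plain callables standing in for `get_server_verify_key_v2_direct` / `get_server_verify_key_v1_direct`:

```python
def fetch_keys_with_fallback(fetch_v2, fetch_v1):
    """Try the v2 fetcher first; fall back to v1 when it fails or
    returns an empty result (a sketch of the pattern in get_key)."""
    keys = None
    try:
        keys = fetch_v2()
    except Exception:
        # the real code logs the failure at INFO before falling back
        keys = None
    if not keys:
        keys = fetch_v1()
    return keys
```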
14 | 14 | |
15 | 15 | |
16 | 16 | class EventContext(object): |
17 | """ | |
18 | Attributes: | |
19 | current_state_ids (dict[(str, str), str]): | |
20 | The current state map including the current event. | |
21 | (type, state_key) -> event_id | |
22 | ||
23 | prev_state_ids (dict[(str, str), str]): | |
24 | The current state map excluding the current event. | |
25 | (type, state_key) -> event_id | |
26 | ||
27 | state_group (int): state group id | |
28 | rejected (bool|str): A rejection reason if the event was rejected, else | |
29 | False | |
30 | ||
31 | push_actions (list[(str, list[object])]): list of (user_id, actions) | |
32 | tuples | |
33 | ||
34 | prev_group (int): Previously persisted state group. ``None`` for an | |
35 | outlier. | |
36 | delta_ids (dict[(str, str), str]): Delta from ``prev_group``. | |
37 | (type, state_key) -> event_id. ``None`` for an outlier. | |
38 | ||
39 | prev_state_events (?): XXX: is this ever set to anything other than | |
40 | the empty list? | |
41 | """ | |
42 | ||
17 | 43 | __slots__ = [ |
18 | 44 | "current_state_ids", |
19 | 45 | "prev_state_ids", |
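The new `EventContext` docstring describes `current_state_ids`, `prev_state_ids`, and `delta_ids` as maps from `(type, state_key)` to `event_id`. A tiny illustration of those shapes (the event IDs are made up):

```python
# State excluding the event being processed, keyed by (type, state_key).
prev_state_ids = {
    ("m.room.create", ""): "$create:example.com",
    ("m.room.member", "@alice:example.com"): "$alice_join:example.com",
}

# current_state_ids additionally includes the current event, e.g. a join:
current_state_ids = dict(prev_state_ids)
current_state_ids[("m.room.member", "@bob:example.com")] = "$bob_join:example.com"

# delta_ids holds only the entries that changed relative to the previous
# state group:
delta_ids = {
    k: v for k, v in current_state_ids.items() if prev_state_ids.get(k) != v
}
```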
28 | 28 | from synapse.events import FrozenEvent, builder |
29 | 29 | import synapse.metrics |
30 | 30 | |
31 | from synapse.util.retryutils import get_retry_limiter, NotRetryingDestination | |
31 | from synapse.util.retryutils import NotRetryingDestination | |
32 | 32 | |
33 | 33 | import copy |
34 | 34 | import itertools |
87 | 87 | |
88 | 88 | @log_function |
89 | 89 | def make_query(self, destination, query_type, args, |
90 | retry_on_dns_fail=False): | |
90 | retry_on_dns_fail=False, ignore_backoff=False): | |
91 | 91 | """Sends a federation Query to a remote homeserver of the given type |
92 | 92 | and arguments. |
93 | 93 | |
97 | 97 | handler name used in register_query_handler(). |
98 | 98 | args (dict): Mapping of strings to strings containing the details |
99 | 99 | of the query request. |
100 | ignore_backoff (bool): true to ignore the historical backoff data | |
101 | and try the request anyway. | |
100 | 102 | |
101 | 103 | Returns: |
102 | 104 | a Deferred which will eventually yield a JSON object from the |
105 | 107 | sent_queries_counter.inc(query_type) |
106 | 108 | |
107 | 109 | return self.transport_layer.make_query( |
108 | destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail | |
110 | destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail, | |
111 | ignore_backoff=ignore_backoff, | |
109 | 112 | ) |
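`ignore_backoff=True` bypasses the historical retry/backoff data for a destination and attempts the request regardless. A sketch of the decision it short-circuits, using the millisecond `retry_last_ts`/`retry_interval` convention from `retryutils` (the function name and signature here are illustrative):

```python
def should_attempt(destination, retry_last_ts, retry_interval, now_ms,
                   ignore_backoff=False):
    """Return True if we would try a request to destination now.

    Without ignore_backoff, a destination is skipped until its backoff
    window (retry_last_ts + retry_interval, both in ms) has elapsed.
    """
    if ignore_backoff:
        return True
    return now_ms >= retry_last_ts + retry_interval
```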
110 | 113 | |
111 | 114 | @log_function |
233 | 236 | continue |
234 | 237 | |
235 | 238 | try: |
236 | limiter = yield get_retry_limiter( | |
237 | destination, | |
238 | self._clock, | |
239 | self.store, | |
240 | ) | |
241 | ||
242 | with limiter: | |
243 | transaction_data = yield self.transport_layer.get_event( | |
244 | destination, event_id, timeout=timeout, | |
245 | ) | |
246 | ||
247 | logger.debug("transaction_data %r", transaction_data) | |
248 | ||
249 | pdu_list = [ | |
250 | self.event_from_pdu_json(p, outlier=outlier) | |
251 | for p in transaction_data["pdus"] | |
252 | ] | |
253 | ||
254 | if pdu_list and pdu_list[0]: | |
255 | pdu = pdu_list[0] | |
256 | ||
257 | # Check signatures are correct. | |
258 | signed_pdu = yield self._check_sigs_and_hashes([pdu])[0] | |
259 | ||
260 | break | |
239 | transaction_data = yield self.transport_layer.get_event( | |
240 | destination, event_id, timeout=timeout, | |
241 | ) | |
242 | ||
243 | logger.debug("transaction_data %r", transaction_data) | |
244 | ||
245 | pdu_list = [ | |
246 | self.event_from_pdu_json(p, outlier=outlier) | |
247 | for p in transaction_data["pdus"] | |
248 | ] | |
249 | ||
250 | if pdu_list and pdu_list[0]: | |
251 | pdu = pdu_list[0] | |
252 | ||
253 | # Check signatures are correct. | |
254 | signed_pdu = yield self._check_sigs_and_hashes([pdu])[0] | |
255 | ||
256 | break | |
261 | 257 | |
262 | 258 | pdu_attempts[destination] = now |
263 | 259 |
51 | 51 | |
52 | 52 | self.auth = hs.get_auth() |
53 | 53 | |
54 | self._room_pdu_linearizer = Linearizer("fed_room_pdu") | |
55 | 54 | self._server_linearizer = Linearizer("fed_server") |
56 | 55 | |
57 | 56 | # We cache responses to state queries, as they take a while and often |
146 | 145 | # check that it's actually being sent from a valid destination to |
147 | 146 | # workaround bug #1753 in 0.18.5 and 0.18.6 |
148 | 147 | if transaction.origin != get_domain_from_id(pdu.event_id): |
148 | # We continue to accept join events from any server; this is | |
149 | # necessary for the federation join dance to work correctly. | |
150 | # (When we join over federation, the "helper" server is | |
151 | # responsible for sending out the join event, rather than the | |
152 | # origin. See bug #1893). | |
149 | 153 | if not ( |
150 | 154 | pdu.type == 'm.room.member' and |
151 | 155 | pdu.content and |
152 | pdu.content.get("membership", None) == 'join' and | |
153 | self.hs.is_mine_id(pdu.state_key) | |
156 | pdu.content.get("membership", None) == 'join' | |
154 | 157 | ): |
155 | 158 | logger.info( |
156 | 159 | "Discarding PDU %s from invalid origin %s", |
164 | 167 | ) |
165 | 168 | |
166 | 169 | try: |
167 | yield self._handle_new_pdu(transaction.origin, pdu) | |
170 | yield self._handle_received_pdu(transaction.origin, pdu) | |
168 | 171 | results.append({}) |
169 | 172 | except FederationError as e: |
170 | 173 | self.send_failure(e, transaction.origin) |
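The change above relaxes the bug-#1753 origin check: previously only join events for *our own* users were exempt, now join events from any server pass, since the "helper" server sends out the join event during the federation join dance (bug #1893). The predicate can be sketched as (parameter names are illustrative):

```python
def should_discard_pdu(origin, event_id_domain, pdu_type, content):
    """True if a PDU in a /send/ transaction should be dropped because
    its event_id domain doesn't match the transaction origin.

    Join membership events are exempt: any server may relay them as part
    of the federation join dance.
    """
    if origin == event_id_domain:
        return False
    is_join = (
        pdu_type == "m.room.member"
        and content
        and content.get("membership", None) == "join"
    )
    return not is_join
```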
496 | 499 | ) |
497 | 500 | |
498 | 501 | @defer.inlineCallbacks |
499 | @log_function | |
500 | def _handle_new_pdu(self, origin, pdu, get_missing=True): | |
501 | ||
502 | # We reprocess pdus when we have seen them only as outliers | |
503 | existing = yield self._get_persisted_pdu( | |
504 | origin, pdu.event_id, do_auth=False | |
505 | ) | |
506 | ||
507 | # FIXME: Currently we fetch an event again when we already have it | |
508 | # if it has been marked as an outlier. | |
509 | ||
510 | already_seen = ( | |
511 | existing and ( | |
512 | not existing.internal_metadata.is_outlier() | |
513 | or pdu.internal_metadata.is_outlier() | |
514 | ) | |
515 | ) | |
516 | if already_seen: | |
517 | logger.debug("Already seen pdu %s", pdu.event_id) | |
518 | return | |
519 | ||
502 | def _handle_received_pdu(self, origin, pdu): | |
503 | """ Process a PDU received in a federation /send/ transaction. | |
504 | ||
505 | Args: | |
506 | origin (str): server which sent the pdu | |
507 | pdu (FrozenEvent): received pdu | |
508 | ||
509 | Returns (Deferred): completes with None | |
510 | Raises: FederationError if the signatures / hash do not match | |
511 | """ | |
520 | 512 | # Check signature. |
521 | 513 | try: |
522 | 514 | pdu = yield self._check_sigs_and_hash(pdu) |
528 | 520 | affected=pdu.event_id, |
529 | 521 | ) |
530 | 522 | |
531 | state = None | |
532 | ||
533 | auth_chain = [] | |
534 | ||
535 | have_seen = yield self.store.have_events( | |
536 | [ev for ev, _ in pdu.prev_events] | |
537 | ) | |
538 | ||
539 | fetch_state = False | |
540 | ||
541 | # Get missing pdus if necessary. | |
542 | if not pdu.internal_metadata.is_outlier(): | |
543 | # We only backfill backwards to the min depth. | |
544 | min_depth = yield self.handler.get_min_depth_for_context( | |
545 | pdu.room_id | |
546 | ) | |
547 | ||
548 | logger.debug( | |
549 | "_handle_new_pdu min_depth for %s: %d", | |
550 | pdu.room_id, min_depth | |
551 | ) | |
552 | ||
553 | prevs = {e_id for e_id, _ in pdu.prev_events} | |
554 | seen = set(have_seen.keys()) | |
555 | ||
556 | if min_depth and pdu.depth < min_depth: | |
557 | # This is so that we don't notify the user about this | |
558 | # message, to work around the fact that some events will | |
559 | # reference really really old events we really don't want to | |
560 | # send to the clients. | |
561 | pdu.internal_metadata.outlier = True | |
562 | elif min_depth and pdu.depth > min_depth: | |
563 | if get_missing and prevs - seen: | |
564 | # If we're missing stuff, ensure we only fetch stuff one | |
565 | # at a time. | |
566 | logger.info( | |
567 | "Acquiring lock for room %r to fetch %d missing events: %r...", | |
568 | pdu.room_id, len(prevs - seen), list(prevs - seen)[:5], | |
569 | ) | |
570 | with (yield self._room_pdu_linearizer.queue(pdu.room_id)): | |
571 | logger.info( | |
572 | "Acquired lock for room %r to fetch %d missing events", | |
573 | pdu.room_id, len(prevs - seen), | |
574 | ) | |
575 | ||
576 | # We recalculate seen, since it may have changed. | |
577 | have_seen = yield self.store.have_events(prevs) | |
578 | seen = set(have_seen.keys()) | |
579 | ||
580 | if prevs - seen: | |
581 | latest = yield self.store.get_latest_event_ids_in_room( | |
582 | pdu.room_id | |
583 | ) | |
584 | ||
585 | # We add the prev events that we have seen to the latest | |
586 | # list to ensure the remote server doesn't give them to us | |
587 | latest = set(latest) | |
588 | latest |= seen | |
589 | ||
590 | logger.info( | |
591 | "Missing %d events for room %r: %r...", | |
592 | len(prevs - seen), pdu.room_id, list(prevs - seen)[:5] | |
593 | ) | |
594 | ||
595 | # XXX: we set timeout to 10s to help workaround | |
596 | # https://github.com/matrix-org/synapse/issues/1733. | |
597 | # The reason is to avoid holding the linearizer lock | |
598 | # whilst processing inbound /send transactions, causing | |
599 | # FDs to stack up and block other inbound transactions | |
600 | # which empirically can currently take up to 30 minutes. | |
601 | # | |
602 | # N.B. this explicitly disables retry attempts. | |
603 | # | |
604 | # N.B. this also increases our chances of falling back to | |
605 | # fetching fresh state for the room if the missing event | |
606 | # can't be found, which slightly reduces our security. | |
607 | # it may also increase our DAG extremity count for the room, | |
608 | # causing additional state resolution? See #1760. | |
609 | # However, fetching state doesn't hold the linearizer lock | |
610 | # apparently. | |
611 | # | |
612 | # see https://github.com/matrix-org/synapse/pull/1744 | |
613 | ||
614 | missing_events = yield self.get_missing_events( | |
615 | origin, | |
616 | pdu.room_id, | |
617 | earliest_events_ids=list(latest), | |
618 | latest_events=[pdu], | |
619 | limit=10, | |
620 | min_depth=min_depth, | |
621 | timeout=10000, | |
622 | ) | |
623 | ||
624 | # We want to sort these by depth so we process them and | |
625 | # tell clients about them in order. | |
626 | missing_events.sort(key=lambda x: x.depth) | |
627 | ||
628 | for e in missing_events: | |
629 | yield self._handle_new_pdu( | |
630 | origin, | |
631 | e, | |
632 | get_missing=False | |
633 | ) | |
634 | ||
635 | have_seen = yield self.store.have_events( | |
636 | [ev for ev, _ in pdu.prev_events] | |
637 | ) | |
638 | ||
639 | prevs = {e_id for e_id, _ in pdu.prev_events} | |
640 | seen = set(have_seen.keys()) | |
641 | if prevs - seen: | |
642 | logger.info( | |
643 | "Still missing %d events for room %r: %r...", | |
644 | len(prevs - seen), pdu.room_id, list(prevs - seen)[:5] | |
645 | ) | |
646 | fetch_state = True | |
647 | ||
648 | if fetch_state: | |
649 | # We need to get the state at this event, since we haven't | |
650 | # processed all the prev events. | |
651 | logger.debug( | |
652 | "_handle_new_pdu getting state for %s", | |
653 | pdu.room_id | |
654 | ) | |
655 | try: | |
656 | state, auth_chain = yield self.get_state_for_room( | |
657 | origin, pdu.room_id, pdu.event_id, | |
658 | ) | |
659 | except: | |
660 | logger.exception("Failed to get state for event: %s", pdu.event_id) | |
661 | ||
662 | yield self.handler.on_receive_pdu( | |
663 | origin, | |
664 | pdu, | |
665 | state=state, | |
666 | auth_chain=auth_chain, | |
667 | ) | |
523 | yield self.handler.on_receive_pdu(origin, pdu, get_missing=True) | |
668 | 524 | |
669 | 525 | def __str__(self): |
670 | 526 | return "<ReplicationLayer(%s)>" % self.server_name |
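The removed `_handle_new_pdu` body (now moved behind `handler.on_receive_pdu`) repeatedly computes which of a PDU's `prev_events` are still missing, as `prevs - seen`. That set arithmetic, with input shapes assumed from the surrounding code (`prev_events` as `(event_id, hashes)` pairs, `have_seen` as the store's `event_id -> event` dict):

```python
def missing_prev_events(prev_events, have_seen):
    """Return the set of prev_event IDs we haven't seen yet.

    This is the check that decides whether to fetch missing events from
    the origin, and later whether to fall back to fetching room state.
    """
    prevs = {e_id for e_id, _ in prev_events}
    seen = set(have_seen.keys())
    return prevs - seen
```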
53 | 53 | def __init__(self, hs): |
54 | 54 | self.server_name = hs.hostname |
55 | 55 | self.clock = hs.get_clock() |
56 | self.notifier = hs.get_notifier() | |
56 | 57 | |
57 | 58 | self.presence_map = {} |
58 | 59 | self.presence_changed = sorteddict() |
185 | 186 | else: |
186 | 187 | self.edus[pos] = edu |
187 | 188 | |
189 | self.notifier.on_new_replication_data() | |
190 | ||
188 | 191 | def send_presence(self, destination, states): |
189 | 192 | """As per TransactionQueue""" |
190 | 193 | pos = self._next_pos() |
198 | 201 | (destination, state.user_id) for state in states |
199 | 202 | ] |
200 | 203 | |
204 | self.notifier.on_new_replication_data() | |
205 | ||
201 | 206 | def send_failure(self, failure, destination): |
202 | 207 | """As per TransactionQueue""" |
203 | 208 | pos = self._next_pos() |
204 | 209 | |
205 | 210 | self.failures[pos] = (destination, str(failure)) |
211 | self.notifier.on_new_replication_data() | |
206 | 212 | |
207 | 213 | def send_device_messages(self, destination): |
208 | 214 | """As per TransactionQueue""" |
209 | 215 | pos = self._next_pos() |
210 | 216 | self.device_messages[pos] = destination |
217 | self.notifier.on_new_replication_data() | |
211 | 218 | |
212 | 219 | def get_current_token(self): |
213 | 220 | return self.pos - 1 |
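Each hunk in this file adds a `notifier.on_new_replication_data()` poke after mutating the outbound federation queue, so replication clients wake up promptly instead of polling. The pattern, with stand-in classes (not Synapse's real `FederationRemoteSendQueue`/`Notifier`):

```python
class QueueWithNotifier:
    """Every mutation of the queue pokes the notifier, mirroring the
    send_* methods above."""

    def __init__(self, notifier):
        self.notifier = notifier
        self.items = []

    def send(self, item):
        self.items.append(item)
        self.notifier.on_new_replication_data()
```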
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | ||
14 | import datetime | |
15 | 15 | |
16 | 16 | from twisted.internet import defer |
17 | 17 | |
21 | 21 | from synapse.api.errors import HttpResponseException |
22 | 22 | from synapse.util.async import run_on_reactor |
23 | 23 | from synapse.util.logcontext import preserve_context_over_fn |
24 | from synapse.util.retryutils import ( | |
25 | get_retry_limiter, NotRetryingDestination, | |
26 | ) | |
24 | from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter | |
27 | 25 | from synapse.util.metrics import measure_func |
28 | 26 | from synapse.types import get_domain_from_id |
29 | 27 | from synapse.handlers.presence import format_user_presence_state |
98 | 96 | # destination -> list of tuple(failure, deferred) |
99 | 97 | self.pending_failures_by_dest = {} |
100 | 98 | |
99 | # destination -> stream_id of last successfully sent to-device message. | |
100 | # NB: may be a long or an int. | |
101 | 101 | self.last_device_stream_id_by_dest = {} |
102 | ||
103 | # destination -> stream_id of last successfully sent device list | |
104 | # update. | |
102 | 105 | self.last_device_list_stream_id_by_dest = {} |
103 | 106 | |
104 | 107 | # HACK to get unique tx id |
299 | 302 | ) |
300 | 303 | return |
301 | 304 | |
305 | pending_pdus = [] | |
302 | 306 | try: |
303 | 307 | self.pending_transactions[destination] = 1 |
304 | 308 | |
309 | # This will throw if we wouldn't retry. We do this here so we fail | |
310 | # quickly, but we will later check this again in the http client, | |
311 | # hence why we throw the result away. | |
312 | yield get_retry_limiter(destination, self.clock, self.store) | |
313 | ||
305 | 314 | # XXX: what's this for? |
306 | 315 | yield run_on_reactor() |
307 | 316 | |
317 | pending_pdus = [] | |
308 | 318 | while True: |
309 | limiter = yield get_retry_limiter( | |
310 | destination, | |
311 | self.clock, | |
312 | self.store, | |
313 | backoff_on_404=True, # If we get a 404 the other side has gone | |
314 | ) | |
315 | ||
316 | 319 | device_message_edus, device_stream_id, dev_list_id = ( |
317 | 320 | yield self._get_new_device_messages(destination) |
318 | 321 | ) |
368 | 371 | |
369 | 372 | success = yield self._send_new_transaction( |
370 | 373 | destination, pending_pdus, pending_edus, pending_failures, |
371 | limiter=limiter, | |
372 | 374 | ) |
373 | 375 | if success: |
374 | 376 | # Remove the acknowledged device messages from the database |
386 | 388 | self.last_device_list_stream_id_by_dest[destination] = dev_list_id |
387 | 389 | else: |
388 | 390 | break |
389 | except NotRetryingDestination: | |
391 | except NotRetryingDestination as e: | |
390 | 392 | logger.debug( |
391 | "TX [%s] not ready for retry yet - " | |
393 | "TX [%s] not ready for retry yet (next retry at %s) - " | |
392 | 394 | "dropping transaction for now", |
393 | 395 | destination, |
394 | ) | |
396 | datetime.datetime.fromtimestamp( | |
397 | (e.retry_last_ts + e.retry_interval) / 1000.0 | |
398 | ), | |
399 | ) | |
400 | except Exception as e: | |
401 | logger.warn( | |
402 | "TX [%s] Failed to send transaction: %s", | |
403 | destination, | |
404 | e, | |
405 | ) | |
406 | for p, _ in pending_pdus: | |
407 | logger.info("Failed to send event %s to %s", p.event_id, | |
408 | destination) | |
395 | 409 | finally: |
396 | 410 | # We want to be *very* sure we delete this after we stop processing |
397 | 411 | self.pending_transactions.pop(destination, None) |
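The improved `NotRetryingDestination` log message computes the next retry time from the exception's millisecond `retry_last_ts` and `retry_interval` fields. The same arithmetic as a helper (name is illustrative):

```python
import datetime

def next_retry_at(retry_last_ts, retry_interval):
    """When the destination's backoff window ends, as a local datetime.

    Both inputs are milliseconds, matching NotRetryingDestination.
    """
    return datetime.datetime.fromtimestamp(
        (retry_last_ts + retry_interval) / 1000.0
    )
```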
431 | 445 | @measure_func("_send_new_transaction") |
432 | 446 | @defer.inlineCallbacks |
433 | 447 | def _send_new_transaction(self, destination, pending_pdus, pending_edus, |
434 | pending_failures, limiter): | |
448 | pending_failures): | |
435 | 449 | |
436 | 450 | # Sort based on the order field |
437 | 451 | pending_pdus.sort(key=lambda t: t[1]) |
441 | 455 | |
442 | 456 | success = True |
443 | 457 | |
458 | logger.debug("TX [%s] _attempt_new_transaction", destination) | |
459 | ||
460 | txn_id = str(self._next_txn_id) | |
461 | ||
462 | logger.debug( | |
463 | "TX [%s] {%s} Attempting new transaction" | |
464 | " (pdus: %d, edus: %d, failures: %d)", | |
465 | destination, txn_id, | |
466 | len(pdus), | |
467 | len(edus), | |
468 | len(failures) | |
469 | ) | |
470 | ||
471 | logger.debug("TX [%s] Persisting transaction...", destination) | |
472 | ||
473 | transaction = Transaction.create_new( | |
474 | origin_server_ts=int(self.clock.time_msec()), | |
475 | transaction_id=txn_id, | |
476 | origin=self.server_name, | |
477 | destination=destination, | |
478 | pdus=pdus, | |
479 | edus=edus, | |
480 | pdu_failures=failures, | |
481 | ) | |
482 | ||
483 | self._next_txn_id += 1 | |
484 | ||
485 | yield self.transaction_actions.prepare_to_send(transaction) | |
486 | ||
487 | logger.debug("TX [%s] Persisted transaction", destination) | |
488 | logger.info( | |
489 | "TX [%s] {%s} Sending transaction [%s]," | |
490 | " (PDUs: %d, EDUs: %d, failures: %d)", | |
491 | destination, txn_id, | |
492 | transaction.transaction_id, | |
493 | len(pdus), | |
494 | len(edus), | |
495 | len(failures), | |
496 | ) | |
497 | ||
498 | # Actually send the transaction | |
499 | ||
500 | # FIXME (erikj): This is a bit of a hack to make the Pdu age | |
501 | # keys work | |
502 | def json_data_cb(): | |
503 | data = transaction.get_dict() | |
504 | now = int(self.clock.time_msec()) | |
505 | if "pdus" in data: | |
506 | for p in data["pdus"]: | |
507 | if "age_ts" in p: | |
508 | unsigned = p.setdefault("unsigned", {}) | |
509 | unsigned["age"] = now - int(p["age_ts"]) | |
510 | del p["age_ts"] | |
511 | return data | |
512 | ||
444 | 513 | try: |
445 | logger.debug("TX [%s] _attempt_new_transaction", destination) | |
446 | ||
447 | txn_id = str(self._next_txn_id) | |
448 | ||
449 | logger.debug( | |
450 | "TX [%s] {%s} Attempting new transaction" | |
451 | " (pdus: %d, edus: %d, failures: %d)", | |
452 | destination, txn_id, | |
453 | len(pdus), | |
454 | len(edus), | |
455 | len(failures) | |
456 | ) | |
457 | ||
458 | logger.debug("TX [%s] Persisting transaction...", destination) | |
459 | ||
460 | transaction = Transaction.create_new( | |
461 | origin_server_ts=int(self.clock.time_msec()), | |
462 | transaction_id=txn_id, | |
463 | origin=self.server_name, | |
464 | destination=destination, | |
465 | pdus=pdus, | |
466 | edus=edus, | |
467 | pdu_failures=failures, | |
468 | ) | |
469 | ||
470 | self._next_txn_id += 1 | |
471 | ||
472 | yield self.transaction_actions.prepare_to_send(transaction) | |
473 | ||
474 | logger.debug("TX [%s] Persisted transaction", destination) | |
475 | logger.info( | |
476 | "TX [%s] {%s} Sending transaction [%s]," | |
477 | " (PDUs: %d, EDUs: %d, failures: %d)", | |
478 | destination, txn_id, | |
479 | transaction.transaction_id, | |
480 | len(pdus), | |
481 | len(edus), | |
482 | len(failures), | |
483 | ) | |
484 | ||
485 | with limiter: | |
486 | # Actually send the transaction | |
487 | ||
488 | # FIXME (erikj): This is a bit of a hack to make the Pdu age | |
489 | # keys work | |
490 | def json_data_cb(): | |
491 | data = transaction.get_dict() | |
492 | now = int(self.clock.time_msec()) | |
493 | if "pdus" in data: | |
494 | for p in data["pdus"]: | |
495 | if "age_ts" in p: | |
496 | unsigned = p.setdefault("unsigned", {}) | |
497 | unsigned["age"] = now - int(p["age_ts"]) | |
498 | del p["age_ts"] | |
499 | return data | |
500 | ||
501 | try: | |
502 | response = yield self.transport_layer.send_transaction( | |
503 | transaction, json_data_cb | |
504 | ) | |
505 | code = 200 | |
506 | ||
507 | if response: | |
508 | for e_id, r in response.get("pdus", {}).items(): | |
509 | if "error" in r: | |
510 | logger.warn( | |
511 | "Transaction returned error for %s: %s", | |
512 | e_id, r, | |
513 | ) | |
514 | except HttpResponseException as e: | |
515 | code = e.code | |
516 | response = e.response | |
517 | ||
518 | if e.code in (401, 404, 429) or 500 <= e.code: | |
519 | logger.info( | |
520 | "TX [%s] {%s} got %d response", | |
521 | destination, txn_id, code | |
514 | response = yield self.transport_layer.send_transaction( | |
515 | transaction, json_data_cb | |
516 | ) | |
517 | code = 200 | |
518 | ||
519 | if response: | |
520 | for e_id, r in response.get("pdus", {}).items(): | |
521 | if "error" in r: | |
522 | logger.warn( | |
523 | "Transaction returned error for %s: %s", | |
524 | e_id, r, | |
522 | 525 | ) |
523 | raise e | |
524 | ||
526 | except HttpResponseException as e: | |
527 | code = e.code | |
528 | response = e.response | |
529 | ||
530 | if e.code in (401, 404, 429) or 500 <= e.code: | |
525 | 531 | logger.info( |
526 | 532 | "TX [%s] {%s} got %d response", |
527 | 533 | destination, txn_id, code |
528 | 534 | ) |
529 | ||
530 | logger.debug("TX [%s] Sent transaction", destination) | |
531 | logger.debug("TX [%s] Marking as delivered...", destination) | |
532 | ||
533 | yield self.transaction_actions.delivered( | |
534 | transaction, code, response | |
535 | ) | |
536 | ||
537 | logger.debug("TX [%s] Marked as delivered", destination) | |
538 | ||
539 | if code != 200: | |
540 | for p in pdus: | |
541 | logger.info( | |
542 | "Failed to send event %s to %s", p.event_id, destination | |
543 | ) | |
544 | success = False | |
545 | except RuntimeError as e: | |
546 | # We capture this here as there is nothing that actually listens | |
547 | # for this function's deferred to finish. | |
548 | logger.warn( | |
549 | "TX [%s] Problem in _attempt_transaction: %s", | |
550 | destination, | |
551 | e, | |
552 | ) | |
553 | ||
535 | raise e | |
536 | ||
537 | logger.info( | |
538 | "TX [%s] {%s} got %d response", | |
539 | destination, txn_id, code | |
540 | ) | |
541 | ||
542 | logger.debug("TX [%s] Sent transaction", destination) | |
543 | logger.debug("TX [%s] Marking as delivered...", destination) | |
544 | ||
545 | yield self.transaction_actions.delivered( | |
546 | transaction, code, response | |
547 | ) | |
548 | ||
549 | logger.debug("TX [%s] Marked as delivered", destination) | |
550 | ||
551 | if code != 200: | |
552 | for p in pdus: | |
553 | logger.info( | |
554 | "Failed to send event %s to %s", p.event_id, destination | |
555 | ) | |
554 | 556 | success = False |
555 | 557 | |
556 | for p in pdus: | |
557 | logger.info("Failed to send event %s to %s", p.event_id, destination) | |
558 | except Exception as e: | |
559 | # We capture this here as there is nothing that actually listens | |
560 | # for this function's deferred to finish. | |
561 | logger.warn( | |
562 | "TX [%s] Problem in _attempt_transaction: %s", | |
563 | destination, | |
564 | e, | |
565 | ) | |
566 | ||
567 | success = False | |
568 | ||
569 | for p in pdus: | |
570 | logger.info("Failed to send event %s to %s", p.event_id, destination) | |
571 | ||
572 | 558 | defer.returnValue(success) |
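The `json_data_cb` hack above rewrites each PDU just before sending: the absolute `age_ts` timestamp is replaced by a relative `age` under `unsigned`, so the receiver learns how old the event is rather than when it was created. Extracted as a standalone sketch:

```python
def rewrite_pdu_ages(data, now_ms):
    """Replace each PDU's "age_ts" with a relative "age" in "unsigned",
    mirroring json_data_cb above. Mutates and returns data."""
    for p in data.get("pdus", []):
        if "age_ts" in p:
            unsigned = p.setdefault("unsigned", {})
            unsigned["age"] = now_ms - int(p["age_ts"])
            del p["age_ts"]
    return data
```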
162 | 162 | data=json_data, |
163 | 163 | json_data_callback=json_data_callback, |
164 | 164 | long_retries=True, |
165 | backoff_on_404=True, # If we get a 404 the other side has gone | |
165 | 166 | ) |
166 | 167 | |
167 | 168 | logger.debug( |
173 | 174 | |
174 | 175 | @defer.inlineCallbacks |
175 | 176 | @log_function |
176 | def make_query(self, destination, query_type, args, retry_on_dns_fail): | |
177 | def make_query(self, destination, query_type, args, retry_on_dns_fail, | |
178 | ignore_backoff=False): | |
177 | 179 | path = PREFIX + "/query/%s" % query_type |
178 | 180 | |
179 | 181 | content = yield self.client.get_json( |
182 | 184 | args=args, |
183 | 185 | retry_on_dns_fail=retry_on_dns_fail, |
184 | 186 | timeout=10000, |
187 | ignore_backoff=ignore_backoff, | |
185 | 188 | ) |
186 | 189 | |
187 | 190 | defer.returnValue(content) |
241 | 244 | destination=destination, |
242 | 245 | path=path, |
243 | 246 | data=content, |
247 | ignore_backoff=True, | |
244 | 248 | ) |
245 | 249 | |
246 | 250 | defer.returnValue(response) |
268 | 272 | destination=remote_server, |
269 | 273 | path=path, |
270 | 274 | args=args, |
275 | ignore_backoff=True, | |
271 | 276 | ) |
272 | 277 | |
273 | 278 | defer.returnValue(response) |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2014 - 2016 OpenMarket Ltd |
2 | # Copyright 2017 Vector Creations Ltd | |
2 | 3 | # |
3 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 5 | # you may not use this file except in compliance with the License. |
46 | 47 | LoginType.PASSWORD: self._check_password_auth, |
47 | 48 | LoginType.RECAPTCHA: self._check_recaptcha, |
48 | 49 | LoginType.EMAIL_IDENTITY: self._check_email_identity, |
50 | LoginType.MSISDN: self._check_msisdn, | |
49 | 51 | LoginType.DUMMY: self._check_dummy_auth, |
50 | 52 | } |
51 | 53 | self.bcrypt_rounds = hs.config.bcrypt_rounds |
306 | 308 | defer.returnValue(True) |
307 | 309 | raise LoginError(401, "", errcode=Codes.UNAUTHORIZED) |
308 | 310 | |
309 | @defer.inlineCallbacks | |
310 | 311 | def _check_email_identity(self, authdict, _): |
311 | yield run_on_reactor() | |
312 | ||
313 | if 'threepid_creds' not in authdict: | |
314 | raise LoginError(400, "Missing threepid_creds", Codes.MISSING_PARAM) | |
315 | ||
316 | threepid_creds = authdict['threepid_creds'] | |
317 | identity_handler = self.hs.get_handlers().identity_handler | |
318 | ||
319 | logger.info("Getting validated threepid. threepidcreds: %r" % (threepid_creds,)) | |
320 | threepid = yield identity_handler.threepid_from_creds(threepid_creds) | |
321 | ||
322 | if not threepid: | |
323 | raise LoginError(401, "", errcode=Codes.UNAUTHORIZED) | |
324 | ||
325 | threepid['threepid_creds'] = authdict['threepid_creds'] | |
326 | ||
327 | defer.returnValue(threepid) | |
312 | return self._check_threepid('email', authdict) | |
313 | ||
314 | def _check_msisdn(self, authdict, _): | |
315 | return self._check_threepid('msisdn', authdict) | |
328 | 316 | |
329 | 317 | @defer.inlineCallbacks |
330 | 318 | def _check_dummy_auth(self, authdict, _): |
331 | 319 | yield run_on_reactor() |
332 | 320 | defer.returnValue(True) |
321 | ||
322 | @defer.inlineCallbacks | |
323 | def _check_threepid(self, medium, authdict): | |
324 | yield run_on_reactor() | |
325 | ||
326 | if 'threepid_creds' not in authdict: | |
327 | raise LoginError(400, "Missing threepid_creds", Codes.MISSING_PARAM) | |
328 | ||
329 | threepid_creds = authdict['threepid_creds'] | |
330 | ||
331 | identity_handler = self.hs.get_handlers().identity_handler | |
332 | ||
333 | logger.info("Getting validated threepid. threepidcreds: %r", (threepid_creds,)) | |
334 | threepid = yield identity_handler.threepid_from_creds(threepid_creds) | |
335 | ||
336 | if not threepid: | |
337 | raise LoginError(401, "", errcode=Codes.UNAUTHORIZED) | |
338 | ||
339 | if threepid['medium'] != medium: | |
340 | raise LoginError( | |
341 | 401, | |
342 | "Expecting threepid of type '%s', got '%s'" % ( | |
343 | medium, threepid['medium'], | |
344 | ), | |
345 | errcode=Codes.UNAUTHORIZED | |
346 | ) | |
347 | ||
348 | threepid['threepid_creds'] = authdict['threepid_creds'] | |
349 | ||
350 | defer.returnValue(threepid) | |
333 | 351 | |
334 | 352 | def _get_params_recaptcha(self): |
335 | 353 | return {"public_key": self.hs.config.recaptcha_public_key} |
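The refactor above folds the email and MSISDN checks into a single `_check_threepid` helper that validates the credentials and then verifies the returned threepid's medium matches the one requested. A minimal standalone sketch of that validation logic (the `lookup` callable is a hypothetical stand-in for the identity handler's `threepid_from_creds`):

```python
class LoginError(Exception):
    """Hypothetical stand-in for synapse.api.errors.LoginError."""
    def __init__(self, code, msg, errcode="M_UNAUTHORIZED"):
        super().__init__(msg)
        self.code = code
        self.errcode = errcode


def check_threepid(medium, authdict, lookup):
    """Validate threepid_creds via `lookup` and require that the validated
    threepid has the expected medium ('email' or 'msisdn')."""
    if "threepid_creds" not in authdict:
        raise LoginError(400, "Missing threepid_creds", "M_MISSING_PARAM")

    threepid = lookup(authdict["threepid_creds"])
    if not threepid:
        raise LoginError(401, "")

    if threepid["medium"] != medium:
        raise LoginError(
            401,
            "Expecting threepid of type '%s', got '%s'"
            % (medium, threepid["medium"]),
        )

    # Hand the creds back so the registration flow can bind the threepid.
    threepid["threepid_creds"] = authdict["threepid_creds"]
    return threepid
```

The medium check is the important addition: without it, an attacker could validate a phone number and then replay the session as an email validation.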
169 | 169 | yield self.notify_device_update(user_id, [device_id]) |
170 | 170 | |
171 | 171 | @defer.inlineCallbacks |
172 | def delete_devices(self, user_id, device_ids): | |
173 | """ Delete several devices | |
174 | ||
175 | Args: | |
176 | user_id (str): | |
176 | device_ids (list[str]): The list of device IDs to delete | |
178 | ||
179 | Returns: | |
180 | defer.Deferred: | |
181 | """ | |
182 | ||
183 | try: | |
184 | yield self.store.delete_devices(user_id, device_ids) | |
185 | except errors.StoreError as e: | |
186 | if e.code == 404: | |
187 | # no match | |
188 | pass | |
189 | else: | |
190 | raise | |
191 | ||
192 | # Delete access tokens and e2e keys for each device. Not optimised as it is not | |
193 | # considered part of a critical path. | |
194 | for device_id in device_ids: | |
195 | yield self.store.user_delete_access_tokens( | |
196 | user_id, device_id=device_id, | |
197 | delete_refresh_tokens=True, | |
198 | ) | |
199 | yield self.store.delete_e2e_keys_by_device( | |
200 | user_id=user_id, device_id=device_id | |
201 | ) | |
202 | ||
203 | yield self.notify_device_update(user_id, device_ids) | |
204 | ||
205 | @defer.inlineCallbacks | |
172 | 206 | def update_device(self, user_id, device_id, content): |
173 | 207 | """ Update the given device |
174 | 208 | |
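The new `delete_devices` handler above treats a 404 from the device store as a no-op and then removes access tokens and e2e keys per device, best-effort. The control flow can be sketched as follows (synchronous sketch with a hypothetical `store` object, not Synapse's actual Twisted storage API):

```python
class StoreError(Exception):
    """Hypothetical stand-in for synapse.api.errors.StoreError."""
    def __init__(self, code, msg=""):
        super().__init__(msg)
        self.code = code


def delete_devices(store, user_id, device_ids):
    """Bulk device deletion: a 404 (no matching device) is ignored, any
    other store error propagates; then tokens and keys are cleaned up
    one device at a time."""
    try:
        store.delete_devices(user_id, device_ids)
    except StoreError as e:
        if e.code != 404:  # "no match" is fine; anything else is fatal
            raise

    # Not optimised: this path is not performance-critical.
    for device_id in device_ids:
        store.user_delete_access_tokens(user_id, device_id=device_id)
        store.delete_e2e_keys_by_device(user_id=user_id, device_id=device_id)
    return device_ids
```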
213 | 247 | user_id, device_ids, list(hosts) |
214 | 248 | ) |
215 | 249 | |
216 | rooms = yield self.store.get_rooms_for_user(user_id) | |
217 | room_ids = [r.room_id for r in rooms] | |
250 | room_ids = yield self.store.get_rooms_for_user(user_id) | |
218 | 251 | |
219 | 252 | yield self.notifier.on_new_event( |
220 | 253 | "device_list_key", position, rooms=room_ids, |
235 | 268 | user_id (str) |
236 | 269 | from_token (StreamToken) |
237 | 270 | """ |
238 | rooms = yield self.store.get_rooms_for_user(user_id) | |
239 | room_ids = set(r.room_id for r in rooms) | |
271 | room_ids = yield self.store.get_rooms_for_user(user_id) | |
240 | 272 | |
241 | 273 | # First we check if any devices have changed |
242 | 274 | changed = yield self.store.get_user_whose_devices_changed( |
261 | 293 | # ordering: treat it the same as a new room |
262 | 294 | event_ids = [] |
263 | 295 | |
264 | current_state_ids = yield self.state.get_current_state_ids(room_id) | |
296 | current_state_ids = yield self.store.get_current_state_ids(room_id) | |
265 | 297 | |
266 | 298 | # special-case for an empty prev state: include all members |
267 | 299 | # in the changed list |
312 | 344 | @defer.inlineCallbacks |
313 | 345 | def user_left_room(self, user, room_id): |
314 | 346 | user_id = user.to_string() |
315 | rooms = yield self.store.get_rooms_for_user(user_id) | |
316 | if not rooms: | |
347 | room_ids = yield self.store.get_rooms_for_user(user_id) | |
348 | if not room_ids: | |
317 | 349 | # We no longer share rooms with this user, so we'll no longer |
318 | 350 | # receive device updates. Mark this in DB. |
319 | 351 | yield self.store.mark_remote_user_device_list_as_unsubscribed(user_id) |
369 | 401 | logger.warning("Got device list update edu for %r from %r", user_id, origin) |
370 | 402 | return |
371 | 403 | |
372 | rooms = yield self.store.get_rooms_for_user(user_id) | |
373 | if not rooms: | |
404 | room_ids = yield self.store.get_rooms_for_user(user_id) | |
405 | if not room_ids: | |
374 | 406 | # We don't share any rooms with this user. Ignore update, as we |
375 | 407 | # probably won't get any further updates. |
376 | 408 | return |
174 | 174 | "room_alias": room_alias.to_string(), |
175 | 175 | }, |
176 | 176 | retry_on_dns_fail=False, |
177 | ignore_backoff=True, | |
177 | 178 | ) |
178 | 179 | except CodeMessageException as e: |
179 | 180 | logging.warn("Error retrieving alias") |
21 | 21 | from synapse.api.errors import SynapseError, CodeMessageException |
22 | 22 | from synapse.types import get_domain_from_id |
23 | 23 | from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred |
24 | from synapse.util.retryutils import get_retry_limiter, NotRetryingDestination | |
24 | from synapse.util.retryutils import NotRetryingDestination | |
25 | 25 | |
26 | 26 | logger = logging.getLogger(__name__) |
27 | 27 | |
120 | 120 | def do_remote_query(destination): |
121 | 121 | destination_query = remote_queries_not_in_cache[destination] |
122 | 122 | try: |
123 | limiter = yield get_retry_limiter( | |
124 | destination, self.clock, self.store | |
123 | remote_result = yield self.federation.query_client_keys( | |
124 | destination, | |
125 | {"device_keys": destination_query}, | |
126 | timeout=timeout | |
125 | 127 | ) |
126 | with limiter: | |
127 | remote_result = yield self.federation.query_client_keys( | |
128 | destination, | |
129 | {"device_keys": destination_query}, | |
130 | timeout=timeout | |
131 | ) | |
132 | 128 | |
133 | 129 | for user_id, keys in remote_result["device_keys"].items(): |
134 | 130 | if user_id in destination_query: |
238 | 234 | def claim_client_keys(destination): |
239 | 235 | device_keys = remote_queries[destination] |
240 | 236 | try: |
241 | limiter = yield get_retry_limiter( | |
242 | destination, self.clock, self.store | |
237 | remote_result = yield self.federation.claim_client_keys( | |
238 | destination, | |
239 | {"one_time_keys": device_keys}, | |
240 | timeout=timeout | |
243 | 241 | ) |
244 | with limiter: | |
245 | remote_result = yield self.federation.claim_client_keys( | |
246 | destination, | |
247 | {"one_time_keys": device_keys}, | |
248 | timeout=timeout | |
249 | ) | |
250 | for user_id, keys in remote_result["one_time_keys"].items(): | |
251 | if user_id in device_keys: | |
252 | json_result[user_id] = keys | |
242 | for user_id, keys in remote_result["one_time_keys"].items(): | |
243 | if user_id in device_keys: | |
244 | json_result[user_id] = keys | |
253 | 245 | except CodeMessageException as e: |
254 | 246 | failures[destination] = { |
255 | 247 | "status": e.code, "message": e.message |
315 | 307 | # old access_token without an associated device_id. Either way, we |
316 | 308 | # need to double-check the device is registered to avoid ending up with |
317 | 309 | # keys without a corresponding device. |
318 | self.device_handler.check_device_registered(user_id, device_id) | |
310 | yield self.device_handler.check_device_registered(user_id, device_id) | |
319 | 311 | |
320 | 312 | result = yield self.store.count_e2e_one_time_keys(user_id, device_id) |
321 | 313 |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | """Contains handlers for federation events.""" |
16 | import synapse.util.logcontext | |
16 | 17 | from signedjson.key import decode_verify_key_bytes |
17 | 18 | from signedjson.sign import verify_signed_json |
18 | 19 | from unpaddedbase64 import decode_base64 |
30 | 31 | ) |
31 | 32 | from synapse.util.metrics import measure_func |
32 | 33 | from synapse.util.logutils import log_function |
33 | from synapse.util.async import run_on_reactor | |
34 | from synapse.util.async import run_on_reactor, Linearizer | |
34 | 35 | from synapse.util.frozenutils import unfreeze |
35 | 36 | from synapse.crypto.event_signing import ( |
36 | 37 | compute_event_signature, add_hashes_and_signatures, |
78 | 79 | |
79 | 80 | # When joining a room we need to queue any events for that room up |
80 | 81 | self.room_queues = {} |
81 | ||
82 | self._room_pdu_linearizer = Linearizer("fed_room_pdu") | |
83 | ||
84 | @defer.inlineCallbacks | |
82 | 85 | @log_function |
83 | @defer.inlineCallbacks | |
84 | def on_receive_pdu(self, origin, pdu, state=None, auth_chain=None): | |
85 | """ Called by the ReplicationLayer when we have a new pdu. We need to | |
86 | do auth checks and put it through the StateHandler. | |
87 | ||
88 | auth_chain and state are None if we already have the necessary state | |
89 | and prev_events in the db | |
90 | """ | |
91 | event = pdu | |
92 | ||
93 | logger.debug("Got event: %s", event.event_id) | |
86 | def on_receive_pdu(self, origin, pdu, get_missing=True): | |
87 | """ Process a PDU received via a federation /send/ transaction, or | |
88 | via backfill of missing prev_events | |
89 | ||
90 | Args: | |
91 | origin (str): server which initiated the /send/ transaction. Will | |
92 | be used to fetch missing events or state. | |
93 | pdu (FrozenEvent): received PDU | |
94 | get_missing (bool): True if we should fetch missing prev_events | |
95 | ||
96 | Returns (Deferred): completes with None | |
97 | """ | |
98 | ||
99 | # We reprocess pdus when we have seen them only as outliers | |
100 | existing = yield self.get_persisted_pdu( | |
101 | origin, pdu.event_id, do_auth=False | |
102 | ) | |
103 | ||
104 | # FIXME: Currently we fetch an event again when we already have it | |
105 | # if it has been marked as an outlier. | |
106 | ||
107 | already_seen = ( | |
108 | existing and ( | |
109 | not existing.internal_metadata.is_outlier() | |
110 | or pdu.internal_metadata.is_outlier() | |
111 | ) | |
112 | ) | |
113 | if already_seen: | |
114 | logger.debug("Already seen pdu %s", pdu.event_id) | |
115 | return | |
94 | 116 | |
95 | 117 | # If we are currently in the process of joining this room, then we |
96 | 118 | # queue up events for later processing. |
97 | if event.room_id in self.room_queues: | |
98 | self.room_queues[event.room_id].append((pdu, origin)) | |
119 | if pdu.room_id in self.room_queues: | |
120 | logger.info("Ignoring PDU %s for room %s from %s for now; join " | |
121 | "in progress", pdu.event_id, pdu.room_id, origin) | |
122 | self.room_queues[pdu.room_id].append((pdu, origin)) | |
99 | 123 | return |
100 | 124 | |
101 | logger.debug("Processing event: %s", event.event_id) | |
102 | ||
103 | logger.debug("Event: %s", event) | |
125 | state = None | |
126 | ||
127 | auth_chain = [] | |
128 | ||
129 | have_seen = yield self.store.have_events( | |
130 | [ev for ev, _ in pdu.prev_events] | |
131 | ) | |
132 | ||
133 | fetch_state = False | |
134 | ||
135 | # Get missing pdus if necessary. | |
136 | if not pdu.internal_metadata.is_outlier(): | |
137 | # We only backfill backwards to the min depth. | |
138 | min_depth = yield self.get_min_depth_for_context( | |
139 | pdu.room_id | |
140 | ) | |
141 | ||
142 | logger.debug( | |
143 | "_handle_new_pdu min_depth for %s: %d", | |
144 | pdu.room_id, min_depth | |
145 | ) | |
146 | ||
147 | prevs = {e_id for e_id, _ in pdu.prev_events} | |
148 | seen = set(have_seen.keys()) | |
149 | ||
150 | if min_depth and pdu.depth < min_depth: | |
151 | # This is so that we don't notify the user about this | |
152 | # message, to work around the fact that some events will | |
153 | # reference really really old events we really don't want to | |
154 | # send to the clients. | |
155 | pdu.internal_metadata.outlier = True | |
156 | elif min_depth and pdu.depth > min_depth: | |
157 | if get_missing and prevs - seen: | |
158 | # If we're missing stuff, ensure we only fetch stuff one | |
159 | # at a time. | |
160 | logger.info( | |
161 | "Acquiring lock for room %r to fetch %d missing events: %r...", | |
162 | pdu.room_id, len(prevs - seen), list(prevs - seen)[:5], | |
163 | ) | |
164 | with (yield self._room_pdu_linearizer.queue(pdu.room_id)): | |
165 | logger.info( | |
166 | "Acquired lock for room %r to fetch %d missing events", | |
167 | pdu.room_id, len(prevs - seen), | |
168 | ) | |
169 | ||
170 | yield self._get_missing_events_for_pdu( | |
171 | origin, pdu, prevs, min_depth | |
172 | ) | |
173 | ||
174 | prevs = {e_id for e_id, _ in pdu.prev_events} | |
175 | seen = set(have_seen.keys()) | |
176 | if prevs - seen: | |
177 | logger.info( | |
178 | "Still missing %d events for room %r: %r...", | |
179 | len(prevs - seen), pdu.room_id, list(prevs - seen)[:5] | |
180 | ) | |
181 | fetch_state = True | |
182 | ||
183 | if fetch_state: | |
184 | # We need to get the state at this event, since we haven't | |
185 | # processed all the prev events. | |
186 | logger.debug( | |
187 | "_handle_new_pdu getting state for %s", | |
188 | pdu.room_id | |
189 | ) | |
190 | try: | |
191 | state, auth_chain = yield self.replication_layer.get_state_for_room( | |
192 | origin, pdu.room_id, pdu.event_id, | |
193 | ) | |
194 | except: | |
195 | logger.exception("Failed to get state for event: %s", pdu.event_id) | |
196 | ||
197 | yield self._process_received_pdu( | |
198 | origin, | |
199 | pdu, | |
200 | state=state, | |
201 | auth_chain=auth_chain, | |
202 | ) | |
203 | ||
204 | @defer.inlineCallbacks | |
205 | def _get_missing_events_for_pdu(self, origin, pdu, prevs, min_depth): | |
206 | """ | |
207 | Args: | |
208 | origin (str): Origin of the pdu. We will ask this server for the missing events | |
209 | pdu: received pdu | |
210 | prevs (set(str)): Set of event ids which we are missing | |
211 | min_depth (int): Minimum depth of events to return. | |
212 | ||
213 | Returns: | |
214 | Deferred<dict(str, str?)>: updated have_seen dictionary | |
215 | """ | |
216 | # We recalculate seen, since it may have changed. | |
217 | have_seen = yield self.store.have_events(prevs) | |
218 | seen = set(have_seen.keys()) | |
219 | ||
220 | if not prevs - seen: | |
221 | # nothing left to do | |
222 | defer.returnValue(have_seen) | |
223 | ||
224 | latest = yield self.store.get_latest_event_ids_in_room( | |
225 | pdu.room_id | |
226 | ) | |
227 | ||
228 | # We add the prev events that we have seen to the latest | |
229 | # list to ensure the remote server doesn't give them to us | |
230 | latest = set(latest) | |
231 | latest |= seen | |
232 | ||
233 | logger.info( | |
234 | "Missing %d events for room %r: %r...", | |
235 | len(prevs - seen), pdu.room_id, list(prevs - seen)[:5] | |
236 | ) | |
237 | ||
238 | # XXX: we set timeout to 10s to help workaround | |
239 | # https://github.com/matrix-org/synapse/issues/1733. | |
240 | # The reason is to avoid holding the linearizer lock | |
241 | # whilst processing inbound /send transactions, causing | |
242 | # FDs to stack up and block other inbound transactions | |
243 | # which empirically can currently take up to 30 minutes. | |
244 | # | |
245 | # N.B. this explicitly disables retry attempts. | |
246 | # | |
247 | # N.B. this also increases our chances of falling back to | |
248 | # fetching fresh state for the room if the missing event | |
249 | # can't be found, which slightly reduces our security. | |
250 | # it may also increase our DAG extremity count for the room, | |
251 | # causing additional state resolution? See #1760. | |
252 | # However, fetching state doesn't hold the linearizer lock | |
253 | # apparently. | |
254 | # | |
255 | # see https://github.com/matrix-org/synapse/pull/1744 | |
256 | ||
257 | missing_events = yield self.replication_layer.get_missing_events( | |
258 | origin, | |
259 | pdu.room_id, | |
260 | earliest_events_ids=list(latest), | |
261 | latest_events=[pdu], | |
262 | limit=10, | |
263 | min_depth=min_depth, | |
264 | timeout=10000, | |
265 | ) | |
266 | ||
267 | # We want to sort these by depth so we process them and | |
268 | # tell clients about them in order. | |
269 | missing_events.sort(key=lambda x: x.depth) | |
270 | ||
271 | for e in missing_events: | |
272 | yield self.on_receive_pdu( | |
273 | origin, | |
274 | e, | |
275 | get_missing=False | |
276 | ) | |
277 | ||
278 | have_seen = yield self.store.have_events( | |
279 | [ev for ev, _ in pdu.prev_events] | |
280 | ) | |
281 | defer.returnValue(have_seen) | |
282 | ||
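The reworked `on_receive_pdu` above decides what to fetch with plain set arithmetic over the PDU's `prev_events`, and tells the remote which events it already has so they are not resent. That bookkeeping can be sketched in isolation like this (illustrative helpers, not Synapse functions):

```python
def missing_prev_events(prev_events, have_seen):
    """Given a PDU's prev_events (a list of (event_id, hashes) pairs) and
    the event ids already persisted, return the ids we still need."""
    prevs = {e_id for e_id, _ in prev_events}
    return prevs - set(have_seen)


def backfill_window(latest_in_room, seen):
    """The 'earliest_events' we advertise to /get_missing_events: our
    forward extremities plus the prev_events we already have, so the
    remote server doesn't send us events we hold."""
    return set(latest_in_room) | set(seen)
```

If events are still missing after the fetch, the handler falls back to requesting the full room state at the event, as the code above shows.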
283 | @log_function | |
284 | @defer.inlineCallbacks | |
285 | def _process_received_pdu(self, origin, pdu, state, auth_chain): | |
286 | """ Called when we have a new pdu. We need to do auth checks and put it | |
287 | through the StateHandler. | |
288 | """ | |
289 | event = pdu | |
290 | ||
291 | logger.debug("Processing event: %s", event) | |
104 | 292 | |
105 | 293 | # FIXME (erikj): Awful hack to make the case where we are not currently |
106 | 294 | # in the room work |
669 | 857 | """ |
670 | 858 | logger.debug("Joining %s to %s", joinee, room_id) |
671 | 859 | |
672 | yield self.store.clean_room_for_join(room_id) | |
673 | ||
674 | 860 | origin, event = yield self._make_and_verify_event( |
675 | 861 | target_hosts, |
676 | 862 | room_id, |
679 | 865 | content, |
680 | 866 | ) |
681 | 867 | |
868 | # This shouldn't happen, because the RoomMemberHandler has a | |
869 | # linearizer lock which only allows one operation per user per room | |
870 | # at a time - so this is just paranoia. | |
871 | assert (room_id not in self.room_queues) | |
872 | ||
682 | 873 | self.room_queues[room_id] = [] |
874 | ||
875 | yield self.store.clean_room_for_join(room_id) | |
876 | ||
683 | 877 | handled_events = set() |
684 | 878 | |
685 | 879 | try: |
732 | 926 | room_queue = self.room_queues[room_id] |
733 | 927 | del self.room_queues[room_id] |
734 | 928 | |
735 | for p, origin in room_queue: | |
736 | if p.event_id in handled_events: | |
737 | continue | |
738 | ||
739 | try: | |
740 | self.on_receive_pdu(origin, p) | |
741 | except: | |
742 | logger.exception("Couldn't handle pdu") | |
929 | # we don't need to wait for the queued events to be processed - | |
930 | # it's just a best-effort thing at this point. We do want to do | |
931 | # them roughly in order, though, otherwise we'll end up making | |
932 | # lots of requests for missing prev_events which we do actually | |
933 | # have. Hence we fire off the deferred, but don't wait for it. | |
934 | ||
935 | synapse.util.logcontext.preserve_fn(self._handle_queued_pdus)( | |
936 | room_queue | |
937 | ) | |
743 | 938 | |
744 | 939 | defer.returnValue(True) |
940 | ||
941 | @defer.inlineCallbacks | |
942 | def _handle_queued_pdus(self, room_queue): | |
943 | """Process PDUs which got queued up while we were busy send_joining. | |
944 | ||
945 | Args: | |
946 | room_queue (list[(FrozenEvent, str)]): list of PDUs to be processed | |
947 | and the servers that sent them | |
948 | """ | |
949 | for p, origin in room_queue: | |
950 | try: | |
951 | logger.info("Processing queued PDU %s which was received " | |
952 | "while we were joining %s", p.event_id, p.room_id) | |
953 | yield self.on_receive_pdu(origin, p) | |
954 | except Exception as e: | |
955 | logger.warn( | |
956 | "Error handling queued PDU %s from %s: %s", | |
957 | p.event_id, origin, e) | |
745 | 958 | |
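`_handle_queued_pdus` above is deliberately best-effort: a failure on one queued PDU is logged and the rest are still processed. A minimal sketch of that pattern (with `process` standing in for `on_receive_pdu`):

```python
import logging

logger = logging.getLogger(__name__)


def handle_queued_pdus(room_queue, process):
    """Process queued (pdu, origin) pairs; one bad PDU must not prevent
    the rest from being handled. Returns the event ids that succeeded."""
    handled = []
    for pdu, origin in room_queue:
        try:
            process(origin, pdu)
            handled.append(pdu.event_id)
        except Exception as e:
            logger.warning(
                "Error handling queued PDU %s from %s: %s",
                pdu.event_id, origin, e,
            )
    return handled
```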
746 | 959 | @defer.inlineCallbacks |
747 | 960 | @log_function |
790 | 1003 | ) |
791 | 1004 | |
792 | 1005 | event.internal_metadata.outlier = False |
793 | # Send this event on behalf of the origin server since they may not | |
794 | # have an up to data view of the state of the room at this event so | |
795 | # will not know which servers to send the event to. | |
1006 | # Send this event on behalf of the origin server. | |
1007 | # | |
1008 | # The reasons we have the destination server rather than the origin | |
1009 | # server send it are slightly mysterious: the origin server should have | |
1010 | # all the necessary state once it gets the response to the send_join, | |
1011 | # so it could send the event itself if it wanted to. It may be that | |
1012 | # doing it this way reduces failure modes, or avoids certain attacks | |
1013 | # where a new server selectively tells a subset of the federation that | |
1014 | # it has joined. | |
1015 | # | |
1016 | # The fact is that, as of the current writing, Synapse doesn't send out | |
1017 | # the join event over federation after joining, and changing it now | |
1018 | # would introduce the danger of backwards-compatibility problems. | |
796 | 1019 | event.internal_metadata.send_on_behalf_of = origin |
797 | 1020 | |
798 | 1021 | context, event_stream_id, max_stream_id = yield self._handle_new_event( |
877 | 1100 | user_id, |
878 | 1101 | "leave" |
879 | 1102 | ) |
880 | signed_event = self._sign_event(event) | |
1103 | event = self._sign_event(event) | |
881 | 1104 | except SynapseError: |
882 | 1105 | raise |
883 | 1106 | except CodeMessageException as e: |
884 | 1107 | logger.warn("Failed to reject invite: %s", e) |
885 | 1108 | raise SynapseError(500, "Failed to reject invite") |
886 | 1109 | |
887 | # Try the host we successfully got a response to /make_join/ | |
888 | # request first. | |
1110 | # Try the host that we successfully called /make_leave/ on first for | |
1111 | # the /send_leave/ request. | |
889 | 1112 | try: |
890 | 1113 | target_hosts.remove(origin) |
891 | 1114 | target_hosts.insert(0, origin) |
895 | 1118 | try: |
896 | 1119 | yield self.replication_layer.send_leave( |
897 | 1120 | target_hosts, |
898 | signed_event | |
1121 | event | |
899 | 1122 | ) |
900 | 1123 | except SynapseError: |
901 | 1124 | raise |
1324 | 1547 | |
1325 | 1548 | @defer.inlineCallbacks |
1326 | 1549 | def _prep_event(self, origin, event, state=None, auth_events=None): |
1327 | ||
1550 | """ | |
1551 | ||
1552 | Args: | |
1553 | origin: | |
1554 | event: | |
1555 | state: | |
1556 | auth_events: | |
1557 | ||
1558 | Returns: | |
1559 | Deferred, which resolves to synapse.events.snapshot.EventContext | |
1560 | """ | |
1328 | 1561 | context = yield self.state_handler.compute_event_context( |
1329 | 1562 | event, old_state=state, |
1330 | 1563 | ) |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2015, 2016 OpenMarket Ltd |
2 | # Copyright 2017 Vector Creations Ltd | |
2 | 3 | # |
3 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 5 | # you may not use this file except in compliance with the License. |
149 | 150 | params.update(kwargs) |
150 | 151 | |
151 | 152 | try: |
152 | data = yield self.http_client.post_urlencoded_get_json( | |
153 | data = yield self.http_client.post_json_get_json( | |
153 | 154 | "https://%s%s" % ( |
154 | 155 | id_server, |
155 | 156 | "/_matrix/identity/api/v1/validate/email/requestToken" |
160 | 161 | except CodeMessageException as e: |
161 | 162 | logger.info("Proxied requestToken failed: %r", e) |
162 | 163 | raise e |
164 | ||
165 | @defer.inlineCallbacks | |
166 | def requestMsisdnToken( | |
167 | self, id_server, country, phone_number, | |
168 | client_secret, send_attempt, **kwargs | |
169 | ): | |
170 | yield run_on_reactor() | |
171 | ||
172 | if not self._should_trust_id_server(id_server): | |
173 | raise SynapseError( | |
174 | 400, "Untrusted ID server '%s'" % id_server, | |
175 | Codes.SERVER_NOT_TRUSTED | |
176 | ) | |
177 | ||
178 | params = { | |
179 | 'country': country, | |
180 | 'phone_number': phone_number, | |
181 | 'client_secret': client_secret, | |
182 | 'send_attempt': send_attempt, | |
183 | } | |
184 | params.update(kwargs) | |
185 | ||
186 | try: | |
187 | data = yield self.http_client.post_json_get_json( | |
188 | "https://%s%s" % ( | |
189 | id_server, | |
190 | "/_matrix/identity/api/v1/validate/msisdn/requestToken" | |
191 | ), | |
192 | params | |
193 | ) | |
194 | defer.returnValue(data) | |
195 | except CodeMessageException as e: | |
196 | logger.info("Proxied requestToken failed: %r", e) | |
197 | raise e |
18 | 18 | from synapse.api.errors import AuthError, Codes |
19 | 19 | from synapse.events.utils import serialize_event |
20 | 20 | from synapse.events.validator import EventValidator |
21 | from synapse.handlers.presence import format_user_presence_state | |
21 | 22 | from synapse.streams.config import PaginationConfig |
22 | 23 | from synapse.types import ( |
23 | 24 | UserID, StreamToken, |
224 | 225 | "content": content, |
225 | 226 | }) |
226 | 227 | |
228 | now = self.clock.time_msec() | |
229 | ||
227 | 230 | ret = { |
228 | 231 | "rooms": rooms_ret, |
229 | "presence": presence, | |
232 | "presence": [ | |
233 | { | |
234 | "type": "m.presence", | |
235 | "content": format_user_presence_state(event, now), | |
236 | } | |
237 | for event in presence | |
238 | ], | |
230 | 239 | "account_data": account_data_events, |
231 | 240 | "receipts": receipt, |
232 | 241 | "end": now_token.to_string(), |
28 | 28 | from synapse.api.constants import PresenceState |
29 | 29 | from synapse.storage.presence import UserPresenceState |
30 | 30 | |
31 | from synapse.util.caches.descriptors import cachedInlineCallbacks | |
31 | 32 | from synapse.util.logcontext import preserve_fn |
32 | 33 | from synapse.util.logutils import log_function |
33 | 34 | from synapse.util.metrics import Measure |
555 | 556 | room_ids_to_states = {} |
556 | 557 | users_to_states = {} |
557 | 558 | for state in states: |
558 | events = yield self.store.get_rooms_for_user(state.user_id) | |
559 | for e in events: | |
560 | room_ids_to_states.setdefault(e.room_id, []).append(state) | |
559 | room_ids = yield self.store.get_rooms_for_user(state.user_id) | |
560 | for room_id in room_ids: | |
561 | room_ids_to_states.setdefault(room_id, []).append(state) | |
561 | 562 | |
562 | 563 | plist = yield self.store.get_presence_list_observers_accepted(state.user_id) |
563 | 564 | for u in plist: |
573 | 574 | if not local_states: |
574 | 575 | continue |
575 | 576 | |
576 | users = yield self.store.get_users_in_room(room_id) | |
577 | hosts = set(get_domain_from_id(u) for u in users) | |
577 | hosts = yield self.store.get_hosts_in_room(room_id) | |
578 | 578 | |
579 | 579 | for host in hosts: |
580 | 580 | hosts_to_states.setdefault(host, []).extend(local_states) |
718 | 718 | for state in updates |
719 | 719 | ]) |
720 | 720 | else: |
721 | defer.returnValue([ | |
722 | format_user_presence_state(state, now) for state in updates | |
723 | ]) | |
721 | defer.returnValue(updates) | |
724 | 722 | |
725 | 723 | @defer.inlineCallbacks |
726 | 724 | def set_state(self, target_user, state, ignore_status_msg=False): |
793 | 791 | target_user_ids=[row["observed_user_id"] for row in presence_list], |
794 | 792 | as_event=False, |
795 | 793 | ) |
794 | ||
795 | now = self.clock.time_msec() | |
796 | results[:] = [format_user_presence_state(r, now) for r in results] | |
796 | 797 | |
797 | 798 | is_accepted = { |
798 | 799 | row["observed_user_id"]: row["accepted"] for row in presence_list |
846 | 847 | ) |
847 | 848 | |
848 | 849 | state_dict = yield self.get_state(observed_user, as_event=False) |
850 | state_dict = format_user_presence_state(state_dict, self.clock.time_msec()) | |
849 | 851 | |
850 | 852 | self.federation.send_edu( |
851 | 853 | destination=observer_user.domain, |
909 | 911 | def is_visible(self, observed_user, observer_user): |
910 | 912 | """Returns whether a user can see another user's presence. |
911 | 913 | """ |
912 | observer_rooms = yield self.store.get_rooms_for_user(observer_user.to_string()) | |
913 | observed_rooms = yield self.store.get_rooms_for_user(observed_user.to_string()) | |
914 | ||
915 | observer_room_ids = set(r.room_id for r in observer_rooms) | |
916 | observed_room_ids = set(r.room_id for r in observed_rooms) | |
914 | observer_room_ids = yield self.store.get_rooms_for_user( | |
915 | observer_user.to_string() | |
916 | ) | |
917 | observed_room_ids = yield self.store.get_rooms_for_user( | |
918 | observed_user.to_string() | |
919 | ) | |
917 | 920 | |
918 | 921 | if observer_room_ids & observed_room_ids: |
919 | 922 | defer.returnValue(True) |
978 | 981 | return False |
979 | 982 | |
980 | 983 | |
981 | def format_user_presence_state(state, now): | |
984 | def format_user_presence_state(state, now, include_user_id=True): | |
982 | 985 | """Convert UserPresenceState to a format that can be sent down to clients |
983 | 986 | and to other servers. |
987 | ||
988 | The "user_id" is optional so that this function can be used to format presence | |
989 | updates for client /sync responses and for federation /send requests. | |
984 | 990 | """ |
985 | 991 | content = { |
986 | 992 | "presence": state.state, |
987 | "user_id": state.user_id, | |
988 | 993 | } |
994 | if include_user_id: | |
995 | content["user_id"] = state.user_id | |
989 | 996 | if state.last_active_ts: |
990 | 997 | content["last_active_ago"] = now - state.last_active_ts |
991 | 998 | if state.status_msg and state.state != PresenceState.OFFLINE: |
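The new `include_user_id` flag lets one formatter serve both client `/sync` responses (where the user id is carried outside the content) and federation EDUs (where it must be inline). A self-contained sketch of the shape, using a simplified state tuple rather than Synapse's `UserPresenceState`:

```python
from collections import namedtuple

# Simplified stand-in for synapse.storage.presence.UserPresenceState.
PresenceState = namedtuple(
    "PresenceState", ["user_id", "state", "last_active_ts", "status_msg"]
)


def format_presence(state, now, include_user_id=True):
    """Convert a presence state to the wire format; user_id is optional
    so /sync responses can omit it."""
    content = {"presence": state.state}
    if include_user_id:
        content["user_id"] = state.user_id
    if state.last_active_ts:
        content["last_active_ago"] = now - state.last_active_ts
    if state.status_msg and state.state != "offline":
        content["status_msg"] = state.status_msg
    return content
```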
1024 | 1031 | # sending down the rare duplicate is not a concern. |
1025 | 1032 | |
1026 | 1033 | with Measure(self.clock, "presence.get_new_events"): |
1027 | user_id = user.to_string() | |
1028 | 1034 | if from_key is not None: |
1029 | 1035 | from_key = int(from_key) |
1030 | 1036 | |
1033 | 1039 | |
1034 | 1040 | max_token = self.store.get_current_presence_token() |
1035 | 1041 | |
1036 | plist = yield self.store.get_presence_list_accepted(user.localpart) | |
1037 | users_interested_in = set(row["observed_user_id"] for row in plist) | |
1038 | users_interested_in.add(user_id) # So that we receive our own presence | |
1039 | ||
1040 | users_who_share_room = yield self.store.get_users_who_share_room_with_user( | |
1041 | user_id | |
1042 | ) | |
1043 | users_interested_in.update(users_who_share_room) | |
1044 | ||
1045 | if explicit_room_id: | |
1046 | user_ids = yield self.store.get_users_in_room(explicit_room_id) | |
1047 | users_interested_in.update(user_ids) | |
1042 | users_interested_in = yield self._get_interested_in(user, explicit_room_id) | |
1048 | 1043 | |
1049 | 1044 | user_ids_changed = set() |
1050 | 1045 | changed = None |
1072 | 1067 | |
1073 | 1068 | updates = yield presence.current_state_for_users(user_ids_changed) |
1074 | 1069 | |
1075 | now = self.clock.time_msec() | |
1076 | ||
1077 | defer.returnValue(([ | |
1078 | { | |
1079 | "type": "m.presence", | |
1080 | "content": format_user_presence_state(s, now), | |
1081 | } | |
1082 | for s in updates.values() | |
1083 | if include_offline or s.state != PresenceState.OFFLINE | |
1084 | ], max_token)) | |
1070 | if include_offline: | |
1071 | defer.returnValue((updates.values(), max_token)) | |
1072 | else: | |
1073 | defer.returnValue(([ | |
1074 | s for s in updates.itervalues() | |
1075 | if s.state != PresenceState.OFFLINE | |
1076 | ], max_token)) | |
1085 | 1077 | |
1086 | 1078 | def get_current_key(self): |
1087 | 1079 | return self.store.get_current_presence_token() |
1088 | 1080 | |
1089 | 1081 | def get_pagination_rows(self, user, pagination_config, key): |
1090 | 1082 | return self.get_new_events(user, from_key=None, include_offline=False) |
1083 | ||
1084 | @cachedInlineCallbacks(num_args=2, cache_context=True) | |
1085 | def _get_interested_in(self, user, explicit_room_id, cache_context): | |
1086 | """Returns the set of users that the given user should see presence | |
1087 | updates for | |
1088 | """ | |
1089 | user_id = user.to_string() | |
1090 | plist = yield self.store.get_presence_list_accepted( | |
1091 | user.localpart, on_invalidate=cache_context.invalidate, | |
1092 | ) | |
1093 | users_interested_in = set(row["observed_user_id"] for row in plist) | |
1094 | users_interested_in.add(user_id) # So that we receive our own presence | |
1095 | ||
1096 | users_who_share_room = yield self.store.get_users_who_share_room_with_user( | |
1097 | user_id, on_invalidate=cache_context.invalidate, | |
1098 | ) | |
1099 | users_interested_in.update(users_who_share_room) | |
1100 | ||
1101 | if explicit_room_id: | |
1102 | user_ids = yield self.store.get_users_in_room( | |
1103 | explicit_room_id, on_invalidate=cache_context.invalidate, | |
1104 | ) | |
1105 | users_interested_in.update(user_ids) | |
1106 | ||
1107 | defer.returnValue(users_interested_in) | |
1091 | 1108 | |
1092 | 1109 | |
1093 | 1110 | def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now): |
1156 | 1173 | # If there has been no sync for a while (and none ongoing),
1157 | 1174 | # set presence to offline |
1158 | 1175 | if user_id not in syncing_user_ids: |
1159 | if now - state.last_user_sync_ts > SYNC_ONLINE_TIMEOUT: | |
1176 | # If the user has done something recently but hasn't synced, | |
1177 | # don't set them as offline. | |
1178 | sync_or_active = max(state.last_user_sync_ts, state.last_active_ts) | |
1179 | if now - sync_or_active > SYNC_ONLINE_TIMEOUT: | |
1160 | 1180 | state = state.copy_and_replace( |
1161 | 1181 | state=PresenceState.OFFLINE, |
1162 | 1182 | status_msg=None, |
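The timeout fix above stops marking a user offline when they have been recently active but simply have not synced. The new condition, isolated as a sketch (the timeout constant here is an illustrative value, not necessarily synapse's):

```python
SYNC_ONLINE_TIMEOUT = 30 * 1000  # ms; illustrative value for this sketch

def should_go_offline(now, last_user_sync_ts, last_active_ts, is_syncing):
    """Mark offline only if neither a sync nor any activity was recent."""
    if is_syncing:
        return False
    # The fix: take whichever of "last synced" and "last active" is newer,
    # so recent activity without a sync no longer forces the user offline.
    sync_or_active = max(last_user_sync_ts, last_active_ts)
    return now - sync_or_active > SYNC_ONLINE_TIMEOUT

# Active 10s ago but last synced 60s ago: stays online under the new rule.
stays_online = not should_go_offline(100_000, 40_000, 90_000, False)
```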
51 | 51 | args={ |
52 | 52 | "user_id": target_user.to_string(), |
53 | 53 | "field": "displayname", |
54 | } | |
54 | }, | |
55 | ignore_backoff=True, | |
55 | 56 | ) |
56 | 57 | except CodeMessageException as e: |
57 | 58 | if e.code != 404: |
98 | 99 | args={ |
99 | 100 | "user_id": target_user.to_string(), |
100 | 101 | "field": "avatar_url", |
101 | } | |
102 | }, | |
103 | ignore_backoff=True, | |
102 | 104 | ) |
103 | 105 | except CodeMessageException as e: |
104 | 106 | if e.code != 404: |
155 | 157 | |
156 | 158 | self.ratelimit(requester) |
157 | 159 | |
158 | joins = yield self.store.get_rooms_for_user( | |
160 | room_ids = yield self.store.get_rooms_for_user( | |
159 | 161 | user.to_string(), |
160 | 162 | ) |
161 | 163 | |
162 | for j in joins: | |
164 | for room_id in room_ids: | |
163 | 165 | handler = self.hs.get_handlers().room_member_handler |
164 | 166 | try: |
165 | 167 | # Assume the user isn't a guest because we don't let guests set |
170 | 172 | yield handler.update_membership( |
171 | 173 | requester, |
172 | 174 | user, |
173 | j.room_id, | |
175 | room_id, | |
174 | 176 | "join", # We treat a profile update like a join. |
175 | 177 | ratelimit=False, # Try to hide that these events aren't atomic. |
176 | 178 | ) |
177 | 179 | except Exception as e: |
178 | 180 | logger.warn( |
179 | 181 | "Failed to update join event for room %s - %s", |
180 | j.room_id, str(e.message) | |
182 | room_id, str(e.message) | |
181 | 183 | ) |
209 | 209 | else: |
210 | 210 | from_key = None |
211 | 211 | |
212 | rooms = yield self.store.get_rooms_for_user(user.to_string()) | |
213 | rooms = [room.room_id for room in rooms] | |
212 | room_ids = yield self.store.get_rooms_for_user(user.to_string()) | |
214 | 213 | events = yield self.store.get_linearized_receipts_for_rooms( |
215 | rooms, | |
214 | room_ids, | |
216 | 215 | from_key=from_key, |
217 | 216 | to_key=to_key, |
218 | 217 | ) |
20 | 20 | EventTypes, JoinRules, |
21 | 21 | ) |
22 | 22 | from synapse.util.async import concurrently_execute |
23 | from synapse.util.caches.descriptors import cachedInlineCallbacks | |
23 | 24 | from synapse.util.caches.response_cache import ResponseCache |
24 | 25 | from synapse.types import ThirdPartyInstanceID |
25 | 26 | |
61 | 62 | appservice and network id to use an appservice specific one. |
62 | 63 | Setting to None returns all public rooms across all lists. |
63 | 64 | """ |
65 | logger.info( | |
66 | "Getting public room list: limit=%r, since=%r, search=%r, network=%r", | |
67 | limit, since_token, bool(search_filter), network_tuple, | |
68 | ) | |
64 | 69 | if search_filter: |
65 | 70 | # We explicitly don't bother caching searches or requests for |
66 | 71 | # appservice specific lists. |
90 | 95 | |
91 | 96 | rooms_to_order_value = {} |
92 | 97 | rooms_to_num_joined = {} |
93 | rooms_to_latest_event_ids = {} | |
94 | 98 | |
95 | 99 | newly_visible = [] |
96 | 100 | newly_unpublished = [] |
115 | 119 | |
116 | 120 | @defer.inlineCallbacks |
117 | 121 | def get_order_for_room(room_id): |
118 | latest_event_ids = rooms_to_latest_event_ids.get(room_id, None) | |
119 | if not latest_event_ids: | |
122 | # Most of the rooms won't have changed between the since token and | |
123 | # now (especially if the since token is "now"). So, we can ask which | 
124 | # users are currently in a room (that will hit a cache) and then | 
125 | # check if the room has changed since the since token. (We have to | |
126 | # do it in that order to avoid races). | |
127 | # If things have changed then fall back to getting the current state | |
128 | # at the since token. | |
129 | joined_users = yield self.store.get_users_in_room(room_id) | |
130 | if self.store.has_room_changed_since(room_id, stream_token): | |
120 | 131 | latest_event_ids = yield self.store.get_forward_extremeties_for_room( |
121 | 132 | room_id, stream_token |
122 | 133 | ) |
123 | rooms_to_latest_event_ids[room_id] = latest_event_ids | |
124 | ||
125 | if not latest_event_ids: | |
126 | return | |
127 | ||
128 | joined_users = yield self.state_handler.get_current_user_in_room( | |
129 | room_id, latest_event_ids, | |
130 | ) | |
134 | ||
135 | if not latest_event_ids: | |
136 | return | |
137 | ||
138 | joined_users = yield self.state_handler.get_current_user_in_room( | |
139 | room_id, latest_event_ids, | |
140 | ) | |
141 | ||
131 | 142 | num_joined_users = len(joined_users) |
132 | 143 | rooms_to_num_joined[room_id] = num_joined_users |
133 | 144 | |
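The comment above describes a cache-then-validate pattern: read the cheap cached membership first, then check whether the room changed since the token, and only fall back to the expensive point-in-time lookup if it did; checking in that order avoids the race. A sketch of the shape, with injected callables standing in for the store and state-handler methods:

```python
def users_at_token(room_id, stream_token,
                   get_users_in_room,        # fast: cached current membership
                   has_room_changed_since,   # cheap: "changed since token?"
                   get_users_at_token):      # slow: membership at the token
    """Cache-then-validate: prefer the cached current value, fall back to
    the historical lookup only when the room changed after the token."""
    joined = get_users_in_room(room_id)
    if has_room_changed_since(room_id, stream_token):
        joined = get_users_at_token(room_id, stream_token)
    return joined
```

In the common case (room unchanged since the token, especially when the token is "now") only the cached read is paid for.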
164 | 175 | rooms_to_scan = rooms_to_scan[:since_token.current_limit] |
165 | 176 | rooms_to_scan.reverse() |
166 | 177 | |
167 | # Actually generate the entries. _generate_room_entry will append to | |
178 | # Actually generate the entries. _append_room_entry_to_chunk will append to | |
168 | 179 | # chunk but will stop if len(chunk) > limit |
169 | 180 | chunk = [] |
170 | 181 | if limit and not search_filter: |
171 | 182 | step = limit + 1 |
172 | 183 | for i in xrange(0, len(rooms_to_scan), step): |
173 | 184 | # We iterate here because the vast majority of cases we'll stop |
174 | # at first iteration, but occaisonally _generate_room_entry | |
185 | # at first iteration, but occasionally _append_room_entry_to_chunk | 
175 | 186 | # won't append to the chunk and so we need to loop again. |
176 | 187 | # We don't want to scan over the entire range either as that |
177 | 188 | # would potentially waste a lot of work. |
178 | 189 | yield concurrently_execute( |
179 | lambda r: self._generate_room_entry( | |
190 | lambda r: self._append_room_entry_to_chunk( | |
180 | 191 | r, rooms_to_num_joined[r], |
181 | 192 | chunk, limit, search_filter |
182 | 193 | ), |
186 | 197 | break |
187 | 198 | else: |
188 | 199 | yield concurrently_execute( |
189 | lambda r: self._generate_room_entry( | |
200 | lambda r: self._append_room_entry_to_chunk( | |
190 | 201 | r, rooms_to_num_joined[r], |
191 | 202 | chunk, limit, search_filter |
192 | 203 | ), |
255 | 266 | defer.returnValue(results) |
256 | 267 | |
257 | 268 | @defer.inlineCallbacks |
258 | def _generate_room_entry(self, room_id, num_joined_users, chunk, limit, | |
259 | search_filter): | |
269 | def _append_room_entry_to_chunk(self, room_id, num_joined_users, chunk, limit, | |
270 | search_filter): | |
271 | """Generate the entry for a room in the public room list and append it | |
272 | to the `chunk` if it matches the search filter | |
273 | """ | |
260 | 274 | if limit and len(chunk) > limit + 1: |
261 | 275 | # We've already got enough, so lets just drop it. |
262 | 276 | return |
263 | 277 | |
278 | result = yield self._generate_room_entry(room_id, num_joined_users) | |
279 | ||
280 | if result and _matches_room_entry(result, search_filter): | |
281 | chunk.append(result) | |
282 | ||
283 | @cachedInlineCallbacks(num_args=1, cache_context=True) | |
284 | def _generate_room_entry(self, room_id, num_joined_users, cache_context): | |
285 | """Returns the entry for a room | |
286 | """ | |
264 | 287 | result = { |
265 | 288 | "room_id": room_id, |
266 | 289 | "num_joined_members": num_joined_users, |
267 | 290 | } |
268 | 291 | |
269 | current_state_ids = yield self.state_handler.get_current_state_ids(room_id) | |
292 | current_state_ids = yield self.store.get_current_state_ids( | |
293 | room_id, on_invalidate=cache_context.invalidate, | |
294 | ) | |
270 | 295 | |
271 | 296 | event_map = yield self.store.get_events([ |
272 | event_id for key, event_id in current_state_ids.items() | |
297 | event_id for key, event_id in current_state_ids.iteritems() | |
273 | 298 | if key[0] in ( |
274 | 299 | EventTypes.JoinRules, |
275 | 300 | EventTypes.Name, |
293 | 318 | if join_rule and join_rule != JoinRules.PUBLIC: |
294 | 319 | defer.returnValue(None) |
295 | 320 | |
296 | aliases = yield self.store.get_aliases_for_room(room_id) | |
321 | aliases = yield self.store.get_aliases_for_room( | |
322 | room_id, on_invalidate=cache_context.invalidate | |
323 | ) | |
297 | 324 | if aliases: |
298 | 325 | result["aliases"] = aliases |
299 | 326 | |
333 | 360 | if avatar_url: |
334 | 361 | result["avatar_url"] = avatar_url |
335 | 362 | |
336 | if _matches_room_entry(result, search_filter): | |
337 | chunk.append(result) | |
363 | defer.returnValue(result) | |
338 | 364 | |
339 | 365 | @defer.inlineCallbacks |
340 | 366 | def get_remote_public_room_list(self, server_name, limit=None, since_token=None, |
19 | 19 | from synapse.util.caches.response_cache import ResponseCache |
20 | 20 | from synapse.push.clientformat import format_push_rules_for_user |
21 | 21 | from synapse.visibility import filter_events_for_client |
22 | from synapse.types import RoomStreamToken | |
22 | 23 | |
23 | 24 | from twisted.internet import defer |
24 | 25 | |
224 | 225 | with Measure(self.clock, "ephemeral_by_room"): |
225 | 226 | typing_key = since_token.typing_key if since_token else "0" |
226 | 227 | |
227 | rooms = yield self.store.get_rooms_for_user(sync_config.user.to_string()) | |
228 | room_ids = [room.room_id for room in rooms] | |
228 | room_ids = yield self.store.get_rooms_for_user(sync_config.user.to_string()) | |
229 | 229 | |
230 | 230 | typing_source = self.event_sources.sources["typing"] |
231 | 231 | typing, typing_key = yield typing_source.get_new_events( |
567 | 567 | since_token = sync_result_builder.since_token |
568 | 568 | |
569 | 569 | if since_token and since_token.device_list_key: |
570 | rooms = yield self.store.get_rooms_for_user(user_id) | |
571 | room_ids = set(r.room_id for r in rooms) | |
570 | room_ids = yield self.store.get_rooms_for_user(user_id) | |
572 | 571 | |
573 | 572 | user_ids_changed = set() |
574 | 573 | changed = yield self.store.get_user_whose_devices_changed( |
575 | 574 | since_token.device_list_key |
576 | 575 | ) |
577 | 576 | for other_user_id in changed: |
578 | other_rooms = yield self.store.get_rooms_for_user(other_user_id) | |
579 | if room_ids.intersection(e.room_id for e in other_rooms): | |
577 | other_room_ids = yield self.store.get_rooms_for_user(other_user_id) | |
578 | if room_ids.intersection(other_room_ids): | |
580 | 579 | user_ids_changed.add(other_user_id) |
581 | 580 | |
582 | 581 | defer.returnValue(user_ids_changed) |
720 | 719 | extra_users_ids.update(users) |
721 | 720 | extra_users_ids.discard(user.to_string()) |
722 | 721 | |
723 | states = yield self.presence_handler.get_states( | |
724 | extra_users_ids, | |
725 | as_event=True, | |
726 | ) | |
727 | presence.extend(states) | |
728 | ||
729 | # Deduplicate the presence entries so that there's at most one per user | |
730 | presence = {p["content"]["user_id"]: p for p in presence}.values() | |
722 | if extra_users_ids: | |
723 | states = yield self.presence_handler.get_states( | |
724 | extra_users_ids, | |
725 | ) | |
726 | presence.extend(states) | |
727 | ||
728 | # Deduplicate the presence entries so that there's at most one per user | |
729 | presence = {p.user_id: p for p in presence}.values() | |
731 | 730 | |
732 | 731 | presence = sync_config.filter_collection.filter_presence( |
733 | 732 | presence |
764 | 763 | ) |
765 | 764 | sync_result_builder.now_token = now_token |
766 | 765 | |
766 | # We check up front if anything has changed; if it hasn't, there is | 
767 | # no point in going further. | 
768 | since_token = sync_result_builder.since_token | |
769 | if not sync_result_builder.full_state: | |
770 | if since_token and not ephemeral_by_room and not account_data_by_room: | |
771 | have_changed = yield self._have_rooms_changed(sync_result_builder) | |
772 | if not have_changed: | |
773 | tags_by_room = yield self.store.get_updated_tags( | |
774 | user_id, | |
775 | since_token.account_data_key, | |
776 | ) | |
777 | if not tags_by_room: | |
778 | logger.debug("no-oping sync") | |
779 | defer.returnValue(([], [])) | |
780 | ||
767 | 781 | ignored_account_data = yield self.store.get_global_account_data_by_type_for_user( |
768 | 782 | "m.ignored_user_list", user_id=user_id, |
769 | 783 | ) |
773 | 787 | else: |
774 | 788 | ignored_users = frozenset() |
775 | 789 | |
776 | if sync_result_builder.since_token: | |
790 | if since_token: | |
777 | 791 | res = yield self._get_rooms_changed(sync_result_builder, ignored_users) |
778 | 792 | room_entries, invited, newly_joined_rooms = res |
779 | 793 | |
780 | 794 | tags_by_room = yield self.store.get_updated_tags( |
781 | user_id, | |
782 | sync_result_builder.since_token.account_data_key, | |
795 | user_id, since_token.account_data_key, | |
783 | 796 | ) |
784 | 797 | else: |
785 | 798 | res = yield self._get_all_rooms(sync_result_builder, ignored_users) |
804 | 817 | |
805 | 818 | # Now we want to get any newly joined users |
806 | 819 | newly_joined_users = set() |
807 | if sync_result_builder.since_token: | |
820 | if since_token: | |
808 | 821 | for joined_sync in sync_result_builder.joined: |
809 | 822 | it = itertools.chain( |
810 | 823 | joined_sync.timeline.events, joined_sync.state.values() |
817 | 830 | defer.returnValue((newly_joined_rooms, newly_joined_users)) |
818 | 831 | |
819 | 832 | @defer.inlineCallbacks |
820 | def _get_rooms_changed(self, sync_result_builder, ignored_users): | |
821 | """Gets the the changes that have happened since the last sync. | |
822 | ||
823 | Args: | |
824 | sync_result_builder(SyncResultBuilder) | |
825 | ignored_users(set(str)): Set of users ignored by user. | |
826 | ||
827 | Returns: | |
828 | Deferred(tuple): Returns a tuple of the form: | |
829 | `([RoomSyncResultBuilder], [InvitedSyncResult], newly_joined_rooms)` | |
833 | def _have_rooms_changed(self, sync_result_builder): | |
834 | """Returns whether there may be any new events that should be sent down | |
835 | the sync. Returns True if there are. | |
830 | 836 | """ |
831 | 837 | user_id = sync_result_builder.sync_config.user.to_string() |
832 | 838 | since_token = sync_result_builder.since_token |
833 | 839 | now_token = sync_result_builder.now_token |
834 | sync_config = sync_result_builder.sync_config | |
835 | 840 | |
836 | 841 | assert since_token |
842 | ||
843 | # Get a list of membership change events that have happened. | |
844 | rooms_changed = yield self.store.get_membership_changes_for_user( | |
845 | user_id, since_token.room_key, now_token.room_key | |
846 | ) | |
847 | ||
848 | if rooms_changed: | |
849 | defer.returnValue(True) | |
837 | 850 | |
838 | 851 | app_service = self.store.get_app_service_by_user_id(user_id) |
839 | 852 | if app_service: |
840 | 853 | rooms = yield self.store.get_app_service_rooms(app_service) |
841 | 854 | joined_room_ids = set(r.room_id for r in rooms) |
842 | 855 | else: |
843 | rooms = yield self.store.get_rooms_for_user(user_id) | |
856 | joined_room_ids = yield self.store.get_rooms_for_user(user_id) | |
857 | ||
858 | stream_id = RoomStreamToken.parse_stream_token(since_token.room_key).stream | |
859 | for room_id in joined_room_ids: | |
860 | if self.store.has_room_changed_since(room_id, stream_id): | |
861 | defer.returnValue(True) | |
862 | defer.returnValue(False) | |
863 | ||
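The new `_have_rooms_changed` gives `/sync` a cheap early-out: any membership change means there is work to do; otherwise each joined room is asked the (cached) `has_room_changed_since` question. A rough sketch of that shape, with a callable standing in for the store method:

```python
def have_rooms_changed(joined_room_ids, membership_changes,
                       has_room_changed_since, stream_id):
    """Return True if any event may need to go down /sync since the token."""
    # Any join/leave/invite since the token implies new events.
    if membership_changes:
        return True
    # The per-room check hits a cache, so this loop is cheap in the
    # common no-op-sync case.
    return any(has_room_changed_since(r, stream_id) for r in joined_room_ids)
```

When this returns False (and tags have not changed either), the diff short-circuits the whole room-generation path with a "no-oping sync".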
864 | @defer.inlineCallbacks | |
865 | def _get_rooms_changed(self, sync_result_builder, ignored_users): | |
866 | """Gets the changes that have happened since the last sync. | 
867 | ||
868 | Args: | |
869 | sync_result_builder(SyncResultBuilder) | |
870 | ignored_users(set(str)): Set of users ignored by user. | |
871 | ||
872 | Returns: | |
873 | Deferred(tuple): Returns a tuple of the form: | |
874 | `([RoomSyncResultBuilder], [InvitedSyncResult], newly_joined_rooms)` | |
875 | """ | |
876 | user_id = sync_result_builder.sync_config.user.to_string() | |
877 | since_token = sync_result_builder.since_token | |
878 | now_token = sync_result_builder.now_token | |
879 | sync_config = sync_result_builder.sync_config | |
880 | ||
881 | assert since_token | |
882 | ||
883 | app_service = self.store.get_app_service_by_user_id(user_id) | |
884 | if app_service: | |
885 | rooms = yield self.store.get_app_service_rooms(app_service) | |
844 | 886 | joined_room_ids = set(r.room_id for r in rooms) |
887 | else: | |
888 | joined_room_ids = yield self.store.get_rooms_for_user(user_id) | |
845 | 889 | |
846 | 890 | # Get a list of membership change events that have happened. |
847 | 891 | rooms_changed = yield self.store.get_membership_changes_for_user( |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | ||
15 | ||
14 | import synapse.util.retryutils | |
16 | 15 | from twisted.internet import defer, reactor, protocol |
17 | 16 | from twisted.internet.error import DNSLookupError |
18 | 17 | from twisted.web.client import readBody, HTTPConnectionPool, Agent |
21 | 20 | |
22 | 21 | from synapse.http.endpoint import matrix_federation_endpoint |
23 | 22 | from synapse.util.async import sleep |
24 | from synapse.util.logcontext import preserve_context_over_fn | |
23 | from synapse.util import logcontext | |
25 | 24 | import synapse.metrics |
26 | 25 | |
27 | 26 | from canonicaljson import encode_canonical_json |
93 | 92 | reactor, MatrixFederationEndpointFactory(hs), pool=pool |
94 | 93 | ) |
95 | 94 | self.clock = hs.get_clock() |
95 | self._store = hs.get_datastore() | |
96 | 96 | self.version_string = hs.version_string |
97 | 97 | self._next_id = 1 |
98 | 98 | |
102 | 102 | ) |
103 | 103 | |
104 | 104 | @defer.inlineCallbacks |
105 | def _create_request(self, destination, method, path_bytes, | |
106 | body_callback, headers_dict={}, param_bytes=b"", | |
107 | query_bytes=b"", retry_on_dns_fail=True, | |
108 | timeout=None, long_retries=False): | |
109 | """ Creates and sends a request to the given url | |
105 | def _request(self, destination, method, path, | |
106 | body_callback, headers_dict={}, param_bytes=b"", | |
107 | query_bytes=b"", retry_on_dns_fail=True, | |
108 | timeout=None, long_retries=False, | |
109 | ignore_backoff=False, | |
110 | backoff_on_404=False): | |
111 | """ Creates and sends a request to the given server | |
112 | Args: | |
113 | destination (str): The remote server to send the HTTP request to. | |
114 | method (str): HTTP method | |
115 | path (str): The HTTP path | |
116 | ignore_backoff (bool): true to ignore the historical backoff data | |
117 | and try the request anyway. | |
118 | backoff_on_404 (bool): Back off if we get a 404 | |
119 | ||
120 | Returns: | |
121 | Deferred: resolves with the http response object on success. | |
122 | ||
123 | Fails with ``HTTPRequestException``: if we get an HTTP response | |
124 | code >= 300. | |
125 | Fails with ``NotRetryingDestination`` if we are not yet ready | |
126 | to retry this server. | |
110 | 127 | """ |
111 | headers_dict[b"User-Agent"] = [self.version_string] | |
112 | headers_dict[b"Host"] = [destination] | |
113 | ||
114 | url_bytes = self._create_url( | |
115 | destination, path_bytes, param_bytes, query_bytes | |
116 | ) | |
117 | ||
118 | txn_id = "%s-O-%s" % (method, self._next_id) | |
119 | self._next_id = (self._next_id + 1) % (sys.maxint - 1) | |
120 | ||
121 | outbound_logger.info( | |
122 | "{%s} [%s] Sending request: %s %s", | |
123 | txn_id, destination, method, url_bytes | |
124 | ) | |
125 | ||
126 | # XXX: Would be much nicer to retry only at the transaction-layer | |
127 | # (once we have reliable transactions in place) | |
128 | if long_retries: | |
129 | retries_left = MAX_LONG_RETRIES | |
130 | else: | |
131 | retries_left = MAX_SHORT_RETRIES | |
132 | ||
133 | http_url_bytes = urlparse.urlunparse( | |
134 | ("", "", path_bytes, param_bytes, query_bytes, "") | |
135 | ) | |
136 | ||
137 | log_result = None | |
138 | try: | |
139 | while True: | |
140 | producer = None | |
141 | if body_callback: | |
142 | producer = body_callback(method, http_url_bytes, headers_dict) | |
143 | ||
144 | try: | |
145 | def send_request(): | |
146 | request_deferred = preserve_context_over_fn( | |
147 | self.agent.request, | |
128 | limiter = yield synapse.util.retryutils.get_retry_limiter( | |
129 | destination, | |
130 | self.clock, | |
131 | self._store, | |
132 | backoff_on_404=backoff_on_404, | |
133 | ignore_backoff=ignore_backoff, | |
134 | ) | |
135 | ||
136 | destination = destination.encode("ascii") | |
137 | path_bytes = path.encode("ascii") | |
138 | with limiter: | |
139 | headers_dict[b"User-Agent"] = [self.version_string] | |
140 | headers_dict[b"Host"] = [destination] | |
141 | ||
142 | url_bytes = self._create_url( | |
143 | destination, path_bytes, param_bytes, query_bytes | |
144 | ) | |
145 | ||
146 | txn_id = "%s-O-%s" % (method, self._next_id) | |
147 | self._next_id = (self._next_id + 1) % (sys.maxint - 1) | |
148 | ||
149 | outbound_logger.info( | |
150 | "{%s} [%s] Sending request: %s %s", | |
151 | txn_id, destination, method, url_bytes | |
152 | ) | |
153 | ||
154 | # XXX: Would be much nicer to retry only at the transaction-layer | |
155 | # (once we have reliable transactions in place) | |
156 | if long_retries: | |
157 | retries_left = MAX_LONG_RETRIES | |
158 | else: | |
159 | retries_left = MAX_SHORT_RETRIES | |
160 | ||
161 | http_url_bytes = urlparse.urlunparse( | |
162 | ("", "", path_bytes, param_bytes, query_bytes, "") | |
163 | ) | |
164 | ||
165 | log_result = None | |
166 | try: | |
167 | while True: | |
168 | producer = None | |
169 | if body_callback: | |
170 | producer = body_callback(method, http_url_bytes, headers_dict) | |
171 | ||
172 | try: | |
173 | def send_request(): | |
174 | request_deferred = self.agent.request( | |
175 | method, | |
176 | url_bytes, | |
177 | Headers(headers_dict), | |
178 | producer | |
179 | ) | |
180 | ||
181 | return self.clock.time_bound_deferred( | |
182 | request_deferred, | |
183 | time_out=timeout / 1000. if timeout else 60, | |
184 | ) | |
185 | ||
186 | with logcontext.PreserveLoggingContext(): | |
187 | response = yield send_request() | |
188 | ||
189 | log_result = "%d %s" % (response.code, response.phrase,) | |
190 | break | |
191 | except Exception as e: | |
192 | if not retry_on_dns_fail and isinstance(e, DNSLookupError): | |
193 | logger.warn( | |
194 | "DNS Lookup failed to %s with %s", | |
195 | destination, | |
196 | e | |
197 | ) | |
198 | log_result = "DNS Lookup failed to %s with %s" % ( | |
199 | destination, e | |
200 | ) | |
201 | raise | |
202 | ||
203 | logger.warn( | |
204 | "{%s} Sending request failed to %s: %s %s: %s - %s", | |
205 | txn_id, | |
206 | destination, | |
148 | 207 | method, |
149 | 208 | url_bytes, |
150 | Headers(headers_dict), | |
151 | producer | |
209 | type(e).__name__, | |
210 | _flatten_response_never_received(e), | |
152 | 211 | ) |
153 | 212 | |
154 | return self.clock.time_bound_deferred( | |
155 | request_deferred, | |
156 | time_out=timeout / 1000. if timeout else 60, | |
213 | log_result = "%s - %s" % ( | |
214 | type(e).__name__, _flatten_response_never_received(e), | |
157 | 215 | ) |
158 | 216 | |
159 | response = yield preserve_context_over_fn(send_request) | |
160 | ||
161 | log_result = "%d %s" % (response.code, response.phrase,) | |
162 | break | |
163 | except Exception as e: | |
164 | if not retry_on_dns_fail and isinstance(e, DNSLookupError): | |
165 | logger.warn( | |
166 | "DNS Lookup failed to %s with %s", | |
167 | destination, | |
168 | e | |
169 | ) | |
170 | log_result = "DNS Lookup failed to %s with %s" % ( | |
171 | destination, e | |
172 | ) | |
173 | raise | |
174 | ||
175 | logger.warn( | |
176 | "{%s} Sending request failed to %s: %s %s: %s - %s", | |
177 | txn_id, | |
178 | destination, | |
179 | method, | |
180 | url_bytes, | |
181 | type(e).__name__, | |
182 | _flatten_response_never_received(e), | |
183 | ) | |
184 | ||
185 | log_result = "%s - %s" % ( | |
186 | type(e).__name__, _flatten_response_never_received(e), | |
187 | ) | |
188 | ||
189 | if retries_left and not timeout: | |
190 | if long_retries: | |
191 | delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left) | |
192 | delay = min(delay, 60) | |
193 | delay *= random.uniform(0.8, 1.4) | |
217 | if retries_left and not timeout: | |
218 | if long_retries: | |
219 | delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left) | |
220 | delay = min(delay, 60) | |
221 | delay *= random.uniform(0.8, 1.4) | |
222 | else: | |
223 | delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left) | |
224 | delay = min(delay, 2) | |
225 | delay *= random.uniform(0.8, 1.4) | |
226 | ||
227 | yield sleep(delay) | |
228 | retries_left -= 1 | |
194 | 229 | else: |
195 | delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left) | |
196 | delay = min(delay, 2) | |
197 | delay *= random.uniform(0.8, 1.4) | |
198 | ||
199 | yield sleep(delay) | |
200 | retries_left -= 1 | |
201 | else: | |
202 | raise | |
203 | finally: | |
204 | outbound_logger.info( | |
205 | "{%s} [%s] Result: %s", | |
206 | txn_id, | |
207 | destination, | |
208 | log_result, | |
209 | ) | |
210 | ||
211 | if 200 <= response.code < 300: | |
212 | pass | |
213 | else: | |
214 | # :'( | |
215 | # Update transactions table? | |
216 | body = yield preserve_context_over_fn(readBody, response) | |
217 | raise HttpResponseException( | |
218 | response.code, response.phrase, body | |
219 | ) | |
220 | ||
221 | defer.returnValue(response) | |
230 | raise | |
231 | finally: | |
232 | outbound_logger.info( | |
233 | "{%s} [%s] Result: %s", | |
234 | txn_id, | |
235 | destination, | |
236 | log_result, | |
237 | ) | |
238 | ||
239 | if 200 <= response.code < 300: | |
240 | pass | |
241 | else: | |
242 | # :'( | |
243 | # Update transactions table? | |
244 | with logcontext.PreserveLoggingContext(): | |
245 | body = yield readBody(response) | |
246 | raise HttpResponseException( | |
247 | response.code, response.phrase, body | |
248 | ) | |
249 | ||
250 | defer.returnValue(response) | |
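The retry loop above computes its delays as exponential backoff with jitter: long retries grow as powers of 4 capped at 60s, short retries as powers of 2 capped at 2s, both scaled by a random factor in [0.8, 1.4]. Isolated as a sketch (the retry-count constants are assumptions for illustration):

```python
import random

MAX_LONG_RETRIES = 10   # assumed values for this sketch
MAX_SHORT_RETRIES = 3

def retry_delay(retries_left, long_retries):
    """Exponential backoff with jitter, mirroring the request retry loop."""
    if long_retries:
        delay = min(4 ** (MAX_LONG_RETRIES + 1 - retries_left), 60)
    else:
        delay = min(0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left), 2)
    # Jitter spreads out retries from many servers hitting the same host.
    return delay * random.uniform(0.8, 1.4)
```

The first short retry therefore waits roughly 0.4-0.7s, while late long retries settle at roughly 48-84s once the 60s cap is reached.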
222 | 251 | |
223 | 252 | def sign_request(self, destination, method, url_bytes, headers_dict, |
224 | 253 | content=None): |
247 | 276 | |
248 | 277 | @defer.inlineCallbacks |
249 | 278 | def put_json(self, destination, path, data={}, json_data_callback=None, |
250 | long_retries=False, timeout=None): | |
279 | long_retries=False, timeout=None, | |
280 | ignore_backoff=False, | |
281 | backoff_on_404=False): | |
251 | 282 | """ Sends the specified json data using PUT
252 | 283 | |
253 | 284 | Args: |
262 | 293 | retry for a short or long time. |
263 | 294 | timeout(int): How long to try (in ms) the destination for before |
264 | 295 | giving up. None indicates no timeout. |
296 | ignore_backoff (bool): true to ignore the historical backoff data | |
297 | and try the request anyway. | |
298 | backoff_on_404 (bool): True if we should count a 404 response as | |
299 | a failure of the server (and should therefore back off future | |
300 | requests) | |
265 | 301 | |
266 | 302 | Returns: |
267 | 303 | Deferred: Succeeds when we get a 2xx HTTP response. The result |
268 | 304 | will be the decoded JSON body. On a 4xx or 5xx error response a |
269 | 305 | CodeMessageException is raised. |
306 | ||
307 | Fails with ``NotRetryingDestination`` if we are not yet ready | |
308 | to retry this server. | |
270 | 309 | """ |
271 | 310 | |
272 | 311 | if not json_data_callback: |
281 | 320 | producer = _JsonProducer(json_data) |
282 | 321 | return producer |
283 | 322 | |
284 | response = yield self._create_request( | |
285 | destination.encode("ascii"), | |
323 | response = yield self._request( | |
324 | destination, | |
286 | 325 | "PUT", |
287 | path.encode("ascii"), | |
326 | path, | |
288 | 327 | body_callback=body_callback, |
289 | 328 | headers_dict={"Content-Type": ["application/json"]}, |
290 | 329 | long_retries=long_retries, |
291 | 330 | timeout=timeout, |
331 | ignore_backoff=ignore_backoff, | |
332 | backoff_on_404=backoff_on_404, | |
292 | 333 | ) |
293 | 334 | |
294 | 335 | if 200 <= response.code < 300: |
295 | 336 | # We need to update the transactions table to say it was sent? |
296 | 337 | check_content_type_is_json(response.headers) |
297 | 338 | |
298 | body = yield preserve_context_over_fn(readBody, response) | |
339 | with logcontext.PreserveLoggingContext(): | |
340 | body = yield readBody(response) | |
299 | 341 | defer.returnValue(json.loads(body)) |
300 | 342 | |
301 | 343 | @defer.inlineCallbacks |
302 | 344 | def post_json(self, destination, path, data={}, long_retries=False, |
303 | timeout=None): | |
345 | timeout=None, ignore_backoff=False): | |
304 | 346 | """ Sends the specified json data using POST
305 | 347 | |
306 | 348 | Args: |
313 | 355 | retry for a short or long time. |
314 | 356 | timeout(int): How long to try (in ms) the destination for before |
315 | 357 | giving up. None indicates no timeout. |
316 | ||
358 | ignore_backoff (bool): true to ignore the historical backoff data and | |
359 | try the request anyway. | |
317 | 360 | Returns: |
318 | 361 | Deferred: Succeeds when we get a 2xx HTTP response. The result |
319 | 362 | will be the decoded JSON body. On a 4xx or 5xx error response a |
320 | 363 | CodeMessageException is raised. |
364 | ||
365 | Fails with ``NotRetryingDestination`` if we are not yet ready | |
366 | to retry this server. | |
321 | 367 | """ |
322 | 368 | |
323 | 369 | def body_callback(method, url_bytes, headers_dict): |
326 | 372 | ) |
327 | 373 | return _JsonProducer(data) |
328 | 374 | |
329 | response = yield self._create_request( | |
330 | destination.encode("ascii"), | |
375 | response = yield self._request( | |
376 | destination, | |
331 | 377 | "POST", |
332 | path.encode("ascii"), | |
378 | path, | |
333 | 379 | body_callback=body_callback, |
334 | 380 | headers_dict={"Content-Type": ["application/json"]}, |
335 | 381 | long_retries=long_retries, |
336 | 382 | timeout=timeout, |
383 | ignore_backoff=ignore_backoff, | |
337 | 384 | ) |
338 | 385 | |
339 | 386 | if 200 <= response.code < 300: |
340 | 387 | # We need to update the transactions table to say it was sent? |
341 | 388 | check_content_type_is_json(response.headers) |
342 | 389 | |
343 | body = yield preserve_context_over_fn(readBody, response) | |
390 | with logcontext.PreserveLoggingContext(): | |
391 | body = yield readBody(response) | |
344 | 392 | |
345 | 393 | defer.returnValue(json.loads(body)) |
346 | 394 | |
347 | 395 | @defer.inlineCallbacks |
348 | 396 | def get_json(self, destination, path, args={}, retry_on_dns_fail=True, |
349 | timeout=None): | |
397 | timeout=None, ignore_backoff=False): | |
350 | 398 | """ GETs some json from the given homeserver and path
351 | 399 | |
352 | 400 | Args: |
358 | 406 | timeout (int): How long to try (in ms) the destination for before |
359 | 407 | giving up. None indicates no timeout and that the request will |
360 | 408 | be retried. |
409 | ignore_backoff (bool): true to ignore the historical backoff data | |
410 | and try the request anyway. | |
361 | 411 | Returns: |
362 | 412 | Deferred: Succeeds when we get *any* HTTP response. |
363 | 413 | |
364 | 414 | The result of the deferred is a tuple of `(code, response)`, |
365 | 415 | where `response` is a dict representing the decoded JSON body. |
416 | ||
417 | Fails with ``NotRetryingDestination`` if we are not yet ready | |
418 | to retry this server. | |
366 | 419 | """ |
367 | 420 | logger.debug("get_json args: %s", args) |
368 | 421 | |
379 | 432 | self.sign_request(destination, method, url_bytes, headers_dict) |
380 | 433 | return None |
381 | 434 | |
382 | response = yield self._create_request( | |
383 | destination.encode("ascii"), | |
435 | response = yield self._request( | |
436 | destination, | |
384 | 437 | "GET", |
385 | path.encode("ascii"), | |
438 | path, | |
386 | 439 | query_bytes=query_bytes, |
387 | 440 | body_callback=body_callback, |
388 | 441 | retry_on_dns_fail=retry_on_dns_fail, |
389 | 442 | timeout=timeout, |
443 | ignore_backoff=ignore_backoff, | |
390 | 444 | ) |
391 | 445 | |
392 | 446 | if 200 <= response.code < 300: |
393 | 447 | # We need to update the transactions table to say it was sent? |
394 | 448 | check_content_type_is_json(response.headers) |
395 | 449 | |
396 | body = yield preserve_context_over_fn(readBody, response) | |
450 | with logcontext.PreserveLoggingContext(): | |
451 | body = yield readBody(response) | |
397 | 452 | |
398 | 453 | defer.returnValue(json.loads(body)) |
399 | 454 | |
400 | 455 | @defer.inlineCallbacks |
401 | 456 | def get_file(self, destination, path, output_stream, args={}, |
402 | retry_on_dns_fail=True, max_size=None): | |
457 | retry_on_dns_fail=True, max_size=None, | |
458 | ignore_backoff=False): | |
403 | 459 | """GETs a file from a given homeserver |
404 | 460 | Args: |
405 | 461 | destination (str): The remote server to send the HTTP request to. |
406 | 462 | path (str): The HTTP path to GET. |
407 | 463 | output_stream (file): File to write the response body to. |
408 | 464 | args (dict): Optional dictionary used to create the query string. |
465 | ignore_backoff (bool): true to ignore the historical backoff data | |
466 | and try the request anyway. | |
409 | 467 | Returns: |
410 | A (int,dict) tuple of the file length and a dict of the response | |
411 | headers. | |
468 | Deferred: resolves with an (int,dict) tuple of the file length and | |
469 | a dict of the response headers. | |
470 | ||
471 | Fails with ``HTTPRequestException`` if we get an HTTP response code | |
472 | >= 300 | |
473 | ||
474 | Fails with ``NotRetryingDestination`` if we are not yet ready | |
475 | to retry this server. | |
412 | 476 | """ |
413 | 477 | |
414 | 478 | encoded_args = {} |
418 | 482 | encoded_args[k] = [v.encode("UTF-8") for v in vs] |
419 | 483 | |
420 | 484 | query_bytes = urllib.urlencode(encoded_args, True) |
421 | logger.debug("Query bytes: %s Retry DNS: %s", args, retry_on_dns_fail) | |
485 | logger.debug("Query bytes: %s Retry DNS: %s", query_bytes, retry_on_dns_fail) | |
422 | 486 | |
423 | 487 | def body_callback(method, url_bytes, headers_dict): |
424 | 488 | self.sign_request(destination, method, url_bytes, headers_dict) |
425 | 489 | return None |
426 | 490 | |
427 | response = yield self._create_request( | |
428 | destination.encode("ascii"), | |
491 | response = yield self._request( | |
492 | destination, | |
429 | 493 | "GET", |
430 | path.encode("ascii"), | |
494 | path, | |
431 | 495 | query_bytes=query_bytes, |
432 | 496 | body_callback=body_callback, |
433 | retry_on_dns_fail=retry_on_dns_fail | |
497 | retry_on_dns_fail=retry_on_dns_fail, | |
498 | ignore_backoff=ignore_backoff, | |
434 | 499 | ) |
435 | 500 | |
436 | 501 | headers = dict(response.headers.getAllRawHeaders()) |
437 | 502 | |
438 | 503 | try: |
439 | length = yield preserve_context_over_fn( | |
440 | _readBodyToFile, | |
441 | response, output_stream, max_size | |
442 | ) | |
504 | with logcontext.PreserveLoggingContext(): | |
505 | length = yield _readBodyToFile( | |
506 | response, output_stream, max_size | |
507 | ) | |
443 | 508 | except: |
444 | 509 | logger.exception("Failed to download body") |
445 | 510 | raise |
191 | 191 | return content |
192 | 192 | |
193 | 193 | |
194 | def assert_params_in_request(body, required): | |
195 | absent = [] | |
196 | for k in required: | |
197 | if k not in body: | |
198 | absent.append(k) | |
199 | ||
200 | if len(absent) > 0: | |
201 | raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM) | |
202 | ||
203 | ||
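The new `assert_params_in_request` helper factors out a pattern repeated across servlets. A self-contained sketch, with a stand-in exception in place of `SynapseError(400, ..., Codes.MISSING_PARAM)`:

```python
class MissingParamError(Exception):
    """Stand-in for SynapseError with Codes.MISSING_PARAM."""

def assert_params_in_request(body, required):
    # collect every required key absent from the parsed JSON body
    absent = [k for k in required if k not in body]
    if absent:
        raise MissingParamError("Missing params: %r" % (absent,))
```

Servlets call it once at the top of `on_POST` instead of hand-rolling the `absent` loop, so the 400 error message stays consistent.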
194 | 204 | class RestServlet(object): |
195 | 205 | |
196 | 206 | """ A Synapse REST Servlet. |
15 | 15 | from twisted.internet import defer |
16 | 16 | from synapse.api.constants import EventTypes, Membership |
17 | 17 | from synapse.api.errors import AuthError |
18 | from synapse.handlers.presence import format_user_presence_state | |
18 | 19 | |
19 | 20 | from synapse.util import DeferredTimedOutError |
20 | 21 | from synapse.util.logutils import log_function |
36 | 37 | |
37 | 38 | notified_events_counter = metrics.register_counter("notified_events") |
38 | 39 | |
40 | users_woken_by_stream_counter = metrics.register_counter( | |
41 | "users_woken_by_stream", labels=["stream"] | |
42 | ) | |
43 | ||
39 | 44 | |
40 | 45 | # TODO(paul): Should be shared somewhere |
41 | 46 | def count(func, l): |
72 | 77 | self.user_id = user_id |
73 | 78 | self.rooms = set(rooms) |
74 | 79 | self.current_token = current_token |
80 | ||
81 | # The last token for which we should wake up any streams that have a | |
82 | # token that comes before it. This gets updated every time we get poked. |
83 | # We start it at the current token since if we get any streams | |
84 | # that have a token from before we have no idea whether they should be | |
85 | # woken up or not, so let's just wake them up. |
86 | self.last_notified_token = current_token | |
75 | 87 | self.last_notified_ms = time_now_ms |
76 | 88 | |
77 | 89 | with PreserveLoggingContext(): |
88 | 100 | self.current_token = self.current_token.copy_and_advance( |
89 | 101 | stream_key, stream_id |
90 | 102 | ) |
103 | self.last_notified_token = self.current_token | |
91 | 104 | self.last_notified_ms = time_now_ms |
92 | 105 | noify_deferred = self.notify_deferred |
106 | ||
107 | users_woken_by_stream_counter.inc(stream_key) | |
93 | 108 | |
94 | 109 | with PreserveLoggingContext(): |
95 | 110 | self.notify_deferred = ObservableDeferred(defer.Deferred()) |
112 | 127 | def new_listener(self, token): |
113 | 128 | """Returns a deferred that is resolved when there is a new token |
114 | 129 | greater than the given token. |
115 | """ | |
116 | if self.current_token.is_after(token): | |
130 | ||
131 | Args: | |
132 | token: The token from which we are streaming from, i.e. we shouldn't | |
133 | notify for things that happened before this. | |
134 | """ | |
135 | # Immediately wake up the stream if something has already happened |
136 | # since their last token. |
137 | if self.last_notified_token.is_after(token): | |
117 | 138 | return _NotificationListener(defer.succeed(self.current_token)) |
118 | 139 | else: |
119 | 140 | return _NotificationListener(self.notify_deferred.observe()) |
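The `last_notified_token` change above can be modelled with plain integer tokens. This toy class (not synapse's `_NotifierUserStream`, which uses `StreamToken.is_after`) shows why a stream starts with `last_notified_token` at the current token: a listener from any earlier token is woken immediately rather than risk being missed.

```python
class UserStream:
    """Toy model of the notifier wake-up decision with integer tokens."""

    def __init__(self, current_token):
        self.current_token = current_token
        # start at the current token: for listeners from before this we
        # cannot tell whether they were already notified, so wake them
        self.last_notified_token = current_token

    def notify(self, new_token):
        self.current_token = new_token
        self.last_notified_token = new_token

    def new_listener(self, token):
        # immediate wake-up if something already happened since `token`
        if self.last_notified_token > token:
            return ("ready", self.current_token)
        return ("waiting", None)
```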
282 | 303 | if user_stream is None: |
283 | 304 | current_token = yield self.event_sources.get_current_token() |
284 | 305 | if room_ids is None: |
285 | rooms = yield self.store.get_rooms_for_user(user_id) | |
286 | room_ids = [room.room_id for room in rooms] | |
306 | room_ids = yield self.store.get_rooms_for_user(user_id) | |
287 | 307 | user_stream = _NotifierUserStream( |
288 | 308 | user_id=user_id, |
289 | 309 | rooms=room_ids, |
293 | 313 | self._register_with_keys(user_stream) |
294 | 314 | |
295 | 315 | result = None |
316 | prev_token = from_token | |
296 | 317 | if timeout: |
297 | 318 | end_time = self.clock.time_msec() + timeout |
298 | 319 | |
299 | prev_token = from_token | |
300 | 320 | while not result: |
301 | 321 | try: |
302 | current_token = user_stream.current_token | |
303 | ||
304 | result = yield callback(prev_token, current_token) | |
305 | if result: | |
306 | break | |
307 | ||
308 | 322 | now = self.clock.time_msec() |
309 | 323 | if end_time <= now: |
310 | 324 | break |
311 | 325 | |
312 | 326 | # Now we wait for the _NotifierUserStream to be told there |
313 | 327 | # is a new token. |
314 | # We need to supply the token we supplied to callback so | |
315 | # that we don't miss any current_token updates. | |
316 | prev_token = current_token | |
317 | 328 | listener = user_stream.new_listener(prev_token) |
318 | 329 | with PreserveLoggingContext(): |
319 | 330 | yield self.clock.time_bound_deferred( |
320 | 331 | listener.deferred, |
321 | 332 | time_out=(end_time - now) / 1000. |
322 | 333 | ) |
334 | ||
335 | current_token = user_stream.current_token | |
336 | ||
337 | result = yield callback(prev_token, current_token) | |
338 | if result: | |
339 | break | |
340 | ||
341 | # Update the prev_token to the current_token since nothing | |
342 | # has happened between the old prev_token and the current_token | |
343 | prev_token = current_token | |
323 | 344 | except DeferredTimedOutError: |
324 | 345 | break |
325 | 346 | except defer.CancelledError: |
326 | 347 | break |
327 | else: | |
348 | ||
349 | if result is None: | |
350 | # This happens if there was no timeout or if the timeout had |
351 | # already expired. | |
328 | 352 | current_token = user_stream.current_token |
329 | result = yield callback(from_token, current_token) | |
353 | result = yield callback(prev_token, current_token) | |
330 | 354 | |
331 | 355 | defer.returnValue(result) |
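The reordered loop above waits first, then runs the callback, and only advances `prev_token` once the callback has seen that window. A synchronous sketch with integer tokens; `FakeStream.wait()` is a toy stand-in for blocking on the listener deferred:

```python
def wait_for_events(callback, stream, max_waits):
    """Toy analogue of Notifier.wait_for_events.

    callback(prev, cur) returns a truthy result when something
    happened between the two tokens, else None.
    """
    prev_token = stream.current_token
    result = None
    for _ in range(max_waits):
        stream.wait()                   # stand-in for the listener deferred
        current_token = stream.current_token
        result = callback(prev_token, current_token)
        if result:
            break
        # nothing happened between prev_token and current_token,
        # so it is safe to advance prev_token
        prev_token = current_token
    if result is None:
        # no timeout (or it already expired): poll once immediately
        result = callback(prev_token, stream.current_token)
    return result

class FakeStream:
    def __init__(self):
        self.current_token = 0

    def wait(self):
        self.current_token += 1
```

Advancing `prev_token` only after an empty callback result is what stops updates between "wait" and "poll" from being skipped.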
332 | 356 | |
387 | 411 | new_events, |
388 | 412 | is_peeking=is_peeking, |
389 | 413 | ) |
414 | elif name == "presence": | |
415 | now = self.clock.time_msec() | |
416 | new_events[:] = [ | |
417 | { | |
418 | "type": "m.presence", | |
419 | "content": format_user_presence_state(event, now), | |
420 | } | |
421 | for event in new_events | |
422 | ] | |
390 | 423 | |
391 | 424 | events.extend(new_events) |
392 | 425 | end_token = end_token.copy_and_replace(keyname, new_key) |
419 | 452 | |
420 | 453 | @defer.inlineCallbacks |
421 | 454 | def _get_room_ids(self, user, explicit_room_id): |
422 | joined_rooms = yield self.store.get_rooms_for_user(user.to_string()) | |
423 | joined_room_ids = map(lambda r: r.room_id, joined_rooms) | |
455 | joined_room_ids = yield self.store.get_rooms_for_user(user.to_string()) | |
424 | 456 | if explicit_room_id: |
425 | 457 | if explicit_room_id in joined_room_ids: |
426 | 458 | defer.returnValue(([explicit_room_id], True)) |
138 | 138 | |
139 | 139 | @defer.inlineCallbacks |
140 | 140 | def _fetch_room_state(room_id): |
141 | room_state = yield self.state_handler.get_current_state_ids(room_id) | |
141 | room_state = yield self.store.get_current_state_ids(room_id) | |
142 | 142 | state_by_room[room_id] = room_state |
143 | 143 | |
144 | 144 | # Run at most 3 of these at once: sync does 10 at a time but email |
16 | 16 | import re |
17 | 17 | |
18 | 18 | from synapse.types import UserID |
19 | from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache | |
19 | 20 | from synapse.util.caches.lrucache import LruCache |
20 | 21 | |
21 | 22 | logger = logging.getLogger(__name__) |
124 | 125 | return self._value_cache.get(dotted_key, None) |
125 | 126 | |
126 | 127 | |
128 | # Caches (glob, word_boundary) -> regex for push. See _glob_matches | |
129 | regex_cache = LruCache(50000 * CACHE_SIZE_FACTOR) | |
130 | register_cache("regex_push_cache", regex_cache) | |
131 | ||
132 | ||
127 | 133 | def _glob_matches(glob, value, word_boundary=False): |
128 | 134 | """Tests if value matches glob. |
129 | 135 | |
136 | 142 | Returns: |
137 | 143 | bool |
138 | 144 | """ |
145 | ||
139 | 146 | try: |
140 | if IS_GLOB.search(glob): | |
141 | r = re.escape(glob) | |
142 | ||
143 | r = r.replace(r'\*', '.*?') | |
144 | r = r.replace(r'\?', '.') | |
145 | ||
146 | # handle [abc], [a-z] and [!a-z] style ranges. | |
147 | r = GLOB_REGEX.sub( | |
148 | lambda x: ( | |
149 | '[%s%s]' % ( | |
150 | x.group(1) and '^' or '', | |
151 | x.group(2).replace(r'\\\-', '-') | |
152 | ) | |
153 | ), | |
154 | r, | |
155 | ) | |
156 | if word_boundary: | |
157 | r = r"\b%s\b" % (r,) | |
158 | r = _compile_regex(r) | |
159 | ||
160 | return r.search(value) | |
161 | else: | |
162 | r = r + "$" | |
163 | r = _compile_regex(r) | |
164 | ||
165 | return r.match(value) | |
166 | elif word_boundary: | |
167 | r = re.escape(glob) | |
168 | r = r"\b%s\b" % (r,) | |
169 | r = _compile_regex(r) | |
170 | ||
171 | return r.search(value) | |
172 | else: | |
173 | return value.lower() == glob.lower() | |
147 | r = regex_cache.get((glob, word_boundary), None) | |
148 | if not r: | |
149 | r = _glob_to_re(glob, word_boundary) | |
150 | regex_cache[(glob, word_boundary)] = r | |
151 | return r.search(value) | |
174 | 152 | except re.error: |
175 | 153 | logger.warn("Failed to parse glob to regex: %r", glob) |
176 | 154 | return False |
155 | ||
156 | ||
157 | def _glob_to_re(glob, word_boundary): | |
158 | """Generates regex for a given glob. | |
159 | ||
160 | Args: | |
161 | glob (string) | |
162 | word_boundary (bool): Whether to match against word boundaries or entire | |
163 | string. Defaults to False. | |
164 | ||
165 | Returns: | |
166 | regex object | |
167 | """ | |
168 | if IS_GLOB.search(glob): | |
169 | r = re.escape(glob) | |
170 | ||
171 | r = r.replace(r'\*', '.*?') | |
172 | r = r.replace(r'\?', '.') | |
173 | ||
174 | # handle [abc], [a-z] and [!a-z] style ranges. | |
175 | r = GLOB_REGEX.sub( | |
176 | lambda x: ( | |
177 | '[%s%s]' % ( | |
178 | x.group(1) and '^' or '', | |
179 | x.group(2).replace(r'\\\-', '-') | |
180 | ) | |
181 | ), | |
182 | r, | |
183 | ) | |
184 | if word_boundary: | |
185 | r = r"\b%s\b" % (r,) | |
186 | ||
187 | return re.compile(r, flags=re.IGNORECASE) | |
188 | else: | |
189 | r = "^" + r + "$" | |
190 | ||
191 | return re.compile(r, flags=re.IGNORECASE) | |
192 | elif word_boundary: | |
193 | r = re.escape(glob) | |
194 | r = r"\b%s\b" % (r,) | |
195 | ||
196 | return re.compile(r, flags=re.IGNORECASE) | |
197 | else: | |
198 | r = "^" + re.escape(glob) + "$" | |
199 | return re.compile(r, flags=re.IGNORECASE) | |
177 | 200 | |
178 | 201 | |
179 | 202 | def _flatten_dict(d, prefix=[], result={}): |
184 | 207 | _flatten_dict(value, prefix=(prefix + [key]), result=result) |
185 | 208 | |
186 | 209 | return result |
187 | ||
188 | ||
189 | regex_cache = LruCache(5000) | |
190 | ||
191 | ||
192 | def _compile_regex(regex_str): | |
193 | r = regex_cache.get(regex_str, None) | |
194 | if r: | |
195 | return r | |
196 | ||
197 | r = re.compile(regex_str, flags=re.IGNORECASE) | |
198 | regex_cache[regex_str] = r | |
199 | return r |
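The cached glob matcher this hunk introduces can be condensed as below. A sketch only: it uses `functools.lru_cache` in place of synapse's `LruCache`/`register_cache`, and omits the `[a-z]`/`[!a-z]` range handling that the real `_glob_to_re` performs.

```python
import re
from functools import lru_cache

@lru_cache(maxsize=5000)
def _compile_glob(glob, word_boundary=False):
    # escape the glob, then re-enable the wildcard metacharacters
    r = re.escape(glob).replace(r'\*', '.*?').replace(r'\?', '.')
    if word_boundary:
        r = r"\b%s\b" % (r,)
    else:
        r = "^" + r + "$"
    return re.compile(r, flags=re.IGNORECASE)

def glob_matches(glob, value, word_boundary=False):
    p = _compile_glob(glob, word_boundary)
    return bool(p.search(value) if word_boundary else p.match(value))
```

Keying the cache on `(glob, word_boundary)` means each push-rule pattern is compiled once, which is the point of the `regex_cache` in the diff.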
32 | 32 | |
33 | 33 | badge = len(invites) |
34 | 34 | |
35 | for r in joins: | |
36 | if r.room_id in my_receipts_by_room: | |
37 | last_unread_event_id = my_receipts_by_room[r.room_id] | |
35 | for room_id in joins: | |
36 | if room_id in my_receipts_by_room: | |
37 | last_unread_event_id = my_receipts_by_room[room_id] | |
38 | 38 | |
39 | 39 | notifs = yield ( |
40 | 40 | store.get_unread_event_push_actions_by_room_for_user( |
41 | r.room_id, user_id, last_unread_event_id | |
41 | room_id, user_id, last_unread_event_id | |
42 | 42 | ) |
43 | 43 | ) |
44 | 44 | # return one badge count per conversation, as count per |
0 | 0 | # Copyright 2015, 2016 OpenMarket Ltd |
1 | # Copyright 2017 Vector Creations Ltd | |
1 | 2 | # |
2 | 3 | # Licensed under the Apache License, Version 2.0 (the "License"); |
3 | 4 | # you may not use this file except in compliance with the License. |
17 | 18 | logger = logging.getLogger(__name__) |
18 | 19 | |
19 | 20 | REQUIREMENTS = { |
21 | "jsonschema>=2.5.1": ["jsonschema>=2.5.1"], | |
20 | 22 | "frozendict>=0.4": ["frozendict"], |
21 | 23 | "unpaddedbase64>=1.1.0": ["unpaddedbase64>=1.1.0"], |
22 | 24 | "canonicaljson>=1.0.0": ["canonicaljson>=1.0.0"], |
36 | 38 | "pysaml2>=3.0.0,<4.0.0": ["saml2>=3.0.0,<4.0.0"], |
37 | 39 | "pymacaroons-pynacl": ["pymacaroons"], |
38 | 40 | "msgpack-python>=0.3.0": ["msgpack"], |
41 | "phonenumbers>=8.2.0": ["phonenumbers"], | |
39 | 42 | } |
40 | 43 | CONDITIONAL_REQUIREMENTS = { |
41 | 44 | "web_client": { |
282 | 282 | |
283 | 283 | if request_events != upto_events_token: |
284 | 284 | writer.write_header_and_rows("events", res.new_forward_events, ( |
285 | "position", "internal", "json", "state_group" | |
285 | "position", "event_id", "room_id", "type", "state_key", | |
286 | 286 | ), position=upto_events_token) |
287 | 287 | |
288 | 288 | if request_backfill != upto_backfill_token: |
289 | 289 | writer.write_header_and_rows("backfill", res.new_backfill_events, ( |
290 | "position", "internal", "json", "state_group", | |
290 | "position", "event_id", "room_id", "type", "state_key", "redacts", | |
291 | 291 | ), position=upto_backfill_token) |
292 | 292 | |
293 | 293 | writer.write_header_and_rows( |
26 | 26 | self._current = (max if self.step > 0 else min)(self._current, new_id) |
27 | 27 | |
28 | 28 | def get_current_token(self): |
29 | """ | |
30 | ||
31 | Returns: | |
32 | int | |
33 | """ | |
29 | 34 | return self._current |
15 | 15 | from ._slaved_id_tracker import SlavedIdTracker |
16 | 16 | |
17 | 17 | from synapse.api.constants import EventTypes |
18 | from synapse.events import FrozenEvent | |
19 | 18 | from synapse.storage import DataStore |
20 | 19 | from synapse.storage.roommember import RoomMemberStore |
21 | 20 | from synapse.storage.event_federation import EventFederationStore |
24 | 23 | from synapse.storage.stream import StreamStore |
25 | 24 | from synapse.util.caches.stream_change_cache import StreamChangeCache |
26 | 25 | |
27 | import ujson as json | |
28 | 26 | import logging |
29 | 27 | |
30 | 28 | |
108 | 106 | get_recent_event_ids_for_room = ( |
109 | 107 | StreamStore.__dict__["get_recent_event_ids_for_room"] |
110 | 108 | ) |
109 | get_current_state_ids = ( | |
110 | StateStore.__dict__["get_current_state_ids"] | |
111 | ) | |
112 | has_room_changed_since = DataStore.has_room_changed_since.__func__ | |
111 | 113 | |
112 | 114 | get_unread_push_actions_for_user_in_range_for_http = ( |
113 | 115 | DataStore.get_unread_push_actions_for_user_in_range_for_http.__func__ |
164 | 166 | _get_rooms_for_user_where_membership_is_txn = ( |
165 | 167 | DataStore._get_rooms_for_user_where_membership_is_txn.__func__ |
166 | 168 | ) |
167 | _get_members_rows_txn = DataStore._get_members_rows_txn.__func__ | |
168 | 169 | _get_state_for_groups = DataStore._get_state_for_groups.__func__ |
169 | 170 | _get_all_state_from_cache = DataStore._get_all_state_from_cache.__func__ |
170 | 171 | _get_events_around_txn = DataStore._get_events_around_txn.__func__ |
237 | 238 | return super(SlavedEventStore, self).process_replication(result) |
238 | 239 | |
239 | 240 | def _process_replication_row(self, row, backfilled): |
240 | internal = json.loads(row[1]) | |
241 | event_json = json.loads(row[2]) | |
242 | event = FrozenEvent(event_json, internal_metadata_dict=internal) | |
241 | stream_ordering = row[0] if not backfilled else -row[0] | |
243 | 242 | self.invalidate_caches_for_event( |
244 | event, backfilled, | |
245 | ) | |
246 | ||
247 | def invalidate_caches_for_event(self, event, backfilled): | |
248 | self._invalidate_get_event_cache(event.event_id) | |
249 | ||
250 | self.get_latest_event_ids_in_room.invalidate((event.room_id,)) | |
243 | stream_ordering, row[1], row[2], row[3], row[4], row[5], | |
244 | backfilled=backfilled, | |
245 | ) | |
246 | ||
247 | def invalidate_caches_for_event(self, stream_ordering, event_id, room_id, | |
248 | etype, state_key, redacts, backfilled): | |
249 | self._invalidate_get_event_cache(event_id) | |
250 | ||
251 | self.get_latest_event_ids_in_room.invalidate((room_id,)) | |
251 | 252 | |
252 | 253 | self.get_unread_event_push_actions_by_room_for_user.invalidate_many( |
253 | (event.room_id,) | |
254 | (room_id,) | |
254 | 255 | ) |
255 | 256 | |
256 | 257 | if not backfilled: |
257 | 258 | self._events_stream_cache.entity_has_changed( |
258 | event.room_id, event.internal_metadata.stream_ordering | |
259 | room_id, stream_ordering | |
259 | 260 | ) |
260 | 261 | |
261 | # self.get_unread_event_push_actions_by_room_for_user.invalidate_many( | |
262 | # (event.room_id,) | |
263 | # ) | |
264 | ||
265 | if event.type == EventTypes.Redaction: | |
266 | self._invalidate_get_event_cache(event.redacts) | |
267 | ||
268 | if event.type == EventTypes.Member: | |
262 | if redacts: | |
263 | self._invalidate_get_event_cache(redacts) | |
264 | ||
265 | if etype == EventTypes.Member: | |
269 | 266 | self._membership_stream_cache.entity_has_changed( |
270 | event.state_key, event.internal_metadata.stream_ordering | |
267 | state_key, stream_ordering | |
271 | 268 | ) |
272 | self.get_invited_rooms_for_user.invalidate((event.state_key,)) | |
273 | ||
274 | if not event.is_state(): | |
275 | return | |
276 | ||
277 | if backfilled: | |
278 | return | |
279 | ||
280 | if (not event.internal_metadata.is_invite_from_remote() | |
281 | and event.internal_metadata.is_outlier()): | |
282 | return | |
269 | self.get_invited_rooms_for_user.invalidate((state_key,)) |
56 | 56 | self.presence_stream_cache.entity_has_changed( |
57 | 57 | user_id, position |
58 | 58 | ) |
59 | self._get_presence_for_user.invalidate((user_id,)) | |
59 | 60 | |
60 | 61 | return super(SlavedPresenceStore, self).process_replication(result) |
18 | 18 | from synapse.types import UserID |
19 | 19 | from synapse.http.server import finish_request |
20 | 20 | from synapse.http.servlet import parse_json_object_from_request |
21 | from synapse.util.msisdn import phone_number_to_msisdn | |
21 | 22 | |
22 | 23 | from .base import ClientV1RestServlet, client_path_patterns |
23 | 24 | |
32 | 33 | |
33 | 34 | import xml.etree.ElementTree as ET |
34 | 35 | |
36 | from twisted.web.client import PartialDownloadError | |
37 | ||
35 | 38 | |
36 | 39 | logger = logging.getLogger(__name__) |
40 | ||
41 | ||
42 | def login_submission_legacy_convert(submission): | |
43 | """ | |
44 | If the input login submission is an old style object | |
45 | (i.e. with top-level user / medium / address), convert it |
46 | to a typed object. | |
47 | """ | |
48 | if "user" in submission: | |
49 | submission["identifier"] = { | |
50 | "type": "m.id.user", | |
51 | "user": submission["user"], | |
52 | } | |
53 | del submission["user"] | |
54 | ||
55 | if "medium" in submission and "address" in submission: | |
56 | submission["identifier"] = { | |
57 | "type": "m.id.thirdparty", | |
58 | "medium": submission["medium"], | |
59 | "address": submission["address"], | |
60 | } | |
61 | del submission["medium"] | |
62 | del submission["address"] | |
63 | ||
64 | ||
65 | def login_id_thirdparty_from_phone(identifier): | |
66 | """ | |
67 | Convert a phone login identifier type to a generic threepid identifier | |
68 | Args: | |
69 | identifier(dict): Login identifier dict of type 'm.id.phone' | |
70 | ||
71 | Returns: Login identifier dict of type 'm.id.threepid' | |
72 | """ | |
73 | if "country" not in identifier or "number" not in identifier: | |
74 | raise SynapseError(400, "Invalid phone-type identifier") | |
75 | ||
76 | msisdn = phone_number_to_msisdn(identifier["country"], identifier["number"]) | |
77 | ||
78 | return { | |
79 | "type": "m.id.thirdparty", | |
80 | "medium": "msisdn", | |
81 | "address": msisdn, | |
82 | } | |
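`phone_number_to_msisdn` comes from `synapse.util.msisdn`, which relies on the `phonenumbers` dependency added in this release. A rough sketch of the idea only: the `COUNTRY_CODES` table and the zero-stripping rule here are hypothetical simplifications, not the library's actual E.164 logic.

```python
# illustrative subset only; the real code uses the phonenumbers library
COUNTRY_CODES = {"US": "1", "GB": "44"}

def phone_number_to_msisdn(country, number):
    # drop punctuation and any leading trunk zeros, then prefix the
    # country calling code to approximate an E.164 MSISDN
    digits = "".join(c for c in number if c.isdigit())
    digits = digits.lstrip("0")
    return COUNTRY_CODES[country] + digits
```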
37 | 83 | |
38 | 84 | |
39 | 85 | class LoginRestServlet(ClientV1RestServlet): |
116 | 162 | |
117 | 163 | @defer.inlineCallbacks |
118 | 164 | def do_password_login(self, login_submission): |
119 | if 'medium' in login_submission and 'address' in login_submission: | |
120 | address = login_submission['address'] | |
121 | if login_submission['medium'] == 'email': | |
165 | if "password" not in login_submission: | |
166 | raise SynapseError(400, "Missing parameter: password") | |
167 | ||
168 | login_submission_legacy_convert(login_submission) | |
169 | ||
170 | if "identifier" not in login_submission: | |
171 | raise SynapseError(400, "Missing param: identifier") | |
172 | ||
173 | identifier = login_submission["identifier"] | |
174 | if "type" not in identifier: | |
175 | raise SynapseError(400, "Login identifier has no type") | |
176 | ||
177 | # convert phone type identifiers to generic threepids | |
178 | if identifier["type"] == "m.id.phone": | |
179 | identifier = login_id_thirdparty_from_phone(identifier) | |
180 | ||
181 | # convert threepid identifiers to user IDs | |
182 | if identifier["type"] == "m.id.thirdparty": | |
183 | if 'medium' not in identifier or 'address' not in identifier: | |
184 | raise SynapseError(400, "Invalid thirdparty identifier") | |
185 | ||
186 | address = identifier['address'] | |
187 | if identifier['medium'] == 'email': | |
122 | 188 | # For emails, transform the address to lowercase. |
123 | 189 | # We store all email addresses as lowercase in the DB.
124 | 190 | # (See add_threepid in synapse/handlers/auth.py) |
125 | 191 | address = address.lower() |
126 | 192 | user_id = yield self.hs.get_datastore().get_user_id_by_threepid( |
127 | login_submission['medium'], address | |
193 | identifier['medium'], address | |
128 | 194 | ) |
129 | 195 | if not user_id: |
130 | 196 | raise LoginError(403, "", errcode=Codes.FORBIDDEN) |
131 | else: | |
132 | user_id = login_submission['user'] | |
197 | ||
198 | identifier = { | |
199 | "type": "m.id.user", | |
200 | "user": user_id, | |
201 | } | |
202 | ||
203 | # by this point, the identifier should be an m.id.user: if it's anything | |
204 | # else, we haven't understood it. | |
205 | if identifier["type"] != "m.id.user": | |
206 | raise SynapseError(400, "Unknown login identifier type") | |
207 | if "user" not in identifier: | |
208 | raise SynapseError(400, "User identifier is missing 'user' key") | |
209 | ||
210 | user_id = identifier["user"] | |
133 | 211 | |
134 | 212 | if not user_id.startswith('@'): |
135 | 213 | user_id = UserID.create( |
340 | 418 | "ticket": request.args["ticket"], |
341 | 419 | "service": self.cas_service_url |
342 | 420 | } |
343 | body = yield http_client.get_raw(uri, args) | |
421 | try: | |
422 | body = yield http_client.get_raw(uri, args) | |
423 | except PartialDownloadError as pde: | |
424 | # Twisted raises this error if the connection is closed, | |
425 | # even if that's being used old-http style to signal end-of-data | |
426 | body = pde.response | |
344 | 427 | result = yield self.handle_cas_response(request, body, client_redirect_url) |
345 | 428 | defer.returnValue(result) |
346 | 429 |
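The CAS fix above treats `PartialDownloadError` as success: some servers signal end-of-data by closing the connection, and Twisted then raises with the bytes it already received. A self-contained sketch of the pattern, with a stand-in for `twisted.web.client.PartialDownloadError`:

```python
class PartialDownloadError(Exception):
    """Stand-in for twisted.web.client.PartialDownloadError, which
    carries the bytes received before the connection closed."""
    def __init__(self, response):
        super(PartialDownloadError, self).__init__("partial download")
        self.response = response

def get_body(fetch):
    # an early close is fine if a body was delivered: old-style HTTP
    # servers (like some CAS endpoints) close to signal end-of-data
    try:
        return fetch()
    except PartialDownloadError as e:
        return e.response
```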
18 | 18 | |
19 | 19 | from synapse.api.errors import SynapseError, AuthError |
20 | 20 | from synapse.types import UserID |
21 | from synapse.handlers.presence import format_user_presence_state | |
21 | 22 | from synapse.http.servlet import parse_json_object_from_request |
22 | 23 | from .base import ClientV1RestServlet, client_path_patterns |
23 | 24 | |
32 | 33 | def __init__(self, hs): |
33 | 34 | super(PresenceStatusRestServlet, self).__init__(hs) |
34 | 35 | self.presence_handler = hs.get_presence_handler() |
36 | self.clock = hs.get_clock() | |
35 | 37 | |
36 | 38 | @defer.inlineCallbacks |
37 | 39 | def on_GET(self, request, user_id): |
47 | 49 | raise AuthError(403, "You are not allowed to see their presence.") |
48 | 50 | |
49 | 51 | state = yield self.presence_handler.get_state(target_user=user) |
52 | state = format_user_presence_state(state, self.clock.time_msec()) | |
50 | 53 | |
51 | 54 | defer.returnValue((200, state)) |
52 | 55 |
747 | 747 | def on_GET(self, request): |
748 | 748 | requester = yield self.auth.get_user_by_req(request, allow_guest=True) |
749 | 749 | |
750 | rooms = yield self.store.get_rooms_for_user(requester.user.to_string()) | |
751 | room_ids = set(r.room_id for r in rooms) # Ensure they're unique. | |
750 | room_ids = yield self.store.get_rooms_for_user(requester.user.to_string()) | |
752 | 751 | defer.returnValue((200, {"joined_rooms": list(room_ids)})) |
753 | 752 | |
754 | 753 |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2015, 2016 OpenMarket Ltd |
2 | # Copyright 2017 Vector Creations Ltd | |
2 | 3 | # |
3 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 5 | # you may not use this file except in compliance with the License. |
16 | 17 | |
17 | 18 | from synapse.api.constants import LoginType |
18 | 19 | from synapse.api.errors import LoginError, SynapseError, Codes |
19 | from synapse.http.servlet import RestServlet, parse_json_object_from_request | |
20 | from synapse.http.servlet import ( | |
21 | RestServlet, parse_json_object_from_request, assert_params_in_request | |
22 | ) | |
20 | 23 | from synapse.util.async import run_on_reactor |
24 | from synapse.util.msisdn import phone_number_to_msisdn | |
21 | 25 | |
22 | 26 | from ._base import client_v2_patterns |
23 | 27 | |
27 | 31 | logger = logging.getLogger(__name__) |
28 | 32 | |
29 | 33 | |
30 | class PasswordRequestTokenRestServlet(RestServlet): | |
34 | class EmailPasswordRequestTokenRestServlet(RestServlet): | |
31 | 35 | PATTERNS = client_v2_patterns("/account/password/email/requestToken$") |
32 | 36 | |
33 | 37 | def __init__(self, hs): |
34 | super(PasswordRequestTokenRestServlet, self).__init__() | |
35 | self.hs = hs | |
36 | self.identity_handler = hs.get_handlers().identity_handler | |
37 | ||
38 | @defer.inlineCallbacks | |
39 | def on_POST(self, request): | |
40 | body = parse_json_object_from_request(request) | |
41 | ||
42 | required = ['id_server', 'client_secret', 'email', 'send_attempt'] | |
43 | absent = [] | |
44 | for k in required: | |
45 | if k not in body: | |
46 | absent.append(k) | |
47 | ||
48 | if absent: | |
49 | raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM) | |
38 | super(EmailPasswordRequestTokenRestServlet, self).__init__() | |
39 | self.hs = hs | |
40 | self.identity_handler = hs.get_handlers().identity_handler | |
41 | ||
42 | @defer.inlineCallbacks | |
43 | def on_POST(self, request): | |
44 | body = parse_json_object_from_request(request) | |
45 | ||
46 | assert_params_in_request(body, [ | |
47 | 'id_server', 'client_secret', 'email', 'send_attempt' | |
48 | ]) | |
50 | 49 | |
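The hand-rolled missing-parameter loops above are replaced by `assert_params_in_request`. A minimal sketch of what such a helper does, inferred from its call sites (the stand-in `SynapseError` here is an assumption, not the real class):

```python
class SynapseError(Exception):
    """Stand-in for synapse.api.errors.SynapseError (assumed shape)."""
    def __init__(self, code, msg, errcode):
        super(SynapseError, self).__init__(msg)
        self.code = code
        self.errcode = errcode

def assert_params_in_request(body, required):
    # Collect every required key missing from the parsed JSON body...
    absent = [k for k in required if k not in body]
    if absent:
        # ...and reject with a 400 and M_MISSING_PARAM, exactly as the
        # removed per-servlet loops did before this refactor.
        raise SynapseError(400, "Missing params: %r" % absent, "M_MISSING_PARAM")
```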
51 | 50 | existingUid = yield self.hs.get_datastore().get_user_id_by_threepid( |
52 | 51 | 'email', body['email'] |
59 | 58 | defer.returnValue((200, ret)) |
60 | 59 | |
61 | 60 | |
61 | class MsisdnPasswordRequestTokenRestServlet(RestServlet): | |
62 | PATTERNS = client_v2_patterns("/account/password/msisdn/requestToken$") | |
63 | ||
64 | def __init__(self, hs): | |
65 | super(MsisdnPasswordRequestTokenRestServlet, self).__init__() | |
66 | self.hs = hs | |
67 | self.datastore = self.hs.get_datastore() | |
68 | self.identity_handler = hs.get_handlers().identity_handler | |
69 | ||
70 | @defer.inlineCallbacks | |
71 | def on_POST(self, request): | |
72 | body = parse_json_object_from_request(request) | |
73 | ||
74 | assert_params_in_request(body, [ | |
75 | 'id_server', 'client_secret', | |
76 | 'country', 'phone_number', 'send_attempt', | |
77 | ]) | |
78 | ||
79 | msisdn = phone_number_to_msisdn(body['country'], body['phone_number']) | |
80 | ||
81 | existingUid = yield self.datastore.get_user_id_by_threepid( | |
82 | 'msisdn', msisdn | |
83 | ) | |
84 | ||
85 | if existingUid is None: | |
86 | raise SynapseError(400, "MSISDN not found", Codes.THREEPID_NOT_FOUND) | |
87 | ||
88 | ret = yield self.identity_handler.requestMsisdnToken(**body) | |
89 | defer.returnValue((200, ret)) | |
90 | ||
91 | ||
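The real `phone_number_to_msisdn` in `synapse.util.msisdn` parses numbers with the `phonenumbers` library; the toy version below (the name `toy_phone_number_to_msisdn` and the two-entry `COUNTRY_CODES` table are invented here) only illustrates the E.164 normalisation idea:

```python
COUNTRY_CODES = {"GB": "44", "US": "1"}  # assumption: tiny lookup table

def toy_phone_number_to_msisdn(country, number):
    digits = "".join(c for c in number if c.isdigit())
    if number.strip().startswith("+"):
        # Already in international format: the digits are the msisdn.
        return digits
    cc = COUNTRY_CODES[country]
    # Drop a single leading trunk zero (as GB national numbers have)
    # before prefixing the country calling code.
    if digits.startswith("0"):
        digits = digits[1:]
    return cc + digits
```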
62 | 92 | class PasswordRestServlet(RestServlet): |
63 | 93 | PATTERNS = client_v2_patterns("/account/password$") |
64 | 94 | |
67 | 97 | self.hs = hs |
68 | 98 | self.auth = hs.get_auth() |
69 | 99 | self.auth_handler = hs.get_auth_handler() |
100 | self.datastore = self.hs.get_datastore() | |
70 | 101 | |
71 | 102 | @defer.inlineCallbacks |
72 | 103 | def on_POST(self, request): |
76 | 107 | |
77 | 108 | authed, result, params, _ = yield self.auth_handler.check_auth([ |
78 | 109 | [LoginType.PASSWORD], |
79 | [LoginType.EMAIL_IDENTITY] | |
110 | [LoginType.EMAIL_IDENTITY], | |
111 | [LoginType.MSISDN], | |
80 | 112 | ], body, self.hs.get_ip_from_request(request)) |
81 | 113 | |
82 | 114 | if not authed: |
101 | 133 | # (See add_threepid in synapse/handlers/auth.py) |
102 | 134 | threepid['address'] = threepid['address'].lower() |
103 | 135 | # if using email, we must know about the email they're authing with! |
104 | threepid_user_id = yield self.hs.get_datastore().get_user_id_by_threepid( | |
136 | threepid_user_id = yield self.datastore.get_user_id_by_threepid( | |
105 | 137 | threepid['medium'], threepid['address'] |
106 | 138 | ) |
107 | 139 | if not threepid_user_id: |
168 | 200 | defer.returnValue((200, {})) |
169 | 201 | |
170 | 202 | |
171 | class ThreepidRequestTokenRestServlet(RestServlet): | |
203 | class EmailThreepidRequestTokenRestServlet(RestServlet): | |
172 | 204 | PATTERNS = client_v2_patterns("/account/3pid/email/requestToken$") |
173 | 205 | |
174 | 206 | def __init__(self, hs): |
175 | 207 | self.hs = hs |
176 | super(ThreepidRequestTokenRestServlet, self).__init__() | |
177 | self.identity_handler = hs.get_handlers().identity_handler | |
208 | super(EmailThreepidRequestTokenRestServlet, self).__init__() | |
209 | self.identity_handler = hs.get_handlers().identity_handler | |
210 | self.datastore = self.hs.get_datastore() | |
178 | 211 | |
179 | 212 | @defer.inlineCallbacks |
180 | 213 | def on_POST(self, request): |
189 | 222 | if absent: |
190 | 223 | raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM) |
191 | 224 | |
192 | existingUid = yield self.hs.get_datastore().get_user_id_by_threepid( | |
225 | existingUid = yield self.datastore.get_user_id_by_threepid( | |
193 | 226 | 'email', body['email'] |
194 | 227 | ) |
195 | 228 | |
200 | 233 | defer.returnValue((200, ret)) |
201 | 234 | |
202 | 235 | |
236 | class MsisdnThreepidRequestTokenRestServlet(RestServlet): | |
237 | PATTERNS = client_v2_patterns("/account/3pid/msisdn/requestToken$") | |
238 | ||
239 | def __init__(self, hs): | |
240 | self.hs = hs | |
241 | super(MsisdnThreepidRequestTokenRestServlet, self).__init__() | |
242 | self.identity_handler = hs.get_handlers().identity_handler | |
243 | self.datastore = self.hs.get_datastore() | |
244 | ||
245 | @defer.inlineCallbacks | |
246 | def on_POST(self, request): | |
247 | body = parse_json_object_from_request(request) | |
248 | ||
249 | required = [ | |
250 | 'id_server', 'client_secret', | |
251 | 'country', 'phone_number', 'send_attempt', | |
252 | ] | |
253 | absent = [] | |
254 | for k in required: | |
255 | if k not in body: | |
256 | absent.append(k) | |
257 | ||
258 | if absent: | |
259 | raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM) | |
260 | ||
261 | msisdn = phone_number_to_msisdn(body['country'], body['phone_number']) | |
262 | ||
263 | existingUid = yield self.datastore.get_user_id_by_threepid( | |
264 | 'msisdn', msisdn | |
265 | ) | |
266 | ||
267 | if existingUid is not None: | |
268 | raise SynapseError(400, "MSISDN is already in use", Codes.THREEPID_IN_USE) | |
269 | ||
270 | ret = yield self.identity_handler.requestMsisdnToken(**body) | |
271 | defer.returnValue((200, ret)) | |
272 | ||
273 | ||
203 | 274 | class ThreepidRestServlet(RestServlet): |
204 | 275 | PATTERNS = client_v2_patterns("/account/3pid$") |
205 | 276 | |
209 | 280 | self.identity_handler = hs.get_handlers().identity_handler |
210 | 281 | self.auth = hs.get_auth() |
211 | 282 | self.auth_handler = hs.get_auth_handler() |
283 | self.datastore = self.hs.get_datastore() | |
212 | 284 | |
213 | 285 | @defer.inlineCallbacks |
214 | 286 | def on_GET(self, request): |
216 | 288 | |
217 | 289 | requester = yield self.auth.get_user_by_req(request) |
218 | 290 | |
219 | threepids = yield self.hs.get_datastore().user_get_threepids( | |
291 | threepids = yield self.datastore.user_get_threepids( | |
220 | 292 | requester.user.to_string() |
221 | 293 | ) |
222 | 294 | |
257 | 329 | |
258 | 330 | if 'bind' in body and body['bind']: |
259 | 331 | logger.debug( |
260 | "Binding emails %s to %s", | |
332 | "Binding threepid %s to %s", | |
261 | 333 | threepid, user_id |
262 | 334 | ) |
263 | 335 | yield self.identity_handler.bind_threepid( |
301 | 373 | |
302 | 374 | |
303 | 375 | def register_servlets(hs, http_server): |
304 | PasswordRequestTokenRestServlet(hs).register(http_server) | |
376 | EmailPasswordRequestTokenRestServlet(hs).register(http_server) | |
377 | MsisdnPasswordRequestTokenRestServlet(hs).register(http_server) | |
305 | 378 | PasswordRestServlet(hs).register(http_server) |
306 | 379 | DeactivateAccountRestServlet(hs).register(http_server) |
307 | ThreepidRequestTokenRestServlet(hs).register(http_server) | |
380 | EmailThreepidRequestTokenRestServlet(hs).register(http_server) | |
381 | MsisdnThreepidRequestTokenRestServlet(hs).register(http_server) | |
308 | 382 | ThreepidRestServlet(hs).register(http_server) |
309 | 383 | ThreepidDeleteRestServlet(hs).register(http_server) |
43 | 43 | requester.user.to_string() |
44 | 44 | ) |
45 | 45 | defer.returnValue((200, {"devices": devices})) |
46 | ||
47 | ||
48 | class DeleteDevicesRestServlet(servlet.RestServlet): | |
49 | """ | |
50 | API for bulk deletion of devices. Accepts a JSON object with a devices | |
51 | key which lists the device_ids to delete. Requires user interactive auth. | |
52 | """ | |
53 | PATTERNS = client_v2_patterns("/delete_devices", releases=[], v2_alpha=False) | |
54 | ||
55 | def __init__(self, hs): | |
56 | super(DeleteDevicesRestServlet, self).__init__() | |
57 | self.hs = hs | |
58 | self.auth = hs.get_auth() | |
59 | self.device_handler = hs.get_device_handler() | |
60 | self.auth_handler = hs.get_auth_handler() | |
61 | ||
62 | @defer.inlineCallbacks | |
63 | def on_POST(self, request): | |
64 | try: | |
65 | body = servlet.parse_json_object_from_request(request) | |
66 | except errors.SynapseError as e: | |
67 | if e.errcode == errors.Codes.NOT_JSON: | |
68 | # deal with older clients which didn't pass a JSON dict | |
69 | # the same as those that pass an empty dict | |
70 | body = {} | |
71 | else: | |
72 | raise e | |
73 | ||
74 | if 'devices' not in body: | |
75 | raise errors.SynapseError( | |
76 | 400, "No devices supplied", errcode=errors.Codes.MISSING_PARAM | |
77 | ) | |
78 | ||
79 | authed, result, params, _ = yield self.auth_handler.check_auth([ | |
80 | [constants.LoginType.PASSWORD], | |
81 | ], body, self.hs.get_ip_from_request(request)) | |
82 | ||
83 | if not authed: | |
84 | defer.returnValue((401, result)) | |
85 | ||
86 | requester = yield self.auth.get_user_by_req(request) | |
87 | yield self.device_handler.delete_devices( | |
88 | requester.user.to_string(), | |
89 | body['devices'], | |
90 | ) | |
91 | defer.returnValue((200, {})) | |
46 | 92 | |
47 | 93 | |
48 | 94 | class DeviceRestServlet(servlet.RestServlet): |
110 | 156 | |
111 | 157 | |
112 | 158 | def register_servlets(hs, http_server): |
159 | DeleteDevicesRestServlet(hs).register(http_server) | |
113 | 160 | DevicesRestServlet(hs).register(http_server) |
114 | 161 | DeviceRestServlet(hs).register(http_server) |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2015 - 2016 OpenMarket Ltd |
2 | # Copyright 2017 Vector Creations Ltd | |
2 | 3 | # |
3 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 5 | # you may not use this file except in compliance with the License. |
18 | 19 | from synapse.api.auth import get_access_token_from_request, has_access_token |
19 | 20 | from synapse.api.constants import LoginType |
20 | 21 | from synapse.api.errors import SynapseError, Codes, UnrecognizedRequestError |
21 | from synapse.http.servlet import RestServlet, parse_json_object_from_request | |
22 | from synapse.http.servlet import ( | |
23 | RestServlet, parse_json_object_from_request, assert_params_in_request | |
24 | ) | |
25 | from synapse.util.msisdn import phone_number_to_msisdn | |
22 | 26 | |
23 | 27 | from ._base import client_v2_patterns |
24 | 28 | |
42 | 46 | logger = logging.getLogger(__name__) |
43 | 47 | |
44 | 48 | |
45 | class RegisterRequestTokenRestServlet(RestServlet): | |
49 | class EmailRegisterRequestTokenRestServlet(RestServlet): | |
46 | 50 | PATTERNS = client_v2_patterns("/register/email/requestToken$") |
47 | 51 | |
48 | 52 | def __init__(self, hs): |
50 | 54 | Args: |
51 | 55 | hs (synapse.server.HomeServer): server |
52 | 56 | """ |
53 | super(RegisterRequestTokenRestServlet, self).__init__() | |
57 | super(EmailRegisterRequestTokenRestServlet, self).__init__() | |
54 | 58 | self.hs = hs |
55 | 59 | self.identity_handler = hs.get_handlers().identity_handler |
56 | 60 | |
58 | 62 | def on_POST(self, request): |
59 | 63 | body = parse_json_object_from_request(request) |
60 | 64 | |
61 | required = ['id_server', 'client_secret', 'email', 'send_attempt'] | |
62 | absent = [] | |
63 | for k in required: | |
64 | if k not in body: | |
65 | absent.append(k) | |
66 | ||
67 | if len(absent) > 0: | |
68 | raise SynapseError(400, "Missing params: %r" % absent, Codes.MISSING_PARAM) | |
65 | assert_params_in_request(body, [ | |
66 | 'id_server', 'client_secret', 'email', 'send_attempt' | |
67 | ]) | |
69 | 68 | |
70 | 69 | existingUid = yield self.hs.get_datastore().get_user_id_by_threepid( |
71 | 70 | 'email', body['email'] |
75 | 74 | raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE) |
76 | 75 | |
77 | 76 | ret = yield self.identity_handler.requestEmailToken(**body) |
77 | defer.returnValue((200, ret)) | |
78 | ||
79 | ||
80 | class MsisdnRegisterRequestTokenRestServlet(RestServlet): | |
81 | PATTERNS = client_v2_patterns("/register/msisdn/requestToken$") | |
82 | ||
83 | def __init__(self, hs): | |
84 | """ | |
85 | Args: | |
86 | hs (synapse.server.HomeServer): server | |
87 | """ | |
88 | super(MsisdnRegisterRequestTokenRestServlet, self).__init__() | |
89 | self.hs = hs | |
90 | self.identity_handler = hs.get_handlers().identity_handler | |
91 | ||
92 | @defer.inlineCallbacks | |
93 | def on_POST(self, request): | |
94 | body = parse_json_object_from_request(request) | |
95 | ||
96 | assert_params_in_request(body, [ | |
97 | 'id_server', 'client_secret', | |
98 | 'country', 'phone_number', | |
99 | 'send_attempt', | |
100 | ]) | |
101 | ||
102 | msisdn = phone_number_to_msisdn(body['country'], body['phone_number']) | |
103 | ||
104 | existingUid = yield self.hs.get_datastore().get_user_id_by_threepid( | |
105 | 'msisdn', msisdn | |
106 | ) | |
107 | ||
108 | if existingUid is not None: | |
109 | raise SynapseError( | |
110 | 400, "Phone number is already in use", Codes.THREEPID_IN_USE | |
111 | ) | |
112 | ||
113 | ret = yield self.identity_handler.requestMsisdnToken(**body) | |
78 | 114 | defer.returnValue((200, ret)) |
79 | 115 | |
80 | 116 | |
199 | 235 | assigned_user_id=registered_user_id, |
200 | 236 | ) |
201 | 237 | |
238 | # Only give msisdn flows if the x_show_msisdn flag is given: | |
239 | # this is a hack to work around the fact that clients were shipped | |
240 | # that use fallback registration if they see any flows that they don't | |
241 | # recognise, which means we break registration for these clients if we | |
242 | # advertise msisdn flows. Once usage of Riot iOS <=0.3.9 and Riot | |
243 | # Android <=0.6.9 has fallen below an acceptable threshold, this | |
244 | # parameter should go away and we should always advertise msisdn flows. | |
245 | show_msisdn = False | |
246 | if 'x_show_msisdn' in body and body['x_show_msisdn']: | |
247 | show_msisdn = True | |
248 | ||
202 | 249 | if self.hs.config.enable_registration_captcha: |
203 | 250 | flows = [ |
204 | 251 | [LoginType.RECAPTCHA], |
205 | [LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA] | |
252 | [LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA], | |
206 | 253 | ] |
254 | if show_msisdn: | |
255 | flows.extend([ | |
256 | [LoginType.MSISDN, LoginType.RECAPTCHA], | |
257 | [LoginType.MSISDN, LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA], | |
258 | ]) | |
207 | 259 | else: |
208 | 260 | flows = [ |
209 | 261 | [LoginType.DUMMY], |
210 | [LoginType.EMAIL_IDENTITY] | |
262 | [LoginType.EMAIL_IDENTITY], | |
211 | 263 | ] |
264 | if show_msisdn: | |
265 | flows.extend([ | |
266 | [LoginType.MSISDN], | |
267 | [LoginType.MSISDN, LoginType.EMAIL_IDENTITY], | |
268 | ]) | |
212 | 269 | |
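The effect of the `x_show_msisdn` gate on the advertised flows can be sketched in isolation for the non-captcha branch above; the string values are the Matrix login type identifiers, and `build_flows` is an illustrative name:

```python
# Matrix login type identifiers from the client-server API.
DUMMY = "m.login.dummy"
EMAIL_IDENTITY = "m.login.email.identity"
MSISDN = "m.login.msisdn"

def build_flows(show_msisdn):
    # Mirrors the non-captcha branch: msisdn flows are only advertised
    # when the client opted in via x_show_msisdn, so older clients that
    # fall back on unrecognised flows keep working.
    flows = [
        [DUMMY],
        [EMAIL_IDENTITY],
    ]
    if show_msisdn:
        flows.extend([
            [MSISDN],
            [MSISDN, EMAIL_IDENTITY],
        ])
    return flows
```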
213 | 270 | authed, auth_result, params, session_id = yield self.auth_handler.check_auth( |
214 | 271 | flows, body, self.hs.get_ip_from_request(request) |
223 | 280 | "Already registered user ID %r for this session", |
224 | 281 | registered_user_id |
225 | 282 | ) |
226 | # don't re-register the email address | |
283 | # don't re-register the threepids | |
227 | 284 | add_email = False |
285 | add_msisdn = False | |
228 | 286 | else: |
229 | 287 | # NB: This may be from the auth handler and NOT from the POST |
230 | 288 | if 'password' not in params: |
249 | 307 | ) |
250 | 308 | |
251 | 309 | add_email = True |
310 | add_msisdn = True | |
252 | 311 | |
253 | 312 | return_dict = yield self._create_registration_details( |
254 | 313 | registered_user_id, params |
259 | 318 | yield self._register_email_threepid( |
260 | 319 | registered_user_id, threepid, return_dict["access_token"], |
261 | 320 | params.get("bind_email") |
321 | ) | |
322 | ||
323 | if add_msisdn and auth_result and LoginType.MSISDN in auth_result: | |
324 | threepid = auth_result[LoginType.MSISDN] | |
325 | yield self._register_msisdn_threepid( | |
326 | registered_user_id, threepid, return_dict["access_token"], | |
327 | params.get("bind_msisdn") | |
262 | 328 | ) |
263 | 329 | |
264 | 330 | defer.returnValue((200, return_dict)) |
322 | 388 | """ |
323 | 389 | reqd = ('medium', 'address', 'validated_at') |
324 | 390 | if any(x not in threepid for x in reqd): |
391 | # This will only happen if the ID server returns a malformed response | |
325 | 392 | logger.info("Can't add incomplete 3pid") |
326 | defer.returnValue() | |
393 | return | |
327 | 394 | |
328 | 395 | yield self.auth_handler.add_threepid( |
329 | 396 | user_id, |
371 | 438 | logger.info("bind_email not specified: not binding email") |
372 | 439 | |
373 | 440 | @defer.inlineCallbacks |
441 | def _register_msisdn_threepid(self, user_id, threepid, token, bind_msisdn): | |
442 | """Add a phone number as a 3pid identifier | |
443 | ||
444 | Also optionally binds msisdn to the given user_id on the identity server | |
445 | ||
446 | Args: | |
447 | user_id (str): id of user | |
448 | threepid (object): m.login.msisdn auth response | |
449 | token (str): access_token for the user | |
450 | bind_msisdn (bool): true if the client requested the msisdn to be | |
451 | bound at the identity server | |
452 | Returns: | |
453 | defer.Deferred: | |
454 | """ | |
455 | reqd = ('medium', 'address', 'validated_at') | |
456 | if any(x not in threepid for x in reqd): | |
457 | # This will only happen if the ID server returns a malformed response | |
458 | logger.info("Can't add incomplete 3pid") | |
459 | return | |
460 | ||
461 | yield self.auth_handler.add_threepid( | |
462 | user_id, | |
463 | threepid['medium'], | |
464 | threepid['address'], | |
465 | threepid['validated_at'], | |
466 | ) | |
467 | ||
468 | if bind_msisdn: | |
469 | logger.info("bind_msisdn specified: binding") | |
470 | logger.debug("Binding msisdn %s to %s", threepid, user_id) | |
471 | yield self.identity_handler.bind_threepid( | |
472 | threepid['threepid_creds'], user_id | |
473 | ) | |
474 | else: | |
475 | logger.info("bind_msisdn not specified: not binding msisdn") | |
476 | ||
477 | @defer.inlineCallbacks | |
374 | 478 | def _create_registration_details(self, user_id, params): |
375 | 479 | """Complete registration of newly-registered user |
376 | 480 | |
432 | 536 | # we have nowhere to store it. |
433 | 537 | device_id = synapse.api.auth.GUEST_DEVICE_ID |
434 | 538 | initial_display_name = params.get("initial_device_display_name") |
435 | self.device_handler.check_device_registered( | |
539 | yield self.device_handler.check_device_registered( | |
436 | 540 | user_id, device_id, initial_display_name |
437 | 541 | ) |
438 | 542 | |
448 | 552 | |
449 | 553 | |
450 | 554 | def register_servlets(hs, http_server): |
451 | RegisterRequestTokenRestServlet(hs).register(http_server) | |
555 | EmailRegisterRequestTokenRestServlet(hs).register(http_server) | |
556 | MsisdnRegisterRequestTokenRestServlet(hs).register(http_server) | |
452 | 557 | RegisterRestServlet(hs).register(http_server) |
17 | 17 | from synapse.http.servlet import ( |
18 | 18 | RestServlet, parse_string, parse_integer, parse_boolean |
19 | 19 | ) |
20 | from synapse.handlers.presence import format_user_presence_state | |
20 | 21 | from synapse.handlers.sync import SyncConfig |
21 | 22 | from synapse.types import StreamToken |
22 | 23 | from synapse.events.utils import ( |
27 | 28 | from synapse.api.constants import PresenceState |
28 | 29 | from ._base import client_v2_patterns |
29 | 30 | |
30 | import copy | |
31 | 31 | import itertools |
32 | 32 | import logging |
33 | 33 | |
193 | 193 | defer.returnValue((200, response_content)) |
194 | 194 | |
195 | 195 | def encode_presence(self, events, time_now): |
196 | formatted = [] | |
197 | for event in events: | |
198 | event = copy.deepcopy(event) | |
199 | event['sender'] = event['content'].pop('user_id') | |
200 | formatted.append(event) | |
201 | return {"events": formatted} | |
196 | return { | |
197 | "events": [ | |
198 | { | |
199 | "type": "m.presence", | |
200 | "sender": event.user_id, | |
201 | "content": format_user_presence_state( | |
202 | event, time_now, include_user_id=False | |
203 | ), | |
204 | } | |
205 | for event in events | |
206 | ] | |
207 | } | |
202 | 208 | |
203 | 209 | def encode_joined(self, rooms, time_now, token_id, event_fields): |
204 | 210 | """ |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | import synapse.http.servlet | |
14 | 15 | |
15 | 16 | from ._base import parse_media_id, respond_with_file, respond_404 |
16 | 17 | from twisted.web.resource import Resource |
80 | 81 | |
81 | 82 | @defer.inlineCallbacks |
82 | 83 | def _respond_remote_file(self, request, server_name, media_id, name): |
84 | # don't forward requests for remote media if allow_remote is false | |
85 | allow_remote = synapse.http.servlet.parse_boolean( | |
86 | request, "allow_remote", default=True) | |
87 | if not allow_remote: | |
88 | logger.info( | |
89 | "Rejecting request for remote media %s/%s due to allow_remote", | |
90 | server_name, media_id, | |
91 | ) | |
92 | respond_404(request) | |
93 | return | |
94 | ||
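The decision this `allow_remote` check implements can be sketched as a small pure function (names here are hypothetical, not Synapse API): serve local media, forward remote media, but answer 404 when the caller forbade forwarding, so two servers can never bounce a request between each other:

```python
def handle_media_request(server_name, local_server, allow_remote):
    # Media hosted on this server is always served directly.
    if server_name == local_server:
        return "serve"
    # A remote request we would have to forward: if the caller set
    # allow_remote=false (as a proxying homeserver does), answer 404
    # instead of forwarding, breaking any potential routing loop.
    if not allow_remote:
        return "404"
    return "fetch_remote"
```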
83 | 95 | media_info = yield self.media_repo.get_remote_media(server_name, media_id) |
84 | 96 | |
85 | 97 | media_type = media_info["media_type"] |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | from twisted.internet import defer, threads | |
16 | import twisted.internet.error | |
17 | import twisted.web.http | |
18 | from twisted.web.resource import Resource | |
19 | ||
15 | 20 | from .upload_resource import UploadResource |
16 | 21 | from .download_resource import DownloadResource |
17 | 22 | from .thumbnail_resource import ThumbnailResource |
18 | 23 | from .identicon_resource import IdenticonResource |
19 | 24 | from .preview_url_resource import PreviewUrlResource |
20 | 25 | from .filepath import MediaFilePaths |
21 | ||
22 | from twisted.web.resource import Resource | |
23 | ||
24 | 26 | from .thumbnailer import Thumbnailer |
25 | 27 | |
26 | 28 | from synapse.http.matrixfederationclient import MatrixFederationHttpClient |
27 | 29 | from synapse.util.stringutils import random_string |
28 | from synapse.api.errors import SynapseError | |
29 | ||
30 | from twisted.internet import defer, threads | |
30 | from synapse.api.errors import SynapseError, HttpResponseException, \ | |
31 | NotFoundError | |
31 | 32 | |
32 | 33 | from synapse.util.async import Linearizer |
33 | 34 | from synapse.util.stringutils import is_ascii |
156 | 157 | try: |
157 | 158 | length, headers = yield self.client.get_file( |
158 | 159 | server_name, request_path, output_stream=f, |
159 | max_size=self.max_upload_size, | |
160 | max_size=self.max_upload_size, args={ | |
161 | # tell the remote server to 404 if it doesn't | |
162 | # recognise the server_name, to make sure we don't | |
163 | # end up with a routing loop. | |
164 | "allow_remote": "false", | |
165 | } | |
160 | 166 | ) |
161 | except Exception as e: | |
162 | logger.warn("Failed to fetch remoted media %r", e) | |
163 | raise SynapseError(502, "Failed to fetch remoted media") | |
167 | except twisted.internet.error.DNSLookupError as e: | |
168 | logger.warn("HTTP error fetching remote media %s/%s: %r", | |
169 | server_name, media_id, e) | |
170 | raise NotFoundError() | |
171 | ||
172 | except HttpResponseException as e: | |
173 | logger.warn("HTTP error fetching remote media %s/%s: %s", | |
174 | server_name, media_id, e.response) | |
175 | if e.code == twisted.web.http.NOT_FOUND: | |
176 | raise SynapseError.from_http_response_exception(e) | |
177 | raise SynapseError(502, "Failed to fetch remote media") | |
178 | ||
179 | except SynapseError: | |
180 | logger.exception("Failed to fetch remote media %s/%s", | |
181 | server_name, media_id) | |
182 | raise | |
183 | ||
184 | except Exception: | |
185 | logger.exception("Failed to fetch remote media %s/%s", | |
186 | server_name, media_id) | |
187 | raise SynapseError(502, "Failed to fetch remote media") | |
164 | 188 | |
165 | 189 | media_type = headers["Content-Type"][0] |
166 | 190 | time_now_ms = self.clock.time_msec() |
176 | 176 | |
177 | 177 | @defer.inlineCallbacks |
178 | 178 | def compute_event_context(self, event, old_state=None): |
179 | """ Fills out the context with the `current state` of the graph. The | |
180 | `current state` here is defined to be the state of the event graph | |
181 | just before the event - i.e. it never includes `event` | |
182 | ||
183 | If `event` has `auth_events` then this will also fill out the | |
184 | `auth_events` field on `context` from the `current_state`. | |
179 | """Build an EventContext structure for the event. | |
185 | 180 | |
186 | 181 | Args: |
187 | event (EventBase) | |
182 | event (synapse.events.EventBase): | |
188 | 183 | Returns: |
189 | an EventContext | |
184 | synapse.events.snapshot.EventContext: | |
190 | 185 | """ |
191 | 186 | context = EventContext() |
192 | 187 | |
199 | 194 | (s.type, s.state_key): s.event_id for s in old_state |
200 | 195 | } |
201 | 196 | if event.is_state(): |
202 | context.current_state_events = dict(context.prev_state_ids) | |
197 | context.current_state_ids = dict(context.prev_state_ids) | |
203 | 198 | key = (event.type, event.state_key) |
204 | context.current_state_events[key] = event.event_id | |
199 | context.current_state_ids[key] = event.event_id | |
205 | 200 | else: |
206 | context.current_state_events = context.prev_state_ids | |
201 | context.current_state_ids = context.prev_state_ids | |
207 | 202 | else: |
208 | 203 | context.current_state_ids = {} |
209 | 204 | context.prev_state_ids = {} |
72 | 72 | def __setattr__(self, name, value): |
73 | 73 | setattr(self.txn, name, value) |
74 | 74 | |
75 | def __iter__(self): | |
76 | return self.txn.__iter__() | |
77 | ||
75 | 78 | def execute(self, sql, *args): |
76 | 79 | self._do_execute(self.txn.execute, sql, *args) |
77 | 80 | |
131 | 134 | |
132 | 135 | def interval(self, interval_duration, limit=3): |
133 | 136 | counters = [] |
134 | for name, (count, cum_time) in self.current_counters.items(): | |
137 | for name, (count, cum_time) in self.current_counters.iteritems(): | |
135 | 138 | prev_count, prev_time = self.previous_counters.get(name, (0, 0)) |
136 | 139 | counters.append(( |
137 | 140 | (cum_time - prev_time) / interval_duration, |
356 | 359 | """ |
357 | 360 | col_headers = list(intern(column[0]) for column in cursor.description) |
358 | 361 | results = list( |
359 | dict(zip(col_headers, row)) for row in cursor.fetchall() | |
362 | dict(zip(col_headers, row)) for row in cursor | |
360 | 363 | ) |
361 | 364 | return results |
362 | 365 | |
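The many `txn.fetchall()` → `for row in txn` changes in this diff rely on DB-API cursors being iterable; iterating streams rows one at a time instead of materialising an intermediate list. A self-contained sqlite3 illustration of the `cursor_to_dict` pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 30), ("bob", 25)])

cur = conn.execute("SELECT name, age FROM users ORDER BY name")
col_headers = [column[0] for column in cur.description]
# Iterating the cursor yields rows lazily; fetchall() would first
# build the whole result set as a list.
results = [dict(zip(col_headers, row)) for row in cur]
```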
564 | 567 | @staticmethod |
565 | 568 | def _simple_select_onecol_txn(txn, table, keyvalues, retcol): |
566 | 569 | if keyvalues: |
567 | where = "WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.keys()) | |
570 | where = "WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.iterkeys()) | |
568 | 571 | else: |
569 | 572 | where = "" |
570 | 573 | |
578 | 581 | |
579 | 582 | txn.execute(sql, keyvalues.values()) |
580 | 583 | |
581 | return [r[0] for r in txn.fetchall()] | |
584 | return [r[0] for r in txn] | |
582 | 585 | |
583 | 586 | def _simple_select_onecol(self, table, keyvalues, retcol, |
584 | 587 | desc="_simple_select_onecol"): |
711 | 714 | ) |
712 | 715 | values.extend(iterable) |
713 | 716 | |
714 | for key, value in keyvalues.items(): | |
717 | for key, value in keyvalues.iteritems(): | |
715 | 718 | clauses.append("%s = ?" % (key,)) |
716 | 719 | values.append(value) |
717 | 720 | |
752 | 755 | @staticmethod |
753 | 756 | def _simple_update_one_txn(txn, table, keyvalues, updatevalues): |
754 | 757 | if keyvalues: |
755 | where = "WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.keys()) | |
758 | where = "WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.iterkeys()) | |
756 | 759 | else: |
757 | 760 | where = "" |
758 | 761 | |
838 | 841 | ) |
839 | 842 | |
840 | 843 | return txn.execute(sql, keyvalues.values()) |
844 | ||
845 | def _simple_delete_many(self, table, column, iterable, keyvalues, desc): | |
846 | return self.runInteraction( | |
847 | desc, self._simple_delete_many_txn, table, column, iterable, keyvalues | |
848 | ) | |
849 | ||
850 | @staticmethod | |
851 | def _simple_delete_many_txn(txn, table, column, iterable, keyvalues): | |
852 | """Executes a DELETE query on the named table. | |
853 | ||
854 | Filters rows by whether the value of `column` is in `iterable`. | |
855 | ||
856 | Args: | |
857 | txn : Transaction object | |
858 | table : string giving the table name | |
859 | column : column name to test for inclusion against `iterable` | |
860 | iterable : list of values to match `column` against | |
861 | keyvalues : dict of column names and values to select the rows with | |
862 | """ | |
863 | if not iterable: | |
864 | return | |
865 | ||
866 | sql = "DELETE FROM %s" % table | |
867 | ||
868 | clauses = [] | |
869 | values = [] | |
870 | clauses.append( | |
871 | "%s IN (%s)" % (column, ",".join("?" for _ in iterable)) | |
872 | ) | |
873 | values.extend(iterable) | |
874 | ||
875 | for key, value in keyvalues.iteritems(): | |
876 | clauses.append("%s = ?" % (key,)) | |
877 | values.append(value) | |
878 | ||
879 | if clauses: | |
880 | sql = "%s WHERE %s" % ( | |
881 | sql, | |
882 | " AND ".join(clauses), | |
883 | ) | |
884 | return txn.execute(sql, values) | |
841 | 885 | |
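A runnable sketch of the `_simple_delete_many_txn` pattern against sqlite3 (the `devices` table and column names are illustrative, and a plain connection stands in for the transaction object):

```python
import sqlite3

def simple_delete_many_txn(txn, table, column, iterable, keyvalues):
    # Mirrors the helper above: delete rows whose `column` value is in
    # `iterable`, further filtered by exact-match `keyvalues`.
    if not iterable:
        return
    clauses = ["%s IN (%s)" % (column, ",".join("?" for _ in iterable))]
    values = list(iterable)
    for key, value in keyvalues.items():  # iteritems() on Python 2
        clauses.append("%s = ?" % (key,))
        values.append(value)
    sql = "DELETE FROM %s WHERE %s" % (table, " AND ".join(clauses))
    return txn.execute(sql, values)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (device_id TEXT, user_id TEXT)")
conn.executemany("INSERT INTO devices VALUES (?, ?)", [
    ("d1", "@a"), ("d2", "@a"), ("d3", "@b"),
])
# Only d1 matches both filters: d3 belongs to @b.
simple_delete_many_txn(conn, "devices", "device_id", ["d1", "d3"], {"user_id": "@a"})
remaining = [r[0] for r in conn.execute(
    "SELECT device_id FROM devices ORDER BY device_id")]
```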
842 | 886 | def _get_cache_dict(self, db_conn, table, entity_column, stream_column, |
843 | 887 | max_value, limit=100000): |
859 | 903 | |
860 | 904 | txn = db_conn.cursor() |
861 | 905 | txn.execute(sql, (int(max_value),)) |
862 | rows = txn.fetchall() | |
863 | txn.close() | |
864 | 906 | |
865 | 907 | cache = { |
866 | 908 | row[0]: int(row[1]) |
867 | for row in rows | |
909 | for row in txn | |
868 | 910 | } |
869 | 911 | |
912 | txn.close() | |
913 | ||
870 | 914 | if cache: |
871 | min_val = min(cache.values()) | |
915 | min_val = min(cache.itervalues()) | |
872 | 916 | else: |
873 | 917 | min_val = max_value |
874 | 918 |
181 | 181 | txn.execute(sql, (user_id, stream_id)) |
182 | 182 | |
183 | 183 | global_account_data = { |
184 | row[0]: json.loads(row[1]) for row in txn.fetchall() | |
184 | row[0]: json.loads(row[1]) for row in txn | |
185 | 185 | } |
186 | 186 | |
187 | 187 | sql = ( |
192 | 192 | txn.execute(sql, (user_id, stream_id)) |
193 | 193 | |
194 | 194 | account_data_by_room = {} |
195 | for row in txn.fetchall(): | |
195 | for row in txn: | |
196 | 196 | room_account_data = account_data_by_room.setdefault(row[0], {}) |
197 | 197 | room_account_data[row[1]] = json.loads(row[2]) |
198 | 198 |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | import synapse.util.async | |
14 | 15 | |
15 | 16 | from ._base import SQLBaseStore |
16 | 17 | from . import engines |
83 | 84 | self._background_update_performance = {} |
84 | 85 | self._background_update_queue = [] |
85 | 86 | self._background_update_handlers = {} |
86 | self._background_update_timer = None | |
87 | 87 | |
88 | 88 | @defer.inlineCallbacks |
89 | 89 | def start_doing_background_updates(self): |
90 | assert self._background_update_timer is None, \ | |
91 | "background updates already running" | |
92 | ||
93 | 90 | logger.info("Starting background schema updates") |
94 | 91 | |
95 | 92 | while True: |
96 | sleep = defer.Deferred() | |
97 | self._background_update_timer = self._clock.call_later( | |
98 | self.BACKGROUND_UPDATE_INTERVAL_MS / 1000., sleep.callback, None | |
99 | ) | |
100 | try: | |
101 | yield sleep | |
102 | finally: | |
103 | self._background_update_timer = None | |
93 | yield synapse.util.async.sleep( | |
94 | self.BACKGROUND_UPDATE_INTERVAL_MS / 1000.) | |
104 | 95 | |
105 | 96 | try: |
106 | 97 | result = yield self.do_next_background_update( |
177 | 177 | ) |
178 | 178 | txn.execute(sql, (user_id,)) |
179 | 179 | message_json = ujson.dumps(messages_by_device["*"]) |
180 | for row in txn.fetchall(): | |
180 | for row in txn: | |
181 | 181 | # Add the message for all devices for this user on this |
182 | 182 | # server. |
183 | 183 | device = row[0] |
194 | 194 | # TODO: Maybe this needs to be done in batches if there are |
195 | 195 | # too many local devices for a given user. |
196 | 196 | txn.execute(sql, [user_id] + devices) |
197 | for row in txn.fetchall(): | |
197 | for row in txn: | |
198 | 198 | # Only insert into the local inbox if the device exists on |
199 | 199 | # this server |
200 | 200 | device = row[0] |
250 | 250 | user_id, device_id, last_stream_id, current_stream_id, limit |
251 | 251 | )) |
252 | 252 | messages = [] |
253 | for row in txn.fetchall(): | |
253 | for row in txn: | |
254 | 254 | stream_pos = row[0] |
255 | 255 | messages.append(ujson.loads(row[1])) |
256 | 256 | if len(messages) < limit: |
339 | 339 | " ORDER BY stream_id ASC" |
340 | 340 | ) |
341 | 341 | txn.execute(sql, (last_pos, upper_pos)) |
342 | rows.extend(txn.fetchall()) | |
342 | rows.extend(txn) | |
343 | 343 | |
344 | 344 | return rows |
345 | 345 | |
356 | 356 | """ |
357 | 357 | Args: |
358 | 358 | destination(str): The name of the remote server. |
359 | last_stream_id(int): The last position of the device message stream | |
359 | last_stream_id(int|long): The last position of the device message stream | |
360 | 360 | that the server sent up to. |
361 | current_stream_id(int): The current position of the device | |
361 | current_stream_id(int|long): The current position of the device | |
362 | 362 | message stream. |
363 | 363 | Returns: |
364 | Deferred ([dict], int): List of messages for the device and where | |
364 | Deferred ([dict], int|long): List of messages for the device and where | |
365 | 365 | in the stream the messages got to. |
366 | 366 | """ |
367 | 367 | |
383 | 383 | destination, last_stream_id, current_stream_id, limit |
384 | 384 | )) |
385 | 385 | messages = [] |
386 | for row in txn.fetchall(): | |
386 | for row in txn: | |
387 | 387 | stream_pos = row[0] |
388 | 388 | messages.append(ujson.loads(row[1])) |
389 | 389 | if len(messages) < limit: |
107 | 107 | desc="delete_device", |
108 | 108 | ) |
109 | 109 | |
110 | def delete_devices(self, user_id, device_ids): | |
111 | """Deletes several devices. | |
112 | ||
113 | Args: | |
114 | user_id (str): The ID of the user which owns the devices | |
115 | device_ids (list): The IDs of the devices to delete | |
116 | Returns: | |
117 | defer.Deferred | |
118 | """ | |
119 | return self._simple_delete_many( | |
120 | table="devices", | |
121 | column="device_id", | |
122 | iterable=device_ids, | |
123 | keyvalues={"user_id": user_id}, | |
124 | desc="delete_devices", | |
125 | ) | |
126 | ||
110 | 127 | def update_device(self, user_id, device_id, new_display_name=None): |
111 | 128 | """Update a device. |
112 | 129 | |
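The new `delete_devices` API deletes several devices in one call via `_simple_delete_many`, rather than issuing one `DELETE` per device. The underlying SQL shape is a single statement with an `IN` clause; a sketch with `sqlite3` (schema and the helper name are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (user_id TEXT, device_id TEXT)")
conn.executemany(
    "INSERT INTO devices VALUES (?, ?)",
    [("@alice:hs", "D1"), ("@alice:hs", "D2"), ("@alice:hs", "D3"),
     ("@bob:hs", "D1")],
)


def delete_devices(txn, user_id, device_ids):
    # One statement with an IN clause, rather than a DELETE per device.
    sql = "DELETE FROM devices WHERE user_id = ? AND device_id IN (%s)" % (
        ",".join("?" * len(device_ids)),
    )
    txn.execute(sql, [user_id] + list(device_ids))


delete_devices(conn, "@alice:hs", ["D1", "D3"])
remaining = sorted(conn.execute("SELECT user_id, device_id FROM devices"))
```

Keeping the `user_id` in the `keyvalues` filter means a user can only delete their own devices, however many IDs they pass.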
290 | 307 | """Get stream of updates to send to remote servers |
291 | 308 | |
292 | 309 | Returns: |
293 | (now_stream_id, [ { updates }, .. ]) | |
310 | (int, list[dict]): current stream id and list of updates | |
294 | 311 | """ |
295 | 312 | now_stream_id = self._device_list_id_gen.get_current_token() |
296 | 313 | |
311 | 328 | SELECT user_id, device_id, max(stream_id) FROM device_lists_outbound_pokes |
312 | 329 | WHERE destination = ? AND ? < stream_id AND stream_id <= ? AND sent = ? |
313 | 330 | GROUP BY user_id, device_id |
331 | LIMIT 20 | |
314 | 332 | """ |
315 | 333 | txn.execute( |
316 | 334 | sql, (destination, from_stream_id, now_stream_id, False) |
317 | 335 | ) |
318 | rows = txn.fetchall() | |
319 | ||
320 | if not rows: | |
336 | ||
337 | # maps (user_id, device_id) -> stream_id | |
338 | query_map = {(r[0], r[1]): r[2] for r in txn} | |
339 | if not query_map: | |
321 | 340 | return (now_stream_id, []) |
322 | 341 | |
323 | # maps (user_id, device_id) -> stream_id | |
324 | query_map = {(r[0], r[1]): r[2] for r in rows} | |
342 | if len(query_map) >= 20: | |
343 | now_stream_id = max(stream_id for stream_id in query_map.itervalues()) | |
344 | ||
325 | 345 | devices = self._get_e2e_device_keys_txn( |
326 | 346 | txn, query_map.keys(), include_all_devices=True |
327 | 347 | ) |
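The `LIMIT 20` added to the outbound-pokes query above caps how many device-list updates are sent per batch; when a full batch comes back, the stream token is only advanced as far as the rows actually picked up, so the remainder is sent next time. A sketch of that bookkeeping in plain Python (the poke tuples are made up for the demo):

```python
# Each unsent poke is (user_id, device_id, stream_id); the real query
# keeps only max(stream_id) per (user, device), capped at BATCH_LIMIT.
BATCH_LIMIT = 20

pokes = [("@u%d:hs" % i, "DEV", i) for i in range(1, 31)]
now_stream_id = 100

query_map = {}
for user_id, device_id, stream_id in pokes[:BATCH_LIMIT]:
    key = (user_id, device_id)
    query_map[key] = max(stream_id, query_map.get(key, 0))

if len(query_map) >= BATCH_LIMIT:
    # A full batch may mean more pokes remain, so only advance the
    # stream token as far as the rows we actually consumed.
    now_stream_id = max(query_map.values())
```

Without the cap on the advanced token, a truncated batch would silently drop the pokes beyond the limit.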
13 | 13 | # limitations under the License. |
14 | 14 | from twisted.internet import defer |
15 | 15 | |
16 | from synapse.api.errors import SynapseError | |
17 | ||
16 | 18 | from canonicaljson import encode_canonical_json |
17 | 19 | import ujson as json |
18 | 20 | |
119 | 121 | |
120 | 122 | return result |
121 | 123 | |
124 | @defer.inlineCallbacks | |
122 | 125 | def add_e2e_one_time_keys(self, user_id, device_id, time_now, key_list): |
126 | """Insert some new one time keys for a device. | |
127 | ||
128 | Checks if any of the keys are already inserted; if they are, check | |
129 | that they match. If they don't match, we raise an error. | |
130 | """ | |
131 | ||
132 | # First we check if we have already persisted any of the keys. | |
133 | rows = yield self._simple_select_many_batch( | |
134 | table="e2e_one_time_keys_json", | |
135 | column="key_id", | |
136 | iterable=[key_id for _, key_id, _ in key_list], | |
137 | retcols=("algorithm", "key_id", "key_json",), | |
138 | keyvalues={ | |
139 | "user_id": user_id, | |
140 | "device_id": device_id, | |
141 | }, | |
142 | desc="add_e2e_one_time_keys_check", | |
143 | ) | |
144 | ||
145 | existing_key_map = { | |
146 | (row["algorithm"], row["key_id"]): row["key_json"] for row in rows | |
147 | } | |
148 | ||
149 | new_keys = [] # Keys that we need to insert | |
150 | for algorithm, key_id, json_bytes in key_list: | |
151 | ex_bytes = existing_key_map.get((algorithm, key_id), None) | |
152 | if ex_bytes: | |
153 | if json_bytes != ex_bytes: | |
154 | raise SynapseError( | |
155 | 400, "One time key with key_id %r already exists" % (key_id,) | |
156 | ) | |
157 | else: | |
158 | new_keys.append((algorithm, key_id, json_bytes)) | |
159 | ||
123 | 160 | def _add_e2e_one_time_keys(txn): |
124 | for (algorithm, key_id, json_bytes) in key_list: | |
125 | self._simple_upsert_txn( | |
126 | txn, table="e2e_one_time_keys_json", | |
127 | keyvalues={ | |
161 | # We are protected from race between lookup and insertion due to | |
162 | # a unique constraint. If there is a race of two calls to | |
163 | # `add_e2e_one_time_keys` then they'll conflict and we will only | |
164 | # insert one set. | |
165 | self._simple_insert_many_txn( | |
166 | txn, table="e2e_one_time_keys_json", | |
167 | values=[ | |
168 | { | |
128 | 169 | "user_id": user_id, |
129 | 170 | "device_id": device_id, |
130 | 171 | "algorithm": algorithm, |
131 | 172 | "key_id": key_id, |
132 | }, | |
133 | values={ | |
134 | 173 | "ts_added_ms": time_now, |
135 | 174 | "key_json": json_bytes, |
136 | 175 | } |
137 | ) | |
138 | return self.runInteraction( | |
139 | "add_e2e_one_time_keys", _add_e2e_one_time_keys | |
176 | for algorithm, key_id, json_bytes in new_keys | |
177 | ], | |
178 | ) | |
179 | yield self.runInteraction( | |
180 | "add_e2e_one_time_keys_insert", _add_e2e_one_time_keys | |
140 | 181 | ) |
141 | 182 | |
142 | 183 | def count_e2e_one_time_keys(self, user_id, device_id): |
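The rewritten `add_e2e_one_time_keys` above (PR #2053) first looks up which keys already exist, raises on a mismatch, then inserts only the genuinely new keys in a single batch instead of upserting each one (which locked the table). The comment in the diff notes the race between lookup and insert is resolved by a unique constraint. A self-contained sketch with `sqlite3` (error type and schema details are simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE e2e_one_time_keys_json ("
    " user_id TEXT, device_id TEXT, algorithm TEXT, key_id TEXT,"
    " key_json TEXT,"
    " UNIQUE (user_id, device_id, algorithm, key_id))"
)


def add_one_time_keys(txn, user_id, device_id, key_list):
    # Fetch keys we already hold for this device.
    rows = txn.execute(
        "SELECT algorithm, key_id, key_json FROM e2e_one_time_keys_json"
        " WHERE user_id = ? AND device_id = ?",
        (user_id, device_id),
    ).fetchall()
    existing = {(r[0], r[1]): r[2] for r in rows}

    new_keys = []
    for algorithm, key_id, json_bytes in key_list:
        ex = existing.get((algorithm, key_id))
        if ex is not None:
            if ex != json_bytes:
                # Same key_id but different content is an error.
                raise ValueError("one time key %r already exists" % (key_id,))
        else:
            new_keys.append((user_id, device_id, algorithm, key_id, json_bytes))

    # Batch-insert only the new keys; the UNIQUE constraint guards the
    # window between the lookup above and this insert.
    txn.executemany(
        "INSERT INTO e2e_one_time_keys_json VALUES (?, ?, ?, ?, ?)", new_keys
    )


add_one_time_keys(conn, "@alice:hs", "D1", [("curve25519", "AAAA", "{}")])
# Re-uploading an identical key is a no-op rather than an error.
add_one_time_keys(conn, "@alice:hs", "D1", [("curve25519", "AAAA", "{}")])
key_count = conn.execute(
    "SELECT count(*) FROM e2e_one_time_keys_json"
).fetchone()[0]
```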
152 | 193 | ) |
153 | 194 | txn.execute(sql, (user_id, device_id)) |
154 | 195 | result = {} |
155 | for algorithm, key_count in txn.fetchall(): | |
196 | for algorithm, key_count in txn: | |
156 | 197 | result[algorithm] = key_count |
157 | 198 | return result |
158 | 199 | return self.runInteraction( |
173 | 214 | user_result = result.setdefault(user_id, {}) |
174 | 215 | device_result = user_result.setdefault(device_id, {}) |
175 | 216 | txn.execute(sql, (user_id, device_id, algorithm)) |
176 | for key_id, key_json in txn.fetchall(): | |
217 | for key_id, key_json in txn: | |
177 | 218 | device_result[algorithm + ":" + key_id] = key_json |
178 | 219 | delete.append((user_id, device_id, algorithm, key_id)) |
179 | 220 | sql = ( |
73 | 73 | base_sql % (",".join(["?"] * len(chunk)),), |
74 | 74 | chunk |
75 | 75 | ) |
76 | new_front.update([r[0] for r in txn.fetchall()]) | |
76 | new_front.update([r[0] for r in txn]) | |
77 | 77 | |
78 | 78 | new_front -= results |
79 | 79 | |
109 | 109 | |
110 | 110 | txn.execute(sql, (room_id, False,)) |
111 | 111 | |
112 | return dict(txn.fetchall()) | |
112 | return dict(txn) | |
113 | 113 | |
114 | 114 | def _get_oldest_events_in_room_txn(self, txn, room_id): |
115 | 115 | return self._simple_select_onecol_txn( |
200 | 200 | def _update_min_depth_for_room_txn(self, txn, room_id, depth): |
201 | 201 | min_depth = self._get_min_depth_interaction(txn, room_id) |
202 | 202 | |
203 | do_insert = depth < min_depth if min_depth else True | |
204 | ||
205 | if do_insert: | |
206 | self._simple_upsert_txn( | |
207 | txn, | |
208 | table="room_depth", | |
209 | keyvalues={ | |
210 | "room_id": room_id, | |
211 | }, | |
212 | values={ | |
213 | "min_depth": depth, | |
214 | }, | |
215 | ) | |
203 | if min_depth and depth >= min_depth: | |
204 | return | |
205 | ||
206 | self._simple_upsert_txn( | |
207 | txn, | |
208 | table="room_depth", | |
209 | keyvalues={ | |
210 | "room_id": room_id, | |
211 | }, | |
212 | values={ | |
213 | "min_depth": depth, | |
214 | }, | |
215 | ) | |
216 | 216 | |
217 | 217 | def _handle_mult_prev_events(self, txn, events): |
218 | 218 | """ |
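The `_update_min_depth_for_room_txn` hunk above replaces a `do_insert` flag with a guard clause: return early unless the new depth is strictly below the stored minimum, then upsert unconditionally. The behaviour is unchanged; a sketch of the rewritten shape (sqlite's `INSERT OR REPLACE` stands in for Synapse's `_simple_upsert_txn`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE room_depth (room_id TEXT PRIMARY KEY, min_depth INTEGER)"
)


def update_min_depth(txn, room_id, depth):
    # Guard clause mirrors the refactor: bail out early unless the new
    # depth is strictly smaller than what is already stored.
    row = txn.execute(
        "SELECT min_depth FROM room_depth WHERE room_id = ?", (room_id,)
    ).fetchone()
    min_depth = row[0] if row else None
    if min_depth is not None and depth >= min_depth:
        return
    txn.execute(
        "INSERT OR REPLACE INTO room_depth (room_id, min_depth) VALUES (?, ?)",
        (room_id, depth),
    )


update_min_depth(conn, "!room:hs", 10)
update_min_depth(conn, "!room:hs", 15)  # ignored: not a new minimum
update_min_depth(conn, "!room:hs", 5)
stored = conn.execute("SELECT min_depth FROM room_depth").fetchone()[0]
```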
333 | 333 | |
334 | 334 | def get_forward_extremeties_for_room_txn(txn): |
335 | 335 | txn.execute(sql, (stream_ordering, room_id)) |
336 | rows = txn.fetchall() | |
337 | return [event_id for event_id, in rows] | |
336 | return [event_id for event_id, in txn] | |
338 | 337 | |
339 | 338 | return self.runInteraction( |
340 | 339 | "get_forward_extremeties_for_room", |
435 | 434 | (room_id, event_id, False, limit - len(event_results)) |
436 | 435 | ) |
437 | 436 | |
438 | for row in txn.fetchall(): | |
437 | for row in txn: | |
439 | 438 | if row[1] not in event_results: |
440 | 439 | queue.put((-row[0], row[1])) |
441 | 440 | |
481 | 480 | (room_id, event_id, False, limit - len(event_results)) |
482 | 481 | ) |
483 | 482 | |
484 | for e_id, in txn.fetchall(): | |
483 | for e_id, in txn: | |
485 | 484 | new_front.add(e_id) |
486 | 485 | |
487 | 486 | new_front -= earliest_events |
205 | 205 | " stream_ordering >= ? AND stream_ordering <= ?" |
206 | 206 | ) |
207 | 207 | txn.execute(sql, (min_stream_ordering, max_stream_ordering)) |
208 | return [r[0] for r in txn.fetchall()] | |
208 | return [r[0] for r in txn] | |
209 | 209 | ret = yield self.runInteraction("get_push_action_users_in_range", f) |
210 | 210 | defer.returnValue(ret) |
211 | 211 |
33 | 33 | from collections import deque, namedtuple, OrderedDict |
34 | 34 | from functools import wraps |
35 | 35 | |
36 | import synapse | |
37 | 36 | import synapse.metrics |
38 | ||
39 | 37 | |
40 | 38 | import logging |
41 | 39 | import math |
42 | 40 | import ujson as json |
41 | ||
42 | # these are only included to make the type annotations work | |
43 | from synapse.events import EventBase # noqa: F401 | |
44 | from synapse.events.snapshot import EventContext # noqa: F401 | |
43 | 45 | |
44 | 46 | logger = logging.getLogger(__name__) |
45 | 47 | |
81 | 83 | |
82 | 84 | def add_to_queue(self, room_id, events_and_contexts, backfilled): |
83 | 85 | """Add events to the queue, with the given persist_event options. |
86 | ||
87 | Args: | |
88 | room_id (str): | |
89 | events_and_contexts (list[(EventBase, EventContext)]): | |
90 | backfilled (bool): | |
84 | 91 | """ |
85 | 92 | queue = self._event_persist_queues.setdefault(room_id, deque()) |
86 | 93 | if queue: |
209 | 216 | partitioned.setdefault(event.room_id, []).append((event, ctx)) |
210 | 217 | |
211 | 218 | deferreds = [] |
212 | for room_id, evs_ctxs in partitioned.items(): | |
219 | for room_id, evs_ctxs in partitioned.iteritems(): | |
213 | 220 | d = preserve_fn(self._event_persist_queue.add_to_queue)( |
214 | 221 | room_id, evs_ctxs, |
215 | 222 | backfilled=backfilled, |
216 | 223 | ) |
217 | 224 | deferreds.append(d) |
218 | 225 | |
219 | for room_id in partitioned.keys(): | |
226 | for room_id in partitioned: | |
220 | 227 | self._maybe_start_persisting(room_id) |
221 | 228 | |
222 | 229 | return preserve_context_over_deferred( |
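The persistence path above partitions incoming events by `room_id` and appends each group to that room's persist queue, so ordering is preserved per room while different rooms can be flushed independently. The core partitioning step, sketched in plain Python (event tuples here are made up):

```python
from collections import deque

# Events arrive interleaved across rooms; group them by room_id, then
# append each room's batch to that room's own persist queue.
events = [
    ("!a:hs", "$1"), ("!b:hs", "$2"), ("!a:hs", "$3"),
]

persist_queues = {}

partitioned = {}
for room_id, event_id in events:
    partitioned.setdefault(room_id, []).append(event_id)

for room_id, evs in partitioned.items():
    queue = persist_queues.setdefault(room_id, deque())
    queue.append(evs)
```

(The `.items()` → `.iteritems()` changes in this hunk are a Python 2 memory optimisation: `iteritems()` avoids building the full list of pairs.)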
226 | 233 | @defer.inlineCallbacks |
227 | 234 | @log_function |
228 | 235 | def persist_event(self, event, context, backfilled=False): |
236 | """ | |
237 | ||
238 | Args: | |
239 | event (EventBase): | |
240 | context (EventContext): | |
241 | backfilled (bool): | |
242 | ||
243 | Returns: | |
244 | Deferred: resolves to (int, int): the stream ordering of ``event``, | |
245 | and the stream ordering of the latest persisted event | |
246 | """ | |
229 | 247 | deferred = self._event_persist_queue.add_to_queue( |
230 | 248 | event.room_id, [(event, context)], |
231 | 249 | backfilled=backfilled, |
252 | 270 | @defer.inlineCallbacks |
253 | 271 | def _persist_events(self, events_and_contexts, backfilled=False, |
254 | 272 | delete_existing=False): |
273 | """Persist events to db | |
274 | ||
275 | Args: | |
276 | events_and_contexts (list[(EventBase, EventContext)]): | |
277 | backfilled (bool): | |
278 | delete_existing (bool): | |
279 | ||
280 | Returns: | |
281 | Deferred: resolves when the events have been persisted | |
282 | """ | |
255 | 283 | if not events_and_contexts: |
256 | 284 | return |
257 | 285 | |
294 | 322 | (event, context) |
295 | 323 | ) |
296 | 324 | |
297 | for room_id, ev_ctx_rm in events_by_room.items(): | |
325 | for room_id, ev_ctx_rm in events_by_room.iteritems(): | |
298 | 326 | # Work out new extremities by recursively adding and removing |
299 | 327 | # the new events. |
300 | 328 | latest_event_ids = yield self.get_latest_event_ids_in_room( |
399 | 427 | # Now we need to work out the different state sets for |
400 | 428 | # each state extremities |
401 | 429 | state_sets = [] |
430 | state_groups = set() | |
402 | 431 | missing_event_ids = [] |
403 | 432 | was_updated = False |
404 | 433 | for event_id in new_latest_event_ids: |
408 | 437 | if event_id == ev.event_id: |
409 | 438 | if ctx.current_state_ids is None: |
410 | 439 | raise Exception("Unknown current state") |
411 | state_sets.append(ctx.current_state_ids) | |
412 | if ctx.delta_ids or hasattr(ev, "state_key"): | |
413 | was_updated = True | |
440 | ||
441 | # If we've already seen the state group don't bother adding | |
442 | # it to the state sets again | |
443 | if ctx.state_group not in state_groups: | |
444 | state_sets.append(ctx.current_state_ids) | |
445 | if ctx.delta_ids or hasattr(ev, "state_key"): | |
446 | was_updated = True | |
447 | if ctx.state_group: | |
448 | # Add this as a seen state group (if it has a state | |
449 | # group) | |
450 | state_groups.add(ctx.state_group) | |
414 | 451 | break |
415 | 452 | else: |
416 | 453 | # If we couldn't find it, then we'll need to pull |
424 | 461 | missing_event_ids, |
425 | 462 | ) |
426 | 463 | |
427 | groups = set(event_to_groups.values()) | |
428 | group_to_state = yield self._get_state_for_groups(groups) | |
429 | ||
430 | state_sets.extend(group_to_state.values()) | |
464 | groups = set(event_to_groups.itervalues()) - state_groups | |
465 | ||
466 | if groups: | |
467 | group_to_state = yield self._get_state_for_groups(groups) | |
468 | state_sets.extend(group_to_state.itervalues()) | |
431 | 469 | |
432 | 470 | if not new_latest_event_ids: |
433 | 471 | current_state = {} |
434 | 472 | elif was_updated: |
435 | current_state = yield resolve_events( | |
436 | state_sets, | |
437 | state_map_factory=lambda ev_ids: self.get_events( | |
438 | ev_ids, get_prev_content=False, check_redacted=False, | |
439 | ), | |
440 | ) | |
473 | if len(state_sets) == 1: | |
474 | # If there is only one state set, then we know what the current | |
475 | # state is. | |
476 | current_state = state_sets[0] | |
477 | else: | |
478 | # We work out the current state by passing the state sets to the | |
479 | # state resolution algorithm. It may ask for some events, including | |
480 | # the events we have yet to persist, so we need a slightly more | |
481 | # complicated event lookup function than simply looking the events | |
482 | # up in the db. | |
483 | events_map = {ev.event_id: ev for ev, _ in events_context} | |
484 | ||
485 | @defer.inlineCallbacks | |
486 | def get_events(ev_ids): | |
487 | # We get the events by first looking at the list of events we | |
488 | # are trying to persist, and then fetching the rest from the DB. | |
489 | db = [] | |
490 | to_return = {} | |
491 | for ev_id in ev_ids: | |
492 | ev = events_map.get(ev_id, None) | |
493 | if ev: | |
494 | to_return[ev_id] = ev | |
495 | else: | |
496 | db.append(ev_id) | |
497 | ||
498 | if db: | |
499 | evs = yield self.get_events( | |
500 | ev_ids, get_prev_content=False, check_redacted=False, | |
501 | ) | |
502 | to_return.update(evs) | |
503 | defer.returnValue(to_return) | |
504 | ||
505 | current_state = yield resolve_events( | |
506 | state_sets, | |
507 | state_map_factory=get_events, | |
508 | ) | |
441 | 509 | else: |
442 | 510 | return |
443 | 511 | |
444 | existing_state_rows = yield self._simple_select_list( | |
445 | table="current_state_events", | |
446 | keyvalues={"room_id": room_id}, | |
447 | retcols=["event_id", "type", "state_key"], | |
448 | desc="_calculate_state_delta", | |
449 | ) | |
450 | ||
451 | existing_events = set(row["event_id"] for row in existing_state_rows) | |
512 | existing_state = yield self.get_current_state_ids(room_id) | |
513 | ||
514 | existing_events = set(existing_state.itervalues()) | |
452 | 515 | new_events = set(ev_id for ev_id in current_state.itervalues()) |
453 | 516 | changed_events = existing_events ^ new_events |
454 | 517 | |
456 | 519 | return |
457 | 520 | |
458 | 521 | to_delete = { |
459 | (row["type"], row["state_key"]): row["event_id"] | |
460 | for row in existing_state_rows | |
461 | if row["event_id"] in changed_events | |
522 | key: ev_id for key, ev_id in existing_state.iteritems() | |
523 | if ev_id in changed_events | |
462 | 524 | } |
463 | 525 | events_to_insert = (new_events - existing_events) |
464 | 526 | to_insert = { |
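The new `get_events` factory in the hunk above exists because state resolution may ask for events that are still in the batch being persisted, so the lookup checks the in-memory `events_map` first and only falls back to the database for the rest. A synchronous sketch of that overlay (the dicts stand in for real events and the DB):

```python
# In-flight events, keyed by event_id, overlaying a backing store.
# Names and event shapes here are illustrative, not Synapse's API.
events_map = {"$new:hs": {"event_id": "$new:hs", "type": "m.room.message"}}
fake_db = {"$old:hs": {"event_id": "$old:hs", "type": "m.room.create"}}


def get_events(ev_ids):
    to_return = {}
    need_db = []
    for ev_id in ev_ids:
        ev = events_map.get(ev_id)
        if ev is not None:
            to_return[ev_id] = ev
        else:
            need_db.append(ev_id)
    # Only hit the backing store for events not in the in-flight batch.
    to_return.update({ev_id: fake_db[ev_id] for ev_id in need_db})
    return to_return


resolved = get_events(["$new:hs", "$old:hs"])
```

The single-state-set short-circuit just above it is the companion optimisation: with one state set there is nothing to resolve, so the current state is known immediately.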
534 | 596 | and the rejections table. Things reading from those table will need to check |
535 | 597 | whether the event was rejected. |
536 | 598 | |
537 | If delete_existing is True then existing events will be purged from the | |
538 | database before insertion. This is useful when retrying due to IntegrityError. | |
539 | """ | |
599 | Args: | |
600 | txn (twisted.enterprise.adbapi.Connection): db connection | |
601 | events_and_contexts (list[(EventBase, EventContext)]): | |
602 | events to persist | |
603 | backfilled (bool): True if the events were backfilled | |
604 | delete_existing (bool): True to purge existing table rows for the | |
605 | events from the database. This is useful when retrying due to | |
606 | IntegrityError. | |
607 | current_state_for_room (dict[str, (list[str], list[str])]): | |
608 | The current-state delta for each room. For each room, a tuple | |
609 | (to_delete, to_insert), being a list of event ids to be removed | |
610 | from the current state, and a list of event ids to be added to | |
611 | the current state. | |
612 | new_forward_extremeties (dict[str, list[str]]): | |
613 | The new forward extremities for each room. For each room, a | |
614 | list of the event ids which are the forward extremities. | |
615 | ||
616 | """ | |
617 | self._update_current_state_txn(txn, current_state_for_room) | |
618 | ||
540 | 619 | max_stream_order = events_and_contexts[-1][0].internal_metadata.stream_ordering |
541 | for room_id, current_state_tuple in current_state_for_room.iteritems(): | |
620 | self._update_forward_extremities_txn( | |
621 | txn, | |
622 | new_forward_extremities=new_forward_extremeties, | |
623 | max_stream_order=max_stream_order, | |
624 | ) | |
625 | ||
626 | # Ensure that we don't have the same event twice. | |
627 | events_and_contexts = self._filter_events_and_contexts_for_duplicates( | |
628 | events_and_contexts, | |
629 | ) | |
630 | ||
631 | self._update_room_depths_txn( | |
632 | txn, | |
633 | events_and_contexts=events_and_contexts, | |
634 | backfilled=backfilled, | |
635 | ) | |
636 | ||
637 | # _update_outliers_txn filters out any events which have already been | |
638 | # persisted, and returns the filtered list. | |
639 | events_and_contexts = self._update_outliers_txn( | |
640 | txn, | |
641 | events_and_contexts=events_and_contexts, | |
642 | ) | |
643 | ||
644 | # From this point onwards the events are only events that we haven't | |
645 | # seen before. | |
646 | ||
647 | if delete_existing: | |
648 | # For paranoia reasons, we go and delete all the existing entries | |
649 | # for these events so we can reinsert them. | |
650 | # This gets around any problems with some tables already having | |
651 | # entries. | |
652 | self._delete_existing_rows_txn( | |
653 | txn, | |
654 | events_and_contexts=events_and_contexts, | |
655 | ) | |
656 | ||
657 | self._store_event_txn( | |
658 | txn, | |
659 | events_and_contexts=events_and_contexts, | |
660 | ) | |
661 | ||
662 | # Insert into the state_groups, state_groups_state, and | |
663 | # event_to_state_groups tables. | |
664 | self._store_mult_state_groups_txn(txn, events_and_contexts) | |
665 | ||
666 | # _store_rejected_events_txn filters out any events which were | |
667 | # rejected, and returns the filtered list. | |
668 | events_and_contexts = self._store_rejected_events_txn( | |
669 | txn, | |
670 | events_and_contexts=events_and_contexts, | |
671 | ) | |
672 | ||
673 | # From this point onwards the events are only ones that weren't | |
674 | # rejected. | |
675 | ||
676 | self._update_metadata_tables_txn( | |
677 | txn, | |
678 | events_and_contexts=events_and_contexts, | |
679 | backfilled=backfilled, | |
680 | ) | |
681 | ||
682 | def _update_current_state_txn(self, txn, state_delta_by_room): | |
683 | for room_id, current_state_tuple in state_delta_by_room.iteritems(): | |
542 | 684 | to_delete, to_insert = current_state_tuple |
543 | 685 | txn.executemany( |
544 | 686 | "DELETE FROM current_state_events WHERE event_id = ?", |
584 | 726 | txn, self.get_users_in_room, (room_id,) |
585 | 727 | ) |
586 | 728 | |
587 | for room_id, new_extrem in new_forward_extremeties.items(): | |
729 | self._invalidate_cache_and_stream( | |
730 | txn, self.get_current_state_ids, (room_id,) | |
731 | ) | |
732 | ||
733 | def _update_forward_extremities_txn(self, txn, new_forward_extremities, | |
734 | max_stream_order): | |
735 | for room_id, new_extrem in new_forward_extremities.iteritems(): | |
588 | 736 | self._simple_delete_txn( |
589 | 737 | txn, |
590 | 738 | table="event_forward_extremities", |
602 | 750 | "event_id": ev_id, |
603 | 751 | "room_id": room_id, |
604 | 752 | } |
605 | for room_id, new_extrem in new_forward_extremeties.items() | |
753 | for room_id, new_extrem in new_forward_extremities.iteritems() | |
606 | 754 | for ev_id in new_extrem |
607 | 755 | ], |
608 | 756 | ) |
619 | 767 | "event_id": event_id, |
620 | 768 | "stream_ordering": max_stream_order, |
621 | 769 | } |
622 | for room_id, new_extrem in new_forward_extremeties.items() | |
770 | for room_id, new_extrem in new_forward_extremities.iteritems() | |
623 | 771 | for event_id in new_extrem |
624 | 772 | ] |
625 | 773 | ) |
626 | 774 | |
627 | # Ensure that we don't have the same event twice. | |
628 | # Pick the earliest non-outlier if there is one, else the earliest one. | |
775 | @classmethod | |
776 | def _filter_events_and_contexts_for_duplicates(cls, events_and_contexts): | |
777 | """Ensure that we don't have the same event twice. | |
778 | ||
779 | Pick the earliest non-outlier if there is one, else the earliest one. | |
780 | ||
781 | Args: | |
782 | events_and_contexts (list[(EventBase, EventContext)]): | |
783 | Returns: | |
784 | list[(EventBase, EventContext)]: filtered list | |
785 | """ | |
629 | 786 | new_events_and_contexts = OrderedDict() |
630 | 787 | for event, context in events_and_contexts: |
631 | 788 | prev_event_context = new_events_and_contexts.get(event.event_id) |
638 | 795 | new_events_and_contexts[event.event_id] = (event, context) |
639 | 796 | else: |
640 | 797 | new_events_and_contexts[event.event_id] = (event, context) |
641 | ||
642 | events_and_contexts = new_events_and_contexts.values() | |
643 | ||
798 | return new_events_and_contexts.values() | |
799 | ||
800 | def _update_room_depths_txn(self, txn, events_and_contexts, backfilled): | |
801 | """Update min_depth for each room | |
802 | ||
803 | Args: | |
804 | txn (twisted.enterprise.adbapi.Connection): db connection | |
805 | events_and_contexts (list[(EventBase, EventContext)]): events | |
806 | we are persisting | |
807 | backfilled (bool): True if the events were backfilled | |
808 | """ | |
644 | 809 | depth_updates = {} |
645 | 810 | for event, context in events_and_contexts: |
646 | 811 | # Remove any existing cache entries for the event_ids |
656 | 821 | event.depth, depth_updates.get(event.room_id, event.depth) |
657 | 822 | ) |
658 | 823 | |
659 | for room_id, depth in depth_updates.items(): | |
824 | for room_id, depth in depth_updates.iteritems(): | |
660 | 825 | self._update_min_depth_for_room_txn(txn, room_id, depth) |
661 | 826 | |
827 | def _update_outliers_txn(self, txn, events_and_contexts): | |
828 | """Update any outliers with new event info. | |
829 | ||
830 | This turns outliers into ex-outliers (unless the new event was | |
831 | rejected). | |
832 | ||
833 | Args: | |
834 | txn (twisted.enterprise.adbapi.Connection): db connection | |
835 | events_and_contexts (list[(EventBase, EventContext)]): events | |
836 | we are persisting | |
837 | ||
838 | Returns: | |
839 | list[(EventBase, EventContext)] new list, without events which | |
840 | are already in the events table. | |
841 | """ | |
662 | 842 | txn.execute( |
663 | 843 | "SELECT event_id, outlier FROM events WHERE event_id in (%s)" % ( |
664 | 844 | ",".join(["?"] * len(events_and_contexts)), |
668 | 848 | |
669 | 849 | have_persisted = { |
670 | 850 | event_id: outlier |
671 | for event_id, outlier in txn.fetchall() | |
851 | for event_id, outlier in txn | |
672 | 852 | } |
673 | 853 | |
674 | 854 | to_remove = set() |
675 | 855 | for event, context in events_and_contexts: |
856 | if event.event_id not in have_persisted: | |
857 | continue | |
858 | ||
859 | to_remove.add(event) | |
860 | ||
676 | 861 | if context.rejected: |
677 | 862 | # If the event is rejected then we don't care if the event |
678 | 863 | # was an outlier or not. |
679 | if event.event_id in have_persisted: | |
680 | # If we have already seen the event then ignore it. | |
681 | to_remove.add(event) | |
682 | 864 | continue |
683 | ||
684 | if event.event_id not in have_persisted: | |
685 | continue | |
686 | ||
687 | to_remove.add(event) | |
688 | 865 | |
689 | 866 | outlier_persisted = have_persisted[event.event_id] |
690 | 867 | if not event.internal_metadata.is_outlier() and outlier_persisted: |
740 | 917 | # event isn't an outlier any more. |
741 | 918 | self._update_backward_extremeties(txn, [event]) |
742 | 919 | |
743 | events_and_contexts = [ | |
920 | return [ | |
744 | 921 | ec for ec in events_and_contexts if ec[0] not in to_remove |
745 | 922 | ] |
746 | 923 | |
924 | @classmethod | |
925 | def _delete_existing_rows_txn(cls, txn, events_and_contexts): | |
747 | 926 | if not events_and_contexts: |
748 | # Make sure we don't pass an empty list to functions that expect to | |
749 | # be storing at least one element. | |
927 | # nothing to do here | |
750 | 928 | return |
751 | 929 | |
752 | # From this point onwards the events are only events that we haven't | |
753 | # seen before. | |
754 | ||
755 | def event_dict(event): | |
756 | return { | |
757 | k: v | |
758 | for k, v in event.get_dict().items() | |
759 | if k not in [ | |
760 | "redacted", | |
761 | "redacted_because", | |
762 | ] | |
763 | } | |
764 | ||
765 | if delete_existing: | |
766 | # For paranoia reasons, we go and delete all the existing entries | |
767 | # for these events so we can reinsert them. | |
768 | # This gets around any problems with some tables already having | |
769 | # entries. | |
770 | ||
771 | logger.info("Deleting existing") | |
772 | ||
773 | for table in ( | |
930 | logger.info("Deleting existing") | |
931 | ||
932 | for table in ( | |
774 | 933 | "events", |
775 | 934 | "event_auth", |
776 | 935 | "event_json", |
793 | 952 | "redactions", |
794 | 953 | "room_memberships", |
795 | 954 | "topics" |
796 | ): | |
797 | txn.executemany( | |
798 | "DELETE FROM %s WHERE event_id = ?" % (table,), | |
799 | [(ev.event_id,) for ev, _ in events_and_contexts] | |
800 | ) | |
955 | ): | |
956 | txn.executemany( | |
957 | "DELETE FROM %s WHERE event_id = ?" % (table,), | |
958 | [(ev.event_id,) for ev, _ in events_and_contexts] | |
959 | ) | |
960 | ||
961 | def _store_event_txn(self, txn, events_and_contexts): | |
962 | """Insert new events into the event and event_json tables | |
963 | ||
964 | Args: | |
965 | txn (twisted.enterprise.adbapi.Connection): db connection | |
966 | events_and_contexts (list[(EventBase, EventContext)]): events | |
967 | we are persisting | |
968 | """ | |
969 | ||
970 | if not events_and_contexts: | |
971 | # nothing to do here | |
972 | return | |
973 | ||
974 | def event_dict(event): | |
975 | d = event.get_dict() | |
976 | d.pop("redacted", None) | |
977 | d.pop("redacted_because", None) | |
978 | return d | |
801 | 979 | |
802 | 980 | self._simple_insert_many_txn( |
803 | 981 | txn, |
841 | 1019 | ], |
842 | 1020 | ) |
843 | 1021 | |
1022 | def _store_rejected_events_txn(self, txn, events_and_contexts): | |
1023 | """Add rows to the 'rejections' table for received events which were | |
1024 | rejected | |
1025 | ||
1026 | Args: | |
1027 | txn (twisted.enterprise.adbapi.Connection): db connection | |
1028 | events_and_contexts (list[(EventBase, EventContext)]): events | |
1029 | we are persisting | |
1030 | ||
1031 | Returns: | |
1032 | list[(EventBase, EventContext)] new list, without the rejected | |
1033 | events. | |
1034 | """ | |
844 | 1035 | # Remove the rejected events from the list now that we've added them |
845 | 1036 | # to the events table and the events_json table. |
846 | 1037 | to_remove = set() |
852 | 1043 | ) |
853 | 1044 | to_remove.add(event) |
854 | 1045 | |
855 | events_and_contexts = [ | |
1046 | return [ | |
856 | 1047 | ec for ec in events_and_contexts if ec[0] not in to_remove |
857 | 1048 | ] |
858 | 1049 | |
1050 | def _update_metadata_tables_txn(self, txn, events_and_contexts, backfilled): | |
1051 | """Update all the miscellaneous tables for new events | |
1052 | ||
1053 | Args: | |
1054 | txn (twisted.enterprise.adbapi.Connection): db connection | |
1055 | events_and_contexts (list[(EventBase, EventContext)]): events | |
1056 | we are persisting | |
1057 | backfilled (bool): True if the events were backfilled | |
1058 | """ | |
1059 | ||
859 | 1060 | if not events_and_contexts: |
860 | # Make sure we don't pass an empty list to functions that expect to | |
861 | # be storing at least one element. | |
1061 | # nothing to do here | |
862 | 1062 | return |
863 | ||
864 | # From this point onwards the events are only ones that weren't rejected. | |
865 | 1063 | |
866 | 1064 | for event, context in events_and_contexts: |
867 | 1065 | # Insert all the push actions into the event_push_actions table. |
890 | 1088 | for auth_id, _ in event.auth_events |
891 | 1089 | ], |
892 | 1090 | ) |
893 | ||
894 | # Insert into the state_groups, state_groups_state, and | |
895 | # event_to_state_groups tables. | |
896 | self._store_mult_state_groups_txn(txn, events_and_contexts) | |
897 | 1091 | |
898 | 1092 | # Update the event_forward_extremities, event_backward_extremities and |
899 | 1093 | # event_edges tables. |
981 | 1175 | # Prefill the event cache |
982 | 1176 | self._add_to_cache(txn, events_and_contexts) |
983 | 1177 | |
984 | if backfilled: | |
985 | # Backfilled events come before the current state so we don't need | |
986 | # to update the current state table | |
987 | return | |
988 | ||
989 | return | |
990 | ||
991 | 1178 | def _add_to_cache(self, txn, events_and_contexts): |
992 | 1179 | to_prefill = [] |
993 | 1180 | |
1596 | 1783 | |
1597 | 1784 | def get_all_new_events_txn(txn): |
1598 | 1785 | sql = ( |
1599 | "SELECT e.stream_ordering, ej.internal_metadata, ej.json, eg.state_group" | |
1600 | " FROM events as e" | |
1601 | " JOIN event_json as ej" | |
1602 | " ON e.event_id = ej.event_id AND e.room_id = ej.room_id" | |
1603 | " LEFT JOIN event_to_state_groups as eg" | |
1604 | " ON e.event_id = eg.event_id" | |
1605 | " WHERE ? < e.stream_ordering AND e.stream_ordering <= ?" | |
1606 | " ORDER BY e.stream_ordering ASC" | |
1786 | "SELECT e.stream_ordering, e.event_id, e.room_id, e.type," | |
1787 | " state_key, redacts" | |
1788 | " FROM events AS e" | |
1789 | " LEFT JOIN redactions USING (event_id)" | |
1790 | " LEFT JOIN state_events USING (event_id)" | |
1791 | " WHERE ? < stream_ordering AND stream_ordering <= ?" | |
1792 | " ORDER BY stream_ordering ASC" | |
1607 | 1793 | " LIMIT ?" |
1608 | 1794 | ) |
1609 | 1795 | if have_forward_events: |
1629 | 1815 | forward_ex_outliers = [] |
1630 | 1816 | |
1631 | 1817 | sql = ( |
1632 | "SELECT -e.stream_ordering, ej.internal_metadata, ej.json," | |
1633 | " eg.state_group" | |
1634 | " FROM events as e" | |
1635 | " JOIN event_json as ej" | |
1636 | " ON e.event_id = ej.event_id AND e.room_id = ej.room_id" | |
1637 | " LEFT JOIN event_to_state_groups as eg" | |
1638 | " ON e.event_id = eg.event_id" | |
1639 | " WHERE ? > e.stream_ordering AND e.stream_ordering >= ?" | |
1640 | " ORDER BY e.stream_ordering DESC" | |
1818 | "SELECT -e.stream_ordering, e.event_id, e.room_id, e.type," | |
1819 | " state_key, redacts" | |
1820 | " FROM events AS e" | |
1821 | " LEFT JOIN redactions USING (event_id)" | |
1822 | " LEFT JOIN state_events USING (event_id)" | |
1823 | " WHERE ? > stream_ordering AND stream_ordering >= ?" | |
1824 | " ORDER BY stream_ordering DESC" | |
1641 | 1825 | " LIMIT ?" |
1642 | 1826 | ) |
1643 | 1827 | if have_backfill_events: |
1824 | 2008 | "state_key": key[1], |
1825 | 2009 | "event_id": state_id, |
1826 | 2010 | } |
1827 | for key, state_id in curr_state.items() | |
2011 | for key, state_id in curr_state.iteritems() | |
1828 | 2012 | ], |
1829 | 2013 | ) |
1830 | 2014 |
100 | 100 | key_ids |
101 | 101 | Args: |
102 | 102 | server_name (str): The name of the server. |
103 | key_ids (list of str): List of key_ids to try and look up. | |
103 | key_ids (iterable[str]): key_ids to try and look up. | |
104 | 104 | Returns: |
105 | (list of VerifyKey): The verification keys. | |
105 | Deferred: resolves to dict[str, VerifyKey]: map from | |
106 | key_id to verification key. | |
106 | 107 | """ |
107 | 108 | keys = {} |
108 | 109 | for key_id in key_ids: |
355 | 355 | ), |
356 | 356 | (current_version,) |
357 | 357 | ) |
358 | applied_deltas = [d for d, in txn.fetchall()] | |
358 | applied_deltas = [d for d, in txn] | |
359 | 359 | return current_version, applied_deltas, upgraded |
360 | 360 | |
361 | 361 | return None |
84 | 84 | self.presence_stream_cache.entity_has_changed, |
85 | 85 | state.user_id, stream_id, |
86 | 86 | ) |
87 | self._invalidate_cache_and_stream( | |
88 | txn, self._get_presence_for_user, (state.user_id,) | |
87 | txn.call_after( | |
88 | self._get_presence_for_user.invalidate, (state.user_id,) | |
89 | 89 | ) |
90 | 90 | |
91 | 91 | # Actually insert new rows |
312 | 312 | ) |
313 | 313 | |
314 | 314 | txn.execute(sql, (room_id, receipt_type, user_id)) |
315 | results = txn.fetchall() | |
316 | ||
317 | if results and topological_ordering: | |
318 | for to, so, _ in results: | |
315 | ||
316 | if topological_ordering: | |
317 | for to, so, _ in txn: | |
319 | 318 | if int(to) > topological_ordering: |
320 | 319 | return False |
321 | 320 | elif int(to) == topological_ordering and int(so) >= stream_ordering: |
208 | 208 | " WHERE lower(name) = lower(?)" |
209 | 209 | ) |
210 | 210 | txn.execute(sql, (user_id,)) |
211 | return dict(txn.fetchall()) | |
211 | return dict(txn) | |
212 | 212 | |
213 | 213 | return self.runInteraction("get_users_by_id_case_insensitive", f) |
214 | 214 |
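Several hunks in this diff replace `txn.fetchall()` with direct iteration over the cursor (`dict(txn)`, `for row in txn`). A minimal sketch of the pattern using the stdlib `sqlite3` DB-API driver; the table and rows here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, creation_ts INTEGER)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("@alice:hs", 100), ("@bob:hs", 200)],
)

cur = conn.execute("SELECT name, creation_ts FROM users")
# A DB-API cursor is iterable, so dict(cur) consumes the rows one at a
# time; dict(cur.fetchall()) would first materialise every row in a list.
result = dict(cur)
print(result)  # {'@alice:hs': 100, '@bob:hs': 200}
```

The same applies to `[row[0] for row in txn]` and the other comprehensions rewritten above: the intermediate list built by `fetchall()` is skipped.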
395 | 395 | sql % ("AND appservice_id IS NULL",), |
396 | 396 | (stream_id,) |
397 | 397 | ) |
398 | return dict(txn.fetchall()) | |
398 | return dict(txn) | |
399 | 399 | else: |
400 | 400 | # We want to get from all lists, so we need to aggregate the results |
401 | 401 | |
421 | 421 | |
422 | 422 | results = {} |
423 | 423 | # A room is visible if its visible on any list. |
424 | for room_id, visibility in txn.fetchall(): | |
424 | for room_id, visibility in txn: | |
425 | 425 | results[room_id] = bool(visibility) or results.get(room_id, False) |
426 | 426 | |
427 | 427 | return results |
128 | 128 | with self._stream_id_gen.get_next() as stream_ordering: |
129 | 129 | yield self.runInteraction("locally_reject_invite", f, stream_ordering) |
130 | 130 | |
131 | @cachedInlineCallbacks(max_entries=100000, iterable=True, cache_context=True) | |
132 | def get_hosts_in_room(self, room_id, cache_context): | |
133 | """Returns the set of all hosts currently in the room | |
134 | """ | |
135 | user_ids = yield self.get_users_in_room( | |
136 | room_id, on_invalidate=cache_context.invalidate, | |
137 | ) | |
138 | hosts = frozenset(get_domain_from_id(user_id) for user_id in user_ids) | |
139 | defer.returnValue(hosts) | |
140 | ||
131 | 141 | @cached(max_entries=500000, iterable=True) |
132 | 142 | def get_users_in_room(self, room_id): |
133 | 143 | def f(txn): |
134 | ||
135 | rows = self._get_members_rows_txn( | |
136 | txn, | |
137 | room_id=room_id, | |
138 | membership=Membership.JOIN, | |
139 | ) | |
140 | ||
141 | return [r["user_id"] for r in rows] | |
144 | sql = ( | |
145 | "SELECT m.user_id FROM room_memberships as m" | |
146 | " INNER JOIN current_state_events as c" | |
147 | " ON m.event_id = c.event_id " | |
148 | " AND m.room_id = c.room_id " | |
149 | " AND m.user_id = c.state_key" | |
150 | " WHERE c.type = 'm.room.member' AND c.room_id = ? AND m.membership = ?" | |
151 | ) | |
152 | ||
153 | txn.execute(sql, (room_id, Membership.JOIN,)) | |
154 | return [r[0] for r in txn] | |
142 | 155 | return self.runInteraction("get_users_in_room", f) |
143 | 156 | |
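The new `get_hosts_in_room` derives the set of servers from the joined user ids. A rough sketch of that derivation; `get_domain_from_id` below is a simplified stand-in for the real helper, which also validates the id format:

```python
def get_domain_from_id(user_id):
    # Matrix user ids look like "@localpart:domain"; take the domain part.
    return user_id.split(":", 1)[1]

user_ids = ["@alice:example.com", "@bob:example.com", "@carol:matrix.org"]
hosts = frozenset(get_domain_from_id(u) for u in user_ids)
print(sorted(hosts))  # ['example.com', 'matrix.org']
```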
144 | 157 | @cached() |
245 | 258 | |
246 | 259 | return results |
247 | 260 | |
248 | def _get_members_rows_txn(self, txn, room_id, membership=None, user_id=None): | |
249 | where_clause = "c.room_id = ?" | |
250 | where_values = [room_id] | |
251 | ||
252 | if membership: | |
253 | where_clause += " AND m.membership = ?" | |
254 | where_values.append(membership) | |
255 | ||
256 | if user_id: | |
257 | where_clause += " AND m.user_id = ?" | |
258 | where_values.append(user_id) | |
259 | ||
260 | sql = ( | |
261 | "SELECT m.* FROM room_memberships as m" | |
262 | " INNER JOIN current_state_events as c" | |
263 | " ON m.event_id = c.event_id " | |
264 | " AND m.room_id = c.room_id " | |
265 | " AND m.user_id = c.state_key" | |
266 | " WHERE c.type = 'm.room.member' AND %(where)s" | |
267 | ) % { | |
268 | "where": where_clause, | |
269 | } | |
270 | ||
271 | txn.execute(sql, where_values) | |
272 | rows = self.cursor_to_dict(txn) | |
273 | ||
274 | return rows | |
275 | ||
276 | @cached(max_entries=500000, iterable=True) | |
261 | @cachedInlineCallbacks(max_entries=500000, iterable=True) | |
277 | 262 | def get_rooms_for_user(self, user_id): |
278 | return self.get_rooms_for_user_where_membership_is( | |
263 | """Returns a set of room_ids the user is currently joined to | |
264 | """ | |
265 | rooms = yield self.get_rooms_for_user_where_membership_is( | |
279 | 266 | user_id, membership_list=[Membership.JOIN], |
280 | 267 | ) |
268 | defer.returnValue(frozenset(r.room_id for r in rooms)) | |
281 | 269 | |
282 | 270 | @cachedInlineCallbacks(max_entries=500000, cache_context=True, iterable=True) |
283 | 271 | def get_users_who_share_room_with_user(self, user_id, cache_context): |
284 | 272 | """Returns the set of users who share a room with `user_id` |
285 | 273 | """ |
286 | rooms = yield self.get_rooms_for_user( | |
274 | room_ids = yield self.get_rooms_for_user( | |
287 | 275 | user_id, on_invalidate=cache_context.invalidate, |
288 | 276 | ) |
289 | 277 | |
290 | 278 | user_who_share_room = set() |
291 | for room in rooms: | |
279 | for room_id in room_ids: | |
292 | 280 | user_ids = yield self.get_users_in_room( |
293 | room.room_id, on_invalidate=cache_context.invalidate, | |
281 | room_id, on_invalidate=cache_context.invalidate, | |
294 | 282 | ) |
295 | 283 | user_who_share_room.update(user_ids) |
296 | 284 |
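The rewritten `get_users_who_share_room_with_user` composes two cached lookups: the user's joined rooms (now a frozenset of room ids) and the members of each room. An in-memory sketch with invented data standing in for the cached store methods:

```python
# Invented stand-in for the room_memberships/current_state_events join.
room_members = {
    "!a:hs": {"@alice:hs", "@bob:hs"},
    "!b:hs": {"@alice:hs", "@carol:hs"},
}

def get_rooms_for_user(user_id):
    # As in the diff, this now returns a frozenset of room ids.
    return frozenset(
        room_id for room_id, members in room_members.items()
        if user_id in members
    )

def get_users_who_share_room_with_user(user_id):
    users = set()
    for room_id in get_rooms_for_user(user_id):
        users.update(room_members[room_id])
    return users

print(sorted(get_users_who_share_room_with_user("@alice:hs")))
# ['@alice:hs', '@bob:hs', '@carol:hs']
```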
71 | 71 | " WHERE event_id = ?" |
72 | 72 | ) |
73 | 73 | txn.execute(query, (event_id, )) |
74 | return {k: v for k, v in txn.fetchall()} | |
74 | return {k: v for k, v in txn} | |
75 | 75 | |
76 | 76 | def _store_event_reference_hashes_txn(self, txn, events): |
77 | 77 | """Store a hash for a PDU |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | from ._base import SQLBaseStore |
16 | from synapse.util.caches.descriptors import cached, cachedList | |
16 | from synapse.util.caches.descriptors import cached, cachedList, cachedInlineCallbacks | |
17 | 17 | from synapse.util.caches import intern_string |
18 | 18 | from synapse.storage.engines import PostgresEngine |
19 | 19 | |
68 | 68 | where_clause="type='m.room.member'", |
69 | 69 | ) |
70 | 70 | |
71 | @cachedInlineCallbacks(max_entries=100000, iterable=True) | |
72 | def get_current_state_ids(self, room_id): | |
73 | rows = yield self._simple_select_list( | |
74 | table="current_state_events", | |
75 | keyvalues={"room_id": room_id}, | |
76 | retcols=["event_id", "type", "state_key"], | |
77 | desc="get_current_state_ids", | |
78 | ) | |
79 | defer.returnValue({ | |
80 | (r["type"], r["state_key"]): r["event_id"] for r in rows | |
81 | }) | |
82 | ||
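The added `get_current_state_ids` builds a `(type, state_key) -> event_id` map from the `current_state_events` rows. The dict comprehension at its core, with made-up rows of the shape `_simple_select_list` would return:

```python
# Invented rows for illustration.
rows = [
    {"event_id": "$e1", "type": "m.room.member", "state_key": "@alice:hs"},
    {"event_id": "$e2", "type": "m.room.name", "state_key": ""},
]

# Key current state by (event type, state key), as the new method does.
current_state_ids = {
    (r["type"], r["state_key"]): r["event_id"] for r in rows
}
print(current_state_ids[("m.room.name", "")])  # $e2
```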
71 | 83 | @defer.inlineCallbacks |
72 | 84 | def get_state_groups_ids(self, room_id, event_ids): |
73 | 85 | if not event_ids: |
77 | 89 | event_ids, |
78 | 90 | ) |
79 | 91 | |
80 | groups = set(event_to_groups.values()) | |
92 | groups = set(event_to_groups.itervalues()) | |
81 | 93 | group_to_state = yield self._get_state_for_groups(groups) |
82 | 94 | |
83 | 95 | defer.returnValue(group_to_state) |
95 | 107 | |
96 | 108 | state_event_map = yield self.get_events( |
97 | 109 | [ |
98 | ev_id for group_ids in group_to_ids.values() | |
99 | for ev_id in group_ids.values() | |
110 | ev_id for group_ids in group_to_ids.itervalues() | |
111 | for ev_id in group_ids.itervalues() | |
100 | 112 | ], |
101 | 113 | get_prev_content=False |
102 | 114 | ) |
103 | 115 | |
104 | 116 | defer.returnValue({ |
105 | 117 | group: [ |
106 | state_event_map[v] for v in event_id_map.values() if v in state_event_map | |
118 | state_event_map[v] for v in event_id_map.itervalues() | |
119 | if v in state_event_map | |
107 | 120 | ] |
108 | for group, event_id_map in group_to_ids.items() | |
121 | for group, event_id_map in group_to_ids.iteritems() | |
109 | 122 | }) |
110 | 123 | |
111 | 124 | def _have_persisted_state_group_txn(self, txn, state_group): |
123 | 136 | continue |
124 | 137 | |
125 | 138 | if context.current_state_ids is None: |
139 | # AFAIK, this can never happen | |
140 | logger.error( | |
141 | "Non-outlier event %s had current_state_ids==None", | |
142 | event.event_id) | |
143 | continue | |
144 | ||
145 | # if the event was rejected, just give it the same state as its | |
146 | # predecessor. | |
147 | if context.rejected: | |
148 | state_groups[event.event_id] = context.prev_group | |
126 | 149 | continue |
127 | 150 | |
128 | 151 | state_groups[event.event_id] = context.state_group |
167 | 190 | "state_key": key[1], |
168 | 191 | "event_id": state_id, |
169 | 192 | } |
170 | for key, state_id in context.delta_ids.items() | |
193 | for key, state_id in context.delta_ids.iteritems() | |
171 | 194 | ], |
172 | 195 | ) |
173 | 196 | else: |
182 | 205 | "state_key": key[1], |
183 | 206 | "event_id": state_id, |
184 | 207 | } |
185 | for key, state_id in context.current_state_ids.items() | |
208 | for key, state_id in context.current_state_ids.iteritems() | |
186 | 209 | ], |
187 | 210 | ) |
188 | 211 | |
194 | 217 | "state_group": state_group_id, |
195 | 218 | "event_id": event_id, |
196 | 219 | } |
197 | for event_id, state_group_id in state_groups.items() | |
220 | for event_id, state_group_id in state_groups.iteritems() | |
198 | 221 | ], |
199 | 222 | ) |
200 | 223 | |
318 | 341 | args.extend(where_args) |
319 | 342 | |
320 | 343 | txn.execute(sql % (where_clause,), args) |
321 | rows = self.cursor_to_dict(txn) | |
322 | for row in rows: | |
323 | key = (row["type"], row["state_key"]) | |
324 | results[group][key] = row["event_id"] | |
344 | for row in txn: | |
345 | typ, state_key, event_id = row | |
346 | key = (typ, state_key) | |
347 | results[group][key] = event_id | |
325 | 348 | else: |
326 | 349 | if types is not None: |
327 | 350 | where_clause = "AND (%s)" % ( |
350 | 373 | " WHERE state_group = ? %s" % (where_clause,), |
351 | 374 | args |
352 | 375 | ) |
353 | rows = txn.fetchall() | |
354 | results[group].update({ | |
355 | (typ, state_key): event_id | |
356 | for typ, state_key, event_id in rows | |
376 | results[group].update( | |
377 | ((typ, state_key), event_id) | |
378 | for typ, state_key, event_id in txn | |
357 | 379 | if (typ, state_key) not in results[group] |
358 | }) | |
380 | ) | |
359 | 381 | |
360 | 382 | # If the lengths match then we must have all the types, |
361 | 383 | # so no need to go walk further down the tree. |
392 | 414 | event_ids, |
393 | 415 | ) |
394 | 416 | |
395 | groups = set(event_to_groups.values()) | |
417 | groups = set(event_to_groups.itervalues()) | |
396 | 418 | group_to_state = yield self._get_state_for_groups(groups, types) |
397 | 419 | |
398 | 420 | state_event_map = yield self.get_events( |
399 | [ev_id for sd in group_to_state.values() for ev_id in sd.values()], | |
421 | [ev_id for sd in group_to_state.itervalues() for ev_id in sd.itervalues()], | |
400 | 422 | get_prev_content=False |
401 | 423 | ) |
402 | 424 | |
403 | 425 | event_to_state = { |
404 | 426 | event_id: { |
405 | 427 | k: state_event_map[v] |
406 | for k, v in group_to_state[group].items() | |
428 | for k, v in group_to_state[group].iteritems() | |
407 | 429 | if v in state_event_map |
408 | 430 | } |
409 | for event_id, group in event_to_groups.items() | |
431 | for event_id, group in event_to_groups.iteritems() | |
410 | 432 | } |
411 | 433 | |
412 | 434 | defer.returnValue({event: event_to_state[event] for event in event_ids}) |
429 | 451 | event_ids, |
430 | 452 | ) |
431 | 453 | |
432 | groups = set(event_to_groups.values()) | |
454 | groups = set(event_to_groups.itervalues()) | |
433 | 455 | group_to_state = yield self._get_state_for_groups(groups, types) |
434 | 456 | |
435 | 457 | event_to_state = { |
436 | 458 | event_id: group_to_state[group] |
437 | for event_id, group in event_to_groups.items() | |
459 | for event_id, group in event_to_groups.iteritems() | |
438 | 460 | } |
439 | 461 | |
440 | 462 | defer.returnValue({event: event_to_state[event] for event in event_ids}) |
473 | 495 | state_map = yield self.get_state_ids_for_events([event_id], types) |
474 | 496 | defer.returnValue(state_map[event_id]) |
475 | 497 | |
476 | @cached(num_args=2, max_entries=10000) | |
498 | @cached(num_args=2, max_entries=100000) | |
477 | 499 | def _get_state_group_for_event(self, room_id, event_id): |
478 | 500 | return self._simple_select_one_onecol( |
479 | 501 | table="event_to_state_groups", |
546 | 568 | got_all = not (missing_types or types is None) |
547 | 569 | |
548 | 570 | return { |
549 | k: v for k, v in state_dict_ids.items() | |
571 | k: v for k, v in state_dict_ids.iteritems() | |
550 | 572 | if include(k[0], k[1]) |
551 | 573 | }, missing_types, got_all |
552 | 574 | |
605 | 627 | |
606 | 628 | # Now we want to update the cache with all the things we fetched |
607 | 629 | # from the database. |
608 | for group, group_state_dict in group_to_state_dict.items(): | |
630 | for group, group_state_dict in group_to_state_dict.iteritems(): | |
609 | 631 | if types: |
610 | 632 | # We deliberately put key -> None mappings into the cache to
611 | 633 | # cache absence of the key, on the assumption that if we've |
620 | 642 | else: |
621 | 643 | state_dict = results[group] |
622 | 644 | |
623 | state_dict.update({ | |
624 | (intern_string(k[0]), intern_string(k[1])): v | |
625 | for k, v in group_state_dict.items() | |
626 | }) | |
645 | state_dict.update( | |
646 | ((intern_string(k[0]), intern_string(k[1])), v) | |
647 | for k, v in group_state_dict.iteritems() | |
648 | ) | |
627 | 649 | |
628 | 650 | self._state_group_cache.update( |
629 | 651 | cache_seq_num, |
634 | 656 | |
635 | 657 | # Remove all the entries with None values. The None values were just |
636 | 658 | # used for bookkeeping in the cache. |
637 | for group, state_dict in results.items(): | |
659 | for group, state_dict in results.iteritems(): | |
638 | 660 | results[group] = { |
639 | 661 | key: event_id |
640 | for key, event_id in state_dict.items() | |
662 | for key, event_id in state_dict.iteritems() | |
641 | 663 | if event_id |
642 | 664 | } |
643 | 665 | |
726 | 748 | # of keys |
727 | 749 | |
728 | 750 | delta_state = { |
729 | key: value for key, value in curr_state.items() | |
751 | key: value for key, value in curr_state.iteritems() | |
730 | 752 | if prev_state.get(key, None) != value |
731 | 753 | } |
732 | 754 | |
766 | 788 | "state_key": key[1], |
767 | 789 | "event_id": state_id, |
768 | 790 | } |
769 | for key, state_id in delta_state.items() | |
791 | for key, state_id in delta_state.iteritems() | |
770 | 792 | ], |
771 | 793 | ) |
772 | 794 |
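Two recurring Python 2 micro-optimisations run through these hunks: `.iteritems()`/`.itervalues()` yield pairs lazily instead of building a list, and `dict.update` is fed a generator of pairs rather than a temporary dict. The example below runs on Python 3, where plain `.items()` is already a lazy view:

```python
delta_ids = {
    ("m.room.member", "@alice:hs"): "$e1",
    ("m.room.name", ""): "$e2",
}

state_dict = {}
# dict.update accepts any iterable of (key, value) pairs, so no
# intermediate dict or list of items needs to be materialised.
state_dict.update(
    (key, event_id) for key, event_id in delta_ids.items()
)
print(state_dict == delta_ids)  # True
```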
828 | 828 | updatevalues={"stream_id": stream_id}, |
829 | 829 | desc="update_federation_out_pos", |
830 | 830 | ) |
831 | ||
832 | def has_room_changed_since(self, room_id, stream_id): | |
833 | return self._events_stream_cache.has_entity_changed(room_id, stream_id) |
94 | 94 | for stream_id, user_id, room_id in tag_ids: |
95 | 95 | txn.execute(sql, (user_id, room_id)) |
96 | 96 | tags = [] |
97 | for tag, content in txn.fetchall(): | |
97 | for tag, content in txn: | |
98 | 98 | tags.append(json.dumps(tag) + ":" + content) |
99 | 99 | tag_json = "{" + ",".join(tags) + "}" |
100 | 100 | results.append((stream_id, user_id, room_id, tag_json)) |
131 | 131 | " WHERE user_id = ? AND stream_id > ?" |
132 | 132 | ) |
133 | 133 | txn.execute(sql, (user_id, stream_id)) |
134 | room_ids = [row[0] for row in txn.fetchall()] | |
134 | room_ids = [row[0] for row in txn] | |
135 | 135 | return room_ids |
136 | 136 | |
137 | 137 | changed = self._account_data_stream_cache.has_entity_changed( |
29 | 29 | |
30 | 30 | |
31 | 31 | def _load_current_id(db_conn, table, column, step=1): |
32 | """ | |
33 | ||
34 | Args: | |
35 | db_conn (object): database connection | |
36 | table (str): name of the table to read the current id from | |
37 | column (str): name of the id column | |
38 | step (int): direction the stream grows in: +1, or -1 for backfill | |
39 | ||
40 | Returns: | |
41 | int | |
42 | """ | |
32 | 43 | cur = db_conn.cursor() |
33 | 44 | if step == 1: |
34 | 45 | cur.execute("SELECT MAX(%s) FROM %s" % (column, table,)) |
130 | 141 | def get_current_token(self): |
131 | 142 | """Returns the maximum stream id such that all stream ids less than or |
132 | 143 | equal to it have been successfully persisted. |
144 | ||
145 | Returns: | |
146 | int | |
133 | 147 | """ |
134 | 148 | with self._lock: |
135 | 149 | if self._unfinished_ids: |
25 | 25 | |
26 | 26 | class DeferredTimedOutError(SynapseError): |
27 | 27 | def __init__(self): |
28 | super(SynapseError).__init__(504, "Timed out") | |
28 | super(SynapseError, self).__init__(504, "Timed out") | |
29 | 29 | |
30 | 30 | |
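The one-line fix above is the classic one-argument `super()` pitfall: `super(Cls)` is an *unbound* proxy, so attribute access is not delegated to the parent class and the intended `__init__` call fails. A self-contained illustration (the classes here are invented, not Synapse's real exception hierarchy):

```python
class Base(Exception):
    def __init__(self, code, msg):
        super(Base, self).__init__(msg)
        self.code = code

class Fixed(Base):
    def __init__(self):
        # The two-argument form binds the proxy to self, so Base.__init__
        # actually runs -- this is what the fix above restores.
        super(Fixed, self).__init__(504, "Timed out")

err = Fixed()
print(err.code, str(err))  # 504 Timed out

try:
    # The buggy form: an unbound proxy whose own __init__ is invoked
    # with arguments it cannot accept.
    super(Base).__init__(504, "Timed out")
except TypeError:
    print("unbound super: TypeError")
```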
31 | 31 | def unwrapFirstError(failure): |
92 | 92 | ret_deferred = defer.Deferred() |
93 | 93 | |
94 | 94 | def timed_out_fn(): |
95 | e = DeferredTimedOutError() | |
96 | ||
95 | 97 | try: |
96 | ret_deferred.errback(DeferredTimedOutError()) | |
98 | ret_deferred.errback(e) | |
97 | 99 | except: |
98 | 100 | pass |
99 | 101 | |
113 | 115 | |
114 | 116 | ret_deferred.addBoth(cancel) |
115 | 117 | |
116 | def sucess(res): | |
118 | def success(res): | |
117 | 119 | try: |
118 | 120 | ret_deferred.callback(res) |
119 | 121 | except: |
127 | 129 | except: |
128 | 130 | pass |
129 | 131 | |
130 | given_deferred.addCallbacks(callback=sucess, errback=err) | |
132 | given_deferred.addCallbacks(callback=success, errback=err) | |
131 | 133 | |
132 | 134 | timer = self.call_later(time_out, timed_out_fn) |
133 | 135 |
14 | 14 | import logging |
15 | 15 | |
16 | 16 | from synapse.util.async import ObservableDeferred |
17 | from synapse.util import unwrapFirstError | |
17 | from synapse.util import unwrapFirstError, logcontext | |
18 | 18 | from synapse.util.caches.lrucache import LruCache |
19 | 19 | from synapse.util.caches.treecache import TreeCache, iterate_tree_cache_entry |
20 | from synapse.util.logcontext import ( | |
21 | PreserveLoggingContext, preserve_context_over_deferred, preserve_context_over_fn | |
22 | ) | |
23 | 20 | |
24 | 21 | from . import DEBUG_CACHES, register_cache |
25 | 22 | |
188 | 185 | self.cache.clear() |
189 | 186 | |
190 | 187 | |
191 | class CacheDescriptor(object): | |
188 | class _CacheDescriptorBase(object): | |
189 | def __init__(self, orig, num_args, inlineCallbacks, cache_context=False): | |
190 | self.orig = orig | |
191 | ||
192 | if inlineCallbacks: | |
193 | self.function_to_call = defer.inlineCallbacks(orig) | |
194 | else: | |
195 | self.function_to_call = orig | |
196 | ||
197 | arg_spec = inspect.getargspec(orig) | |
198 | all_args = arg_spec.args | |
199 | ||
200 | if "cache_context" in all_args: | |
201 | if not cache_context: | |
202 | raise ValueError( | |
203 | "Cannot have a 'cache_context' arg without setting" | |
204 | " cache_context=True" | |
205 | ) | |
206 | elif cache_context: | |
207 | raise ValueError( | |
208 | "Cannot have cache_context=True without having an arg" | |
209 | " named `cache_context`" | |
210 | ) | |
211 | ||
212 | if num_args is None: | |
213 | num_args = len(all_args) - 1 | |
214 | if cache_context: | |
215 | num_args -= 1 | |
216 | ||
217 | if len(all_args) < num_args + 1: | |
218 | raise Exception( | |
219 | "Not enough explicit positional arguments to key off for %r: " | |
220 | "got %i args, but wanted %i. (@cached cannot key off *args or " | |
221 | "**kwargs)" | |
222 | % (orig.__name__, len(all_args), num_args) | |
223 | ) | |
224 | ||
225 | self.num_args = num_args | |
226 | self.arg_names = all_args[1:num_args + 1] | |
227 | ||
228 | if "cache_context" in self.arg_names: | |
229 | raise Exception( | |
230 | "cache_context arg cannot be included among the cache keys" | |
231 | ) | |
232 | ||
233 | self.add_cache_context = cache_context | |
234 | ||
235 | ||
236 | class CacheDescriptor(_CacheDescriptorBase): | |
192 | 237 | """ A method decorator that applies a memoizing cache around the function. |
193 | 238 | |
194 | 239 | This caches deferreds, rather than the results themselves. Deferreds that |
216 | 261 | r2 = yield self.bar2(key, on_invalidate=cache_context.invalidate) |
217 | 262 | defer.returnValue(r1 + r2) |
218 | 263 | |
264 | Args: | |
265 | num_args (int): number of positional arguments (excluding ``self`` and | |
266 | ``cache_context``) to use as cache keys. Defaults to all named | |
267 | args of the function. | |
219 | 268 | """ |
220 | def __init__(self, orig, max_entries=1000, num_args=1, tree=False, | |
269 | def __init__(self, orig, max_entries=1000, num_args=None, tree=False, | |
221 | 270 | inlineCallbacks=False, cache_context=False, iterable=False): |
271 | ||
272 | super(CacheDescriptor, self).__init__( | |
273 | orig, num_args=num_args, inlineCallbacks=inlineCallbacks, | |
274 | cache_context=cache_context) | |
275 | ||
222 | 276 | max_entries = int(max_entries * CACHE_SIZE_FACTOR) |
223 | 277 | |
224 | self.orig = orig | |
225 | ||
226 | if inlineCallbacks: | |
227 | self.function_to_call = defer.inlineCallbacks(orig) | |
228 | else: | |
229 | self.function_to_call = orig | |
230 | ||
231 | 278 | self.max_entries = max_entries |
232 | self.num_args = num_args | |
233 | 279 | self.tree = tree |
234 | ||
235 | 280 | self.iterable = iterable |
236 | ||
237 | all_args = inspect.getargspec(orig) | |
238 | self.arg_names = all_args.args[1:num_args + 1] | |
239 | ||
240 | if "cache_context" in all_args.args: | |
241 | if not cache_context: | |
242 | raise ValueError( | |
243 | "Cannot have a 'cache_context' arg without setting" | |
244 | " cache_context=True" | |
245 | ) | |
246 | try: | |
247 | self.arg_names.remove("cache_context") | |
248 | except ValueError: | |
249 | pass | |
250 | elif cache_context: | |
251 | raise ValueError( | |
252 | "Cannot have cache_context=True without having an arg" | |
253 | " named `cache_context`" | |
254 | ) | |
255 | ||
256 | self.add_cache_context = cache_context | |
257 | ||
258 | if len(self.arg_names) < self.num_args: | |
259 | raise Exception( | |
260 | "Not enough explicit positional arguments to key off of for %r." | |
261 | " (@cached cannot key off of *args or **kwargs)" | |
262 | % (orig.__name__,) | |
263 | ) | |
264 | 281 | |
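The new `_CacheDescriptorBase` centralises the argument introspection: `num_args` now defaults to all named arguments of the wrapped method, and keying off `*args`/`**kwargs` is rejected up front. A sketch of that derivation, using `inspect.getfullargspec` (the Python 3 spelling of the `getargspec` call in the diff); the `Store` methods are invented examples:

```python
import inspect

def cache_key_arg_names(fn, num_args=None):
    """Mimic the refactored base class: derive the cache-key arg names."""
    all_args = inspect.getfullargspec(fn).args
    if num_args is None:
        num_args = len(all_args) - 1  # drop "self"
    if len(all_args) < num_args + 1:
        raise Exception("@cached cannot key off *args or **kwargs")
    return all_args[1:num_args + 1]

class Store(object):
    def get_users_in_room(self, room_id):
        pass

    def get_state(self, room_id, event_id, _internal=None):
        pass

print(cache_key_arg_names(Store.get_users_in_room))      # ['room_id']
print(cache_key_arg_names(Store.get_state, num_args=2))  # ['room_id', 'event_id']
```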
265 | 282 | def __get__(self, obj, objtype=None): |
266 | 283 | cache = Cache( |
307 | 324 | defer.returnValue(cached_result) |
308 | 325 | observer.addCallback(check_result) |
309 | 326 | |
310 | return preserve_context_over_deferred(observer) | |
311 | 327 | except KeyError: |
312 | 328 | ret = defer.maybeDeferred( |
313 | preserve_context_over_fn, | |
314 | self.function_to_call, | |
329 | logcontext.preserve_fn(self.function_to_call), | |
315 | 330 | obj, *args, **kwargs |
316 | 331 | ) |
317 | 332 | |
321 | 336 | |
322 | 337 | ret.addErrback(onErr) |
323 | 338 | |
324 | ret = ObservableDeferred(ret, consumeErrors=True) | |
325 | cache.set(cache_key, ret, callback=invalidate_callback) | |
326 | ||
327 | return preserve_context_over_deferred(ret.observe()) | |
339 | result_d = ObservableDeferred(ret, consumeErrors=True) | |
340 | cache.set(cache_key, result_d, callback=invalidate_callback) | |
341 | observer = result_d.observe() | |
342 | ||
343 | return logcontext.make_deferred_yieldable(observer) | |
328 | 344 | |
329 | 345 | wrapped.invalidate = cache.invalidate |
330 | 346 | wrapped.invalidate_all = cache.invalidate_all |
337 | 353 | return wrapped |
338 | 354 | |
339 | 355 | |
340 | class CacheListDescriptor(object): | |
356 | class CacheListDescriptor(_CacheDescriptorBase): | |
341 | 357 | """Wraps an existing cache to support bulk fetching of keys. |
342 | 358 | |
343 | 359 | Given a list of keys it looks in the cache to find any hits, then passes |
344 | the list of missing keys to the wrapped fucntion. | |
360 | the list of missing keys to the wrapped function. | |
361 | ||
362 | Once wrapped, the function returns either a Deferred which resolves to | |
363 | the list of results, or (if all results were cached), just the list of | |
364 | results. | |
345 | 365 | """ |
346 | 366 | |
347 | def __init__(self, orig, cached_method_name, list_name, num_args=1, | |
367 | def __init__(self, orig, cached_method_name, list_name, num_args=None, | |
348 | 368 | inlineCallbacks=False): |
349 | 369 | """ |
350 | 370 | Args: |
351 | 371 | orig (function) |
352 | method_name (str); The name of the chached method. | |
372 | cached_method_name (str): The name of the cached method. | |
353 | 373 | list_name (str): Name of the argument which is the bulk lookup list |
354 | num_args (int) | |
374 | num_args (int): number of positional arguments (excluding ``self``, | |
375 | but including list_name) to use as cache keys. Defaults to all | |
376 | named args of the function. | |
355 | 377 | inlineCallbacks (bool): Whether orig is a generator that should |
356 | 378 | be wrapped by defer.inlineCallbacks |
357 | 379 | """ |
358 | self.orig = orig | |
359 | ||
360 | if inlineCallbacks: | |
361 | self.function_to_call = defer.inlineCallbacks(orig) | |
362 | else: | |
363 | self.function_to_call = orig | |
364 | ||
365 | self.num_args = num_args | |
380 | super(CacheListDescriptor, self).__init__( | |
381 | orig, num_args=num_args, inlineCallbacks=inlineCallbacks) | |
382 | ||
366 | 383 | self.list_name = list_name |
367 | 384 | |
368 | self.arg_names = inspect.getargspec(orig).args[1:num_args + 1] | |
369 | 385 | self.list_pos = self.arg_names.index(self.list_name) |
370 | ||
371 | 386 | self.cached_method_name = cached_method_name |
372 | 387 | |
373 | 388 | self.sentinel = object() |
374 | ||
375 | if len(self.arg_names) < self.num_args: | |
376 | raise Exception( | |
377 | "Not enough explicit positional arguments to key off of for %r." | |
378 | " (@cached cannot key off of *args or **kwars)" | |
379 | % (orig.__name__,) | |
380 | ) | |
381 | 389 | |
382 | 390 | if self.list_name not in self.arg_names: |
383 | 391 | raise Exception( |
424 | 432 | args_to_call[self.list_name] = missing |
425 | 433 | |
426 | 434 | ret_d = defer.maybeDeferred( |
427 | preserve_context_over_fn, | |
428 | self.function_to_call, | |
435 | logcontext.preserve_fn(self.function_to_call), | |
429 | 436 | **args_to_call |
430 | 437 | ) |
431 | 438 | |
434 | 441 | # We need to create deferreds for each arg in the list so that |
435 | 442 | # we can insert the new deferred into the cache. |
436 | 443 | for arg in missing: |
437 | with PreserveLoggingContext(): | |
438 | observer = ret_d.observe() | |
444 | observer = ret_d.observe() | |
439 | 445 | observer.addCallback(lambda r, arg: r.get(arg, None), arg) |
440 | 446 | |
441 | 447 | observer = ObservableDeferred(observer) |
462 | 468 | results.update(res) |
463 | 469 | return results |
464 | 470 | |
465 | return preserve_context_over_deferred(defer.gatherResults( | |
471 | return logcontext.make_deferred_yieldable(defer.gatherResults( | |
466 | 472 | cached_defers.values(), |
467 | 473 | consumeErrors=True, |
468 | 474 | ).addCallback(update_results_dict).addErrback( |
486 | 492 | self.cache.invalidate(self.key) |
487 | 493 | |
488 | 494 | |
489 | def cached(max_entries=1000, num_args=1, tree=False, cache_context=False, | |
495 | def cached(max_entries=1000, num_args=None, tree=False, cache_context=False, | |
490 | 496 | iterable=False): |
491 | 497 | return lambda orig: CacheDescriptor( |
492 | 498 | orig, |
498 | 504 | ) |
499 | 505 | |
500 | 506 | |
501 | def cachedInlineCallbacks(max_entries=1000, num_args=1, tree=False, cache_context=False, | |
502 | iterable=False): | |
507 | def cachedInlineCallbacks(max_entries=1000, num_args=None, tree=False, | |
508 | cache_context=False, iterable=False): | |
503 | 509 | return lambda orig: CacheDescriptor( |
504 | 510 | orig, |
505 | 511 | max_entries=max_entries, |
511 | 517 | ) |
512 | 518 | |
513 | 519 | |
514 | def cachedList(cached_method_name, list_name, num_args=1, inlineCallbacks=False): | |
520 | def cachedList(cached_method_name, list_name, num_args=None, inlineCallbacks=False): | |
515 | 521 | """Creates a descriptor that wraps a function in a `CacheListDescriptor`. |
516 | 522 | |
517 | 523 | Used to do batch lookups for an already created cache. A single argument |
524 | 530 | cache (Cache): The underlying cache to use. |
525 | 531 | list_name (str): The name of the argument that is the list to use to |
526 | 532 | do batch lookups in the cache. |
527 | num_args (int): Number of arguments to use as the key in the cache. | |
533 | num_args (int): Number of arguments to use as the key in the cache | |
534 | (including list_name). Defaults to all named parameters. | |
528 | 535 | inlineCallbacks (bool): Should the function be wrapped in an |
529 | 536 | `defer.inlineCallbacks`? |
530 | 537 |
49 | 49 | def has_entity_changed(self, entity, stream_pos): |
50 | 50 | """Returns True if the entity may have been updated since stream_pos |
51 | 51 | """ |
52 | assert type(stream_pos) is int | |
52 | assert type(stream_pos) is int or type(stream_pos) is long | |
53 | 53 | |
54 | 54 | if stream_pos < self._earliest_known_stream_pos: |
55 | 55 | self.metrics.inc_misses() |
10 | 10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
11 | 11 | # See the License for the specific language governing permissions and |
12 | 12 | # limitations under the License. |
13 | ||
14 | """ Thread-local-alike tracking of log contexts within synapse | |
15 | ||
16 | This module provides objects and utilities for tracking contexts through | |
17 | synapse code, so that log lines can include a request identifier, and so that | |
18 | CPU and database activity can be accounted for against the request that caused | |
19 | them. | |
20 | ||
21 | See doc/log_contexts.rst for details on how this works. | |
22 | """ | |
13 | 23 | |
14 | 24 | from twisted.internet import defer |
15 | 25 | |
299 | 309 | def preserve_context_over_deferred(deferred, context=None): |
300 | 310 |     """Given a deferred, wrap it such that any callbacks added later to it will
301 | 311 | be invoked with the current context. |
312 | ||
313 |     Deprecated: this almost certainly doesn't do what you want, i.e. make | 
314 | the deferred follow the synapse logcontext rules: try | |
315 | ``make_deferred_yieldable`` instead. | |
302 | 316 | """ |
303 | 317 | if context is None: |
304 | 318 | context = LoggingContext.current_context() |
308 | 322 | |
309 | 323 | |
310 | 324 | def preserve_fn(f): |
311 | """Ensures that function is called with correct context and that context is | |
312 | restored after return. Useful for wrapping functions that return a deferred | |
313 | which you don't yield on. | |
314 | """ | |
325 | """Wraps a function, to ensure that the current context is restored after | |
326 | return from the function, and that the sentinel context is set once the | |
327 |     deferred returned by the function completes. | 
328 | ||
329 | Useful for wrapping functions that return a deferred which you don't yield | |
330 | on. | |
331 | """ | |
332 | def reset_context(result): | |
333 | LoggingContext.set_current_context(LoggingContext.sentinel) | |
334 | return result | |
335 | ||
336 | # XXX: why is this here rather than inside g? surely we want to preserve | |
337 | # the context from the time the function was called, not when it was | |
338 | # wrapped? | |
315 | 339 | current = LoggingContext.current_context() |
316 | 340 | |
317 | 341 | def g(*args, **kwargs): |
318 | with PreserveLoggingContext(current): | |
319 | res = f(*args, **kwargs) | |
320 | if isinstance(res, defer.Deferred): | |
321 | return preserve_context_over_deferred( | |
322 | res, context=LoggingContext.sentinel | |
323 | ) | |
324 | else: | |
325 | return res | |
342 | res = f(*args, **kwargs) | |
343 | if isinstance(res, defer.Deferred) and not res.called: | |
344 | # The function will have reset the context before returning, so | |
345 | # we need to restore it now. | |
346 | LoggingContext.set_current_context(current) | |
347 | ||
348 | # The original context will be restored when the deferred | |
349 | # completes, but there is nothing waiting for it, so it will | |
350 | # get leaked into the reactor or some other function which | |
351 | # wasn't expecting it. We therefore need to reset the context | |
352 | # here. | |
353 | # | |
354 | # (If this feels asymmetric, consider it this way: we are | |
355 | # effectively forking a new thread of execution. We are | |
356 | # probably currently within a ``with LoggingContext()`` block, | |
357 | # which is supposed to have a single entry and exit point. But | |
358 | # by spawning off another deferred, we are effectively | |
359 | # adding a new exit point.) | |
360 | res.addBoth(reset_context) | |
361 | return res | |
326 | 362 | return g |
363 | ||
364 | ||
365 | @defer.inlineCallbacks | |
366 | def make_deferred_yieldable(deferred): | |
367 | """Given a deferred, make it follow the Synapse logcontext rules: | |
368 | ||
369 | If the deferred has completed (or is not actually a Deferred), essentially | |
370 | does nothing (just returns another completed deferred with the | |
371 | result/failure). | |
372 | ||
373 | If the deferred has not yet completed, resets the logcontext before | |
374 | returning a deferred. Then, when the deferred completes, restores the | |
375 | current logcontext before running callbacks/errbacks. | |
376 | ||
377 | (This is more-or-less the opposite operation to preserve_fn.) | |
378 | """ | |
379 | with PreserveLoggingContext(): | |
380 | r = yield deferred | |
381 | defer.returnValue(r) | |
327 | 382 | |
328 | 383 | |
329 | 384 | # modules to ignore in `logcontext_tracer` |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2017 Vector Creations Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import phonenumbers | |
16 | from synapse.api.errors import SynapseError | |
17 | ||
18 | ||
19 | def phone_number_to_msisdn(country, number): | |
20 | """ | |
21 | Takes an ISO-3166-1 2 letter country code and phone number and | |
22 | returns an msisdn representing the canonical version of that | |
23 | phone number. | |
24 | Args: | |
25 | country (str): ISO-3166-1 2 letter country code | |
26 | number (str): Phone number in a national or international format | |
27 | ||
28 | Returns: | |
29 | (str) The canonical form of the phone number, as an msisdn | |
30 | Raises: | |
31 | SynapseError if the number could not be parsed. | |
32 | """ | |
33 | try: | |
34 | phoneNumber = phonenumbers.parse(number, country) | |
35 | except phonenumbers.NumberParseException: | |
36 | raise SynapseError(400, "Unable to parse phone number") | |
37 | return phonenumbers.format_number( | |
38 | phoneNumber, phonenumbers.PhoneNumberFormat.E164 | |
39 | )[1:] |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | ||
14 | import synapse.util.logcontext | |
15 | 15 | from twisted.internet import defer |
16 | 16 | |
17 | 17 | from synapse.api.errors import CodeMessageException |
34 | 34 | |
35 | 35 | |
36 | 36 | @defer.inlineCallbacks |
37 | def get_retry_limiter(destination, clock, store, **kwargs): | |
37 | def get_retry_limiter(destination, clock, store, ignore_backoff=False, | |
38 | **kwargs): | |
38 | 39 | """For a given destination check if we have previously failed to |
39 | 40 | send a request there and are waiting before retrying the destination. |
40 | 41 | If we are not ready to retry the destination, this will raise a |
41 | 42 | NotRetryingDestination exception. Otherwise, will return a Context Manager |
42 | 43 | that will mark the destination as down if an exception is thrown (excluding |
43 | 44 | CodeMessageException with code < 500) |
45 | ||
46 | Args: | |
47 | destination (str): name of homeserver | |
48 | clock (synapse.util.clock): timing source | |
49 | store (synapse.storage.transactions.TransactionStore): datastore | |
50 | ignore_backoff (bool): true to ignore the historical backoff data and | |
51 | try the request anyway. We will still update the next | |
52 | retry_interval on success/failure. | |
44 | 53 | |
45 | 54 | Example usage: |
46 | 55 | |
65 | 74 | |
66 | 75 | now = int(clock.time_msec()) |
67 | 76 | |
68 | if retry_last_ts + retry_interval > now: | |
77 | if not ignore_backoff and retry_last_ts + retry_interval > now: | |
69 | 78 | raise NotRetryingDestination( |
70 | 79 | retry_last_ts=retry_last_ts, |
71 | 80 | retry_interval=retry_interval, |
123 | 132 | |
124 | 133 | def __exit__(self, exc_type, exc_val, exc_tb): |
125 | 134 | valid_err_code = False |
126 | if exc_type is not None and issubclass(exc_type, CodeMessageException): | |
135 | if exc_type is None: | |
136 | valid_err_code = True | |
137 | elif not issubclass(exc_type, Exception): | |
138 | # avoid treating exceptions which don't derive from Exception as | |
139 | # failures; this is mostly so as not to catch defer._DefGen. | |
140 | valid_err_code = True | |
141 | elif issubclass(exc_type, CodeMessageException): | |
127 | 142 | # Some error codes are perfectly fine for some APIs, whereas other |
128 | 143 |             # APIs may expect never to receive e.g. a 404. It's important to
129 | 144 | # handle 404 as some remote servers will return a 404 when the HS |
141 | 156 | else: |
142 | 157 | valid_err_code = False |
143 | 158 | |
144 | if exc_type is None or valid_err_code: | |
159 | if valid_err_code: | |
145 | 160 | # We connected successfully. |
146 | 161 | if not self.retry_interval: |
147 | 162 | return |
148 | 163 | |
164 | logger.debug("Connection to %s was successful; clearing backoff", | |
165 | self.destination) | |
149 | 166 | retry_last_ts = 0 |
150 | 167 | self.retry_interval = 0 |
151 | 168 | else: |
159 | 176 | else: |
160 | 177 | self.retry_interval = self.min_retry_interval |
161 | 178 | |
179 | logger.debug( | |
180 | "Connection to %s was unsuccessful (%s(%s)); backoff now %i", | |
181 | self.destination, exc_type, exc_val, self.retry_interval | |
182 | ) | |
162 | 183 | retry_last_ts = int(self.clock.time_msec()) |
163 | 184 | |
164 | 185 | @defer.inlineCallbacks |
172 | 193 | "Failed to store set_destination_retry_timings", |
173 | 194 | ) |
174 | 195 | |
175 | store_retry_timings() | |
196 | # we deliberately do this in the background. | |
197 | synapse.util.logcontext.preserve_fn(store_retry_timings)() |
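The `ignore_backoff` flag added above skips the "are we allowed to retry yet?" check while still updating the retry state on success/failure. The sketch below models that logic with a minimal limiter; the class name and constants are illustrative, not synapse's actual values.

```python
# Minimal sketch of per-destination retry backoff: retry_last_ts records
# the last failure, retry_interval grows exponentially, and ignore_backoff
# bypasses the gate without disabling the bookkeeping.

MIN_INTERVAL = 10 * 60 * 1000  # 10 minutes, in ms (illustrative)
MULTIPLIER = 5                 # illustrative growth factor


class ToyLimiter(object):
    def __init__(self):
        self.retry_last_ts = 0
        self.retry_interval = 0

    def check(self, now, ignore_backoff=False):
        # raise if we are still inside the backoff window
        if not ignore_backoff and self.retry_last_ts + self.retry_interval > now:
            raise RuntimeError("NotRetryingDestination")

    def on_success(self):
        # a successful connection clears the backoff entirely
        self.retry_last_ts = 0
        self.retry_interval = 0

    def on_failure(self, now):
        if self.retry_interval:
            self.retry_interval *= MULTIPLIER
        else:
            self.retry_interval = MIN_INTERVAL
        self.retry_last_ts = now


lim = ToyLimiter()
lim.check(now=0)                  # fresh destination: allowed
lim.on_failure(now=0)             # first failure: start backing off
try:
    lim.check(now=1000)           # 1s later: still inside the window
    raise AssertionError("should have raised")
except RuntimeError:
    pass
lim.check(now=1000, ignore_backoff=True)   # flag lets the request through
lim.on_failure(now=1000)
assert lim.retry_interval == MIN_INTERVAL * MULTIPLIER
lim.on_success()
assert lim.retry_interval == 0
```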
133 | 133 | if prev_membership not in MEMBERSHIP_PRIORITY: |
134 | 134 | prev_membership = "leave" |
135 | 135 | |
136 | # Always allow the user to see their own leave events, otherwise | |
137 | # they won't see the room disappear if they reject the invite | |
138 | if membership == "leave" and ( | |
139 | prev_membership == "join" or prev_membership == "invite" | |
140 | ): | |
141 | return True | |
142 | ||
136 | 143 | new_priority = MEMBERSHIP_PRIORITY.index(membership) |
137 | 144 | old_priority = MEMBERSHIP_PRIORITY.index(prev_membership) |
138 | 145 | if old_priority < new_priority: |
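The visibility change in the hunk above makes a user's own "leave" event visible whenever they were previously joined or invited, so rejecting an invite makes the room disappear from their client. A standalone sketch of that rule (function name hypothetical):

```python
# Sketch of the rule added above: a user's own "leave" is always visible
# if their previous membership was "join" or "invite"; unknown previous
# memberships are treated as "leave", as in the surrounding code.

MEMBERSHIP_PRIORITY = ("join", "invite", "knock", "leave", "ban")


def can_see_own_leave(membership, prev_membership):
    if prev_membership not in MEMBERSHIP_PRIORITY:
        prev_membership = "leave"
    return membership == "leave" and prev_membership in ("join", "invite")


assert can_see_own_leave("leave", "invite")   # rejected invite stays visible
assert can_see_own_leave("leave", "join")     # ordinary departure
assert not can_see_own_leave("leave", "ban")  # banned: rule does not apply
```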
22 | 22 | |
23 | 23 | from synapse.api.filtering import Filter |
24 | 24 | from synapse.events import FrozenEvent |
25 | from synapse.api.errors import SynapseError | |
26 | ||
27 | import jsonschema | |
25 | 28 | |
26 | 29 | user_localpart = "test_user" |
27 | 30 | |
52 | 55 | self.filtering = hs.get_filtering() |
53 | 56 | |
54 | 57 | self.datastore = hs.get_datastore() |
58 | ||
59 | def test_errors_on_invalid_filters(self): | |
60 | invalid_filters = [ | |
61 | {"boom": {}}, | |
62 | {"account_data": "Hello World"}, | |
63 | {"event_fields": ["\\foo"]}, | |
64 | {"room": {"timeline": {"limit": 0}, "state": {"not_bars": ["*"]}}}, | |
65 | {"event_format": "other"}, | |
66 | {"room": {"not_rooms": ["#foo:pik-test"]}}, | |
67 | {"presence": {"senders": ["@bar;pik.test.com"]}} | |
68 | ] | |
69 | for filter in invalid_filters: | |
70 | with self.assertRaises(SynapseError) as check_filter_error: | |
71 | self.filtering.check_valid_filter(filter) | |
72 | self.assertIsInstance(check_filter_error.exception, SynapseError) | |
73 | ||
74 | def test_valid_filters(self): | |
75 | valid_filters = [ | |
76 | { | |
77 | "room": { | |
78 | "timeline": {"limit": 20}, | |
79 | "state": {"not_types": ["m.room.member"]}, | |
80 | "ephemeral": {"limit": 0, "not_types": ["*"]}, | |
81 | "include_leave": False, | |
82 | "rooms": ["!dee:pik-test"], | |
83 | "not_rooms": ["!gee:pik-test"], | |
84 | "account_data": {"limit": 0, "types": ["*"]} | |
85 | } | |
86 | }, | |
87 | { | |
88 | "room": { | |
89 | "state": { | |
90 | "types": ["m.room.*"], | |
91 | "not_rooms": ["!726s6s6q:example.com"] | |
92 | }, | |
93 | "timeline": { | |
94 | "limit": 10, | |
95 | "types": ["m.room.message"], | |
96 | "not_rooms": ["!726s6s6q:example.com"], | |
97 | "not_senders": ["@spam:example.com"] | |
98 | }, | |
99 | "ephemeral": { | |
100 | "types": ["m.receipt", "m.typing"], | |
101 | "not_rooms": ["!726s6s6q:example.com"], | |
102 | "not_senders": ["@spam:example.com"] | |
103 | } | |
104 | }, | |
105 | "presence": { | |
106 | "types": ["m.presence"], | |
107 | "not_senders": ["@alice:example.com"] | |
108 | }, | |
109 | "event_format": "client", | |
110 | "event_fields": ["type", "content", "sender"] | |
111 | } | |
112 | ] | |
113 | for filter in valid_filters: | |
114 | try: | |
115 | self.filtering.check_valid_filter(filter) | |
116 | except jsonschema.ValidationError as e: | |
117 | self.fail(e) | |
118 | ||
119 | def test_limits_are_applied(self): | |
120 | # TODO | |
121 | pass | |
55 | 122 | |
56 | 123 | def test_definition_types_works_with_literals(self): |
57 | 124 | definition = { |
92 | 92 | "room_alias": "#another:remote", |
93 | 93 | }, |
94 | 94 | retry_on_dns_fail=False, |
95 | ignore_backoff=True, | |
95 | 96 | ) |
96 | 97 | |
97 | 98 | @defer.inlineCallbacks |
323 | 323 | state = UserPresenceState.default(user_id) |
324 | 324 | state = state.copy_and_replace( |
325 | 325 | state=PresenceState.ONLINE, |
326 | last_active_ts=now, | |
326 | last_active_ts=0, | |
327 | 327 | last_user_sync_ts=now - SYNC_ONLINE_TIMEOUT - 1, |
328 | 328 | ) |
329 | 329 |
118 | 118 | self.mock_federation.make_query.assert_called_with( |
119 | 119 | destination="remote", |
120 | 120 | query_type="profile", |
121 | args={"user_id": "@alice:remote", "field": "displayname"} | |
121 | args={"user_id": "@alice:remote", "field": "displayname"}, | |
122 | ignore_backoff=True, | |
122 | 123 | ) |
123 | 124 | |
124 | 125 | @defer.inlineCallbacks |
191 | 191 | ), |
192 | 192 | json_data_callback=ANY, |
193 | 193 | long_retries=True, |
194 | backoff_on_404=True, | |
194 | 195 | ), |
195 | 196 | defer.succeed((200, "OK")) |
196 | 197 | ) |
262 | 263 | ), |
263 | 264 | json_data_callback=ANY, |
264 | 265 | long_retries=True, |
266 | backoff_on_404=True, | |
265 | 267 | ), |
266 | 268 | defer.succeed((200, "OK")) |
267 | 269 | ) |
67 | 67 | code, body = yield get |
68 | 68 | self.assertEquals(code, 200) |
69 | 69 | self.assertEquals(body["events"]["field_names"], [ |
70 | "position", "internal", "json", "state_group" | |
70 | "position", "event_id", "room_id", "type", "state_key", | |
71 | 71 | ]) |
72 | 72 | |
73 | 73 | @defer.inlineCallbacks |
32 | 32 | class FilterTestCase(unittest.TestCase): |
33 | 33 | |
34 | 34 | USER_ID = "@apple:test" |
35 | EXAMPLE_FILTER = {"type": ["m.*"]} | |
36 | EXAMPLE_FILTER_JSON = '{"type": ["m.*"]}' | |
35 | EXAMPLE_FILTER = {"room": {"timeline": {"types": ["m.room.message"]}}} | |
36 | EXAMPLE_FILTER_JSON = '{"room": {"timeline": {"types": ["m.room.message"]}}}' | |
37 | 37 | TO_REGISTER = [filter] |
38 | 38 | |
39 | 39 | @defer.inlineCallbacks |
88 | 88 | @defer.inlineCallbacks |
89 | 89 | def test_select_one_1col(self): |
90 | 90 | self.mock_txn.rowcount = 1 |
91 | self.mock_txn.fetchall.return_value = [("Value",)] | |
91 | self.mock_txn.__iter__ = Mock(return_value=iter([("Value",)])) | |
92 | 92 | |
93 | 93 | value = yield self.datastore._simple_select_one_onecol( |
94 | 94 | table="tablename", |
135 | 135 | @defer.inlineCallbacks |
136 | 136 | def test_select_list(self): |
137 | 137 | self.mock_txn.rowcount = 3 |
138 | self.mock_txn.fetchall.return_value = ((1,), (2,), (3,)) | |
138 | self.mock_txn.__iter__ = Mock(return_value=iter([(1,), (2,), (3,)])) | |
139 | 139 | self.mock_txn.description = ( |
140 | 140 | ("colA", None, None, None, None, None, None), |
141 | 141 | ) |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2017 Vector Creations Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import signedjson.key | |
16 | from twisted.internet import defer | |
17 | ||
18 | import tests.unittest | |
19 | import tests.utils | |
20 | ||
21 | ||
22 | class KeyStoreTestCase(tests.unittest.TestCase): | |
23 | def __init__(self, *args, **kwargs): | |
24 | super(KeyStoreTestCase, self).__init__(*args, **kwargs) | |
25 | self.store = None # type: synapse.storage.keys.KeyStore | |
26 | ||
27 | @defer.inlineCallbacks | |
28 | def setUp(self): | |
29 | hs = yield tests.utils.setup_test_homeserver() | |
30 | self.store = hs.get_datastore() | |
31 | ||
32 | @defer.inlineCallbacks | |
33 | def test_get_server_verify_keys(self): | |
34 | key1 = signedjson.key.decode_verify_key_base64( | |
35 | "ed25519", "key1", "fP5l4JzpZPq/zdbBg5xx6lQGAAOM9/3w94cqiJ5jPrw" | |
36 | ) | |
37 | key2 = signedjson.key.decode_verify_key_base64( | |
38 | "ed25519", "key2", "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw" | |
39 | ) | |
40 | yield self.store.store_server_verify_key( | |
41 | "server1", "from_server", 0, key1 | |
42 | ) | |
43 | yield self.store.store_server_verify_key( | |
44 | "server1", "from_server", 0, key2 | |
45 | ) | |
46 | ||
47 | res = yield self.store.get_server_verify_keys( | |
48 | "server1", ["ed25519:key1", "ed25519:key2", "ed25519:key3"]) | |
49 | ||
50 | self.assertEqual(len(res.keys()), 2) | |
51 | self.assertEqual(res["ed25519:key1"].version, "key1") | |
52 | self.assertEqual(res["ed25519:key2"].version, "key2") |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2017 Vector Creations Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2016 OpenMarket Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | import logging | |
15 | ||
16 | import mock | |
17 | from synapse.api.errors import SynapseError | |
18 | from synapse.util import async | |
19 | from synapse.util import logcontext | |
20 | from twisted.internet import defer | |
21 | from synapse.util.caches import descriptors | |
22 | from tests import unittest | |
23 | ||
24 | logger = logging.getLogger(__name__) | |
25 | ||
26 | ||
27 | class DescriptorTestCase(unittest.TestCase): | |
28 | @defer.inlineCallbacks | |
29 | def test_cache(self): | |
30 | class Cls(object): | |
31 | def __init__(self): | |
32 | self.mock = mock.Mock() | |
33 | ||
34 | @descriptors.cached() | |
35 | def fn(self, arg1, arg2): | |
36 | return self.mock(arg1, arg2) | |
37 | ||
38 | obj = Cls() | |
39 | ||
40 | obj.mock.return_value = 'fish' | |
41 | r = yield obj.fn(1, 2) | |
42 | self.assertEqual(r, 'fish') | |
43 | obj.mock.assert_called_once_with(1, 2) | |
44 | obj.mock.reset_mock() | |
45 | ||
46 | # a call with different params should call the mock again | |
47 | obj.mock.return_value = 'chips' | |
48 | r = yield obj.fn(1, 3) | |
49 | self.assertEqual(r, 'chips') | |
50 | obj.mock.assert_called_once_with(1, 3) | |
51 | obj.mock.reset_mock() | |
52 | ||
53 | # the two values should now be cached | |
54 | r = yield obj.fn(1, 2) | |
55 | self.assertEqual(r, 'fish') | |
56 | r = yield obj.fn(1, 3) | |
57 | self.assertEqual(r, 'chips') | |
58 | obj.mock.assert_not_called() | |
59 | ||
60 | @defer.inlineCallbacks | |
61 | def test_cache_num_args(self): | |
62 | """Only the first num_args arguments should matter to the cache""" | |
63 | ||
64 | class Cls(object): | |
65 | def __init__(self): | |
66 | self.mock = mock.Mock() | |
67 | ||
68 | @descriptors.cached(num_args=1) | |
69 | def fn(self, arg1, arg2): | |
70 | return self.mock(arg1, arg2) | |
71 | ||
72 | obj = Cls() | |
73 | obj.mock.return_value = 'fish' | |
74 | r = yield obj.fn(1, 2) | |
75 | self.assertEqual(r, 'fish') | |
76 | obj.mock.assert_called_once_with(1, 2) | |
77 | obj.mock.reset_mock() | |
78 | ||
79 | # a call with different params should call the mock again | |
80 | obj.mock.return_value = 'chips' | |
81 | r = yield obj.fn(2, 3) | |
82 | self.assertEqual(r, 'chips') | |
83 | obj.mock.assert_called_once_with(2, 3) | |
84 | obj.mock.reset_mock() | |
85 | ||
86 | # the two values should now be cached; we should be able to vary | |
87 | # the second argument and still get the cached result. | |
88 | r = yield obj.fn(1, 4) | |
89 | self.assertEqual(r, 'fish') | |
90 | r = yield obj.fn(2, 5) | |
91 | self.assertEqual(r, 'chips') | |
92 | obj.mock.assert_not_called() | |
93 | ||
94 | def test_cache_logcontexts(self): | |
95 | """Check that logcontexts are set and restored correctly when | |
96 | using the cache.""" | |
97 | ||
98 | complete_lookup = defer.Deferred() | |
99 | ||
100 | class Cls(object): | |
101 | @descriptors.cached() | |
102 | def fn(self, arg1): | |
103 | @defer.inlineCallbacks | |
104 | def inner_fn(): | |
105 | with logcontext.PreserveLoggingContext(): | |
106 | yield complete_lookup | |
107 | defer.returnValue(1) | |
108 | ||
109 | return inner_fn() | |
110 | ||
111 | @defer.inlineCallbacks | |
112 | def do_lookup(): | |
113 | with logcontext.LoggingContext() as c1: | |
114 | c1.name = "c1" | |
115 | r = yield obj.fn(1) | |
116 | self.assertEqual(logcontext.LoggingContext.current_context(), | |
117 | c1) | |
118 | defer.returnValue(r) | |
119 | ||
120 | def check_result(r): | |
121 | self.assertEqual(r, 1) | |
122 | ||
123 | obj = Cls() | |
124 | ||
125 | # set off a deferred which will do a cache lookup | |
126 | d1 = do_lookup() | |
127 | self.assertEqual(logcontext.LoggingContext.current_context(), | |
128 | logcontext.LoggingContext.sentinel) | |
129 | d1.addCallback(check_result) | |
130 | ||
131 | # and another | |
132 | d2 = do_lookup() | |
133 | self.assertEqual(logcontext.LoggingContext.current_context(), | |
134 | logcontext.LoggingContext.sentinel) | |
135 | d2.addCallback(check_result) | |
136 | ||
137 | # let the lookup complete | |
138 | complete_lookup.callback(None) | |
139 | ||
140 | return defer.gatherResults([d1, d2]) | |
141 | ||
142 | def test_cache_logcontexts_with_exception(self): | |
143 | """Check that the cache sets and restores logcontexts correctly when | |
144 | the lookup function throws an exception""" | |
145 | ||
146 | class Cls(object): | |
147 | @descriptors.cached() | |
148 | def fn(self, arg1): | |
149 | @defer.inlineCallbacks | |
150 | def inner_fn(): | |
151 | yield async.run_on_reactor() | |
152 | raise SynapseError(400, "blah") | |
153 | ||
154 | return inner_fn() | |
155 | ||
156 | @defer.inlineCallbacks | |
157 | def do_lookup(): | |
158 | with logcontext.LoggingContext() as c1: | |
159 | c1.name = "c1" | |
160 | try: | |
161 | yield obj.fn(1) | |
162 | self.fail("No exception thrown") | |
163 | except SynapseError: | |
164 | pass | |
165 | ||
166 | self.assertEqual(logcontext.LoggingContext.current_context(), | |
167 | c1) | |
168 | ||
169 | obj = Cls() | |
170 | ||
171 | # set off a deferred which will do a cache lookup | |
172 | d1 = do_lookup() | |
173 | self.assertEqual(logcontext.LoggingContext.current_context(), | |
174 | logcontext.LoggingContext.sentinel) | |
175 | ||
176 | return d1 |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2017 Vector Creations Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | from synapse import util | |
15 | from twisted.internet import defer | |
16 | from tests import unittest | |
17 | ||
18 | ||
19 | class ClockTestCase(unittest.TestCase): | |
20 | @defer.inlineCallbacks | |
21 | def test_time_bound_deferred(self): | |
22 | # just a deferred which never resolves | |
23 | slow_deferred = defer.Deferred() | |
24 | ||
25 | clock = util.Clock() | |
26 | time_bound = clock.time_bound_deferred(slow_deferred, 0.001) | |
27 | ||
28 | try: | |
29 | yield time_bound | |
30 | self.fail("Expected timedout error, but got nothing") | |
31 | except util.DeferredTimedOutError: | |
32 | pass |
0 | import twisted.python.failure | |
0 | 1 | from twisted.internet import defer |
1 | 2 | from twisted.internet import reactor |
2 | 3 | from .. import unittest |
3 | 4 | |
4 | 5 | from synapse.util.async import sleep |
6 | from synapse.util import logcontext | |
5 | 7 | from synapse.util.logcontext import LoggingContext |
6 | 8 | |
7 | 9 | |
32 | 34 | context_one.test_key = "one" |
33 | 35 | yield sleep(0) |
34 | 36 | self._check_test_key("one") |
37 | ||
38 | def _test_preserve_fn(self, function): | |
39 | sentinel_context = LoggingContext.current_context() | |
40 | ||
41 | callback_completed = [False] | |
42 | ||
43 | @defer.inlineCallbacks | |
44 | def cb(): | |
45 | context_one.test_key = "one" | |
46 | yield function() | |
47 | self._check_test_key("one") | |
48 | ||
49 | callback_completed[0] = True | |
50 | ||
51 | with LoggingContext() as context_one: | |
52 | context_one.test_key = "one" | |
53 | ||
54 | # fire off function, but don't wait on it. | |
55 | logcontext.preserve_fn(cb)() | |
56 | ||
57 | self._check_test_key("one") | |
58 | ||
59 | # now wait for the function under test to have run, and check that | |
60 | # the logcontext is left in a sane state. | |
61 | d2 = defer.Deferred() | |
62 | ||
63 | def check_logcontext(): | |
64 | if not callback_completed[0]: | |
65 | reactor.callLater(0.01, check_logcontext) | |
66 | return | |
67 | ||
68 | # make sure that the context was reset before it got thrown back | |
69 | # into the reactor | |
70 | try: | |
71 | self.assertIs(LoggingContext.current_context(), | |
72 | sentinel_context) | |
73 | d2.callback(None) | |
74 | except BaseException: | |
75 | d2.errback(twisted.python.failure.Failure()) | |
76 | ||
77 | reactor.callLater(0.01, check_logcontext) | |
78 | ||
79 | # test is done once d2 finishes | |
80 | return d2 | |
81 | ||
82 | def test_preserve_fn_with_blocking_fn(self): | |
83 | @defer.inlineCallbacks | |
84 | def blocking_function(): | |
85 | yield sleep(0) | |
86 | ||
87 | return self._test_preserve_fn(blocking_function) | |
88 | ||
89 | def test_preserve_fn_with_non_blocking_fn(self): | |
90 | @defer.inlineCallbacks | |
91 | def nonblocking_function(): | |
92 | with logcontext.PreserveLoggingContext(): | |
93 | yield defer.succeed(None) | |
94 | ||
95 | return self._test_preserve_fn(nonblocking_function) |