Codebase list matrix-synapse / d6c3f2c
Imported Upstream version 0.23.0 Erik Johnston 6 years ago
63 changed file(s) with 2357 addition(s) and 892 deletion(s).
0 <!--
1
2 **IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
3 You will likely get better support more quickly if you ask in ** #matrix:matrix.org ** ;)
4
5
6 This is a bug report template. By following the instructions below and
7 filling out the sections with your information, you will help us to get all
8 the necessary data to fix your issue.
9
10 You can also preview your report before submitting it. You may remove sections
11 that aren't relevant to your particular case.
12
13 Text between <!-- and --​> marks will be invisible in the report.
14
15 -->
16
17 ### Description
18
19 Describe here the problem that you are experiencing, or the feature you are requesting.
20
21 ### Steps to reproduce
22
23 - For bugs, list the steps
24 - that reproduce the bug
25 - using hyphens as bullet points
26
27 Describe how what happens differs from what you expected.
28
29 If you can identify any relevant log snippets from _homeserver.log_, please include
30 those here (please be careful to remove any personal or private data):
31
32 ### Version information
33
34 <!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
35
36 - **Homeserver**: Was this issue identified on matrix.org or another homeserver?
37
38 If not matrix.org:
39 - **Version**: What version of Synapse is running? <!--
40 You can find the Synapse version by inspecting the server headers (replace matrix.org with
41 your own homeserver domain):
42 $ curl -v https://matrix.org/_matrix/client/versions 2>&1 | grep "Server:"
43 -->
44 - **Install method**: package manager/git clone/pip
45 - **Platform**: Tell us about the environment in which your homeserver is operating
46 - distro, hardware, if it's running in a vm/container, etc.
0 Changes in synapse v0.23.0 (2017-10-02)
1 =======================================
2
3 No changes since v0.23.0-rc2
4
5
6 Changes in synapse v0.23.0-rc2 (2017-09-26)
7 ===========================================
8
9 Bug fixes:
10
11 * Fix regression in performance of syncs (PR #2470)
12
13
14 Changes in synapse v0.23.0-rc1 (2017-09-25)
15 ===========================================
16
17 Features:
18
19 * Add a frontend proxy worker (PR #2344)
20 * Add support for event_id_only push format (PR #2450)
21 * Add a PoC for filtering spammy events (PR #2456)
22 * Add a config option to block all room invites (PR #2457)
23
24
25 Changes:
26
27 * Use bcrypt module instead of py-bcrypt (PR #2288) Thanks to @kyrias!
28 * Improve performance of generating push notifications (PR #2343, #2357, #2365,
29 #2366, #2371)
30 * Improve DB performance for device list handling in sync (PR #2362)
31 * Include a sample prometheus config (PR #2416)
32 * Document known-to-work postgres version (PR #2433) Thanks to @ptman!
33
34
35 Bug fixes:
36
37 * Fix caching error in the push evaluator (PR #2332)
38 * Fix bug where pusherpool didn't start and broke some rooms (PR #2342)
39 * Fix port script for user directory tables (PR #2375)
40 * Fix device lists notifications when user rejoins a room (PR #2443, #2449)
41 * Fix sync to always send down current state events in timeline (PR #2451)
42 * Fix bug where guest users were incorrectly kicked (PR #2453)
43 * Fix bug talking to IPv6 only servers using SRV records (PR #2462)
44
45
046 Changes in synapse v0.22.1 (2017-07-06)
147 =======================================
248
2626 exclude jenkins*
2727 recursive-exclude jenkins *.sh
2828
29 prune .github
2930 prune demo/etc
199199 .. __: `key_management`_
200200
201201 The default configuration exposes two HTTP ports: 8008 and 8448. Port 8008 is
202 configured without TLS; it is not recommended this be exposed outside your
203 local network. Port 8448 is configured to use TLS with a self-signed
204 certificate. This is fine for testing with but, to avoid your clients
205 complaining about the certificate, you will almost certainly want to use
206 another certificate for production purposes. (Note that a self-signed
202 configured without TLS; it should be behind a reverse proxy for TLS/SSL
203 termination on port 443, which in turn should be used for clients. Port 8448
204 is configured to use TLS with a self-signed certificate. If you would like
205 to do initial testing with a client without having to set up a reverse proxy,
206 you can temporarily use another certificate. (Note that a self-signed
207207 certificate is fine for `Federation`_). You can do so by changing
208208 ``tls_certificate_path``, ``tls_private_key_path`` and ``tls_dh_params_path``
209209 in ``homeserver.yaml``; alternatively, you can use a reverse-proxy, but be sure
282282 The easiest way to try out your new Synapse installation is by connecting to it
283283 from a web client. The easiest option is probably the one at
284284 http://riot.im/app. You will need to specify a "Custom server" when you log on
285 or register: set this to ``https://localhost:8448`` - remember to specify the
286 port (``:8448``) unless you changed the configuration. (Leave the identity
285 or register: set this to ``https://domain.tld`` if you set up a reverse proxy
286 following the recommended setup, or ``https://localhost:8448`` - remember to specify the
287 port (``:8448``), since it is not the default ``:443``, unless you changed the configuration. (Leave the identity
287288 server as the default - see `Identity servers`_.)
289
289 If using port 8448 you will run into errors until you accept the self-signed
290 certificate. You can easily do this by going to ``https://localhost:8448``
291 directly with your browser and accepting the presented certificate. You can then
292 go back to your web client and proceed.
288294
289295 If all goes well you should at least be able to log in, create a room, and
290296 start sending messages.
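As a quick sanity check from the command line, you can also hit the
(unauthenticated) ``/versions`` endpoint. A minimal sketch using Python's
``requests`` library (``verify=False`` is only there to tolerate the
self-signed certificate on port 8448)::

    import requests

    # /versions needs no access token, so it makes a handy liveness check.
    r = requests.get("https://localhost:8448/_matrix/client/versions", verify=False)
    print(r.status_code)  # expect 200
    print(r.json())       # the client-server API versions this server supports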
592598 domain name. For example, you might want to run your server at
593599 ``synapse.example.com``, but have your Matrix user-ids look like
594600 ``@user:example.com``. (A SRV record also allows you to change the port from
595 the default 8448. However, if you are thinking of using a reverse-proxy, be
596 sure to read `Reverse-proxying the federation port`_ first.)
601 the default 8448. However, if you are thinking of using a reverse-proxy on the
602 federation port, which is not recommended, be sure to read
603 `Reverse-proxying the federation port`_ first.)
597604
598605 To use a SRV record, first create your SRV record and publish it in DNS. This
599606 should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
673680 Using a reverse proxy with Synapse
674681 ==================================
675682
676 It is possible to put a reverse proxy such as
683 It is recommended to put a reverse proxy such as
677684 `nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
678685 `Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
679686 `HAProxy <http://www.haproxy.org/>`_ in front of Synapse. One advantage of
691698 `Reverse-proxying the federation port`_.
692699
693700 The recommended setup is therefore to configure your reverse-proxy on port 443
694 for client connections, but to also expose port 8448 for server-server
695 connections. All the Matrix endpoints begin ``/_matrix``, so an example nginx
696 configuration might look like::
701 to port 8008 of synapse for client connections, but to also directly expose port
702 8448 for server-server connections. All the Matrix endpoints begin with ``/_matrix``,
703 so an example nginx configuration might look like::
697704
698705 server {
699706 listen 443 ssl;
44 what you currently have installed to the current version of synapse. The extra
55 instructions that may be required are listed later in this document.
66
7 If synapse was installed in a virtualenv then activate that virtualenv before
8 upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then run:
7 1. If synapse was installed in a virtualenv then activate that virtualenv before
8 upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then
9 run:
10
11 .. code:: bash
12
13 source ~/.synapse/bin/activate
14
15 2. If synapse was installed using pip then upgrade to the latest version by
16 running:
17
18 .. code:: bash
19
20 pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
21
22 # restart synapse
23 synctl restart
24
25
26 If synapse was installed using git then upgrade to the latest version by
27 running:
28
29 .. code:: bash
30
31 # Pull the latest version of the master branch.
32 git pull
33 # Update the versions of synapse's python dependencies.
34 python synapse/python_dependencies.py | xargs pip install --upgrade
35
36 # restart synapse
37 ./synctl restart
38
39
40 To check whether your update was successful, you can check the Server header
41 returned by the Client-Server API:
942
1043 .. code:: bash
1144
12 source ~/.synapse/bin/activate
13
14 If synapse was installed using pip then upgrade to the latest version by
15 running:
16
17 .. code:: bash
18
19 pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
20
21 If synapse was installed using git then upgrade to the latest version by
22 running:
23
24 .. code:: bash
25
26 # Pull the latest version of the master branch.
27 git pull
28 # Update the versions of synapse's python dependencies.
29 python synapse/python_dependencies.py | xargs -n1 pip install --upgrade
30
31 To check whether your update was successful, run:
32
33 .. code:: bash
34
35 # replace your.server.domain with the domain of your synapse homeserver
36 curl https://<your.server.domain>/_matrix/federation/v1/version
37
38 So for the matrix.org homeserver the URL would be: https://matrix.org/_matrix/federation/v1/version.
39
45 # replace <host.name> with the hostname of your synapse homeserver.
46 # You may need to specify a port (e.g. :8448) if your server is not
47 # configured on port 443.
48 curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
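If you prefer to run the same check from Python, a rough equivalent of the
``curl`` command above (``verify=False`` mirrors ``curl -k``) is:

.. code:: python

    import requests

    # replace <host.name> with the hostname of your synapse homeserver,
    # adding a port such as :8448 if it is not configured on port 443
    r = requests.get("https://<host.name>/_matrix/client/versions", verify=False)
    print(r.headers.get("Server"))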
4049
4150 Upgrading to v0.15.0
4251 ====================
7685 ``homeserver.yaml``::
7786
7887 app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
79
88
8089 Where ``registration-01.yaml`` looks like::
8190
8291 url: <String> # e.g. "https://my.application.service.com"
165174 it before starting the new version of the homeserver.
166175
167176 The script "database-prepare-for-0.5.0.sh" should be used to upgrade the
168 database. This will save all user information, such as logins and profiles,
177 database. This will save all user information, such as logins and profiles,
169178 but will otherwise purge the database. This includes messages, which
170179 rooms the home server was a member of, and room alias mappings.
171180
174183 unfortunately, non-trivial and requires human intervention to resolve any
175184 resulting conflicts during the upgrade process.
176185
177 Before running the command, the homeserver should first be completely
186 Before running the command, the homeserver should first be completely
178187 shut down. To run it, simply specify the location of the database, e.g.:
179188
180189 ./scripts/database-prepare-for-0.5.0.sh "homeserver.db"
181190
182 Once this has successfully completed it will be safe to restart the
183 homeserver. You may notice that the homeserver takes a few seconds longer to
191 Once this has successfully completed it will be safe to restart the
192 homeserver. You may notice that the homeserver takes a few seconds longer to
184193 restart than usual as it reinitializes the database.
185194
186195 On startup of the new version, users can rejoin remote rooms either by using room
187196 aliases or by being reinvited. Alternatively, if any other homeserver sends a
188 message to a room that the homeserver was previously in, the local HS will
197 message to a room that the homeserver was previously in, the local HS will
189198 automatically rejoin the room.
190199
191200 Upgrading to v0.4.0
244253 --config-path homeserver.config \
245254 --generate-config
246255
247 This config can be edited if desired, for example to specify a different SSL
256 This config can be edited if desired, for example to specify a different SSL
248257 certificate to use. Once done you can run the home server using::
249258
250259 $ python synapse/app/homeserver.py --config-path homeserver.config
265274 it before starting the new version of the homeserver.
266275
267276 The script "database-prepare-for-0.0.1.sh" should be used to upgrade the
268 database. This will save all user information, such as logins and profiles,
277 database. This will save all user information, such as logins and profiles,
269278 but will otherwise purge the database. This includes messages, which
270279 rooms the home server was a member of, and room alias mappings.
271280
272 Before running the command, the homeserver should first be completely
281 Before running the command, the homeserver should first be completely
273282 shut down. To run it, simply specify the location of the database, e.g.:
274283
275284 ./scripts/database-prepare-for-0.0.1.sh "homeserver.db"
276285
277 Once this has successfully completed it will be safe to restart the
278 homeserver. You may notice that the homeserver takes a few seconds longer to
286 Once this has successfully completed it will be safe to restart the
287 homeserver. You may notice that the homeserver takes a few seconds longer to
279288 restart than usual as it reinitializes the database.
280289
281290 On startup of the new version, users can rejoin remote rooms either by using room
282291 aliases or by being reinvited. Alternatively, if any other homeserver sends a
283 message to a room that the homeserver was previously in, the local HS will
292 message to a room that the homeserver was previously in, the local HS will
284293 automatically rejoin the room.
0 This directory contains some sample monitoring config for using the
1 'Prometheus' monitoring server against synapse.
2
3 To use it, first install prometheus by following the instructions at
4
5 http://prometheus.io/
6
7 Then add a new job to the main prometheus.conf file:
8
9 job: {
10 name: "synapse"
11
12 target_group: {
13 target: "http://SERVER.LOCATION.HERE:PORT/_synapse/metrics"
14 }
15 }
16
17 Metrics are disabled by default when running synapse; they must be enabled
18 with the 'enable-metrics' option, either in the synapse config file or as a
19 command-line option.
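Before pointing prometheus at synapse, you can check that the metrics
listener is actually exporting data. A rough sketch in Python, assuming the
'requests' library and the same URL as in the target_group above:

    import requests

    # Fetch the plain-text metrics exposition from synapse and print the
    # synapse-specific series as a quick sanity check.
    url = "http://SERVER.LOCATION.HERE:PORT/_synapse/metrics"
    for line in requests.get(url).text.splitlines():
        if line.startswith("synapse_"):
            print(line)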
0 {{ template "head" . }}
1
2 {{ template "prom_content_head" . }}
3 <h1>System Resources</h1>
4
5 <h3>CPU</h3>
6 <div id="process_resource_utime"></div>
7 <script>
8 new PromConsole.Graph({
9 node: document.querySelector("#process_resource_utime"),
10 expr: "rate(process_cpu_seconds_total[2m]) * 100",
11 name: "[[job]]",
12 min: 0,
13 max: 100,
14 renderer: "line",
15 height: 150,
16 yAxisFormatter: PromConsole.NumberFormatter.humanize,
17 yHoverFormatter: PromConsole.NumberFormatter.humanize,
18 yUnits: "%",
19 yTitle: "CPU Usage"
20 })
21 </script>
22
23 <h3>Memory</h3>
24 <div id="process_resource_maxrss"></div>
25 <script>
26 new PromConsole.Graph({
27 node: document.querySelector("#process_resource_maxrss"),
28 expr: "process_psutil_rss:max",
29 name: "Maxrss",
30 min: 0,
31 renderer: "line",
32 height: 150,
33 yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
34 yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
35 yUnits: "bytes",
36 yTitle: "Usage"
37 })
38 </script>
39
40 <h3>File descriptors</h3>
41 <div id="process_fds"></div>
42 <script>
43 new PromConsole.Graph({
44 node: document.querySelector("#process_fds"),
45 expr: "process_open_fds{job='synapse'}",
46 name: "FDs",
47 min: 0,
48 renderer: "line",
49 height: 150,
50 yAxisFormatter: PromConsole.NumberFormatter.humanize,
51 yHoverFormatter: PromConsole.NumberFormatter.humanize,
52 yUnits: "",
53 yTitle: "Descriptors"
54 })
55 </script>
56
57 <h1>Reactor</h1>
58
59 <h3>Total reactor time</h3>
60 <div id="reactor_total_time"></div>
61 <script>
62 new PromConsole.Graph({
63 node: document.querySelector("#reactor_total_time"),
64 expr: "rate(python_twisted_reactor_tick_time:total[2m]) / 1000",
65 name: "time",
66 max: 1,
67 min: 0,
68 renderer: "area",
69 height: 150,
70 yAxisFormatter: PromConsole.NumberFormatter.humanize,
71 yHoverFormatter: PromConsole.NumberFormatter.humanize,
72 yUnits: "s/s",
73 yTitle: "Usage"
74 })
75 </script>
76
77 <h3>Average reactor tick time</h3>
78 <div id="reactor_average_time"></div>
79 <script>
80 new PromConsole.Graph({
81 node: document.querySelector("#reactor_average_time"),
82 expr: "rate(python_twisted_reactor_tick_time:total[2m]) / rate(python_twisted_reactor_tick_time:count[2m]) / 1000",
83 name: "time",
84 min: 0,
85 renderer: "line",
86 height: 150,
87 yAxisFormatter: PromConsole.NumberFormatter.humanize,
88 yHoverFormatter: PromConsole.NumberFormatter.humanize,
89 yUnits: "s",
90 yTitle: "Time"
91 })
92 </script>
93
94 <h3>Pending calls per tick</h3>
95 <div id="reactor_pending_calls"></div>
96 <script>
97 new PromConsole.Graph({
98 node: document.querySelector("#reactor_pending_calls"),
99 expr: "rate(python_twisted_reactor_pending_calls:total[30s])/rate(python_twisted_reactor_pending_calls:count[30s])",
100 name: "calls",
101 min: 0,
102 renderer: "line",
103 height: 150,
104 yAxisFormatter: PromConsole.NumberFormatter.humanize,
105 yHoverFormatter: PromConsole.NumberFormatter.humanize,
106 yTitle: "Pending Cals"
107 })
108 </script>
109
110 <h1>Storage</h1>
111
112 <h3>Queries</h3>
113 <div id="synapse_storage_query_time"></div>
114 <script>
115 new PromConsole.Graph({
116 node: document.querySelector("#synapse_storage_query_time"),
117 expr: "rate(synapse_storage_query_time:count[2m])",
118 name: "[[verb]]",
119 yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
120 yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
121 yUnits: "queries/s",
122 yTitle: "Queries"
123 })
124 </script>
125
126 <h3>Transactions</h3>
127 <div id="synapse_storage_transaction_time"></div>
128 <script>
129 new PromConsole.Graph({
130 node: document.querySelector("#synapse_storage_transaction_time"),
131 expr: "rate(synapse_storage_transaction_time:count[2m])",
132 name: "[[desc]]",
133 min: 0,
134 yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
135 yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
136 yUnits: "txn/s",
137 yTitle: "Transactions"
138 })
139 </script>
140
141 <h3>Transaction execution time</h3>
142 <div id="synapse_storage_transactions_time_msec"></div>
143 <script>
144 new PromConsole.Graph({
145 node: document.querySelector("#synapse_storage_transactions_time_msec"),
146 expr: "rate(synapse_storage_transaction_time:total[2m]) / 1000",
147 name: "[[desc]]",
148 min: 0,
149 yAxisFormatter: PromConsole.NumberFormatter.humanize,
150 yHoverFormatter: PromConsole.NumberFormatter.humanize,
151 yUnits: "s/s",
152 yTitle: "Usage"
153 })
154 </script>
155
156 <h3>Database scheduling latency</h3>
157 <div id="synapse_storage_schedule_time"></div>
158 <script>
159 new PromConsole.Graph({
160 node: document.querySelector("#synapse_storage_schedule_time"),
161 expr: "rate(synapse_storage_schedule_time:total[2m]) / 1000",
162 name: "Total latency",
163 min: 0,
164 yAxisFormatter: PromConsole.NumberFormatter.humanize,
165 yHoverFormatter: PromConsole.NumberFormatter.humanize,
166 yUnits: "s/s",
167 yTitle: "Usage"
168 })
169 </script>
170
171 <h3>Cache hit ratio</h3>
172 <div id="synapse_cache_ratio"></div>
173 <script>
174 new PromConsole.Graph({
175 node: document.querySelector("#synapse_cache_ratio"),
176 expr: "rate(synapse_util_caches_cache:total[2m]) * 100",
177 name: "[[name]]",
178 min: 0,
179 max: 100,
180 yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
181 yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
182 yUnits: "%",
183 yTitle: "Percentage"
184 })
185 </script>
186
187 <h3>Cache size</h3>
188 <div id="synapse_cache_size"></div>
189 <script>
190 new PromConsole.Graph({
191 node: document.querySelector("#synapse_cache_size"),
192 expr: "synapse_util_caches_cache:size",
193 name: "[[name]]",
194 yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
195 yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
196 yUnits: "",
197 yTitle: "Items"
198 })
199 </script>
200
201 <h1>Requests</h1>
202
203 <h3>Requests by Servlet</h3>
204 <div id="synapse_http_server_requests_servlet"></div>
205 <script>
206 new PromConsole.Graph({
207 node: document.querySelector("#synapse_http_server_requests_servlet"),
208 expr: "rate(synapse_http_server_requests:servlet[2m])",
209 name: "[[servlet]]",
210 yAxisFormatter: PromConsole.NumberFormatter.humanize,
211 yHoverFormatter: PromConsole.NumberFormatter.humanize,
212 yUnits: "req/s",
213 yTitle: "Requests"
214 })
215 </script>
216 <h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
217 <div id="synapse_http_server_requests_servlet_minus_events"></div>
218 <script>
219 new PromConsole.Graph({
220 node: document.querySelector("#synapse_http_server_requests_servlet_minus_events"),
221 expr: "rate(synapse_http_server_requests:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
222 name: "[[servlet]]",
223 yAxisFormatter: PromConsole.NumberFormatter.humanize,
224 yHoverFormatter: PromConsole.NumberFormatter.humanize,
225 yUnits: "req/s",
226 yTitle: "Requests"
227 })
228 </script>
229
230 <h3>Average response times</h3>
231 <div id="synapse_http_server_response_time_avg"></div>
232 <script>
233 new PromConsole.Graph({
234 node: document.querySelector("#synapse_http_server_response_time_avg"),
235 expr: "rate(synapse_http_server_response_time:total[2m]) / rate(synapse_http_server_response_time:count[2m]) / 1000",
236 name: "[[servlet]]",
237 yAxisFormatter: PromConsole.NumberFormatter.humanize,
238 yHoverFormatter: PromConsole.NumberFormatter.humanize,
239 yUnits: "s/req",
240 yTitle: "Response time"
241 })
242 </script>
243
244 <h3>All responses by code</h3>
245 <div id="synapse_http_server_responses"></div>
246 <script>
247 new PromConsole.Graph({
248 node: document.querySelector("#synapse_http_server_responses"),
249 expr: "rate(synapse_http_server_responses[2m])",
250 name: "[[method]] / [[code]]",
251 yAxisFormatter: PromConsole.NumberFormatter.humanize,
252 yHoverFormatter: PromConsole.NumberFormatter.humanize,
253 yUnits: "req/s",
254 yTitle: "Requests"
255 })
256 </script>
257
258 <h3>Error responses by code</h3>
259 <div id="synapse_http_server_responses_err"></div>
260 <script>
261 new PromConsole.Graph({
262 node: document.querySelector("#synapse_http_server_responses_err"),
263 expr: "rate(synapse_http_server_responses{code=~\"[45]..\"}[2m])",
264 name: "[[method]] / [[code]]",
265 yAxisFormatter: PromConsole.NumberFormatter.humanize,
266 yHoverFormatter: PromConsole.NumberFormatter.humanize,
267 yUnits: "req/s",
268 yTitle: "Requests"
269 })
270 </script>
271
272
273 <h3>CPU Usage</h3>
274 <div id="synapse_http_server_response_ru_utime"></div>
275 <script>
276 new PromConsole.Graph({
277 node: document.querySelector("#synapse_http_server_response_ru_utime"),
278 expr: "rate(synapse_http_server_response_ru_utime:total[2m])",
279 name: "[[servlet]]",
280 yAxisFormatter: PromConsole.NumberFormatter.humanize,
281 yHoverFormatter: PromConsole.NumberFormatter.humanize,
282 yUnits: "s/s",
283 yTitle: "CPU Usage"
284 })
285 </script>
286
287
288 <h3>DB Usage</h3>
289 <div id="synapse_http_server_response_db_txn_duration"></div>
290 <script>
291 new PromConsole.Graph({
292 node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
293 expr: "rate(synapse_http_server_response_db_txn_duration:total[2m])",
294 name: "[[servlet]]",
295 yAxisFormatter: PromConsole.NumberFormatter.humanize,
296 yHoverFormatter: PromConsole.NumberFormatter.humanize,
297 yUnits: "s/s",
298 yTitle: "DB Usage"
299 })
300 </script>
301
302
303 <h3>Average event send times</h3>
304 <div id="synapse_http_server_send_time_avg"></div>
305 <script>
306 new PromConsole.Graph({
307 node: document.querySelector("#synapse_http_server_send_time_avg"),
308 expr: "rate(synapse_http_server_response_time:total{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_time:count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
309 name: "[[servlet]]",
310 yAxisFormatter: PromConsole.NumberFormatter.humanize,
311 yHoverFormatter: PromConsole.NumberFormatter.humanize,
312 yUnits: "s/req",
313 yTitle: "Response time"
314 })
315 </script>
316
317 <h1>Federation</h1>
318
319 <h3>Sent Messages</h3>
320 <div id="synapse_federation_client_sent"></div>
321 <script>
322 new PromConsole.Graph({
323 node: document.querySelector("#synapse_federation_client_sent"),
324 expr: "rate(synapse_federation_client_sent[2m])",
325 name: "[[type]]",
326 yAxisFormatter: PromConsole.NumberFormatter.humanize,
327 yHoverFormatter: PromConsole.NumberFormatter.humanize,
328 yUnits: "req/s",
329 yTitle: "Requests"
330 })
331 </script>
332
333 <h3>Received Messages</h3>
334 <div id="synapse_federation_server_received"></div>
335 <script>
336 new PromConsole.Graph({
337 node: document.querySelector("#synapse_federation_server_received"),
338 expr: "rate(synapse_federation_server_received[2m])",
339 name: "[[type]]",
340 yAxisFormatter: PromConsole.NumberFormatter.humanize,
341 yHoverFormatter: PromConsole.NumberFormatter.humanize,
342 yUnits: "req/s",
343 yTitle: "Requests"
344 })
345 </script>
346
347 <h3>Pending</h3>
348 <div id="synapse_federation_transaction_queue_pending"></div>
349 <script>
350 new PromConsole.Graph({
351 node: document.querySelector("#synapse_federation_transaction_queue_pending"),
352 expr: "synapse_federation_transaction_queue_pending",
353 name: "[[type]]",
354 yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
355 yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
356 yUnits: "",
357 yTitle: "Units"
358 })
359 </script>
360
361 <h1>Clients</h1>
362
363 <h3>Notifiers</h3>
364 <div id="synapse_notifier_listeners"></div>
365 <script>
366 new PromConsole.Graph({
367 node: document.querySelector("#synapse_notifier_listeners"),
368 expr: "synapse_notifier_listeners",
369 name: "listeners",
370 min: 0,
371 yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
372 yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
373 yUnits: "",
374 yTitle: "Listeners"
375 })
376 </script>
377
378 <h3>Notified Events</h3>
379 <div id="synapse_notifier_notified_events"></div>
380 <script>
381 new PromConsole.Graph({
382 node: document.querySelector("#synapse_notifier_notified_events"),
383 expr: "rate(synapse_notifier_notified_events[2m])",
384 name: "events",
385 yAxisFormatter: PromConsole.NumberFormatter.humanize,
386 yHoverFormatter: PromConsole.NumberFormatter.humanize,
387 yUnits: "events/s",
388 yTitle: "Event rate"
389 })
390 </script>
391
392 {{ template "prom_content_tail" . }}
393
394 {{ template "tail" }}
0 synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
1 synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
2
3 synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method)
4 synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet)
5
6 synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet)
7
8 synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
9 synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])
10
11 synapse_federation_client_sent{type="EDU"} = synapse_federation_client_sent_edus + 0
12 synapse_federation_client_sent{type="PDU"} = synapse_federation_client_sent_pdu_destinations:count + 0
13 synapse_federation_client_sent{type="Query"} = sum(synapse_federation_client_sent_queries) by (job)
14
15 synapse_federation_server_received{type="EDU"} = synapse_federation_server_received_edus + 0
16 synapse_federation_server_received{type="PDU"} = synapse_federation_server_received_pdus + 0
17 synapse_federation_server_received{type="Query"} = sum(synapse_federation_server_received_queries) by (job)
18
19 synapse_federation_transaction_queue_pending{type="EDU"} = synapse_federation_transaction_queue_pending_edus + 0
20 synapse_federation_transaction_queue_pending{type="PDU"} = synapse_federation_transaction_queue_pending_pdus + 0
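Once prometheus has loaded these rules, the derived series can be read back
over its standard HTTP API. A sketch, assuming a prometheus with the v1 query
API listening on localhost:9090:

    import requests

    # Query one of the recorded series defined above.
    resp = requests.get(
        "http://localhost:9090/api/v1/query",
        params={"query": "synapse_cache:hit_ratio_5m"},
    )
    for series in resp.json()["data"]["result"]:
        # each result carries its label set and a (timestamp, value) pair
        print(series["metric"].get("name"), series["value"][1])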
88 Type=simple
99 User=synapse
1010 Group=synapse
11 EnvironmentFile=-/etc/sysconfig/synapse
1211 WorkingDirectory=/var/lib/synapse
13 ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml --log-config=/etc/synapse/log_config.yaml
12 ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml
13 ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml
1414
1515 [Install]
1616 WantedBy=multi-user.target
17
00 Using Postgres
11 --------------
2
3 Postgres version 9.4 or later is known to work.
24
35 Set up database
46 ===============
1616 ./sytest/jenkins/prep_sytest_for_postgres.sh
1717
1818 ./sytest/jenkins/install_and_run.sh \
19 --python $WORKSPACE/.tox/py27/bin/python \
1920 --synapse-directory $WORKSPACE \
2021 --dendron $WORKSPACE/dendron/bin/dendron \
2122 --haproxy \
1414 ./sytest/jenkins/prep_sytest_for_postgres.sh
1515
1616 ./sytest/jenkins/install_and_run.sh \
17 --python $WORKSPACE/.tox/py27/bin/python \
1718 --synapse-directory $WORKSPACE \
1819 --dendron $WORKSPACE/dendron/bin/dendron \
1313 ./sytest/jenkins/prep_sytest_for_postgres.sh
1414
1515 ./sytest/jenkins/install_and_run.sh \
16 --python $WORKSPACE/.tox/py27/bin/python \
1617 --synapse-directory $WORKSPACE \
1111 ./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
1212
1313 ./sytest/jenkins/install_and_run.sh \
14 --python $WORKSPACE/.tox/py27/bin/python \
1415 --synapse-directory $WORKSPACE \
251251 )
252252 return
253253
254 if table in (
255 "user_directory", "user_directory_search", "users_who_share_rooms",
256 "users_in_pubic_room",
257 ):
258 # We don't port these tables, as they're a faff and we can regenerate
259 # them anyway.
260 self.progress.update(table, table_size) # Mark table as done
261 return
262
263 if table == "user_directory_stream_pos":
264 # We need to make sure there is a single row, `(X, null)`, as that is
265 # what synapse expects to be there.
266 yield self.postgres_store._simple_insert(
267 table=table,
268 values={"stream_id": None},
269 )
270 self.progress.update(table, table_size) # Mark table as done
271 return
272
254273 forward_select = (
255274 "SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?"
256275 % (table,)
0 #!/usr/bin/env python
1 #
2 # Copyright 2015, 2016 OpenMarket Ltd
3 # Copyright 2017 New Vector Ltd
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from __future__ import print_function
18
19 import argparse
020 import nacl.signing
121 import json
222 import base64
323 import requests
424 import sys
525 import srvlookup
6
26 import yaml
727
828 def encode_base64(input_bytes):
929 """Encode bytes as a base64 string without any padding."""
119139 origin_name, key, sig,
120140 )
121141 authorization_headers.append(bytes(header))
122 sys.stderr.write(header)
123 sys.stderr.write("\n")
142 print("Authorization: %s" % header, file=sys.stderr)
143
144 dest = lookup(destination, path)
145 print("Requesting %s" % dest, file=sys.stderr)
124146
125147 result = requests.get(
126 lookup(destination, path),
148 dest,
127149 headers={"Authorization": authorization_headers[0]},
128150 verify=False,
129151 )
132154
133155
134156 def main():
135 origin_name, keyfile, destination, path = sys.argv[1:]
136
137 with open(keyfile) as f:
157 parser = argparse.ArgumentParser(
158 description=
159 "Signs and sends a federation request to a matrix homeserver",
160 )
161
162 parser.add_argument(
163 "-N", "--server-name",
164 help="Name to give as the local homeserver. If unspecified, will be "
165 "read from the config file.",
166 )
167
168 parser.add_argument(
169 "-k", "--signing-key-path",
170 help="Path to the file containing the private ed25519 key to sign the "
171 "request with.",
172 )
173
174 parser.add_argument(
175 "-c", "--config",
176 default="homeserver.yaml",
177 help="Path to server config file. Ignored if --server-name and "
178 "--signing-key-path are both given.",
179 )
180
181 parser.add_argument(
182 "-d", "--destination",
183 default="matrix.org",
184 help="name of the remote homeserver. We will do SRV lookups and "
185 "connect appropriately.",
186 )
187
188 parser.add_argument(
189 "path",
190 help="request path. We will add '/_matrix/federation/v1/' to this."
191 )
192
193 args = parser.parse_args()
194
195 if not args.server_name or not args.signing_key_path:
196 read_args_from_config(args)
197
198 with open(args.signing_key_path) as f:
138199 key = read_signing_keys(f)[0]
139200
140201 result = get_json(
141 origin_name, key, destination, "/_matrix/federation/v1/" + path
202 args.server_name, key, args.destination, "/_matrix/federation/v1/" + args.path
142203 )
143204
144205 json.dump(result, sys.stdout)
145 print ""
206 print ("")
207
208
209 def read_args_from_config(args):
210 with open(args.config, 'r') as fh:
211 config = yaml.safe_load(fh)
212 if not args.server_name:
213 args.server_name = config['server_name']
214 if not args.signing_key_path:
215 args.signing_key_path = config['signing_key_path']
216
146217
147218 if __name__ == "__main__":
148219 main()
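With the argparse changes above, the request path is the only required
positional argument. For example (a hypothetical invocation, assuming the
script is saved as ``federation_client.py`` next to your ``homeserver.yaml``),
``python federation_client.py -d matrix.org version`` signs and fetches
``https://matrix.org/_matrix/federation/v1/version``, printing the
Authorization header and destination to stderr and the JSON result to stdout.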
1515 """ This is a reference implementation of a Matrix home server.
1616 """
1717
18 __version__ = "0.22.1"
18 __version__ = "0.23.0"
208208 )[0]
209209 if user and access_token and ip_addr:
210210 self.store.insert_client_ip(
211 user=user,
211 user_id=user.to_string(),
212212 access_token=access_token,
213213 ip=ip_addr,
214214 user_agent=user_agent,
518518 )
519519
520520 def is_server_admin(self, user):
521 """ Check if the given user is a local server admin.
522
523 Args:
524 user (str): mxid of user to check
525
526 Returns:
527 bool: True if the user is an admin
528 """
521529 return self.store.is_server_admin(user)
522530
523531 @defer.inlineCallbacks
0 # -*- coding: utf-8 -*-
1 # Copyright 2017 New Vector Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import gc
15 import logging
16
17 import affinity
18 from daemonize import Daemonize
19 from synapse.util import PreserveLoggingContext
20 from synapse.util.rlimit import change_resource_limit
21 from twisted.internet import reactor
22
23
24 def start_worker_reactor(appname, config):
25 """ Run the reactor in the main process
26
27 Daemonizes if necessary, and then configures some resources, before starting
28 the reactor. Pulls configuration from the 'worker' settings in 'config'.
29
30 Args:
31 appname (str): application name which will be sent to syslog
32 config (synapse.config.Config): config object
33 """
34
35 logger = logging.getLogger(config.worker_app)
36
37 start_reactor(
38 appname,
39 config.soft_file_limit,
40 config.gc_thresholds,
41 config.worker_pid_file,
42 config.worker_daemonize,
43 config.worker_cpu_affinity,
44 logger,
45 )
46
47
48 def start_reactor(
49 appname,
50 soft_file_limit,
51 gc_thresholds,
52 pid_file,
53 daemonize,
54 cpu_affinity,
55 logger,
56 ):
57 """ Run the reactor in the main process
58
59 Daemonizes if necessary, and then configures some resources, before starting
60 the reactor
61
62 Args:
63 appname (str): application name which will be sent to syslog
64 soft_file_limit (int): soft limit on open file descriptors, applied via change_resource_limit
65 gc_thresholds (tuple|None): passed to gc.set_threshold if set
66 pid_file (str): name of pid file to write to if daemonize is True
67 daemonize (bool): true to run the reactor in a background process
68 cpu_affinity (int|None): cpu affinity mask
69 logger (logging.Logger): logger instance to pass to Daemonize
70 """
71
72 def run():
73 # make sure that we run the reactor with the sentinel log context,
74 # otherwise other PreserveLoggingContext instances will get confused
75 # and complain when they see the logcontext arbitrarily swapping
76 # between the sentinel and `run` logcontexts.
77 with PreserveLoggingContext():
78 logger.info("Running")
79 if cpu_affinity is not None:
80 logger.info("Setting CPU affinity to %s" % cpu_affinity)
81 affinity.set_process_affinity_mask(0, cpu_affinity)
82 change_resource_limit(soft_file_limit)
83 if gc_thresholds:
84 gc.set_threshold(*gc_thresholds)
85 reactor.run()
86
87 if daemonize:
88 daemon = Daemonize(
89 app=appname,
90 pid=pid_file,
91 action=run,
92 auto_close_fds=False,
93 verbose=True,
94 logger=logger,
95 )
96 daemon.start()
97 else:
98 run()
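The per-worker ``run()``/``Daemonize`` boilerplate removed in the diffs below
collapses onto this helper. A minimal sketch of calling it directly (all
values are illustrative; ``daemonize=False`` just runs the reactor in the
foreground):

    import logging

    from synapse.app import _base

    logger = logging.getLogger("example")

    _base.start_reactor(
        appname="example",
        soft_file_limit=0,    # the config default
        gc_thresholds=None,   # leave the GC thresholds alone
        pid_file=None,        # only used when daemonize is True
        daemonize=False,      # run in the foreground
        cpu_affinity=None,    # don't pin the process to any CPUs
        logger=logger,
    )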
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15 import logging
16 import sys
1517
1618 import synapse
17
18 from synapse.server import HomeServer
19 from synapse import events
20 from synapse.app import _base
1921 from synapse.config._base import ConfigError
22 from synapse.config.homeserver import HomeServerConfig
2023 from synapse.config.logger import setup_logging
21 from synapse.config.homeserver import HomeServerConfig
2224 from synapse.http.site import SynapseSite
23 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
25 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
26 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
2427 from synapse.replication.slave.storage.directory import DirectoryStore
2528 from synapse.replication.slave.storage.events import SlavedEventStore
26 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
2729 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
2830 from synapse.replication.tcp.client import ReplicationClientHandler
31 from synapse.server import HomeServer
2932 from synapse.storage.engines import create_engine
3033 from synapse.util.httpresourcetree import create_resource_tree
31 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn
34 from synapse.util.logcontext import LoggingContext, preserve_fn
3235 from synapse.util.manhole import manhole
33 from synapse.util.rlimit import change_resource_limit
3436 from synapse.util.versionstring import get_version_string
35
36 from synapse import events
37
3837 from twisted.internet import reactor
3938 from twisted.web.resource import Resource
40
41 from daemonize import Daemonize
42
43 import sys
44 import logging
45 import gc
4639
4740 logger = logging.getLogger("synapse.app.appservice")
4841
180173 ps.setup()
181174 ps.start_listening(config.worker_listeners)
182175
183 def run():
184 # make sure that we run the reactor with the sentinel log context,
185 # otherwise other PreserveLoggingContext instances will get confused
186 # and complain when they see the logcontext arbitrarily swapping
187 # between the sentinel and `run` logcontexts.
188 with PreserveLoggingContext():
189 logger.info("Running")
190 change_resource_limit(config.soft_file_limit)
191 if config.gc_thresholds:
192 gc.set_threshold(*config.gc_thresholds)
193 reactor.run()
194
195176 def start():
196177 ps.get_datastore().start_profiling()
197178 ps.get_state_handler().start_caching()
198179
199180 reactor.callWhenRunning(start)
200181
201 if config.worker_daemonize:
202 daemon = Daemonize(
203 app="synapse-appservice",
204 pid=config.worker_pid_file,
205 action=run,
206 auto_close_fds=False,
207 verbose=True,
208 logger=logger,
209 )
210 daemon.start()
211 else:
212 run()
182 _base.start_worker_reactor("synapse-appservice", config)
213183
214184
215185 if __name__ == '__main__':
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15 import logging
16 import sys
1517
1618 import synapse
17
19 from synapse import events
20 from synapse.app import _base
1821 from synapse.config._base import ConfigError
1922 from synapse.config.homeserver import HomeServerConfig
2023 from synapse.config.logger import setup_logging
24 from synapse.crypto import context_factory
25 from synapse.http.server import JsonResource
2126 from synapse.http.site import SynapseSite
22 from synapse.http.server import JsonResource
23 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
27 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
2428 from synapse.replication.slave.storage._base import BaseSlavedStore
2529 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
2630 from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
31 from synapse.replication.slave.storage.directory import DirectoryStore
2732 from synapse.replication.slave.storage.events import SlavedEventStore
2833 from synapse.replication.slave.storage.keys import SlavedKeyStore
34 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
2935 from synapse.replication.slave.storage.room import RoomStore
30 from synapse.replication.slave.storage.directory import DirectoryStore
31 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
3236 from synapse.replication.slave.storage.transactions import TransactionStore
3337 from synapse.replication.tcp.client import ReplicationClientHandler
3438 from synapse.rest.client.v1.room import PublicRoomListRestServlet
3539 from synapse.server import HomeServer
3640 from synapse.storage.engines import create_engine
3741 from synapse.util.httpresourcetree import create_resource_tree
38 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
42 from synapse.util.logcontext import LoggingContext
3943 from synapse.util.manhole import manhole
40 from synapse.util.rlimit import change_resource_limit
4144 from synapse.util.versionstring import get_version_string
42 from synapse.crypto import context_factory
43
44 from synapse import events
45
46
4745 from twisted.internet import reactor
4846 from twisted.web.resource import Resource
49
50 from daemonize import Daemonize
51
52 import sys
53 import logging
54 import gc
5547
5648 logger = logging.getLogger("synapse.app.client_reader")
5749
182174 ss.get_handlers()
183175 ss.start_listening(config.worker_listeners)
184176
185 def run():
186 # make sure that we run the reactor with the sentinel log context,
187 # otherwise other PreserveLoggingContext instances will get confused
188 # and complain when they see the logcontext arbitrarily swapping
189 # between the sentinel and `run` logcontexts.
190 with PreserveLoggingContext():
191 logger.info("Running")
192 change_resource_limit(config.soft_file_limit)
193 if config.gc_thresholds:
194 gc.set_threshold(*config.gc_thresholds)
195 reactor.run()
196
197177 def start():
198178 ss.get_state_handler().start_caching()
199179 ss.get_datastore().start_profiling()
200180
201181 reactor.callWhenRunning(start)
202182
203 if config.worker_daemonize:
204 daemon = Daemonize(
205 app="synapse-client-reader",
206 pid=config.worker_pid_file,
207 action=run,
208 auto_close_fds=False,
209 verbose=True,
210 logger=logger,
211 )
212 daemon.start()
213 else:
214 run()
183 _base.start_worker_reactor("synapse-client-reader", config)
215184
216185
217186 if __name__ == '__main__':
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15 import logging
16 import sys
1517
1618 import synapse
17
19 from synapse import events
20 from synapse.api.urls import FEDERATION_PREFIX
21 from synapse.app import _base
1822 from synapse.config._base import ConfigError
1923 from synapse.config.homeserver import HomeServerConfig
2024 from synapse.config.logger import setup_logging
25 from synapse.crypto import context_factory
26 from synapse.federation.transport.server import TransportLayerServer
2127 from synapse.http.site import SynapseSite
22 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
28 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
2329 from synapse.replication.slave.storage._base import BaseSlavedStore
30 from synapse.replication.slave.storage.directory import DirectoryStore
2431 from synapse.replication.slave.storage.events import SlavedEventStore
2532 from synapse.replication.slave.storage.keys import SlavedKeyStore
2633 from synapse.replication.slave.storage.room import RoomStore
2734 from synapse.replication.slave.storage.transactions import TransactionStore
28 from synapse.replication.slave.storage.directory import DirectoryStore
2935 from synapse.replication.tcp.client import ReplicationClientHandler
3036 from synapse.server import HomeServer
3137 from synapse.storage.engines import create_engine
3238 from synapse.util.httpresourcetree import create_resource_tree
33 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
39 from synapse.util.logcontext import LoggingContext
3440 from synapse.util.manhole import manhole
35 from synapse.util.rlimit import change_resource_limit
3641 from synapse.util.versionstring import get_version_string
37 from synapse.api.urls import FEDERATION_PREFIX
38 from synapse.federation.transport.server import TransportLayerServer
39 from synapse.crypto import context_factory
40
41 from synapse import events
42
43
4442 from twisted.internet import reactor
4543 from twisted.web.resource import Resource
46
47 from daemonize import Daemonize
48
49 import sys
50 import logging
51 import gc
5244
5345 logger = logging.getLogger("synapse.app.federation_reader")
5446
171163 ss.get_handlers()
172164 ss.start_listening(config.worker_listeners)
173165
174 def run():
175 # make sure that we run the reactor with the sentinel log context,
176 # otherwise other PreserveLoggingContext instances will get confused
177 # and complain when they see the logcontext arbitrarily swapping
178 # between the sentinel and `run` logcontexts.
179 with PreserveLoggingContext():
180 logger.info("Running")
181 change_resource_limit(config.soft_file_limit)
182 if config.gc_thresholds:
183 gc.set_threshold(*config.gc_thresholds)
184 reactor.run()
185
186166 def start():
187167 ss.get_state_handler().start_caching()
188168 ss.get_datastore().start_profiling()
189169
190170 reactor.callWhenRunning(start)
191171
192 if config.worker_daemonize:
193 daemon = Daemonize(
194 app="synapse-federation-reader",
195 pid=config.worker_pid_file,
196 action=run,
197 auto_close_fds=False,
198 verbose=True,
199 logger=logger,
200 )
201 daemon.start()
202 else:
203 run()
172 _base.start_worker_reactor("synapse-federation-reader", config)
204173
205174
206175 if __name__ == '__main__':
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15 import logging
16 import sys
1517
1618 import synapse
17
18 from synapse.server import HomeServer
19 from synapse import events
20 from synapse.app import _base
1921 from synapse.config._base import ConfigError
22 from synapse.config.homeserver import HomeServerConfig
2023 from synapse.config.logger import setup_logging
21 from synapse.config.homeserver import HomeServerConfig
2224 from synapse.crypto import context_factory
25 from synapse.federation import send_queue
2326 from synapse.http.site import SynapseSite
24 from synapse.federation import send_queue
25 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
27 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
2628 from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
29 from synapse.replication.slave.storage.devices import SlavedDeviceStore
2730 from synapse.replication.slave.storage.events import SlavedEventStore
31 from synapse.replication.slave.storage.presence import SlavedPresenceStore
2832 from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
2933 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
30 from synapse.replication.slave.storage.presence import SlavedPresenceStore
3134 from synapse.replication.slave.storage.transactions import TransactionStore
32 from synapse.replication.slave.storage.devices import SlavedDeviceStore
3335 from synapse.replication.tcp.client import ReplicationClientHandler
36 from synapse.server import HomeServer
3437 from synapse.storage.engines import create_engine
3538 from synapse.util.async import Linearizer
3639 from synapse.util.httpresourcetree import create_resource_tree
37 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn
40 from synapse.util.logcontext import LoggingContext, preserve_fn
3841 from synapse.util.manhole import manhole
39 from synapse.util.rlimit import change_resource_limit
4042 from synapse.util.versionstring import get_version_string
41
42 from synapse import events
43
44 from twisted.internet import reactor, defer
43 from twisted.internet import defer, reactor
4544 from twisted.web.resource import Resource
46
47 from daemonize import Daemonize
48
49 import sys
50 import logging
51 import gc
5245
5346 logger = logging.getLogger("synapse.app.federation_sender")
5447
212205 ps.setup()
213206 ps.start_listening(config.worker_listeners)
214207
215 def run():
216 # make sure that we run the reactor with the sentinel log context,
217 # otherwise other PreserveLoggingContext instances will get confused
218 # and complain when they see the logcontext arbitrarily swapping
219 # between the sentinel and `run` logcontexts.
220 with PreserveLoggingContext():
221 logger.info("Running")
222 change_resource_limit(config.soft_file_limit)
223 if config.gc_thresholds:
224 gc.set_threshold(*config.gc_thresholds)
225 reactor.run()
226
227208 def start():
228209 ps.get_datastore().start_profiling()
229210 ps.get_state_handler().start_caching()
230211
231212 reactor.callWhenRunning(start)
232
233 if config.worker_daemonize:
234 daemon = Daemonize(
235 app="synapse-federation-sender",
236 pid=config.worker_pid_file,
237 action=run,
238 auto_close_fds=False,
239 verbose=True,
240 logger=logger,
241 )
242 daemon.start()
243 else:
244 run()
213 _base.start_worker_reactor("synapse-federation-sender", config)
245214
246215
247216 class FederationSenderHandler(object):
0 #!/usr/bin/env python
1 # -*- coding: utf-8 -*-
2 # Copyright 2016 OpenMarket Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 import logging
16 import sys
17
18 import synapse
19 from synapse import events
20 from synapse.api.errors import SynapseError
21 from synapse.app import _base
22 from synapse.config._base import ConfigError
23 from synapse.config.homeserver import HomeServerConfig
24 from synapse.config.logger import setup_logging
25 from synapse.crypto import context_factory
26 from synapse.http.server import JsonResource
27 from synapse.http.servlet import (
28 RestServlet, parse_json_object_from_request,
29 )
30 from synapse.http.site import SynapseSite
31 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
32 from synapse.replication.slave.storage._base import BaseSlavedStore
33 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
34 from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
35 from synapse.replication.slave.storage.devices import SlavedDeviceStore
36 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
37 from synapse.replication.tcp.client import ReplicationClientHandler
38 from synapse.rest.client.v2_alpha._base import client_v2_patterns
39 from synapse.server import HomeServer
40 from synapse.storage.engines import create_engine
41 from synapse.util.httpresourcetree import create_resource_tree
42 from synapse.util.logcontext import LoggingContext
43 from synapse.util.manhole import manhole
44 from synapse.util.versionstring import get_version_string
45 from twisted.internet import defer, reactor
46 from twisted.web.resource import Resource
47
48 logger = logging.getLogger("synapse.app.frontend_proxy")
49
50
51 class KeyUploadServlet(RestServlet):
52 PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$",
53 releases=())
54
55 def __init__(self, hs):
56 """
57 Args:
58 hs (synapse.server.HomeServer): server
59 """
60 super(KeyUploadServlet, self).__init__()
61 self.auth = hs.get_auth()
62 self.store = hs.get_datastore()
63 self.http_client = hs.get_simple_http_client()
64 self.main_uri = hs.config.worker_main_http_uri
65
66 @defer.inlineCallbacks
67 def on_POST(self, request, device_id):
68 requester = yield self.auth.get_user_by_req(request, allow_guest=True)
69 user_id = requester.user.to_string()
70 body = parse_json_object_from_request(request)
71
72 if device_id is not None:
73 # passing the device_id here is deprecated; however, we allow it
74 # for now for compatibility with older clients.
75 if (requester.device_id is not None and
76 device_id != requester.device_id):
77 logger.warning("Client uploading keys for a different device "
78 "(logged in as %s, uploading for %s)",
79 requester.device_id, device_id)
80 else:
81 device_id = requester.device_id
82
83 if device_id is None:
84 raise SynapseError(
85 400,
86 "To upload keys, you must pass device_id when authenticating"
87 )
88
89 if body:
90 # They're actually trying to upload something, proxy to main synapse.
91 result = yield self.http_client.post_json_get_json(
92 self.main_uri + request.uri,
93 body,
94 )
95
96 defer.returnValue((200, result))
97 else:
98 # Just interested in counts.
99 result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
100 defer.returnValue((200, {"one_time_key_counts": result}))
101
102
103 class FrontendProxySlavedStore(
104 SlavedDeviceStore,
105 SlavedClientIpStore,
106 SlavedApplicationServiceStore,
107 SlavedRegistrationStore,
108 BaseSlavedStore,
109 ):
110 pass
111
112
113 class FrontendProxyServer(HomeServer):
114 def get_db_conn(self, run_new_connection=True):
115 # Any param beginning with cp_ is a parameter for adbapi, and should
116 # not be passed to the database engine.
117 db_params = {
118 k: v for k, v in self.db_config.get("args", {}).items()
119 if not k.startswith("cp_")
120 }
121 db_conn = self.database_engine.module.connect(**db_params)
122
123 if run_new_connection:
124 self.database_engine.on_new_connection(db_conn)
125 return db_conn
126
127 def setup(self):
128 logger.info("Setting up.")
129 self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self)
130 logger.info("Finished setting up.")
131
132 def _listen_http(self, listener_config):
133 port = listener_config["port"]
134 bind_addresses = listener_config["bind_addresses"]
135 site_tag = listener_config.get("tag", port)
136 resources = {}
137 for res in listener_config["resources"]:
138 for name in res["names"]:
139 if name == "metrics":
140 resources[METRICS_PREFIX] = MetricsResource(self)
141 elif name == "client":
142 resource = JsonResource(self, canonical_json=False)
143 KeyUploadServlet(self).register(resource)
144 resources.update({
145 "/_matrix/client/r0": resource,
146 "/_matrix/client/unstable": resource,
147 "/_matrix/client/v2_alpha": resource,
148 "/_matrix/client/api/v1": resource,
149 })
150
151 root_resource = create_resource_tree(resources, Resource())
152
153 for address in bind_addresses:
154 reactor.listenTCP(
155 port,
156 SynapseSite(
157 "synapse.access.http.%s" % (site_tag,),
158 site_tag,
159 listener_config,
160 root_resource,
161 ),
162 interface=address
163 )
164
165 logger.info("Synapse frontend proxy now listening on port %d", port)
166
167 def start_listening(self, listeners):
168 for listener in listeners:
169 if listener["type"] == "http":
170 self._listen_http(listener)
171 elif listener["type"] == "manhole":
172 bind_addresses = listener["bind_addresses"]
173
174 for address in bind_addresses:
175 reactor.listenTCP(
176 listener["port"],
177 manhole(
178 username="matrix",
179 password="rabbithole",
180 globals={"hs": self},
181 ),
182 interface=address
183 )
184 else:
185 logger.warn("Unrecognized listener type: %s", listener["type"])
186
187 self.get_tcp_replication().start_replication(self)
188
189 def build_tcp_replication(self):
190 return ReplicationClientHandler(self.get_datastore())
191
192
193 def start(config_options):
194 try:
195 config = HomeServerConfig.load_config(
196 "Synapse frontend proxy", config_options
197 )
198 except ConfigError as e:
199 sys.stderr.write("\n" + e.message + "\n")
200 sys.exit(1)
201
202 assert config.worker_app == "synapse.app.frontend_proxy"
203
204 assert config.worker_main_http_uri is not None
205
206 setup_logging(config, use_worker_options=True)
207
208 events.USE_FROZEN_DICTS = config.use_frozen_dicts
209
210 database_engine = create_engine(config.database_config)
211
212 tls_server_context_factory = context_factory.ServerContextFactory(config)
213
214 ss = FrontendProxyServer(
215 config.server_name,
216 db_config=config.database_config,
217 tls_server_context_factory=tls_server_context_factory,
218 config=config,
219 version_string="Synapse/" + get_version_string(synapse),
220 database_engine=database_engine,
221 )
222
223 ss.setup()
224 ss.get_handlers()
225 ss.start_listening(config.worker_listeners)
226
227 def start():
228 ss.get_state_handler().start_caching()
229 ss.get_datastore().start_profiling()
230
231 reactor.callWhenRunning(start)
232
233 _base.start_worker_reactor("synapse-frontend-proxy", config)
234
235
236 if __name__ == '__main__':
237 with LoggingContext("main"):
238 start(sys.argv[1:])
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15
16 import synapse
17
1815 import gc
1916 import logging
2017 import os
2118 import sys
2219
20 import synapse
2321 import synapse.config.logger
22 from synapse import events
23 from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \
24 LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \
25 STATIC_PREFIX, WEB_CLIENT_PREFIX
26 from synapse.app import _base
2427 from synapse.config._base import ConfigError
25
26 from synapse.python_dependencies import (
27 check_requirements, CONDITIONAL_REQUIREMENTS
28 )
29
28 from synapse.config.homeserver import HomeServerConfig
29 from synapse.crypto import context_factory
30 from synapse.federation.transport.server import TransportLayerServer
31 from synapse.http.server import RootRedirect
32 from synapse.http.site import SynapseSite
33 from synapse.metrics import register_memory_metrics
34 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
35 from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
36 check_requirements
37 from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
3038 from synapse.rest import ClientRestResource
31 from synapse.storage.engines import create_engine, IncorrectDatabaseSetup
32 from synapse.storage import are_all_users_on_domain
33 from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
34
35 from synapse.server import HomeServer
36
37 from twisted.internet import reactor, defer
38 from twisted.application import service
39 from twisted.web.resource import Resource, EncodingResourceWrapper
40 from twisted.web.static import File
41 from twisted.web.server import GzipEncoderFactory
42 from synapse.http.server import RootRedirect
39 from synapse.rest.key.v1.server_key_resource import LocalKey
40 from synapse.rest.key.v2 import KeyApiV2Resource
4341 from synapse.rest.media.v0.content_repository import ContentRepoResource
4442 from synapse.rest.media.v1.media_repository import MediaRepositoryResource
45 from synapse.rest.key.v1.server_key_resource import LocalKey
46 from synapse.rest.key.v2 import KeyApiV2Resource
47 from synapse.api.urls import (
48 FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX,
49 SERVER_KEY_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, STATIC_PREFIX,
50 SERVER_KEY_V2_PREFIX,
51 )
52 from synapse.config.homeserver import HomeServerConfig
53 from synapse.crypto import context_factory
54 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
55 from synapse.metrics import register_memory_metrics
56 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
57 from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
58 from synapse.federation.transport.server import TransportLayerServer
59
43 from synapse.server import HomeServer
44 from synapse.storage import are_all_users_on_domain
45 from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
46 from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
47 from synapse.util.httpresourcetree import create_resource_tree
48 from synapse.util.logcontext import LoggingContext
49 from synapse.util.manhole import manhole
6050 from synapse.util.rlimit import change_resource_limit
6151 from synapse.util.versionstring import get_version_string
62 from synapse.util.httpresourcetree import create_resource_tree
63 from synapse.util.manhole import manhole
64
65 from synapse.http.site import SynapseSite
66
67 from synapse import events
68
69 from daemonize import Daemonize
52 from twisted.application import service
53 from twisted.internet import defer, reactor
54 from twisted.web.resource import EncodingResourceWrapper, Resource
55 from twisted.web.server import GzipEncoderFactory
56 from twisted.web.static import File
7057
7158 logger = logging.getLogger("synapse.app.homeserver")
7259
445432 # be quite busy the first few minutes
446433 clock.call_later(5 * 60, phone_stats_home)
447434
448 def in_thread():
449 # Uncomment to enable tracing of log context changes.
450 # sys.settrace(logcontext_tracer)
451
452 # make sure that we run the reactor with the sentinel log context,
453 # otherwise other PreserveLoggingContext instances will get confused
454 # and complain when they see the logcontext arbitrarily swapping
455 # between the sentinel and `run` logcontexts.
456 with PreserveLoggingContext():
457 change_resource_limit(hs.config.soft_file_limit)
458 if hs.config.gc_thresholds:
459 gc.set_threshold(*hs.config.gc_thresholds)
460 reactor.run()
461
462 if hs.config.daemonize:
463
464 if hs.config.print_pidfile:
465 print (hs.config.pid_file)
466
467 daemon = Daemonize(
468 app="synapse-homeserver",
469 pid=hs.config.pid_file,
470 action=lambda: in_thread(),
471 auto_close_fds=False,
472 verbose=True,
473 logger=logger,
474 )
475
476 daemon.start()
477 else:
478 in_thread()
435 if hs.config.daemonize and hs.config.print_pidfile:
436 print (hs.config.pid_file)
437
438 _base.start_reactor(
439 "synapse-homeserver",
440 hs.config.soft_file_limit,
441 hs.config.gc_thresholds,
442 hs.config.pid_file,
443 hs.config.daemonize,
444 hs.config.cpu_affinity,
445 logger,
446 )
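
The _base.start_reactor call above replaces the per-app daemonization boilerplate deleted in this hunk. A sketch of what that boilerplate did, reconstructed from the removed code (the real helper also takes cpu_affinity and may differ in detail):

    import gc

    from daemonize import Daemonize
    from twisted.internet import reactor

    from synapse.util.logcontext import PreserveLoggingContext
    from synapse.util.rlimit import change_resource_limit

    def start_reactor_sketch(appname, soft_file_limit, gc_thresholds,
                             pid_file, daemonize, logger):
        def run():
            # Run the reactor in the sentinel logcontext, so that other
            # PreserveLoggingContext instances don't get confused by the
            # logcontext arbitrarily swapping between sentinel and `run`.
            with PreserveLoggingContext():
                change_resource_limit(soft_file_limit)
                if gc_thresholds:
                    gc.set_threshold(*gc_thresholds)
                reactor.run()

        if daemonize:
            Daemonize(app=appname, pid=pid_file, action=run,
                      auto_close_fds=False, verbose=True,
                      logger=logger).start()
        else:
            run()
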
479447
480448
481449 def main():
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15 import logging
16 import sys
1517
1618 import synapse
17
19 from synapse import events
20 from synapse.api.urls import (
21 CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
22 )
23 from synapse.app import _base
1824 from synapse.config._base import ConfigError
1925 from synapse.config.homeserver import HomeServerConfig
2026 from synapse.config.logger import setup_logging
27 from synapse.crypto import context_factory
2128 from synapse.http.site import SynapseSite
22 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
29 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
2330 from synapse.replication.slave.storage._base import BaseSlavedStore
2431 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
2532 from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
3239 from synapse.storage.engines import create_engine
3340 from synapse.storage.media_repository import MediaRepositoryStore
3441 from synapse.util.httpresourcetree import create_resource_tree
35 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
42 from synapse.util.logcontext import LoggingContext
3643 from synapse.util.manhole import manhole
37 from synapse.util.rlimit import change_resource_limit
3844 from synapse.util.versionstring import get_version_string
39 from synapse.api.urls import (
40 CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
41 )
42 from synapse.crypto import context_factory
43
44 from synapse import events
45
46
4745 from twisted.internet import reactor
4846 from twisted.web.resource import Resource
49
50 from daemonize import Daemonize
51
52 import sys
53 import logging
54 import gc
5547
5648 logger = logging.getLogger("synapse.app.media_repository")
5749
179171 ss.get_handlers()
180172 ss.start_listening(config.worker_listeners)
181173
182 def run():
183 # make sure that we run the reactor with the sentinel log context,
184 # otherwise other PreserveLoggingContext instances will get confused
185 # and complain when they see the logcontext arbitrarily swapping
186 # between the sentinel and `run` logcontexts.
187 with PreserveLoggingContext():
188 logger.info("Running")
189 change_resource_limit(config.soft_file_limit)
190 if config.gc_thresholds:
191 gc.set_threshold(*config.gc_thresholds)
192 reactor.run()
193
194174 def start():
195175 ss.get_state_handler().start_caching()
196176 ss.get_datastore().start_profiling()
197177
198178 reactor.callWhenRunning(start)
199179
200 if config.worker_daemonize:
201 daemon = Daemonize(
202 app="synapse-media-repository",
203 pid=config.worker_pid_file,
204 action=run,
205 auto_close_fds=False,
206 verbose=True,
207 logger=logger,
208 )
209 daemon.start()
210 else:
211 run()
180 _base.start_worker_reactor("synapse-media-repository", config)
212181
213182
214183 if __name__ == '__main__':
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15 import logging
16 import sys
1517
1618 import synapse
17
18 from synapse.server import HomeServer
19 from synapse import events
20 from synapse.app import _base
1921 from synapse.config._base import ConfigError
22 from synapse.config.homeserver import HomeServerConfig
2023 from synapse.config.logger import setup_logging
21 from synapse.config.homeserver import HomeServerConfig
2224 from synapse.http.site import SynapseSite
23 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
24 from synapse.storage.roommember import RoomMemberStore
25 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
26 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
2527 from synapse.replication.slave.storage.events import SlavedEventStore
2628 from synapse.replication.slave.storage.pushers import SlavedPusherStore
2729 from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
28 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
2930 from synapse.replication.tcp.client import ReplicationClientHandler
31 from synapse.server import HomeServer
32 from synapse.storage import DataStore
3033 from synapse.storage.engines import create_engine
31 from synapse.storage import DataStore
34 from synapse.storage.roommember import RoomMemberStore
3235 from synapse.util.httpresourcetree import create_resource_tree
33 from synapse.util.logcontext import LoggingContext, preserve_fn, \
34 PreserveLoggingContext
36 from synapse.util.logcontext import LoggingContext, preserve_fn
3537 from synapse.util.manhole import manhole
36 from synapse.util.rlimit import change_resource_limit
3738 from synapse.util.versionstring import get_version_string
38
39 from synapse import events
40
41 from twisted.internet import reactor, defer
39 from twisted.internet import defer, reactor
4240 from twisted.web.resource import Resource
43
44 from daemonize import Daemonize
45
46 import sys
47 import logging
48 import gc
4941
5042 logger = logging.getLogger("synapse.app.pusher")
5143
243235 ps.setup()
244236 ps.start_listening(config.worker_listeners)
245237
246 def run():
247 # make sure that we run the reactor with the sentinel log context,
248 # otherwise other PreserveLoggingContext instances will get confused
249 # and complain when they see the logcontext arbitrarily swapping
250 # between the sentinel and `run` logcontexts.
251 with PreserveLoggingContext():
252 logger.info("Running")
253 change_resource_limit(config.soft_file_limit)
254 if config.gc_thresholds:
255 gc.set_threshold(*config.gc_thresholds)
256 reactor.run()
257
258238 def start():
259239 ps.get_pusherpool().start()
260240 ps.get_datastore().start_profiling()
262242
263243 reactor.callWhenRunning(start)
264244
265 if config.worker_daemonize:
266 daemon = Daemonize(
267 app="synapse-pusher",
268 pid=config.worker_pid_file,
269 action=run,
270 auto_close_fds=False,
271 verbose=True,
272 logger=logger,
273 )
274 daemon.start()
275 else:
276 run()
245 _base.start_worker_reactor("synapse-pusher", config)
277246
278247
279248 if __name__ == '__main__':
1212 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
15 import contextlib
16 import logging
17 import sys
1518
1619 import synapse
17
1820 from synapse.api.constants import EventTypes
21 from synapse.app import _base
1922 from synapse.config._base import ConfigError
2023 from synapse.config.homeserver import HomeServerConfig
2124 from synapse.config.logger import setup_logging
2225 from synapse.handlers.presence import PresenceHandler, get_interested_parties
26 from synapse.http.server import JsonResource
2327 from synapse.http.site import SynapseSite
24 from synapse.http.server import JsonResource
25 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
26 from synapse.rest.client.v2_alpha import sync
27 from synapse.rest.client.v1 import events
28 from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
29 from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
28 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
3029 from synapse.replication.slave.storage._base import BaseSlavedStore
31 from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
32 from synapse.replication.slave.storage.events import SlavedEventStore
33 from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
3430 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
3531 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
36 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
37 from synapse.replication.slave.storage.filtering import SlavedFilteringStore
38 from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
39 from synapse.replication.slave.storage.presence import SlavedPresenceStore
32 from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
4033 from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
4134 from synapse.replication.slave.storage.devices import SlavedDeviceStore
35 from synapse.replication.slave.storage.events import SlavedEventStore
36 from synapse.replication.slave.storage.filtering import SlavedFilteringStore
37 from synapse.replication.slave.storage.presence import SlavedPresenceStore
38 from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
39 from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
40 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
4241 from synapse.replication.slave.storage.room import RoomStore
4342 from synapse.replication.tcp.client import ReplicationClientHandler
43 from synapse.rest.client.v1 import events
44 from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
45 from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
46 from synapse.rest.client.v2_alpha import sync
4447 from synapse.server import HomeServer
4548 from synapse.storage.engines import create_engine
4649 from synapse.storage.presence import UserPresenceState
4750 from synapse.storage.roommember import RoomMemberStore
4851 from synapse.util.httpresourcetree import create_resource_tree
49 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn
52 from synapse.util.logcontext import LoggingContext, preserve_fn
5053 from synapse.util.manhole import manhole
51 from synapse.util.rlimit import change_resource_limit
5254 from synapse.util.stringutils import random_string
5355 from synapse.util.versionstring import get_version_string
54
55 from twisted.internet import reactor, defer
56 from twisted.internet import defer, reactor
5657 from twisted.web.resource import Resource
57
58 from daemonize import Daemonize
59
60 import sys
61 import logging
62 import contextlib
63 import gc
6458
6559 logger = logging.getLogger("synapse.app.synchrotron")
6660
439433 ss.setup()
440434 ss.start_listening(config.worker_listeners)
441435
442 def run():
443 # make sure that we run the reactor with the sentinel log context,
444 # otherwise other PreserveLoggingContext instances will get confused
445 # and complain when they see the logcontext arbitrarily swapping
446 # between the sentinel and `run` logcontexts.
447 with PreserveLoggingContext():
448 logger.info("Running")
449 change_resource_limit(config.soft_file_limit)
450 if config.gc_thresholds:
451 gc.set_threshold(*config.gc_thresholds)
452 reactor.run()
453
454436 def start():
455437 ss.get_datastore().start_profiling()
456438 ss.get_state_handler().start_caching()
457439
458440 reactor.callWhenRunning(start)
459441
460 if config.worker_daemonize:
461 daemon = Daemonize(
462 app="synapse-synchrotron",
463 pid=config.worker_pid_file,
464 action=run,
465 auto_close_fds=False,
466 verbose=True,
467 logger=logger,
468 )
469 daemon.start()
470 else:
471 run()
442 _base.start_worker_reactor("synapse-synchrotron", config)
472443
473444
474445 if __name__ == '__main__':
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515
16 import logging
17 import sys
18
1619 import synapse
17
18 from synapse.server import HomeServer
20 from synapse import events
21 from synapse.app import _base
1922 from synapse.config._base import ConfigError
23 from synapse.config.homeserver import HomeServerConfig
2024 from synapse.config.logger import setup_logging
21 from synapse.config.homeserver import HomeServerConfig
2225 from synapse.crypto import context_factory
26 from synapse.http.server import JsonResource
2327 from synapse.http.site import SynapseSite
24 from synapse.http.server import JsonResource
25 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
28 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
2629 from synapse.replication.slave.storage._base import BaseSlavedStore
2730 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
2831 from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
3033 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
3134 from synapse.replication.tcp.client import ReplicationClientHandler
3235 from synapse.rest.client.v2_alpha import user_directory
36 from synapse.server import HomeServer
3337 from synapse.storage.engines import create_engine
3438 from synapse.storage.user_directory import UserDirectoryStore
39 from synapse.util.caches.stream_change_cache import StreamChangeCache
3540 from synapse.util.httpresourcetree import create_resource_tree
36 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn
41 from synapse.util.logcontext import LoggingContext, preserve_fn
3742 from synapse.util.manhole import manhole
38 from synapse.util.rlimit import change_resource_limit
3943 from synapse.util.versionstring import get_version_string
40 from synapse.util.caches.stream_change_cache import StreamChangeCache
41
42 from synapse import events
43
4444 from twisted.internet import reactor
4545 from twisted.web.resource import Resource
46
47 from daemonize import Daemonize
48
49 import sys
50 import logging
51 import gc
5246
5347 logger = logging.getLogger("synapse.app.user_dir")
5448
232226 ps.setup()
233227 ps.start_listening(config.worker_listeners)
234228
235 def run():
236 # make sure that we run the reactor with the sentinel log context,
237 # otherwise other PreserveLoggingContext instances will get confused
238 # and complain when they see the logcontext arbitrarily swapping
239 # between the sentinel and `run` logcontexts.
240 with PreserveLoggingContext():
241 logger.info("Running")
242 change_resource_limit(config.soft_file_limit)
243 if config.gc_thresholds:
244 gc.set_threshold(*config.gc_thresholds)
245 reactor.run()
246
247229 def start():
248230 ps.get_datastore().start_profiling()
249231 ps.get_state_handler().start_caching()
250232
251233 reactor.callWhenRunning(start)
252234
253 if config.worker_daemonize:
254 daemon = Daemonize(
255 app="synapse-user-dir",
256 pid=config.worker_pid_file,
257 action=run,
258 auto_close_fds=False,
259 verbose=True,
260 logger=logger,
261 )
262 daemon.start()
263 else:
264 run()
235 _base.start_worker_reactor("synapse-user-dir", config)
265236
266237
267238 if __name__ == '__main__':
00 # -*- coding: utf-8 -*-
11 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2017 New Vector Ltd
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
2829 self.user_agent_suffix = config.get("user_agent_suffix")
2930 self.use_frozen_dicts = config.get("use_frozen_dicts", False)
3031 self.public_baseurl = config.get("public_baseurl")
32 self.cpu_affinity = config.get("cpu_affinity")
3133
3234 # Whether to send federation traffic out in this process. This only
3335 # applies to some federation traffic, and so shouldn't be used to
3941 self.update_user_directory = config.get("update_user_directory", True)
4042
4143 self.filter_timeline_limit = config.get("filter_timeline_limit", -1)
44
45 # Whether we should block invites sent to users on this server
46 # (other than those sent by local server admins)
47 self.block_non_admin_invites = config.get(
48 "block_non_admin_invites", False,
49 )
4250
4351 if self.public_baseurl is not None:
4452 if self.public_baseurl[-1] != '/':
146154 # When running as a daemon, the file to store the pid in
147155 pid_file: %(pid_file)s
148156
157 # CPU affinity mask. Setting this restricts the CPUs on which the
158 # process will be scheduled. It is represented as a bitmask, with the
159 # lowest order bit corresponding to the first logical CPU and the
160 # highest order bit corresponding to the last logical CPU. Not all CPUs
161 # may exist on a given system but a mask may specify more CPUs than are
162 # present.
163 #
164 # For example:
165 # 0x00000001 is processor #0,
166 # 0x00000003 is processors #0 and #1,
167 # 0xFFFFFFFF is all processors (#0 through #31).
168 #
169 # Pinning a Python process to a single CPU is desirable, because Python
170 # is inherently single-threaded due to the GIL, and can suffer a
171 # 30-40%% slowdown due to cache blow-out and thread context switching
172 # if the scheduler happens to schedule the underlying threads across
173 # different cores. See
174 # https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
175 #
176 # cpu_affinity: 0xFFFFFFFF
177
149178 # Whether to serve a web client from the HTTP/HTTPS root resource.
150179 web_client: True
151180
169198 # Set the limit on the returned events in the timeline in the get
170199 # and sync operations. The default value is -1, which means no upper limit.
171200 # filter_timeline_limit: 5000
201
202 # Whether room invites to users on this server should be blocked
203 # (except those sent by local server admins). The default is False.
204 # block_non_admin_invites: True
172205
173206 # List of ports that Synapse should listen on, their purpose and their
174207 # configuration.
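
For a concrete reading of the cpu_affinity mask documented above: a hypothetical sketch (not part of the diff, and not necessarily how synapse applies the option internally) of expanding such a bitmask and pinning the current process, using the Linux-only os.sched_setaffinity API:

    import os

    def apply_cpu_affinity(mask):
        # Expand the bitmask into the set of logical CPU indices it
        # selects, e.g. 0x00000003 -> {0, 1}.
        cpus = {i for i in range(mask.bit_length()) if mask & (1 << i)}
        # Pin the current process (pid 0 means "this process"). Linux-only.
        os.sched_setaffinity(0, cpus)

    apply_cpu_affinity(0x00000001)  # pin to processor #0, as in the example
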
3131 self.worker_replication_port = config.get("worker_replication_port", None)
3232 self.worker_name = config.get("worker_name", self.worker_app)
3333
34 self.worker_main_http_uri = config.get("worker_main_http_uri", None)
35 self.worker_cpu_affinity = config.get("worker_cpu_affinity")
36
3437 if self.worker_listeners:
3538 for listener in self.worker_listeners:
3639 bind_address = listener.pop("bind_address", None)
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15
15 from synapse.util import logcontext
1616 from twisted.web.http import HTTPClient
1717 from twisted.internet.protocol import Factory
1818 from twisted.internet import defer, reactor
1919 from synapse.http.endpoint import matrix_federation_endpoint
20 from synapse.util.logcontext import (
21 preserve_context_over_fn, preserve_context_over_deferred
22 )
2320 import simplejson as json
2421 import logging
2522
4239
4340 for i in range(5):
4441 try:
45 protocol = yield preserve_context_over_fn(
46 endpoint.connect, factory
47 )
48 server_response, server_certificate = yield preserve_context_over_deferred(
49 protocol.remote_key
50 )
51 defer.returnValue((server_response, server_certificate))
52 return
42 with logcontext.PreserveLoggingContext():
43 protocol = yield endpoint.connect(factory)
44 server_response, server_certificate = yield protocol.remote_key
45 defer.returnValue((server_response, server_certificate))
5346 except SynapseKeyClientError as e:
5447 logger.exception("Error getting key for %r" % (server_name,))
5548 if e.status.startswith("4"):
00 # -*- coding: utf-8 -*-
11 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2017 New Vector Ltd.
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
1415
1516 from synapse.crypto.keyclient import fetch_server_key
1617 from synapse.api.errors import SynapseError, Codes
17 from synapse.util import unwrapFirstError
18 from synapse.util.async import ObservableDeferred
18 from synapse.util import unwrapFirstError, logcontext
1919 from synapse.util.logcontext import (
20 preserve_context_over_deferred, preserve_context_over_fn, PreserveLoggingContext,
20 PreserveLoggingContext,
2121 preserve_fn
2222 )
2323 from synapse.util.metrics import Measure
5656 json_object(dict): The JSON object to verify.
5757 deferred(twisted.internet.defer.Deferred):
5858 A deferred (server_name, key_id, verify_key) tuple that resolves when
59 a verify key has been fetched
59 a verify key has been fetched. The deferreds' callbacks are run with no
60 logcontext.
6061 """
6162
6263
7374 self.perspective_servers = self.config.perspectives
7475 self.hs = hs
7576
77 # map from server name to Deferred. Has an entry for each server with
78 # an ongoing key download; the Deferred completes once the download
79 # completes.
80 #
81 # These are regular, logcontext-agnostic Deferreds.
7682 self.key_downloads = {}
7783
7884 def verify_json_for_server(self, server_name, json_object):
79 return self.verify_json_objects_for_server(
80 [(server_name, json_object)]
81 )[0]
85 return logcontext.make_deferred_yieldable(
86 self.verify_json_objects_for_server(
87 [(server_name, json_object)]
88 )[0]
89 )
8290
8391 def verify_json_objects_for_server(self, server_and_json):
84 """Bulk verfies signatures of json objects, bulk fetching keys as
92 """Bulk verifies signatures of json objects, bulk fetching keys as
8593 necessary.
8694
8795 Args:
8896 server_and_json (list): List of pairs of (server_name, json_object)
8997
9098 Returns:
91 list of deferreds indicating success or failure to verify each
92 json object's signature for the given server_name.
99 List<Deferred>: for each input pair, a deferred indicating success
100 or failure to verify each json object's signature for the given
101 server_name. The deferreds run their callbacks in the sentinel
102 logcontext.
93103 """
94104 verify_requests = []
95105
116126
117127 verify_requests.append(verify_request)
118128
119 @defer.inlineCallbacks
120 def handle_key_deferred(verify_request):
121 server_name = verify_request.server_name
122 try:
123 _, key_id, verify_key = yield verify_request.deferred
124 except IOError as e:
125 logger.warn(
126 "Got IOError when downloading keys for %s: %s %s",
127 server_name, type(e).__name__, str(e.message),
128 )
129 raise SynapseError(
130 502,
131 "Error downloading keys for %s" % (server_name,),
132 Codes.UNAUTHORIZED,
133 )
134 except Exception as e:
135 logger.exception(
136 "Got Exception when downloading keys for %s: %s %s",
137 server_name, type(e).__name__, str(e.message),
138 )
139 raise SynapseError(
140 401,
141 "No key for %s with id %s" % (server_name, key_ids),
142 Codes.UNAUTHORIZED,
143 )
144
145 json_object = verify_request.json_object
146
147 logger.debug("Got key %s %s:%s for server %s, verifying" % (
148 key_id, verify_key.alg, verify_key.version, server_name,
149 ))
150 try:
151 verify_signed_json(json_object, server_name, verify_key)
152 except:
153 raise SynapseError(
154 401,
155 "Invalid signature for server %s with key %s:%s" % (
156 server_name, verify_key.alg, verify_key.version
157 ),
158 Codes.UNAUTHORIZED,
159 )
160
161 server_to_deferred = {
162 server_name: defer.Deferred()
163 for server_name, _ in server_and_json
164 }
165
166 with PreserveLoggingContext():
167
168 # We want to wait for any previous lookups to complete before
169 # proceeding.
170 wait_on_deferred = self.wait_for_previous_lookups(
171 [server_name for server_name, _ in server_and_json],
172 server_to_deferred,
173 )
174
175 # Actually start fetching keys.
176 wait_on_deferred.addBoth(
177 lambda _: self.get_server_verify_keys(verify_requests)
178 )
179
180 # When we've finished fetching all the keys for a given server_name,
181 # resolve the deferred passed to `wait_for_previous_lookups` so that
182 # any lookups waiting will proceed.
183 server_to_request_ids = {}
184
185 def remove_deferreds(res, server_name, verify_request):
186 request_id = id(verify_request)
187 server_to_request_ids[server_name].discard(request_id)
188 if not server_to_request_ids[server_name]:
189 d = server_to_deferred.pop(server_name, None)
190 if d:
191 d.callback(None)
192 return res
193
194 for verify_request in verify_requests:
195 server_name = verify_request.server_name
196 request_id = id(verify_request)
197 server_to_request_ids.setdefault(server_name, set()).add(request_id)
198 deferred.addBoth(remove_deferreds, server_name, verify_request)
129 preserve_fn(self._start_key_lookups)(verify_requests)
199130
200131 # Pass those keys to handle_key_deferred so that the json object
201132 # signatures can be verified
133 handle = preserve_fn(_handle_key_deferred)
202134 return [
203 preserve_context_over_fn(handle_key_deferred, verify_request)
204 for verify_request in verify_requests
135 handle(rq) for rq in verify_requests
205136 ]
137
138 @defer.inlineCallbacks
139 def _start_key_lookups(self, verify_requests):
140 """Sets off the key fetches for each verify request
141
142 Once each fetch completes, verify_request.deferred will be resolved.
143
144 Args:
145 verify_requests (List[VerifyKeyRequest]):
146 """
147
148 # create a deferred for each server we're going to look up the keys
149 # for; we'll resolve them once we have completed our lookups.
150 # These will be passed into wait_for_previous_lookups to block
151 # any other lookups until we have finished.
152 # The deferreds are called with no logcontext.
153 server_to_deferred = {
154 rq.server_name: defer.Deferred()
155 for rq in verify_requests
156 }
157
158 # We want to wait for any previous lookups to complete before
159 # proceeding.
160 yield self.wait_for_previous_lookups(
161 [rq.server_name for rq in verify_requests],
162 server_to_deferred,
163 )
164
165 # Actually start fetching keys.
166 self._get_server_verify_keys(verify_requests)
167
168 # When we've finished fetching all the keys for a given server_name,
169 # resolve the deferred passed to `wait_for_previous_lookups` so that
170 # any lookups waiting will proceed.
171 #
172 # map from server name to a set of request ids
173 server_to_request_ids = {}
174
175 for verify_request in verify_requests:
176 server_name = verify_request.server_name
177 request_id = id(verify_request)
178 server_to_request_ids.setdefault(server_name, set()).add(request_id)
179
180 def remove_deferreds(res, verify_request):
181 server_name = verify_request.server_name
182 request_id = id(verify_request)
183 server_to_request_ids[server_name].discard(request_id)
184 if not server_to_request_ids[server_name]:
185 d = server_to_deferred.pop(server_name, None)
186 if d:
187 d.callback(None)
188 return res
189
190 for verify_request in verify_requests:
191 verify_request.deferred.addBoth(
192 remove_deferreds, verify_request,
193 )
206194
207195 @defer.inlineCallbacks
208196 def wait_for_previous_lookups(self, server_names, server_to_deferred):
211199 Args:
212200 server_names (list): list of server_names we want to look up
213201 server_to_deferred (dict): server_name to deferred which gets
214 resolved once we've finished looking up keys for that server
202 resolved once we've finished looking up keys for that server.
203 The Deferreds should be regular twisted ones which call their
204 callbacks with no logcontext.
205
206 Returns: a Deferred which resolves once all key lookups for the given
207 servers have completed. Follows the synapse rules of logcontext
208 preservation.
215209 """
216210 while True:
217211 wait_on = [
225219 else:
226220 break
227221
222 def rm(r, server_name_):
223 self.key_downloads.pop(server_name_, None)
224 return r
225
228226 for server_name, deferred in server_to_deferred.items():
229 d = ObservableDeferred(preserve_context_over_deferred(deferred))
230 self.key_downloads[server_name] = d
231
232 def rm(r, server_name):
233 self.key_downloads.pop(server_name, None)
234 return r
235
236 d.addBoth(rm, server_name)
237
238 def get_server_verify_keys(self, verify_requests):
227 self.key_downloads[server_name] = deferred
228 deferred.addBoth(rm, server_name)
229
230 def _get_server_verify_keys(self, verify_requests):
239231 """Tries to find at least one key for each verify request
240232
241233 For each verify_request, verify_request.deferred is called back with
304296 if not missing_keys:
305297 break
306298
307 for verify_request in requests_missing_keys.values():
308 verify_request.deferred.errback(SynapseError(
309 401,
310 "No key for %s with id %s" % (
311 verify_request.server_name, verify_request.key_ids,
312 ),
313 Codes.UNAUTHORIZED,
314 ))
299 with PreserveLoggingContext():
300 for verify_request in requests_missing_keys:
301 verify_request.deferred.errback(SynapseError(
302 401,
303 "No key for %s with id %s" % (
304 verify_request.server_name, verify_request.key_ids,
305 ),
306 Codes.UNAUTHORIZED,
307 ))
315308
316309 def on_err(err):
317 for verify_request in verify_requests:
318 if not verify_request.deferred.called:
319 verify_request.deferred.errback(err)
320
321 do_iterations().addErrback(on_err)
310 with PreserveLoggingContext():
311 for verify_request in verify_requests:
312 if not verify_request.deferred.called:
313 verify_request.deferred.errback(err)
314
315 preserve_fn(do_iterations)().addErrback(on_err)
322316
323317 @defer.inlineCallbacks
324318 def get_keys_from_store(self, server_name_and_key_ids):
332326 Deferred: resolves to dict[str, dict[str, VerifyKey]]: map from
333327 server_name -> key_id -> VerifyKey
334328 """
335 res = yield preserve_context_over_deferred(defer.gatherResults(
329 res = yield logcontext.make_deferred_yieldable(defer.gatherResults(
336330 [
337331 preserve_fn(self.store.get_server_verify_keys)(
338332 server_name, key_ids
340334 for server_name, key_ids in server_name_and_key_ids
341335 ],
342336 consumeErrors=True,
343 )).addErrback(unwrapFirstError)
337 ).addErrback(unwrapFirstError))
344338
345339 defer.returnValue(dict(res))
346340
361355 )
362356 defer.returnValue({})
363357
364 results = yield preserve_context_over_deferred(defer.gatherResults(
358 results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
365359 [
366360 preserve_fn(get_key)(p_name, p_keys)
367361 for p_name, p_keys in self.perspective_servers.items()
368362 ],
369363 consumeErrors=True,
370 )).addErrback(unwrapFirstError)
364 ).addErrback(unwrapFirstError))
371365
372366 union_of_keys = {}
373367 for result in results:
401395
402396 defer.returnValue(keys)
403397
404 results = yield preserve_context_over_deferred(defer.gatherResults(
398 results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
405399 [
406400 preserve_fn(get_key)(server_name, key_ids)
407401 for server_name, key_ids in server_name_and_key_ids
408402 ],
409403 consumeErrors=True,
410 )).addErrback(unwrapFirstError)
404 ).addErrback(unwrapFirstError))
411405
412406 merged = {}
413407 for result in results:
484478 for server_name, response_keys in processed_response.items():
485479 keys.setdefault(server_name, {}).update(response_keys)
486480
487 yield preserve_context_over_deferred(defer.gatherResults(
481 yield logcontext.make_deferred_yieldable(defer.gatherResults(
488482 [
489483 preserve_fn(self.store_keys)(
490484 server_name=server_name,
494488 for server_name, response_keys in keys.items()
495489 ],
496490 consumeErrors=True
497 )).addErrback(unwrapFirstError)
491 ).addErrback(unwrapFirstError))
498492
499493 defer.returnValue(keys)
500494
542536
543537 keys.update(response_keys)
544538
545 yield preserve_context_over_deferred(defer.gatherResults(
539 yield logcontext.make_deferred_yieldable(defer.gatherResults(
546540 [
547541 preserve_fn(self.store_keys)(
548542 server_name=key_server_name,
552546 for key_server_name, verify_keys in keys.items()
553547 ],
554548 consumeErrors=True
555 )).addErrback(unwrapFirstError)
549 ).addErrback(unwrapFirstError))
556550
557551 defer.returnValue(keys)
558552
618612 response_keys.update(verify_keys)
619613 response_keys.update(old_verify_keys)
620614
621 yield preserve_context_over_deferred(defer.gatherResults(
615 yield logcontext.make_deferred_yieldable(defer.gatherResults(
622616 [
623617 preserve_fn(self.store.store_server_keys_json)(
624618 server_name=server_name,
631625 for key_id in updated_key_ids
632626 ],
633627 consumeErrors=True,
634 )).addErrback(unwrapFirstError)
628 ).addErrback(unwrapFirstError))
635629
636630 results[server_name] = response_keys
637631
709703
710704 defer.returnValue(verify_keys)
711705
712 @defer.inlineCallbacks
713706 def store_keys(self, server_name, from_server, verify_keys):
714707 """Store a collection of verify keys for a given server
715708 Args:
720713 A deferred that completes when the keys are stored.
721714 """
722715 # TODO(markjh): Store whether the keys have expired.
723 yield preserve_context_over_deferred(defer.gatherResults(
716 return logcontext.make_deferred_yieldable(defer.gatherResults(
724717 [
725718 preserve_fn(self.store.store_server_verify_key)(
726719 server_name, server_name, key.time_added, key
728721 for key_id, key in verify_keys.items()
729722 ],
730723 consumeErrors=True,
731 )).addErrback(unwrapFirstError)
724 ).addErrback(unwrapFirstError))
725
726
727 @defer.inlineCallbacks
728 def _handle_key_deferred(verify_request):
729 server_name = verify_request.server_name
730 try:
731 with PreserveLoggingContext():
732 _, key_id, verify_key = yield verify_request.deferred
733 except IOError as e:
734 logger.warn(
735 "Got IOError when downloading keys for %s: %s %s",
736 server_name, type(e).__name__, str(e.message),
737 )
738 raise SynapseError(
739 502,
740 "Error downloading keys for %s" % (server_name,),
741 Codes.UNAUTHORIZED,
742 )
743 except Exception as e:
744 logger.exception(
745 "Got Exception when downloading keys for %s: %s %s",
746 server_name, type(e).__name__, str(e.message),
747 )
748 raise SynapseError(
749 401,
750 "No key for %s with id %s" % (server_name, verify_request.key_ids),
751 Codes.UNAUTHORIZED,
752 )
753
754 json_object = verify_request.json_object
755
756 logger.debug("Got key %s %s:%s for server %s, verifying" % (
757 key_id, verify_key.alg, verify_key.version, server_name,
758 ))
759 try:
760 verify_signed_json(json_object, server_name, verify_key)
761 except:
762 raise SynapseError(
763 401,
764 "Invalid signature for server %s with key %s:%s" % (
765 server_name, verify_key.alg, verify_key.version
766 ),
767 Codes.UNAUTHORIZED,
768 )
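
Taken together, the docstrings in this file describe a consistent logcontext contract: the per-request deferreds returned by verify_json_objects_for_server run their callbacks in the sentinel context, and a caller that wants to yield on one restores its own context via make_deferred_yieldable, exactly as verify_json_for_server now does. A minimal sketch of that calling pattern (not part of the diff):

    from twisted.internet import defer

    from synapse.util import logcontext

    @defer.inlineCallbacks
    def verify_one(keyring, server_name, json_object):
        d = keyring.verify_json_objects_for_server(
            [(server_name, json_object)],
        )[0]
        # The deferred fires with no logcontext; wrapping it restores ours
        # once the result (or failure) arrives.
        yield logcontext.make_deferred_yieldable(d)
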
0 # -*- coding: utf-8 -*-
1 # Copyright 2017 New Vector Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 def check_event_for_spam(event):
17 """Checks if a given event is considered "spammy" by this server.
18
19 If the server considers an event spammy, then it will be rejected if
20 sent by a local user. If it is sent by a user on another server, then
21 users receive a blank event.
22
23 Args:
24 event (synapse.events.EventBase): the event to be checked
25
26 Returns:
27 bool: True if the event is spammy.
28 """
29 if not hasattr(event, "content") or "body" not in event.content:
30 return False
31
32 # for example:
33 #
34 # if "the third flower is green" in event.content["body"]:
35 # return True
36
37 return False
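
check_event_for_spam is a proof-of-concept hook, and the commented example suggests the module is intended to be edited in place to implement a local policy. A hypothetical variant honouring the same contract (not part of the diff; the phrase list is made up):

    BLOCKED_PHRASES = ["example spammy phrase"]  # illustrative only

    def check_event_for_spam(event):
        # Same contract as above: True means "spammy", so the event is
        # rejected locally or blanked when it arrives over federation.
        if not hasattr(event, "content") or "body" not in event.content:
            return False
        body = event.content["body"]
        return any(phrase in body for phrase in BLOCKED_PHRASES)
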
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
15
16 from twisted.internet import defer
17
18 from synapse.events.utils import prune_event
19
20 from synapse.crypto.event_signing import check_event_content_hash
14 import logging
2115
2216 from synapse.api.errors import SynapseError
23
24 from synapse.util import unwrapFirstError
25 from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
26
27 import logging
28
17 from synapse.crypto.event_signing import check_event_content_hash
18 from synapse.events import spamcheck
19 from synapse.events.utils import prune_event
20 from synapse.util import unwrapFirstError, logcontext
21 from twisted.internet import defer
2922
3023 logger = logging.getLogger(__name__)
3124
5649 """
5750 deferreds = self._check_sigs_and_hashes(pdus)
5851
59 def callback(pdu):
60 return pdu
52 @defer.inlineCallbacks
53 def handle_check_result(pdu, deferred):
54 try:
55 res = yield logcontext.make_deferred_yieldable(deferred)
56 except SynapseError:
57 res = None
6158
62 def errback(failure, pdu):
63 failure.trap(SynapseError)
64 return None
65
66 def try_local_db(res, pdu):
6759 if not res:
6860 # Check local db.
69 return self.store.get_event(
61 res = yield self.store.get_event(
7062 pdu.event_id,
7163 allow_rejected=True,
7264 allow_none=True,
7365 )
74 return res
7566
76 def try_remote(res, pdu):
7767 if not res and pdu.origin != origin:
78 return self.get_pdu(
79 destinations=[pdu.origin],
80 event_id=pdu.event_id,
81 outlier=outlier,
82 timeout=10000,
83 ).addErrback(lambda e: None)
84 return res
68 try:
69 res = yield self.get_pdu(
70 destinations=[pdu.origin],
71 event_id=pdu.event_id,
72 outlier=outlier,
73 timeout=10000,
74 )
75 except SynapseError:
76 pass
8577
86 def warn(res, pdu):
8778 if not res:
8879 logger.warn(
8980 "Failed to find copy of %s with valid signature",
9081 pdu.event_id,
9182 )
92 return res
9383
94 for pdu, deferred in zip(pdus, deferreds):
95 deferred.addCallbacks(
96 callback, errback, errbackArgs=[pdu]
97 ).addCallback(
98 try_local_db, pdu
99 ).addCallback(
100 try_remote, pdu
101 ).addCallback(
102 warn, pdu
84 defer.returnValue(res)
85
86 handle = logcontext.preserve_fn(handle_check_result)
87 deferreds2 = [
88 handle(pdu, deferred)
89 for pdu, deferred in zip(pdus, deferreds)
90 ]
91
92 valid_pdus = yield logcontext.make_deferred_yieldable(
93 defer.gatherResults(
94 deferreds2,
95 consumeErrors=True,
10396 )
104
105 valid_pdus = yield preserve_context_over_deferred(defer.gatherResults(
106 deferreds,
107 consumeErrors=True
108 )).addErrback(unwrapFirstError)
97 ).addErrback(unwrapFirstError)
10998
11099 if include_none:
111100 defer.returnValue(valid_pdus)
113102 defer.returnValue([p for p in valid_pdus if p])
114103
115104 def _check_sigs_and_hash(self, pdu):
116 return self._check_sigs_and_hashes([pdu])[0]
105 return logcontext.make_deferred_yieldable(
106 self._check_sigs_and_hashes([pdu])[0],
107 )
117108
118109 def _check_sigs_and_hashes(self, pdus):
119 """Throws a SynapseError if a PDU does not have the correct
120 signatures.
110 """Checks that each of the received events is correctly signed by the
111 sending server.
112
113 Args:
114 pdus (list[FrozenEvent]): the events to be checked
121115
122116 Returns:
123 FrozenEvent: Either the given event or it redacted if it failed the
124 content hash check.
117 list[Deferred]: for each input event, a deferred which:
118 * returns the original event if the checks pass
119 * returns a redacted version of the event (if the signature
120 matched but the hash did not)
121 * throws a SynapseError if the signature check failed.
122 The deferreds run their callbacks in the sentinel logcontext.
125123 """
126124
127125 redacted_pdus = [
129127 for pdu in pdus
130128 ]
131129
132 deferreds = preserve_fn(self.keyring.verify_json_objects_for_server)([
130 deferreds = self.keyring.verify_json_objects_for_server([
133131 (p.origin, p.get_pdu_json())
134132 for p in redacted_pdus
135133 ])
136134
135 ctx = logcontext.LoggingContext.current_context()
136
137137 def callback(_, pdu, redacted):
138 if not check_event_content_hash(pdu):
139 logger.warn(
140 "Event content has been tampered, redacting %s: %s",
141 pdu.event_id, pdu.get_pdu_json()
142 )
143 return redacted
144 return pdu
138 with logcontext.PreserveLoggingContext(ctx):
139 if not check_event_content_hash(pdu):
140 logger.warn(
141 "Event content has been tampered, redacting %s: %s",
142 pdu.event_id, pdu.get_pdu_json()
143 )
144 return redacted
145
146 if spamcheck.check_event_for_spam(pdu):
147 logger.warn(
148 "Event contains spam, redacting %s: %s",
149 pdu.event_id, pdu.get_pdu_json()
150 )
151 return redacted
152
153 return pdu
145154
146155 def errback(failure, pdu):
147156 failure.trap(SynapseError)
148 logger.warn(
149 "Signature check failed for %s",
150 pdu.event_id,
151 )
157 with logcontext.PreserveLoggingContext(ctx):
158 logger.warn(
159 "Signature check failed for %s",
160 pdu.event_id,
161 )
152162 return failure
153163
154164 for deferred, pdu, redacted in zip(deferreds, pdus, redacted_pdus):
2121 from synapse.api.errors import (
2222 CodeMessageException, HttpResponseException, SynapseError,
2323 )
24 from synapse.util import unwrapFirstError
24 from synapse.util import unwrapFirstError, logcontext
2525 from synapse.util.caches.expiringcache import ExpiringCache
2626 from synapse.util.logutils import log_function
2727 from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
188188 ]
189189
190190 # FIXME: We should handle signature failures more gracefully.
191 pdus[:] = yield preserve_context_over_deferred(defer.gatherResults(
191 pdus[:] = yield logcontext.make_deferred_yieldable(defer.gatherResults(
192192 self._check_sigs_and_hashes(pdus),
193193 consumeErrors=True,
194 )).addErrback(unwrapFirstError)
194 ).addErrback(unwrapFirstError))
195195
196196 defer.returnValue(pdus)
197197
251251 pdu = pdu_list[0]
252252
253253 # Check signatures are correct.
254 signed_pdu = yield self._check_sigs_and_hashes([pdu])[0]
254 signed_pdu = yield self._check_sigs_and_hash(pdu)
255255
256256 break
257257
152152 class BaseFederationServlet(object):
153153 REQUIRE_AUTH = True
154154
155 def __init__(self, handler, authenticator, ratelimiter, server_name,
156 room_list_handler):
155 def __init__(self, handler, authenticator, ratelimiter, server_name):
157156 self.handler = handler
158157 self.authenticator = authenticator
159158 self.ratelimiter = ratelimiter
160 self.room_list_handler = room_list_handler
161159
162160 def _wrap(self, func):
163161 authenticator = self.authenticator
589587 else:
590588 network_tuple = ThirdPartyInstanceID(None, None)
591589
592 data = yield self.room_list_handler.get_local_public_room_list(
590 data = yield self.handler.get_local_public_room_list(
593591 limit, since_token,
594592 network_tuple=network_tuple
595593 )
610608 }))
611609
612610
613 SERVLET_CLASSES = (
611 FEDERATION_SERVLET_CLASSES = (
614612 FederationSendServlet,
615613 FederationPullServlet,
616614 FederationEventServlet,
633631 FederationThirdPartyInviteExchangeServlet,
634632 On3pidBindServlet,
635633 OpenIdUserInfo,
636 PublicRoomList,
637634 FederationVersionServlet,
638635 )
639636
637 ROOM_LIST_CLASSES = (
638 PublicRoomList,
639 )
640
640641
641642 def register_servlets(hs, resource, authenticator, ratelimiter):
642 for servletclass in SERVLET_CLASSES:
643 for servletclass in FEDERATION_SERVLET_CLASSES:
643644 servletclass(
644645 handler=hs.get_replication_layer(),
645646 authenticator=authenticator,
646647 ratelimiter=ratelimiter,
647648 server_name=hs.hostname,
648 room_list_handler=hs.get_room_list_handler(),
649649 ).register(resource)
650
651 for servletclass in ROOM_LIST_CLASSES:
652 servletclass(
653 handler=hs.get_room_list_handler(),
654 authenticator=authenticator,
655 ratelimiter=ratelimiter,
656 server_name=hs.hostname,
657 ).register(resource)
269269 user_id (str)
270270 from_token (StreamToken)
271271 """
272 now_token = yield self.hs.get_event_sources().get_current_token()
273
272274 room_ids = yield self.store.get_rooms_for_user(user_id)
273275
274276 # First we check if any devices have changed
279281 # Then work out if any users have since joined
280282 rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key)
281283
284 member_events = yield self.store.get_membership_changes_for_user(
285 user_id, from_token.room_key, now_token.room_key
286 )
287 rooms_changed.update(event.room_id for event in member_events)
288
282289 stream_ordering = RoomStreamToken.parse_stream_token(
283 from_token.room_key).stream
290 from_token.room_key
291 ).stream
284292
285293 possibly_changed = set(changed)
294 possibly_left = set()
286295 for room_id in rooms_changed:
296 current_state_ids = yield self.store.get_current_state_ids(room_id)
297
298 # The user may have left the room
299 # TODO: Check if they actually did or if we were just invited.
300 if room_id not in room_ids:
301 for key, event_id in current_state_ids.iteritems():
302 etype, state_key = key
303 if etype != EventTypes.Member:
304 continue
305 possibly_left.add(state_key)
306 continue
307
287308 # Fetch the current state at the time.
288309 try:
289310 event_ids = yield self.store.get_forward_extremeties_for_room(
294315 # ordering: treat it the same as a new room
295316 event_ids = []
296317
297 current_state_ids = yield self.store.get_current_state_ids(room_id)
298
299318 # special-case for an empty prev state: include all members
300319 # in the changed list
301320 if not event_ids:
306325 possibly_changed.add(state_key)
307326 continue
308327
328 current_member_id = current_state_ids.get((EventTypes.Member, user_id))
329 if not current_member_id:
330 continue
331
309332 # mapping from event_id -> state_dict
310333 prev_state_ids = yield self.store.get_state_ids_for_events(event_ids)
334
335 # Check if we've joined the room. If so, we just blindly add all the users to
336 # the "possibly changed" users.
337 for state_dict in prev_state_ids.itervalues():
338 member_event = state_dict.get((EventTypes.Member, user_id), None)
339 if not member_event or member_event != current_member_id:
340 for key, event_id in current_state_ids.iteritems():
341 etype, state_key = key
342 if etype != EventTypes.Member:
343 continue
344 possibly_changed.add(state_key)
345 break
311346
312347 # If there has been any change in membership, include them in the
313348 # possibly changed list. We'll check if they are joined below,
319354
320355 # check if this member has changed since any of the extremities
321356 # at the stream_ordering, and add them to the list if so.
322 for state_dict in prev_state_ids.values():
357 for state_dict in prev_state_ids.itervalues():
323358 prev_event_id = state_dict.get(key, None)
324359 if not prev_event_id or prev_event_id != event_id:
325 possibly_changed.add(state_key)
360 if state_key != user_id:
361 possibly_changed.add(state_key)
326362 break
327363
328 users_who_share_room = yield self.store.get_users_who_share_room_with_user(
329 user_id
330 )
331
332 # Take the intersection of the users whose devices may have changed
333 # and those that actually still share a room with the user
334 defer.returnValue(users_who_share_room & possibly_changed)
364 if possibly_changed or possibly_left:
365 users_who_share_room = yield self.store.get_users_who_share_room_with_user(
366 user_id
367 )
368
369 # Take the intersection of the users whose devices may have changed
370 # and those that actually still share a room with the user
371 possibly_joined = possibly_changed & users_who_share_room
372 possibly_left = (possibly_changed | possibly_left) - users_who_share_room
373 else:
374 possibly_joined = []
375 possibly_left = []
376
377 defer.returnValue({
378 "changed": list(possibly_joined),
379 "left": list(possibly_left),
380 })
335381
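
The set arithmetic above is what splits the result into its two buckets; a small illustration with made-up user ids (not part of the diff):

    possibly_changed = {"@a:example.com", "@b:example.com"}
    possibly_left = {"@c:example.com"}
    users_who_share_room = {"@a:example.com", "@c:example.com"}

    # Users we still share a room with are reported as "changed"...
    assert possibly_changed & users_who_share_room == {"@a:example.com"}
    # ...and everyone else lands in "left", so clients stop tracking them.
    assert ((possibly_changed | possibly_left) - users_who_share_room
            == {"@b:example.com"})
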
336382 @defer.inlineCallbacks
337383 def on_federation_query_user_devices(self, user_id):
10731073 if is_blocked:
10741074 raise SynapseError(403, "This room has been blocked on this server")
10751075
1076 if self.hs.config.block_non_admin_invites:
1077 raise SynapseError(403, "This server does not accept room invites")
1078
10761079 membership = event.content.get("membership")
10771080 if event.type != EventTypes.Member or membership != Membership.INVITE:
10781081 raise SynapseError(400, "The event was not an m.room.member invite event")
14121415 auth_events=auth_events,
14131416 )
14141417
1415 if not event.internal_metadata.is_outlier():
1418 if not event.internal_metadata.is_outlier() and not backfilled:
14161419 yield self.action_generator.handle_push_actions_for_event(
14171420 event, context
14181421 )
16051608
16061609 context.rejected = RejectedReason.AUTH_ERROR
16071610
1608 if event.type == EventTypes.GuestAccess:
1611 if event.type == EventTypes.GuestAccess and not context.rejected:
16091612 yield self.maybe_kick_guest_users(event)
16101613
16111614 defer.returnValue(context)
20892092 @defer.inlineCallbacks
20902093 @log_function
20912094 def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
2095 """Handle an exchange_third_party_invite request from a remote server
2096
2097 The remote server will call this when it wants to turn a 3pid invite
2098 into a normal m.room.member invite.
2099
2100 Returns:
2101 Deferred: resolves (to None)
2102 """
20922103 builder = self.event_builder_factory.new(event_dict)
20932104
20942105 message_handler = self.hs.get_handlers().message_handler
21072118 raise e
21082119 yield self._check_signature(event, context)
21092120
2121 # XXX we send the invite here, but send_membership_event also sends it,
2122 # so we end up making two requests. I think this is redundant.
21102123 returned_invite = yield self.send_invite(origin, event)
21112124 # TODO: Make sure the signatures actually are correct.
21122125 event.signatures.update(returned_invite.signatures)
2126
21132127 member_handler = self.hs.get_handlers().room_member_handler
21142128 yield member_handler.send_membership_event(None, event, context)
21152129
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
14 from synapse.events import spamcheck
1515 from twisted.internet import defer
1616
1717 from synapse.api.constants import EventTypes, Membership
320320 token_id=requester.access_token_id,
321321 txn_id=txn_id
322322 )
323
324 if spamcheck.check_event_for_spam(event):
325 raise SynapseError(
326 403, "Spam is not permitted here", Codes.FORBIDDEN
327 )
328
323329 yield self.send_nonmember_event(
324330 requester,
325331 event,
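For context, spamcheck.check_event_for_spam(event) above is expected to return a truthy value when the event should be rejected. A minimal, purely illustrative checker might look like this (the heuristic is an assumption, not the shipped PoC):

    def check_event_for_spam(event):
        """Return True if the event should be rejected as spam.

        Toy heuristic for illustration only; the real hook can apply
        whatever policy the deployment wants.
        """
        body = event.content.get("body", "")
        return "buy cheap pills" in body.lower()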
190190 if action in ["kick", "unban"]:
191191 effective_membership_state = "leave"
192192
193 # if this is a join with a 3pid signature, we may need to turn a 3pid
194 # invite into a normal invite before we can handle the join.
193195 if third_party_signed is not None:
194196 replication = self.hs.get_replication_layer()
195197 yield replication.exchange_third_party_invite(
206208 is_blocked = yield self.store.is_room_blocked(room_id)
207209 if is_blocked:
208210 raise SynapseError(403, "This room has been blocked on this server")
211
212 if (effective_membership_state == "invite" and
213 self.hs.config.block_non_admin_invites):
214 is_requester_admin = yield self.auth.is_server_admin(
215 requester.user,
216 )
217 if not is_requester_admin:
218 raise SynapseError(
219 403, "Invites have been disabled on this server",
220 )
209221
210222 latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
211223 current_state_ids = yield self.state_handler.get_current_state_ids(
470482 requester,
471483 txn_id
472484 ):
485 if self.hs.config.block_non_admin_invites:
486 is_requester_admin = yield self.auth.is_server_admin(
487 requester.user,
488 )
489 if not is_requester_admin:
490 raise SynapseError(
491 403, "Invites have been disabled on this server",
492 Codes.FORBIDDEN,
493 )
494
473495 invitee = yield self._lookup_3pid(
474496 id_server, medium, address
475497 )
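Both invite paths above apply the same guard. Condensed into one hypothetical helper (_check_invites_allowed is an invented name; auth.is_server_admin and hs.config.block_non_admin_invites are as shown in the diff):

    from twisted.internet import defer
    from synapse.api.errors import SynapseError

    @defer.inlineCallbacks
    def _check_invites_allowed(self, requester):
        # Reject invites from non-admins when the homeserver is configured
        # with block_non_admin_invites (the option added in this release).
        if self.hs.config.block_non_admin_invites:
            is_requester_admin = yield self.auth.is_server_admin(requester.user)
            if not is_requester_admin:
                raise SynapseError(403, "Invites have been disabled on this server")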
107107 return True
108108
109109
110 class DeviceLists(collections.namedtuple("DeviceLists", [
111 "changed", # list of user_ids whose devices may have changed
112 "left", # list of user_ids whose devices we no longer track
113 ])):
114 __slots__ = []
115
116 def __nonzero__(self):
117 return bool(self.changed or self.left)
118
119
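The __nonzero__ override makes an empty result falsy, which lets callers decide cheaply whether there is anything to send down /sync. A self-contained restatement of the class above (Python 2, where __nonzero__ is the truthiness hook):

    import collections

    class DeviceLists(collections.namedtuple("DeviceLists", ["changed", "left"])):
        __slots__ = []

        def __nonzero__(self):
            return bool(self.changed or self.left)

    assert not DeviceLists(changed=[], left=[])     # nothing to report
    assert DeviceLists(changed=["@a:hs"], left=[])  # include in the response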
110120 class SyncResult(collections.namedtuple("SyncResult", [
111121 "next_batch", # Token for the next sync
112122 "presence", # List of presence events for the user.
289299
290300 if recents:
291301 recents = sync_config.filter_collection.filter_room_timeline(recents)
302
303 # We check if there are any state events; if there are, then we pass
304 # all current state events to the filter_events function. This is to
305 # ensure that we always include current state in the timeline
306 current_state_ids = frozenset()
307 if any(e.is_state() for e in recents):
308 current_state_ids = yield self.state.get_current_state_ids(room_id)
309 current_state_ids = frozenset(current_state_ids.itervalues())
310
292311 recents = yield filter_events_for_client(
293312 self.store,
294313 sync_config.user.to_string(),
295314 recents,
315 always_include_ids=current_state_ids,
296316 )
297317 else:
298318 recents = []
324344 loaded_recents = sync_config.filter_collection.filter_room_timeline(
325345 events
326346 )
347
348 # We check if there are any state events; if there are, then we pass
349 # all current state events to the filter_events function. This is to
350 # ensure that we always include current state in the timeline
351 current_state_ids = frozenset()
352 if any(e.is_state() for e in loaded_recents):
353 current_state_ids = yield self.state.get_current_state_ids(room_id)
354 current_state_ids = frozenset(current_state_ids.itervalues())
355
327356 loaded_recents = yield filter_events_for_client(
328357 self.store,
329358 sync_config.user.to_string(),
330359 loaded_recents,
360 always_include_ids=current_state_ids,
331361 )
332362 loaded_recents.extend(recents)
333363 recents = loaded_recents
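The ids collected here feed always_include_ids in filter_events_for_client; as the visibility.py hunk later in this diff shows, ignored senders are dropped first and forced inclusion is checked next. A toy restatement of that precedence (the fallback stub is an assumption):

    def _visible_to_user(event, ignore_list, always_include_ids):
        # Ignored senders lose even if the event id is forced...
        if not event.is_state() and event.sender in ignore_list:
            return False
        # ...otherwise forced ids bypass the history-visibility rules.
        if event.event_id in always_include_ids:
            return True
        return _check_history_visibility(event)

    def _check_history_visibility(event):
        return True  # stand-in for the real per-user visibility logic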
534564 res = yield self._generate_sync_entry_for_rooms(
535565 sync_result_builder, account_data_by_room
536566 )
537 newly_joined_rooms, newly_joined_users = res
567 newly_joined_rooms, newly_joined_users, _, _ = res
568 _, _, newly_left_rooms, newly_left_users = res
538569
539570 block_all_presence_data = (
540571 since_token is None and
548579 yield self._generate_sync_entry_for_to_device(sync_result_builder)
549580
550581 device_lists = yield self._generate_sync_entry_for_device_list(
551 sync_result_builder
582 sync_result_builder,
583 newly_joined_rooms=newly_joined_rooms,
584 newly_joined_users=newly_joined_users,
585 newly_left_rooms=newly_left_rooms,
586 newly_left_users=newly_left_users,
552587 )
553588
554589 device_id = sync_config.device_id
573608
574609 @measure_func("_generate_sync_entry_for_device_list")
575610 @defer.inlineCallbacks
576 def _generate_sync_entry_for_device_list(self, sync_result_builder):
611 def _generate_sync_entry_for_device_list(self, sync_result_builder,
612 newly_joined_rooms, newly_joined_users,
613 newly_left_rooms, newly_left_users):
577614 user_id = sync_result_builder.sync_config.user.to_string()
578615 since_token = sync_result_builder.since_token
579616
580617 if since_token and since_token.device_list_key:
581 room_ids = yield self.store.get_rooms_for_user(user_id)
582
583 user_ids_changed = set()
584618 changed = yield self.store.get_user_whose_devices_changed(
585619 since_token.device_list_key
586620 )
587 for other_user_id in changed:
588 other_room_ids = yield self.store.get_rooms_for_user(other_user_id)
589 if room_ids.intersection(other_room_ids):
590 user_ids_changed.add(other_user_id)
591
592 defer.returnValue(user_ids_changed)
621
622 # TODO: Be more clever than this, i.e. remove users who we already
623 # share a room with?
624 for room_id in newly_joined_rooms:
625 joined_users = yield self.state.get_current_user_in_room(room_id)
626 newly_joined_users.update(joined_users)
627
628 for room_id in newly_left_rooms:
629 left_users = yield self.state.get_current_user_in_room(room_id)
630 newly_left_users.update(left_users)
631
632 # TODO: Check that these users are actually new, i.e. either they
633 # weren't in the previous sync *or* they left and rejoined.
634 changed.update(newly_joined_users)
635
636 if not changed and not newly_left_users:
637 defer.returnValue(DeviceLists(
638 changed=[],
639 left=newly_left_users,
640 ))
641
642 users_who_share_room = yield self.store.get_users_who_share_room_with_user(
643 user_id
644 )
645
646 defer.returnValue(DeviceLists(
647 changed=users_who_share_room & changed,
648 left=set(newly_left_users) - users_who_share_room,
649 ))
593650 else:
594 defer.returnValue([])
651 defer.returnValue(DeviceLists(
652 changed=[],
653 left=[],
654 ))
595655
596656 @defer.inlineCallbacks
597657 def _generate_sync_entry_for_to_device(self, sync_result_builder):
755815 account_data_by_room(dict): Dictionary of per room account data
756816
757817 Returns:
758 Deferred(tuple): Returns a 2-tuple of
759 `(newly_joined_rooms, newly_joined_users)`
818 Deferred(tuple): Returns a 4-tuple of
819 `(newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users)`
760820 """
761821 user_id = sync_result_builder.sync_config.user.to_string()
762822 block_all_room_ephemeral = (
787847 )
788848 if not tags_by_room:
789849 logger.debug("no-oping sync")
790 defer.returnValue(([], []))
850 defer.returnValue(([], [], [], []))
791851
792852 ignored_account_data = yield self.store.get_global_account_data_by_type_for_user(
793853 "m.ignored_user_list", user_id=user_id,
800860
801861 if since_token:
802862 res = yield self._get_rooms_changed(sync_result_builder, ignored_users)
803 room_entries, invited, newly_joined_rooms = res
863 room_entries, invited, newly_joined_rooms, newly_left_rooms = res
804864
805865 tags_by_room = yield self.store.get_updated_tags(
806866 user_id, since_token.account_data_key,
808868 else:
809869 res = yield self._get_all_rooms(sync_result_builder, ignored_users)
810870 room_entries, invited, newly_joined_rooms = res
871 newly_left_rooms = []
811872
812873 tags_by_room = yield self.store.get_tags_for_user(user_id)
813874
828889
829890 # Now we want to get any newly joined users
830891 newly_joined_users = set()
892 newly_left_users = set()
831893 if since_token:
832894 for joined_sync in sync_result_builder.joined:
833895 it = itertools.chain(
834 joined_sync.timeline.events, joined_sync.state.values()
896 joined_sync.timeline.events, joined_sync.state.itervalues()
835897 )
836898 for event in it:
837899 if event.type == EventTypes.Member:
838900 if event.membership == Membership.JOIN:
839901 newly_joined_users.add(event.state_key)
840
841 defer.returnValue((newly_joined_rooms, newly_joined_users))
902 else:
903 prev_content = event.unsigned.get("prev_content", {})
904 prev_membership = prev_content.get("membership", None)
905 if prev_membership == Membership.JOIN:
906 newly_left_users.add(event.state_key)
907
908 newly_left_users -= newly_joined_users
909
910 defer.returnValue((
911 newly_joined_rooms,
912 newly_joined_users,
913 newly_left_rooms,
914 newly_left_users,
915 ))
842916
843917 @defer.inlineCallbacks
844918 def _have_rooms_changed(self, sync_result_builder):
908982 mem_change_events_by_room_id.setdefault(event.room_id, []).append(event)
909983
910984 newly_joined_rooms = []
985 newly_left_rooms = []
911986 room_entries = []
912987 invited = []
913 for room_id, events in mem_change_events_by_room_id.items():
988 for room_id, events in mem_change_events_by_room_id.iteritems():
914989 non_joins = [e for e in events if e.membership != Membership.JOIN]
915990 has_join = len(non_joins) != len(events)
916991
917992 # We want to figure out if we joined the room at some point since
918993 # the last sync (even if we have since left). This is to make sure
919994 # we do send down the room, and with full state, where necessary
995
996 old_state_ids = None
997 if room_id in joined_room_ids and non_joins:
998 # Always include if the user (re)joined the room, especially
999 # important so that device list changes are calculated correctly.
1000 # If there are non-join member events, but we are still in the room,
1001 # then the user must have left and rejoined
1002 newly_joined_rooms.append(room_id)
1003
1004 # User is in the room so we don't need to do the invite/leave checks
1005 continue
1006
9201007 if room_id in joined_room_ids or has_join:
9211008 old_state_ids = yield self.get_state_at(room_id, since_token)
9221009 old_mem_ev_id = old_state_ids.get((EventTypes.Member, user_id), None)
9281015 if not old_mem_ev or old_mem_ev.membership != Membership.JOIN:
9291016 newly_joined_rooms.append(room_id)
9301017
931 if room_id in joined_room_ids:
932 continue
1018 # If user is in the room then we don't need to do the invite/leave checks
1019 if room_id in joined_room_ids:
1020 continue
9331021
9341022 if not non_joins:
9351023 continue
1024
1025 # Check if we have left the room. This can either be because we were
1026 # joined before *or* that we since joined and then left.
1027 if events[-1].membership != Membership.JOIN:
1028 if has_join:
1029 newly_left_rooms.append(room_id)
1030 else:
1031 if not old_state_ids:
1032 old_state_ids = yield self.get_state_at(room_id, since_token)
1033 old_mem_ev_id = old_state_ids.get(
1034 (EventTypes.Member, user_id),
1035 None,
1036 )
1037 old_mem_ev = None
1038 if old_mem_ev_id:
1039 old_mem_ev = yield self.store.get_event(
1040 old_mem_ev_id, allow_none=True
1041 )
1042 if old_mem_ev and old_mem_ev.membership == Membership.JOIN:
1043 newly_left_rooms.append(room_id)
9361044
9371045 # Only bother if we're still currently invited
9381046 should_invite = non_joins[-1].membership == Membership.INVITE
10111119 upto_token=since_token,
10121120 ))
10131121
1014 defer.returnValue((room_entries, invited, newly_joined_rooms))
1122 defer.returnValue((room_entries, invited, newly_joined_rooms, newly_left_rooms))
10151123
10161124 @defer.inlineCallbacks
10171125 def _get_all_rooms(self, sync_result_builder, ignored_users):
12591367 self.invited = []
12601368 self.archived = []
12611369 self.device = []
1370 self.to_device = []
12621371
12631372
12641373 class RoomSyncResultBuilder(object):
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 import socket
1415
1516 from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
1617 from twisted.internet import defer, reactor
2930
3031 SERVER_CACHE = {}
3132
32
33 # Our record of an individual server which can be tried in order to reach
34 # a destination.
35 # "host" is actually a dotted-quad or ipv6 address string, except when there's
36 # no SRV record, in which case it is the original hostname.
3337 _Server = collections.namedtuple(
3438 "_Server", "priority weight host port expires"
3539 )
218222 return self.default_server
219223 else:
220224 raise ConnectError(
221 "Not server available for %s" % self.service_name
225 "No server available for %s" % self.service_name
222226 )
223227
228 # look for all servers with the same priority
224229 min_priority = self.servers[0].priority
225230 weight_indexes = list(
226231 (index, server.weight + 1)
230235
231236 total_weight = sum(weight for index, weight in weight_indexes)
232237 target_weight = random.randint(0, total_weight)
233
234238 for index, weight in weight_indexes:
235239 target_weight -= weight
236240 if target_weight <= 0:
237241 server = self.servers[index]
242 # XXX: this looks totally dubious:
243 #
244 # (a) we never reuse a server until we have been through
245 # all of the servers at the same priority, so if the
246 # weights are A: 100, B:1, we always do ABABAB instead of
247 # AAAA...AAAB (approximately).
248 #
249 # (b) After using all the servers at the lowest priority,
250 # we move onto the next priority. We should only use the
251 # second priority if servers at the top priority are
252 # unreachable.
253 #
238254 del self.servers[index]
239255 self.used_servers.append(server)
240256 return server
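For comparison with the XXX above, a hedged sketch of what RFC 2782 weighted selection over the lowest-priority servers would look like (note it does not delete the chosen server, so weights hold over repeated picks):

    import random

    def pick_server_rfc2782(servers):
        # Only servers at the lowest priority value are candidates.
        min_priority = min(s.priority for s in servers)
        candidates = [s for s in servers if s.priority == min_priority]
        # Pick proportionally to weight; the +1 keeps zero-weight servers
        # selectable, matching the code above.
        total = sum(s.weight + 1 for s in candidates)
        target = random.randint(0, total - 1)
        for s in candidates:
            target -= s.weight + 1
            if target < 0:
                return s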
279295 continue
280296
281297 payload = answer.payload
282 host = str(payload.target)
283 srv_ttl = answer.ttl
284
285 try:
286 answers, _, _ = yield dns_client.lookupAddress(host)
287 except DNSNameError:
288 continue
289
290 for answer in answers:
291 if answer.type == dns.A and answer.payload:
292 ip = answer.payload.dottedQuad()
293 host_ttl = min(srv_ttl, answer.ttl)
294
295 servers.append(_Server(
296 host=ip,
297 port=int(payload.port),
298 priority=int(payload.priority),
299 weight=int(payload.weight),
300 expires=int(clock.time()) + host_ttl,
301 ))
298
299 hosts = yield _get_hosts_for_srv_record(
300 dns_client, str(payload.target)
301 )
302
303 for (ip, ttl) in hosts:
304 host_ttl = min(answer.ttl, ttl)
305
306 servers.append(_Server(
307 host=ip,
308 port=int(payload.port),
309 priority=int(payload.priority),
310 weight=int(payload.weight),
311 expires=int(clock.time()) + host_ttl,
312 ))
302313
303314 servers.sort()
304315 cache[service_name] = list(servers)
316327 raise e
317328
318329 defer.returnValue(servers)
330
331
332 @defer.inlineCallbacks
333 def _get_hosts_for_srv_record(dns_client, host):
334 """Look up each of the hosts in a SRV record
335
336 Args:
337 dns_client (twisted.names.dns.IResolver):
338 host (basestring): host to look up
339
340 Returns:
341 Deferred[list[(str, int)]]: a list of (host, ttl) pairs
342
343 """
344 ip4_servers = []
345 ip6_servers = []
346
347 def cb(res):
348 # lookupAddress and lookupIP6Address return a three-tuple
349 # giving the answer, authority, and additional sections of the
350 # response.
351 #
352 # we only care about the answers.
353
354 return res[0]
355
356 def eb(res):
357 res.trap(DNSNameError)
358 return []
359
360 # no logcontexts here, so we can safely fire these off and gatherResults
361 d1 = dns_client.lookupAddress(host).addCallbacks(cb, eb)
362 d2 = dns_client.lookupIPV6Address(host).addCallbacks(cb, eb)
363 results = yield defer.gatherResults([d1, d2], consumeErrors=True)
364
365 for result in results:
366 for answer in result:
367 if not answer.payload:
368 continue
369
370 try:
371 if answer.type == dns.A:
372 ip = answer.payload.dottedQuad()
373 ip4_servers.append((ip, answer.ttl))
374 elif answer.type == dns.AAAA:
375 ip = socket.inet_ntop(
376 socket.AF_INET6, answer.payload.address,
377 )
378 ip6_servers.append((ip, answer.ttl))
379 else:
380 # the most likely candidate here is a CNAME record.
381 # rfc2782 says srvs may not point to aliases.
382 logger.warn(
383 "Ignoring unexpected DNS record type %s for %s",
384 answer.type, host,
385 )
386 continue
387 except Exception as e:
388 logger.warn("Ignoring invalid DNS response for %s: %s",
389 host, e)
390 continue
391
392 # keep the ipv4 results before the ipv6 results, mostly to match historical
393 # behaviour.
394 defer.returnValue(ip4_servers + ip6_servers)
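A hypothetical call, using the module-level resolver from twisted.names (the hostname and the result values are invented):

    from twisted.internet import defer
    from twisted.names import client

    @defer.inlineCallbacks
    def demo():
        hosts = yield _get_hosts_for_srv_record(client, "synapse.example.com")
        # e.g. [("198.51.100.1", 300), ("2001:db8::1", 300)] -- v4 before v6
        print(hosts)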
1818
1919 from .push_rule_evaluator import PushRuleEvaluatorForEvent
2020
21 from synapse.visibility import filter_events_for_clients_context
2221 from synapse.api.constants import EventTypes, Membership
22 from synapse.metrics import get_metrics_for
23 from synapse.util.caches import metrics as cache_metrics
2324 from synapse.util.caches.descriptors import cached
2425 from synapse.util.async import Linearizer
2526
3031
3132
3233 rules_by_room = {}
34
35 push_metrics = get_metrics_for(__name__)
36
37 push_rules_invalidation_counter = push_metrics.register_counter(
38 "push_rules_invalidation_counter"
39 )
40 push_rules_state_size_counter = push_metrics.register_counter(
41 "push_rules_state_size_counter"
42 )
43
44 # Measures whether we use the fast path of using state deltas, or if we have to
45 # recalculate from scratch
46 push_rules_delta_state_cache_metric = cache_metrics.register_cache(
47 "cache",
48 size_callback=lambda: 0, # Meaningless size, as this isn't a cache that stores values
49 cache_name="push_rules_delta_state_cache_metric",
50 )
3351
3452
3553 class BulkPushRuleEvaluator(object):
4058 def __init__(self, hs):
4159 self.hs = hs
4260 self.store = hs.get_datastore()
61
62 self.room_push_rule_cache_metrics = cache_metrics.register_cache(
63 "cache",
64 size_callback=lambda: 0, # There's no good value for this
65 cache_name="room_push_rule_cache",
66 )
4367
4468 @defer.inlineCallbacks
4569 def _get_rules_for_event(self, event, context):
78102 # It's important that RulesForRoom gets added to self._get_rules_for_room.cache
79103 # before any lookup methods get called on it as otherwise there may be
80104 # a race if invalidate_all gets called (which assumes its in the cache)
81 return RulesForRoom(self.hs, room_id, self._get_rules_for_room.cache)
105 return RulesForRoom(
106 self.hs, room_id, self._get_rules_for_room.cache,
107 self.room_push_rule_cache_metrics,
108 )
82109
83110 @defer.inlineCallbacks
84111 def action_for_event_by_user(self, event, context):
91118 rules_by_user = yield self._get_rules_for_event(event, context)
92119 actions_by_user = {}
93120
94 # None of these users can be peeking since this list of users comes
95 # from the set of users in the room, so we know for sure they're all
96 # actually in the room.
97 user_tuples = [(u, False) for u in rules_by_user]
98
99 filtered_by_user = yield filter_events_for_clients_context(
100 self.store, user_tuples, [event], {event.event_id: context}
101 )
102
103121 room_members = yield self.store.get_joined_users_from_context(
104122 event, context
105123 )
109127 condition_cache = {}
110128
111129 for uid, rules in rules_by_user.iteritems():
130 if event.sender == uid:
131 continue
132
133 if not event.is_state():
134 is_ignored = yield self.store.is_ignored_by(event.sender, uid)
135 if is_ignored:
136 continue
137
112138 display_name = None
113139 profile_info = room_members.get(uid)
114140 if profile_info:
119145 # that user, as they might not be already joined.
120146 if event.type == EventTypes.Member and event.state_key == uid:
121147 display_name = event.content.get("displayname", None)
122
123 filtered = filtered_by_user[uid]
124 if len(filtered) == 0:
125 continue
126
127 if filtered[0].sender == uid:
128 continue
129148
130149 for rule in rules:
131150 if 'enabled' in rule and not rule['enabled']:
169188 the entire cache for the room.
170189 """
171190
172 def __init__(self, hs, room_id, rules_for_room_cache):
191 def __init__(self, hs, room_id, rules_for_room_cache, room_push_rule_cache_metrics):
173192 """
174193 Args:
175194 hs (HomeServer)
176195 room_id (str)
177196 rules_for_room_cache(Cache): The cache object that caches these
178197 RoomsForUser objects.
198 room_push_rule_cache_metrics (CacheMetric)
179199 """
180200 self.room_id = room_id
181201 self.is_mine_id = hs.is_mine_id
182202 self.store = hs.get_datastore()
203 self.room_push_rule_cache_metrics = room_push_rule_cache_metrics
183204
184205 self.linearizer = Linearizer(name="rules_for_room")
185206
221242 """
222243 state_group = context.state_group
223244
245 if state_group and self.state_group == state_group:
246 logger.debug("Using cached rules for %r", self.room_id)
247 self.room_push_rule_cache_metrics.inc_hits()
248 defer.returnValue(self.rules_by_user)
249
224250 with (yield self.linearizer.queue(())):
225251 if state_group and self.state_group == state_group:
226252 logger.debug("Using cached rules for %r", self.room_id)
253 self.room_push_rule_cache_metrics.inc_hits()
227254 defer.returnValue(self.rules_by_user)
255
256 self.room_push_rule_cache_metrics.inc_misses()
228257
229258 ret_rules_by_user = {}
230259 missing_member_event_ids = {}
233262 # results.
234263 ret_rules_by_user = self.rules_by_user
235264 current_state_ids = context.delta_ids
265
266 push_rules_delta_state_cache_metric.inc_hits()
236267 else:
237268 current_state_ids = context.current_state_ids
269 push_rules_delta_state_cache_metric.inc_misses()
270
271 push_rules_state_size_counter.inc_by(len(current_state_ids))
238272
239273 logger.debug(
240274 "Looking for member changes in %r %r", state_group, current_state_ids
280314 logger.debug("Found new member events %r", missing_member_event_ids)
281315 yield self._update_rules_with_member_event_ids(
282316 ret_rules_by_user, missing_member_event_ids, state_group, event
317 )
318 else:
319 # The push rules didn't change but let's update the cache anyway
320 self.update_cache(
321 self.sequence,
322 members={}, # There were no membership changes
323 rules_by_user=ret_rules_by_user,
324 state_group=state_group
283325 )
284326
285327 if logger.isEnabledFor(logging.DEBUG):
379421 self.state_group = object()
380422 self.member_map = {}
381423 self.rules_by_user = {}
424 push_rules_invalidation_counter.inc()
382425
383426 def update_cache(self, sequence, members, rules_by_user, state_group):
384427 if sequence == self.sequence:
243243
244244 @defer.inlineCallbacks
245245 def _build_notification_dict(self, event, tweaks, badge):
246 if self.data.get('format') == 'event_id_only':
247 d = {
248 'notification': {
249 'event_id': event.event_id,
250 'room_id': event.room_id,
251 'counts': {
252 'unread': badge,
253 },
254 'devices': [
255 {
256 'app_id': self.app_id,
257 'pushkey': self.pushkey,
258 'pushkey_ts': long(self.pushkey_ts / 1000),
259 'data': self.data_minus_url,
260 }
261 ]
262 }
263 }
264 defer.returnValue(d)
265
246266 ctx = yield push_tools.get_context_for_event(
247267 self.store, self.state_handler, event, self.user_id
248268 )
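With format 'event_id_only' configured on the pusher, the branch above short-circuits with a minimal body. An illustrative payload (all identifiers invented):

    example_notification = {
        "notification": {
            "event_id": "$1504280000abcde:example.com",
            "room_id": "!roomid:example.com",
            "counts": {"unread": 2},
            "devices": [{
                "app_id": "com.example.app.ios",
                "pushkey": "APNSPUSHKEY",
                "pushkey_ts": 1504280000,
                "data": {},
            }],
        }
    }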
199199 return re.compile(r, flags=re.IGNORECASE)
200200
201201
202 def _flatten_dict(d, prefix=[], result={}):
202 def _flatten_dict(d, prefix=[], result=None):
203 if result is None:
204 result = {}
203205 for key, value in d.items():
204206 if isinstance(value, basestring):
205207 result[".".join(prefix + [key])] = value.lower()
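The change above fixes Python's classic mutable-default-argument pitfall: the {} default was created once, at function definition time, and shared across calls. A minimal demonstration of the old behaviour:

    def buggy(result={}):
        result["hit"] = result.get("hit", 0) + 1
        return result

    print(buggy())  # {'hit': 1}
    print(buggy())  # {'hit': 2} -- state leaked between unrelated calls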
3030 "pyyaml": ["yaml"],
3131 "pyasn1": ["pyasn1"],
3232 "daemonize": ["daemonize"],
33 "py-bcrypt": ["bcrypt"],
33 "bcrypt": ["bcrypt"],
3434 "pillow": ["PIL"],
3535 "pydenticon": ["pydenticon"],
3636 "ujson": ["ujson"],
3939 "pymacaroons-pynacl": ["pymacaroons"],
4040 "msgpack-python>=0.3.0": ["msgpack"],
4141 "phonenumbers>=8.2.0": ["phonenumbers"],
42 "affinity": ["affinity"],
4243 }
4344 CONDITIONAL_REQUIREMENTS = {
4445 "web_client": {
2828 max_entries=50000 * CACHE_SIZE_FACTOR,
2929 )
3030
31 def insert_client_ip(self, user, access_token, ip, user_agent, device_id):
31 def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id):
3232 now = int(self._clock.time_msec())
33 user_id = user.to_string()
3433 key = (user_id, access_token, ip)
3534
3635 try:
322322
323323 @classmethod
324324 def from_line(cls, line):
325 user_id, access_token, ip, device_id, last_seen, user_agent = line.split(" ", 5)
326
327 return cls(user_id, access_token, ip, user_agent, device_id, int(last_seen))
328
329 def to_line(self):
330 return " ".join((
331 self.user_id, self.access_token, self.ip, self.device_id,
332 str(self.last_seen), self.user_agent,
325 user_id, jsn = line.split(" ", 1)
326
327 access_token, ip, user_agent, device_id, last_seen = json.loads(jsn)
328
329 return cls(
330 user_id, access_token, ip, user_agent, device_id, last_seen
331 )
332
333 def to_line(self):
334 return self.user_id + " " + json.dumps((
335 self.access_token, self.ip, self.user_agent, self.device_id,
336 self.last_seen,
333337 ))
334338
335339
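The new wire format prefixes the user id and JSON-encodes the remaining fields, so individual fields can safely contain spaces. A round-trip sketch with invented values:

    import json

    line = "@alice:example.com " + json.dumps(
        ("token123", "10.1.2.3", "Mozilla/5.0 (X11; Linux)", "DEVICEID", 1506000000000)
    )
    user_id, jsn = line.split(" ", 1)
    access_token, ip, user_agent, device_id, last_seen = json.loads(jsn)
    assert (user_id, ip, device_id) == ("@alice:example.com", "10.1.2.3", "DEVICEID")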
243243 becoming full.
244244 """
245245 if self.state == ConnectionStates.CLOSED:
246 logger.info("[%s] Not sending, connection closed", self.id())
246 logger.debug("[%s] Not sending, connection closed", self.id())
247247 return
248248
249249 if do_buffer and self.state != ConnectionStates.ESTABLISHED:
263263 def _queue_command(self, cmd):
264264 """Queue the command until the connection is ready to write to again.
265265 """
266 logger.info("[%s] Queing as conn %r, cmd: %r", self.id(), self.state, cmd)
266 logger.debug("[%s] Queueing as conn %r, cmd: %r", self.id(), self.state, cmd)
267267 self.pending_commands.append(cmd)
268268
269269 if len(self.pending_commands) > self.max_line_buffer:
167167
168168 DEFAULT_MESSAGE = (
169169 "Sharing illegal content on this server is not permitted and rooms in"
170 " violatation will be blocked."
170 " violation will be blocked."
171171 )
172172
173173 def __init__(self, hs):
295295
296296 class ResetPasswordRestServlet(ClientV1RestServlet):
297297 """Post request to allow an administrator reset password for a user.
298 This need a user have a administrator access in Synapse.
298 This needs the user to have administrator access in Synapse.
299299 Example:
300300 http://localhost:8008/_matrix/client/api/v1/admin/reset_password/
301301 @user:to_reset_password?access_token=admin_access_token
318318 @defer.inlineCallbacks
319319 def on_POST(self, request, target_user_id):
320320 """Post request to allow an administrator reset password for a user.
321 This need a user have a administrator access in Synapse.
321 This needs the user to have administrator access in Synapse.
322322 """
323323 UserID.from_string(target_user_id)
324324 requester = yield self.auth.get_user_by_req(request)
342342
343343 class GetUsersPaginatedRestServlet(ClientV1RestServlet):
344344 """Get request to get specific number of users from Synapse.
345 This need a user have a administrator access in Synapse.
345 This needs the user to have administrator access in Synapse.
346346 Example:
347347 http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/
348348 @admin:user?access_token=admin_access_token&start=0&limit=10
361361 @defer.inlineCallbacks
362362 def on_GET(self, request, target_user_id):
363363 """Get request to get specific number of users from Synapse.
364 This need a user have a administrator access in Synapse.
364 This needs the user to have administrator access in Synapse.
365365 """
366366 target_user = UserID.from_string(target_user_id)
367367 requester = yield self.auth.get_user_by_req(request)
394394 @defer.inlineCallbacks
395395 def on_POST(self, request, target_user_id):
396396 """Post request to get specific number of users from Synapse..
397 This need a user have a administrator access in Synapse.
397 This needs the user to have administrator access in Synapse.
398398 Example:
399399 http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/
400400 @admin:user?access_token=admin_access_token
432432 class SearchUsersRestServlet(ClientV1RestServlet):
433433 """Get request to search user table for specific users according to
434434 search term.
435 This need a user have a administrator access in Synapse.
435 This needs the user to have administrator access in Synapse.
436436 Example:
437437 http://localhost:8008/_matrix/client/api/v1/admin/search_users/
438438 @admin:user?access_token=admin_access_token&term=alice
452452 def on_GET(self, request, target_user_id):
453453 """Get request to search user table for specific users according to
454454 search term.
455 This need a user have a administrator access in Synapse.
455 This needs the user to have administrator access in Synapse.
456456 """
457457 target_user = UserID.from_string(target_user_id)
458458 requester = yield self.auth.get_user_by_req(request)
187187
188188 user_id = requester.user.to_string()
189189
190 changed = yield self.device_handler.get_user_ids_changed(
190 results = yield self.device_handler.get_user_ids_changed(
191191 user_id, from_token,
192192 )
193193
194 defer.returnValue((200, {
195 "changed": list(changed),
196 }))
194 defer.returnValue((200, results))
197195
198196
199197 class OneTimeKeyServlet(RestServlet):
109109 filter_id = parse_string(request, "filter", default=None)
110110 full_state = parse_boolean(request, "full_state", default=False)
111111
112 logger.info(
112 logger.debug(
113113 "/sync: user=%r, timeout=%r, since=%r,"
114114 " set_presence=%r, filter_id=%r, device_id=%r" % (
115115 user, timeout, since, set_presence, filter_id, device_id
163163 )
164164
165165 time_now = self.clock.time_msec()
166
167 joined = self.encode_joined(
168 sync_result.joined, time_now, requester.access_token_id, filter.event_fields
169 )
170
171 invited = self.encode_invited(
172 sync_result.invited, time_now, requester.access_token_id
173 )
174
175 archived = self.encode_archived(
176 sync_result.archived, time_now, requester.access_token_id,
166 response_content = self.encode_response(
167 time_now, sync_result, requester.access_token_id, filter
168 )
169
170 defer.returnValue((200, response_content))
171
172 @staticmethod
173 def encode_response(time_now, sync_result, access_token_id, filter):
174 joined = SyncRestServlet.encode_joined(
175 sync_result.joined, time_now, access_token_id, filter.event_fields
176 )
177
178 invited = SyncRestServlet.encode_invited(
179 sync_result.invited, time_now, access_token_id,
180 )
181
182 archived = SyncRestServlet.encode_archived(
183 sync_result.archived, time_now, access_token_id,
177184 filter.event_fields,
178185 )
179186
180 response_content = {
187 return {
181188 "account_data": {"events": sync_result.account_data},
182189 "to_device": {"events": sync_result.to_device},
183190 "device_lists": {
184 "changed": list(sync_result.device_lists),
191 "changed": list(sync_result.device_lists.changed),
192 "left": list(sync_result.device_lists.left),
185193 },
186 "presence": self.encode_presence(
194 "presence": SyncRestServlet.encode_presence(
187195 sync_result.presence, time_now
188196 ),
189197 "rooms": {
195203 "next_batch": sync_result.next_batch.to_string(),
196204 }
197205
198 defer.returnValue((200, response_content))
199
200 def encode_presence(self, events, time_now):
206 @staticmethod
207 def encode_presence(events, time_now):
201208 return {
202209 "events": [
203210 {
211218 ]
212219 }
213220
214 def encode_joined(self, rooms, time_now, token_id, event_fields):
221 @staticmethod
222 def encode_joined(rooms, time_now, token_id, event_fields):
215223 """
216224 Encode the joined rooms in a sync result
217225
230238 """
231239 joined = {}
232240 for room in rooms:
233 joined[room.room_id] = self.encode_room(
241 joined[room.room_id] = SyncRestServlet.encode_room(
234242 room, time_now, token_id, only_fields=event_fields
235243 )
236244
237245 return joined
238246
239 def encode_invited(self, rooms, time_now, token_id):
247 @staticmethod
248 def encode_invited(rooms, time_now, token_id):
240249 """
241250 Encode the invited rooms in a sync result
242251
269278
270279 return invited
271280
272 def encode_archived(self, rooms, time_now, token_id, event_fields):
281 @staticmethod
282 def encode_archived(rooms, time_now, token_id, event_fields):
273283 """
274284 Encode the archived rooms in a sync result
275285
288298 """
289299 joined = {}
290300 for room in rooms:
291 joined[room.room_id] = self.encode_room(
301 joined[room.room_id] = SyncRestServlet.encode_room(
292302 room, time_now, token_id, joined=False, only_fields=event_fields
293303 )
294304
307307 " WHERE stream_id < ?"
308308 )
309309 txn.execute(update_max_id_sql, (next_id, next_id))
310
311 @cachedInlineCallbacks(num_args=2, cache_context=True, max_entries=5000)
312 def is_ignored_by(self, ignored_user_id, ignorer_user_id, cache_context):
313 ignored_account_data = yield self.get_global_account_data_by_type_for_user(
314 "m.ignored_user_list", ignorer_user_id,
315 on_invalidate=cache_context.invalidate,
316 )
317 if not ignored_account_data:
318 defer.returnValue(False)
319
320 defer.returnValue(
321 ignored_user_id in ignored_account_data.get("ignored_users", {})
322 )
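is_ignored_by consults the m.ignored_user_list account data, whose content maps ignored user ids to (currently empty) objects per the client-server spec, so the membership test above reduces to a dict lookup (user ids invented):

    ignored_account_data = {
        "ignored_users": {
            "@spammer:example.com": {},
        }
    }
    assert "@spammer:example.com" in ignored_account_data.get("ignored_users", {})
    assert "@friend:example.com" not in ignored_account_data.get("ignored_users", {})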
5555 )
5656 reactor.addSystemEventTrigger("before", "shutdown", self._update_client_ips_batch)
5757
58 def insert_client_ip(self, user, access_token, ip, user_agent, device_id):
59 now = int(self._clock.time_msec())
60 key = (user.to_string(), access_token, ip)
58 def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id,
59 now=None):
60 if not now:
61 now = int(self._clock.time_msec())
62 key = (user_id, access_token, ip)
6163
6264 try:
6365 last_seen = self.client_ip_last_seen.get(key)
112112 keys[key_id] = key
113113 defer.returnValue(keys)
114114
115 @defer.inlineCallbacks
116115 def store_server_verify_key(self, server_name, from_server, time_now_ms,
117116 verify_key):
118117 """Stores a NACL verification key for the given server.
119118 Args:
120119 server_name (str): The name of the server.
121 key_id (str): The version of the key for the server.
122120 from_server (str): Where the verification key was looked up
123 ts_now_ms (int): The time now in milliseconds
124 verification_key (VerifyKey): The NACL verify key.
125 """
126 yield self._simple_upsert(
127 table="server_signature_keys",
128 keyvalues={
129 "server_name": server_name,
130 "key_id": "%s:%s" % (verify_key.alg, verify_key.version),
131 },
132 values={
133 "from_server": from_server,
134 "ts_added_ms": time_now_ms,
135 "verify_key": buffer(verify_key.encode()),
136 },
137 desc="store_server_verify_key",
138 )
121 time_now_ms (int): The time now in milliseconds
122 verify_key (nacl.signing.VerifyKey): The NACL verify key.
123 """
124 key_id = "%s:%s" % (verify_key.alg, verify_key.version)
125
126 def _txn(txn):
127 self._simple_upsert_txn(
128 txn,
129 table="server_signature_keys",
130 keyvalues={
131 "server_name": server_name,
132 "key_id": key_id,
133 },
134 values={
135 "from_server": from_server,
136 "ts_added_ms": time_now_ms,
137 "verify_key": buffer(verify_key.encode()),
138 },
139 )
140 txn.call_after(
141 self._get_server_verify_key.invalidate,
142 (server_name, key_id)
143 )
144
145 return self.runInteraction("store_server_verify_key", _txn)
139146
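The rewrite wraps the upsert in a single interaction and uses txn.call_after so the _get_server_verify_key cache entry is only invalidated once the transaction commits. The general shape of that pattern, as a hypothetical store method (table and method names invented; runInteraction, _simple_upsert_txn, and call_after as in synapse's storage layer):

    def update_thing(self, key, value):
        def _txn(txn):
            self._simple_upsert_txn(
                txn,
                table="things",
                keyvalues={"key": key},
                values={"value": value},
            )
            # Deferred until the transaction actually commits, so readers
            # never re-cache the stale row.
            txn.call_after(self._get_thing.invalidate, (key,))

        return self.runInteraction("update_thing", _txn)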
140147 def store_server_keys_json(self, server_name, key_id, from_server,
141148 ts_now_ms, ts_expires_ms, key_json_bytes):
4242
4343
4444 @defer.inlineCallbacks
45 def filter_events_for_clients(store, user_tuples, events, event_id_to_state):
45 def filter_events_for_clients(store, user_tuples, events, event_id_to_state,
46 always_include_ids=frozenset()):
4647 """ Returns dict of user_id -> list of events that user is allowed to
4748 see.
4849
5354 * the user has not been a member of the room since the
5455 given events
5556 events ([synapse.events.EventBase]): list of events to filter
57 always_include_ids (set(event_id)): set of event ids to specifically
58 include (unless sender is ignored)
5659 """
5760 forgotten = yield preserve_context_over_deferred(defer.gatherResults([
5861 defer.maybeDeferred(
9093 if not event.is_state() and event.sender in ignore_list:
9194 return False
9295
96 if event.event_id in always_include_ids:
97 return True
98
9399 state = event_id_to_state[event.event_id]
94100
95101 # get the room_visibility at the time of the event.
188194
189195
190196 @defer.inlineCallbacks
191 def filter_events_for_clients_context(store, user_tuples, events, event_id_to_context):
192 user_ids = set(u[0] for u in user_tuples)
193 event_id_to_state = {}
194 for event_id, context in event_id_to_context.items():
195 state = yield store.get_events([
196 e_id
197 for key, e_id in context.current_state_ids.iteritems()
198 if key == (EventTypes.RoomHistoryVisibility, "")
199 or (key[0] == EventTypes.Member and key[1] in user_ids)
200 ])
201 event_id_to_state[event_id] = state
202
203 res = yield filter_events_for_clients(
204 store, user_tuples, events, event_id_to_state
205 )
206 defer.returnValue(res)
207
208
209 @defer.inlineCallbacks
210 def filter_events_for_client(store, user_id, events, is_peeking=False):
197 def filter_events_for_client(store, user_id, events, is_peeking=False,
198 always_include_ids=frozenset()):
211199 """
212200 Check which events a user is allowed to see
213201
231219 types=types
232220 )
233221 res = yield filter_events_for_clients(
234 store, [(user_id, is_peeking)], events, event_id_to_state
222 store, [(user_id, is_peeking)], events, event_id_to_state,
223 always_include_ids=always_include_ids,
235224 )
236225 defer.returnValue(res.get(user_id, []))
0 # -*- coding: utf-8 -*-
1 # Copyright 2017 New Vector Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import time
15
16 import signedjson.key
17 import signedjson.sign
18 from mock import Mock
19 from synapse.api.errors import SynapseError
20 from synapse.crypto import keyring
21 from synapse.util import async, logcontext
22 from synapse.util.logcontext import LoggingContext
23 from tests import unittest, utils
24 from twisted.internet import defer
25
26
27 class MockPerspectiveServer(object):
28 def __init__(self):
29 self.server_name = "mock_server"
30 self.key = signedjson.key.generate_signing_key(0)
31
32 def get_verify_keys(self):
33 vk = signedjson.key.get_verify_key(self.key)
34 return {
35 "%s:%s" % (vk.alg, vk.version): vk,
36 }
37
38 def get_signed_key(self, server_name, verify_key):
39 key_id = "%s:%s" % (verify_key.alg, verify_key.version)
40 res = {
41 "server_name": server_name,
42 "old_verify_keys": {},
43 "valid_until_ts": time.time() * 1000 + 3600,
44 "verify_keys": {
45 key_id: {
46 "key": signedjson.key.encode_verify_key_base64(verify_key)
47 }
48 }
49 }
50 signedjson.sign.sign_json(res, self.server_name, self.key)
51 return res
52
53
54 class KeyringTestCase(unittest.TestCase):
55 @defer.inlineCallbacks
56 def setUp(self):
57 self.mock_perspective_server = MockPerspectiveServer()
58 self.http_client = Mock()
59 self.hs = yield utils.setup_test_homeserver(
60 handlers=None,
61 http_client=self.http_client,
62 )
63 self.hs.config.perspectives = {
64 self.mock_perspective_server.server_name:
65 self.mock_perspective_server.get_verify_keys()
66 }
67
68 def check_context(self, _, expected):
69 self.assertEquals(
70 getattr(LoggingContext.current_context(), "test_key", None),
71 expected
72 )
73
74 @defer.inlineCallbacks
75 def test_wait_for_previous_lookups(self):
76 sentinel_context = LoggingContext.current_context()
77
78 kr = keyring.Keyring(self.hs)
79
80 lookup_1_deferred = defer.Deferred()
81 lookup_2_deferred = defer.Deferred()
82
83 with LoggingContext("one") as context_one:
84 context_one.test_key = "one"
85
86 wait_1_deferred = kr.wait_for_previous_lookups(
87 ["server1"],
88 {"server1": lookup_1_deferred},
89 )
90
91 # there were no previous lookups, so the deferred should be ready
92 self.assertTrue(wait_1_deferred.called)
93 # ... so we should have preserved the LoggingContext.
94 self.assertIs(LoggingContext.current_context(), context_one)
95 wait_1_deferred.addBoth(self.check_context, "one")
96
97 with LoggingContext("two") as context_two:
98 context_two.test_key = "two"
99
100 # set off another wait. It should block because the first lookup
101 # hasn't yet completed.
102 wait_2_deferred = kr.wait_for_previous_lookups(
103 ["server1"],
104 {"server1": lookup_2_deferred},
105 )
106 self.assertFalse(wait_2_deferred.called)
107 # ... so we should have reset the LoggingContext.
108 self.assertIs(LoggingContext.current_context(), sentinel_context)
109 wait_2_deferred.addBoth(self.check_context, "two")
110
111 # let the first lookup complete (in the sentinel context)
112 lookup_1_deferred.callback(None)
113
114 # now the second wait should complete and restore our
115 # loggingcontext.
116 yield wait_2_deferred
117
118 @defer.inlineCallbacks
119 def test_verify_json_objects_for_server_awaits_previous_requests(self):
120 key1 = signedjson.key.generate_signing_key(1)
121
122 kr = keyring.Keyring(self.hs)
123 json1 = {}
124 signedjson.sign.sign_json(json1, "server10", key1)
125
126 persp_resp = {
127 "server_keys": [
128 self.mock_perspective_server.get_signed_key(
129 "server10",
130 signedjson.key.get_verify_key(key1)
131 ),
132 ]
133 }
134 persp_deferred = defer.Deferred()
135
136 @defer.inlineCallbacks
137 def get_perspectives(**kwargs):
138 self.assertEquals(
139 LoggingContext.current_context().test_key, "11",
140 )
141 with logcontext.PreserveLoggingContext():
142 yield persp_deferred
143 defer.returnValue(persp_resp)
144 self.http_client.post_json.side_effect = get_perspectives
145
146 with LoggingContext("11") as context_11:
147 context_11.test_key = "11"
148
149 # start off a first set of lookups
150 res_deferreds = kr.verify_json_objects_for_server(
151 [("server10", json1),
152 ("server11", {})
153 ]
154 )
155
156 # the unsigned json should be rejected pretty quickly
157 self.assertTrue(res_deferreds[1].called)
158 try:
159 yield res_deferreds[1]
160 self.assertFalse("unsigned json didn't cause a failure")
161 except SynapseError:
162 pass
163
164 self.assertFalse(res_deferreds[0].called)
165 res_deferreds[0].addBoth(self.check_context, None)
166
167 # wait a tick for it to send the request to the perspectives server
168 # (it first tries the datastore)
169 yield async.sleep(0.005)
170 self.http_client.post_json.assert_called_once()
171
172 self.assertIs(LoggingContext.current_context(), context_11)
173
174 context_12 = LoggingContext("12")
175 context_12.test_key = "12"
176 with logcontext.PreserveLoggingContext(context_12):
177 # a second request for a server with outstanding requests
178 # should block rather than start a second call
179 self.http_client.post_json.reset_mock()
180 self.http_client.post_json.return_value = defer.Deferred()
181
182 res_deferreds_2 = kr.verify_json_objects_for_server(
183 [("server10", json1)],
184 )
185 yield async.sleep(0.005)
186 self.http_client.post_json.assert_not_called()
187 res_deferreds_2[0].addBoth(self.check_context, None)
188
189 # complete the first request
190 with logcontext.PreserveLoggingContext():
191 persp_deferred.callback(persp_resp)
192 self.assertIs(LoggingContext.current_context(), context_11)
193
194 with logcontext.PreserveLoggingContext():
195 yield res_deferreds[0]
196 yield res_deferreds_2[0]
197
198 @defer.inlineCallbacks
199 def test_verify_json_for_server(self):
200 kr = keyring.Keyring(self.hs)
201
202 key1 = signedjson.key.generate_signing_key(1)
203 yield self.hs.datastore.store_server_verify_key(
204 "server9", "", time.time() * 1000,
205 signedjson.key.get_verify_key(key1),
206 )
207 json1 = {}
208 signedjson.sign.sign_json(json1, "server9", key1)
209
210 sentinel_context = LoggingContext.current_context()
211
212 with LoggingContext("one") as context_one:
213 context_one.test_key = "one"
214
215 defer = kr.verify_json_for_server("server9", {})
216 try:
217 yield defer
218 self.fail("should fail on unsigned json")
219 except SynapseError:
220 pass
221 self.assertIs(LoggingContext.current_context(), context_one)
222
223 defer = kr.verify_json_for_server("server9", json1)
224 self.assertFalse(defer.called)
225 self.assertIs(LoggingContext.current_context(), sentinel_context)
226 yield defer
227
228 self.assertIs(LoggingContext.current_context(), context_one)
1818 import synapse.handlers.device
1919
2020 import synapse.storage
21 from synapse import types
2221 from tests import unittest, utils
2322
2423 user1 = "@boris:aaa"
178177
179178 if ip is not None:
180179 yield self.store.insert_client_ip(
181 types.UserID.from_string(user_id),
180 user_id,
182181 access_token, ip, "user_agent", device_id)
183182 self.clock.advance_time(1000)
1414
1515 from twisted.internet import defer
1616
17 import synapse.server
18 import synapse.storage
19 import synapse.types
2017 import tests.unittest
2118 import tests.utils
2219
3835 self.clock.now = 12345678
3936 user_id = "@user:id"
4037 yield self.store.insert_client_ip(
41 synapse.types.UserID.from_string(user_id),
38 user_id,
4239 "access_token", "ip", "user_agent", "device_id",
4340 )
4441
2323 from tests.utils import MockClock
2424
2525
26 @unittest.DEBUG
2627 class DnsTestCase(unittest.TestCase):
2728
2829 @defer.inlineCallbacks
2930 def test_resolve(self):
3031 dns_client_mock = Mock()
3132
32 service_name = "test_service.examle.com"
33 service_name = "test_service.example.com"
3334 host_name = "example.com"
3435 ip_address = "127.0.0.1"
36 ip6_address = "::1"
3537
3638 answer_srv = dns.RRHeader(
3739 type=dns.SRV,
4749 )
4850 )
4951
50 dns_client_mock.lookupService.return_value = ([answer_srv], None, None)
51 dns_client_mock.lookupAddress.return_value = ([answer_a], None, None)
52 answer_aaaa = dns.RRHeader(
53 type=dns.AAAA,
54 payload=dns.Record_AAAA(
55 address=ip6_address,
56 )
57 )
58
59 dns_client_mock.lookupService.return_value = defer.succeed(
60 ([answer_srv], None, None),
61 )
62 dns_client_mock.lookupAddress.return_value = defer.succeed(
63 ([answer_a], None, None),
64 )
65 dns_client_mock.lookupIPV6Address.return_value = defer.succeed(
66 ([answer_aaaa], None, None),
67 )
5268
5369 cache = {}
5470
5874
5975 dns_client_mock.lookupService.assert_called_once_with(service_name)
6076 dns_client_mock.lookupAddress.assert_called_once_with(host_name)
77 dns_client_mock.lookupIPV6Address.assert_called_once_with(host_name)
6178
62 self.assertEquals(len(servers), 1)
79 self.assertEquals(len(servers), 2)
6380 self.assertEquals(servers, cache[service_name])
6481 self.assertEquals(servers[0].host, ip_address)
82 self.assertEquals(servers[1].host, ip6_address)
6583
6684 @defer.inlineCallbacks
6785 def test_from_cache_expired_and_dns_fail(self):
5555 config.worker_replication_url = ""
5656 config.worker_app = None
5757 config.email_enable_notifs = False
58 config.block_non_admin_invites = False
5859
5960 config.use_frozen_dicts = True
6061 config.database_config = {"name": "sqlite3"}
1313
1414 setenv =
1515 PYTHONDONTWRITEBYTECODE = no_byte_code
16 # As of twisted 16.4, trial tries to import the tests as a package, which
17 # means it needs to be on the pythonpath.
18 PYTHONPATH = {toxinidir}
16
1917 commands =
20 /bin/sh -c "find {toxinidir} -name '*.pyc' -delete ; coverage run {env:COVERAGE_OPTS:} --source={toxinidir}/synapse \
21 {envbindir}/trial {env:TRIAL_FLAGS:} {posargs:tests} {env:TOXSUFFIX:}"
18 /usr/bin/find "{toxinidir}" -name '*.pyc' -delete
19 coverage run {env:COVERAGE_OPTS:} --source="{toxinidir}/synapse" \
20 "{envbindir}/trial" {env:TRIAL_FLAGS:} {posargs:tests} {env:TOXSUFFIX:}
2221 {env:DUMP_COVERAGE_COMMAND:coverage report -m}
22
23 [testenv:py27]
24
25 # As of twisted 16.4, trial tries to import the tests as a package (previously
26 # it loaded the files explicitly), which means they need to be on the
27 # pythonpath. Our sdist doesn't include the 'tests' package, so normally it
28 # doesn't work within the tox virtualenv.
29 #
30 # As a workaround, we tell tox to do install with 'pip -e', which just
31 # creates a symlink to the project directory instead of unpacking the sdist.
32 #
33 # (An alternative to this would be to set PYTHONPATH to include the project
34 # directory. Note two problems with this:
35 #
36 # - if you set it via `setenv`, then it is also set during the 'install'
37 # phase, which inhibits unpacking the sdist, so the virtualenv isn't
38 # useful for anything else without setting PYTHONPATH similarly.
39 #
40 # - `synapse` is also loaded from PYTHONPATH so even if you only set
41 # PYTHONPATH for the test phase, we're still running the tests against
42 # the working copy rather than the contents of the sdist. So frankly
43 # you might as well use -e in the first place.
44 #
45 # )
46 usedevelop=true
2347
2448 [testenv:packaging]
2549 deps =