Imported Upstream version 0.23.0
Erik Johnston
6 years ago
<!--

**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
You will likely get better support more quickly if you ask in ** #matrix:matrix.org ** ;)


This is a bug report template. By following the instructions below and
filling out the sections with your information, you will help us to get all
the necessary data to fix your issue.

You can also preview your report before submitting it. You may remove sections
that aren't relevant to your particular case.

Text between <!-- and --> marks will be invisible in the report.

-->

### Description

Describe here the problem that you are experiencing, or the feature you are requesting.

### Steps to reproduce

- For bugs, list the steps
- that reproduce the bug
- using hyphens as bullet points

Describe how what happens differs from what you expected.

If you can identify any relevant log snippets from _homeserver.log_, please include
those here (please be careful to remove any personal or private data):

### Version information

<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->

- **Homeserver**: Was this issue identified on matrix.org or another homeserver?

If not matrix.org:
- **Version**: What version of Synapse is running? <!--
You can find the Synapse version by inspecting the server headers (replace matrix.org with
your own homeserver domain):
$ curl -v https://matrix.org/_matrix/client/versions 2>&1 | grep "Server:"
-->
- **Install method**: package manager/git clone/pip
- **Platform**: Tell us about the environment in which your homeserver is operating
  - distro, hardware, if it's running in a vm/container, etc.
Changes in synapse v0.23.0 (2017-10-02)
=======================================

No changes since v0.23.0-rc2


Changes in synapse v0.23.0-rc2 (2017-09-26)
===========================================

Bug fixes:

* Fix regression in performance of syncs (PR #2470)


Changes in synapse v0.23.0-rc1 (2017-09-25)
===========================================

Features:

* Add a frontend proxy worker (PR #2344)
* Add support for event_id_only push format (PR #2450)
* Add a PoC for filtering spammy events (PR #2456)
* Add a config option to block all room invites (PR #2457)


Changes:

* Use bcrypt module instead of py-bcrypt (PR #2288) Thanks to @kyrias!
* Improve performance of generating push notifications (PR #2343, #2357, #2365,
  #2366, #2371)
* Improve DB performance for device list handling in sync (PR #2362)
* Include a sample prometheus config (PR #2416)
* Document known to work postgres version (PR #2433) Thanks to @ptman!


Bug fixes:

* Fix caching error in the push evaluator (PR #2332)
* Fix bug where pusherpool didn't start and broke some rooms (PR #2342)
* Fix port script for user directory tables (PR #2375)
* Fix device lists notifications when user rejoins a room (PR #2443, #2449)
* Fix sync to always send down current state events in timeline (PR #2451)
* Fix bug where guest users were incorrectly kicked (PR #2453)
* Fix bug talking to IPv6 only servers using SRV records (PR #2462)


Changes in synapse v0.22.1 (2017-07-06)
=======================================

.. __: `key_management`_

The default configuration exposes two HTTP ports: 8008 and 8448. Port 8008 is
configured without TLS; it should be behind a reverse proxy for TLS/SSL
termination on port 443, which in turn should be used for clients. Port 8448
is configured to use TLS with a self-signed certificate. If you would like
to do an initial test with a client without having to set up a reverse proxy,
you can temporarily use another certificate. (Note that a self-signed
certificate is fine for `Federation`_). You can do so by changing
``tls_certificate_path``, ``tls_private_key_path`` and ``tls_dh_params_path``
in ``homeserver.yaml``; alternatively, you can use a reverse-proxy, but be sure
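If you want to confirm which certificate port 8448 is actually serving after
changing those paths, one quick check from the command line is a sketch like
the following (assuming ``openssl`` is installed and Synapse is running
locally):

```shell
# Print the subject and validity dates of the certificate presented on 8448.
openssl s_client -connect localhost:8448 < /dev/null 2> /dev/null \
    | openssl x509 -noout -subject -dates
```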
The easiest way to try out your new Synapse installation is by connecting to it
from a web client. The easiest option is probably the one at
http://riot.im/app. You will need to specify a "Custom server" when you log on
or register: set this to ``https://domain.tld`` if you set up a reverse proxy
following the recommended setup, or ``https://localhost:8448`` - remember to
specify the port (``:8448``, if not ``:443``) unless you changed the
configuration. (Leave the identity server as the default - see `Identity
servers`_.)

If using port 8448 you will run into errors until you accept the self-signed
certificate. You can easily do this by going to ``https://localhost:8448``
directly with your browser and accepting the presented certificate. You can
then go back to your web client and proceed further.

If all goes well you should at least be able to log in, create a room, and
start sending messages.
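Before reaching for a web client, you can also sanity-check that the client
API is reachable from the command line (adjust the host and port to match
your setup):

```shell
# -k tells curl to accept the self-signed certificate.
# A working server responds with a JSON list of supported spec versions.
curl -k https://localhost:8448/_matrix/client/versions
```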
domain name. For example, you might want to run your server at
``synapse.example.com``, but have your Matrix user-ids look like
``@user:example.com``. (A SRV record also allows you to change the port from
the default 8448. However, if you are thinking of using a reverse-proxy on the
federation port, which is not recommended, be sure to read
`Reverse-proxying the federation port`_ first.)

To use a SRV record, first create your SRV record and publish it in DNS. This
should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
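After publishing the record, you can verify that it resolves as expected with
``dig``, shown here for a hypothetical ``example.com``:

```shell
# The answer should echo the priority, weight, port and target
# you published, e.g. "10 0 8448 synapse.example.com.".
dig -t srv _matrix._tcp.example.com +short
```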
Using a reverse proxy with Synapse
==================================

It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
`HAProxy <http://www.haproxy.org/>`_ in front of Synapse. One advantage of
`Reverse-proxying the federation port`_.

The recommended setup is therefore to configure your reverse-proxy to forward
port 443 to port 8008 of synapse for client connections, but to also directly
expose port 8448 for server-server connections. All the Matrix endpoints begin
``/_matrix``, so an example nginx configuration might look like::

    server {
        listen 443 ssl;
what you currently have installed to the current version of synapse. The extra
instructions that may be required are listed later in this document.

1. If synapse was installed in a virtualenv then activate that virtualenv before
   upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then
   run:

   .. code:: bash

      source ~/.synapse/bin/activate

2. If synapse was installed using pip then upgrade to the latest version by
   running:

   .. code:: bash

      pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master

      # restart synapse
      synctl restart


   If synapse was installed using git then upgrade to the latest version by
   running:

   .. code:: bash

      # Pull the latest version of the master branch.
      git pull
      # Update the versions of synapse's python dependencies.
      python synapse/python_dependencies.py | xargs pip install --upgrade

      # restart synapse
      ./synctl restart


To check whether your update was successful, you can check the Server header
returned by the Client-Server API:

.. code:: bash

   # replace <host.name> with the hostname of your synapse homeserver.
   # You may need to specify a port (eg, :8448) if your server is not
   # configured on port 443.
   curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"

Upgrading to v0.15.0
====================
``homeserver.yaml``::

    app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]


Where ``registration-01.yaml`` looks like::

    url: <String>  # e.g. "https://my.application.service.com"

it before starting the new version of the homeserver.

The script "database-prepare-for-0.5.0.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, the rooms the
home server was a member of, and room alias mappings.

unfortunately, non-trivial and requires human intervention to resolve any
resulting conflicts during the upgrade process.

Before running the command the homeserver should first be completely
shut down. To run it, simply specify the location of the database, e.g.:

    ./scripts/database-prepare-for-0.5.0.sh "homeserver.db"

Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.

On startup of the new version, users can either rejoin remote rooms using room
aliases or by being reinvited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in, the local HS will
automatically rejoin the room.

Upgrading to v0.4.0

    --config-path homeserver.config \
    --generate-config

This config can be edited if desired, for example to specify a different SSL
certificate to use. Once done you can run the home server using::

    $ python synapse/app/homeserver.py --config-path homeserver.config

it before starting the new version of the homeserver.

The script "database-prepare-for-0.0.1.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, the rooms the
home server was a member of, and room alias mappings.

Before running the command the homeserver should first be completely
shut down. To run it, simply specify the location of the database, e.g.:

    ./scripts/database-prepare-for-0.0.1.sh "homeserver.db"

Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.

On startup of the new version, users can either rejoin remote rooms using room
aliases or by being reinvited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in, the local HS will
automatically rejoin the room.
This directory contains some sample monitoring config for using the
'Prometheus' monitoring server against synapse.

To use it, first install prometheus by following the instructions at

    http://prometheus.io/

Then add a new job to the main prometheus.conf file:

  job: {
    name: "synapse"

    target_group: {
      target: "http://SERVER.LOCATION.HERE:PORT/_synapse/metrics"
    }
  }

Metrics are disabled by default when running synapse; they must be enabled
with the 'enable-metrics' option, either in the synapse config file or as a
command-line option.
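Before pointing Prometheus at synapse, it can be worth checking that the
metrics endpoint responds at all (substitute your own host and port, as in
the config above):

```shell
# A working endpoint returns plain-text Prometheus metrics,
# one "name value" sample per line.
curl http://SERVER.LOCATION.HERE:PORT/_synapse/metrics | head -n 20
```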
{{ template "head" . }}

{{ template "prom_content_head" . }}
<h1>System Resources</h1>

<h3>CPU</h3>
<div id="process_resource_utime"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#process_resource_utime"),
  expr: "rate(process_cpu_seconds_total[2m]) * 100",
  name: "[[job]]",
  min: 0,
  max: 100,
  renderer: "line",
  height: 150,
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "%",
  yTitle: "CPU Usage"
})
</script>

<h3>Memory</h3>
<div id="process_resource_maxrss"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#process_resource_maxrss"),
  expr: "process_psutil_rss:max",
  name: "Maxrss",
  min: 0,
  renderer: "line",
  height: 150,
  yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yUnits: "bytes",
  yTitle: "Usage"
})
</script>

<h3>File descriptors</h3>
<div id="process_fds"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#process_fds"),
  expr: "process_open_fds{job='synapse'}",
  name: "FDs",
  min: 0,
  renderer: "line",
  height: 150,
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "",
  yTitle: "Descriptors"
})
</script>

<h1>Reactor</h1>

<h3>Total reactor time</h3>
<div id="reactor_total_time"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#reactor_total_time"),
  expr: "rate(python_twisted_reactor_tick_time:total[2m]) / 1000",
  name: "time",
  max: 1,
  min: 0,
  renderer: "area",
  height: 150,
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "s/s",
  yTitle: "Usage"
})
</script>

<h3>Average reactor tick time</h3>
<div id="reactor_average_time"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#reactor_average_time"),
  expr: "rate(python_twisted_reactor_tick_time:total[2m]) / rate(python_twisted_reactor_tick_time:count[2m]) / 1000",
  name: "time",
  min: 0,
  renderer: "line",
  height: 150,
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "s",
  yTitle: "Time"
})
</script>

<h3>Pending calls per tick</h3>
<div id="reactor_pending_calls"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#reactor_pending_calls"),
  expr: "rate(python_twisted_reactor_pending_calls:total[30s])/rate(python_twisted_reactor_pending_calls:count[30s])",
  name: "calls",
  min: 0,
  renderer: "line",
  height: 150,
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yTitle: "Pending Calls"
})
</script>

<h1>Storage</h1>

<h3>Queries</h3>
<div id="synapse_storage_query_time"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_storage_query_time"),
  expr: "rate(synapse_storage_query_time:count[2m])",
  name: "[[verb]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yUnits: "queries/s",
  yTitle: "Queries"
})
</script>

<h3>Transactions</h3>
<div id="synapse_storage_transaction_time"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_storage_transaction_time"),
  expr: "rate(synapse_storage_transaction_time:count[2m])",
  name: "[[desc]]",
  min: 0,
  yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yUnits: "txn/s",
  yTitle: "Transactions"
})
</script>

<h3>Transaction execution time</h3>
<div id="synapse_storage_transactions_time_msec"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_storage_transactions_time_msec"),
  expr: "rate(synapse_storage_transaction_time:total[2m]) / 1000",
  name: "[[desc]]",
  min: 0,
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "s/s",
  yTitle: "Usage"
})
</script>

<h3>Database scheduling latency</h3>
<div id="synapse_storage_schedule_time"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_storage_schedule_time"),
  expr: "rate(synapse_storage_schedule_time:total[2m]) / 1000",
  name: "Total latency",
  min: 0,
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "s/s",
  yTitle: "Usage"
})
</script>

<h3>Cache hit ratio</h3>
<div id="synapse_cache_ratio"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_cache_ratio"),
  expr: "rate(synapse_util_caches_cache:total[2m]) * 100",
  name: "[[name]]",
  min: 0,
  max: 100,
  yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yUnits: "%",
  yTitle: "Percentage"
})
</script>

<h3>Cache size</h3>
<div id="synapse_cache_size"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_cache_size"),
  expr: "synapse_util_caches_cache:size",
  name: "[[name]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yUnits: "",
  yTitle: "Items"
})
</script>

<h1>Requests</h1>

<h3>Requests by Servlet</h3>
<div id="synapse_http_server_requests_servlet"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_requests_servlet"),
  expr: "rate(synapse_http_server_requests:servlet[2m])",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "req/s",
  yTitle: "Requests"
})
</script>
<h4> (without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
<div id="synapse_http_server_requests_servlet_minus_events"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_requests_servlet_minus_events"),
  expr: "rate(synapse_http_server_requests:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "req/s",
  yTitle: "Requests"
})
</script>

<h3>Average response times</h3>
<div id="synapse_http_server_response_time_avg"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_response_time_avg"),
  expr: "rate(synapse_http_server_response_time:total[2m]) / rate(synapse_http_server_response_time:count[2m]) / 1000",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "s/req",
  yTitle: "Response time"
})
</script>

<h3>All responses by code</h3>
<div id="synapse_http_server_responses"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_responses"),
  expr: "rate(synapse_http_server_responses[2m])",
  name: "[[method]] / [[code]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "req/s",
  yTitle: "Requests"
})
</script>

<h3>Error responses by code</h3>
<div id="synapse_http_server_responses_err"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_responses_err"),
  expr: "rate(synapse_http_server_responses{code=~\"[45]..\"}[2m])",
  name: "[[method]] / [[code]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "req/s",
  yTitle: "Requests"
})
</script>


<h3>CPU Usage</h3>
<div id="synapse_http_server_response_ru_utime"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_response_ru_utime"),
  expr: "rate(synapse_http_server_response_ru_utime:total[2m])",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "s/s",
  yTitle: "CPU Usage"
})
</script>


<h3>DB Usage</h3>
<div id="synapse_http_server_response_db_txn_duration"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
  expr: "rate(synapse_http_server_response_db_txn_duration:total[2m])",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "s/s",
  yTitle: "DB Usage"
})
</script>


<h3>Average event send times</h3>
<div id="synapse_http_server_send_time_avg"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_send_time_avg"),
  expr: "rate(synapse_http_server_response_time:total{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_time:count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "s/req",
  yTitle: "Response time"
})
</script>

<h1>Federation</h1>

<h3>Sent Messages</h3>
<div id="synapse_federation_client_sent"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_federation_client_sent"),
  expr: "rate(synapse_federation_client_sent[2m])",
  name: "[[type]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "req/s",
  yTitle: "Requests"
})
</script>

<h3>Received Messages</h3>
<div id="synapse_federation_server_received"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_federation_server_received"),
  expr: "rate(synapse_federation_server_received[2m])",
  name: "[[type]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "req/s",
  yTitle: "Requests"
})
</script>

<h3>Pending</h3>
<div id="synapse_federation_transaction_queue_pending"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_federation_transaction_queue_pending"),
  expr: "synapse_federation_transaction_queue_pending",
  name: "[[type]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yUnits: "",
  yTitle: "Units"
})
</script>

<h1>Clients</h1>

<h3>Notifiers</h3>
<div id="synapse_notifier_listeners"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_notifier_listeners"),
  expr: "synapse_notifier_listeners",
  name: "listeners",
  min: 0,
  yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
  yUnits: "",
  yTitle: "Listeners"
})
</script>

<h3>Notified Events</h3>
<div id="synapse_notifier_notified_events"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_notifier_notified_events"),
  expr: "rate(synapse_notifier_notified_events[2m])",
  name: "events",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yUnits: "events/s",
  yTitle: "Event rate"
})
</script>

{{ template "prom_content_tail" . }}

{{ template "tail" }}
0 | synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0) | |
1 | synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0) | |
2 | ||
3 | synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method) | |
4 | synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet) | |
5 | ||
6 | synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet) | |
7 | ||
8 | synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m]) | |
9 | synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s]) | |
10 | ||
11 | synapse_federation_client_sent{type="EDU"} = synapse_federation_client_sent_edus + 0 | |
12 | synapse_federation_client_sent{type="PDU"} = synapse_federation_client_sent_pdu_destinations:count + 0 | |
13 | synapse_federation_client_sent{type="Query"} = sum(synapse_federation_client_sent_queries) by (job) | |
14 | ||
15 | synapse_federation_server_received{type="EDU"} = synapse_federation_server_received_edus + 0 | |
16 | synapse_federation_server_received{type="PDU"} = synapse_federation_server_received_pdus + 0 | |
17 | synapse_federation_server_received{type="Query"} = sum(synapse_federation_server_received_queries) by (job) | |
18 | ||
19 | synapse_federation_transaction_queue_pending{type="EDU"} = synapse_federation_transaction_queue_pending_edus + 0 | |
20 | synapse_federation_transaction_queue_pending{type="PDU"} = synapse_federation_transaction_queue_pending_pdus + 0 |
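The two `synapse_cache:hit_ratio_*` rules above are a plain rate division. A minimal Python sketch of the same arithmetic, with sample rates assumed rather than scraped from a real server:

```python
def cache_hit_ratio(hits_rate, total_rate):
    # rate(hits[w]) / rate(total[w]), as in synapse_cache:hit_ratio_5m/_30s.
    if total_rate == 0:
        return None  # Prometheus simply yields no sample when dividing by zero
    return hits_rate / total_rate

# e.g. 75 cache hits/s out of 100 lookups/s over the chosen window
print(cache_hit_ratio(75.0, 100.0))  # 0.75
```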
8 | 8 | Type=simple |
9 | 9 | User=synapse |
10 | 10 | Group=synapse |
11 | EnvironmentFile=-/etc/sysconfig/synapse | |
12 | 11 | WorkingDirectory=/var/lib/synapse |
13 | ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml --log-config=/etc/synapse/log_config.yaml | |
12 | ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml | |
13 | ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml | |
14 | 14 | |
15 | 15 | [Install] |
16 | 16 | WantedBy=multi-user.target |
17 |
0 | 0 | Using Postgres |
1 | 1 | -------------- |
2 | ||
3 | Postgres version 9.4 or later is known to work. | |
2 | 4 | |
3 | 5 | Set up database |
4 | 6 | =============== |
16 | 16 | ./sytest/jenkins/prep_sytest_for_postgres.sh |
17 | 17 | |
18 | 18 | ./sytest/jenkins/install_and_run.sh \ |
19 | --python $WORKSPACE/.tox/py27/bin/python \ | |
19 | 20 | --synapse-directory $WORKSPACE \ |
20 | 21 | --dendron $WORKSPACE/dendron/bin/dendron \ |
21 | 22 | --haproxy \ |
14 | 14 | ./sytest/jenkins/prep_sytest_for_postgres.sh |
15 | 15 | |
16 | 16 | ./sytest/jenkins/install_and_run.sh \ |
17 | --python $WORKSPACE/.tox/py27/bin/python \ | |
17 | 18 | --synapse-directory $WORKSPACE \ |
18 | 19 | --dendron $WORKSPACE/dendron/bin/dendron \ |
13 | 13 | ./sytest/jenkins/prep_sytest_for_postgres.sh |
14 | 14 | |
15 | 15 | ./sytest/jenkins/install_and_run.sh \ |
16 | --python $WORKSPACE/.tox/py27/bin/python \ | |
16 | 17 | --synapse-directory $WORKSPACE \ |
11 | 11 | ./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git |
12 | 12 | |
13 | 13 | ./sytest/jenkins/install_and_run.sh \ |
14 | --python $WORKSPACE/.tox/py27/bin/python \ | |
14 | 15 | --synapse-directory $WORKSPACE \ |
251 | 251 | ) |
252 | 252 | return |
253 | 253 | |
254 | if table in ( | |
255 | "user_directory", "user_directory_search", "users_who_share_rooms", | |
256 | "users_in_pubic_room", | |
257 | ): | |
258 | # We don't port these tables, as they're a faff and we can regenerate | |
259 | # them anyway. | |
260 | self.progress.update(table, table_size) # Mark table as done | |
261 | return | |
262 | ||
263 | if table == "user_directory_stream_pos": | |
264 | # We need to make sure there is a single row, `(X, null)`, as that is | |
265 | # what synapse expects to be there. | |
266 | yield self.postgres_store._simple_insert( | |
267 | table=table, | |
268 | values={"stream_id": None}, | |
269 | ) | |
270 | self.progress.update(table, table_size) # Mark table as done | |
271 | return | |
272 | ||
254 | 273 | forward_select = ( |
255 | 274 | "SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?" |
256 | 275 | % (table,) |
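The special case above seeds `user_directory_stream_pos` with a single row whose `stream_id` is `NULL`. A hedged sketch of that invariant using sqlite3; the schema here is simplified and assumed (the real table also carries a lock column), not copied from synapse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed, simplified schema for illustration only.
conn.execute("CREATE TABLE user_directory_stream_pos (stream_id BIGINT)")
# Mirror the port script: exactly one row, with a NULL stream_id,
# which is what synapse expects to find on startup.
conn.execute("INSERT INTO user_directory_stream_pos (stream_id) VALUES (NULL)")
rows = conn.execute("SELECT stream_id FROM user_directory_stream_pos").fetchall()
print(rows)  # [(None,)]
```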
0 | #!/usr/bin/env python | |
1 | # | |
2 | # Copyright 2015, 2016 OpenMarket Ltd | |
3 | # Copyright 2017 New Vector Ltd | |
4 | # | |
5 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
6 | # you may not use this file except in compliance with the License. | |
7 | # You may obtain a copy of the License at | |
8 | # | |
9 | # http://www.apache.org/licenses/LICENSE-2.0 | |
10 | # | |
11 | # Unless required by applicable law or agreed to in writing, software | |
12 | # distributed under the License is distributed on an "AS IS" BASIS, | |
13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
14 | # See the License for the specific language governing permissions and | |
15 | # limitations under the License. | |
16 | ||
17 | from __future__ import print_function | |
18 | ||
19 | import argparse | |
0 | 20 | import nacl.signing |
1 | 21 | import json |
2 | 22 | import base64 |
3 | 23 | import requests |
4 | 24 | import sys |
5 | 25 | import srvlookup |
6 | ||
26 | import yaml | |
7 | 27 | |
8 | 28 | def encode_base64(input_bytes): |
9 | 29 | """Encode bytes as a base64 string without any padding.""" |
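The docstring above describes base64 without padding, the form Matrix uses for keys and signatures. A self-contained sketch of that encoding:

```python
import base64

def encode_base64(input_bytes):
    """Encode bytes as a base64 string without any '=' padding."""
    return base64.b64encode(input_bytes).decode("ascii").rstrip("=")

print(encode_base64(b"\x00\x01"))  # "AAE" rather than the padded "AAE="
```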
119 | 139 | origin_name, key, sig, |
120 | 140 | ) |
121 | 141 | authorization_headers.append(bytes(header)) |
122 | sys.stderr.write(header) | |
123 | sys.stderr.write("\n") | |
142 | print ("Authorization: %s" % header, file=sys.stderr) | |
143 | ||
144 | dest = lookup(destination, path) | |
145 | print ("Requesting %s" % dest, file=sys.stderr) | |
124 | 146 | |
125 | 147 | result = requests.get( |
126 | lookup(destination, path), | |
148 | dest, | |
127 | 149 | headers={"Authorization": authorization_headers[0]}, |
128 | 150 | verify=False, |
129 | 151 | ) |
132 | 154 | |
133 | 155 | |
134 | 156 | def main(): |
135 | origin_name, keyfile, destination, path = sys.argv[1:] | |
136 | ||
137 | with open(keyfile) as f: | |
157 | parser = argparse.ArgumentParser( | |
158 | description= | |
159 | "Signs and sends a federation request to a matrix homeserver", | |
160 | ) | |
161 | ||
162 | parser.add_argument( | |
163 | "-N", "--server-name", | |
164 | help="Name to give as the local homeserver. If unspecified, will be " | |
165 | "read from the config file.", | |
166 | ) | |
167 | ||
168 | parser.add_argument( | |
169 | "-k", "--signing-key-path", | |
170 | help="Path to the file containing the private ed25519 key to sign the " | |
171 | "request with.", | |
172 | ) | |
173 | ||
174 | parser.add_argument( | |
175 | "-c", "--config", | |
176 | default="homeserver.yaml", | |
177 | help="Path to server config file. Ignored if --server-name and " | |
178 | "--signing-key-path are both given.", | |
179 | ) | |
180 | ||
181 | parser.add_argument( | |
182 | "-d", "--destination", | |
183 | default="matrix.org", | |
184 | help="name of the remote homeserver. We will do SRV lookups and " | |
185 | "connect appropriately.", | |
186 | ) | |
187 | ||
188 | parser.add_argument( | |
189 | "path", | |
190 | help="request path. We will add '/_matrix/federation/v1/' to this." | |
191 | ) | |
192 | ||
193 | args = parser.parse_args() | |
194 | ||
195 | if not args.server_name or not args.signing_key_path: | |
196 | read_args_from_config(args) | |
197 | ||
198 | with open(args.signing_key_path) as f: | |
138 | 199 | key = read_signing_keys(f)[0] |
139 | 200 | |
140 | 201 | result = get_json( |
141 | origin_name, key, destination, "/_matrix/federation/v1/" + path | |
202 | args.server_name, key, args.destination, "/_matrix/federation/v1/" + args.path | |
142 | 203 | ) |
143 | 204 | |
144 | 205 | json.dump(result, sys.stdout) |
145 | print "" | |
206 | print ("") | |
207 | ||
208 | ||
209 | def read_args_from_config(args): | |
210 | with open(args.config, 'r') as fh: | |
211 | config = yaml.safe_load(fh) | |
212 | if not args.server_name: | |
213 | args.server_name = config['server_name'] | |
214 | if not args.signing_key_path: | |
215 | args.signing_key_path = config['signing_key_path'] | |
216 | ||
146 | 217 | |
147 | 218 | if __name__ == "__main__": |
148 | 219 | main() |
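The new `read_args_from_config` fallback can be exercised without a YAML file; this sketch substitutes a plain dict for the parsed config and `SimpleNamespace` for argparse's namespace (both stand-ins are assumptions for illustration):

```python
from types import SimpleNamespace

def read_args_from_config(args, config):
    # Same fallback rule as the script: only fill options the user left unset.
    if not args.server_name:
        args.server_name = config["server_name"]
    if not args.signing_key_path:
        args.signing_key_path = config["signing_key_path"]

args = SimpleNamespace(server_name=None, signing_key_path="cli-key.pem")
read_args_from_config(args, {"server_name": "example.org",
                             "signing_key_path": "config-key.pem"})
print(args.server_name, args.signing_key_path)  # example.org cli-key.pem
```

The explicitly supplied `--signing-key-path` value survives; only the missing server name is taken from the config.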
15 | 15 | """ This is a reference implementation of a Matrix home server. |
16 | 16 | """ |
17 | 17 | |
18 | __version__ = "0.22.1" | |
18 | __version__ = "0.23.0" |
208 | 208 | )[0] |
209 | 209 | if user and access_token and ip_addr: |
210 | 210 | self.store.insert_client_ip( |
211 | user=user, | |
211 | user_id=user.to_string(), | |
212 | 212 | access_token=access_token, |
213 | 213 | ip=ip_addr, |
214 | 214 | user_agent=user_agent, |
518 | 518 | ) |
519 | 519 | |
520 | 520 | def is_server_admin(self, user): |
521 | """ Check if the given user is a local server admin. | |
522 | ||
523 | Args: | |
524 | user (str): mxid of user to check | |
525 | ||
526 | Returns: | |
527 | bool: True if the user is an admin | |
528 | """ | |
521 | 529 | return self.store.is_server_admin(user) |
522 | 530 | |
523 | 531 | @defer.inlineCallbacks |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2017 New Vector Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | import gc | |
15 | import logging | |
16 | ||
17 | import affinity | |
18 | from daemonize import Daemonize | |
19 | from synapse.util import PreserveLoggingContext | |
20 | from synapse.util.rlimit import change_resource_limit | |
21 | from twisted.internet import reactor | |
22 | ||
23 | ||
24 | def start_worker_reactor(appname, config): | |
25 | """ Run the reactor in the main process | |
26 | ||
27 | Daemonizes if necessary, and then configures some resources, before starting | |
28 | the reactor. Pulls configuration from the 'worker' settings in 'config'. | |
29 | ||
30 | Args: | |
31 | appname (str): application name which will be sent to syslog | |
32 | config (synapse.config.Config): config object | |
33 | """ | |
34 | ||
35 | logger = logging.getLogger(config.worker_app) | |
36 | ||
37 | start_reactor( | |
38 | appname, | |
39 | config.soft_file_limit, | |
40 | config.gc_thresholds, | |
41 | config.worker_pid_file, | |
42 | config.worker_daemonize, | |
43 | config.worker_cpu_affinity, | |
44 | logger, | |
45 | ) | |
46 | ||
47 | ||
48 | def start_reactor( | |
49 | appname, | |
50 | soft_file_limit, | |
51 | gc_thresholds, | |
52 | pid_file, | |
53 | daemonize, | |
54 | cpu_affinity, | |
55 | logger, | |
56 | ): | |
57 | """ Run the reactor in the main process | |
58 | ||
59 | Daemonizes if necessary, and then configures some resources, before starting | |
60 | the reactor | |
61 | ||
62 | Args: | |
63 | appname (str): application name which will be sent to syslog | |
64 | soft_file_limit (int): soft limit to set on the number of open files | |
65 | gc_thresholds (tuple|None): generational GC thresholds to apply, if any | |
66 | pid_file (str): name of pid file to write to if daemonize is True | |
67 | daemonize (bool): true to run the reactor in a background process | |
68 | cpu_affinity (int|None): cpu affinity mask | |
69 | logger (logging.Logger): logger instance to pass to Daemonize | |
70 | """ | |
71 | ||
72 | def run(): | |
73 | # make sure that we run the reactor with the sentinel log context, | |
74 | # otherwise other PreserveLoggingContext instances will get confused | |
75 | # and complain when they see the logcontext arbitrarily swapping | |
76 | # between the sentinel and `run` logcontexts. | |
77 | with PreserveLoggingContext(): | |
78 | logger.info("Running") | |
79 | if cpu_affinity is not None: | |
80 | logger.info("Setting CPU affinity to %s" % cpu_affinity) | |
81 | affinity.set_process_affinity_mask(0, cpu_affinity) | |
82 | change_resource_limit(soft_file_limit) | |
83 | if gc_thresholds: | |
84 | gc.set_threshold(*gc_thresholds) | |
85 | reactor.run() | |
86 | ||
87 | if daemonize: | |
88 | daemon = Daemonize( | |
89 | app=appname, | |
90 | pid=pid_file, | |
91 | action=run, | |
92 | auto_close_fds=False, | |
93 | verbose=True, | |
94 | logger=logger, | |
95 | ) | |
96 | daemon.start() | |
97 | else: | |
98 | run() |
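The tail of `start_reactor` reduces to a daemonize-or-run-inline branch, which the per-worker modules below now reach through `_base.start_worker_reactor` instead of duplicating. A toy sketch of that control flow (the names here are illustrative, not synapse API):

```python
def start(run, daemonize, spawn_daemon):
    # Shape of start_reactor's final branch: hand `run` to the daemonizer
    # when backgrounding was requested, otherwise call it in the foreground.
    if daemonize:
        spawn_daemon(run)
    else:
        run()

calls = []
start(lambda: calls.append("foreground"), False, lambda r: calls.append("daemon"))
print(calls)  # ['foreground']
```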
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | import logging | |
16 | import sys | |
15 | 17 | |
16 | 18 | import synapse |
17 | ||
18 | from synapse.server import HomeServer | |
19 | from synapse import events | |
20 | from synapse.app import _base | |
19 | 21 | from synapse.config._base import ConfigError |
22 | from synapse.config.homeserver import HomeServerConfig | |
20 | 23 | from synapse.config.logger import setup_logging |
21 | from synapse.config.homeserver import HomeServerConfig | |
22 | 24 | from synapse.http.site import SynapseSite |
23 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX | |
25 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
26 | from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore | |
24 | 27 | from synapse.replication.slave.storage.directory import DirectoryStore |
25 | 28 | from synapse.replication.slave.storage.events import SlavedEventStore |
26 | from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore | |
27 | 29 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore |
28 | 30 | from synapse.replication.tcp.client import ReplicationClientHandler |
31 | from synapse.server import HomeServer | |
29 | 32 | from synapse.storage.engines import create_engine |
30 | 33 | from synapse.util.httpresourcetree import create_resource_tree |
31 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn | |
34 | from synapse.util.logcontext import LoggingContext, preserve_fn | |
32 | 35 | from synapse.util.manhole import manhole |
33 | from synapse.util.rlimit import change_resource_limit | |
34 | 36 | from synapse.util.versionstring import get_version_string |
35 | ||
36 | from synapse import events | |
37 | ||
38 | 37 | from twisted.internet import reactor |
39 | 38 | from twisted.web.resource import Resource |
40 | ||
41 | from daemonize import Daemonize | |
42 | ||
43 | import sys | |
44 | import logging | |
45 | import gc | |
46 | 39 | |
47 | 40 | logger = logging.getLogger("synapse.app.appservice") |
48 | 41 | |
180 | 173 | ps.setup() |
181 | 174 | ps.start_listening(config.worker_listeners) |
182 | 175 | |
183 | def run(): | |
184 | # make sure that we run the reactor with the sentinel log context, | |
185 | # otherwise other PreserveLoggingContext instances will get confused | |
186 | # and complain when they see the logcontext arbitrarily swapping | |
187 | # between the sentinel and `run` logcontexts. | |
188 | with PreserveLoggingContext(): | |
189 | logger.info("Running") | |
190 | change_resource_limit(config.soft_file_limit) | |
191 | if config.gc_thresholds: | |
192 | gc.set_threshold(*config.gc_thresholds) | |
193 | reactor.run() | |
194 | ||
195 | 176 | def start(): |
196 | 177 | ps.get_datastore().start_profiling() |
197 | 178 | ps.get_state_handler().start_caching() |
198 | 179 | |
199 | 180 | reactor.callWhenRunning(start) |
200 | 181 | |
201 | if config.worker_daemonize: | |
202 | daemon = Daemonize( | |
203 | app="synapse-appservice", | |
204 | pid=config.worker_pid_file, | |
205 | action=run, | |
206 | auto_close_fds=False, | |
207 | verbose=True, | |
208 | logger=logger, | |
209 | ) | |
210 | daemon.start() | |
211 | else: | |
212 | run() | |
182 | _base.start_worker_reactor("synapse-appservice", config) | |
213 | 183 | |
214 | 184 | |
215 | 185 | if __name__ == '__main__': |
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | import logging | |
16 | import sys | |
15 | 17 | |
16 | 18 | import synapse |
17 | ||
19 | from synapse import events | |
20 | from synapse.app import _base | |
18 | 21 | from synapse.config._base import ConfigError |
19 | 22 | from synapse.config.homeserver import HomeServerConfig |
20 | 23 | from synapse.config.logger import setup_logging |
24 | from synapse.crypto import context_factory | |
25 | from synapse.http.server import JsonResource | |
21 | 26 | from synapse.http.site import SynapseSite |
22 | from synapse.http.server import JsonResource | |
23 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX | |
27 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
24 | 28 | from synapse.replication.slave.storage._base import BaseSlavedStore |
25 | 29 | from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore |
26 | 30 | from synapse.replication.slave.storage.client_ips import SlavedClientIpStore |
31 | from synapse.replication.slave.storage.directory import DirectoryStore | |
27 | 32 | from synapse.replication.slave.storage.events import SlavedEventStore |
28 | 33 | from synapse.replication.slave.storage.keys import SlavedKeyStore |
34 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore | |
29 | 35 | from synapse.replication.slave.storage.room import RoomStore |
30 | from synapse.replication.slave.storage.directory import DirectoryStore | |
31 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore | |
32 | 36 | from synapse.replication.slave.storage.transactions import TransactionStore |
33 | 37 | from synapse.replication.tcp.client import ReplicationClientHandler |
34 | 38 | from synapse.rest.client.v1.room import PublicRoomListRestServlet |
35 | 39 | from synapse.server import HomeServer |
36 | 40 | from synapse.storage.engines import create_engine |
37 | 41 | from synapse.util.httpresourcetree import create_resource_tree |
38 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
42 | from synapse.util.logcontext import LoggingContext | |
39 | 43 | from synapse.util.manhole import manhole |
40 | from synapse.util.rlimit import change_resource_limit | |
41 | 44 | from synapse.util.versionstring import get_version_string |
42 | from synapse.crypto import context_factory | |
43 | ||
44 | from synapse import events | |
45 | ||
46 | ||
47 | 45 | from twisted.internet import reactor |
48 | 46 | from twisted.web.resource import Resource |
49 | ||
50 | from daemonize import Daemonize | |
51 | ||
52 | import sys | |
53 | import logging | |
54 | import gc | |
55 | 47 | |
56 | 48 | logger = logging.getLogger("synapse.app.client_reader") |
57 | 49 | |
182 | 174 | ss.get_handlers() |
183 | 175 | ss.start_listening(config.worker_listeners) |
184 | 176 | |
185 | def run(): | |
186 | # make sure that we run the reactor with the sentinel log context, | |
187 | # otherwise other PreserveLoggingContext instances will get confused | |
188 | # and complain when they see the logcontext arbitrarily swapping | |
189 | # between the sentinel and `run` logcontexts. | |
190 | with PreserveLoggingContext(): | |
191 | logger.info("Running") | |
192 | change_resource_limit(config.soft_file_limit) | |
193 | if config.gc_thresholds: | |
194 | gc.set_threshold(*config.gc_thresholds) | |
195 | reactor.run() | |
196 | ||
197 | 177 | def start(): |
198 | 178 | ss.get_state_handler().start_caching() |
199 | 179 | ss.get_datastore().start_profiling() |
200 | 180 | |
201 | 181 | reactor.callWhenRunning(start) |
202 | 182 | |
203 | if config.worker_daemonize: | |
204 | daemon = Daemonize( | |
205 | app="synapse-client-reader", | |
206 | pid=config.worker_pid_file, | |
207 | action=run, | |
208 | auto_close_fds=False, | |
209 | verbose=True, | |
210 | logger=logger, | |
211 | ) | |
212 | daemon.start() | |
213 | else: | |
214 | run() | |
183 | _base.start_worker_reactor("synapse-client-reader", config) | |
215 | 184 | |
216 | 185 | |
217 | 186 | if __name__ == '__main__': |
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | import logging | |
16 | import sys | |
15 | 17 | |
16 | 18 | import synapse |
17 | ||
19 | from synapse import events | |
20 | from synapse.api.urls import FEDERATION_PREFIX | |
21 | from synapse.app import _base | |
18 | 22 | from synapse.config._base import ConfigError |
19 | 23 | from synapse.config.homeserver import HomeServerConfig |
20 | 24 | from synapse.config.logger import setup_logging |
25 | from synapse.crypto import context_factory | |
26 | from synapse.federation.transport.server import TransportLayerServer | |
21 | 27 | from synapse.http.site import SynapseSite |
22 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX | |
28 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
23 | 29 | from synapse.replication.slave.storage._base import BaseSlavedStore |
30 | from synapse.replication.slave.storage.directory import DirectoryStore | |
24 | 31 | from synapse.replication.slave.storage.events import SlavedEventStore |
25 | 32 | from synapse.replication.slave.storage.keys import SlavedKeyStore |
26 | 33 | from synapse.replication.slave.storage.room import RoomStore |
27 | 34 | from synapse.replication.slave.storage.transactions import TransactionStore |
28 | from synapse.replication.slave.storage.directory import DirectoryStore | |
29 | 35 | from synapse.replication.tcp.client import ReplicationClientHandler |
30 | 36 | from synapse.server import HomeServer |
31 | 37 | from synapse.storage.engines import create_engine |
32 | 38 | from synapse.util.httpresourcetree import create_resource_tree |
33 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
39 | from synapse.util.logcontext import LoggingContext | |
34 | 40 | from synapse.util.manhole import manhole |
35 | from synapse.util.rlimit import change_resource_limit | |
36 | 41 | from synapse.util.versionstring import get_version_string |
37 | from synapse.api.urls import FEDERATION_PREFIX | |
38 | from synapse.federation.transport.server import TransportLayerServer | |
39 | from synapse.crypto import context_factory | |
40 | ||
41 | from synapse import events | |
42 | ||
43 | ||
44 | 42 | from twisted.internet import reactor |
45 | 43 | from twisted.web.resource import Resource |
46 | ||
47 | from daemonize import Daemonize | |
48 | ||
49 | import sys | |
50 | import logging | |
51 | import gc | |
52 | 44 | |
53 | 45 | logger = logging.getLogger("synapse.app.federation_reader") |
54 | 46 | |
171 | 163 | ss.get_handlers() |
172 | 164 | ss.start_listening(config.worker_listeners) |
173 | 165 | |
174 | def run(): | |
175 | # make sure that we run the reactor with the sentinel log context, | |
176 | # otherwise other PreserveLoggingContext instances will get confused | |
177 | # and complain when they see the logcontext arbitrarily swapping | |
178 | # between the sentinel and `run` logcontexts. | |
179 | with PreserveLoggingContext(): | |
180 | logger.info("Running") | |
181 | change_resource_limit(config.soft_file_limit) | |
182 | if config.gc_thresholds: | |
183 | gc.set_threshold(*config.gc_thresholds) | |
184 | reactor.run() | |
185 | ||
186 | 166 | def start(): |
187 | 167 | ss.get_state_handler().start_caching() |
188 | 168 | ss.get_datastore().start_profiling() |
189 | 169 | |
190 | 170 | reactor.callWhenRunning(start) |
191 | 171 | |
192 | if config.worker_daemonize: | |
193 | daemon = Daemonize( | |
194 | app="synapse-federation-reader", | |
195 | pid=config.worker_pid_file, | |
196 | action=run, | |
197 | auto_close_fds=False, | |
198 | verbose=True, | |
199 | logger=logger, | |
200 | ) | |
201 | daemon.start() | |
202 | else: | |
203 | run() | |
172 | _base.start_worker_reactor("synapse-federation-reader", config) | |
204 | 173 | |
205 | 174 | |
206 | 175 | if __name__ == '__main__': |
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | import logging | |
16 | import sys | |
15 | 17 | |
16 | 18 | import synapse |
17 | ||
18 | from synapse.server import HomeServer | |
19 | from synapse import events | |
20 | from synapse.app import _base | |
19 | 21 | from synapse.config._base import ConfigError |
22 | from synapse.config.homeserver import HomeServerConfig | |
20 | 23 | from synapse.config.logger import setup_logging |
21 | from synapse.config.homeserver import HomeServerConfig | |
22 | 24 | from synapse.crypto import context_factory |
25 | from synapse.federation import send_queue | |
23 | 26 | from synapse.http.site import SynapseSite |
24 | from synapse.federation import send_queue | |
25 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX | |
27 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
26 | 28 | from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore |
29 | from synapse.replication.slave.storage.devices import SlavedDeviceStore | |
27 | 30 | from synapse.replication.slave.storage.events import SlavedEventStore |
31 | from synapse.replication.slave.storage.presence import SlavedPresenceStore | |
28 | 32 | from synapse.replication.slave.storage.receipts import SlavedReceiptsStore |
29 | 33 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore |
30 | from synapse.replication.slave.storage.presence import SlavedPresenceStore | |
31 | 34 | from synapse.replication.slave.storage.transactions import TransactionStore |
32 | from synapse.replication.slave.storage.devices import SlavedDeviceStore | |
33 | 35 | from synapse.replication.tcp.client import ReplicationClientHandler |
36 | from synapse.server import HomeServer | |
34 | 37 | from synapse.storage.engines import create_engine |
35 | 38 | from synapse.util.async import Linearizer |
36 | 39 | from synapse.util.httpresourcetree import create_resource_tree |
37 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn | |
40 | from synapse.util.logcontext import LoggingContext, preserve_fn | |
38 | 41 | from synapse.util.manhole import manhole |
39 | from synapse.util.rlimit import change_resource_limit | |
40 | 42 | from synapse.util.versionstring import get_version_string |
41 | ||
42 | from synapse import events | |
43 | ||
44 | from twisted.internet import reactor, defer | |
43 | from twisted.internet import defer, reactor | |
45 | 44 | from twisted.web.resource import Resource |
46 | ||
47 | from daemonize import Daemonize | |
48 | ||
49 | import sys | |
50 | import logging | |
51 | import gc | |
52 | 45 | |
53 | 46 | logger = logging.getLogger("synapse.app.federation_sender") |
54 | 47 | |
212 | 205 | ps.setup() |
213 | 206 | ps.start_listening(config.worker_listeners) |
214 | 207 | |
215 | def run(): | |
216 | # make sure that we run the reactor with the sentinel log context, | |
217 | # otherwise other PreserveLoggingContext instances will get confused | |
218 | # and complain when they see the logcontext arbitrarily swapping | |
219 | # between the sentinel and `run` logcontexts. | |
220 | with PreserveLoggingContext(): | |
221 | logger.info("Running") | |
222 | change_resource_limit(config.soft_file_limit) | |
223 | if config.gc_thresholds: | |
224 | gc.set_threshold(*config.gc_thresholds) | |
225 | reactor.run() | |
226 | ||
227 | 208 | def start(): |
228 | 209 | ps.get_datastore().start_profiling() |
229 | 210 | ps.get_state_handler().start_caching() |
230 | 211 | |
231 | 212 | reactor.callWhenRunning(start) |
232 | ||
233 | if config.worker_daemonize: | |
234 | daemon = Daemonize( | |
235 | app="synapse-federation-sender", | |
236 | pid=config.worker_pid_file, | |
237 | action=run, | |
238 | auto_close_fds=False, | |
239 | verbose=True, | |
240 | logger=logger, | |
241 | ) | |
242 | daemon.start() | |
243 | else: | |
244 | run() | |
213 | _base.start_worker_reactor("synapse-federation-sender", config) | |
245 | 214 | |
246 | 215 | |
247 | 216 | class FederationSenderHandler(object): |
0 | #!/usr/bin/env python | |
1 | # -*- coding: utf-8 -*- | |
2 | # Copyright 2016 OpenMarket Ltd | |
3 | # | |
4 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
5 | # you may not use this file except in compliance with the License. | |
6 | # You may obtain a copy of the License at | |
7 | # | |
8 | # http://www.apache.org/licenses/LICENSE-2.0 | |
9 | # | |
10 | # Unless required by applicable law or agreed to in writing, software | |
11 | # distributed under the License is distributed on an "AS IS" BASIS, | |
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
13 | # See the License for the specific language governing permissions and | |
14 | # limitations under the License. | |
15 | import logging | |
16 | import sys | |
17 | ||
18 | import synapse | |
19 | from synapse import events | |
20 | from synapse.api.errors import SynapseError | |
21 | from synapse.app import _base | |
22 | from synapse.config._base import ConfigError | |
23 | from synapse.config.homeserver import HomeServerConfig | |
24 | from synapse.config.logger import setup_logging | |
25 | from synapse.crypto import context_factory | |
26 | from synapse.http.server import JsonResource | |
27 | from synapse.http.servlet import ( | |
28 | RestServlet, parse_json_object_from_request, | |
29 | ) | |
30 | from synapse.http.site import SynapseSite | |
31 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
32 | from synapse.replication.slave.storage._base import BaseSlavedStore | |
33 | from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore | |
34 | from synapse.replication.slave.storage.client_ips import SlavedClientIpStore | |
35 | from synapse.replication.slave.storage.devices import SlavedDeviceStore | |
36 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore | |
37 | from synapse.replication.tcp.client import ReplicationClientHandler | |
38 | from synapse.rest.client.v2_alpha._base import client_v2_patterns | |
39 | from synapse.server import HomeServer | |
40 | from synapse.storage.engines import create_engine | |
41 | from synapse.util.httpresourcetree import create_resource_tree | |
42 | from synapse.util.logcontext import LoggingContext | |
43 | from synapse.util.manhole import manhole | |
44 | from synapse.util.versionstring import get_version_string | |
45 | from twisted.internet import defer, reactor | |
46 | from twisted.web.resource import Resource | |
47 | ||
48 | logger = logging.getLogger("synapse.app.frontend_proxy") | |
49 | ||
50 | ||
51 | class KeyUploadServlet(RestServlet): | |
52 | PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$", | |
53 | releases=()) | |
54 | ||
55 | def __init__(self, hs): | |
56 | """ | |
57 | Args: | |
58 | hs (synapse.server.HomeServer): server | |
59 | """ | |
60 | super(KeyUploadServlet, self).__init__() | |
61 | self.auth = hs.get_auth() | |
62 | self.store = hs.get_datastore() | |
63 | self.http_client = hs.get_simple_http_client() | |
64 | self.main_uri = hs.config.worker_main_http_uri | |
65 | ||
66 | @defer.inlineCallbacks | |
67 | def on_POST(self, request, device_id): | |
68 | requester = yield self.auth.get_user_by_req(request, allow_guest=True) | |
69 | user_id = requester.user.to_string() | |
70 | body = parse_json_object_from_request(request) | |
71 | ||
72 | if device_id is not None: | |
73 | # passing the device_id here is deprecated; however, we allow it | |
74 | # for now for compatibility with older clients. | |
75 | if (requester.device_id is not None and | |
76 | device_id != requester.device_id): | |
77 | logger.warning("Client uploading keys for a different device " | |
78 | "(logged in as %s, uploading for %s)", | |
79 | requester.device_id, device_id) | |
80 | else: | |
81 | device_id = requester.device_id | |
82 | ||
83 | if device_id is None: | |
84 | raise SynapseError( | |
85 | 400, | |
86 | "To upload keys, you must pass device_id when authenticating" | |
87 | ) | |
88 | ||
89 | if body: | |
90 | # The client is actually uploading keys, so proxy the request to the main synapse process. | |
91 | result = yield self.http_client.post_json_get_json( | |
92 | self.main_uri + request.uri, | |
93 | body, | |
94 | ) | |
95 | ||
96 | defer.returnValue((200, result)) | |
97 | else: | |
98 | # Empty body: the client only wants the one-time key counts, which we can answer locally. | |
99 | result = yield self.store.count_e2e_one_time_keys(user_id, device_id) | |
100 | defer.returnValue((200, {"one_time_key_counts": result})) | |
101 | ||
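The device_id precedence implemented in `on_POST` above (a path parameter wins over the device bound to the access token, with a warning on mismatch, and a 400 error when neither is present) can be sketched as a standalone helper. `resolve_device_id` is a hypothetical name for illustration, not part of Synapse:

```python
def resolve_device_id(path_device_id, auth_device_id):
    # Passing the device_id in the path is deprecated but still honoured:
    # it takes precedence even when it disagrees with the device the
    # requester is logged in as (the servlet logs a warning in that case).
    if path_device_id is not None:
        return path_device_id
    # Otherwise fall back to the device bound to the access token; the
    # servlet raises a 400 SynapseError when this is also None.
    return auth_device_id
```
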
102 | ||
103 | class FrontendProxySlavedStore( | |
104 | SlavedDeviceStore, | |
105 | SlavedClientIpStore, | |
106 | SlavedApplicationServiceStore, | |
107 | SlavedRegistrationStore, | |
108 | BaseSlavedStore, | |
109 | ): | |
110 | pass | |
111 | ||
112 | ||
113 | class FrontendProxyServer(HomeServer): | |
114 | def get_db_conn(self, run_new_connection=True): | |
115 | # Any parameter beginning with cp_ is a connection-pool parameter for | |
116 | # Twisted's adbapi, and should not be passed to the database engine. | |
117 | db_params = { | |
118 | k: v for k, v in self.db_config.get("args", {}).items() | |
119 | if not k.startswith("cp_") | |
120 | } | |
121 | db_conn = self.database_engine.module.connect(**db_params) | |
122 | ||
123 | if run_new_connection: | |
124 | self.database_engine.on_new_connection(db_conn) | |
125 | return db_conn | |
126 | ||
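The `cp_` filtering in `get_db_conn` above splits the configured database args into pool configuration and driver arguments. A minimal sketch of that split, using a hypothetical `split_db_args` helper:

```python
def split_db_args(args):
    # cp_* keys configure the adbapi connection pool; everything else is
    # passed straight to the database driver's connect() call.
    pool = {k: v for k, v in args.items() if k.startswith("cp_")}
    driver = {k: v for k, v in args.items() if not k.startswith("cp_")}
    return pool, driver
```
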
127 | def setup(self): | |
128 | logger.info("Setting up.") | |
129 | self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self) | |
130 | logger.info("Finished setting up.") | |
131 | ||
132 | def _listen_http(self, listener_config): | |
133 | port = listener_config["port"] | |
134 | bind_addresses = listener_config["bind_addresses"] | |
135 | site_tag = listener_config.get("tag", port) | |
136 | resources = {} | |
137 | for res in listener_config["resources"]: | |
138 | for name in res["names"]: | |
139 | if name == "metrics": | |
140 | resources[METRICS_PREFIX] = MetricsResource(self) | |
141 | elif name == "client": | |
142 | resource = JsonResource(self, canonical_json=False) | |
143 | KeyUploadServlet(self).register(resource) | |
144 | resources.update({ | |
145 | "/_matrix/client/r0": resource, | |
146 | "/_matrix/client/unstable": resource, | |
147 | "/_matrix/client/v2_alpha": resource, | |
148 | "/_matrix/client/api/v1": resource, | |
149 | }) | |
150 | ||
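The mapping built above registers the same `JsonResource` under several client API prefixes, and `create_resource_tree` then routes requests by path prefix. A simplified stand-in for that routing (longest-prefix match; `pick_resource` is an illustrative helper, not Synapse's implementation):

```python
def pick_resource(resources, path):
    # Choose the most specific registered prefix that matches the
    # request path; return None when nothing matches.
    best = None
    for prefix in resources:
        if path.startswith(prefix) and (best is None or len(prefix) > len(best)):
            best = prefix
    return resources.get(best)
```
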
151 | root_resource = create_resource_tree(resources, Resource()) | |
152 | ||
153 | for address in bind_addresses: | |
154 | reactor.listenTCP( | |
155 | port, | |
156 | SynapseSite( | |
157 | "synapse.access.http.%s" % (site_tag,), | |
158 | site_tag, | |
159 | listener_config, | |
160 | root_resource, | |
161 | ), | |
162 | interface=address | |
163 | ) | |
164 | ||
165 | logger.info("Synapse frontend proxy now listening on port %d", port) | |
166 | ||
167 | def start_listening(self, listeners): | |
168 | for listener in listeners: | |
169 | if listener["type"] == "http": | |
170 | self._listen_http(listener) | |
171 | elif listener["type"] == "manhole": | |
172 | bind_addresses = listener["bind_addresses"] | |
173 | ||
174 | for address in bind_addresses: | |
175 | reactor.listenTCP( | |
176 | listener["port"], | |
177 | manhole( | |
178 | username="matrix", | |
179 | password="rabbithole", | |
180 | globals={"hs": self}, | |
181 | ), | |
182 | interface=address | |
183 | ) | |
184 | else: | |
185 | logger.warn("Unrecognized listener type: %s", listener["type"]) | |
186 | ||
187 | self.get_tcp_replication().start_replication(self) | |
188 | ||
189 | def build_tcp_replication(self): | |
190 | return ReplicationClientHandler(self.get_datastore()) | |
191 | ||
192 | ||
193 | def start(config_options): | |
194 | try: | |
195 | config = HomeServerConfig.load_config( | |
196 | "Synapse frontend proxy", config_options | |
197 | ) | |
198 | except ConfigError as e: | |
199 | sys.stderr.write("\n" + e.message + "\n") | |
200 | sys.exit(1) | |
201 | ||
202 | assert config.worker_app == "synapse.app.frontend_proxy" | |
203 | ||
204 | assert config.worker_main_http_uri is not None | |
205 | ||
206 | setup_logging(config, use_worker_options=True) | |
207 | ||
208 | events.USE_FROZEN_DICTS = config.use_frozen_dicts | |
209 | ||
210 | database_engine = create_engine(config.database_config) | |
211 | ||
212 | tls_server_context_factory = context_factory.ServerContextFactory(config) | |
213 | ||
214 | ss = FrontendProxyServer( | |
215 | config.server_name, | |
216 | db_config=config.database_config, | |
217 | tls_server_context_factory=tls_server_context_factory, | |
218 | config=config, | |
219 | version_string="Synapse/" + get_version_string(synapse), | |
220 | database_engine=database_engine, | |
221 | ) | |
222 | ||
223 | ss.setup() | |
224 | ss.get_handlers() | |
225 | ss.start_listening(config.worker_listeners) | |
226 | ||
227 | def start(): | |
228 | ss.get_state_handler().start_caching() | |
229 | ss.get_datastore().start_profiling() | |
230 | ||
231 | reactor.callWhenRunning(start) | |
232 | ||
233 | _base.start_worker_reactor("synapse-frontend-proxy", config) | |
234 | ||
235 | ||
236 | if __name__ == '__main__': | |
237 | with LoggingContext("main"): | |
238 | start(sys.argv[1:]) |
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | ||
16 | import synapse | |
17 | ||
18 | 15 | import gc |
19 | 16 | import logging |
20 | 17 | import os |
21 | 18 | import sys |
22 | 19 | |
20 | import synapse | |
23 | 21 | import synapse.config.logger |
22 | from synapse import events | |
23 | from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \ | |
24 | LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \ | |
25 | STATIC_PREFIX, WEB_CLIENT_PREFIX | |
26 | from synapse.app import _base | |
24 | 27 | from synapse.config._base import ConfigError |
25 | ||
26 | from synapse.python_dependencies import ( | |
27 | check_requirements, CONDITIONAL_REQUIREMENTS | |
28 | ) | |
29 | ||
28 | from synapse.config.homeserver import HomeServerConfig | |
29 | from synapse.crypto import context_factory | |
30 | from synapse.federation.transport.server import TransportLayerServer | |
31 | from synapse.http.server import RootRedirect | |
32 | from synapse.http.site import SynapseSite | |
33 | from synapse.metrics import register_memory_metrics | |
34 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
35 | from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \ | |
36 | check_requirements | |
37 | from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory | |
30 | 38 | from synapse.rest import ClientRestResource |
31 | from synapse.storage.engines import create_engine, IncorrectDatabaseSetup | |
32 | from synapse.storage import are_all_users_on_domain | |
33 | from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database | |
34 | ||
35 | from synapse.server import HomeServer | |
36 | ||
37 | from twisted.internet import reactor, defer | |
38 | from twisted.application import service | |
39 | from twisted.web.resource import Resource, EncodingResourceWrapper | |
40 | from twisted.web.static import File | |
41 | from twisted.web.server import GzipEncoderFactory | |
42 | from synapse.http.server import RootRedirect | |
39 | from synapse.rest.key.v1.server_key_resource import LocalKey | |
40 | from synapse.rest.key.v2 import KeyApiV2Resource | |
43 | 41 | from synapse.rest.media.v0.content_repository import ContentRepoResource |
44 | 42 | from synapse.rest.media.v1.media_repository import MediaRepositoryResource |
45 | from synapse.rest.key.v1.server_key_resource import LocalKey | |
46 | from synapse.rest.key.v2 import KeyApiV2Resource | |
47 | from synapse.api.urls import ( | |
48 | FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX, | |
49 | SERVER_KEY_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, STATIC_PREFIX, | |
50 | SERVER_KEY_V2_PREFIX, | |
51 | ) | |
52 | from synapse.config.homeserver import HomeServerConfig | |
53 | from synapse.crypto import context_factory | |
54 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
55 | from synapse.metrics import register_memory_metrics | |
56 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX | |
57 | from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory | |
58 | from synapse.federation.transport.server import TransportLayerServer | |
59 | ||
43 | from synapse.server import HomeServer | |
44 | from synapse.storage import are_all_users_on_domain | |
45 | from synapse.storage.engines import IncorrectDatabaseSetup, create_engine | |
46 | from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database | |
47 | from synapse.util.httpresourcetree import create_resource_tree | |
48 | from synapse.util.logcontext import LoggingContext | |
49 | from synapse.util.manhole import manhole | |
60 | 50 | from synapse.util.rlimit import change_resource_limit |
61 | 51 | from synapse.util.versionstring import get_version_string |
62 | from synapse.util.httpresourcetree import create_resource_tree | |
63 | from synapse.util.manhole import manhole | |
64 | ||
65 | from synapse.http.site import SynapseSite | |
66 | ||
67 | from synapse import events | |
68 | ||
69 | from daemonize import Daemonize | |
52 | from twisted.application import service | |
53 | from twisted.internet import defer, reactor | |
54 | from twisted.web.resource import EncodingResourceWrapper, Resource | |
55 | from twisted.web.server import GzipEncoderFactory | |
56 | from twisted.web.static import File | |
70 | 57 | |
71 | 58 | logger = logging.getLogger("synapse.app.homeserver") |
72 | 59 | |
445 | 432 | # be quite busy the first few minutes |
446 | 433 | clock.call_later(5 * 60, phone_stats_home) |
447 | 434 | |
448 | def in_thread(): | |
449 | # Uncomment to enable tracing of log context changes. | |
450 | # sys.settrace(logcontext_tracer) | |
451 | ||
452 | # make sure that we run the reactor with the sentinel log context, | |
453 | # otherwise other PreserveLoggingContext instances will get confused | |
454 | # and complain when they see the logcontext arbitrarily swapping | |
455 | # between the sentinel and `run` logcontexts. | |
456 | with PreserveLoggingContext(): | |
457 | change_resource_limit(hs.config.soft_file_limit) | |
458 | if hs.config.gc_thresholds: | |
459 | gc.set_threshold(*hs.config.gc_thresholds) | |
460 | reactor.run() | |
461 | ||
462 | if hs.config.daemonize: | |
463 | ||
464 | if hs.config.print_pidfile: | |
465 | print (hs.config.pid_file) | |
466 | ||
467 | daemon = Daemonize( | |
468 | app="synapse-homeserver", | |
469 | pid=hs.config.pid_file, | |
470 | action=lambda: in_thread(), | |
471 | auto_close_fds=False, | |
472 | verbose=True, | |
473 | logger=logger, | |
474 | ) | |
475 | ||
476 | daemon.start() | |
477 | else: | |
478 | in_thread() | |
435 | if hs.config.daemonize and hs.config.print_pidfile: | |
436 | print(hs.config.pid_file) | |
437 | ||
438 | _base.start_reactor( | |
439 | "synapse-homeserver", | |
440 | hs.config.soft_file_limit, | |
441 | hs.config.gc_thresholds, | |
442 | hs.config.pid_file, | |
443 | hs.config.daemonize, | |
444 | hs.config.cpu_affinity, | |
445 | logger, | |
446 | ) | |
479 | 447 | |
480 | 448 | |
481 | 449 | def main(): |
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | import logging | |
16 | import sys | |
15 | 17 | |
16 | 18 | import synapse |
17 | ||
19 | from synapse import events | |
20 | from synapse.api.urls import ( | |
21 | CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX | |
22 | ) | |
23 | from synapse.app import _base | |
18 | 24 | from synapse.config._base import ConfigError |
19 | 25 | from synapse.config.homeserver import HomeServerConfig |
20 | 26 | from synapse.config.logger import setup_logging |
27 | from synapse.crypto import context_factory | |
21 | 28 | from synapse.http.site import SynapseSite |
22 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX | |
29 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
23 | 30 | from synapse.replication.slave.storage._base import BaseSlavedStore |
24 | 31 | from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore |
25 | 32 | from synapse.replication.slave.storage.client_ips import SlavedClientIpStore |
32 | 39 | from synapse.storage.engines import create_engine |
33 | 40 | from synapse.storage.media_repository import MediaRepositoryStore |
34 | 41 | from synapse.util.httpresourcetree import create_resource_tree |
35 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext | |
42 | from synapse.util.logcontext import LoggingContext | |
36 | 43 | from synapse.util.manhole import manhole |
37 | from synapse.util.rlimit import change_resource_limit | |
38 | 44 | from synapse.util.versionstring import get_version_string |
39 | from synapse.api.urls import ( | |
40 | CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX | |
41 | ) | |
42 | from synapse.crypto import context_factory | |
43 | ||
44 | from synapse import events | |
45 | ||
46 | ||
47 | 45 | from twisted.internet import reactor |
48 | 46 | from twisted.web.resource import Resource |
49 | ||
50 | from daemonize import Daemonize | |
51 | ||
52 | import sys | |
53 | import logging | |
54 | import gc | |
55 | 47 | |
56 | 48 | logger = logging.getLogger("synapse.app.media_repository") |
57 | 49 | |
179 | 171 | ss.get_handlers() |
180 | 172 | ss.start_listening(config.worker_listeners) |
181 | 173 | |
182 | def run(): | |
183 | # make sure that we run the reactor with the sentinel log context, | |
184 | # otherwise other PreserveLoggingContext instances will get confused | |
185 | # and complain when they see the logcontext arbitrarily swapping | |
186 | # between the sentinel and `run` logcontexts. | |
187 | with PreserveLoggingContext(): | |
188 | logger.info("Running") | |
189 | change_resource_limit(config.soft_file_limit) | |
190 | if config.gc_thresholds: | |
191 | gc.set_threshold(*config.gc_thresholds) | |
192 | reactor.run() | |
193 | ||
194 | 174 | def start(): |
195 | 175 | ss.get_state_handler().start_caching() |
196 | 176 | ss.get_datastore().start_profiling() |
197 | 177 | |
198 | 178 | reactor.callWhenRunning(start) |
199 | 179 | |
200 | if config.worker_daemonize: | |
201 | daemon = Daemonize( | |
202 | app="synapse-media-repository", | |
203 | pid=config.worker_pid_file, | |
204 | action=run, | |
205 | auto_close_fds=False, | |
206 | verbose=True, | |
207 | logger=logger, | |
208 | ) | |
209 | daemon.start() | |
210 | else: | |
211 | run() | |
180 | _base.start_worker_reactor("synapse-media-repository", config) | |
212 | 181 | |
213 | 182 | |
214 | 183 | if __name__ == '__main__': |
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | import logging | |
16 | import sys | |
15 | 17 | |
16 | 18 | import synapse |
17 | ||
18 | from synapse.server import HomeServer | |
19 | from synapse import events | |
20 | from synapse.app import _base | |
19 | 21 | from synapse.config._base import ConfigError |
22 | from synapse.config.homeserver import HomeServerConfig | |
20 | 23 | from synapse.config.logger import setup_logging |
21 | from synapse.config.homeserver import HomeServerConfig | |
22 | 24 | from synapse.http.site import SynapseSite |
23 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX | |
24 | from synapse.storage.roommember import RoomMemberStore | |
25 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
26 | from synapse.replication.slave.storage.account_data import SlavedAccountDataStore | |
25 | 27 | from synapse.replication.slave.storage.events import SlavedEventStore |
26 | 28 | from synapse.replication.slave.storage.pushers import SlavedPusherStore |
27 | 29 | from synapse.replication.slave.storage.receipts import SlavedReceiptsStore |
28 | from synapse.replication.slave.storage.account_data import SlavedAccountDataStore | |
29 | 30 | from synapse.replication.tcp.client import ReplicationClientHandler |
31 | from synapse.server import HomeServer | |
32 | from synapse.storage import DataStore | |
30 | 33 | from synapse.storage.engines import create_engine |
31 | from synapse.storage import DataStore | |
34 | from synapse.storage.roommember import RoomMemberStore | |
32 | 35 | from synapse.util.httpresourcetree import create_resource_tree |
33 | from synapse.util.logcontext import LoggingContext, preserve_fn, \ | |
34 | PreserveLoggingContext | |
36 | from synapse.util.logcontext import LoggingContext, preserve_fn | |
35 | 37 | from synapse.util.manhole import manhole |
36 | from synapse.util.rlimit import change_resource_limit | |
37 | 38 | from synapse.util.versionstring import get_version_string |
38 | ||
39 | from synapse import events | |
40 | ||
41 | from twisted.internet import reactor, defer | |
39 | from twisted.internet import defer, reactor | |
42 | 40 | from twisted.web.resource import Resource |
43 | ||
44 | from daemonize import Daemonize | |
45 | ||
46 | import sys | |
47 | import logging | |
48 | import gc | |
49 | 41 | |
50 | 42 | logger = logging.getLogger("synapse.app.pusher") |
51 | 43 | |
243 | 235 | ps.setup() |
244 | 236 | ps.start_listening(config.worker_listeners) |
245 | 237 | |
246 | def run(): | |
247 | # make sure that we run the reactor with the sentinel log context, | |
248 | # otherwise other PreserveLoggingContext instances will get confused | |
249 | # and complain when they see the logcontext arbitrarily swapping | |
250 | # between the sentinel and `run` logcontexts. | |
251 | with PreserveLoggingContext(): | |
252 | logger.info("Running") | |
253 | change_resource_limit(config.soft_file_limit) | |
254 | if config.gc_thresholds: | |
255 | gc.set_threshold(*config.gc_thresholds) | |
256 | reactor.run() | |
257 | ||
258 | 238 | def start(): |
259 | 239 | ps.get_pusherpool().start() |
260 | 240 | ps.get_datastore().start_profiling() |
262 | 242 | |
263 | 243 | reactor.callWhenRunning(start) |
264 | 244 | |
265 | if config.worker_daemonize: | |
266 | daemon = Daemonize( | |
267 | app="synapse-pusher", | |
268 | pid=config.worker_pid_file, | |
269 | action=run, | |
270 | auto_close_fds=False, | |
271 | verbose=True, | |
272 | logger=logger, | |
273 | ) | |
274 | daemon.start() | |
275 | else: | |
276 | run() | |
245 | _base.start_worker_reactor("synapse-pusher", config) | |
277 | 246 | |
278 | 247 | |
279 | 248 | if __name__ == '__main__': |
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | import contextlib | |
16 | import logging | |
17 | import sys | |
15 | 18 | |
16 | 19 | import synapse |
17 | ||
18 | 20 | from synapse.api.constants import EventTypes |
21 | from synapse.app import _base | |
19 | 22 | from synapse.config._base import ConfigError |
20 | 23 | from synapse.config.homeserver import HomeServerConfig |
21 | 24 | from synapse.config.logger import setup_logging |
22 | 25 | from synapse.handlers.presence import PresenceHandler, get_interested_parties |
26 | from synapse.http.server import JsonResource | |
23 | 27 | from synapse.http.site import SynapseSite |
24 | from synapse.http.server import JsonResource | |
25 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX | |
26 | from synapse.rest.client.v2_alpha import sync | |
27 | from synapse.rest.client.v1 import events | |
28 | from synapse.rest.client.v1.room import RoomInitialSyncRestServlet | |
29 | from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet | |
28 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
30 | 29 | from synapse.replication.slave.storage._base import BaseSlavedStore |
31 | from synapse.replication.slave.storage.client_ips import SlavedClientIpStore | |
32 | from synapse.replication.slave.storage.events import SlavedEventStore | |
33 | from synapse.replication.slave.storage.receipts import SlavedReceiptsStore | |
34 | 30 | from synapse.replication.slave.storage.account_data import SlavedAccountDataStore |
35 | 31 | from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore |
36 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore | |
37 | from synapse.replication.slave.storage.filtering import SlavedFilteringStore | |
38 | from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore | |
39 | from synapse.replication.slave.storage.presence import SlavedPresenceStore | |
32 | from synapse.replication.slave.storage.client_ips import SlavedClientIpStore | |
40 | 33 | from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore |
41 | 34 | from synapse.replication.slave.storage.devices import SlavedDeviceStore |
35 | from synapse.replication.slave.storage.events import SlavedEventStore | |
36 | from synapse.replication.slave.storage.filtering import SlavedFilteringStore | |
37 | from synapse.replication.slave.storage.presence import SlavedPresenceStore | |
38 | from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore | |
39 | from synapse.replication.slave.storage.receipts import SlavedReceiptsStore | |
40 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore | |
42 | 41 | from synapse.replication.slave.storage.room import RoomStore |
43 | 42 | from synapse.replication.tcp.client import ReplicationClientHandler |
43 | from synapse.rest.client.v1 import events | |
44 | from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet | |
45 | from synapse.rest.client.v1.room import RoomInitialSyncRestServlet | |
46 | from synapse.rest.client.v2_alpha import sync | |
44 | 47 | from synapse.server import HomeServer |
45 | 48 | from synapse.storage.engines import create_engine |
46 | 49 | from synapse.storage.presence import UserPresenceState |
47 | 50 | from synapse.storage.roommember import RoomMemberStore |
48 | 51 | from synapse.util.httpresourcetree import create_resource_tree |
49 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn | |
52 | from synapse.util.logcontext import LoggingContext, preserve_fn | |
50 | 53 | from synapse.util.manhole import manhole |
51 | from synapse.util.rlimit import change_resource_limit | |
52 | 54 | from synapse.util.stringutils import random_string |
53 | 55 | from synapse.util.versionstring import get_version_string |
54 | ||
55 | from twisted.internet import reactor, defer | |
56 | from twisted.internet import defer, reactor | |
56 | 57 | from twisted.web.resource import Resource |
57 | ||
58 | from daemonize import Daemonize | |
59 | ||
60 | import sys | |
61 | import logging | |
62 | import contextlib | |
63 | import gc | |
64 | 58 | |
65 | 59 | logger = logging.getLogger("synapse.app.synchrotron") |
66 | 60 | |
439 | 433 | ss.setup() |
440 | 434 | ss.start_listening(config.worker_listeners) |
441 | 435 | |
442 | def run(): | |
443 | # make sure that we run the reactor with the sentinel log context, | |
444 | # otherwise other PreserveLoggingContext instances will get confused | |
445 | # and complain when they see the logcontext arbitrarily swapping | |
446 | # between the sentinel and `run` logcontexts. | |
447 | with PreserveLoggingContext(): | |
448 | logger.info("Running") | |
449 | change_resource_limit(config.soft_file_limit) | |
450 | if config.gc_thresholds: | |
451 | gc.set_threshold(*config.gc_thresholds) | |
452 | reactor.run() | |
453 | ||
454 | 436 | def start(): |
455 | 437 | ss.get_datastore().start_profiling() |
456 | 438 | ss.get_state_handler().start_caching() |
457 | 439 | |
458 | 440 | reactor.callWhenRunning(start) |
459 | 441 | |
460 | if config.worker_daemonize: | |
461 | daemon = Daemonize( | |
462 | app="synapse-synchrotron", | |
463 | pid=config.worker_pid_file, | |
464 | action=run, | |
465 | auto_close_fds=False, | |
466 | verbose=True, | |
467 | logger=logger, | |
468 | ) | |
469 | daemon.start() | |
470 | else: | |
471 | run() | |
442 | _base.start_worker_reactor("synapse-synchrotron", config) | |
472 | 443 | |
473 | 444 | |
474 | 445 | if __name__ == '__main__': |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | import logging | |
17 | import sys | |
18 | ||
16 | 19 | import synapse |
17 | ||
18 | from synapse.server import HomeServer | |
20 | from synapse import events | |
21 | from synapse.app import _base | |
19 | 22 | from synapse.config._base import ConfigError |
23 | from synapse.config.homeserver import HomeServerConfig | |
20 | 24 | from synapse.config.logger import setup_logging |
21 | from synapse.config.homeserver import HomeServerConfig | |
22 | 25 | from synapse.crypto import context_factory |
26 | from synapse.http.server import JsonResource | |
23 | 27 | from synapse.http.site import SynapseSite |
24 | from synapse.http.server import JsonResource | |
25 | from synapse.metrics.resource import MetricsResource, METRICS_PREFIX | |
28 | from synapse.metrics.resource import METRICS_PREFIX, MetricsResource | |
26 | 29 | from synapse.replication.slave.storage._base import BaseSlavedStore |
27 | 30 | from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore |
28 | 31 | from synapse.replication.slave.storage.client_ips import SlavedClientIpStore |
30 | 33 | from synapse.replication.slave.storage.registration import SlavedRegistrationStore |
31 | 34 | from synapse.replication.tcp.client import ReplicationClientHandler |
32 | 35 | from synapse.rest.client.v2_alpha import user_directory |
36 | from synapse.server import HomeServer | |
33 | 37 | from synapse.storage.engines import create_engine |
34 | 38 | from synapse.storage.user_directory import UserDirectoryStore |
39 | from synapse.util.caches.stream_change_cache import StreamChangeCache | |
35 | 40 | from synapse.util.httpresourcetree import create_resource_tree |
36 | from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn | |
41 | from synapse.util.logcontext import LoggingContext, preserve_fn | |
37 | 42 | from synapse.util.manhole import manhole |
38 | from synapse.util.rlimit import change_resource_limit | |
39 | 43 | from synapse.util.versionstring import get_version_string |
40 | from synapse.util.caches.stream_change_cache import StreamChangeCache | |
41 | ||
42 | from synapse import events | |
43 | ||
44 | 44 | from twisted.internet import reactor |
45 | 45 | from twisted.web.resource import Resource |
46 | ||
47 | from daemonize import Daemonize | |
48 | ||
49 | import sys | |
50 | import logging | |
51 | import gc | |
52 | 46 | |
53 | 47 | logger = logging.getLogger("synapse.app.user_dir") |
54 | 48 | |
232 | 226 | ps.setup() |
233 | 227 | ps.start_listening(config.worker_listeners) |
234 | 228 | |
235 | def run(): | |
236 | # make sure that we run the reactor with the sentinel log context, | |
237 | # otherwise other PreserveLoggingContext instances will get confused | |
238 | # and complain when they see the logcontext arbitrarily swapping | |
239 | # between the sentinel and `run` logcontexts. | |
240 | with PreserveLoggingContext(): | |
241 | logger.info("Running") | |
242 | change_resource_limit(config.soft_file_limit) | |
243 | if config.gc_thresholds: | |
244 | gc.set_threshold(*config.gc_thresholds) | |
245 | reactor.run() | |
246 | ||
247 | 229 | def start(): |
248 | 230 | ps.get_datastore().start_profiling() |
249 | 231 | ps.get_state_handler().start_caching() |
250 | 232 | |
251 | 233 | reactor.callWhenRunning(start) |
252 | 234 | |
253 | if config.worker_daemonize: | |
254 | daemon = Daemonize( | |
255 | app="synapse-user-dir", | |
256 | pid=config.worker_pid_file, | |
257 | action=run, | |
258 | auto_close_fds=False, | |
259 | verbose=True, | |
260 | logger=logger, | |
261 | ) | |
262 | daemon.start() | |
263 | else: | |
264 | run() | |
235 | _base.start_worker_reactor("synapse-user-dir", config) | |
265 | 236 | |
266 | 237 | |
267 | 238 | if __name__ == '__main__': |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2014-2016 OpenMarket Ltd |
2 | # Copyright 2017 New Vector Ltd | |
2 | 3 | # |
3 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 5 | # you may not use this file except in compliance with the License. |
28 | 29 | self.user_agent_suffix = config.get("user_agent_suffix") |
29 | 30 | self.use_frozen_dicts = config.get("use_frozen_dicts", False) |
30 | 31 | self.public_baseurl = config.get("public_baseurl") |
32 | self.cpu_affinity = config.get("cpu_affinity") | |
31 | 33 | |
32 | 34 | # Whether to send federation traffic out in this process. This only |
33 | 35 | # applies to some federation traffic, and so shouldn't be used to |
39 | 41 | self.update_user_directory = config.get("update_user_directory", True) |
40 | 42 | |
41 | 43 | self.filter_timeline_limit = config.get("filter_timeline_limit", -1) |
44 | ||
45 | # Whether we should block invites sent to users on this server | |
46 | # (other than those sent by local server admins) | |
47 | self.block_non_admin_invites = config.get( | |
48 | "block_non_admin_invites", False, | |
49 | ) | |
42 | 50 | |
43 | 51 | if self.public_baseurl is not None: |
44 | 52 | if self.public_baseurl[-1] != '/': |
146 | 154 | # When running as a daemon, the file to store the pid in |
147 | 155 | pid_file: %(pid_file)s |
148 | 156 | |
157 | # CPU affinity mask. Setting this restricts the CPUs on which the | |
158 | # process will be scheduled. It is represented as a bitmask, with the | |
159 | # lowest order bit corresponding to the first logical CPU and the | |
160 | # highest order bit corresponding to the last logical CPU. Not all CPUs | |
161 | # may exist on a given system but a mask may specify more CPUs than are | |
162 | # present. | |
163 | # | |
164 | # For example: | |
165 | # 0x00000001 is processor #0, | |
166 | # 0x00000003 is processors #0 and #1, | |
167 | # 0xFFFFFFFF is all processors (#0 through #31). | |
168 | # | |
169 | # Pinning a Python process to a single CPU is desirable, because Python | |
170 | # is inherently single-threaded due to the GIL, and can suffer a | |
171 | # 30-40%% slowdown due to cache blow-out and thread context switching | |
172 | # if the scheduler happens to schedule the underlying threads across | |
173 | # different cores. See | |
174 | # https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/. | |
175 | # | |
176 | # cpu_affinity: 0xFFFFFFFF | |
177 | ||
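The bitmask convention documented in the comment above can be sketched as a small decoder. `affinity_mask_to_cpus` is a hypothetical helper for illustration, not Synapse code:

```python
def affinity_mask_to_cpus(mask):
    # Expand a cpu_affinity bitmask into the list of logical CPU
    # indices it selects: bit 0 is CPU #0, bit 1 is CPU #1, and so on.
    cpus = []
    i = 0
    while mask >> i:
        if (mask >> i) & 1:
            cpus.append(i)
        i += 1
    return cpus
```
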
149 | 178 | # Whether to serve a web client from the HTTP/HTTPS root resource. |
150 | 179 | web_client: True |
151 | 180 | |
169 | 198 | # Set the limit on the returned events in the timeline in the get |
170 | 199 | # and sync operations. The default value is -1, means no upper limit. |
171 | 200 | # filter_timeline_limit: 5000 |
201 | ||
202 | # Whether room invites to users on this server should be blocked | |
203 | # (except those sent by local server admins). The default is False. | |
204 | # block_non_admin_invites: True | |
172 | 205 | |
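The new option is read with a plain `dict.get` and a `False` default, so omitting it from the YAML leaves invites unrestricted. A minimal sketch, where the `config` dict stands in for the parsed homeserver.yaml:

```python
# Stand-in for the parsed homeserver.yaml; in Synapse this dict comes
# from the YAML loader.
config = {"block_non_admin_invites": True}

# Mirrors the config.get(...) call in the hunk above: an absent key
# falls back to False, i.e. invites are not blocked by default.
block_non_admin_invites = config.get("block_non_admin_invites", False)
print(block_non_admin_invites)  # -> True
```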
173 | 206 | # List of ports that Synapse should listen on, their purpose and their |
174 | 207 | # configuration. |
31 | 31 | self.worker_replication_port = config.get("worker_replication_port", None) |
32 | 32 | self.worker_name = config.get("worker_name", self.worker_app) |
33 | 33 | |
34 | self.worker_main_http_uri = config.get("worker_main_http_uri", None) | |
35 | self.worker_cpu_affinity = config.get("worker_cpu_affinity") | |
36 | ||
34 | 37 | if self.worker_listeners: |
35 | 38 | for listener in self.worker_listeners: |
36 | 39 | bind_address = listener.pop("bind_address", None) |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | ||
15 | from synapse.util import logcontext | |
16 | 16 | from twisted.web.http import HTTPClient |
17 | 17 | from twisted.internet.protocol import Factory |
18 | 18 | from twisted.internet import defer, reactor |
19 | 19 | from synapse.http.endpoint import matrix_federation_endpoint |
20 | from synapse.util.logcontext import ( | |
21 | preserve_context_over_fn, preserve_context_over_deferred | |
22 | ) | |
23 | 20 | import simplejson as json |
24 | 21 | import logging |
25 | 22 | |
42 | 39 | |
43 | 40 | for i in range(5): |
44 | 41 | try: |
45 | protocol = yield preserve_context_over_fn( | |
46 | endpoint.connect, factory | |
47 | ) | |
48 | server_response, server_certificate = yield preserve_context_over_deferred( | |
49 | protocol.remote_key | |
50 | ) | |
51 | defer.returnValue((server_response, server_certificate)) | |
52 | return | |
42 | with logcontext.PreserveLoggingContext(): | |
43 | protocol = yield endpoint.connect(factory) | |
44 | server_response, server_certificate = yield protocol.remote_key | |
45 | defer.returnValue((server_response, server_certificate)) | |
53 | 46 | except SynapseKeyClientError as e: |
54 | 47 | logger.exception("Error getting key for %r" % (server_name,)) |
55 | 48 | if e.status.startswith("4"): |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2014-2016 OpenMarket Ltd |
2 | # Copyright 2017 New Vector Ltd. | |
2 | 3 | # |
3 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 5 | # you may not use this file except in compliance with the License. |
14 | 15 | |
15 | 16 | from synapse.crypto.keyclient import fetch_server_key |
16 | 17 | from synapse.api.errors import SynapseError, Codes |
17 | from synapse.util import unwrapFirstError | |
18 | from synapse.util.async import ObservableDeferred | |
18 | from synapse.util import unwrapFirstError, logcontext | |
19 | 19 | from synapse.util.logcontext import ( |
20 | preserve_context_over_deferred, preserve_context_over_fn, PreserveLoggingContext, | |
20 | PreserveLoggingContext, | |
21 | 21 | preserve_fn |
22 | 22 | ) |
23 | 23 | from synapse.util.metrics import Measure |
56 | 56 | json_object(dict): The JSON object to verify. |
57 | 57 | deferred(twisted.internet.defer.Deferred): |
58 | 58 | A deferred (server_name, key_id, verify_key) tuple that resolves when |
59 | a verify key has been fetched | |
59 | a verify key has been fetched. The deferreds' callbacks are run with no | |
60 | logcontext. | |
60 | 61 | """ |
61 | 62 | |
62 | 63 | |
73 | 74 | self.perspective_servers = self.config.perspectives |
74 | 75 | self.hs = hs |
75 | 76 | |
77 | # map from server name to Deferred. Has an entry for each server with | |
78 | # an ongoing key download; the Deferred completes once the download | |
79 | # completes. | |
80 | # | |
81 | # These are regular, logcontext-agnostic Deferreds. | |
76 | 82 | self.key_downloads = {} |
77 | 83 | |
78 | 84 | def verify_json_for_server(self, server_name, json_object): |
79 | return self.verify_json_objects_for_server( | |
80 | [(server_name, json_object)] | |
81 | )[0] | |
85 | return logcontext.make_deferred_yieldable( | |
86 | self.verify_json_objects_for_server( | |
87 | [(server_name, json_object)] | |
88 | )[0] | |
89 | ) | |
82 | 90 | |
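Synapse's logcontext machinery predates asyncio, but the contract being enforced here — callbacks of a shared deferred run in the empty "sentinel" context, and `make_deferred_yieldable` hands the caller its own context back afterwards — can be illustrated with a rough stdlib analogue using `contextvars`. Everything below is illustrative, not Synapse code:

```python
import contextvars

# Stand-in for Synapse's per-request logging context; "sentinel" plays
# the role of the empty context that shared deferreds run callbacks in.
log_context = contextvars.ContextVar("log_context", default="sentinel")


def run_in_sentinel(fn):
    """Run fn with the context reset to sentinel, as a shared callback is."""
    token = log_context.set("sentinel")
    try:
        return fn()
    finally:
        log_context.reset(token)


def make_yieldable(fn):
    """Rough analogue of make_deferred_yieldable: capture the caller's
    context before the "wait", let the shared work run context-free,
    then restore the caller's context for whatever runs afterwards."""
    caller_ctx = log_context.get()
    result = run_in_sentinel(fn)   # the shared work runs with no logcontext
    log_context.set(caller_ctx)    # ...but the caller gets its context back
    return result


log_context.set("request-1")
seen = []
make_yieldable(lambda: seen.append(log_context.get()))
print(seen, log_context.get())  # -> ['sentinel'] request-1
```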
83 | 91 | def verify_json_objects_for_server(self, server_and_json): |
84 | """Bulk verfies signatures of json objects, bulk fetching keys as | |
92 | """Bulk verifies signatures of json objects, bulk fetching keys as | |
85 | 93 | necessary. |
86 | 94 | |
87 | 95 | Args: |
88 | 96 | server_and_json (list): List of pairs of (server_name, json_object) |
89 | 97 | |
90 | 98 | Returns: |
91 | list of deferreds indicating success or failure to verify each | |
92 | json object's signature for the given server_name. | |
99 | List<Deferred>: for each input pair, a deferred indicating success | |
100 | or failure to verify each json object's signature for the given | |
101 | server_name. The deferreds run their callbacks in the sentinel | |
102 | logcontext. | |
93 | 103 | """ |
94 | 104 | verify_requests = [] |
95 | 105 | |
116 | 126 | |
117 | 127 | verify_requests.append(verify_request) |
118 | 128 | |
119 | @defer.inlineCallbacks | |
120 | def handle_key_deferred(verify_request): | |
121 | server_name = verify_request.server_name | |
122 | try: | |
123 | _, key_id, verify_key = yield verify_request.deferred | |
124 | except IOError as e: | |
125 | logger.warn( | |
126 | "Got IOError when downloading keys for %s: %s %s", | |
127 | server_name, type(e).__name__, str(e.message), | |
128 | ) | |
129 | raise SynapseError( | |
130 | 502, | |
131 | "Error downloading keys for %s" % (server_name,), | |
132 | Codes.UNAUTHORIZED, | |
133 | ) | |
134 | except Exception as e: | |
135 | logger.exception( | |
136 | "Got Exception when downloading keys for %s: %s %s", | |
137 | server_name, type(e).__name__, str(e.message), | |
138 | ) | |
139 | raise SynapseError( | |
140 | 401, | |
141 | "No key for %s with id %s" % (server_name, key_ids), | |
142 | Codes.UNAUTHORIZED, | |
143 | ) | |
144 | ||
145 | json_object = verify_request.json_object | |
146 | ||
147 | logger.debug("Got key %s %s:%s for server %s, verifying" % ( | |
148 | key_id, verify_key.alg, verify_key.version, server_name, | |
149 | )) | |
150 | try: | |
151 | verify_signed_json(json_object, server_name, verify_key) | |
152 | except: | |
153 | raise SynapseError( | |
154 | 401, | |
155 | "Invalid signature for server %s with key %s:%s" % ( | |
156 | server_name, verify_key.alg, verify_key.version | |
157 | ), | |
158 | Codes.UNAUTHORIZED, | |
159 | ) | |
160 | ||
161 | server_to_deferred = { | |
162 | server_name: defer.Deferred() | |
163 | for server_name, _ in server_and_json | |
164 | } | |
165 | ||
166 | with PreserveLoggingContext(): | |
167 | ||
168 | # We want to wait for any previous lookups to complete before | |
169 | # proceeding. | |
170 | wait_on_deferred = self.wait_for_previous_lookups( | |
171 | [server_name for server_name, _ in server_and_json], | |
172 | server_to_deferred, | |
173 | ) | |
174 | ||
175 | # Actually start fetching keys. | |
176 | wait_on_deferred.addBoth( | |
177 | lambda _: self.get_server_verify_keys(verify_requests) | |
178 | ) | |
179 | ||
180 | # When we've finished fetching all the keys for a given server_name, | |
181 | # resolve the deferred passed to `wait_for_previous_lookups` so that | |
182 | # any lookups waiting will proceed. | |
183 | server_to_request_ids = {} | |
184 | ||
185 | def remove_deferreds(res, server_name, verify_request): | |
186 | request_id = id(verify_request) | |
187 | server_to_request_ids[server_name].discard(request_id) | |
188 | if not server_to_request_ids[server_name]: | |
189 | d = server_to_deferred.pop(server_name, None) | |
190 | if d: | |
191 | d.callback(None) | |
192 | return res | |
193 | ||
194 | for verify_request in verify_requests: | |
195 | server_name = verify_request.server_name | |
196 | request_id = id(verify_request) | |
197 | server_to_request_ids.setdefault(server_name, set()).add(request_id) | |
198 | deferred.addBoth(remove_deferreds, server_name, verify_request) | |
129 | preserve_fn(self._start_key_lookups)(verify_requests) | |
199 | 130 | |
200 | 131 | # Pass those keys to handle_key_deferred so that the json object |
201 | 132 | # signatures can be verified |
133 | handle = preserve_fn(_handle_key_deferred) | |
202 | 134 | return [ |
203 | preserve_context_over_fn(handle_key_deferred, verify_request) | |
204 | for verify_request in verify_requests | |
135 | handle(rq) for rq in verify_requests | |
205 | 136 | ] |
137 | ||
138 | @defer.inlineCallbacks | |
139 | def _start_key_lookups(self, verify_requests): | |
140 | """Sets off the key fetches for each verify request | |
141 | ||
142 | Once each fetch completes, verify_request.deferred will be resolved. | |
143 | ||
144 | Args: | |
145 | verify_requests (List[VerifyKeyRequest]): | |
146 | """ | |
147 | ||
148 | # create a deferred for each server we're going to look up the keys | |
149 | # for; we'll resolve them once we have completed our lookups. | |
150 | # These will be passed into wait_for_previous_lookups to block | |
151 | # any other lookups until we have finished. | |
152 | # The deferreds are called with no logcontext. | |
153 | server_to_deferred = { | |
154 | rq.server_name: defer.Deferred() | |
155 | for rq in verify_requests | |
156 | } | |
157 | ||
158 | # We want to wait for any previous lookups to complete before | |
159 | # proceeding. | |
160 | yield self.wait_for_previous_lookups( | |
161 | [rq.server_name for rq in verify_requests], | |
162 | server_to_deferred, | |
163 | ) | |
164 | ||
165 | # Actually start fetching keys. | |
166 | self._get_server_verify_keys(verify_requests) | |
167 | ||
168 | # When we've finished fetching all the keys for a given server_name, | |
169 | # resolve the deferred passed to `wait_for_previous_lookups` so that | |
170 | # any lookups waiting will proceed. | |
171 | # | |
172 | # map from server name to a set of request ids | |
173 | server_to_request_ids = {} | |
174 | ||
175 | for verify_request in verify_requests: | |
176 | server_name = verify_request.server_name | |
177 | request_id = id(verify_request) | |
178 | server_to_request_ids.setdefault(server_name, set()).add(request_id) | |
179 | ||
180 | def remove_deferreds(res, verify_request): | |
181 | server_name = verify_request.server_name | |
182 | request_id = id(verify_request) | |
183 | server_to_request_ids[server_name].discard(request_id) | |
184 | if not server_to_request_ids[server_name]: | |
185 | d = server_to_deferred.pop(server_name, None) | |
186 | if d: | |
187 | d.callback(None) | |
188 | return res | |
189 | ||
190 | for verify_request in verify_requests: | |
191 | verify_request.deferred.addBoth( | |
192 | remove_deferreds, verify_request, | |
193 | ) | |
206 | 194 | |
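The bookkeeping in `_start_key_lookups` above — group requests by server, drop each request id as it completes, and fire the server's deferred once its set is empty — can be sketched without Twisted. Plain callables stand in for Deferreds, and the names are illustrative:

```python
# Requests grouped by server; the ints stand in for id(verify_request).
requests = [("server-a", 1), ("server-a", 2), ("server-b", 3)]

completed = []  # servers whose per-server deferred has been "fired"

# Mirrors server_to_request_ids.setdefault(...).add(...) in the hunk above.
server_to_request_ids = {}
for server_name, request_id in requests:
    server_to_request_ids.setdefault(server_name, set()).add(request_id)


def on_request_done(server_name, request_id):
    """Analogue of remove_deferreds: resolve the per-server deferred once
    the last outstanding request for that server has completed."""
    ids = server_to_request_ids[server_name]
    ids.discard(request_id)
    if not ids:
        completed.append(server_name)


on_request_done("server-a", 1)
assert completed == []          # server-a still has request 2 outstanding
on_request_done("server-b", 3)  # last request for server-b -> fires
on_request_done("server-a", 2)  # last request for server-a -> fires
print(completed)  # -> ['server-b', 'server-a']
```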
207 | 195 | @defer.inlineCallbacks |
208 | 196 | def wait_for_previous_lookups(self, server_names, server_to_deferred): |
211 | 199 | Args: |
212 | 200 | server_names (list): list of server_names we want to lookup |
213 | 201 | server_to_deferred (dict): server_name to deferred which gets |
214 | resolved once we've finished looking up keys for that server | |
202 | resolved once we've finished looking up keys for that server. | |
203 | The Deferreds should be regular twisted ones which call their | |
204 | callbacks with no logcontext. | |
205 | ||
206 | Returns: a Deferred which resolves once all key lookups for the given | |
207 | servers have completed. Follows the synapse rules of logcontext | |
208 | preservation. | |
215 | 209 | """ |
216 | 210 | while True: |
217 | 211 | wait_on = [ |
225 | 219 | else: |
226 | 220 | break |
227 | 221 | |
222 | def rm(r, server_name_): | |
223 | self.key_downloads.pop(server_name_, None) | |
224 | return r | |
225 | ||
228 | 226 | for server_name, deferred in server_to_deferred.items(): |
229 | d = ObservableDeferred(preserve_context_over_deferred(deferred)) | |
230 | self.key_downloads[server_name] = d | |
231 | ||
232 | def rm(r, server_name): | |
233 | self.key_downloads.pop(server_name, None) | |
234 | return r | |
235 | ||
236 | d.addBoth(rm, server_name) | |
237 | ||
238 | def get_server_verify_keys(self, verify_requests): | |
227 | self.key_downloads[server_name] = deferred | |
228 | deferred.addBoth(rm, server_name) | |
229 | ||
230 | def _get_server_verify_keys(self, verify_requests): | |
239 | 231 | """Tries to find at least one key for each verify request |
240 | 232 | |
241 | 233 | For each verify_request, verify_request.deferred is called back with |
304 | 296 | if not missing_keys: |
305 | 297 | break |
306 | 298 | |
307 | for verify_request in requests_missing_keys.values(): | |
308 | verify_request.deferred.errback(SynapseError( | |
309 | 401, | |
310 | "No key for %s with id %s" % ( | |
311 | verify_request.server_name, verify_request.key_ids, | |
312 | ), | |
313 | Codes.UNAUTHORIZED, | |
314 | )) | |
299 | with PreserveLoggingContext(): | |
300 | for verify_request in requests_missing_keys: | |
301 | verify_request.deferred.errback(SynapseError( | |
302 | 401, | |
303 | "No key for %s with id %s" % ( | |
304 | verify_request.server_name, verify_request.key_ids, | |
305 | ), | |
306 | Codes.UNAUTHORIZED, | |
307 | )) | |
315 | 308 | |
316 | 309 | def on_err(err): |
317 | for verify_request in verify_requests: | |
318 | if not verify_request.deferred.called: | |
319 | verify_request.deferred.errback(err) | |
320 | ||
321 | do_iterations().addErrback(on_err) | |
310 | with PreserveLoggingContext(): | |
311 | for verify_request in verify_requests: | |
312 | if not verify_request.deferred.called: | |
313 | verify_request.deferred.errback(err) | |
314 | ||
315 | preserve_fn(do_iterations)().addErrback(on_err) | |
322 | 316 | |
323 | 317 | @defer.inlineCallbacks |
324 | 318 | def get_keys_from_store(self, server_name_and_key_ids): |
332 | 326 | Deferred: resolves to dict[str, dict[str, VerifyKey]]: map from |
333 | 327 | server_name -> key_id -> VerifyKey |
334 | 328 | """ |
335 | res = yield preserve_context_over_deferred(defer.gatherResults( | |
329 | res = yield logcontext.make_deferred_yieldable(defer.gatherResults( | |
336 | 330 | [ |
337 | 331 | preserve_fn(self.store.get_server_verify_keys)( |
338 | 332 | server_name, key_ids |
340 | 334 | for server_name, key_ids in server_name_and_key_ids |
341 | 335 | ], |
342 | 336 | consumeErrors=True, |
343 | )).addErrback(unwrapFirstError) | |
337 | ).addErrback(unwrapFirstError)) | |
344 | 338 | |
345 | 339 | defer.returnValue(dict(res)) |
346 | 340 | |
361 | 355 | ) |
362 | 356 | defer.returnValue({}) |
363 | 357 | |
364 | results = yield preserve_context_over_deferred(defer.gatherResults( | |
358 | results = yield logcontext.make_deferred_yieldable(defer.gatherResults( | |
365 | 359 | [ |
366 | 360 | preserve_fn(get_key)(p_name, p_keys) |
367 | 361 | for p_name, p_keys in self.perspective_servers.items() |
368 | 362 | ], |
369 | 363 | consumeErrors=True, |
370 | )).addErrback(unwrapFirstError) | |
364 | ).addErrback(unwrapFirstError)) | |
371 | 365 | |
372 | 366 | union_of_keys = {} |
373 | 367 | for result in results: |
401 | 395 | |
402 | 396 | defer.returnValue(keys) |
403 | 397 | |
404 | results = yield preserve_context_over_deferred(defer.gatherResults( | |
398 | results = yield logcontext.make_deferred_yieldable(defer.gatherResults( | |
405 | 399 | [ |
406 | 400 | preserve_fn(get_key)(server_name, key_ids) |
407 | 401 | for server_name, key_ids in server_name_and_key_ids |
408 | 402 | ], |
409 | 403 | consumeErrors=True, |
410 | )).addErrback(unwrapFirstError) | |
404 | ).addErrback(unwrapFirstError)) | |
411 | 405 | |
412 | 406 | merged = {} |
413 | 407 | for result in results: |
484 | 478 | for server_name, response_keys in processed_response.items(): |
485 | 479 | keys.setdefault(server_name, {}).update(response_keys) |
486 | 480 | |
487 | yield preserve_context_over_deferred(defer.gatherResults( | |
481 | yield logcontext.make_deferred_yieldable(defer.gatherResults( | |
488 | 482 | [ |
489 | 483 | preserve_fn(self.store_keys)( |
490 | 484 | server_name=server_name, |
494 | 488 | for server_name, response_keys in keys.items() |
495 | 489 | ], |
496 | 490 | consumeErrors=True |
497 | )).addErrback(unwrapFirstError) | |
491 | ).addErrback(unwrapFirstError)) | |
498 | 492 | |
499 | 493 | defer.returnValue(keys) |
500 | 494 | |
542 | 536 | |
543 | 537 | keys.update(response_keys) |
544 | 538 | |
545 | yield preserve_context_over_deferred(defer.gatherResults( | |
539 | yield logcontext.make_deferred_yieldable(defer.gatherResults( | |
546 | 540 | [ |
547 | 541 | preserve_fn(self.store_keys)( |
548 | 542 | server_name=key_server_name, |
552 | 546 | for key_server_name, verify_keys in keys.items() |
553 | 547 | ], |
554 | 548 | consumeErrors=True |
555 | )).addErrback(unwrapFirstError) | |
549 | ).addErrback(unwrapFirstError)) | |
556 | 550 | |
557 | 551 | defer.returnValue(keys) |
558 | 552 | |
618 | 612 | response_keys.update(verify_keys) |
619 | 613 | response_keys.update(old_verify_keys) |
620 | 614 | |
621 | yield preserve_context_over_deferred(defer.gatherResults( | |
615 | yield logcontext.make_deferred_yieldable(defer.gatherResults( | |
622 | 616 | [ |
623 | 617 | preserve_fn(self.store.store_server_keys_json)( |
624 | 618 | server_name=server_name, |
631 | 625 | for key_id in updated_key_ids |
632 | 626 | ], |
633 | 627 | consumeErrors=True, |
634 | )).addErrback(unwrapFirstError) | |
628 | ).addErrback(unwrapFirstError)) | |
635 | 629 | |
636 | 630 | results[server_name] = response_keys |
637 | 631 | |
709 | 703 | |
710 | 704 | defer.returnValue(verify_keys) |
711 | 705 | |
712 | @defer.inlineCallbacks | |
713 | 706 | def store_keys(self, server_name, from_server, verify_keys): |
714 | 707 | """Store a collection of verify keys for a given server |
715 | 708 | Args: |
720 | 713 | A deferred that completes when the keys are stored. |
721 | 714 | """ |
722 | 715 | # TODO(markjh): Store whether the keys have expired. |
723 | yield preserve_context_over_deferred(defer.gatherResults( | |
716 | return logcontext.make_deferred_yieldable(defer.gatherResults( | |
724 | 717 | [ |
725 | 718 | preserve_fn(self.store.store_server_verify_key)( |
726 | 719 | server_name, server_name, key.time_added, key |
728 | 721 | for key_id, key in verify_keys.items() |
729 | 722 | ], |
730 | 723 | consumeErrors=True, |
731 | )).addErrback(unwrapFirstError) | |
724 | ).addErrback(unwrapFirstError)) | |
725 | ||
726 | ||
727 | @defer.inlineCallbacks | |
728 | def _handle_key_deferred(verify_request): | |
729 | server_name = verify_request.server_name | |
730 | try: | |
731 | with PreserveLoggingContext(): | |
732 | _, key_id, verify_key = yield verify_request.deferred | |
733 | except IOError as e: | |
734 | logger.warn( | |
735 | "Got IOError when downloading keys for %s: %s %s", | |
736 | server_name, type(e).__name__, str(e.message), | |
737 | ) | |
738 | raise SynapseError( | |
739 | 502, | |
740 | "Error downloading keys for %s" % (server_name,), | |
741 | Codes.UNAUTHORIZED, | |
742 | ) | |
743 | except Exception as e: | |
744 | logger.exception( | |
745 | "Got Exception when downloading keys for %s: %s %s", | |
746 | server_name, type(e).__name__, str(e.message), | |
747 | ) | |
748 | raise SynapseError( | |
749 | 401, | |
750 | "No key for %s with id %s" % (server_name, verify_request.key_ids), | |
751 | Codes.UNAUTHORIZED, | |
752 | ) | |
753 | ||
754 | json_object = verify_request.json_object | |
755 | ||
756 | logger.debug("Got key %s %s:%s for server %s, verifying" % ( | |
757 | key_id, verify_key.alg, verify_key.version, server_name, | |
758 | )) | |
759 | try: | |
760 | verify_signed_json(json_object, server_name, verify_key) | |
761 | except: | |
762 | raise SynapseError( | |
763 | 401, | |
764 | "Invalid signature for server %s with key %s:%s" % ( | |
765 | server_name, verify_key.alg, verify_key.version | |
766 | ), | |
767 | Codes.UNAUTHORIZED, | |
768 | ) |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2017 New Vector Ltd. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | ||
16 | def check_event_for_spam(event): | |
17 | """Checks if a given event is considered "spammy" by this server. | |
18 | ||
19 | If the server considers an event spammy, then it will be rejected if | |
20 | sent by a local user. If it is sent by a user on another server, then | |
21 | users receive a blank event. | |
22 | ||
23 | Args: | |
24 | event (synapse.events.EventBase): the event to be checked | |
25 | ||
26 | Returns: | |
27 | bool: True if the event is spammy. | |
28 | """ | |
29 | if not hasattr(event, "content") or "body" not in event.content: | |
30 | return False | |
31 | ||
32 | # for example: | |
33 | # | |
34 | # if "the third flower is green" in event.content["body"]: | |
35 | # return True | |
36 | ||
37 | return False |
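A quick illustration of how the new hook behaves with the commented-out example rule enabled. The `FakeEvent` stub is purely for illustration; real callers pass a `synapse.events.EventBase`:

```python
class FakeEvent(object):
    """Minimal stand-in for synapse.events.EventBase with a content dict."""
    def __init__(self, content):
        self.content = content


def check_event_for_spam(event):
    # Same shape as the hook above, with the example rule un-commented.
    if not hasattr(event, "content") or "body" not in event.content:
        return False
    if "the third flower is green" in event.content["body"]:
        return True
    return False


print(check_event_for_spam(FakeEvent({"body": "hello"})))      # -> False
print(check_event_for_spam(
    FakeEvent({"body": "the third flower is green"})))         # -> True
print(check_event_for_spam(FakeEvent({"msgtype": "m.image"}))) # -> False
```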
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | ||
15 | ||
16 | from twisted.internet import defer | |
17 | ||
18 | from synapse.events.utils import prune_event | |
19 | ||
20 | from synapse.crypto.event_signing import check_event_content_hash | |
14 | import logging | |
21 | 15 | |
22 | 16 | from synapse.api.errors import SynapseError |
23 | ||
24 | from synapse.util import unwrapFirstError | |
25 | from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred | |
26 | ||
27 | import logging | |
28 | ||
17 | from synapse.crypto.event_signing import check_event_content_hash | |
18 | from synapse.events import spamcheck | |
19 | from synapse.events.utils import prune_event | |
20 | from synapse.util import unwrapFirstError, logcontext | |
21 | from twisted.internet import defer | |
29 | 22 | |
30 | 23 | logger = logging.getLogger(__name__) |
31 | 24 | |
56 | 49 | """ |
57 | 50 | deferreds = self._check_sigs_and_hashes(pdus) |
58 | 51 | |
59 | def callback(pdu): | |
60 | return pdu | |
52 | @defer.inlineCallbacks | |
53 | def handle_check_result(pdu, deferred): | |
54 | try: | |
55 | res = yield logcontext.make_deferred_yieldable(deferred) | |
56 | except SynapseError: | |
57 | res = None | |
61 | 58 | |
62 | def errback(failure, pdu): | |
63 | failure.trap(SynapseError) | |
64 | return None | |
65 | ||
66 | def try_local_db(res, pdu): | |
67 | 59 | if not res: |
68 | 60 | # Check local db. |
69 | return self.store.get_event( | |
61 | res = yield self.store.get_event( | |
70 | 62 | pdu.event_id, |
71 | 63 | allow_rejected=True, |
72 | 64 | allow_none=True, |
73 | 65 | ) |
74 | return res | |
75 | 66 | |
76 | def try_remote(res, pdu): | |
77 | 67 | if not res and pdu.origin != origin: |
78 | return self.get_pdu( | |
79 | destinations=[pdu.origin], | |
80 | event_id=pdu.event_id, | |
81 | outlier=outlier, | |
82 | timeout=10000, | |
83 | ).addErrback(lambda e: None) | |
84 | return res | |
68 | try: | |
69 | res = yield self.get_pdu( | |
70 | destinations=[pdu.origin], | |
71 | event_id=pdu.event_id, | |
72 | outlier=outlier, | |
73 | timeout=10000, | |
74 | ) | |
75 | except SynapseError: | |
76 | pass | |
85 | 77 | |
86 | def warn(res, pdu): | |
87 | 78 | if not res: |
88 | 79 | logger.warn( |
89 | 80 | "Failed to find copy of %s with valid signature", |
90 | 81 | pdu.event_id, |
91 | 82 | ) |
92 | return res | |
93 | 83 | |
94 | for pdu, deferred in zip(pdus, deferreds): | |
95 | deferred.addCallbacks( | |
96 | callback, errback, errbackArgs=[pdu] | |
97 | ).addCallback( | |
98 | try_local_db, pdu | |
99 | ).addCallback( | |
100 | try_remote, pdu | |
101 | ).addCallback( | |
102 | warn, pdu | |
84 | defer.returnValue(res) | |
85 | ||
86 | handle = logcontext.preserve_fn(handle_check_result) | |
87 | deferreds2 = [ | |
88 | handle(pdu, deferred) | |
89 | for pdu, deferred in zip(pdus, deferreds) | |
90 | ] | |
91 | ||
92 | valid_pdus = yield logcontext.make_deferred_yieldable( | |
93 | defer.gatherResults( | |
94 | deferreds2, | |
95 | consumeErrors=True, | |
103 | 96 | ) |
104 | ||
105 | valid_pdus = yield preserve_context_over_deferred(defer.gatherResults( | |
106 | deferreds, | |
107 | consumeErrors=True | |
108 | )).addErrback(unwrapFirstError) | |
97 | ).addErrback(unwrapFirstError) | |
109 | 98 | |
110 | 99 | if include_none: |
111 | 100 | defer.returnValue(valid_pdus) |
113 | 102 | defer.returnValue([p for p in valid_pdus if p]) |
114 | 103 | |
115 | 104 | def _check_sigs_and_hash(self, pdu): |
116 | return self._check_sigs_and_hashes([pdu])[0] | |
105 | return logcontext.make_deferred_yieldable( | |
106 | self._check_sigs_and_hashes([pdu])[0], | |
107 | ) | |
117 | 108 | |
118 | 109 | def _check_sigs_and_hashes(self, pdus): |
119 | """Throws a SynapseError if a PDU does not have the correct | |
120 | signatures. | |
110 | """Checks that each of the received events is correctly signed by the | |
111 | sending server. | |
112 | ||
113 | Args: | |
114 | pdus (list[FrozenEvent]): the events to be checked | |
121 | 115 | |
122 | 116 | Returns: |
123 | FrozenEvent: Either the given event or it redacted if it failed the | |
124 | content hash check. | |
117 | list[Deferred]: for each input event, a deferred which: | |
118 | * returns the original event if the checks pass | |
119 | * returns a redacted version of the event (if the signature | |
120 | matched but the hash did not) | |
121 | * throws a SynapseError if the signature check failed. | |
122 | The deferreds run their callbacks in the sentinel logcontext. | |
125 | 123 | """ |
126 | 124 | |
127 | 125 | redacted_pdus = [ |
129 | 127 | for pdu in pdus |
130 | 128 | ] |
131 | 129 | |
132 | deferreds = preserve_fn(self.keyring.verify_json_objects_for_server)([ | |
130 | deferreds = self.keyring.verify_json_objects_for_server([ | |
133 | 131 | (p.origin, p.get_pdu_json()) |
134 | 132 | for p in redacted_pdus |
135 | 133 | ]) |
136 | 134 | |
135 | ctx = logcontext.LoggingContext.current_context() | |
136 | ||
137 | 137 | def callback(_, pdu, redacted): |
138 | if not check_event_content_hash(pdu): | |
139 | logger.warn( | |
140 | "Event content has been tampered, redacting %s: %s", | |
141 | pdu.event_id, pdu.get_pdu_json() | |
142 | ) | |
143 | return redacted | |
144 | return pdu | |
138 | with logcontext.PreserveLoggingContext(ctx): | |
139 | if not check_event_content_hash(pdu): | |
140 | logger.warn( | |
141 | "Event content has been tampered, redacting %s: %s", | |
142 | pdu.event_id, pdu.get_pdu_json() | |
143 | ) | |
144 | return redacted | |
145 | ||
146 | if spamcheck.check_event_for_spam(pdu): | |
147 | logger.warn( | |
148 | "Event contains spam, redacting %s: %s", | |
149 | pdu.event_id, pdu.get_pdu_json() | |
150 | ) | |
151 | return redacted | |
152 | ||
153 | return pdu | |
145 | 154 | |
146 | 155 | def errback(failure, pdu): |
147 | 156 | failure.trap(SynapseError) |
148 | logger.warn( | |
149 | "Signature check failed for %s", | |
150 | pdu.event_id, | |
151 | ) | |
157 | with logcontext.PreserveLoggingContext(ctx): | |
158 | logger.warn( | |
159 | "Signature check failed for %s", | |
160 | pdu.event_id, | |
161 | ) | |
152 | 162 | return failure |
153 | 163 | |
154 | 164 | for deferred, pdu, redacted in zip(deferreds, pdus, redacted_pdus): |
21 | 21 | from synapse.api.errors import ( |
22 | 22 | CodeMessageException, HttpResponseException, SynapseError, |
23 | 23 | ) |
24 | from synapse.util import unwrapFirstError | |
24 | from synapse.util import unwrapFirstError, logcontext | |
25 | 25 | from synapse.util.caches.expiringcache import ExpiringCache |
26 | 26 | from synapse.util.logutils import log_function |
27 | 27 | from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred |
188 | 188 | ] |
189 | 189 | |
190 | 190 | # FIXME: We should handle signature failures more gracefully. |
191 | pdus[:] = yield preserve_context_over_deferred(defer.gatherResults( | |
191 | pdus[:] = yield logcontext.make_deferred_yieldable(defer.gatherResults( | |
192 | 192 | self._check_sigs_and_hashes(pdus), |
193 | 193 | consumeErrors=True, |
194 | )).addErrback(unwrapFirstError) | |
194 | ).addErrback(unwrapFirstError)) | |
195 | 195 | |
196 | 196 | defer.returnValue(pdus) |
197 | 197 | |
251 | 251 | pdu = pdu_list[0] |
252 | 252 | |
253 | 253 | # Check signatures are correct. |
254 | signed_pdu = yield self._check_sigs_and_hashes([pdu])[0] | |
254 | signed_pdu = yield self._check_sigs_and_hash(pdu) | |
255 | 255 | |
256 | 256 | break |
257 | 257 |
152 | 152 | class BaseFederationServlet(object): |
153 | 153 | REQUIRE_AUTH = True |
154 | 154 | |
155 | def __init__(self, handler, authenticator, ratelimiter, server_name, | |
156 | room_list_handler): | |
155 | def __init__(self, handler, authenticator, ratelimiter, server_name): | |
157 | 156 | self.handler = handler |
158 | 157 | self.authenticator = authenticator |
159 | 158 | self.ratelimiter = ratelimiter |
160 | self.room_list_handler = room_list_handler | |
161 | 159 | |
162 | 160 | def _wrap(self, func): |
163 | 161 | authenticator = self.authenticator |
589 | 587 | else: |
590 | 588 | network_tuple = ThirdPartyInstanceID(None, None) |
591 | 589 | |
592 | data = yield self.room_list_handler.get_local_public_room_list( | |
590 | data = yield self.handler.get_local_public_room_list( | |
593 | 591 | limit, since_token, |
594 | 592 | network_tuple=network_tuple |
595 | 593 | ) |
610 | 608 | })) |
611 | 609 | |
612 | 610 | |
613 | SERVLET_CLASSES = ( | |
611 | FEDERATION_SERVLET_CLASSES = ( | |
614 | 612 | FederationSendServlet, |
615 | 613 | FederationPullServlet, |
616 | 614 | FederationEventServlet, |
633 | 631 | FederationThirdPartyInviteExchangeServlet, |
634 | 632 | On3pidBindServlet, |
635 | 633 | OpenIdUserInfo, |
636 | PublicRoomList, | |
637 | 634 | FederationVersionServlet, |
638 | 635 | ) |
639 | 636 | |
637 | ROOM_LIST_CLASSES = ( | |
638 | PublicRoomList, | |
639 | ) | |
640 | ||
640 | 641 | |
641 | 642 | def register_servlets(hs, resource, authenticator, ratelimiter): |
642 | for servletclass in SERVLET_CLASSES: | |
643 | for servletclass in FEDERATION_SERVLET_CLASSES: | |
643 | 644 | servletclass( |
644 | 645 | handler=hs.get_replication_layer(), |
645 | 646 | authenticator=authenticator, |
646 | 647 | ratelimiter=ratelimiter, |
647 | 648 | server_name=hs.hostname, |
648 | room_list_handler=hs.get_room_list_handler(), | |
649 | 649 | ).register(resource) |
650 | ||
651 | for servletclass in ROOM_LIST_CLASSES: | |
652 | servletclass( | |
653 | handler=hs.get_room_list_handler(), | |
654 | authenticator=authenticator, | |
655 | ratelimiter=ratelimiter, | |
656 | server_name=hs.hostname, | |
657 | ).register(resource) |
269 | 269 | user_id (str) |
270 | 270 | from_token (StreamToken) |
271 | 271 | """ |
272 | now_token = yield self.hs.get_event_sources().get_current_token() | |
273 | ||
272 | 274 | room_ids = yield self.store.get_rooms_for_user(user_id) |
273 | 275 | |
274 | 276 | # First we check if any devices have changed |
279 | 281 | # Then work out if any users have since joined |
280 | 282 | rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key) |
281 | 283 | |
284 | member_events = yield self.store.get_membership_changes_for_user( | |
285 | user_id, from_token.room_key, now_token.room_key | |
286 | ) | |
287 | rooms_changed.update(event.room_id for event in member_events) | |
288 | ||
282 | 289 | stream_ordering = RoomStreamToken.parse_stream_token( |
283 | from_token.room_key).stream | |
290 | from_token.room_key | |
291 | ).stream | |
284 | 292 | |
285 | 293 | possibly_changed = set(changed) |
294 | possibly_left = set() | |
286 | 295 | for room_id in rooms_changed: |
296 | current_state_ids = yield self.store.get_current_state_ids(room_id) | |
297 | ||
298 | # The user may have left the room | |
299 | # TODO: Check if they actually did or if we were just invited. | |
300 | if room_id not in room_ids: | |
301 | for key, event_id in current_state_ids.iteritems(): | |
302 | etype, state_key = key | |
303 | if etype != EventTypes.Member: | |
304 | continue | |
305 | possibly_left.add(state_key) | |
306 | continue | |
307 | ||
287 | 308 | # Fetch the current state at the time. |
288 | 309 | try: |
289 | 310 | event_ids = yield self.store.get_forward_extremeties_for_room( |
294 | 315 | # ordering: treat it the same as a new room |
295 | 316 | event_ids = [] |
296 | 317 | |
297 | current_state_ids = yield self.store.get_current_state_ids(room_id) | |
298 | ||
299 | 318 | # special-case for an empty prev state: include all members |
300 | 319 | # in the changed list |
301 | 320 | if not event_ids: |
306 | 325 | possibly_changed.add(state_key) |
307 | 326 | continue |
308 | 327 | |
328 | current_member_id = current_state_ids.get((EventTypes.Member, user_id)) | |
329 | if not current_member_id: | |
330 | continue | |
331 | ||
309 | 332 | # mapping from event_id -> state_dict |
310 | 333 | prev_state_ids = yield self.store.get_state_ids_for_events(event_ids) |
334 | ||
335 | # Check if we've joined the room. If so, we just blindly add all the | |
336 | # users to the "possibly changed" list. | |
337 | for state_dict in prev_state_ids.itervalues(): | |
338 | member_event = state_dict.get((EventTypes.Member, user_id), None) | |
339 | if not member_event or member_event != current_member_id: | |
340 | for key, event_id in current_state_ids.iteritems(): | |
341 | etype, state_key = key | |
342 | if etype != EventTypes.Member: | |
343 | continue | |
344 | possibly_changed.add(state_key) | |
345 | break | |
311 | 346 | |
312 | 347 | # If there has been any change in membership, include them in the |
313 | 348 | # possibly changed list. We'll check if they are joined below, |
319 | 354 | |
320 | 355 | # check if this member has changed since any of the extremities |
321 | 356 | # at the stream_ordering, and add them to the list if so. |
322 | for state_dict in prev_state_ids.values(): | |
357 | for state_dict in prev_state_ids.itervalues(): | |
323 | 358 | prev_event_id = state_dict.get(key, None) |
324 | 359 | if not prev_event_id or prev_event_id != event_id: |
325 | possibly_changed.add(state_key) | |
360 | if state_key != user_id: | |
361 | possibly_changed.add(state_key) | |
326 | 362 | break |
327 | 363 | |
328 | users_who_share_room = yield self.store.get_users_who_share_room_with_user( | |
329 | user_id | |
330 | ) | |
331 | ||
332 | # Take the intersection of the users whose devices may have changed | |
333 | # and those that actually still share a room with the user | |
334 | defer.returnValue(users_who_share_room & possibly_changed) | |
364 | if possibly_changed or possibly_left: | |
365 | users_who_share_room = yield self.store.get_users_who_share_room_with_user( | |
366 | user_id | |
367 | ) | |
368 | ||
369 | # Take the intersection of the users whose devices may have changed | |
370 | # and those that actually still share a room with the user | |
371 | possibly_joined = possibly_changed & users_who_share_room | |
372 | possibly_left = (possibly_changed | possibly_left) - users_who_share_room | |
373 | else: | |
374 | possibly_joined = [] | |
375 | possibly_left = [] | |
376 | ||
377 | defer.returnValue({ | |
378 | "changed": list(possibly_joined), | |
379 | "left": list(possibly_left), | |
380 | }) | |
335 | 381 | |
336 | 382 | @defer.inlineCallbacks |
337 | 383 | def on_federation_query_user_devices(self, user_id): |
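The hunk above replaces the single "users who share a room" intersection with separate changed/left sets. The set arithmetic can be sketched on its own; the function name and plain-set signature below are illustrative, not Synapse APIs:

```python
def partition_device_list_changes(possibly_changed, possibly_left,
                                  users_who_share_room):
    """Split users whose device lists may have changed into those we still
    share a room with ("changed") and those we no longer do ("left")."""
    if not (possibly_changed or possibly_left):
        # Nothing to report; skip the (expensive) shared-room lookup.
        return {"changed": [], "left": []}

    # Users we still share a room with are reported as changed ...
    possibly_joined = possibly_changed & users_who_share_room
    # ... everyone else we were tracking is reported as left.
    gone = (possibly_changed | possibly_left) - users_who_share_room
    return {"changed": sorted(possibly_joined), "left": sorted(gone)}
```

The early return mirrors the diff's `else` branch, which avoids calling `get_users_who_share_room_with_user` when there is nothing to partition.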
1073 | 1073 | if is_blocked: |
1074 | 1074 | raise SynapseError(403, "This room has been blocked on this server") |
1075 | 1075 | |
1076 | if self.hs.config.block_non_admin_invites: | |
1077 | raise SynapseError(403, "This server does not accept room invites") | |
1078 | ||
1076 | 1079 | membership = event.content.get("membership") |
1077 | 1080 | if event.type != EventTypes.Member or membership != Membership.INVITE: |
1078 | 1081 | raise SynapseError(400, "The event was not an m.room.member invite event") |
1412 | 1415 | auth_events=auth_events, |
1413 | 1416 | ) |
1414 | 1417 | |
1415 | if not event.internal_metadata.is_outlier(): | |
1418 | if not event.internal_metadata.is_outlier() and not backfilled: | |
1416 | 1419 | yield self.action_generator.handle_push_actions_for_event( |
1417 | 1420 | event, context |
1418 | 1421 | ) |
1605 | 1608 | |
1606 | 1609 | context.rejected = RejectedReason.AUTH_ERROR |
1607 | 1610 | |
1608 | if event.type == EventTypes.GuestAccess: | |
1611 | if event.type == EventTypes.GuestAccess and not context.rejected: | |
1609 | 1612 | yield self.maybe_kick_guest_users(event) |
1610 | 1613 | |
1611 | 1614 | defer.returnValue(context) |
2089 | 2092 | @defer.inlineCallbacks |
2090 | 2093 | @log_function |
2091 | 2094 | def on_exchange_third_party_invite_request(self, origin, room_id, event_dict): |
2095 | """Handle an exchange_third_party_invite request from a remote server | |
2096 | ||
2097 | The remote server will call this when it wants to turn a 3pid invite | |
2098 | into a normal m.room.member invite. | |
2099 | ||
2100 | Returns: | |
2101 | Deferred: resolves (to None) | |
2102 | """ | |
2092 | 2103 | builder = self.event_builder_factory.new(event_dict) |
2093 | 2104 | |
2094 | 2105 | message_handler = self.hs.get_handlers().message_handler |
2107 | 2118 | raise e |
2108 | 2119 | yield self._check_signature(event, context) |
2109 | 2120 | |
2121 | # XXX we send the invite here, but send_membership_event also sends it, | |
2122 | # so we end up making two requests. I think this is redundant. | |
2110 | 2123 | returned_invite = yield self.send_invite(origin, event) |
2111 | 2124 | # TODO: Make sure the signatures actually are correct. |
2112 | 2125 | event.signatures.update(returned_invite.signatures) |
2126 | ||
2113 | 2127 | member_handler = self.hs.get_handlers().room_member_handler |
2114 | 2128 | yield member_handler.send_membership_event(None, event, context) |
2115 | 2129 |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | ||
14 | from synapse.events import spamcheck | |
15 | 15 | from twisted.internet import defer |
16 | 16 | |
17 | 17 | from synapse.api.constants import EventTypes, Membership |
320 | 320 | token_id=requester.access_token_id, |
321 | 321 | txn_id=txn_id |
322 | 322 | ) |
323 | ||
324 | if spamcheck.check_event_for_spam(event): | |
325 | raise SynapseError( | |
326 | 403, "Spam is not permitted here", Codes.FORBIDDEN | |
327 | ) | |
328 | ||
323 | 329 | yield self.send_nonmember_event( |
324 | 330 | requester, |
325 | 331 | event, |
190 | 190 | if action in ["kick", "unban"]: |
191 | 191 | effective_membership_state = "leave" |
192 | 192 | |
193 | # if this is a join with a 3pid signature, we may need to turn a 3pid | |
194 | # invite into a normal invite before we can handle the join. | |
193 | 195 | if third_party_signed is not None: |
194 | 196 | replication = self.hs.get_replication_layer() |
195 | 197 | yield replication.exchange_third_party_invite( |
206 | 208 | is_blocked = yield self.store.is_room_blocked(room_id) |
207 | 209 | if is_blocked: |
208 | 210 | raise SynapseError(403, "This room has been blocked on this server") |
211 | ||
212 | if (effective_membership_state == "invite" and | |
213 | self.hs.config.block_non_admin_invites): | |
214 | is_requester_admin = yield self.auth.is_server_admin( | |
215 | requester.user, | |
216 | ) | |
217 | if not is_requester_admin: | |
218 | raise SynapseError( | |
219 | 403, "Invites have been disabled on this server", | |
220 | ) | |
209 | 221 | |
210 | 222 | latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id) |
211 | 223 | current_state_ids = yield self.state_handler.get_current_state_ids( |
470 | 482 | requester, |
471 | 483 | txn_id |
472 | 484 | ): |
485 | if self.hs.config.block_non_admin_invites: | |
486 | is_requester_admin = yield self.auth.is_server_admin( | |
487 | requester.user, | |
488 | ) | |
489 | if not is_requester_admin: | |
490 | raise SynapseError( | |
491 | 403, "Invites have been disabled on this server", | |
492 | Codes.FORBIDDEN, | |
493 | ) | |
494 | ||
473 | 495 | invitee = yield self._lookup_3pid( |
474 | 496 | id_server, medium, address |
475 | 497 | ) |
107 | 107 | return True |
108 | 108 | |
109 | 109 | |
110 | class DeviceLists(collections.namedtuple("DeviceLists", [ | |
111 | "changed", # list of user_ids whose devices may have changed | |
112 | "left", # list of user_ids whose devices we no longer track | |
113 | ])): | |
114 | __slots__ = [] | |
115 | ||
116 | def __nonzero__(self): | |
117 | return bool(self.changed or self.left) | |
118 | ||
119 | ||
110 | 120 | class SyncResult(collections.namedtuple("SyncResult", [ |
111 | 121 | "next_batch", # Token for the next sync |
112 | 122 | "presence", # List of presence events for the user. |
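The new `DeviceLists` result is falsy when both member lists are empty, which lets callers cheaply skip emitting an empty `device_lists` section. The same pattern in isolation (the diff targets Python 2, where truthiness is `__nonzero__`; `__bool__` is the Python 3 spelling, so this sketch defines both):

```python
import collections

class DeviceLists(collections.namedtuple("DeviceLists", [
    "changed",  # list of user_ids whose devices may have changed
    "left",     # list of user_ids whose devices we no longer track
])):
    __slots__ = []

    def __bool__(self):
        return bool(self.changed or self.left)

    __nonzero__ = __bool__  # Python 2 name for the same hook
```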
289 | 299 | |
290 | 300 | if recents: |
291 | 301 | recents = sync_config.filter_collection.filter_room_timeline(recents) |
302 | ||
303 | # We check if there are any state events; if there are, we pass | |
304 | # all current state events to the filter_events function. This is to | |
305 | # ensure that we always include current state in the timeline | |
306 | current_state_ids = frozenset() | |
307 | if any(e.is_state() for e in recents): | |
308 | current_state_ids = yield self.state.get_current_state_ids(room_id) | |
309 | current_state_ids = frozenset(current_state_ids.itervalues()) | |
310 | ||
292 | 311 | recents = yield filter_events_for_client( |
293 | 312 | self.store, |
294 | 313 | sync_config.user.to_string(), |
295 | 314 | recents, |
315 | always_include_ids=current_state_ids, | |
296 | 316 | ) |
297 | 317 | else: |
298 | 318 | recents = [] |
324 | 344 | loaded_recents = sync_config.filter_collection.filter_room_timeline( |
325 | 345 | events |
326 | 346 | ) |
347 | ||
348 | # We check if there are any state events; if there are, we pass | |
349 | # all current state events to the filter_events function. This is to | |
350 | # ensure that we always include current state in the timeline | |
351 | current_state_ids = frozenset() | |
352 | if any(e.is_state() for e in loaded_recents): | |
353 | current_state_ids = yield self.state.get_current_state_ids(room_id) | |
354 | current_state_ids = frozenset(current_state_ids.itervalues()) | |
355 | ||
327 | 356 | loaded_recents = yield filter_events_for_client( |
328 | 357 | self.store, |
329 | 358 | sync_config.user.to_string(), |
330 | 359 | loaded_recents, |
360 | always_include_ids=current_state_ids, | |
331 | 361 | ) |
332 | 362 | loaded_recents.extend(recents) |
333 | 363 | recents = loaded_recents |
534 | 564 | res = yield self._generate_sync_entry_for_rooms( |
535 | 565 | sync_result_builder, account_data_by_room |
536 | 566 | ) |
537 | newly_joined_rooms, newly_joined_users = res | |
567 | newly_joined_rooms, newly_joined_users, _, _ = res | |
568 | _, _, newly_left_rooms, newly_left_users = res | |
538 | 569 | |
539 | 570 | block_all_presence_data = ( |
540 | 571 | since_token is None and |
548 | 579 | yield self._generate_sync_entry_for_to_device(sync_result_builder) |
549 | 580 | |
550 | 581 | device_lists = yield self._generate_sync_entry_for_device_list( |
551 | sync_result_builder | |
582 | sync_result_builder, | |
583 | newly_joined_rooms=newly_joined_rooms, | |
584 | newly_joined_users=newly_joined_users, | |
585 | newly_left_rooms=newly_left_rooms, | |
586 | newly_left_users=newly_left_users, | |
552 | 587 | ) |
553 | 588 | |
554 | 589 | device_id = sync_config.device_id |
573 | 608 | |
574 | 609 | @measure_func("_generate_sync_entry_for_device_list") |
575 | 610 | @defer.inlineCallbacks |
576 | def _generate_sync_entry_for_device_list(self, sync_result_builder): | |
611 | def _generate_sync_entry_for_device_list(self, sync_result_builder, | |
612 | newly_joined_rooms, newly_joined_users, | |
613 | newly_left_rooms, newly_left_users): | |
577 | 614 | user_id = sync_result_builder.sync_config.user.to_string() |
578 | 615 | since_token = sync_result_builder.since_token |
579 | 616 | |
580 | 617 | if since_token and since_token.device_list_key: |
581 | room_ids = yield self.store.get_rooms_for_user(user_id) | |
582 | ||
583 | user_ids_changed = set() | |
584 | 618 | changed = yield self.store.get_user_whose_devices_changed( |
585 | 619 | since_token.device_list_key |
586 | 620 | ) |
587 | for other_user_id in changed: | |
588 | other_room_ids = yield self.store.get_rooms_for_user(other_user_id) | |
589 | if room_ids.intersection(other_room_ids): | |
590 | user_ids_changed.add(other_user_id) | |
591 | ||
592 | defer.returnValue(user_ids_changed) | |
621 | ||
622 | # TODO: Be more clever than this, i.e. remove users who we already | |
623 | # share a room with? | |
624 | for room_id in newly_joined_rooms: | |
625 | joined_users = yield self.state.get_current_user_in_room(room_id) | |
626 | newly_joined_users.update(joined_users) | |
627 | ||
628 | for room_id in newly_left_rooms: | |
629 | left_users = yield self.state.get_current_user_in_room(room_id) | |
630 | newly_left_users.update(left_users) | |
631 | ||
632 | # TODO: Check that these users are actually new, i.e. either they | |
633 | # weren't in the previous sync *or* they left and rejoined. | |
634 | changed.update(newly_joined_users) | |
635 | ||
636 | if not changed and not newly_left_users: | |
637 | defer.returnValue(DeviceLists( | |
638 | changed=[], | |
639 | left=newly_left_users, | |
640 | )) | |
641 | ||
642 | users_who_share_room = yield self.store.get_users_who_share_room_with_user( | |
643 | user_id | |
644 | ) | |
645 | ||
646 | defer.returnValue(DeviceLists( | |
647 | changed=users_who_share_room & changed, | |
648 | left=set(newly_left_users) - users_who_share_room, | |
649 | )) | |
593 | 650 | else: |
594 | defer.returnValue([]) | |
651 | defer.returnValue(DeviceLists( | |
652 | changed=[], | |
653 | left=[], | |
654 | )) | |
595 | 655 | |
596 | 656 | @defer.inlineCallbacks |
597 | 657 | def _generate_sync_entry_for_to_device(self, sync_result_builder): |
755 | 815 | account_data_by_room(dict): Dictionary of per room account data |
756 | 816 | |
757 | 817 | Returns: |
758 | Deferred(tuple): Returns a 2-tuple of | |
759 | `(newly_joined_rooms, newly_joined_users)` | |
818 | Deferred(tuple): Returns a 4-tuple of | |
819 | `(newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users)` | |
760 | 820 | """ |
761 | 821 | user_id = sync_result_builder.sync_config.user.to_string() |
762 | 822 | block_all_room_ephemeral = ( |
787 | 847 | ) |
788 | 848 | if not tags_by_room: |
789 | 849 | logger.debug("no-oping sync") |
790 | defer.returnValue(([], [])) | |
850 | defer.returnValue(([], [], [], [])) | |
791 | 851 | |
792 | 852 | ignored_account_data = yield self.store.get_global_account_data_by_type_for_user( |
793 | 853 | "m.ignored_user_list", user_id=user_id, |
800 | 860 | |
801 | 861 | if since_token: |
802 | 862 | res = yield self._get_rooms_changed(sync_result_builder, ignored_users) |
803 | room_entries, invited, newly_joined_rooms = res | |
863 | room_entries, invited, newly_joined_rooms, newly_left_rooms = res | |
804 | 864 | |
805 | 865 | tags_by_room = yield self.store.get_updated_tags( |
806 | 866 | user_id, since_token.account_data_key, |
808 | 868 | else: |
809 | 869 | res = yield self._get_all_rooms(sync_result_builder, ignored_users) |
810 | 870 | room_entries, invited, newly_joined_rooms = res |
871 | newly_left_rooms = [] | |
811 | 872 | |
812 | 873 | tags_by_room = yield self.store.get_tags_for_user(user_id) |
813 | 874 | |
828 | 889 | |
829 | 890 | # Now we want to get any newly joined users |
830 | 891 | newly_joined_users = set() |
892 | newly_left_users = set() | |
831 | 893 | if since_token: |
832 | 894 | for joined_sync in sync_result_builder.joined: |
833 | 895 | it = itertools.chain( |
834 | joined_sync.timeline.events, joined_sync.state.values() | |
896 | joined_sync.timeline.events, joined_sync.state.itervalues() | |
835 | 897 | ) |
836 | 898 | for event in it: |
837 | 899 | if event.type == EventTypes.Member: |
838 | 900 | if event.membership == Membership.JOIN: |
839 | 901 | newly_joined_users.add(event.state_key) |
840 | ||
841 | defer.returnValue((newly_joined_rooms, newly_joined_users)) | |
902 | else: | |
903 | prev_content = event.unsigned.get("prev_content", {}) | |
904 | prev_membership = prev_content.get("membership", None) | |
905 | if prev_membership == Membership.JOIN: | |
906 | newly_left_users.add(event.state_key) | |
907 | ||
908 | newly_left_users -= newly_joined_users | |
909 | ||
910 | defer.returnValue(( | |
911 | newly_joined_rooms, | |
912 | newly_joined_users, | |
913 | newly_left_rooms, | |
914 | newly_left_users, | |
915 | )) | |
842 | 916 | |
843 | 917 | @defer.inlineCallbacks |
844 | 918 | def _have_rooms_changed(self, sync_result_builder): |
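The loop above uses each member event's `prev_content` to distinguish a real leave (previous membership was `join`) from, say, a declined invite. A standalone sketch of that classification over plain dicts; this helper is hypothetical and not part of Synapse:

```python
def classify_membership_changes(events):
    """events: iterable of dicts with "state_key", "membership" and an
    optional "prev_membership". Returns (newly_joined, newly_left) sets."""
    newly_joined, newly_left = set(), set()
    for event in events:
        if event["membership"] == "join":
            newly_joined.add(event["state_key"])
        elif event.get("prev_membership") == "join":
            # Only a join -> non-join transition counts as leaving;
            # e.g. invite -> leave (a declined invite) does not.
            newly_left.add(event["state_key"])
    # A user who left and rejoined within the window counts as joined only,
    # matching the diff's "newly_left_users -= newly_joined_users".
    newly_left -= newly_joined
    return newly_joined, newly_left
```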
908 | 982 | mem_change_events_by_room_id.setdefault(event.room_id, []).append(event) |
909 | 983 | |
910 | 984 | newly_joined_rooms = [] |
985 | newly_left_rooms = [] | |
911 | 986 | room_entries = [] |
912 | 987 | invited = [] |
913 | for room_id, events in mem_change_events_by_room_id.items(): | |
988 | for room_id, events in mem_change_events_by_room_id.iteritems(): | |
914 | 989 | non_joins = [e for e in events if e.membership != Membership.JOIN] |
915 | 990 | has_join = len(non_joins) != len(events) |
916 | 991 | |
917 | 992 | # We want to figure out if we joined the room at some point since |
918 | 993 | # the last sync (even if we have since left). This is to make sure |
919 | 994 | # we do send down the room, and with full state, where necessary |
995 | ||
996 | old_state_ids = None | |
997 | if room_id in joined_room_ids and non_joins: | |
998 | # Always include if the user (re)joined the room, especially | |
999 | # important so that device list changes are calculated correctly. | |
1000 | # If there are non-join member events, but we are still in the room, | |
1001 | # then the user must have left and joined | |
1002 | newly_joined_rooms.append(room_id) | |
1003 | ||
1004 | # User is in the room so we don't need to do the invite/leave checks | |
1005 | continue | |
1006 | ||
920 | 1007 | if room_id in joined_room_ids or has_join: |
921 | 1008 | old_state_ids = yield self.get_state_at(room_id, since_token) |
922 | 1009 | old_mem_ev_id = old_state_ids.get((EventTypes.Member, user_id), None) |
928 | 1015 | if not old_mem_ev or old_mem_ev.membership != Membership.JOIN: |
929 | 1016 | newly_joined_rooms.append(room_id) |
930 | 1017 | |
931 | if room_id in joined_room_ids: | |
932 | continue | |
1018 | # If user is in the room then we don't need to do the invite/leave checks | |
1019 | if room_id in joined_room_ids: | |
1020 | continue | |
933 | 1021 | |
934 | 1022 | if not non_joins: |
935 | 1023 | continue |
1024 | ||
1025 | # Check if we have left the room. This can either be because we were | |
1026 | # joined before *or* that we since joined and then left. | |
1027 | if events[-1].membership != Membership.JOIN: | |
1028 | if has_join: | |
1029 | newly_left_rooms.append(room_id) | |
1030 | else: | |
1031 | if not old_state_ids: | |
1032 | old_state_ids = yield self.get_state_at(room_id, since_token) | |
1033 | old_mem_ev_id = old_state_ids.get( | |
1034 | (EventTypes.Member, user_id), | |
1035 | None, | |
1036 | ) | |
1037 | old_mem_ev = None | |
1038 | if old_mem_ev_id: | |
1039 | old_mem_ev = yield self.store.get_event( | |
1040 | old_mem_ev_id, allow_none=True | |
1041 | ) | |
1042 | if old_mem_ev and old_mem_ev.membership == Membership.JOIN: | |
1043 | newly_left_rooms.append(room_id) | |
936 | 1044 | |
937 | 1045 | # Only bother if we're still currently invited |
938 | 1046 | should_invite = non_joins[-1].membership == Membership.INVITE |
1011 | 1119 | upto_token=since_token, |
1012 | 1120 | )) |
1013 | 1121 | |
1014 | defer.returnValue((room_entries, invited, newly_joined_rooms)) | |
1122 | defer.returnValue((room_entries, invited, newly_joined_rooms, newly_left_rooms)) | |
1015 | 1123 | |
1016 | 1124 | @defer.inlineCallbacks |
1017 | 1125 | def _get_all_rooms(self, sync_result_builder, ignored_users): |
1259 | 1367 | self.invited = [] |
1260 | 1368 | self.archived = [] |
1261 | 1369 | self.device = [] |
1370 | self.to_device = [] | |
1262 | 1371 | |
1263 | 1372 | |
1264 | 1373 | class RoomSyncResultBuilder(object): |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | import socket | |
14 | 15 | |
15 | 16 | from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS |
16 | 17 | from twisted.internet import defer, reactor |
29 | 30 | |
30 | 31 | SERVER_CACHE = {} |
31 | 32 | |
32 | ||
33 | # our record of an individual server that can be tried in order to reach a destination. | |
34 | # | |
35 | # "host" is actually a dotted-quad or ipv6 address string. Except when there's | |
36 | # no SRV record, in which case it is the original hostname. | |
33 | 37 | _Server = collections.namedtuple( |
34 | 38 | "_Server", "priority weight host port expires" |
35 | 39 | ) |
218 | 222 | return self.default_server |
219 | 223 | else: |
220 | 224 | raise ConnectError( |
221 | "Not server available for %s" % self.service_name | |
225 | "No server available for %s" % self.service_name | |
222 | 226 | ) |
223 | 227 | |
228 | # look for all servers with the same priority | |
224 | 229 | min_priority = self.servers[0].priority |
225 | 230 | weight_indexes = list( |
226 | 231 | (index, server.weight + 1) |
230 | 235 | |
231 | 236 | total_weight = sum(weight for index, weight in weight_indexes) |
232 | 237 | target_weight = random.randint(0, total_weight) |
233 | ||
234 | 238 | for index, weight in weight_indexes: |
235 | 239 | target_weight -= weight |
236 | 240 | if target_weight <= 0: |
237 | 241 | server = self.servers[index] |
242 | # XXX: this looks totally dubious: | |
243 | # | |
244 | # (a) we never reuse a server until we have been through | |
245 | # all of the servers at the same priority, so if the | |
246 | # weights are A: 100, B:1, we always do ABABAB instead of | |
247 | # AAAA...AAAB (approximately). | |
248 | # | |
249 | # (b) After using all the servers at the lowest priority, | |
250 | # we move onto the next priority. We should only use the | |
251 | # second priority if servers at the top priority are | |
252 | # unreachable. | |
253 | # | |
238 | 254 | del self.servers[index] |
239 | 255 | self.used_servers.append(server) |
240 | 256 | return server |
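The selection logic above approximates RFC 2782: consider only the lowest-priority group, then pick weighted-randomly within it, using `weight + 1` so zero-weight entries remain selectable. A minimal standalone version of the same idea, using plain `(priority, weight, host)` tuples rather than the `_Server` namedtuple:

```python
import random

def pick_server(servers):
    """Pick one server from a list of (priority, weight, host) tuples,
    RFC 2782 style: lowest priority wins, ties broken weighted-randomly.
    Returns None if the list is empty."""
    if not servers:
        return None
    # Only servers at the best (numerically lowest) priority are candidates.
    min_priority = min(s[0] for s in servers)
    candidates = [s for s in servers if s[0] == min_priority]
    # weight + 1 so zero-weight servers can still be chosen occasionally.
    total = sum(s[1] + 1 for s in candidates)
    target = random.randint(0, total)
    for server in candidates:
        target -= server[1] + 1
        if target <= 0:
            return server
    return candidates[-1]
```

Note this sketch does not reproduce the `del self.servers[index]` bookkeeping that the XXX comment above flags as dubious; it just re-rolls on every call.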
279 | 295 | continue |
280 | 296 | |
281 | 297 | payload = answer.payload |
282 | host = str(payload.target) | |
283 | srv_ttl = answer.ttl | |
284 | ||
285 | try: | |
286 | answers, _, _ = yield dns_client.lookupAddress(host) | |
287 | except DNSNameError: | |
288 | continue | |
289 | ||
290 | for answer in answers: | |
291 | if answer.type == dns.A and answer.payload: | |
292 | ip = answer.payload.dottedQuad() | |
293 | host_ttl = min(srv_ttl, answer.ttl) | |
294 | ||
295 | servers.append(_Server( | |
296 | host=ip, | |
297 | port=int(payload.port), | |
298 | priority=int(payload.priority), | |
299 | weight=int(payload.weight), | |
300 | expires=int(clock.time()) + host_ttl, | |
301 | )) | |
298 | ||
299 | hosts = yield _get_hosts_for_srv_record( | |
300 | dns_client, str(payload.target) | |
301 | ) | |
302 | ||
303 | for (ip, ttl) in hosts: | |
304 | host_ttl = min(answer.ttl, ttl) | |
305 | ||
306 | servers.append(_Server( | |
307 | host=ip, | |
308 | port=int(payload.port), | |
309 | priority=int(payload.priority), | |
310 | weight=int(payload.weight), | |
311 | expires=int(clock.time()) + host_ttl, | |
312 | )) | |
302 | 313 | |
303 | 314 | servers.sort() |
304 | 315 | cache[service_name] = list(servers) |
316 | 327 | raise e |
317 | 328 | |
318 | 329 | defer.returnValue(servers) |
330 | ||
331 | ||
332 | @defer.inlineCallbacks | |
333 | def _get_hosts_for_srv_record(dns_client, host): | |
334 | """Look up each of the hosts in a SRV record | |
335 | ||
336 | Args: | |
337 | dns_client (twisted.names.dns.IResolver): | |
338 | host (basestring): host to look up | |
339 | ||
340 | Returns: | |
341 | Deferred[list[(str, int)]]: a list of (host, ttl) pairs | |
342 | ||
343 | """ | |
344 | ip4_servers = [] | |
345 | ip6_servers = [] | |
346 | ||
347 | def cb(res): | |
348 | # lookupAddress and lookupIP6Address return a three-tuple | |
349 | # giving the answer, authority, and additional sections of the | |
350 | # response. | |
351 | # | |
352 | # we only care about the answers. | |
353 | ||
354 | return res[0] | |
355 | ||
356 | def eb(res): | |
357 | res.trap(DNSNameError) | |
358 | return [] | |
359 | ||
360 | # no logcontexts here, so we can safely fire these off and gatherResults | |
361 | d1 = dns_client.lookupAddress(host).addCallbacks(cb, eb) | |
362 | d2 = dns_client.lookupIPV6Address(host).addCallbacks(cb, eb) | |
363 | results = yield defer.gatherResults([d1, d2], consumeErrors=True) | |
364 | ||
365 | for result in results: | |
366 | for answer in result: | |
367 | if not answer.payload: | |
368 | continue | |
369 | ||
370 | try: | |
371 | if answer.type == dns.A: | |
372 | ip = answer.payload.dottedQuad() | |
373 | ip4_servers.append((ip, answer.ttl)) | |
374 | elif answer.type == dns.AAAA: | |
375 | ip = socket.inet_ntop( | |
376 | socket.AF_INET6, answer.payload.address, | |
377 | ) | |
378 | ip6_servers.append((ip, answer.ttl)) | |
379 | else: | |
380 | # the most likely candidate here is a CNAME record. | |
381 | # rfc2782 says srvs may not point to aliases. | |
382 | logger.warn( | |
383 | "Ignoring unexpected DNS record type %s for %s", | |
384 | answer.type, host, | |
385 | ) | |
386 | continue | |
387 | except Exception as e: | |
388 | logger.warn("Ignoring invalid DNS response for %s: %s", | |
389 | host, e) | |
390 | continue | |
391 | ||
392 | # keep the ipv4 results before the ipv6 results, mostly to match historical | |
393 | # behaviour. | |
394 | defer.returnValue(ip4_servers + ip6_servers) |
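`_get_hosts_for_srv_record` resolves each SRV target's A and AAAA records with Twisted's async resolver, keeping IPv4 results ahead of IPv6. For illustration only, a blocking stdlib equivalent; `getaddrinfo` exposes no TTLs, so that part of the real code has no analogue here:

```python
import socket

def resolve_host(host, port=0):
    """Resolve a hostname to its IPv4 then IPv6 addresses, mirroring the
    ordering chosen in _get_hosts_for_srv_record."""
    ip4, ip6 = [], []
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        # Like trapping DNSNameError: an unknown name yields no servers.
        return []
    for family, _, _, _, sockaddr in infos:
        if family == socket.AF_INET:
            ip4.append(sockaddr[0])
        elif family == socket.AF_INET6:
            ip6.append(sockaddr[0])
    # keep the ipv4 results before the ipv6 results, as the real code does
    return ip4 + ip6
```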
18 | 18 | |
19 | 19 | from .push_rule_evaluator import PushRuleEvaluatorForEvent |
20 | 20 | |
21 | from synapse.visibility import filter_events_for_clients_context | |
22 | 21 | from synapse.api.constants import EventTypes, Membership |
22 | from synapse.metrics import get_metrics_for | |
23 | from synapse.util.caches import metrics as cache_metrics | |
23 | 24 | from synapse.util.caches.descriptors import cached |
24 | 25 | from synapse.util.async import Linearizer |
25 | 26 | |
30 | 31 | |
31 | 32 | |
32 | 33 | rules_by_room = {} |
34 | ||
35 | push_metrics = get_metrics_for(__name__) | |
36 | ||
37 | push_rules_invalidation_counter = push_metrics.register_counter( | |
38 | "push_rules_invalidation_counter" | |
39 | ) | |
40 | push_rules_state_size_counter = push_metrics.register_counter( | |
41 | "push_rules_state_size_counter" | |
42 | ) | |
43 | ||
44 | # Measures whether we use the fast path of using state deltas, or if we have to | |
45 | # recalculate from scratch | |
46 | push_rules_delta_state_cache_metric = cache_metrics.register_cache( | |
47 | "cache", | |
48 | size_callback=lambda: 0, # Meaningless size, as this isn't a cache that stores values | |
49 | cache_name="push_rules_delta_state_cache_metric", | |
50 | ) | |
33 | 51 | |
34 | 52 | |
35 | 53 | class BulkPushRuleEvaluator(object): |
40 | 58 | def __init__(self, hs): |
41 | 59 | self.hs = hs |
42 | 60 | self.store = hs.get_datastore() |
61 | ||
62 | self.room_push_rule_cache_metrics = cache_metrics.register_cache( | |
63 | "cache", | |
64 | size_callback=lambda: 0, # There's no good value for this | |
65 | cache_name="room_push_rule_cache", | |
66 | ) | |
43 | 67 | |
44 | 68 | @defer.inlineCallbacks |
45 | 69 | def _get_rules_for_event(self, event, context): |
78 | 102 | # It's important that RulesForRoom gets added to self._get_rules_for_room.cache |
79 | 103 | # before any lookup methods get called on it as otherwise there may be |
80 | 104 | # a race if invalidate_all gets called (which assumes its in the cache) |
81 | return RulesForRoom(self.hs, room_id, self._get_rules_for_room.cache) | |
105 | return RulesForRoom( | |
106 | self.hs, room_id, self._get_rules_for_room.cache, | |
107 | self.room_push_rule_cache_metrics, | |
108 | ) | |
82 | 109 | |
83 | 110 | @defer.inlineCallbacks |
84 | 111 | def action_for_event_by_user(self, event, context): |
91 | 118 | rules_by_user = yield self._get_rules_for_event(event, context) |
92 | 119 | actions_by_user = {} |
93 | 120 | |
94 | # None of these users can be peeking since this list of users comes | |
95 | # from the set of users in the room, so we know for sure they're all | |
96 | # actually in the room. | |
97 | user_tuples = [(u, False) for u in rules_by_user] | |
98 | ||
99 | filtered_by_user = yield filter_events_for_clients_context( | |
100 | self.store, user_tuples, [event], {event.event_id: context} | |
101 | ) | |
102 | ||
103 | 121 | room_members = yield self.store.get_joined_users_from_context( |
104 | 122 | event, context |
105 | 123 | ) |
109 | 127 | condition_cache = {} |
110 | 128 | |
111 | 129 | for uid, rules in rules_by_user.iteritems(): |
130 | if event.sender == uid: | |
131 | continue | |
132 | ||
133 | if not event.is_state(): | |
134 | is_ignored = yield self.store.is_ignored_by(event.sender, uid) | |
135 | if is_ignored: | |
136 | continue | |
137 | ||
112 | 138 | display_name = None |
113 | 139 | profile_info = room_members.get(uid) |
114 | 140 | if profile_info: |
119 | 145 | # that user, as they might not be already joined. |
120 | 146 | if event.type == EventTypes.Member and event.state_key == uid: |
121 | 147 | display_name = event.content.get("displayname", None) |
122 | ||
123 | filtered = filtered_by_user[uid] | |
124 | if len(filtered) == 0: | |
125 | continue | |
126 | ||
127 | if filtered[0].sender == uid: | |
128 | continue | |
129 | 148 | |
130 | 149 | for rule in rules: |
131 | 150 | if 'enabled' in rule and not rule['enabled']: |
169 | 188 | the entire cache for the room. |
170 | 189 | """ |
171 | 190 | |
172 | def __init__(self, hs, room_id, rules_for_room_cache): | |
191 | def __init__(self, hs, room_id, rules_for_room_cache, room_push_rule_cache_metrics): | |
173 | 192 | """ |
174 | 193 | Args: |
175 | 194 | hs (HomeServer) |
176 | 195 | room_id (str) |
177 | 196 | rules_for_room_cache(Cache): The cache object that caches these |
178 | 197 | RoomsForUser objects. |
198 | room_push_rule_cache_metrics (CacheMetric) | |
179 | 199 | """ |
180 | 200 | self.room_id = room_id |
181 | 201 | self.is_mine_id = hs.is_mine_id |
182 | 202 | self.store = hs.get_datastore() |
203 | self.room_push_rule_cache_metrics = room_push_rule_cache_metrics | |
183 | 204 | |
184 | 205 | self.linearizer = Linearizer(name="rules_for_room") |
185 | 206 | |
221 | 242 | """ |
222 | 243 | state_group = context.state_group |
223 | 244 | |
245 | if state_group and self.state_group == state_group: | |
246 | logger.debug("Using cached rules for %r", self.room_id) | |
247 | self.room_push_rule_cache_metrics.inc_hits() | |
248 | defer.returnValue(self.rules_by_user) | |
249 | ||
224 | 250 | with (yield self.linearizer.queue(())): |
225 | 251 | if state_group and self.state_group == state_group: |
226 | 252 | logger.debug("Using cached rules for %r", self.room_id) |
253 | self.room_push_rule_cache_metrics.inc_hits() | |
227 | 254 | defer.returnValue(self.rules_by_user) |
255 | ||
256 | self.room_push_rule_cache_metrics.inc_misses() | |
228 | 257 | |
229 | 258 | ret_rules_by_user = {} |
230 | 259 | missing_member_event_ids = {} |
233 | 262 | # results. |
234 | 263 | ret_rules_by_user = self.rules_by_user |
235 | 264 | current_state_ids = context.delta_ids |
265 | ||
266 | push_rules_delta_state_cache_metric.inc_hits() | |
236 | 267 | else: |
237 | 268 | current_state_ids = context.current_state_ids |
269 | push_rules_delta_state_cache_metric.inc_misses() | |
270 | ||
271 | push_rules_state_size_counter.inc_by(len(current_state_ids)) | |
238 | 272 | |
239 | 273 | logger.debug( |
240 | 274 | "Looking for member changes in %r %r", state_group, current_state_ids |
280 | 314 | logger.debug("Found new member events %r", missing_member_event_ids) |
281 | 315 | yield self._update_rules_with_member_event_ids( |
282 | 316 | ret_rules_by_user, missing_member_event_ids, state_group, event |
317 | ) | |
318 | else: | |
319 | # The push rules didn't change but let's update the cache anyway | 
320 | self.update_cache( | |
321 | self.sequence, | |
322 | members={}, # There were no membership changes | |
323 | rules_by_user=ret_rules_by_user, | |
324 | state_group=state_group | |
283 | 325 | ) |
284 | 326 | |
285 | 327 | if logger.isEnabledFor(logging.DEBUG): |
379 | 421 | self.state_group = object() |
380 | 422 | self.member_map = {} |
381 | 423 | self.rules_by_user = {} |
424 | push_rules_invalidation_counter.inc() | |
382 | 425 | |
383 | 426 | def update_cache(self, sequence, members, rules_by_user, state_group): |
384 | 427 | if sequence == self.sequence: |
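The hunk above adds a fast path that consults the cached rules before taking the linearizer, then re-checks once inside it, bumping hit/miss metrics on each path. A minimal threaded sketch of that double-checked caching pattern (names here are illustrative, not Synapse's; a plain lock stands in for the Linearizer):

```python
import threading

class RulesCache:
    """Caches a computed result for a room, keyed by state group."""

    def __init__(self, compute):
        self._compute = compute          # expensive: state_group -> rules
        self._lock = threading.Lock()    # stands in for the Linearizer
        self.state_group = object()      # sentinel that matches no real group
        self.rules = None
        self.hits = 0
        self.misses = 0

    def get(self, state_group):
        # Fast path: a cheap read-only check before taking the lock
        if state_group and self.state_group == state_group:
            self.hits += 1
            return self.rules
        with self._lock:
            # Re-check under the lock: a concurrent caller may have
            # populated the cache while we were waiting
            if state_group and self.state_group == state_group:
                self.hits += 1
                return self.rules
            self.misses += 1
            self.rules = self._compute(state_group)
            self.state_group = state_group
            return self.rules

calls = []
cache = RulesCache(lambda sg: calls.append(sg) or {"group": sg})
cache.get(1)
cache.get(1)   # served from cache; _compute is not called again
cache.get(2)   # state group changed, so recompute
```

The pre-lock check is what the hunk adds: a hit avoids queueing on the linearizer at all, which matters when many events for the same state group arrive concurrently.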
243 | 243 | |
244 | 244 | @defer.inlineCallbacks |
245 | 245 | def _build_notification_dict(self, event, tweaks, badge): |
246 | if self.data.get('format') == 'event_id_only': | |
247 | d = { | |
248 | 'notification': { | |
249 | 'event_id': event.event_id, | |
250 | 'room_id': event.room_id, | |
251 | 'counts': { | |
252 | 'unread': badge, | |
253 | }, | |
254 | 'devices': [ | |
255 | { | |
256 | 'app_id': self.app_id, | |
257 | 'pushkey': self.pushkey, | |
258 | 'pushkey_ts': long(self.pushkey_ts / 1000), | |
259 | 'data': self.data_minus_url, | |
260 | } | |
261 | ] | |
262 | } | |
263 | } | |
264 | defer.returnValue(d) | |
265 | ||
246 | 266 | ctx = yield push_tools.get_context_for_event( |
247 | 267 | self.store, self.state_handler, event, self.user_id |
248 | 268 | ) |
199 | 199 | return re.compile(r, flags=re.IGNORECASE) |
200 | 200 | |
201 | 201 | |
202 | def _flatten_dict(d, prefix=[], result={}): | |
202 | def _flatten_dict(d, prefix=[], result=None): | |
203 | if result is None: | |
204 | result = {} | |
203 | 205 | for key, value in d.items(): |
204 | 206 | if isinstance(value, basestring): |
205 | 207 | result[".".join(prefix + [key])] = value.lower() |
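The `_flatten_dict` hunk above swaps a mutable default argument (`result={}`) for `None`. A self-contained sketch of the failure mode it avoids (simplified, non-recursive `flatten` helpers, not the Synapse function):

```python
def flatten_buggy(d, prefix=[], result={}):
    # BUG: the default dict is created once, at function definition time,
    # and silently shared by every call that omits `result`
    for key, value in d.items():
        result[".".join(prefix + [key])] = value
    return result

def flatten_fixed(d, prefix=None, result=None):
    # A fresh dict per call: the None-check runs at call time
    if prefix is None:
        prefix = []
    if result is None:
        result = {}
    for key, value in d.items():
        result[".".join(prefix + [key])] = value
    return result

flatten_buggy({"a": 1})
leaked = flatten_buggy({"b": 2})   # still carries "a" from the earlier call
clean = flatten_fixed({"b": 2})    # contains only "b"
```

In the original code the leaked accumulator meant flattened keys from one event could bleed into push-rule matching for later events, which is exactly what the patch prevents.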
30 | 30 | "pyyaml": ["yaml"], |
31 | 31 | "pyasn1": ["pyasn1"], |
32 | 32 | "daemonize": ["daemonize"], |
33 | "py-bcrypt": ["bcrypt"], | |
33 | "bcrypt": ["bcrypt"], | |
34 | 34 | "pillow": ["PIL"], |
35 | 35 | "pydenticon": ["pydenticon"], |
36 | 36 | "ujson": ["ujson"], |
39 | 39 | "pymacaroons-pynacl": ["pymacaroons"], |
40 | 40 | "msgpack-python>=0.3.0": ["msgpack"], |
41 | 41 | "phonenumbers>=8.2.0": ["phonenumbers"], |
42 | "affinity": ["affinity"], | |
42 | 43 | } |
43 | 44 | CONDITIONAL_REQUIREMENTS = { |
44 | 45 | "web_client": { |
28 | 28 | max_entries=50000 * CACHE_SIZE_FACTOR, |
29 | 29 | ) |
30 | 30 | |
31 | def insert_client_ip(self, user, access_token, ip, user_agent, device_id): | |
31 | def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id): | |
32 | 32 | now = int(self._clock.time_msec()) |
33 | user_id = user.to_string() | |
34 | 33 | key = (user_id, access_token, ip) |
35 | 34 | |
36 | 35 | try: |
322 | 322 | |
323 | 323 | @classmethod |
324 | 324 | def from_line(cls, line): |
325 | user_id, access_token, ip, device_id, last_seen, user_agent = line.split(" ", 5) | |
326 | ||
327 | return cls(user_id, access_token, ip, user_agent, device_id, int(last_seen)) | |
328 | ||
329 | def to_line(self): | |
330 | return " ".join(( | |
331 | self.user_id, self.access_token, self.ip, self.device_id, | |
332 | str(self.last_seen), self.user_agent, | |
325 | user_id, jsn = line.split(" ", 1) | |
326 | ||
327 | access_token, ip, user_agent, device_id, last_seen = json.loads(jsn) | |
328 | ||
329 | return cls( | |
330 | user_id, access_token, ip, user_agent, device_id, last_seen | |
331 | ) | |
332 | ||
333 | def to_line(self): | |
334 | return self.user_id + " " + json.dumps(( | |
335 | self.access_token, self.ip, self.user_agent, self.device_id, | |
336 | self.last_seen, | |
333 | 337 | )) |
334 | 338 | |
335 | 339 |
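The replication hunk above stops space-splitting all six fields and instead serialises everything after the user ID as JSON, so a field containing spaces (a user agent, typically) can no longer shear the line apart. A round-trip sketch with a hypothetical `ClientIpRow` type mirroring the fields in the hunk:

```python
import json
from collections import namedtuple

# Hypothetical row type; field order mirrors the hunk above
ClientIpRow = namedtuple(
    "ClientIpRow",
    ["user_id", "access_token", "ip", "user_agent", "device_id", "last_seen"],
)

def to_line(row):
    # user_id first, then the remaining fields as one JSON array: embedded
    # spaces live safely inside JSON strings
    return row.user_id + " " + json.dumps(
        (row.access_token, row.ip, row.user_agent, row.device_id, row.last_seen)
    )

def from_line(line):
    # Split only once: everything after the first space is the JSON payload
    user_id, jsn = line.split(" ", 1)
    access_token, ip, user_agent, device_id, last_seen = json.loads(jsn)
    return ClientIpRow(user_id, access_token, ip, user_agent, device_id, last_seen)

row = ClientIpRow("@alice:example.com", "tok", "10.0.0.1",
                  "Mozilla/5.0 (X11; Linux)", "DEV1", 1234567890)
```

With the old `line.split(" ", 5)` format, the spaces inside `"Mozilla/5.0 (X11; Linux)"` would have been ambiguous; the JSON payload sidesteps that without changing the line-oriented framing.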
243 | 243 | becoming full. |
244 | 244 | """ |
245 | 245 | if self.state == ConnectionStates.CLOSED: |
246 | logger.info("[%s] Not sending, connection closed", self.id()) | |
246 | logger.debug("[%s] Not sending, connection closed", self.id()) | |
247 | 247 | return |
248 | 248 | |
249 | 249 | if do_buffer and self.state != ConnectionStates.ESTABLISHED: |
263 | 263 | def _queue_command(self, cmd): |
264 | 264 | """Queue the command until the connection is ready to write to again. |
265 | 265 | """ |
266 | logger.info("[%s] Queing as conn %r, cmd: %r", self.id(), self.state, cmd) | |
266 | logger.debug("[%s] Queuing as conn %r, cmd: %r", self.id(), self.state, cmd) | 
267 | 267 | self.pending_commands.append(cmd) |
268 | 268 | |
269 | 269 | if len(self.pending_commands) > self.max_line_buffer: |
167 | 167 | |
168 | 168 | DEFAULT_MESSAGE = ( |
169 | 169 | "Sharing illegal content on this server is not permitted and rooms in" |
170 | " violatation will be blocked." | |
170 | " violation will be blocked." | |
171 | 171 | ) |
172 | 172 | |
173 | 173 | def __init__(self, hs): |
295 | 295 | |
296 | 296 | class ResetPasswordRestServlet(ClientV1RestServlet): |
297 | 297 | """Post request to allow an administrator reset password for a user. |
298 | This need a user have a administrator access in Synapse. | |
298 | This needs the user to have administrator access in Synapse. | 
299 | 299 | Example: |
300 | 300 | http://localhost:8008/_matrix/client/api/v1/admin/reset_password/ |
301 | 301 | @user:to_reset_password?access_token=admin_access_token |
318 | 318 | @defer.inlineCallbacks |
319 | 319 | def on_POST(self, request, target_user_id): |
320 | 320 | """Post request to allow an administrator reset password for a user. |
321 | This need a user have a administrator access in Synapse. | |
321 | This needs the user to have administrator access in Synapse. | 
322 | 322 | """ |
323 | 323 | UserID.from_string(target_user_id) |
324 | 324 | requester = yield self.auth.get_user_by_req(request) |
342 | 342 | |
343 | 343 | class GetUsersPaginatedRestServlet(ClientV1RestServlet): |
344 | 344 | """Get request to get specific number of users from Synapse. |
345 | This need a user have a administrator access in Synapse. | |
345 | This needs the user to have administrator access in Synapse. | 
346 | 346 | Example: |
347 | 347 | http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/ |
348 | 348 | @admin:user?access_token=admin_access_token&start=0&limit=10 |
361 | 361 | @defer.inlineCallbacks |
362 | 362 | def on_GET(self, request, target_user_id): |
363 | 363 | """Get request to get specific number of users from Synapse. |
364 | This need a user have a administrator access in Synapse. | |
364 | This needs the user to have administrator access in Synapse. | 
365 | 365 | """ |
366 | 366 | target_user = UserID.from_string(target_user_id) |
367 | 367 | requester = yield self.auth.get_user_by_req(request) |
394 | 394 | @defer.inlineCallbacks |
395 | 395 | def on_POST(self, request, target_user_id): |
396 | 396 | """Post request to get specific number of users from Synapse. |
397 | This need a user have a administrator access in Synapse. | |
397 | This needs the user to have administrator access in Synapse. | 
398 | 398 | Example: |
399 | 399 | http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/ |
400 | 400 | @admin:user?access_token=admin_access_token |
432 | 432 | class SearchUsersRestServlet(ClientV1RestServlet): |
433 | 433 | """Get request to search user table for specific users according to |
434 | 434 | search term. |
435 | This need a user have a administrator access in Synapse. | |
435 | This needs the user to have administrator access in Synapse. | 
436 | 436 | Example: |
437 | 437 | http://localhost:8008/_matrix/client/api/v1/admin/search_users/ |
438 | 438 | @admin:user?access_token=admin_access_token&term=alice |
452 | 452 | def on_GET(self, request, target_user_id): |
453 | 453 | """Get request to search user table for specific users according to |
454 | 454 | search term. |
455 | This need a user have a administrator access in Synapse. | |
455 | This needs the user to have administrator access in Synapse. | 
456 | 456 | """ |
457 | 457 | target_user = UserID.from_string(target_user_id) |
458 | 458 | requester = yield self.auth.get_user_by_req(request) |
187 | 187 | |
188 | 188 | user_id = requester.user.to_string() |
189 | 189 | |
190 | changed = yield self.device_handler.get_user_ids_changed( | |
190 | results = yield self.device_handler.get_user_ids_changed( | |
191 | 191 | user_id, from_token, |
192 | 192 | ) |
193 | 193 | |
194 | defer.returnValue((200, { | |
195 | "changed": list(changed), | |
196 | })) | |
194 | defer.returnValue((200, results)) | |
197 | 195 | |
198 | 196 | |
199 | 197 | class OneTimeKeyServlet(RestServlet): |
109 | 109 | filter_id = parse_string(request, "filter", default=None) |
110 | 110 | full_state = parse_boolean(request, "full_state", default=False) |
111 | 111 | |
112 | logger.info( | |
112 | logger.debug( | |
113 | 113 | "/sync: user=%r, timeout=%r, since=%r," |
114 | 114 | " set_presence=%r, filter_id=%r, device_id=%r" % ( |
115 | 115 | user, timeout, since, set_presence, filter_id, device_id |
163 | 163 | ) |
164 | 164 | |
165 | 165 | time_now = self.clock.time_msec() |
166 | ||
167 | joined = self.encode_joined( | |
168 | sync_result.joined, time_now, requester.access_token_id, filter.event_fields | |
169 | ) | |
170 | ||
171 | invited = self.encode_invited( | |
172 | sync_result.invited, time_now, requester.access_token_id | |
173 | ) | |
174 | ||
175 | archived = self.encode_archived( | |
176 | sync_result.archived, time_now, requester.access_token_id, | |
166 | response_content = self.encode_response( | |
167 | time_now, sync_result, requester.access_token_id, filter | |
168 | ) | |
169 | ||
170 | defer.returnValue((200, response_content)) | |
171 | ||
172 | @staticmethod | |
173 | def encode_response(time_now, sync_result, access_token_id, filter): | |
174 | joined = SyncRestServlet.encode_joined( | |
175 | sync_result.joined, time_now, access_token_id, filter.event_fields | |
176 | ) | |
177 | ||
178 | invited = SyncRestServlet.encode_invited( | |
179 | sync_result.invited, time_now, access_token_id, | |
180 | ) | |
181 | ||
182 | archived = SyncRestServlet.encode_archived( | |
183 | sync_result.archived, time_now, access_token_id, | |
177 | 184 | filter.event_fields, |
178 | 185 | ) |
179 | 186 | |
180 | response_content = { | |
187 | return { | |
181 | 188 | "account_data": {"events": sync_result.account_data}, |
182 | 189 | "to_device": {"events": sync_result.to_device}, |
183 | 190 | "device_lists": { |
184 | "changed": list(sync_result.device_lists), | |
191 | "changed": list(sync_result.device_lists.changed), | |
192 | "left": list(sync_result.device_lists.left), | |
185 | 193 | }, |
186 | "presence": self.encode_presence( | |
194 | "presence": SyncRestServlet.encode_presence( | |
187 | 195 | sync_result.presence, time_now |
188 | 196 | ), |
189 | 197 | "rooms": { |
195 | 203 | "next_batch": sync_result.next_batch.to_string(), |
196 | 204 | } |
197 | 205 | |
198 | defer.returnValue((200, response_content)) | |
199 | ||
200 | def encode_presence(self, events, time_now): | |
206 | @staticmethod | |
207 | def encode_presence(events, time_now): | |
201 | 208 | return { |
202 | 209 | "events": [ |
203 | 210 | { |
211 | 218 | ] |
212 | 219 | } |
213 | 220 | |
214 | def encode_joined(self, rooms, time_now, token_id, event_fields): | |
221 | @staticmethod | |
222 | def encode_joined(rooms, time_now, token_id, event_fields): | |
215 | 223 | """ |
216 | 224 | Encode the joined rooms in a sync result |
217 | 225 | |
230 | 238 | """ |
231 | 239 | joined = {} |
232 | 240 | for room in rooms: |
233 | joined[room.room_id] = self.encode_room( | |
241 | joined[room.room_id] = SyncRestServlet.encode_room( | |
234 | 242 | room, time_now, token_id, only_fields=event_fields |
235 | 243 | ) |
236 | 244 | |
237 | 245 | return joined |
238 | 246 | |
239 | def encode_invited(self, rooms, time_now, token_id): | |
247 | @staticmethod | |
248 | def encode_invited(rooms, time_now, token_id): | |
240 | 249 | """ |
241 | 250 | Encode the invited rooms in a sync result |
242 | 251 | |
269 | 278 | |
270 | 279 | return invited |
271 | 280 | |
272 | def encode_archived(self, rooms, time_now, token_id, event_fields): | |
281 | @staticmethod | |
282 | def encode_archived(rooms, time_now, token_id, event_fields): | |
273 | 283 | """ |
274 | 284 | Encode the archived rooms in a sync result |
275 | 285 | |
288 | 298 | """ |
289 | 299 | joined = {} |
290 | 300 | for room in rooms: |
291 | joined[room.room_id] = self.encode_room( | |
301 | joined[room.room_id] = SyncRestServlet.encode_room( | |
292 | 302 | room, time_now, token_id, joined=False, only_fields=event_fields |
293 | 303 | ) |
294 | 304 |
307 | 307 | " WHERE stream_id < ?" |
308 | 308 | ) |
309 | 309 | txn.execute(update_max_id_sql, (next_id, next_id)) |
310 | ||
311 | @cachedInlineCallbacks(num_args=2, cache_context=True, max_entries=5000) | |
312 | def is_ignored_by(self, ignored_user_id, ignorer_user_id, cache_context): | |
313 | ignored_account_data = yield self.get_global_account_data_by_type_for_user( | |
314 | "m.ignored_user_list", ignorer_user_id, | |
315 | on_invalidate=cache_context.invalidate, | |
316 | ) | |
317 | if not ignored_account_data: | |
318 | defer.returnValue(False) | |
319 | ||
320 | defer.returnValue( | |
321 | ignored_user_id in ignored_account_data.get("ignored_users", {}) | |
322 | ) |
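The `is_ignored_by` hunk above caches a lookup into the ignorer's `m.ignored_user_list` account data. Stripped of the Deferred and cache-invalidation machinery, the underlying check is just the following (a sketch; the real method is asynchronous and cache-backed):

```python
def is_ignored_by(ignored_user_id, ignorer_account_data):
    """Check an m.ignored_user_list payload, whose "ignored_users" key maps
    ignored user IDs to (currently empty) option objects."""
    # Account data may be missing entirely for users who have never ignored
    # anyone, and a present payload may still lack the "ignored_users" key
    if not ignorer_account_data:
        return False
    return ignored_user_id in ignorer_account_data.get("ignored_users", {})
```

The two-level fallback (no account data at all, then a missing `"ignored_users"` key) is why the stored method checks `if not ignored_account_data` before the `.get(...)` membership test.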
55 | 55 | ) |
56 | 56 | reactor.addSystemEventTrigger("before", "shutdown", self._update_client_ips_batch) |
57 | 57 | |
58 | def insert_client_ip(self, user, access_token, ip, user_agent, device_id): | |
59 | now = int(self._clock.time_msec()) | |
60 | key = (user.to_string(), access_token, ip) | |
58 | def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id, | |
59 | now=None): | |
60 | if not now: | |
61 | now = int(self._clock.time_msec()) | |
62 | key = (user_id, access_token, ip) | |
61 | 63 | |
62 | 64 | try: |
63 | 65 | last_seen = self.client_ip_last_seen.get(key) |
112 | 112 | keys[key_id] = key |
113 | 113 | defer.returnValue(keys) |
114 | 114 | |
115 | @defer.inlineCallbacks | |
116 | 115 | def store_server_verify_key(self, server_name, from_server, time_now_ms, |
117 | 116 | verify_key): |
118 | 117 | """Stores a NACL verification key for the given server. |
119 | 118 | Args: |
120 | 119 | server_name (str): The name of the server. |
121 | key_id (str): The version of the key for the server. | |
122 | 120 | from_server (str): Where the verification key was looked up |
123 | ts_now_ms (int): The time now in milliseconds | |
124 | verification_key (VerifyKey): The NACL verify key. | |
125 | """ | |
126 | yield self._simple_upsert( | |
127 | table="server_signature_keys", | |
128 | keyvalues={ | |
129 | "server_name": server_name, | |
130 | "key_id": "%s:%s" % (verify_key.alg, verify_key.version), | |
131 | }, | |
132 | values={ | |
133 | "from_server": from_server, | |
134 | "ts_added_ms": time_now_ms, | |
135 | "verify_key": buffer(verify_key.encode()), | |
136 | }, | |
137 | desc="store_server_verify_key", | |
138 | ) | |
121 | time_now_ms (int): The time now in milliseconds | |
122 | verify_key (nacl.signing.VerifyKey): The NACL verify key. | |
123 | """ | |
124 | key_id = "%s:%s" % (verify_key.alg, verify_key.version) | |
125 | ||
126 | def _txn(txn): | |
127 | self._simple_upsert_txn( | |
128 | txn, | |
129 | table="server_signature_keys", | |
130 | keyvalues={ | |
131 | "server_name": server_name, | |
132 | "key_id": key_id, | |
133 | }, | |
134 | values={ | |
135 | "from_server": from_server, | |
136 | "ts_added_ms": time_now_ms, | |
137 | "verify_key": buffer(verify_key.encode()), | |
138 | }, | |
139 | ) | |
140 | txn.call_after( | |
141 | self._get_server_verify_key.invalidate, | |
142 | (server_name, key_id) | |
143 | ) | |
144 | ||
145 | return self.runInteraction("store_server_verify_key", _txn) | |
139 | 146 | |
140 | 147 | def store_server_keys_json(self, server_name, key_id, from_server, |
141 | 148 | ts_now_ms, ts_expires_ms, key_json_bytes): |
42 | 42 | |
43 | 43 | |
44 | 44 | @defer.inlineCallbacks |
45 | def filter_events_for_clients(store, user_tuples, events, event_id_to_state): | |
45 | def filter_events_for_clients(store, user_tuples, events, event_id_to_state, | |
46 | always_include_ids=frozenset()): | |
46 | 47 | """ Returns dict of user_id -> list of events that user is allowed to |
47 | 48 | see. |
48 | 49 | |
53 | 54 | * the user has not been a member of the room since the |
54 | 55 | given events |
55 | 56 | events ([synapse.events.EventBase]): list of events to filter |
57 | always_include_ids (set(event_id)): set of event ids to specifically | |
58 | include (unless sender is ignored) | |
56 | 59 | """ |
57 | 60 | forgotten = yield preserve_context_over_deferred(defer.gatherResults([ |
58 | 61 | defer.maybeDeferred( |
90 | 93 | if not event.is_state() and event.sender in ignore_list: |
91 | 94 | return False |
92 | 95 | |
96 | if event.event_id in always_include_ids: | |
97 | return True | |
98 | ||
93 | 99 | state = event_id_to_state[event.event_id] |
94 | 100 | |
95 | 101 | # get the room_visibility at the time of the event. |
188 | 194 | |
189 | 195 | |
190 | 196 | @defer.inlineCallbacks |
191 | def filter_events_for_clients_context(store, user_tuples, events, event_id_to_context): | |
192 | user_ids = set(u[0] for u in user_tuples) | |
193 | event_id_to_state = {} | |
194 | for event_id, context in event_id_to_context.items(): | |
195 | state = yield store.get_events([ | |
196 | e_id | |
197 | for key, e_id in context.current_state_ids.iteritems() | |
198 | if key == (EventTypes.RoomHistoryVisibility, "") | |
199 | or (key[0] == EventTypes.Member and key[1] in user_ids) | |
200 | ]) | |
201 | event_id_to_state[event_id] = state | |
202 | ||
203 | res = yield filter_events_for_clients( | |
204 | store, user_tuples, events, event_id_to_state | |
205 | ) | |
206 | defer.returnValue(res) | |
207 | ||
208 | ||
209 | @defer.inlineCallbacks | |
210 | def filter_events_for_client(store, user_id, events, is_peeking=False): | |
197 | def filter_events_for_client(store, user_id, events, is_peeking=False, | |
198 | always_include_ids=frozenset()): | |
211 | 199 | """ |
212 | 200 | Check which events a user is allowed to see |
213 | 201 | |
231 | 219 | types=types |
232 | 220 | ) |
233 | 221 | res = yield filter_events_for_clients( |
234 | store, [(user_id, is_peeking)], events, event_id_to_state | |
222 | store, [(user_id, is_peeking)], events, event_id_to_state, | |
223 | always_include_ids=always_include_ids, | |
235 | 224 | ) |
236 | 225 | defer.returnValue(res.get(user_id, [])) |
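The visibility hunk above threads a new `always_include_ids` parameter through the filter: such events are returned regardless of room history visibility, but only after the ignored-sender check. A simplified synchronous sketch of that ordering (the real code also exempts state events from the ignore rule, and is asynchronous):

```python
def filter_events(events, visible, always_include_ids=frozenset(),
                  ignore_list=frozenset()):
    """Order of checks mirrors the hunk: an ignored sender loses even for an
    always-included event; otherwise always_include_ids bypasses the
    per-event visibility rules."""
    allowed = []
    for ev in events:
        if ev["sender"] in ignore_list:
            continue  # ignored senders are filtered unconditionally here
        if ev["event_id"] in always_include_ids or visible(ev):
            allowed.append(ev)
    return allowed

events = [
    {"event_id": "$invite", "sender": "@admin:example.com"},
    {"event_id": "$other", "sender": "@admin:example.com"},
]
nothing_visible = lambda ev: False      # strictest possible visibility rules
kept = filter_events(events, nothing_visible, always_include_ids={"$invite"})
```

Even with visibility rules that reject everything, the event named in `always_include_ids` survives, which is the behaviour the new parameter exists to provide.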
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2017 New Vector Ltd. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | import time | |
15 | ||
16 | import signedjson.key | |
17 | import signedjson.sign | |
18 | from mock import Mock | |
19 | from synapse.api.errors import SynapseError | |
20 | from synapse.crypto import keyring | |
21 | from synapse.util import async, logcontext | |
22 | from synapse.util.logcontext import LoggingContext | |
23 | from tests import unittest, utils | |
24 | from twisted.internet import defer | |
25 | ||
26 | ||
27 | class MockPerspectiveServer(object): | |
28 | def __init__(self): | |
29 | self.server_name = "mock_server" | |
30 | self.key = signedjson.key.generate_signing_key(0) | |
31 | ||
32 | def get_verify_keys(self): | |
33 | vk = signedjson.key.get_verify_key(self.key) | |
34 | return { | |
35 | "%s:%s" % (vk.alg, vk.version): vk, | |
36 | } | |
37 | ||
38 | def get_signed_key(self, server_name, verify_key): | |
39 | key_id = "%s:%s" % (verify_key.alg, verify_key.version) | |
40 | res = { | |
41 | "server_name": server_name, | |
42 | "old_verify_keys": {}, | |
43 | "valid_until_ts": time.time() * 1000 + 3600 * 1000, | 
44 | "verify_keys": { | |
45 | key_id: { | |
46 | "key": signedjson.key.encode_verify_key_base64(verify_key) | |
47 | } | |
48 | } | |
49 | } | |
50 | signedjson.sign.sign_json(res, self.server_name, self.key) | |
51 | return res | |
52 | ||
53 | ||
54 | class KeyringTestCase(unittest.TestCase): | |
55 | @defer.inlineCallbacks | |
56 | def setUp(self): | |
57 | self.mock_perspective_server = MockPerspectiveServer() | |
58 | self.http_client = Mock() | |
59 | self.hs = yield utils.setup_test_homeserver( | |
60 | handlers=None, | |
61 | http_client=self.http_client, | |
62 | ) | |
63 | self.hs.config.perspectives = { | |
64 | self.mock_perspective_server.server_name: | |
65 | self.mock_perspective_server.get_verify_keys() | |
66 | } | |
67 | ||
68 | def check_context(self, _, expected): | |
69 | self.assertEquals( | |
70 | getattr(LoggingContext.current_context(), "test_key", None), | |
71 | expected | |
72 | ) | |
73 | ||
74 | @defer.inlineCallbacks | |
75 | def test_wait_for_previous_lookups(self): | |
76 | sentinel_context = LoggingContext.current_context() | |
77 | ||
78 | kr = keyring.Keyring(self.hs) | |
79 | ||
80 | lookup_1_deferred = defer.Deferred() | |
81 | lookup_2_deferred = defer.Deferred() | |
82 | ||
83 | with LoggingContext("one") as context_one: | |
84 | context_one.test_key = "one" | |
85 | ||
86 | wait_1_deferred = kr.wait_for_previous_lookups( | |
87 | ["server1"], | |
88 | {"server1": lookup_1_deferred}, | |
89 | ) | |
90 | ||
91 | # there were no previous lookups, so the deferred should be ready | |
92 | self.assertTrue(wait_1_deferred.called) | |
93 | # ... so we should have preserved the LoggingContext. | |
94 | self.assertIs(LoggingContext.current_context(), context_one) | |
95 | wait_1_deferred.addBoth(self.check_context, "one") | |
96 | ||
97 | with LoggingContext("two") as context_two: | |
98 | context_two.test_key = "two" | |
99 | ||
100 | # set off another wait. It should block because the first lookup | |
101 | # hasn't yet completed. | |
102 | wait_2_deferred = kr.wait_for_previous_lookups( | |
103 | ["server1"], | |
104 | {"server1": lookup_2_deferred}, | |
105 | ) | |
106 | self.assertFalse(wait_2_deferred.called) | |
107 | # ... so we should have reset the LoggingContext. | |
108 | self.assertIs(LoggingContext.current_context(), sentinel_context) | |
109 | wait_2_deferred.addBoth(self.check_context, "two") | |
110 | ||
111 | # let the first lookup complete (in the sentinel context) | |
112 | lookup_1_deferred.callback(None) | |
113 | ||
114 | # now the second wait should complete and restore our | |
115 | # loggingcontext. | |
116 | yield wait_2_deferred | |
117 | ||
118 | @defer.inlineCallbacks | |
119 | def test_verify_json_objects_for_server_awaits_previous_requests(self): | |
120 | key1 = signedjson.key.generate_signing_key(1) | |
121 | ||
122 | kr = keyring.Keyring(self.hs) | |
123 | json1 = {} | |
124 | signedjson.sign.sign_json(json1, "server10", key1) | |
125 | ||
126 | persp_resp = { | |
127 | "server_keys": [ | |
128 | self.mock_perspective_server.get_signed_key( | |
129 | "server10", | |
130 | signedjson.key.get_verify_key(key1) | |
131 | ), | |
132 | ] | |
133 | } | |
134 | persp_deferred = defer.Deferred() | |
135 | ||
136 | @defer.inlineCallbacks | |
137 | def get_perspectives(**kwargs): | |
138 | self.assertEquals( | |
139 | LoggingContext.current_context().test_key, "11", | |
140 | ) | |
141 | with logcontext.PreserveLoggingContext(): | |
142 | yield persp_deferred | |
143 | defer.returnValue(persp_resp) | |
144 | self.http_client.post_json.side_effect = get_perspectives | |
145 | ||
146 | with LoggingContext("11") as context_11: | |
147 | context_11.test_key = "11" | |
148 | ||
149 | # start off a first set of lookups | |
150 | res_deferreds = kr.verify_json_objects_for_server( | |
151 | [("server10", json1), | |
152 | ("server11", {}) | |
153 | ] | |
154 | ) | |
155 | ||
156 | # the unsigned json should be rejected pretty quickly | |
157 | self.assertTrue(res_deferreds[1].called) | |
158 | try: | |
159 | yield res_deferreds[1] | |
160 | self.fail("unsigned json didn't cause a failure") | 
161 | except SynapseError: | |
162 | pass | |
163 | ||
164 | self.assertFalse(res_deferreds[0].called) | |
165 | res_deferreds[0].addBoth(self.check_context, None) | |
166 | ||
167 | # wait a tick for it to send the request to the perspectives server | |
168 | # (it first tries the datastore) | |
169 | yield async.sleep(0.005) | |
170 | self.http_client.post_json.assert_called_once() | |
171 | ||
172 | self.assertIs(LoggingContext.current_context(), context_11) | |
173 | ||
174 | context_12 = LoggingContext("12") | |
175 | context_12.test_key = "12" | |
176 | with logcontext.PreserveLoggingContext(context_12): | |
177 | # a second request for a server with outstanding requests | |
178 | # should block rather than start a second call | |
179 | self.http_client.post_json.reset_mock() | |
180 | self.http_client.post_json.return_value = defer.Deferred() | |
181 | ||
182 | res_deferreds_2 = kr.verify_json_objects_for_server( | |
183 | [("server10", json1)], | |
184 | ) | |
185 | yield async.sleep(0.005) | |
186 | self.http_client.post_json.assert_not_called() | |
187 | res_deferreds_2[0].addBoth(self.check_context, None) | |
188 | ||
189 | # complete the first request | |
190 | with logcontext.PreserveLoggingContext(): | |
191 | persp_deferred.callback(persp_resp) | |
192 | self.assertIs(LoggingContext.current_context(), context_11) | |
193 | ||
194 | with logcontext.PreserveLoggingContext(): | |
195 | yield res_deferreds[0] | |
196 | yield res_deferreds_2[0] | |
197 | ||
198 | @defer.inlineCallbacks | |
199 | def test_verify_json_for_server(self): | |
200 | kr = keyring.Keyring(self.hs) | |
201 | ||
202 | key1 = signedjson.key.generate_signing_key(1) | |
203 | yield self.hs.datastore.store_server_verify_key( | |
204 | "server9", "", time.time() * 1000, | |
205 | signedjson.key.get_verify_key(key1), | |
206 | ) | |
207 | json1 = {} | |
208 | signedjson.sign.sign_json(json1, "server9", key1) | |
209 | ||
210 | sentinel_context = LoggingContext.current_context() | |
211 | ||
212 | with LoggingContext("one") as context_one: | |
213 | context_one.test_key = "one" | |
214 | ||
215 | d = kr.verify_json_for_server("server9", {}) | 
216 | try: | 
217 | yield d | 
218 | self.fail("should fail on unsigned json") | 
219 | except SynapseError: | 
220 | pass | 
221 | self.assertIs(LoggingContext.current_context(), context_one) | 
222 | ||
223 | d = kr.verify_json_for_server("server9", json1) | 
224 | self.assertFalse(d.called) | 
225 | self.assertIs(LoggingContext.current_context(), sentinel_context) | 
226 | yield d | 
227 | ||
228 | self.assertIs(LoggingContext.current_context(), context_one) |
18 | 18 | import synapse.handlers.device |
19 | 19 | |
20 | 20 | import synapse.storage |
21 | from synapse import types | |
22 | 21 | from tests import unittest, utils |
23 | 22 | |
24 | 23 | user1 = "@boris:aaa" |
178 | 177 | |
179 | 178 | if ip is not None: |
180 | 179 | yield self.store.insert_client_ip( |
181 | types.UserID.from_string(user_id), | |
180 | user_id, | |
182 | 181 | access_token, ip, "user_agent", device_id) |
183 | 182 | self.clock.advance_time(1000) |
14 | 14 | |
15 | 15 | from twisted.internet import defer |
16 | 16 | |
17 | import synapse.server | |
18 | import synapse.storage | |
19 | import synapse.types | |
20 | 17 | import tests.unittest |
21 | 18 | import tests.utils |
22 | 19 | |
38 | 35 | self.clock.now = 12345678 |
39 | 36 | user_id = "@user:id" |
40 | 37 | yield self.store.insert_client_ip( |
41 | synapse.types.UserID.from_string(user_id), | |
38 | user_id, | |
42 | 39 | "access_token", "ip", "user_agent", "device_id", |
43 | 40 | ) |
44 | 41 |
23 | 23 | from tests.utils import MockClock |
24 | 24 | |
25 | 25 | |
26 | @unittest.DEBUG | |
26 | 27 | class DnsTestCase(unittest.TestCase): |
27 | 28 | |
28 | 29 | @defer.inlineCallbacks |
29 | 30 | def test_resolve(self): |
30 | 31 | dns_client_mock = Mock() |
31 | 32 | |
32 | service_name = "test_service.examle.com" | |
33 | service_name = "test_service.example.com" | |
33 | 34 | host_name = "example.com" |
34 | 35 | ip_address = "127.0.0.1" |
36 | ip6_address = "::1" | |
35 | 37 | |
36 | 38 | answer_srv = dns.RRHeader( |
37 | 39 | type=dns.SRV, |
47 | 49 | ) |
48 | 50 | ) |
49 | 51 | |
50 | dns_client_mock.lookupService.return_value = ([answer_srv], None, None) | |
51 | dns_client_mock.lookupAddress.return_value = ([answer_a], None, None) | |
52 | answer_aaaa = dns.RRHeader( | |
53 | type=dns.AAAA, | |
54 | payload=dns.Record_AAAA( | |
55 | address=ip6_address, | |
56 | ) | |
57 | ) | |
58 | ||
59 | dns_client_mock.lookupService.return_value = defer.succeed( | |
60 | ([answer_srv], None, None), | |
61 | ) | |
62 | dns_client_mock.lookupAddress.return_value = defer.succeed( | |
63 | ([answer_a], None, None), | |
64 | ) | |
65 | dns_client_mock.lookupIPV6Address.return_value = defer.succeed( | |
66 | ([answer_aaaa], None, None), | |
67 | ) | |
52 | 68 | |
53 | 69 | cache = {} |
54 | 70 | |
58 | 74 | |
59 | 75 | dns_client_mock.lookupService.assert_called_once_with(service_name) |
60 | 76 | dns_client_mock.lookupAddress.assert_called_once_with(host_name) |
77 | dns_client_mock.lookupIPV6Address.assert_called_once_with(host_name) | |
61 | 78 | |
62 | self.assertEquals(len(servers), 1) | |
79 | self.assertEquals(len(servers), 2) | |
63 | 80 | self.assertEquals(servers, cache[service_name]) |
64 | 81 | self.assertEquals(servers[0].host, ip_address) |
82 | self.assertEquals(servers[1].host, ip6_address) | |
65 | 83 | |
66 | 84 | @defer.inlineCallbacks |
67 | 85 | def test_from_cache_expired_and_dns_fail(self): |
55 | 55 | config.worker_replication_url = "" |
56 | 56 | config.worker_app = None |
57 | 57 | config.email_enable_notifs = False |
58 | config.block_non_admin_invites = False | |
58 | 59 | |
59 | 60 | config.use_frozen_dicts = True |
60 | 61 | config.database_config = {"name": "sqlite3"} |
13 | 13 | |
14 | 14 | setenv = |
15 | 15 | PYTHONDONTWRITEBYTECODE = no_byte_code |
16 | # As of twisted 16.4, trial tries to import the tests as a package, which | |
17 | # means it needs to be on the pythonpath. | |
18 | PYTHONPATH = {toxinidir} | |
16 | ||
19 | 17 | commands = |
20 | /bin/sh -c "find {toxinidir} -name '*.pyc' -delete ; coverage run {env:COVERAGE_OPTS:} --source={toxinidir}/synapse \ | |
21 | {envbindir}/trial {env:TRIAL_FLAGS:} {posargs:tests} {env:TOXSUFFIX:}" | |
18 | /usr/bin/find "{toxinidir}" -name '*.pyc' -delete | |
19 | coverage run {env:COVERAGE_OPTS:} --source="{toxinidir}/synapse" \ | |
20 | "{envbindir}/trial" {env:TRIAL_FLAGS:} {posargs:tests} {env:TOXSUFFIX:} | |
22 | 21 | {env:DUMP_COVERAGE_COMMAND:coverage report -m} |
22 | ||
23 | [testenv:py27] | |
24 | ||
25 | # As of twisted 16.4, trial tries to import the tests as a package (previously | |
26 | # it loaded the files explicitly), which means they need to be on the | |
27 | # pythonpath. Our sdist doesn't include the 'tests' package, so normally it | |
28 | # doesn't work within the tox virtualenv. | |
29 | # | |
30 | # As a workaround, we tell tox to do install with 'pip -e', which just | |
31 | # creates a symlink to the project directory instead of unpacking the sdist. | |
32 | # | |
33 | # (An alternative to this would be to set PYTHONPATH to include the project | |
34 | # directory. Note two problems with this: | |
35 | # | |
36 | # - if you set it via `setenv`, then it is also set during the 'install' | |
37 | # phase, which inhibits unpacking the sdist, so the virtualenv isn't | |
38 | # useful for anything else without setting PYTHONPATH similarly. | |
39 | # | |
40 | # - `synapse` is also loaded from PYTHONPATH so even if you only set | |
41 | # PYTHONPATH for the test phase, we're still running the tests against | |
42 | # the working copy rather than the contents of the sdist. So frankly | |
43 | # you might as well use -e in the first place. | |
44 | # | |
45 | # ) | |
46 | usedevelop=true | |
23 | 47 | |
24 | 48 | [testenv:packaging] |
25 | 49 | deps = |