Imported Upstream version 0.18.2
Erik Johnston
23 | 23 | .coverage |
24 | 24 | htmlcov |
25 | 25 | |
26 | demo/*.db | |
27 | demo/*.log | |
28 | demo/*.log.* | |
29 | demo/*.pid | |
26 | demo/*/*.db | |
27 | demo/*/*.log | |
28 | demo/*/*.log.* | |
29 | demo/*/*.pid | |
30 | 30 | demo/media_store.* |
31 | 31 | demo/etc |
32 | 32 |
0 | Changes in synapse v0.18.1 (2016-10-0) | |
0 | Changes in synapse v0.18.2 (2016-11-01) | |
1 | ======================================= | |
2 | ||
3 | No changes since v0.18.2-rc5 | |
4 | ||
5 | ||
6 | Changes in synapse v0.18.2-rc5 (2016-10-28) | |
7 | =========================================== | |
8 | ||
9 | Bug fixes: | |
10 | ||
11 | * Fix prometheus process metrics in worker processes (PR #1184) | |
12 | ||
13 | ||
14 | Changes in synapse v0.18.2-rc4 (2016-10-27) | |
15 | =========================================== | |
16 | ||
17 | Bug fixes: | |
18 | ||
19 | * Fix ``user_threepids`` schema delta, which in some instances prevented | |
20 | startup after upgrade (PR #1183) | |
21 | ||
22 | ||
23 | Changes in synapse v0.18.2-rc3 (2016-10-27) | |
24 | =========================================== | |
25 | ||
26 | Changes: | |
27 | ||
28 | * Allow clients to supply access tokens as headers (PR #1098) | |
29 | * Clarify error codes for GET /filter/, thanks to Alexander Maznev (PR #1164) | |
30 | * Make password reset email field case insensitive (PR #1170) | |
31 | * Reduce redundant database work in email pusher (PR #1174) | |
32 | * Allow configurable rate limiting per AS (PR #1175) | |
33 | * Check whether to ratelimit sooner to avoid work (PR #1176) | |
34 | * Standardise prometheus metrics (PR #1177) | |
35 | ||
36 | ||
37 | Bug fixes: | |
38 | ||
39 | * Fix incredibly slow back pagination query (PR #1178) | |
40 | * Fix infinite typing bug (PR #1179) | |
41 | ||
42 | ||
43 | Changes in synapse v0.18.2-rc2 (2016-10-25) | |
44 | =========================================== | |
45 | ||
46 | (This release did not include the changes advertised and was identical to RC1) | |
47 | ||
48 | ||
49 | Changes in synapse v0.18.2-rc1 (2016-10-17) | |
50 | =========================================== | |
51 | ||
52 | Changes: | |
53 | ||
54 | * Remove redundant event_auth index (PR #1113) | |
55 | * Reduce DB hits for replication (PR #1141) | |
56 | * Implement pluggable password auth (PR #1155) | |
57 | * Remove rate limiting from app service senders and fix get_or_create_user | |
58 | requester, thanks to Patrik Oldsberg (PR #1157) | |
59 | * window.postmessage for Interactive Auth fallback (PR #1159) | |
60 | * Use sys.executable instead of hardcoded python, thanks to Pedro Larroy | |
61 | (PR #1162) | |
62 | * Add config option for adding additional TLS fingerprints (PR #1167) | |
63 | * User-interactive auth on delete device (PR #1168) | |
64 | ||
65 | ||
66 | Bug fixes: | |
67 | ||
68 | * Fix not being allowed to set your own state_key, thanks to Patrik Oldsberg | |
69 | (PR #1150) | |
70 | * Fix interactive auth to return 401 for an incorrect password (PR #1160, | |
71 | #1166) | |
72 | * Fix email push notifs being dropped (PR #1169) | |
73 | ||
74 | ||
75 | ||
76 | Changes in synapse v0.18.1 (2016-10-05) | |
1 | 77 | ====================================== |
2 | 78 | |
3 | 79 | No changes since v0.18.1-rc1 |
14 | 14 | |
15 | 15 | Restart synapse |
16 | 16 | |
17 | 3: Check out synapse-prometheus-config | |
18 | https://github.com/matrix-org/synapse-prometheus-config | |
17 | 3: Add a prometheus target for synapse. It needs to set the ``metrics_path`` | |
18 | to a non-default value:: | |
19 | 19 | |
20 | 4: Add ``synapse.html`` and ``synapse.rules`` | |
21 | The ``.html`` file needs to appear in prometheus's ``consoles`` directory, | |
22 | and the ``.rules`` file needs to be invoked somewhere in the main config | |
23 | file. A symlink to each from the git checkout into the prometheus directory | |
24 | might be easiest to ensure ``git pull`` keeps it updated. | |
20 | - job_name: "synapse" | |
21 | metrics_path: "/_synapse/metrics" | |
22 | static_configs: | |
23 | - targets: | |
24 | "my.server.here:9092" | |
25 | 25 | |
26 | 5: Add a prometheus target for synapse | |
27 | This is easiest if prometheus runs on the same machine as synapse, as it can | |
28 | then just use localhost:: | |
26 | Standard Metric Names | |
27 | --------------------- | |
29 | 28 | |
30 | global: { | |
31 | rule_file: "synapse.rules" | |
32 | } | |
29 | As of synapse version 0.18.2, the format of the process-wide metrics has been | |
30 | changed to fit prometheus standard naming conventions. Additionally the units | |
31 | have been changed to seconds, from milliseconds. | |
33 | 32 | |
34 | job: { | |
35 | name: "synapse" | |
33 | ================================== ============================= | |
34 | New name                           Old name | |
35 | ---------------------------------- ----------------------------- | |
36 | process_cpu_user_seconds_total     process_resource_utime / 1000 | |
37 | process_cpu_system_seconds_total   process_resource_stime / 1000 | |
38 | process_open_fds (no 'type' label) process_fds | |
39 | ================================== ============================= | |
36 | 40 | |
37 | target_group: { | |
38 | target: "http://localhost:9092/" | |
39 | } | |
40 | } | |
41 | The Python-specific garbage-collector performance counts have been renamed. | |
41 | 42 | |
42 | 6: Start prometheus:: | |
43 | =========================== ====================== | |
44 | New name                    Old name | |
45 | --------------------------- ---------------------- | |
46 | python_gc_time              reactor_gc_time | |
47 | python_gc_unreachable_total reactor_gc_unreachable | |
48 | python_gc_counts            reactor_gc_counts | |
49 | =========================== ====================== | |
43 | 50 | |
44 | ./prometheus -config.file=prometheus.conf | |
51 | The twisted-specific reactor metrics have been renamed. | |
45 | 52 | |
46 | 7: Wait a few seconds for it to start and perform the first scrape, | |
47 | then visit the console: | |
48 | ||
49 | http://server-where-prometheus-runs:9090/consoles/synapse.html | |
53 | ==================================== ===================== | |
54 | New name                             Old name | |
55 | ------------------------------------ --------------------- | |
56 | python_twisted_reactor_pending_calls reactor_pending_calls | |
57 | python_twisted_reactor_tick_time     reactor_tick_time | |
58 | ==================================== ===================== |
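The renamings in the process and garbage-collector tables can be applied mechanically when porting dashboards or alert rules. A minimal illustrative sketch (the mapping is read off the tables above; dividing an old millisecond sample by its divisor yields the value in the new units):

```python
# Old metric name -> (new name, divisor to convert the old value into the
# new units). Entries taken from the process and GC tables above.
RENAMES = {
    "process_resource_utime": ("process_cpu_user_seconds_total", 1000),
    "process_resource_stime": ("process_cpu_system_seconds_total", 1000),
    "process_fds": ("process_open_fds", 1),
    "reactor_gc_time": ("python_gc_time", 1),
    "reactor_gc_unreachable": ("python_gc_unreachable_total", 1),
    "reactor_gc_counts": ("python_gc_counts", 1),
}


def translate(old_name, old_value):
    """Return (new_name, value in the new units) for an old-style sample."""
    new_name, divisor = RENAMES[old_name]
    return new_name, old_value / float(divisor)


# 2500 ms of user CPU time under the old scheme becomes 2.5 seconds:
print(translate("process_resource_utime", 2500))
```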
17 | 17 | <div class="summarytext">{{ summary_text }}</div> |
18 | 18 | </td> |
19 | 19 | <td class="logo"> |
20 | {% if app_name == "Vector" %} | |
20 | {% if app_name == "Riot" %} | |
21 | <img src="http://matrix.org/img/riot-logo-email.png" width="83" height="83" alt="[Riot]"/> | |
22 | {% elif app_name == "Vector" %} | |
21 | 23 | <img src="http://matrix.org/img/vector-logo-email.png" width="64" height="83" alt="[Vector]"/> |
22 | 24 | {% else %} |
23 | 25 | <img src="http://matrix.org/img/matrix-120x51.png" width="120" height="51" alt="[matrix]"/> |
15 | 15 | """ This is a reference implementation of a Matrix home server. |
16 | 16 | """ |
17 | 17 | |
18 | __version__ = "0.18.1" | |
18 | __version__ = "0.18.2" |
602 | 602 | """ |
603 | 603 | # Can optionally look elsewhere in the request (e.g. headers) |
604 | 604 | try: |
605 | user_id = yield self._get_appservice_user_id(request) | |
605 | user_id, app_service = yield self._get_appservice_user_id(request) | |
606 | 606 | if user_id: |
607 | 607 | request.authenticated_entity = user_id |
608 | defer.returnValue(synapse.types.create_requester(user_id)) | |
608 | defer.returnValue( | |
609 | synapse.types.create_requester(user_id, app_service=app_service) | |
610 | ) | |
609 | 611 | |
610 | 612 | access_token = get_access_token_from_request( |
611 | 613 | request, self.TOKEN_NOT_FOUND_HTTP_STATUS |
643 | 645 | request.authenticated_entity = user.to_string() |
644 | 646 | |
645 | 647 | defer.returnValue(synapse.types.create_requester( |
646 | user, token_id, is_guest, device_id)) | |
648 | user, token_id, is_guest, device_id, app_service=app_service) | |
649 | ) | |
647 | 650 | except KeyError: |
648 | 651 | raise AuthError( |
649 | 652 | self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.", |
652 | 655 | |
653 | 656 | @defer.inlineCallbacks |
654 | 657 | def _get_appservice_user_id(self, request): |
655 | app_service = yield self.store.get_app_service_by_token( | |
658 | app_service = self.store.get_app_service_by_token( | |
656 | 659 | get_access_token_from_request( |
657 | 660 | request, self.TOKEN_NOT_FOUND_HTTP_STATUS |
658 | 661 | ) |
659 | 662 | ) |
660 | 663 | if app_service is None: |
661 | defer.returnValue(None) | |
664 | defer.returnValue((None, None)) | |
662 | 665 | |
663 | 666 | if "user_id" not in request.args: |
664 | defer.returnValue(app_service.sender) | |
667 | defer.returnValue((app_service.sender, app_service)) | |
665 | 668 | |
666 | 669 | user_id = request.args["user_id"][0] |
667 | 670 | if app_service.sender == user_id: |
668 | defer.returnValue(app_service.sender) | |
671 | defer.returnValue((app_service.sender, app_service)) | |
669 | 672 | |
670 | 673 | if not app_service.is_interested_in_user(user_id): |
671 | 674 | raise AuthError( |
677 | 680 | 403, |
678 | 681 | "Application service has not registered this user" |
679 | 682 | ) |
680 | defer.returnValue(user_id) | |
683 | defer.returnValue((user_id, app_service)) | |
681 | 684 | |
682 | 685 | @defer.inlineCallbacks |
683 | 686 | def get_user_by_access_token(self, token, rights="access"): |
854 | 857 | } |
855 | 858 | defer.returnValue(user_info) |
856 | 859 | |
857 | @defer.inlineCallbacks | |
858 | 860 | def get_appservice_by_req(self, request): |
859 | 861 | try: |
860 | 862 | token = get_access_token_from_request( |
861 | 863 | request, self.TOKEN_NOT_FOUND_HTTP_STATUS |
862 | 864 | ) |
863 | service = yield self.store.get_app_service_by_token(token) | |
865 | service = self.store.get_app_service_by_token(token) | |
864 | 866 | if not service: |
865 | 867 | logger.warn("Unrecognised appservice access token: %s" % (token,)) |
866 | 868 | raise AuthError( |
869 | 871 | errcode=Codes.UNKNOWN_TOKEN |
870 | 872 | ) |
871 | 873 | request.authenticated_entity = service.sender |
872 | defer.returnValue(service) | |
874 | return defer.succeed(service) | |
873 | 875 | except KeyError: |
874 | 876 | raise AuthError( |
875 | 877 | self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token." |
1001 | 1003 | 403, |
1002 | 1004 | "You are not allowed to set others state" |
1003 | 1005 | ) |
1004 | else: | |
1005 | sender_domain = UserID.from_string( | |
1006 | event.user_id | |
1007 | ).domain | |
1008 | ||
1009 | if sender_domain != event.state_key: | |
1010 | raise AuthError( | |
1011 | 403, | |
1012 | "You are not allowed to set others state" | |
1013 | ) | |
1014 | 1006 | |
1015 | 1007 | return True |
1016 | 1008 | |
1177 | 1169 | bool: False if no access_token was given, True otherwise. |
1178 | 1170 | """ |
1179 | 1171 | query_params = request.args.get("access_token") |
1180 | return bool(query_params) | |
1172 | auth_headers = request.requestHeaders.getRawHeaders("Authorization") | |
1173 | return bool(query_params) or bool(auth_headers) | |
1181 | 1174 | |
1182 | 1175 | |
1183 | 1176 | def get_access_token_from_request(request, token_not_found_http_status=401): |
1195 | 1188 | Raises: |
1196 | 1189 | AuthError: If there isn't an access_token in the request. |
1197 | 1190 | """ |
1191 | ||
1192 | auth_headers = request.requestHeaders.getRawHeaders("Authorization") | |
1198 | 1193 | query_params = request.args.get("access_token") |
1199 | # Try to get the access_token from the query params. | |
1200 | if not query_params: | |
1201 | raise AuthError( | |
1202 | token_not_found_http_status, | |
1203 | "Missing access token.", | |
1204 | errcode=Codes.MISSING_TOKEN | |
1205 | ) | |
1206 | ||
1207 | return query_params[0] | |
1194 | if auth_headers: | |
1195 | # Try to get the access_token from an "Authorization: Bearer" | |
1196 | # header | |
1197 | if query_params is not None: | |
1198 | raise AuthError( | |
1199 | token_not_found_http_status, | |
1200 | "Mixing Authorization headers and access_token query parameters.", | |
1201 | errcode=Codes.MISSING_TOKEN, | |
1202 | ) | |
1203 | if len(auth_headers) > 1: | |
1204 | raise AuthError( | |
1205 | token_not_found_http_status, | |
1206 | "Too many Authorization headers.", | |
1207 | errcode=Codes.MISSING_TOKEN, | |
1208 | ) | |
1209 | parts = auth_headers[0].split(" ") | |
1210 | if parts[0] == "Bearer" and len(parts) == 2: | |
1211 | return parts[1] | |
1212 | else: | |
1213 | raise AuthError( | |
1214 | token_not_found_http_status, | |
1215 | "Invalid Authorization header.", | |
1216 | errcode=Codes.MISSING_TOKEN, | |
1217 | ) | |
1218 | else: | |
1219 | # Try to get the access_token from the query params. | |
1220 | if not query_params: | |
1221 | raise AuthError( | |
1222 | token_not_found_http_status, | |
1223 | "Missing access token.", | |
1224 | errcode=Codes.MISSING_TOKEN | |
1225 | ) | |
1226 | ||
1227 | return query_params[0] |
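The precedence rules implemented above can be summarised in isolation. A standalone sketch of the same extraction logic (`ValueError` stands in for `AuthError`, and the inputs mirror twisted's header list and query-parameter list):

```python
# A token may arrive either as an "Authorization: Bearer <token>" header or
# as an ?access_token= query parameter, but never both at once.

def extract_access_token(auth_headers, query_params):
    """auth_headers: list of Authorization header values, or None.
    query_params: list of access_token query values, or None."""
    if auth_headers:
        if query_params is not None:
            raise ValueError(
                "Mixing Authorization headers and access_token query parameters."
            )
        if len(auth_headers) > 1:
            raise ValueError("Too many Authorization headers.")
        parts = auth_headers[0].split(" ")
        if parts[0] == "Bearer" and len(parts) == 2:
            return parts[1]
        raise ValueError("Invalid Authorization header.")
    if not query_params:
        raise ValueError("Missing access token.")
    return query_params[0]


print(extract_access_token(["Bearer s3kr3t"], None))  # → s3kr3t
print(extract_access_token(None, ["s3kr3t"]))         # → s3kr3t
```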
22 | 22 | def __init__(self): |
23 | 23 | self.message_counts = collections.OrderedDict() |
24 | 24 | |
25 | def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count): | |
25 | def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count, update=True): | |
26 | 26 | """Can the user send a message? |
27 | 27 | Args: |
28 | 28 | user_id: The user sending a message. |
31 | 31 | second. |
32 | 32 | burst_count: How many messages the user can send before being |
33 | 33 | limited. |
34 | update (bool): Whether to update the message rates or not. This is | |
35 | useful to check if a message would be allowed to be sent before | |
36 | it is actually ready to be sent. | |
34 | 37 | Returns: |
35 | 38 | A pair of a bool indicating if they can send a message now and a |
36 | 39 | time in seconds of when they can next send a message. |
37 | 40 | """ |
38 | 41 | self.prune_message_counts(time_now_s) |
39 | message_count, time_start, _ignored = self.message_counts.pop( | |
42 | message_count, time_start, _ignored = self.message_counts.get( | |
40 | 43 | user_id, (0., time_now_s, None), |
41 | 44 | ) |
42 | 45 | time_delta = time_now_s - time_start |
51 | 54 | allowed = True |
52 | 55 | message_count += 1 |
53 | 56 | |
54 | self.message_counts[user_id] = ( | |
55 | message_count, time_start, msg_rate_hz | |
56 | ) | |
57 | if update: | |
58 | self.message_counts[user_id] = ( | |
59 | message_count, time_start, msg_rate_hz | |
60 | ) | |
57 | 61 | |
58 | 62 | if msg_rate_hz > 0: |
59 | 63 | time_allowed = ( |
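The effect of the new ``update`` flag is easiest to see in a stripped-down model of the limiter. This is a sketch, not the real class: pruning and the `time_allowed` calculation are omitted, and the bucket arithmetic is simplified.

```python
# With update=False the limiter answers "would this send be allowed?"
# without consuming any of the caller's budget; only update=True commits.

class Ratelimiter(object):
    def __init__(self):
        self.message_counts = {}

    def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count,
                     update=True):
        message_count, time_start = self.message_counts.get(
            user_id, (0.0, time_now_s)
        )
        # Budget leaks away at msg_rate_hz messages per second.
        sent_count = message_count - (time_now_s - time_start) * msg_rate_hz
        if sent_count < 0:
            message_count, time_start, sent_count = 0.0, time_now_s, 0.0
        allowed = sent_count < burst_count
        if allowed:
            message_count += 1
        if update:  # only commit the new count when asked to
            self.message_counts[user_id] = (message_count, time_start)
        return allowed


limiter = Ratelimiter()
# Probe twice without committing: the second probe still sees a full budget.
assert limiter.send_message("@u:hs", 0, 0.2, 1, update=False)
assert limiter.send_message("@u:hs", 0, 0.2, 1, update=False)
# Commit once; an immediate second send now exceeds the burst of 1.
assert limiter.send_message("@u:hs", 0, 0.2, 1)
assert not limiter.send_message("@u:hs", 0, 0.2, 1)
```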
196 | 196 | yield start_pusher(user_id, app_id, pushkey) |
197 | 197 | |
198 | 198 | stream = results.get("events") |
199 | if stream: | |
199 | if stream and stream["rows"]: | |
200 | 200 | min_stream_id = stream["rows"][0][0] |
201 | 201 | max_stream_id = stream["position"] |
202 | 202 | preserve_fn(pusher_pool.on_new_notifications)( |
204 | 204 | ) |
205 | 205 | |
206 | 206 | stream = results.get("receipts") |
207 | if stream: | |
207 | if stream and stream["rows"]: | |
208 | 208 | rows = stream["rows"] |
209 | 209 | affected_room_ids = set(row[1] for row in rows) |
210 | 210 | min_stream_id = rows[0][0] |
23 | 23 | import sys |
24 | 24 | import yaml |
25 | 25 | |
26 | SYNAPSE = ["python", "-B", "-m", "synapse.app.homeserver"] | |
26 | SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"] | |
27 | 27 | |
28 | 28 | GREEN = "\x1b[1;32m" |
29 | 29 | RED = "\x1b[1;31m" |
80 | 80 | NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS] |
81 | 81 | |
82 | 82 | def __init__(self, token, url=None, namespaces=None, hs_token=None, |
83 | sender=None, id=None, protocols=None): | |
83 | sender=None, id=None, protocols=None, rate_limited=True): | |
84 | 84 | self.token = token |
85 | 85 | self.url = url |
86 | 86 | self.hs_token = hs_token |
93 | 93 | self.protocols = set(protocols) |
94 | 94 | else: |
95 | 95 | self.protocols = set() |
96 | ||
97 | self.rate_limited = rate_limited | |
96 | 98 | |
97 | 99 | def _check_namespaces(self, namespaces): |
98 | 100 | # Sanity check that it is of the form: |
233 | 235 | def is_exclusive_room(self, room_id): |
234 | 236 | return self._is_exclusive(ApplicationService.NS_ROOMS, room_id) |
235 | 237 | |
238 | def is_rate_limited(self): | |
239 | return self.rate_limited | |
240 | ||
236 | 241 | def __str__(self): |
237 | 242 | return "ApplicationService: %s" % (self.__dict__,) |
109 | 109 | user = UserID(localpart, hostname) |
110 | 110 | user_id = user.to_string() |
111 | 111 | |
112 | # Rate limiting for users of this AS is on by default (excludes sender) | |
113 | rate_limited = True | |
114 | if isinstance(as_info.get("rate_limited"), bool): | |
115 | rate_limited = as_info.get("rate_limited") | |
116 | ||
112 | 117 | # namespace checks |
113 | 118 | if not isinstance(as_info.get("namespaces"), dict): |
114 | 119 | raise KeyError("Requires 'namespaces' object.") |
154 | 159 | sender=user_id, |
155 | 160 | id=as_info["id"], |
156 | 161 | protocols=protocols, |
162 | rate_limited=rate_limited | |
157 | 163 | ) |
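The new registration option can be exercised on its own. A small sketch of just the parsing rule added above: rate limiting is on by default, and only an explicit boolean in the registration file overrides it.

```python
# Mirrors the isinstance() check in load_appservices: anything other than a
# real bool (e.g. the string "no") leaves the default in place.

def parse_rate_limited(as_info):
    rate_limited = True  # on by default (the AS sender itself is excluded)
    if isinstance(as_info.get("rate_limited"), bool):
        rate_limited = as_info.get("rate_limited")
    return rate_limited


assert parse_rate_limited({}) is True
assert parse_rate_limited({"rate_limited": False}) is False
assert parse_rate_limited({"rate_limited": "no"}) is True  # non-bool ignored
```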
29 | 29 | from .cas import CasConfig |
30 | 30 | from .password import PasswordConfig |
31 | 31 | from .jwt import JWTConfig |
32 | from .ldap import LDAPConfig | |
32 | from .password_auth_providers import PasswordAuthProviderConfig | |
33 | 33 | from .emailconfig import EmailConfig |
34 | 34 | from .workers import WorkerConfig |
35 | 35 | |
38 | 38 | RatelimitConfig, ContentRepositoryConfig, CaptchaConfig, |
39 | 39 | VoipConfig, RegistrationConfig, MetricsConfig, ApiConfig, |
40 | 40 | AppServiceConfig, KeyConfig, SAML2Config, CasConfig, |
41 | JWTConfig, LDAPConfig, PasswordConfig, EmailConfig, | |
42 | WorkerConfig,): | |
41 | JWTConfig, PasswordConfig, EmailConfig, | |
42 | WorkerConfig, PasswordAuthProviderConfig,): | |
43 | 43 | pass |
44 | 44 | |
45 | 45 |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2015 Niklas Riekenbrauck | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from ._base import Config, ConfigError | |
16 | ||
17 | ||
18 | MISSING_LDAP3 = ( | |
19 | "Missing ldap3 library. This is required for LDAP Authentication." | |
20 | ) | |
21 | ||
22 | ||
23 | class LDAPMode(object): | |
24 | SIMPLE = "simple", | |
25 | SEARCH = "search", | |
26 | ||
27 | LIST = (SIMPLE, SEARCH) | |
28 | ||
29 | ||
30 | class LDAPConfig(Config): | |
31 | def read_config(self, config): | |
32 | ldap_config = config.get("ldap_config", {}) | |
33 | ||
34 | self.ldap_enabled = ldap_config.get("enabled", False) | |
35 | ||
36 | if self.ldap_enabled: | |
37 | # verify dependencies are available | |
38 | try: | |
39 | import ldap3 | |
40 | ldap3 # to stop unused lint | |
41 | except ImportError: | |
42 | raise ConfigError(MISSING_LDAP3) | |
43 | ||
44 | self.ldap_mode = LDAPMode.SIMPLE | |
45 | ||
46 | # verify config sanity | |
47 | self.require_keys(ldap_config, [ | |
48 | "uri", | |
49 | "base", | |
50 | "attributes", | |
51 | ]) | |
52 | ||
53 | self.ldap_uri = ldap_config["uri"] | |
54 | self.ldap_start_tls = ldap_config.get("start_tls", False) | |
55 | self.ldap_base = ldap_config["base"] | |
56 | self.ldap_attributes = ldap_config["attributes"] | |
57 | ||
58 | if "bind_dn" in ldap_config: | |
59 | self.ldap_mode = LDAPMode.SEARCH | |
60 | self.require_keys(ldap_config, [ | |
61 | "bind_dn", | |
62 | "bind_password", | |
63 | ]) | |
64 | ||
65 | self.ldap_bind_dn = ldap_config["bind_dn"] | |
66 | self.ldap_bind_password = ldap_config["bind_password"] | |
67 | self.ldap_filter = ldap_config.get("filter", None) | |
68 | ||
69 | # verify attribute lookup | |
70 | self.require_keys(ldap_config['attributes'], [ | |
71 | "uid", | |
72 | "name", | |
73 | "mail", | |
74 | ]) | |
75 | ||
76 | def require_keys(self, config, required): | |
77 | missing = [key for key in required if key not in config] | |
78 | if missing: | |
79 | raise ConfigError( | |
80 | "LDAP enabled but missing required config values: {}".format( | |
81 | ", ".join(missing) | |
82 | ) | |
83 | ) | |
84 | ||
85 | def default_config(self, **kwargs): | |
86 | return """\ | |
87 | # ldap_config: | |
88 | # enabled: true | |
89 | # uri: "ldap://ldap.example.com:389" | |
90 | # start_tls: true | |
91 | # base: "ou=users,dc=example,dc=com" | |
92 | # attributes: | |
93 | # uid: "cn" | |
94 | # mail: "email" | |
95 | # name: "givenName" | |
96 | # #bind_dn: | |
97 | # #bind_password: | |
98 | # #filter: "(objectClass=posixAccount)" | |
99 | """ |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2016 Openmarket | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from ._base import Config | |
16 | ||
17 | import importlib | |
18 | ||
19 | ||
20 | class PasswordAuthProviderConfig(Config): | |
21 | def read_config(self, config): | |
22 | self.password_providers = [] | |
23 | ||
24 | # We want to be backwards compatible with the old `ldap_config` | |
25 | # param. | |
26 | ldap_config = config.get("ldap_config", {}) | |
27 | self.ldap_enabled = ldap_config.get("enabled", False) | |
28 | if self.ldap_enabled: | |
29 | from synapse.util.ldap_auth_provider import LdapAuthProvider | |
30 | parsed_config = LdapAuthProvider.parse_config(ldap_config) | |
31 | self.password_providers.append((LdapAuthProvider, parsed_config)) | |
32 | ||
33 | providers = config.get("password_providers", []) | |
34 | for provider in providers: | |
35 | # We need to import the module, and then pick the class out of | |
36 | # that, so we split based on the last dot. | |
37 | module, clz = provider['module'].rsplit(".", 1) | |
38 | module = importlib.import_module(module) | |
39 | provider_class = getattr(module, clz) | |
40 | ||
41 | provider_config = provider_class.parse_config(provider["config"]) | |
42 | self.password_providers.append((provider_class, provider_config)) | |
43 | ||
44 | def default_config(self, **kwargs): | |
45 | return """\ | |
46 | # password_providers: | |
47 | # - module: "synapse.util.ldap_auth_provider.LdapAuthProvider" | |
48 | # config: | |
49 | # enabled: true | |
50 | # uri: "ldap://ldap.example.com:389" | |
51 | # start_tls: true | |
52 | # base: "ou=users,dc=example,dc=com" | |
53 | # attributes: | |
54 | # uid: "cn" | |
55 | # mail: "email" | |
56 | # name: "givenName" | |
57 | # #bind_dn: | |
58 | # #bind_password: | |
59 | # #filter: "(objectClass=posixAccount)" | |
60 | """ |
17 | 17 | from OpenSSL import crypto |
18 | 18 | import subprocess |
19 | 19 | import os |
20 | ||
21 | from hashlib import sha256 | |
22 | from unpaddedbase64 import encode_base64 | |
20 | 23 | |
21 | 24 | GENERATE_DH_PARAMS = False |
22 | 25 | |
40 | 43 | self.tls_dh_params_path = self.check_file( |
41 | 44 | config.get("tls_dh_params_path"), "tls_dh_params" |
42 | 45 | ) |
46 | ||
47 | self.tls_fingerprints = config["tls_fingerprints"] | |
48 | ||
49 | # Check that our own certificate is included in the list of fingerprints | |
50 | # and include it if it is not. | |
51 | x509_certificate_bytes = crypto.dump_certificate( | |
52 | crypto.FILETYPE_ASN1, | |
53 | self.tls_certificate | |
54 | ) | |
55 | sha256_fingerprint = encode_base64(sha256(x509_certificate_bytes).digest()) | |
56 | sha256_fingerprints = set(f["sha256"] for f in self.tls_fingerprints) | |
57 | if sha256_fingerprint not in sha256_fingerprints: | |
58 | self.tls_fingerprints.append({u"sha256": sha256_fingerprint}) | |
43 | 59 | |
44 | 60 | # This config option applies to non-federation HTTP clients |
45 | 61 | # (e.g. for talking to recaptcha, identity servers, and such) |
72 | 88 | |
73 | 89 | # Don't bind to the https port |
74 | 90 | no_tls: False |
91 | ||
92 | # List of allowed TLS fingerprints for this server to publish along | |
93 | # with the signing keys for this server. Other matrix servers that | |
94 | # make HTTPS requests to this server will check that the TLS | |
95 | # certificates returned by this server match one of the fingerprints. | |
96 | # | |
97 | # Synapse automatically adds the fingerprint of its own certificate | |
98 | # to the list. So if federation traffic is handled directly by synapse | |
99 | # then no modification to the list is required. | |
100 | # | |
101 | # If synapse is run behind a load balancer that handles the TLS then it | |
102 | # will be necessary to add the fingerprints of the certificates used by | |
103 | the load balancers to this list if they are different from the one | |
104 | # synapse is using. | |
105 | # | |
106 | # Homeservers are permitted to cache the list of TLS fingerprints | |
107 | returned in the key responses up to the "valid_until_ts" returned in | |
108 | the key response. It may be necessary to publish the fingerprints of a new | |
109 | # certificate and wait until the "valid_until_ts" of the previous key | |
110 | # responses have passed before deploying it. | |
111 | tls_fingerprints: [] | |
112 | # tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}] | |
75 | 113 | """ % locals() |
76 | 114 | |
77 | 115 | def read_tls_certificate(self, cert_path): |
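For the load-balancer case described in the config comment, an admin needs to compute an entry for `tls_fingerprints` themselves. A sketch of that computation using only the stdlib (`unpaddedbase64.encode_base64` is ordinary base64 with the trailing `=` padding stripped; the certificate bytes must be in DER form, e.g. from `openssl x509 -outform DER`):

```python
import base64
import hashlib


def tls_fingerprint(der_certificate_bytes):
    """Return a tls_fingerprints entry for a DER-encoded certificate."""
    digest = hashlib.sha256(der_certificate_bytes).digest()
    # Unpadded base64: standard base64 with '=' padding removed.
    b64 = base64.b64encode(digest).rstrip(b"=").decode("ascii")
    return {u"sha256": b64}


# Placeholder input; in practice read the DER certificate from disk.
print(tls_fingerprint(b"not-a-real-certificate"))
```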
54 | 54 | |
55 | 55 | def ratelimit(self, requester): |
56 | 56 | time_now = self.clock.time() |
57 | user_id = requester.user.to_string() | |
58 | ||
59 | # The AS user itself is never rate limited. | |
60 | app_service = self.store.get_app_service_by_user_id(user_id) | |
61 | if app_service is not None: | |
62 | return # do not ratelimit app service senders | |
63 | ||
64 | # Disable rate limiting of users belonging to any AS that is configured | |
65 | # not to be rate limited in its registration file (rate_limited: true|false). | |
66 | if requester.app_service and not requester.app_service.is_rate_limited(): | |
67 | return | |
68 | ||
57 | 69 | allowed, time_allowed = self.ratelimiter.send_message( |
58 | requester.user.to_string(), time_now, | |
70 | user_id, time_now, | |
59 | 71 | msg_rate_hz=self.hs.config.rc_messages_per_second, |
60 | 72 | burst_count=self.hs.config.rc_message_burst_count, |
61 | 73 | ) |
58 | 58 | Args: |
59 | 59 | current_id(int): The current maximum ID. |
60 | 60 | """ |
61 | services = yield self.store.get_app_services() | |
61 | services = self.store.get_app_services() | |
62 | 62 | if not services or not self.notify_appservices: |
63 | 63 | return |
64 | 64 | |
141 | 141 | association can be found. |
142 | 142 | """ |
143 | 143 | room_alias_str = room_alias.to_string() |
144 | services = yield self.store.get_app_services() | |
144 | services = self.store.get_app_services() | |
145 | 145 | alias_query_services = [ |
146 | 146 | s for s in services if ( |
147 | 147 | s.is_interested_in_alias(room_alias_str) |
176 | 176 | |
177 | 177 | @defer.inlineCallbacks |
178 | 178 | def get_3pe_protocols(self, only_protocol=None): |
179 | services = yield self.store.get_app_services() | |
179 | services = self.store.get_app_services() | |
180 | 180 | protocols = {} |
181 | 181 | |
182 | 182 | # Collect up all the individual protocol responses out of the ASes |
223 | 223 | list<ApplicationService>: A list of services interested in this |
224 | 224 | event based on the service regex. |
225 | 225 | """ |
226 | services = yield self.store.get_app_services() | |
226 | services = self.store.get_app_services() | |
227 | 227 | interested_list = [ |
228 | 228 | s for s in services if ( |
229 | 229 | yield s.is_interested(event, self.store) |
231 | 231 | ] |
232 | 232 | defer.returnValue(interested_list) |
233 | 233 | |
234 | @defer.inlineCallbacks | |
235 | 234 | def _get_services_for_user(self, user_id): |
236 | services = yield self.store.get_app_services() | |
235 | services = self.store.get_app_services() | |
237 | 236 | interested_list = [ |
238 | 237 | s for s in services if ( |
239 | 238 | s.is_interested_in_user(user_id) |
240 | 239 | ) |
241 | 240 | ] |
242 | defer.returnValue(interested_list) | |
243 | ||
244 | @defer.inlineCallbacks | |
241 | return defer.succeed(interested_list) | |
242 | ||
245 | 243 | def _get_services_for_3pn(self, protocol): |
246 | services = yield self.store.get_app_services() | |
244 | services = self.store.get_app_services() | |
247 | 245 | interested_list = [ |
248 | 246 | s for s in services if s.is_interested_in_protocol(protocol) |
249 | 247 | ] |
250 | defer.returnValue(interested_list) | |
248 | return defer.succeed(interested_list) | |
251 | 249 | |
252 | 250 | @defer.inlineCallbacks |
253 | 251 | def _is_unknown_user(self, user_id): |
263 | 261 | return |
264 | 262 | |
265 | 263 | # user not found; could be the AS though, so check. |
266 | services = yield self.store.get_app_services() | |
264 | services = self.store.get_app_services() | |
267 | 265 | service_list = [s for s in services if s.sender == user_id] |
268 | 266 | defer.returnValue(len(service_list) == 0) |
269 | 267 |
19 | 19 | from synapse.types import UserID |
20 | 20 | from synapse.api.errors import AuthError, LoginError, Codes, StoreError, SynapseError |
21 | 21 | from synapse.util.async import run_on_reactor |
22 | from synapse.config.ldap import LDAPMode | |
23 | 22 | |
24 | 23 | from twisted.web.client import PartialDownloadError |
25 | 24 | |
27 | 26 | import bcrypt |
28 | 27 | import pymacaroons |
29 | 28 | import simplejson |
30 | ||
31 | try: | |
32 | import ldap3 | |
33 | import ldap3.core.exceptions | |
34 | except ImportError: | |
35 | ldap3 = None | |
36 | pass | |
37 | 29 | |
38 | 30 | import synapse.util.stringutils as stringutils |
39 | 31 | |
58 | 50 | } |
59 | 51 | self.bcrypt_rounds = hs.config.bcrypt_rounds |
60 | 52 | self.sessions = {} |
61 | self.INVALID_TOKEN_HTTP_STATUS = 401 | |
62 | ||
63 | self.ldap_enabled = hs.config.ldap_enabled | |
64 | if self.ldap_enabled: | |
65 | if not ldap3: | |
66 | raise RuntimeError( | |
67 | 'Missing ldap3 library. This is required for LDAP Authentication.' | |
68 | ) | |
69 | self.ldap_mode = hs.config.ldap_mode | |
70 | self.ldap_uri = hs.config.ldap_uri | |
71 | self.ldap_start_tls = hs.config.ldap_start_tls | |
72 | self.ldap_base = hs.config.ldap_base | |
73 | self.ldap_attributes = hs.config.ldap_attributes | |
74 | if self.ldap_mode == LDAPMode.SEARCH: | |
75 | self.ldap_bind_dn = hs.config.ldap_bind_dn | |
76 | self.ldap_bind_password = hs.config.ldap_bind_password | |
77 | self.ldap_filter = hs.config.ldap_filter | |
53 | ||
54 | account_handler = _AccountHandler( | |
55 | hs, check_user_exists=self.check_user_exists | |
56 | ) | |
57 | ||
58 | self.password_providers = [ | |
59 | module(config=config, account_handler=account_handler) | |
60 | for module, config in hs.config.password_providers | |
61 | ] | |
78 | 62 | |
79 | 63 | self.hs = hs # FIXME better possibility to access registrationHandler later? |
80 | 64 | self.device_handler = hs.get_device_handler() |
148 | 132 | creds = session['creds'] |
149 | 133 | |
150 | 134 | # check auth type currently being presented |
135 | errordict = {} | |
151 | 136 | if 'type' in authdict: |
152 | if authdict['type'] not in self.checkers: | |
137 | login_type = authdict['type'] | |
138 | if login_type not in self.checkers: | |
153 | 139 | raise LoginError(400, "", Codes.UNRECOGNIZED) |
154 | result = yield self.checkers[authdict['type']](authdict, clientip) | |
155 | if result: | |
156 | creds[authdict['type']] = result | |
157 | self._save_session(session) | |
140 | try: | |
141 | result = yield self.checkers[login_type](authdict, clientip) | |
142 | if result: | |
143 | creds[login_type] = result | |
144 | self._save_session(session) | |
145 | except LoginError, e: | |
146 | if login_type == LoginType.EMAIL_IDENTITY: | |
147 | # riot used to have a bug where it would request a new | |
148 | # validation token (thus sending a new email) each time it | |
149 | # got a 401 with a 'flows' field. | |
150 | # (https://github.com/vector-im/vector-web/issues/2447). | |
151 | # | |
152 | # Grandfather in the old behaviour for now to avoid | |
153 | # breaking old riot deployments. | |
154 | raise e | |
155 | ||
156 | # this step failed. Merge the error dict into the response | |
157 | # so that the client can have another go. | |
158 | errordict = e.error_dict() | |
158 | 159 | |
159 | 160 | for f in flows: |
160 | 161 | if len(set(f) - set(creds.keys())) == 0: |
163 | 164 | |
164 | 165 | ret = self._auth_dict_for_flows(flows, session) |
165 | 166 | ret['completed'] = creds.keys() |
167 | ret.update(errordict) | |
166 | 168 | defer.returnValue((False, ret, clientdict, session['id'])) |
167 | 169 | |
168 | 170 | @defer.inlineCallbacks |
430 | 432 | defer.Deferred: (str) canonical_user_id, or None if zero or |
431 | 433 | multiple matches |
432 | 434 | """ |
433 | try: | |
434 | res = yield self._find_user_id_and_pwd_hash(user_id) | |
435 | res = yield self._find_user_id_and_pwd_hash(user_id) | |
436 | if res is not None: | |
435 | 437 | defer.returnValue(res[0]) |
436 | except LoginError: | |
437 | defer.returnValue(None) | |
438 | defer.returnValue(None) | |
438 | 439 | |
439 | 440 | @defer.inlineCallbacks |
440 | 441 | def _find_user_id_and_pwd_hash(self, user_id): |
441 | 442 | """Checks to see if a user with the given id exists. Will check case |
442 | insensitively, but will throw if there are multiple inexact matches. | |
443 | insensitively, but will return None if there are multiple inexact | |
444 | matches. | |
443 | 445 | |
444 | 446 | Returns: |
445 | 447 | tuple: A 2-tuple of `(canonical_user_id, password_hash)` |
448 | None: if there is not exactly one match | |
446 | 449 | """ |
447 | 450 | user_infos = yield self.store.get_users_by_id_case_insensitive(user_id) |
451 | ||
452 | result = None | |
448 | 453 | if not user_infos: |
449 | 454 | logger.warn("Attempted to login as %s but they do not exist", user_id) |
450 | raise LoginError(403, "", errcode=Codes.FORBIDDEN) | |
451 | ||
452 | if len(user_infos) > 1: | |
453 | if user_id not in user_infos: | |
454 | logger.warn( | |
455 | "Attempted to login as %s but it matches more than one user " | |
456 | "inexactly: %r", | |
457 | user_id, user_infos.keys() | |
458 | ) | |
459 | raise LoginError(403, "", errcode=Codes.FORBIDDEN) | |
460 | ||
461 | defer.returnValue((user_id, user_infos[user_id])) | |
455 | elif len(user_infos) == 1: | |
456 | # a single match (possibly not exact) | |
457 | result = user_infos.popitem() | |
458 | elif user_id in user_infos: | |
459 | # multiple matches, but one is exact | |
460 | result = (user_id, user_infos[user_id]) | |
462 | 461 | else: |
463 | defer.returnValue(user_infos.popitem()) | |
462 | # multiple matches, none of them exact | |
463 | logger.warn( | |
464 | "Attempted to login as %s but it matches more than one user " | |
465 | "inexactly: %r", | |
466 | user_id, user_infos.keys() | |
467 | ) | |
468 | defer.returnValue(result) | |
464 | 469 | |
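The rewritten lookup resolves case-insensitive matches in three steps instead of raising: a single match wins, an exact match wins among several, and anything else yields `None`. The decision logic can be sketched in isolation (the `resolve_user` helper is invented for illustration):

```python
def resolve_user(user_id, user_infos):
    """Pick the canonical (user_id, password_hash) pair from a dict of
    case-insensitive matches, mirroring _find_user_id_and_pwd_hash."""
    if not user_infos:
        return None                      # unknown user
    if len(user_infos) == 1:
        # a single match (possibly differing in case): accept it
        return list(user_infos.items())[0]
    if user_id in user_infos:
        # multiple matches, but one is exact: prefer it
        return (user_id, user_infos[user_id])
    return None                          # ambiguous: refuse to guess


print(resolve_user("@Bob:hs", {"@bob:hs": "hash1"}))
# ('@bob:hs', 'hash1')   -- single inexact match accepted
print(resolve_user("@bob:hs", {"@bob:hs": "h1", "@Bob:hs": "h2"}))
# ('@bob:hs', 'h1')      -- exact match preferred
print(resolve_user("@BOB:hs", {"@bob:hs": "h1", "@Bob:hs": "h2"}))
# None                   -- ambiguous
```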
465 | 470 | @defer.inlineCallbacks |
466 | 471 | def _check_password(self, user_id, password): |
474 | 479 | Returns: |
475 | 480 | (str) the canonical_user_id |
476 | 481 | Raises: |
477 | LoginError if the password was incorrect | |
478 | """ | |
479 | valid_ldap = yield self._check_ldap_password(user_id, password) | |
480 | if valid_ldap: | |
481 | defer.returnValue(user_id) | |
482 | ||
483 | result = yield self._check_local_password(user_id, password) | |
484 | defer.returnValue(result) | |
482 | LoginError if login fails | |
483 | """ | |
484 | for provider in self.password_providers: | |
485 | is_valid = yield provider.check_password(user_id, password) | |
486 | if is_valid: | |
487 | defer.returnValue(user_id) | |
488 | ||
489 | canonical_user_id = yield self._check_local_password(user_id, password) | |
490 | ||
491 | if canonical_user_id: | |
492 | defer.returnValue(canonical_user_id) | |
493 | ||
494 | # unknown username or invalid password. We raise a 403 here, but note | |
495 | # that if we're doing user-interactive login, it turns all LoginErrors | |
496 | # into a 401 anyway. | |
497 | raise LoginError( | |
498 | 403, "Invalid password", | |
499 | errcode=Codes.FORBIDDEN | |
500 | ) | |
485 | 501 | |
486 | 502 | @defer.inlineCallbacks |
487 | 503 | def _check_local_password(self, user_id, password): |
488 | 504 | """Authenticate a user against the local password database. |
489 | 505 | |
490 | user_id is checked case insensitively, but will throw if there are | |
506 | user_id is checked case insensitively, but will return None if there are | |
491 | 507 | multiple inexact matches. |
492 | 508 | |
493 | 509 | Args: |
494 | 510 | user_id (str): complete @user:id |
495 | 511 | Returns: |
496 | (str) the canonical_user_id | |
497 | Raises: | |
498 | LoginError if the password was incorrect | |
499 | """ | |
500 | user_id, password_hash = yield self._find_user_id_and_pwd_hash(user_id) | |
512 | (str) the canonical_user_id, or None if unknown user / bad password | |
513 | """ | |
514 | lookupres = yield self._find_user_id_and_pwd_hash(user_id) | |
515 | if not lookupres: | |
516 | defer.returnValue(None) | |
517 | (user_id, password_hash) = lookupres | |
501 | 518 | result = self.validate_hash(password, password_hash) |
502 | 519 | if not result: |
503 | 520 | logger.warn("Failed password login for user %s", user_id) |
504 | raise LoginError(403, "", errcode=Codes.FORBIDDEN) | |
521 | defer.returnValue(None) | |
505 | 522 | defer.returnValue(user_id) |
506 | ||
507 | def _ldap_simple_bind(self, server, localpart, password): | |
508 | """ Attempt a simple bind with the credentials | |
509 | given by the user against the LDAP server. | |
510 | ||
511 | Returns True, LDAP3Connection | |
512 | if the bind was successful | |
513 | Returns False, None | |
514 | if an error occured | |
515 | """ | |
516 | ||
517 | try: | |
518 | # bind with the the local users ldap credentials | |
519 | bind_dn = "{prop}={value},{base}".format( | |
520 | prop=self.ldap_attributes['uid'], | |
521 | value=localpart, | |
522 | base=self.ldap_base | |
523 | ) | |
524 | conn = ldap3.Connection(server, bind_dn, password) | |
525 | logger.debug( | |
526 | "Established LDAP connection in simple bind mode: %s", | |
527 | conn | |
528 | ) | |
529 | ||
530 | if self.ldap_start_tls: | |
531 | conn.start_tls() | |
532 | logger.debug( | |
533 | "Upgraded LDAP connection in simple bind mode through StartTLS: %s", | |
534 | conn | |
535 | ) | |
536 | ||
537 | if conn.bind(): | |
538 | # GOOD: bind okay | |
539 | logger.debug("LDAP Bind successful in simple bind mode.") | |
540 | return True, conn | |
541 | ||
542 | # BAD: bind failed | |
543 | logger.info( | |
544 | "Binding against LDAP failed for '%s' failed: %s", | |
545 | localpart, conn.result['description'] | |
546 | ) | |
547 | conn.unbind() | |
548 | return False, None | |
549 | ||
550 | except ldap3.core.exceptions.LDAPException as e: | |
551 | logger.warn("Error during LDAP authentication: %s", e) | |
552 | return False, None | |
553 | ||
554 | def _ldap_authenticated_search(self, server, localpart, password): | |
555 | """ Attempt to login with the preconfigured bind_dn | |
556 | and then continue searching and filtering within | |
557 | the base_dn | |
558 | ||
559 | Returns (True, LDAP3Connection) | |
560 | if a single matching DN within the base was found | |
561 | that matched the filter expression, and with which | |
562 | a successful bind was achieved | |
563 | ||
564 | The LDAP3Connection returned is the instance that was used to | |
565 | verify the password not the one using the configured bind_dn. | |
566 | Returns (False, None) | |
567 | if an error occured | |
568 | """ | |
569 | ||
570 | try: | |
571 | conn = ldap3.Connection( | |
572 | server, | |
573 | self.ldap_bind_dn, | |
574 | self.ldap_bind_password | |
575 | ) | |
576 | logger.debug( | |
577 | "Established LDAP connection in search mode: %s", | |
578 | conn | |
579 | ) | |
580 | ||
581 | if self.ldap_start_tls: | |
582 | conn.start_tls() | |
583 | logger.debug( | |
584 | "Upgraded LDAP connection in search mode through StartTLS: %s", | |
585 | conn | |
586 | ) | |
587 | ||
588 | if not conn.bind(): | |
589 | logger.warn( | |
590 | "Binding against LDAP with `bind_dn` failed: %s", | |
591 | conn.result['description'] | |
592 | ) | |
593 | conn.unbind() | |
594 | return False, None | |
595 | ||
596 | # construct search_filter like (uid=localpart) | |
597 | query = "({prop}={value})".format( | |
598 | prop=self.ldap_attributes['uid'], | |
599 | value=localpart | |
600 | ) | |
601 | if self.ldap_filter: | |
602 | # combine with the AND expression | |
603 | query = "(&{query}{filter})".format( | |
604 | query=query, | |
605 | filter=self.ldap_filter | |
606 | ) | |
607 | logger.debug( | |
608 | "LDAP search filter: %s", | |
609 | query | |
610 | ) | |
611 | conn.search( | |
612 | search_base=self.ldap_base, | |
613 | search_filter=query | |
614 | ) | |
615 | ||
616 | if len(conn.response) == 1: | |
617 | # GOOD: found exactly one result | |
618 | user_dn = conn.response[0]['dn'] | |
619 | logger.debug('LDAP search found dn: %s', user_dn) | |
620 | ||
621 | # unbind and simple bind with user_dn to verify the password | |
622 | # Note: do not use rebind(), for some reason it did not verify | |
623 | # the password for me! | |
624 | conn.unbind() | |
625 | return self._ldap_simple_bind(server, localpart, password) | |
626 | else: | |
627 | # BAD: found 0 or > 1 results, abort! | |
628 | if len(conn.response) == 0: | |
629 | logger.info( | |
630 | "LDAP search returned no results for '%s'", | |
631 | localpart | |
632 | ) | |
633 | else: | |
634 | logger.info( | |
635 | "LDAP search returned too many (%s) results for '%s'", | |
636 | len(conn.response), localpart | |
637 | ) | |
638 | conn.unbind() | |
639 | return False, None | |
640 | ||
641 | except ldap3.core.exceptions.LDAPException as e: | |
642 | logger.warn("Error during LDAP authentication: %s", e) | |
643 | return False, None | |
644 | ||
645 | @defer.inlineCallbacks | |
646 | def _check_ldap_password(self, user_id, password): | |
647 | """ Attempt to authenticate a user against an LDAP Server | |
648 | and register an account if none exists. | |
649 | ||
650 | Returns: | |
651 | True if authentication against LDAP was successful | |
652 | """ | |
653 | ||
654 | if not ldap3 or not self.ldap_enabled: | |
655 | defer.returnValue(False) | |
656 | ||
657 | localpart = UserID.from_string(user_id).localpart | |
658 | ||
659 | try: | |
660 | server = ldap3.Server(self.ldap_uri) | |
661 | logger.debug( | |
662 | "Attempting LDAP connection with %s", | |
663 | self.ldap_uri | |
664 | ) | |
665 | ||
666 | if self.ldap_mode == LDAPMode.SIMPLE: | |
667 | result, conn = self._ldap_simple_bind( | |
668 | server=server, localpart=localpart, password=password | |
669 | ) | |
670 | logger.debug( | |
671 | 'LDAP authentication method simple bind returned: %s (conn: %s)', | |
672 | result, | |
673 | conn | |
674 | ) | |
675 | if not result: | |
676 | defer.returnValue(False) | |
677 | elif self.ldap_mode == LDAPMode.SEARCH: | |
678 | result, conn = self._ldap_authenticated_search( | |
679 | server=server, localpart=localpart, password=password | |
680 | ) | |
681 | logger.debug( | |
682 | 'LDAP auth method authenticated search returned: %s (conn: %s)', | |
683 | result, | |
684 | conn | |
685 | ) | |
686 | if not result: | |
687 | defer.returnValue(False) | |
688 | else: | |
689 | raise RuntimeError( | |
690 | 'Invalid LDAP mode specified: {mode}'.format( | |
691 | mode=self.ldap_mode | |
692 | ) | |
693 | ) | |
694 | ||
695 | try: | |
696 | logger.info( | |
697 | "User authenticated against LDAP server: %s", | |
698 | conn | |
699 | ) | |
700 | except NameError: | |
701 | logger.warn("Authentication method yielded no LDAP connection, aborting!") | |
702 | defer.returnValue(False) | |
703 | ||
704 | # check if user with user_id exists | |
705 | if (yield self.check_user_exists(user_id)): | |
706 | # exists, authentication complete | |
707 | conn.unbind() | |
708 | defer.returnValue(True) | |
709 | ||
710 | else: | |
711 | # does not exist, fetch metadata for account creation from | |
712 | # existing ldap connection | |
713 | query = "({prop}={value})".format( | |
714 | prop=self.ldap_attributes['uid'], | |
715 | value=localpart | |
716 | ) | |
717 | ||
718 | if self.ldap_mode == LDAPMode.SEARCH and self.ldap_filter: | |
719 | query = "(&{filter}{user_filter})".format( | |
720 | filter=query, | |
721 | user_filter=self.ldap_filter | |
722 | ) | |
723 | logger.debug( | |
724 | "ldap registration filter: %s", | |
725 | query | |
726 | ) | |
727 | ||
728 | conn.search( | |
729 | search_base=self.ldap_base, | |
730 | search_filter=query, | |
731 | attributes=[ | |
732 | self.ldap_attributes['name'], | |
733 | self.ldap_attributes['mail'] | |
734 | ] | |
735 | ) | |
736 | ||
737 | if len(conn.response) == 1: | |
738 | attrs = conn.response[0]['attributes'] | |
739 | mail = attrs[self.ldap_attributes['mail']][0] | |
740 | name = attrs[self.ldap_attributes['name']][0] | |
741 | ||
742 | # create account | |
743 | registration_handler = self.hs.get_handlers().registration_handler | |
744 | user_id, access_token = ( | |
745 | yield registration_handler.register(localpart=localpart) | |
746 | ) | |
747 | ||
748 | # TODO: bind email, set displayname with data from ldap directory | |
749 | ||
750 | logger.info( | |
751 | "Registration based on LDAP data was successful: %d: %s (%s, %)", | |
752 | user_id, | |
753 | localpart, | |
754 | name, | |
755 | ||
756 | ) | |
757 | ||
758 | defer.returnValue(True) | |
759 | else: | |
760 | if len(conn.response) == 0: | |
761 | logger.warn("LDAP registration failed, no result.") | |
762 | else: | |
763 | logger.warn( | |
764 | "LDAP registration failed, too many results (%s)", | |
765 | len(conn.response) | |
766 | ) | |
767 | ||
768 | defer.returnValue(False) | |
769 | ||
770 | defer.returnValue(False) | |
771 | ||
772 | except ldap3.core.exceptions.LDAPException as e: | |
773 | logger.warn("Error during ldap authentication: %s", e) | |
774 | defer.returnValue(False) | |
775 | 523 | |
776 | 524 | @defer.inlineCallbacks |
777 | 525 | def issue_access_token(self, user_id, device_id=None): |
862 | 610 | |
863 | 611 | @defer.inlineCallbacks |
864 | 612 | def add_threepid(self, user_id, medium, address, validated_at): |
613 | # 'Canonicalise' email addresses down to lower case. | |
614 | # We're now moving towards the Home Server being the entity that | |
615 | # is responsible for validating threepids used for resetting passwords | |
616 | # on accounts, so in future Synapse will gain knowledge of specific | |
617 | # types (mediums) of threepid. For now, we still use the existing | |
618 | # infrastructure, but this is the start of synapse gaining knowledge | |
619 | # of specific types of threepid (and fixes the fact that checking | |
620 | # for the presence of an email address during password reset was | |
621 | # case sensitive). | |
622 | if medium == 'email': | |
623 | address = address.lower() | |
624 | ||
865 | 625 | yield self.store.user_add_threepid( |
866 | 626 | user_id, medium, address, validated_at, |
867 | 627 | self.hs.get_clock().time_msec() |
910 | 670 | stored_hash.encode('utf-8')) == stored_hash |
911 | 671 | else: |
912 | 672 | return False |
673 | ||
674 | ||
675 | class _AccountHandler(object): | |
676 | """A proxy object that gets passed to password auth providers so they | |
677 | can register new users etc if necessary. | |
678 | """ | |
679 | def __init__(self, hs, check_user_exists): | |
680 | self.hs = hs | |
681 | ||
682 | self._check_user_exists = check_user_exists | |
683 | ||
684 | def check_user_exists(self, user_id): | |
685 | """Check if user exissts. | |
686 | ||
687 | Returns: | |
688 | Deferred(bool) | |
689 | """ | |
690 | return self._check_user_exists(user_id) | |
691 | ||
692 | def register(self, localpart): | |
693 | """Registers a new user with given localpart | |
694 | ||
695 | Returns: | |
696 | Deferred: a 2-tuple of (user_id, access_token) | |
697 | """ | |
698 | reg = self.hs.get_handlers().registration_handler | |
699 | return reg.register(localpart=localpart) |
287 | 287 | result = yield as_handler.query_room_alias_exists(room_alias) |
288 | 288 | defer.returnValue(result) |
289 | 289 | |
290 | @defer.inlineCallbacks | |
291 | 290 | def can_modify_alias(self, alias, user_id=None): |
292 | 291 | # Any application service "interested" in an alias they are regexing on |
293 | 292 | # can modify the alias. |
294 | 293 | # Users can only modify the alias if ALL the interested services have |
295 | 294 | # non-exclusive locks on the alias (or there are no interested services) |
296 | services = yield self.store.get_app_services() | |
295 | services = self.store.get_app_services() | |
297 | 296 | interested_services = [ |
298 | 297 | s for s in services if s.is_interested_in_alias(alias.to_string()) |
299 | 298 | ] |
301 | 300 | for service in interested_services: |
302 | 301 | if user_id == service.sender: |
303 | 302 | # this user IS the app service so they can do whatever they like |
304 | defer.returnValue(True) | |
305 | return | |
303 | return defer.succeed(True) | |
306 | 304 | elif service.is_exclusive_alias(alias.to_string()): |
307 | 305 | # another service has an exclusive lock on this alias. |
308 | defer.returnValue(False) | |
309 | return | |
306 | return defer.succeed(False) | |
310 | 307 | # either no interested services, or no service with an exclusive lock |
311 | defer.returnValue(True) | |
308 | return defer.succeed(True) | |
312 | 309 | |
313 | 310 | @defer.inlineCallbacks |
314 | 311 | def _user_can_delete_alias(self, alias, user_id): |
15 | 15 | from twisted.internet import defer |
16 | 16 | |
17 | 17 | from synapse.api.constants import EventTypes, Membership |
18 | from synapse.api.errors import AuthError, Codes, SynapseError | |
18 | from synapse.api.errors import AuthError, Codes, SynapseError, LimitExceededError | |
19 | 19 | from synapse.crypto.event_signing import add_hashes_and_signatures |
20 | 20 | from synapse.events.utils import serialize_event |
21 | 21 | from synapse.events.validator import EventValidator |
81 | 81 | room_token = pagin_config.from_token.room_key |
82 | 82 | else: |
83 | 83 | pagin_config.from_token = ( |
84 | yield self.hs.get_event_sources().get_current_token( | |
85 | direction='b' | |
84 | yield self.hs.get_event_sources().get_current_token_for_room( | |
85 | room_id=room_id | |
86 | 86 | ) |
87 | 87 | ) |
88 | 88 | room_token = pagin_config.from_token.room_key |
236 | 236 | raise SynapseError( |
237 | 237 | 500, |
238 | 238 | "Tried to send member event through non-member codepath" |
239 | ) | |
240 | ||
241 | # We check here if we are currently being rate limited, so that we | |
242 | # don't do unnecessary work. We check again just before we actually | |
243 | # send the event. | |
244 | time_now = self.clock.time() | |
245 | allowed, time_allowed = self.ratelimiter.send_message( | |
246 | event.sender, time_now, | |
247 | msg_rate_hz=self.hs.config.rc_messages_per_second, | |
248 | burst_count=self.hs.config.rc_message_burst_count, | |
249 | update=False, | |
250 | ) | |
251 | if not allowed: | |
252 | raise LimitExceededError( | |
253 | retry_after_ms=int(1000 * (time_allowed - time_now)), | |
239 | 254 | ) |
240 | 255 | |
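The new pre-check calls the ratelimiter with `update=False`, so a request that will be rejected is dropped before any expensive event-building work, while the real (state-updating) check still runs just before the event is actually sent. A toy per-sender bucket with the same `update` flag (the class and its parameters are invented for illustration, not Synapse's actual `Ratelimiter`):

```python
class Ratelimiter(object):
    """Toy per-sender token bucket with a peek-only mode (update=False)."""
    def __init__(self, rate_hz, burst):
        self.rate_hz = rate_hz          # tokens leak out at this rate
        self.burst = burst              # max tokens held before refusing
        self.buckets = {}               # sender -> (token_count, last_time)

    def send_message(self, sender, now, update=True):
        tokens, last = self.buckets.get(sender, (0.0, now))
        # leak tokens at rate_hz since the last request
        tokens = max(0.0, tokens - (now - last) * self.rate_hz)
        allowed = tokens + 1 <= self.burst
        if update and allowed:
            self.buckets[sender] = (tokens + 1, now)
        # when refused, the time at which the request would be allowed
        time_allowed = now + (tokens + 1 - self.burst) / self.rate_hz
        return allowed, max(now, time_allowed)


rl = Ratelimiter(rate_hz=1.0, burst=2)
print(rl.send_message("@a:hs", 0.0, update=False))  # (True, 0.0) - peek only
print(rl.send_message("@a:hs", 0.0))                # (True, 0.0)
print(rl.send_message("@a:hs", 0.0))                # (True, 0.0)
print(rl.send_message("@a:hs", 0.0))                # (False, 1.0) - refused
```

Because the first call passes `update=False`, it costs the sender nothing: two real sends still fit in the burst afterwards.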
241 | 256 | user = UserID.from_string(event.sender) |
64 | 64 | defer.returnValue(result["displayname"]) |
65 | 65 | |
66 | 66 | @defer.inlineCallbacks |
67 | def set_displayname(self, target_user, requester, new_displayname): | |
67 | def set_displayname(self, target_user, requester, new_displayname, by_admin=False): | |
68 | 68 | """target_user is the user whose displayname is to be changed; |
69 | 69 | auth_user is the user attempting to make this change.""" |
70 | 70 | if not self.hs.is_mine(target_user): |
71 | 71 | raise SynapseError(400, "User is not hosted on this Home Server") |
72 | 72 | |
73 | if target_user != requester.user: | |
73 | if not by_admin and target_user != requester.user: | |
74 | 74 | raise AuthError(400, "Cannot set another user's displayname") |
75 | 75 | |
76 | 76 | if new_displayname == '': |
110 | 110 | defer.returnValue(result["avatar_url"]) |
111 | 111 | |
112 | 112 | @defer.inlineCallbacks |
113 | def set_avatar_url(self, target_user, requester, new_avatar_url): | |
113 | def set_avatar_url(self, target_user, requester, new_avatar_url, by_admin=False): | |
114 | 114 | """target_user is the user whose avatar_url is to be changed; |
115 | 115 | auth_user is the user attempting to make this change.""" |
116 | 116 | if not self.hs.is_mine(target_user): |
117 | 117 | raise SynapseError(400, "User is not hosted on this Home Server") |
118 | 118 | |
119 | if target_user != requester.user: | |
119 | if not by_admin and target_user != requester.user: | |
120 | 120 | raise AuthError(400, "Cannot set another user's avatar_url") |
121 | 121 | |
122 | 122 | yield self.store.set_profile_avatar_url( |
18 | 18 | |
19 | 19 | from twisted.internet import defer |
20 | 20 | |
21 | import synapse.types | |
22 | 21 | from synapse.api.errors import ( |
23 | 22 | AuthError, Codes, SynapseError, RegistrationError, InvalidCaptchaError |
24 | 23 | ) |
193 | 192 | def appservice_register(self, user_localpart, as_token): |
194 | 193 | user = UserID(user_localpart, self.hs.hostname) |
195 | 194 | user_id = user.to_string() |
196 | service = yield self.store.get_app_service_by_token(as_token) | |
195 | service = self.store.get_app_service_by_token(as_token) | |
197 | 196 | if not service: |
198 | 197 | raise AuthError(403, "Invalid application service token.") |
199 | 198 | if not service.is_interested_in_user(user_id): |
304 | 303 | # XXX: This should be a deferred list, shouldn't it? |
305 | 304 | yield identity_handler.bind_threepid(c, user_id) |
306 | 305 | |
307 | @defer.inlineCallbacks | |
308 | 306 | def check_user_id_not_appservice_exclusive(self, user_id, allowed_appservice=None): |
309 | 307 | # valid user IDs must not clash with any user ID namespaces claimed by |
310 | 308 | # application services. |
311 | services = yield self.store.get_app_services() | |
309 | services = self.store.get_app_services() | |
312 | 310 | interested_services = [ |
313 | 311 | s for s in services |
314 | 312 | if s.is_interested_in_user(user_id) |
370 | 368 | defer.returnValue(data) |
371 | 369 | |
372 | 370 | @defer.inlineCallbacks |
373 | def get_or_create_user(self, localpart, displayname, duration_in_ms, | |
371 | def get_or_create_user(self, requester, localpart, displayname, duration_in_ms, | |
374 | 372 | password_hash=None): |
375 | 373 | """Creates a new user if the user does not exist, |
376 | 374 | else revokes all previous access tokens and generates a new one. |
417 | 415 | if displayname is not None: |
418 | 416 | logger.info("setting user display name: %s -> %s", user_id, displayname) |
419 | 417 | profile_handler = self.hs.get_handlers().profile_handler |
420 | requester = synapse.types.create_requester(user) | |
421 | 418 | yield profile_handler.set_displayname( |
422 | user, requester, displayname | |
419 | user, requester, displayname, by_admin=True, | |
423 | 420 | ) |
424 | 421 | |
425 | 422 | defer.returnValue((user_id, token)) |
436 | 436 | logger.warn("Stream has topological part!!!! %r", from_key) |
437 | 437 | from_key = "s%s" % (from_token.stream,) |
438 | 438 | |
439 | app_service = yield self.store.get_app_service_by_user_id( | |
439 | app_service = self.store.get_app_service_by_user_id( | |
440 | 440 | user.to_string() |
441 | 441 | ) |
442 | 442 | if app_service: |
474 | 474 | |
475 | 475 | defer.returnValue((events, end_key)) |
476 | 476 | |
477 | def get_current_key(self, direction='f'): | |
478 | return self.store.get_room_events_max_id(direction) | |
477 | def get_current_key(self): | |
478 | return self.store.get_room_events_max_id() | |
479 | ||
480 | def get_current_key_for_room(self, room_id): | |
481 | return self.store.get_room_events_max_id(room_id) | |
479 | 482 | |
480 | 483 | @defer.inlineCallbacks |
481 | 484 | def get_pagination_rows(self, user, config, key): |
787 | 787 | |
788 | 788 | assert since_token |
789 | 789 | |
790 | app_service = yield self.store.get_app_service_by_user_id(user_id) | |
790 | app_service = self.store.get_app_service_by_user_id(user_id) | |
791 | 791 | if app_service: |
792 | 792 | rooms = yield self.store.get_app_service_rooms(app_service) |
793 | 793 | joined_room_ids = set(r.room_id for r in rooms) |
87 | 87 | continue |
88 | 88 | |
89 | 89 | until = self._member_typing_until.get(member, None) |
90 | if not until or until < now: | |
90 | if not until or until <= now: | |
91 | 91 | logger.info("Timing out typing for: %s", member.user_id) |
92 | 92 | preserve_fn(self._stopped_typing)(member) |
93 | 93 | continue |
96 | 96 | # user. |
97 | 97 | if self.hs.is_mine_id(member.user_id): |
98 | 98 | last_fed_poke = self._member_last_federation_poke.get(member, None) |
99 | if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL < now: | |
99 | if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL <= now: | |
100 | 100 | preserve_fn(self._push_remote)( |
101 | 101 | member=member, |
102 | 102 | typing=True |
103 | 103 | ) |
104 | ||
105 | # Add a paranoia timer to ensure that we always have a timer for | |
106 | # each person typing. | |
107 | self.wheel_timer.insert( | |
108 | now=now, | |
109 | obj=member, | |
110 | then=now + 60 * 1000, | |
111 | ) | |
104 | 112 | |
105 | 113 | def is_typing(self, member): |
106 | 114 | return member.user_id in self._room_typing.get(member.room_id, []) |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | # Because otherwise 'resource' collides with synapse.metrics.resource | |
16 | from __future__ import absolute_import | |
17 | ||
18 | 15 | import logging |
19 | from resource import getrusage, RUSAGE_SELF | |
20 | 16 | import functools |
21 | import os | |
22 | import stat | |
23 | 17 | import time |
24 | 18 | import gc |
25 | 19 | |
29 | 23 | CounterMetric, CallbackMetric, DistributionMetric, CacheMetric, |
30 | 24 | MemoryUsageMetric, |
31 | 25 | ) |
26 | from .process_collector import register_process_collector | |
32 | 27 | |
33 | 28 | |
34 | 29 | logger = logging.getLogger(__name__) |
35 | 30 | |
36 | 31 | |
37 | 32 | all_metrics = [] |
33 | all_collectors = [] | |
38 | 34 | |
39 | 35 | |
40 | 36 | class Metrics(object): |
44 | 40 | |
45 | 41 | def __init__(self, name): |
46 | 42 | self.name_prefix = name |
43 | ||
44 | def make_subspace(self, name): | |
45 | return Metrics("%s_%s" % (self.name_prefix, name)) | |
46 | ||
47 | def register_collector(self, func): | |
48 | all_collectors.append(func) | |
47 | 49 | |
48 | 50 | def _register(self, metric_class, name, *args, **kwargs): |
49 | 51 | full_name = "%s_%s" % (self.name_prefix, name) |
93 | 95 | def render_all(): |
94 | 96 | strs = [] |
95 | 97 | |
96 | # TODO(paul): Internal hack | |
97 | update_resource_metrics() | |
98 | for collector in all_collectors: | |
99 | collector() | |
98 | 100 | |
99 | 101 | for metric in all_metrics: |
100 | 102 | try: |
108 | 110 | return "\n".join(strs) |
109 | 111 | |
110 | 112 | |
111 | # Now register some standard process-wide state metrics, to give indications of | |
112 | # process resource usage | |
113 | ||
114 | rusage = None | |
115 | ||
116 | ||
117 | def update_resource_metrics(): | |
118 | global rusage | |
119 | rusage = getrusage(RUSAGE_SELF) | |
120 | ||
121 | resource_metrics = get_metrics_for("process.resource") | |
122 | ||
123 | # msecs | |
124 | resource_metrics.register_callback("utime", lambda: rusage.ru_utime * 1000) | |
125 | resource_metrics.register_callback("stime", lambda: rusage.ru_stime * 1000) | |
126 | ||
127 | # kilobytes | |
128 | resource_metrics.register_callback("maxrss", lambda: rusage.ru_maxrss * 1024) | |
129 | ||
130 | TYPES = { | |
131 | stat.S_IFSOCK: "SOCK", | |
132 | stat.S_IFLNK: "LNK", | |
133 | stat.S_IFREG: "REG", | |
134 | stat.S_IFBLK: "BLK", | |
135 | stat.S_IFDIR: "DIR", | |
136 | stat.S_IFCHR: "CHR", | |
137 | stat.S_IFIFO: "FIFO", | |
138 | } | |
139 | ||
140 | ||
141 | def _process_fds(): | |
142 | counts = {(k,): 0 for k in TYPES.values()} | |
143 | counts[("other",)] = 0 | |
144 | ||
145 | # Not every OS will have a /proc/self/fd directory | |
146 | if not os.path.exists("/proc/self/fd"): | |
147 | return counts | |
148 | ||
149 | for fd in os.listdir("/proc/self/fd"): | |
150 | try: | |
151 | s = os.stat("/proc/self/fd/%s" % (fd)) | |
152 | fmt = stat.S_IFMT(s.st_mode) | |
153 | if fmt in TYPES: | |
154 | t = TYPES[fmt] | |
155 | else: | |
156 | t = "other" | |
157 | ||
158 | counts[(t,)] += 1 | |
159 | except OSError: | |
160 | # the dirh itself used by listdir() is usually missing by now | |
161 | pass | |
162 | ||
163 | return counts | |
164 | ||
165 | get_metrics_for("process").register_callback("fds", _process_fds, labels=["type"]) | |
166 | ||
167 | 113 | reactor_metrics = get_metrics_for("reactor") |
168 | 114 | tick_time = reactor_metrics.register_distribution("tick_time") |
169 | 115 | pending_calls_metric = reactor_metrics.register_distribution("pending_calls") |
174 | 120 | reactor_metrics.register_callback( |
175 | 121 | "gc_counts", lambda: {(i,): v for i, v in enumerate(gc.get_count())}, labels=["gen"] |
176 | 122 | ) |
123 | ||
124 | register_process_collector(get_metrics_for("process")) | |
177 | 125 | |
178 | 126 | |
179 | 127 | def runUntilCurrentTimer(func): |
97 | 97 | value = self.callback() |
98 | 98 | |
99 | 99 | if self.is_scalar(): |
100 | return ["%s %d" % (self.name, value)] | |
100 | return ["%s %.12g" % (self.name, value)] | |
101 | 101 | |
102 | return ["%s%s %d" % (self.name, self._render_key(k), value[k]) | |
102 | return ["%s%s %.12g" % (self.name, self._render_key(k), value[k]) | |
103 | 103 | for k in sorted(value.keys())] |
104 | 104 | |
105 | 105 |
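The switch from `%d` to `%.12g` matters because the standardised process metrics (CPU seconds, start time, and so on) are now floats: `%d` would silently truncate the fractional part, while `%g` renders integers cleanly and floats exactly enough for Prometheus. For example:

```python
utime_seconds = 1.73  # e.g. ticks / TICKS_PER_SEC

print("%d" % utime_seconds)     # 1     -- fractional part silently lost
print("%.12g" % utime_seconds)  # 1.73
print("%.12g" % 12345)          # 12345 -- integer values still render cleanly
```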
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2015, 2016 OpenMarket Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | # Because otherwise 'resource' collides with synapse.metrics.resource | |
16 | from __future__ import absolute_import | |
17 | ||
18 | import os | |
19 | import stat | |
20 | from resource import getrusage, RUSAGE_SELF | |
21 | ||
22 | ||
23 | TICKS_PER_SEC = 100 | |
24 | BYTES_PER_PAGE = 4096 | |
25 | ||
26 | HAVE_PROC_STAT = os.path.exists("/proc/stat") | |
27 | HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat") | |
28 | HAVE_PROC_SELF_LIMITS = os.path.exists("/proc/self/limits") | |
29 | HAVE_PROC_SELF_FD = os.path.exists("/proc/self/fd") | |
30 | ||
31 | TYPES = { | |
32 | stat.S_IFSOCK: "SOCK", | |
33 | stat.S_IFLNK: "LNK", | |
34 | stat.S_IFREG: "REG", | |
35 | stat.S_IFBLK: "BLK", | |
36 | stat.S_IFDIR: "DIR", | |
37 | stat.S_IFCHR: "CHR", | |
38 | stat.S_IFIFO: "FIFO", | |
39 | } | |
40 | ||
41 | # Field indexes from /proc/self/stat, taken from the proc(5) manpage | |
42 | STAT_FIELDS = { | |
43 | "utime": 14, | |
44 | "stime": 15, | |
45 | "starttime": 22, | |
46 | "vsize": 23, | |
47 | "rss": 24, | |
48 | } | |
49 | ||
50 | ||
51 | rusage = None | |
52 | stats = {} | |
53 | fd_counts = None | |
54 | ||
55 | # In order to report process_start_time_seconds we need to know the | |
56 | # machine's boot time, because the value in /proc/self/stat is relative to | |
57 | # this | |
58 | boot_time = None | |
59 | if HAVE_PROC_STAT: | |
60 | with open("/proc/stat") as _procstat: | |
61 | for line in _procstat: | |
62 | if line.startswith("btime "): | |
63 | boot_time = int(line.split()[1]) | |
64 | ||
65 | ||
66 | def update_resource_metrics(): | |
67 | global rusage | |
68 | rusage = getrusage(RUSAGE_SELF) | |
69 | ||
70 | if HAVE_PROC_SELF_STAT: | |
71 | global stats | |
72 | with open("/proc/self/stat") as s: | |
73 | line = s.read() | |
74 | # line is of the form "PID (command) <remaining stat fields>" | 
75 | raw_stats = line.split(") ", 1)[1].split(" ") | |
76 | ||
77 | for (name, index) in STAT_FIELDS.iteritems(): | |
78 | # subtract 3 from the index, because proc(5) is 1-based, and | 
79 | # we've already stripped the first two fields (PID and COMMAND) above | 
80 | stats[name] = int(raw_stats[index - 3]) | |
81 | ||
82 | global fd_counts | |
83 | fd_counts = _process_fds() | |
84 | ||
85 | ||
86 | def _process_fds(): | |
87 | counts = {(k,): 0 for k in TYPES.values()} | |
88 | counts[("other",)] = 0 | |
89 | ||
90 | # Not every OS will have a /proc/self/fd directory | |
91 | if not HAVE_PROC_SELF_FD: | |
92 | return counts | |
93 | ||
94 | for fd in os.listdir("/proc/self/fd"): | |
95 | try: | |
96 | s = os.stat("/proc/self/fd/%s" % (fd,)) | 
97 | fmt = stat.S_IFMT(s.st_mode) | |
98 | if fmt in TYPES: | |
99 | t = TYPES[fmt] | |
100 | else: | |
101 | t = "other" | |
102 | ||
103 | counts[(t,)] += 1 | |
104 | except OSError: | |
105 | # the dir handle used by listdir() itself is usually gone by now | 
106 | pass | |
107 | ||
108 | return counts | |
109 | ||
110 | ||
111 | def register_process_collector(process_metrics): | |
112 | # Legacy synapse-invented metric names | |
113 | ||
114 | resource_metrics = process_metrics.make_subspace("resource") | |
115 | ||
116 | resource_metrics.register_collector(update_resource_metrics) | |
117 | ||
118 | # msecs | |
119 | resource_metrics.register_callback("utime", lambda: rusage.ru_utime * 1000) | |
120 | resource_metrics.register_callback("stime", lambda: rusage.ru_stime * 1000) | |
121 | ||
122 | # kilobytes | |
123 | resource_metrics.register_callback("maxrss", lambda: rusage.ru_maxrss * 1024) | |
124 | ||
125 | process_metrics.register_callback("fds", _process_fds, labels=["type"]) | |
126 | ||
127 | # New prometheus-standard metric names | |
128 | ||
129 | if HAVE_PROC_SELF_STAT: | |
130 | process_metrics.register_callback( | |
131 | "cpu_user_seconds_total", | |
132 | lambda: float(stats["utime"]) / TICKS_PER_SEC | |
133 | ) | |
134 | process_metrics.register_callback( | |
135 | "cpu_system_seconds_total", | |
136 | lambda: float(stats["stime"]) / TICKS_PER_SEC | |
137 | ) | |
138 | process_metrics.register_callback( | |
139 | "cpu_seconds_total", | |
140 | lambda: (float(stats["utime"] + stats["stime"])) / TICKS_PER_SEC | |
141 | ) | |
142 | ||
143 | process_metrics.register_callback( | |
144 | "virtual_memory_bytes", | |
145 | lambda: int(stats["vsize"]) | |
146 | ) | |
147 | process_metrics.register_callback( | |
148 | "resident_memory_bytes", | |
149 | lambda: int(stats["rss"]) * BYTES_PER_PAGE | |
150 | ) | |
151 | ||
152 | process_metrics.register_callback( | |
153 | "start_time_seconds", | |
154 | lambda: boot_time + int(stats["starttime"]) / TICKS_PER_SEC | |
155 | ) | |
156 | ||
157 | if HAVE_PROC_SELF_FD: | |
158 | process_metrics.register_callback( | |
159 | "open_fds", | |
160 | lambda: sum(fd_counts.values()) | |
161 | ) | |
162 | ||
163 | if HAVE_PROC_SELF_LIMITS: | |
164 | def _get_max_fds(): | |
165 | with open("/proc/self/limits") as limits: | |
166 | for line in limits: | |
167 | if not line.startswith("Max open files "): | |
168 | continue | |
169 | # Line is Max open files $SOFT $HARD | |
170 | return int(line.split()[3]) | |
171 | return None | |
172 | ||
173 | process_metrics.register_callback( | |
174 | "max_fds", | |
175 | lambda: _get_max_fds() | |
176 | ) |
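The `index - 3` arithmetic in `update_resource_metrics` can be checked against a synthetic stat line. This sketch reproduces the parsing with made-up field values, assuming only the proc(5) 1-based field numbers already listed in `STAT_FIELDS`:

```python
# proc(5) numbers fields from 1; splitting on ") " discards fields 1 (PID)
# and 2 (comm), so a 1-based field number N lands at raw index N - 3.
STAT_FIELDS = {"utime": 14, "stime": 15, "starttime": 22, "vsize": 23, "rss": 24}

def parse_stat_line(line):
    raw_stats = line.split(") ", 1)[1].split(" ")
    return {name: int(raw_stats[index - 3]) for name, index in STAT_FIELDS.items()}

# Synthetic example line (values are arbitrary, not from a real process)
sample = ("1234 (synapse) S 1 1234 1234 0 -1 4194304 0 0 0 0 "
          "70 30 0 0 20 0 1 0 500 1048576 256")
stats = parse_stat_line(sample)
# stats["utime"] == 70, stats["rss"] == 256
```

Splitting on `") "` rather than plain whitespace is deliberate: the comm field can itself contain spaces, so everything up to the closing parenthesis must be discarded as a unit.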
149 | 149 | |
150 | 150 | soonest_due_at = None |
151 | 151 | |
152 | if not unprocessed: | |
153 | yield self.save_last_stream_ordering_and_success(self.max_stream_ordering) | |
154 | return | |
155 | ||
152 | 156 | for push_action in unprocessed: |
153 | 157 | received_at = push_action['received_ts'] |
154 | 158 | if received_at is None: |
327 | 327 | return messagevars |
328 | 328 | |
329 | 329 | @defer.inlineCallbacks |
330 | def make_summary_text(self, notifs_by_room, state_by_room, | |
330 | def make_summary_text(self, notifs_by_room, room_state_ids, | |
331 | 331 | notif_events, user_id, reason): |
332 | 332 | if len(notifs_by_room) == 1: |
333 | 333 | # Only one room has new stuff |
337 | 337 | # want the generated-from-names one here otherwise we'll |
338 | 338 | # end up with, "new message from Bob in the Bob room" |
339 | 339 | room_name = yield calculate_room_name( |
340 | self.store, state_by_room[room_id], user_id, fallback_to_members=False | |
340 | self.store, room_state_ids[room_id], user_id, fallback_to_members=False | |
341 | 341 | ) |
342 | 342 | |
343 | my_member_event = state_by_room[room_id][("m.room.member", user_id)] | |
343 | my_member_event_id = room_state_ids[room_id][("m.room.member", user_id)] | |
344 | my_member_event = yield self.store.get_event(my_member_event_id) | |
344 | 345 | if my_member_event.content["membership"] == "invite": |
345 | inviter_member_event = state_by_room[room_id][ | |
346 | inviter_member_event_id = room_state_ids[room_id][ | |
346 | 347 | ("m.room.member", my_member_event.sender) |
347 | 348 | ] |
349 | inviter_member_event = yield self.store.get_event( | |
350 | inviter_member_event_id | |
351 | ) | |
348 | 352 | inviter_name = name_from_member_event(inviter_member_event) |
349 | 353 | |
350 | 354 | if room_name is None: |
363 | 367 | if len(notifs_by_room[room_id]) == 1: |
364 | 368 | # There is just the one notification, so give some detail |
365 | 369 | event = notif_events[notifs_by_room[room_id][0]["event_id"]] |
366 | if ("m.room.member", event.sender) in state_by_room[room_id]: | |
367 | state_event = state_by_room[room_id][("m.room.member", event.sender)] | |
370 | if ("m.room.member", event.sender) in room_state_ids[room_id]: | |
371 | state_event_id = room_state_ids[room_id][ | |
372 | ("m.room.member", event.sender) | |
373 | ] | |
374 | state_event = yield self.store.get_event(state_event_id) | |
368 | 375 | sender_name = name_from_member_event(state_event) |
369 | 376 | |
370 | 377 | if sender_name is not None and room_name is not None: |
394 | 401 | for n in notifs_by_room[room_id] |
395 | 402 | ])) |
396 | 403 | |
404 | member_events = yield self.store.get_events([ | |
405 | room_state_ids[room_id][("m.room.member", s)] | |
406 | for s in sender_ids | |
407 | ]) | |
408 | ||
397 | 409 | defer.returnValue(MESSAGES_FROM_PERSON % { |
398 | "person": descriptor_from_member_events([ | |
399 | state_by_room[room_id][("m.room.member", s)] | |
400 | for s in sender_ids | |
401 | ]), | |
410 | "person": descriptor_from_member_events(member_events.values()), | |
402 | 411 | "app": self.app_name, |
403 | 412 | }) |
404 | 413 | else: |
418 | 427 | for n in notifs_by_room[reason['room_id']] |
419 | 428 | ])) |
420 | 429 | |
430 | member_events = yield self.store.get_events([ | |
431 | room_state_ids[room_id][("m.room.member", s)] | |
432 | for s in sender_ids | |
433 | ]) | |
434 | ||
421 | 435 | defer.returnValue(MESSAGES_FROM_PERSON_AND_OTHERS % { |
422 | "person": descriptor_from_member_events([ | |
423 | state_by_room[reason['room_id']][("m.room.member", s)] | |
424 | for s in sender_ids | |
425 | ]), | |
436 | "person": descriptor_from_member_events(member_events.values()), | |
426 | 437 | "app": self.app_name, |
427 | 438 | }) |
428 | 439 |
16 | 16 | from synapse.http.server import request_handler, finish_request |
17 | 17 | from synapse.replication.pusher_resource import PusherResource |
18 | 18 | from synapse.replication.presence_resource import PresenceResource |
19 | from synapse.api.errors import SynapseError | |
19 | 20 | |
20 | 21 | from twisted.web.resource import Resource |
21 | 22 | from twisted.web.server import NOT_DONE_YET |
165 | 166 | def replicate(): |
166 | 167 | return self.replicate(request_streams, limit) |
167 | 168 | |
168 | result = yield self.notifier.wait_for_replication(replicate, timeout) | |
169 | writer = yield self.notifier.wait_for_replication(replicate, timeout) | |
170 | result = writer.finish() | |
169 | 171 | |
170 | 172 | for stream_name, stream_content in result.items(): |
171 | 173 | logger.info( |
184 | 186 | writer = _Writer() |
185 | 187 | current_token = yield self.current_replication_token() |
186 | 188 | logger.debug("Replicating up to %r", current_token) |
189 | ||
190 | if limit == 0: | |
191 | raise SynapseError(400, "Limit cannot be 0") | |
187 | 192 | |
188 | 193 | yield self.account_data(writer, current_token, limit, request_streams) |
189 | 194 | yield self.events(writer, current_token, limit, request_streams) |
199 | 204 | self.streams(writer, current_token, request_streams) |
200 | 205 | |
201 | 206 | logger.debug("Replicated %d rows", writer.total) |
202 | defer.returnValue(writer.finish()) | |
207 | defer.returnValue(writer) | |
203 | 208 | |
204 | 209 | def streams(self, writer, current_token, request_streams): |
205 | 210 | request_token = request_streams.get("streams") |
236 | 241 | request_events = current_token.events |
237 | 242 | if request_backfill is None: |
238 | 243 | request_backfill = current_token.backfill |
244 | ||
245 | no_new_tokens = ( | |
246 | request_events == current_token.events | |
247 | and request_backfill == current_token.backfill | |
248 | ) | |
249 | if no_new_tokens: | |
250 | return | |
251 | ||
239 | 252 | res = yield self.store.get_all_new_events( |
240 | 253 | request_backfill, request_events, |
241 | 254 | current_token.backfill, current_token.events, |
242 | 255 | limit |
243 | 256 | ) |
244 | writer.write_header_and_rows("events", res.new_forward_events, ( | |
245 | "position", "internal", "json", "state_group" | |
246 | )) | |
247 | writer.write_header_and_rows("backfill", res.new_backfill_events, ( | |
248 | "position", "internal", "json", "state_group" | |
249 | )) | |
257 | ||
258 | upto_events_token = _position_from_rows( | |
259 | res.new_forward_events, current_token.events | |
260 | ) | |
261 | ||
262 | upto_backfill_token = _position_from_rows( | |
263 | res.new_backfill_events, current_token.backfill | |
264 | ) | |
265 | ||
266 | if request_events != upto_events_token: | |
267 | writer.write_header_and_rows("events", res.new_forward_events, ( | |
268 | "position", "internal", "json", "state_group" | |
269 | ), position=upto_events_token) | |
270 | ||
271 | if request_backfill != upto_backfill_token: | |
272 | writer.write_header_and_rows("backfill", res.new_backfill_events, ( | |
273 | "position", "internal", "json", "state_group", | |
274 | ), position=upto_backfill_token) | |
275 | ||
250 | 276 | writer.write_header_and_rows( |
251 | 277 | "forward_ex_outliers", res.forward_ex_outliers, |
252 | ("position", "event_id", "state_group") | |
278 | ("position", "event_id", "state_group"), | |
253 | 279 | ) |
254 | 280 | writer.write_header_and_rows( |
255 | 281 | "backward_ex_outliers", res.backward_ex_outliers, |
256 | ("position", "event_id", "state_group") | |
282 | ("position", "event_id", "state_group"), | |
257 | 283 | ) |
258 | 284 | writer.write_header_and_rows( |
259 | "state_resets", res.state_resets, ("position",) | |
285 | "state_resets", res.state_resets, ("position",), | |
260 | 286 | ) |
261 | 287 | |
262 | 288 | @defer.inlineCallbacks |
265 | 291 | |
266 | 292 | request_presence = request_streams.get("presence") |
267 | 293 | |
268 | if request_presence is not None: | |
294 | if request_presence is not None and request_presence != current_position: | |
269 | 295 | presence_rows = yield self.presence_handler.get_all_presence_updates( |
270 | 296 | request_presence, current_position |
271 | 297 | ) |
298 | upto_token = _position_from_rows(presence_rows, current_position) | |
272 | 299 | writer.write_header_and_rows("presence", presence_rows, ( |
273 | 300 | "position", "user_id", "state", "last_active_ts", |
274 | 301 | "last_federation_update_ts", "last_user_sync_ts", |
275 | 302 | "status_msg", "currently_active", |
276 | )) | |
303 | ), position=upto_token) | |
277 | 304 | |
278 | 305 | @defer.inlineCallbacks |
279 | 306 | def typing(self, writer, current_token, request_streams): |
281 | 308 | |
282 | 309 | request_typing = request_streams.get("typing") |
283 | 310 | |
284 | if request_typing is not None: | |
311 | if request_typing is not None and request_typing != current_position: | |
285 | 312 | # If they have a higher token than current max, we can assume that |
286 | 313 | # they had been talking to a previous instance of the master. Since |
287 | 314 | # we reset the token on restart, the best (but hacky) thing we can |
292 | 319 | typing_rows = yield self.typing_handler.get_all_typing_updates( |
293 | 320 | request_typing, current_position |
294 | 321 | ) |
322 | upto_token = _position_from_rows(typing_rows, current_position) | |
295 | 323 | writer.write_header_and_rows("typing", typing_rows, ( |
296 | 324 | "position", "room_id", "typing" |
297 | )) | |
325 | ), position=upto_token) | |
298 | 326 | |
299 | 327 | @defer.inlineCallbacks |
300 | 328 | def receipts(self, writer, current_token, limit, request_streams): |
302 | 330 | |
303 | 331 | request_receipts = request_streams.get("receipts") |
304 | 332 | |
305 | if request_receipts is not None: | |
333 | if request_receipts is not None and request_receipts != current_position: | |
306 | 334 | receipts_rows = yield self.store.get_all_updated_receipts( |
307 | 335 | request_receipts, current_position, limit |
308 | 336 | ) |
337 | upto_token = _position_from_rows(receipts_rows, current_position) | |
309 | 338 | writer.write_header_and_rows("receipts", receipts_rows, ( |
310 | 339 | "position", "room_id", "receipt_type", "user_id", "event_id", "data" |
311 | )) | |
340 | ), position=upto_token) | |
312 | 341 | |
313 | 342 | @defer.inlineCallbacks |
314 | 343 | def account_data(self, writer, current_token, limit, request_streams): |
323 | 352 | user_account_data = current_position |
324 | 353 | if room_account_data is None: |
325 | 354 | room_account_data = current_position |
355 | ||
356 | no_new_tokens = ( | |
357 | user_account_data == current_position | |
358 | and room_account_data == current_position | |
359 | ) | |
360 | if no_new_tokens: | |
361 | return | |
362 | ||
326 | 363 | user_rows, room_rows = yield self.store.get_all_updated_account_data( |
327 | 364 | user_account_data, room_account_data, current_position, limit |
328 | 365 | ) |
366 | ||
367 | upto_users_token = _position_from_rows(user_rows, current_position) | |
368 | upto_rooms_token = _position_from_rows(room_rows, current_position) | |
369 | ||
329 | 370 | writer.write_header_and_rows("user_account_data", user_rows, ( |
330 | 371 | "position", "user_id", "type", "content" |
331 | )) | |
372 | ), position=upto_users_token) | |
332 | 373 | writer.write_header_and_rows("room_account_data", room_rows, ( |
333 | 374 | "position", "user_id", "room_id", "type", "content" |
334 | )) | |
375 | ), position=upto_rooms_token) | |
335 | 376 | |
336 | 377 | if tag_account_data is not None: |
337 | 378 | tag_rows = yield self.store.get_all_updated_tags( |
338 | 379 | tag_account_data, current_position, limit |
339 | 380 | ) |
381 | upto_tag_token = _position_from_rows(tag_rows, current_position) | |
340 | 382 | writer.write_header_and_rows("tag_account_data", tag_rows, ( |
341 | 383 | "position", "user_id", "room_id", "tags" |
342 | )) | |
384 | ), position=upto_tag_token) | |
343 | 385 | |
344 | 386 | @defer.inlineCallbacks |
345 | 387 | def push_rules(self, writer, current_token, limit, request_streams): |
347 | 389 | |
348 | 390 | push_rules = request_streams.get("push_rules") |
349 | 391 | |
350 | if push_rules is not None: | |
392 | if push_rules is not None and push_rules != current_position: | |
351 | 393 | rows = yield self.store.get_all_push_rule_updates( |
352 | 394 | push_rules, current_position, limit |
353 | 395 | ) |
396 | upto_token = _position_from_rows(rows, current_position) | |
354 | 397 | writer.write_header_and_rows("push_rules", rows, ( |
355 | 398 | "position", "event_stream_ordering", "user_id", "rule_id", "op", |
356 | 399 | "priority_class", "priority", "conditions", "actions" |
357 | )) | |
400 | ), position=upto_token) | |
358 | 401 | |
359 | 402 | @defer.inlineCallbacks |
360 | 403 | def pushers(self, writer, current_token, limit, request_streams): |
362 | 405 | |
363 | 406 | pushers = request_streams.get("pushers") |
364 | 407 | |
365 | if pushers is not None: | |
408 | if pushers is not None and pushers != current_position: | |
366 | 409 | updated, deleted = yield self.store.get_all_updated_pushers( |
367 | 410 | pushers, current_position, limit |
368 | 411 | ) |
412 | upto_token = _position_from_rows(updated, current_position) | |
369 | 413 | writer.write_header_and_rows("pushers", updated, ( |
370 | 414 | "position", "user_id", "access_token", "profile_tag", "kind", |
371 | 415 | "app_id", "app_display_name", "device_display_name", "pushkey", |
372 | 416 | "ts", "lang", "data" |
373 | )) | |
417 | ), position=upto_token) | |
374 | 418 | writer.write_header_and_rows("deleted_pushers", deleted, ( |
375 | 419 | "position", "user_id", "app_id", "pushkey" |
376 | )) | |
420 | ), position=upto_token) | |
377 | 421 | |
378 | 422 | @defer.inlineCallbacks |
379 | 423 | def caches(self, writer, current_token, limit, request_streams): |
381 | 425 | |
382 | 426 | caches = request_streams.get("caches") |
383 | 427 | |
384 | if caches is not None: | |
428 | if caches is not None and caches != current_position: | |
385 | 429 | updated_caches = yield self.store.get_all_updated_caches( |
386 | 430 | caches, current_position, limit |
387 | 431 | ) |
432 | upto_token = _position_from_rows(updated_caches, current_position) | |
388 | 433 | writer.write_header_and_rows("caches", updated_caches, ( |
389 | 434 | "position", "cache_func", "keys", "invalidation_ts" |
390 | )) | |
435 | ), position=upto_token) | |
391 | 436 | |
392 | 437 | @defer.inlineCallbacks |
393 | 438 | def to_device(self, writer, current_token, limit, request_streams): |
395 | 440 | |
396 | 441 | to_device = request_streams.get("to_device") |
397 | 442 | |
398 | if to_device is not None: | |
443 | if to_device is not None and to_device != current_position: | |
399 | 444 | to_device_rows = yield self.store.get_all_new_device_messages( |
400 | 445 | to_device, current_position, limit |
401 | 446 | ) |
447 | upto_token = _position_from_rows(to_device_rows, current_position) | |
402 | 448 | writer.write_header_and_rows("to_device", to_device_rows, ( |
403 | 449 | "position", "user_id", "device_id", "message_json" |
404 | )) | |
450 | ), position=upto_token) | |
405 | 451 | |
406 | 452 | @defer.inlineCallbacks |
407 | 453 | def public_rooms(self, writer, current_token, limit, request_streams): |
409 | 455 | |
410 | 456 | public_rooms = request_streams.get("public_rooms") |
411 | 457 | |
412 | if public_rooms is not None: | |
458 | if public_rooms is not None and public_rooms != current_position: | |
413 | 459 | public_rooms_rows = yield self.store.get_all_new_public_rooms( |
414 | 460 | public_rooms, current_position, limit |
415 | 461 | ) |
462 | upto_token = _position_from_rows(public_rooms_rows, current_position) | |
416 | 463 | writer.write_header_and_rows("public_rooms", public_rooms_rows, ( |
417 | 464 | "position", "room_id", "visibility" |
418 | )) | |
465 | ), position=upto_token) | |
419 | 466 | |
420 | 467 | |
421 | 468 | class _Writer(object): |
425 | 472 | self.total = 0 |
426 | 473 | |
427 | 474 | def write_header_and_rows(self, name, rows, fields, position=None): |
428 | if not rows: | |
429 | return | |
430 | ||
431 | 475 | if position is None: |
432 | position = rows[-1][0] | |
476 | if rows: | |
477 | position = rows[-1][0] | |
478 | else: | |
479 | return | |
433 | 480 | |
434 | 481 | self.streams[name] = { |
435 | 482 | "position": position if type(position) is int else str(position), |
438 | 485 | } |
439 | 486 | |
440 | 487 | self.total += len(rows) |
488 | ||
489 | def __nonzero__(self): | |
490 | return bool(self.total) | |
441 | 491 | |
442 | 492 | def finish(self): |
443 | 493 | return self.streams |
460 | 510 | |
461 | 511 | def __str__(self): |
462 | 512 | return "_".join(str(value) for value in self) |
513 | ||
514 | ||
515 | def _position_from_rows(rows, current_position): | |
516 | """Calculates a position to return for a stream. Ideally we want to return the | |
517 | position of the last row, as that will be the most correct. However, if there | |
518 | are no rows we fall back to using the current position to stop us from | |
519 | repeatedly hitting the storage layer unnecessarily thinking there are updates. | 
520 | (Not all advances of the token correspond to an actual update) | |
521 | ||
522 | We can't just always return the current position, as we often limit the | |
523 | number of rows we replicate, and so the stream may lag. The assumption is | |
524 | that if the storage layer returns no new rows then we are not lagging and | |
525 | we are at the `current_position`. | |
526 | """ | |
527 | if rows: | |
528 | return rows[-1][0] | |
529 | return current_position |
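The docstring's contract is small enough to check directly. A minimal sketch with synthetic rows, where (as in the real streams) each row is a tuple whose first element is its stream position:

```python
def _position_from_rows(rows, current_position):
    # Last row's position when we may be lagging behind due to the row
    # limit; otherwise we are caught up and can advance to the current
    # position without re-querying storage on every poll.
    if rows:
        return rows[-1][0]
    return current_position

# Limited batch: the returned token stops at the last replicated row
assert _position_from_rows([(5, "a"), (7, "b")], current_position=10) == 7
# No rows: we are not lagging, so return the stream's current position
assert _position_from_rows([], current_position=10) == 10
```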
21 | 21 | from .base import ClientV1RestServlet, client_path_patterns |
22 | 22 | import synapse.util.stringutils as stringutils |
23 | 23 | from synapse.http.servlet import parse_json_object_from_request |
24 | from synapse.types import create_requester | |
24 | 25 | |
25 | 26 | from synapse.util.async import run_on_reactor |
26 | 27 | |
390 | 391 | user_json = parse_json_object_from_request(request) |
391 | 392 | |
392 | 393 | access_token = get_access_token_from_request(request) |
393 | app_service = yield self.store.get_app_service_by_token( | |
394 | app_service = self.store.get_app_service_by_token( | |
394 | 395 | access_token |
395 | 396 | ) |
396 | 397 | if not app_service: |
397 | 398 | raise SynapseError(403, "Invalid application service token.") |
398 | 399 | |
400 | requester = create_requester(app_service.sender) | |
401 | ||
399 | 402 | logger.debug("creating user: %s", user_json) |
400 | ||
401 | response = yield self._do_create(user_json) | |
403 | response = yield self._do_create(requester, user_json) | |
402 | 404 | |
403 | 405 | defer.returnValue((200, response)) |
404 | 406 | |
406 | 408 | return 403, {} |
407 | 409 | |
408 | 410 | @defer.inlineCallbacks |
409 | def _do_create(self, user_json): | |
411 | def _do_create(self, requester, user_json): | |
410 | 412 | yield run_on_reactor() |
411 | 413 | |
412 | 414 | if "localpart" not in user_json: |
432 | 434 | |
433 | 435 | handler = self.handlers.registration_handler |
434 | 436 | user_id, token = yield handler.get_or_create_user( |
437 | requester=requester, | |
435 | 438 | localpart=localpart, |
436 | 439 | displayname=displayname, |
437 | 440 | duration_in_ms=(duration_seconds * 1000), |
76 | 76 | user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'> |
77 | 77 | <link rel="stylesheet" href="/_matrix/static/client/register/style.css"> |
78 | 78 | <script> |
79 | if (window.onAuthDone != undefined) { | |
79 | if (window.onAuthDone) { | |
80 | 80 | window.onAuthDone(); |
81 | } else if (window.opener && window.opener.postMessage) { | |
82 | window.opener.postMessage("authDone", "*"); | |
81 | 83 | } |
82 | 84 | </script> |
83 | 85 | </head> |
16 | 16 | |
17 | 17 | from twisted.internet import defer |
18 | 18 | |
19 | from synapse.api import constants, errors | |
19 | 20 | from synapse.http import servlet |
20 | 21 | from ._base import client_v2_patterns |
21 | 22 | |
57 | 58 | self.hs = hs |
58 | 59 | self.auth = hs.get_auth() |
59 | 60 | self.device_handler = hs.get_device_handler() |
61 | self.auth_handler = hs.get_auth_handler() | |
60 | 62 | |
61 | 63 | @defer.inlineCallbacks |
62 | 64 | def on_GET(self, request, device_id): |
69 | 71 | |
70 | 72 | @defer.inlineCallbacks |
71 | 73 | def on_DELETE(self, request, device_id): |
72 | # XXX: it's not completely obvious we want to expose this endpoint. | |
73 | # It allows the client to delete access tokens, which feels like a | |
74 | # thing which merits extra auth. But if we want to do the interactive- | |
75 | # auth dance, we should really make it possible to delete more than one | |
76 | # device at a time. | |
74 | try: | |
75 | body = servlet.parse_json_object_from_request(request) | |
76 | ||
77 | except errors.SynapseError as e: | |
78 | if e.errcode == errors.Codes.NOT_JSON: | |
79 | # deal with older clients which didn't pass a JSON dict | |
80 | # the same as those that pass an empty dict | |
81 | body = {} | |
82 | else: | |
83 | raise | |
84 | ||
85 | authed, result, params, _ = yield self.auth_handler.check_auth([ | |
86 | [constants.LoginType.PASSWORD], | |
87 | ], body, self.hs.get_ip_from_request(request)) | |
88 | ||
89 | if not authed: | |
90 | defer.returnValue((401, result)) | |
91 | ||
77 | 92 | requester = yield self.auth.get_user_by_req(request) |
78 | 93 | yield self.device_handler.delete_device( |
79 | 94 | requester.user.to_string(), |
14 | 14 | |
15 | 15 | from twisted.internet import defer |
16 | 16 | |
17 | from synapse.api.errors import AuthError, SynapseError | |
17 | from synapse.api.errors import AuthError, SynapseError, StoreError, Codes | |
18 | 18 | from synapse.http.servlet import RestServlet, parse_json_object_from_request |
19 | 19 | from synapse.types import UserID |
20 | 20 | |
44 | 44 | raise AuthError(403, "Cannot get filters for other users") |
45 | 45 | |
46 | 46 | if not self.hs.is_mine(target_user): |
47 | raise SynapseError(400, "Can only get filters for local users") | |
47 | raise AuthError(403, "Can only get filters for local users") | |
48 | 48 | |
49 | 49 | try: |
50 | 50 | filter_id = int(filter_id) |
58 | 58 | ) |
59 | 59 | |
60 | 60 | defer.returnValue((200, filter.get_filter_json())) |
61 | except KeyError: | |
62 | raise SynapseError(400, "No such filter") | |
61 | except (KeyError, StoreError): | |
62 | raise SynapseError(400, "No such filter", errcode=Codes.NOT_FOUND) | |
63 | 63 | |
64 | 64 | |
65 | 65 | class CreateFilterRestServlet(RestServlet): |
73 | 73 | |
74 | 74 | @defer.inlineCallbacks |
75 | 75 | def on_POST(self, request, user_id): |
76 | ||
76 | 77 | target_user = UserID.from_string(user_id) |
77 | 78 | requester = yield self.auth.get_user_by_req(request) |
78 | 79 | |
80 | 81 | raise AuthError(403, "Cannot create filters for other users") |
81 | 82 | |
82 | 83 | if not self.hs.is_mine(target_user): |
83 | raise SynapseError(400, "Can only create filters for local users") | |
84 | raise AuthError(403, "Can only create filters for local users") | |
84 | 85 | |
85 | 86 | content = parse_json_object_from_request(request) |
86 | ||
87 | 87 | filter_id = yield self.filtering.add_user_filter( |
88 | 88 | user_localpart=target_user.localpart, |
89 | 89 | user_filter=content, |
18 | 18 | from signedjson.sign import sign_json |
19 | 19 | from unpaddedbase64 import encode_base64 |
20 | 20 | from canonicaljson import encode_canonical_json |
21 | from hashlib import sha256 | |
22 | from OpenSSL import crypto | |
23 | 21 | import logging |
24 | 22 | |
25 | 23 | |
47 | 45 | "expired_ts": # integer posix timestamp when the key expired. |
48 | 46 | "key": # base64 encoded NACL verification key. |
49 | 47 | } |
50 | } | |
51 | "tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert. | |
48 | }, | |
49 | "tls_fingerprints": [ # Fingerprints of the TLS certs this server uses. | |
50 | { | |
51 | "sha256": # base64 encoded sha256 fingerprint of the X509 cert | |
52 | }, | |
53 | ], | |
52 | 54 | "signatures": { |
53 | 55 | "this.server.example.com": { |
54 | 56 | "algorithm:version": # NACL signature for this server |
89 | 91 | u"expired_ts": key.expired, |
90 | 92 | } |
91 | 93 | |
92 | x509_certificate_bytes = crypto.dump_certificate( | |
93 | crypto.FILETYPE_ASN1, | |
94 | self.config.tls_certificate | |
95 | ) | |
96 | ||
97 | sha256_fingerprint = sha256(x509_certificate_bytes).digest() | |
94 | tls_fingerprints = self.config.tls_fingerprints | |
98 | 95 | |
99 | 96 | json_object = { |
100 | 97 | u"valid_until_ts": self.valid_until_ts, |
101 | 98 | u"server_name": self.config.server_name, |
102 | 99 | u"verify_keys": verify_keys, |
103 | 100 | u"old_verify_keys": old_verify_keys, |
104 | u"tls_fingerprints": [{ | |
105 | u"sha256": encode_base64(sha256_fingerprint), | |
106 | }] | |
101 | u"tls_fingerprints": tls_fingerprints, | |
107 | 102 | } |
108 | 103 | for key in self.config.signing_key: |
109 | 104 | json_object = sign_json( |
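The inlined fingerprint computation removed above boils down to an unpadded-base64 SHA-256 of the DER certificate bytes, which the config now precomputes. A sketch of that shape, using stdlib `base64` with padding stripped in place of the `unpaddedbase64` helper, and a placeholder instead of a real certificate:

```python
import base64
import hashlib

def sha256_fingerprint_entry(der_bytes):
    # sha256 over the DER-encoded certificate, then unpadded base64,
    # matching the {"sha256": ...} entries in tls_fingerprints
    digest = hashlib.sha256(der_bytes).digest()
    return {"sha256": base64.b64encode(digest).rstrip(b"=").decode("ascii")}

entry = sha256_fingerprint_entry(b"placeholder, not a real certificate")
# a 32-byte digest encodes to 43 unpadded base64 characters
```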
84 | 84 | sql_logger.debug("[SQL] {%s} %s", self.name, sql) |
85 | 85 | |
86 | 86 | sql = self.database_engine.convert_param_style(sql) |
87 | ||
88 | 87 | if args: |
89 | 88 | try: |
90 | 89 | sql_logger.debug( |
36 | 36 | ) |
37 | 37 | |
38 | 38 | def get_app_services(self): |
39 | return defer.succeed(self.services_cache) | |
39 | return self.services_cache | |
40 | 40 | |
41 | 41 | def get_app_service_by_user_id(self, user_id): |
42 | 42 | """Retrieve an application service from their user ID. |
53 | 53 | """ |
54 | 54 | for service in self.services_cache: |
55 | 55 | if service.sender == user_id: |
56 | return defer.succeed(service) | |
57 | return defer.succeed(None) | |
56 | return service | |
57 | return None | |
58 | 58 | |
59 | 59 | def get_app_service_by_token(self, token): |
60 | 60 | """Get the application service with the given appservice token. |
66 | 66 | """ |
67 | 67 | for service in self.services_cache: |
68 | 68 | if service.token == token: |
69 | return defer.succeed(service) | |
70 | return defer.succeed(None) | |
69 | return service | |
70 | return None | |
71 | 71 | |
72 | 72 | def get_app_service_rooms(self, service): |
73 | 73 | """Get a list of RoomsForUser for this application service. |
162 | 162 | ["as_id"] |
163 | 163 | ) |
164 | 164 | # NB: This assumes this class is linked with ApplicationServiceStore |
165 | as_list = yield self.get_app_services() | |
165 | as_list = self.get_app_services() | |
166 | 166 | services = [] |
167 | 167 | |
168 | 168 | for res in results: |
602 | 602 | "rejections", |
603 | 603 | "redactions", |
604 | 604 | "room_memberships", |
605 | "state_events", | |
606 | 605 | "topics" |
607 | 606 | ): |
608 | 607 | txn.executemany( |
24 | 24 | |
25 | 25 | # Remember to update this number every time a change is made to database |
26 | 26 | # schema files, so the users will be informed on server restarts. |
27 | SCHEMA_VERSION = 36 | |
27 | SCHEMA_VERSION = 37 | |
28 | 28 | |
29 | 29 | dir_path = os.path.abspath(os.path.dirname(__file__)) |
30 | 30 |
319 | 319 | txn.execute(sql, (prev_id, current_id, limit,)) |
320 | 320 | return txn.fetchall() |
321 | 321 | |
322 | if prev_id == current_id: | |
323 | return defer.succeed([]) | |
324 | ||
322 | 325 | return self.runInteraction( |
323 | 326 | "get_all_new_public_rooms", get_all_new_public_rooms |
324 | 327 | ) |
0 | # Copyright 2016 OpenMarket Ltd | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. | |
13 | ||
14 | from synapse.storage.prepare_database import get_statements | |
15 | from synapse.storage.engines import PostgresEngine | |
16 | ||
17 | import logging | |
18 | ||
19 | logger = logging.getLogger(__name__) | |
20 | ||
21 | DROP_INDICES = """ | |
22 | -- We only ever query based on event_id | |
23 | DROP INDEX IF EXISTS state_events_room_id; | |
24 | DROP INDEX IF EXISTS state_events_type; | |
25 | DROP INDEX IF EXISTS state_events_state_key; | |
26 | ||
27 | -- room_id is indexed elsewhere | |
28 | DROP INDEX IF EXISTS current_state_events_room_id; | |
29 | DROP INDEX IF EXISTS current_state_events_state_key; | |
30 | DROP INDEX IF EXISTS current_state_events_type; | |
31 | ||
32 | DROP INDEX IF EXISTS transactions_have_ref; | |
33 | ||
34 | -- (topological_ordering, stream_ordering, room_id) seems like a strange index, | |
35 | -- and is used incredibly rarely. | |
36 | DROP INDEX IF EXISTS events_order_topo_stream_room; | |
37 | ||
38 | DROP INDEX IF EXISTS event_search_ev_idx; | |
39 | """ | |
40 | ||
41 | POSTGRES_DROP_CONSTRAINT = """ | |
42 | ALTER TABLE event_auth DROP CONSTRAINT IF EXISTS event_auth_event_id_auth_id_room_id_key; | |
43 | """ | |
44 | ||
45 | SQLITE_DROP_CONSTRAINT = """ | |
46 | DROP INDEX IF EXISTS evauth_edges_id; | |
47 | ||
48 | CREATE TABLE IF NOT EXISTS event_auth_new( | |
49 | event_id TEXT NOT NULL, | |
50 | auth_id TEXT NOT NULL, | |
51 | room_id TEXT NOT NULL | |
52 | ); | |
53 | ||
54 | INSERT INTO event_auth_new | |
55 | SELECT event_id, auth_id, room_id | |
56 | FROM event_auth; | |
57 | ||
58 | DROP TABLE event_auth; | |
59 | ||
60 | ALTER TABLE event_auth_new RENAME TO event_auth; | |
61 | ||
62 | CREATE INDEX evauth_edges_id ON event_auth(event_id); | |
63 | """ | |
64 | ||
65 | ||
66 | def run_create(cur, database_engine, *args, **kwargs): | |
67 | for statement in get_statements(DROP_INDICES.splitlines()): | |
68 | cur.execute(statement) | |
69 | ||
70 | if isinstance(database_engine, PostgresEngine): | |
71 | drop_constraint = POSTGRES_DROP_CONSTRAINT | |
72 | else: | |
73 | drop_constraint = SQLITE_DROP_CONSTRAINT | |
74 | ||
75 | for statement in get_statements(drop_constraint.splitlines()): | |
76 | cur.execute(statement) | |
77 | ||
78 | ||
79 | def run_upgrade(cur, database_engine, *args, **kwargs): | |
80 | pass |
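The `get_statements` helper imported by the delta above consumes the script line by line and yields one statement per terminating semicolon. A simplified stand-in with that behaviour, for illustration only (the comment handling and the exact yielded form are assumptions, not the real synapse implementation):

```python
def get_statements_sketch(lines):
    """Yield complete SQL statements from an iterable of script lines.

    Hypothetical stand-in for synapse.storage.prepare_database.get_statements:
    strips '--' comments, accumulates lines, and emits one statement at
    each trailing semicolon.
    """
    current = []
    for line in lines:
        line = line.split("--", 1)[0].strip()  # drop SQL line comments
        if not line:
            continue
        current.append(line)
        if line.endswith(";"):
            yield " ".join(current).rstrip(";")
            current = []
```

Feeding it `DROP_INDICES.splitlines()` would then produce one `DROP INDEX ...` statement per `cur.execute()` call.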
0 | /* Copyright 2016 OpenMarket Ltd | |
1 | * | |
2 | * Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | * you may not use this file except in compliance with the License. | |
4 | * You may obtain a copy of the License at | |
5 | * | |
6 | * http://www.apache.org/licenses/LICENSE-2.0 | |
7 | * | |
8 | * Unless required by applicable law or agreed to in writing, software | |
9 | * distributed under the License is distributed on an "AS IS" BASIS, | |
10 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | * See the License for the specific language governing permissions and | |
12 | * limitations under the License. | |
13 | */ | |
14 | ||
15 | /* | |
16 | * Update any email addresses that were stored with mixed case into all | |
17 | * lowercase | |
18 | */ | |
19 | ||
20 | -- There may be "duplicate" emails (with different case) already in the table, | |
21 | -- so we find them and move aside all but the most recently created binding. |
22 | UPDATE user_threepids | |
23 | SET medium = 'email_old' | |
24 | WHERE medium = 'email' | |
25 | AND address IN ( | |
26 | -- Select every address whose binding is NOT the most recently |
27 | -- created one for that (case-insensitive) address. |
28 | SELECT u.address | |
29 | FROM | |
30 | user_threepids AS u, | |
31 | -- `duplicate_addresses` lists each email address that appears |
32 | -- multiple times, along with its most recent binding time |
33 | ( | |
34 | SELECT lower(u1.address) AS address, max(u1.added_at) AS max_ts | |
35 | FROM user_threepids AS u1 | |
36 | INNER JOIN user_threepids AS u2 ON u1.medium = u2.medium AND lower(u1.address) = lower(u2.address) AND u1.address != u2.address | |
37 | WHERE u1.medium = 'email' AND u2.medium = 'email' | |
38 | GROUP BY lower(u1.address) | |
39 | ) AS duplicate_addresses | |
40 | WHERE | |
41 | lower(u.address) = duplicate_addresses.address | |
42 | AND u.added_at != max_ts -- NOT the most recently created | |
43 | ); | |
44 | ||
45 | ||
46 | -- This update is now safe since we've removed the duplicate addresses. | |
47 | UPDATE user_threepids SET address = LOWER(address) WHERE medium = 'email'; | |
48 | ||
49 | ||
50 | /* Add an index for the select we do on password reset */ |
51 | CREATE INDEX user_threepids_medium_address on user_threepids (medium, address); |
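The effect of the two UPDATEs above can be sketched in plain Python (a hypothetical `dedupe_threepids` helper, not code from the repo): among case-insensitive duplicates, only the most recently created binding keeps the `email` medium, and surviving addresses are lowercased.

```python
def dedupe_threepids(rows):
    """Approximate the migration above on a list of dicts with
    'medium', 'address' and 'added_at' keys (mutates rows in place)."""
    # Newest binding time and duplicate count per lowercased address.
    newest, counts = {}, {}
    for row in rows:
        if row["medium"] != "email":
            continue
        key = row["address"].lower()
        newest[key] = max(newest.get(key, 0), row["added_at"])
        counts[key] = counts.get(key, 0) + 1

    # First UPDATE: park all but the newest duplicate as 'email_old'.
    for row in rows:
        if row["medium"] != "email":
            continue
        key = row["address"].lower()
        if counts[key] > 1 and row["added_at"] != newest[key]:
            row["medium"] = "email_old"

    # Second UPDATE: lowercase the remaining email addresses.
    for row in rows:
        if row["medium"] == "email":
            row["address"] = row["address"].lower()
    return rows
```

One divergence worth noting: the SQL only pairs rows whose raw addresses differ in case, so exact same-case duplicates are left alone; the sketch treats any repeat as a duplicate.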
520 | 520 | ) |
521 | 521 | |
522 | 522 | @defer.inlineCallbacks |
523 | def get_room_events_max_id(self, direction='f'): | |
523 | def get_room_events_max_id(self, room_id=None): | |
524 | """Returns the current token for rooms stream. | |
525 | ||
526 | By default, it returns the current global stream token. Specifying a | |
527 | `room_id` causes it to return the current room-specific topological |
528 | token. | |
529 | """ | |
524 | 530 | token = yield self._stream_id_gen.get_current_token() |
525 | if direction != 'b': | |
531 | if room_id is None: | |
526 | 532 | defer.returnValue("s%d" % (token,)) |
527 | 533 | else: |
528 | 534 | topo = yield self.runInteraction( |
529 | "_get_max_topological_txn", self._get_max_topological_txn | |
535 | "_get_max_topological_txn", self._get_max_topological_txn, | |
536 | room_id, | |
530 | 537 | ) |
531 | 538 | defer.returnValue("t%d-%d" % (topo, token)) |
532 | 539 | |
578 | 585 | lambda r: r[0][0] if r else 0 |
579 | 586 | ) |
580 | 587 | |
581 | def _get_max_topological_txn(self, txn): | |
588 | def _get_max_topological_txn(self, txn, room_id): | |
582 | 589 | txn.execute( |
583 | 590 | "SELECT MAX(topological_ordering) FROM events" |
584 | " WHERE outlier = ?", | |
585 | (False,) | |
591 | " WHERE room_id = ?", | |
592 | (room_id,) | |
586 | 593 | ) |
587 | 594 | |
588 | 595 | rows = txn.fetchall() |
40 | 40 | self.store = hs.get_datastore() |
41 | 41 | |
42 | 42 | @defer.inlineCallbacks |
43 | def get_current_token(self, direction='f'): | |
43 | def get_current_token(self): | |
44 | 44 | push_rules_key, _ = self.store.get_push_rules_stream_token() |
45 | 45 | to_device_key = self.store.get_to_device_stream_token() |
46 | 46 | |
47 | 47 | token = StreamToken( |
48 | 48 | room_key=( |
49 | yield self.sources["room"].get_current_key(direction) | |
49 | yield self.sources["room"].get_current_key() | |
50 | 50 | ), |
51 | 51 | presence_key=( |
52 | 52 | yield self.sources["presence"].get_current_key() |
64 | 64 | to_device_key=to_device_key, |
65 | 65 | ) |
66 | 66 | defer.returnValue(token) |
67 | ||
68 | @defer.inlineCallbacks | |
69 | def get_current_token_for_room(self, room_id): | |
70 | push_rules_key, _ = self.store.get_push_rules_stream_token() | |
71 | to_device_key = self.store.get_to_device_stream_token() | |
72 | ||
73 | token = StreamToken( | |
74 | room_key=( | |
75 | yield self.sources["room"].get_current_key_for_room(room_id) | |
76 | ), | |
77 | presence_key=( | |
78 | yield self.sources["presence"].get_current_key() | |
79 | ), | |
80 | typing_key=( | |
81 | yield self.sources["typing"].get_current_key() | |
82 | ), | |
83 | receipt_key=( | |
84 | yield self.sources["receipt"].get_current_key() | |
85 | ), | |
86 | account_data_key=( | |
87 | yield self.sources["account_data"].get_current_key() | |
88 | ), | |
89 | push_rules_key=push_rules_key, | |
90 | to_device_key=to_device_key, | |
91 | ) | |
92 | defer.returnValue(token) |
17 | 17 | from collections import namedtuple |
18 | 18 | |
19 | 19 | |
20 | Requester = namedtuple("Requester", | |
21 | ["user", "access_token_id", "is_guest", "device_id"]) | |
20 | Requester = namedtuple("Requester", [ | |
21 | "user", "access_token_id", "is_guest", "device_id", "app_service", | |
22 | ]) | |
22 | 23 | """ |
23 | 24 | Represents the user making a request |
24 | 25 | |
28 | 29 | request, or None if it came via the appservice API or similar |
29 | 30 | is_guest (bool): True if the user making this request is a guest user |
30 | 31 | device_id (str|None): device_id which was set at authentication time |
32 | app_service (ApplicationService|None): the AS requesting on behalf of the user | |
31 | 33 | """ |
32 | 34 | |
33 | 35 | |
34 | 36 | def create_requester(user_id, access_token_id=None, is_guest=False, |
35 | device_id=None): | |
37 | device_id=None, app_service=None): | |
36 | 38 | """ |
37 | 39 | Create a new ``Requester`` object |
38 | 40 | |
42 | 44 | request, or None if it came via the appservice API or similar |
43 | 45 | is_guest (bool): True if the user making this request is a guest user |
44 | 46 | device_id (str|None): device_id which was set at authentication time |
47 | app_service (ApplicationService|None): the AS requesting on behalf of the user | |
45 | 48 | |
46 | 49 | Returns: |
47 | 50 | Requester |
48 | 51 | """ |
49 | 52 | if not isinstance(user_id, UserID): |
50 | 53 | user_id = UserID.from_string(user_id) |
51 | return Requester(user_id, access_token_id, is_guest, device_id) | |
54 | return Requester(user_id, access_token_id, is_guest, device_id, app_service) | |
52 | 55 | |
53 | 56 | |
54 | 57 | def get_domain_from_id(string): |
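Because `app_service` is appended with a `None` default, existing `create_requester` call sites keep working unchanged. A self-contained sketch of just that shape (the real function also normalises `user_id` through `UserID.from_string`, omitted here):

```python
from collections import namedtuple

# Mirrors the widened tuple from the diff above.
Requester = namedtuple("Requester", [
    "user", "access_token_id", "is_guest", "device_id", "app_service",
])


def create_requester(user_id, access_token_id=None, is_guest=False,
                     device_id=None, app_service=None):
    # The new trailing field defaults to None, so pre-existing callers
    # that pass only the first four arguments are unaffected.
    return Requester(user_id, access_token_id, is_guest, device_id, app_service)
```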
0 | ||
1 | from twisted.internet import defer | |
2 | ||
3 | from synapse.config._base import ConfigError | |
4 | from synapse.types import UserID | |
5 | ||
8 | ||
9 | import logging | |
10 | ||
11 | try: | |
12 | import ldap3 | |
13 | import ldap3.core.exceptions | |
14 | except ImportError: | |
15 | ldap3 = None | |
16 | pass | |
17 | ||
18 | ||
19 | logger = logging.getLogger(__name__) | |
20 | ||
21 | ||
22 | class LDAPMode(object): | |
23 | SIMPLE = "simple" |
24 | SEARCH = "search" |
25 | ||
26 | LIST = (SIMPLE, SEARCH) | |
27 | ||
28 | ||
29 | class LdapAuthProvider(object): | |
30 | __version__ = "0.1" | |
31 | ||
32 | def __init__(self, config, account_handler): | |
33 | self.account_handler = account_handler | |
34 | ||
35 | if not ldap3: | |
36 | raise RuntimeError( | |
37 | 'Missing ldap3 library. This is required for LDAP Authentication.' | |
38 | ) | |
39 | ||
40 | self.ldap_mode = config.mode | |
41 | self.ldap_uri = config.uri | |
42 | self.ldap_start_tls = config.start_tls | |
43 | self.ldap_base = config.base | |
44 | self.ldap_attributes = config.attributes | |
45 | if self.ldap_mode == LDAPMode.SEARCH: | |
46 | self.ldap_bind_dn = config.bind_dn | |
47 | self.ldap_bind_password = config.bind_password | |
48 | self.ldap_filter = config.filter | |
49 | ||
50 | @defer.inlineCallbacks | |
51 | def check_password(self, user_id, password): | |
52 | """ Attempt to authenticate a user against an LDAP Server | |
53 | and register an account if none exists. | |
54 | ||
55 | Returns: | |
56 | True if authentication against LDAP was successful | |
57 | """ | |
58 | localpart = UserID.from_string(user_id).localpart | |
59 | ||
60 | try: | |
61 | server = ldap3.Server(self.ldap_uri) | |
62 | logger.debug( | |
63 | "Attempting LDAP connection with %s", | |
64 | self.ldap_uri | |
65 | ) | |
66 | ||
67 | if self.ldap_mode == LDAPMode.SIMPLE: | |
68 | result, conn = self._ldap_simple_bind( | |
69 | server=server, localpart=localpart, password=password | |
70 | ) | |
71 | logger.debug( | |
72 | 'LDAP authentication method simple bind returned: %s (conn: %s)', | |
73 | result, | |
74 | conn | |
75 | ) | |
76 | if not result: | |
77 | defer.returnValue(False) | |
78 | elif self.ldap_mode == LDAPMode.SEARCH: | |
79 | result, conn = self._ldap_authenticated_search( | |
80 | server=server, localpart=localpart, password=password | |
81 | ) | |
82 | logger.debug( | |
83 | 'LDAP auth method authenticated search returned: %s (conn: %s)', | |
84 | result, | |
85 | conn | |
86 | ) | |
87 | if not result: | |
88 | defer.returnValue(False) | |
89 | else: | |
90 | raise RuntimeError( | |
91 | 'Invalid LDAP mode specified: {mode}'.format( | |
92 | mode=self.ldap_mode | |
93 | ) | |
94 | ) | |
95 | ||
96 | try: | |
97 | logger.info( | |
98 | "User authenticated against LDAP server: %s", | |
99 | conn | |
100 | ) | |
101 | except NameError: | |
102 | logger.warn( | |
103 | "Authentication method yielded no LDAP connection, aborting!" | |
104 | ) | |
105 | defer.returnValue(False) | |
106 | ||
107 | # check if user with user_id exists | |
108 | if (yield self.account_handler.check_user_exists(user_id)): | |
109 | # exists, authentication complete | |
110 | conn.unbind() | |
111 | defer.returnValue(True) | |
112 | ||
113 | else: | |
114 | # does not exist, fetch metadata for account creation from | |
115 | # existing ldap connection | |
116 | query = "({prop}={value})".format( | |
117 | prop=self.ldap_attributes['uid'], | |
118 | value=localpart | |
119 | ) | |
120 | ||
121 | if self.ldap_mode == LDAPMode.SEARCH and self.ldap_filter: | |
122 | query = "(&{filter}{user_filter})".format( | |
123 | filter=query, | |
124 | user_filter=self.ldap_filter | |
125 | ) | |
126 | logger.debug( | |
127 | "ldap registration filter: %s", | |
128 | query | |
129 | ) | |
130 | ||
131 | conn.search( | |
132 | search_base=self.ldap_base, | |
133 | search_filter=query, | |
134 | attributes=[ | |
135 | self.ldap_attributes['name'], | |
136 | self.ldap_attributes['mail'] | |
137 | ] | |
138 | ) | |
139 | ||
140 | if len(conn.response) == 1: | |
141 | attrs = conn.response[0]['attributes'] | |
142 | mail = attrs[self.ldap_attributes['mail']][0] | |
143 | name = attrs[self.ldap_attributes['name']][0] | |
144 | ||
145 | # create account | |
146 | user_id, access_token = ( | |
147 | yield self.account_handler.register(localpart=localpart) | |
148 | ) | |
149 | ||
150 | # TODO: bind email, set displayname with data from ldap directory | |
151 | ||
152 | logger.info( | |
153 | "Registration based on LDAP data was successful: %d: %s (%s, %)", | |
154 | user_id, | |
155 | localpart, | |
156 | name, | |
157 | ||
158 | ) | |
159 | ||
160 | defer.returnValue(True) | |
161 | else: | |
162 | if len(conn.response) == 0: | |
163 | logger.warn("LDAP registration failed, no result.") | |
164 | else: | |
165 | logger.warn( | |
166 | "LDAP registration failed, too many results (%s)", | |
167 | len(conn.response) | |
168 | ) | |
169 | ||
170 | defer.returnValue(False) | |
171 | ||
172 | defer.returnValue(False) | |
173 | ||
174 | except ldap3.core.exceptions.LDAPException as e: | |
175 | logger.warn("Error during ldap authentication: %s", e) | |
176 | defer.returnValue(False) | |
177 | ||
178 | @staticmethod | |
179 | def parse_config(config): | |
180 | class _LdapConfig(object): | |
181 | pass | |
182 | ||
183 | ldap_config = _LdapConfig() | |
184 | ||
185 | ldap_config.enabled = config.get("enabled", False) | |
186 | ||
187 | ldap_config.mode = LDAPMode.SIMPLE | |
188 | ||
189 | # verify config sanity | |
190 | _require_keys(config, [ | |
191 | "uri", | |
192 | "base", | |
193 | "attributes", | |
194 | ]) | |
195 | ||
196 | ldap_config.uri = config["uri"] | |
197 | ldap_config.start_tls = config.get("start_tls", False) | |
198 | ldap_config.base = config["base"] | |
199 | ldap_config.attributes = config["attributes"] | |
200 | ||
201 | if "bind_dn" in config: | |
202 | ldap_config.mode = LDAPMode.SEARCH | |
203 | _require_keys(config, [ | |
204 | "bind_dn", | |
205 | "bind_password", | |
206 | ]) | |
207 | ||
208 | ldap_config.bind_dn = config["bind_dn"] | |
209 | ldap_config.bind_password = config["bind_password"] | |
210 | ldap_config.filter = config.get("filter", None) | |
211 | ||
212 | # verify attribute lookup | |
213 | _require_keys(config['attributes'], [ | |
214 | "uid", | |
215 | "name", | |
216 | "mail", | |
217 | ]) | |
218 | ||
219 | return ldap_config | |
220 | ||
221 | def _ldap_simple_bind(self, server, localpart, password): | |
222 | """ Attempt a simple bind with the credentials | |
223 | given by the user against the LDAP server. | |
224 | ||
225 | Returns True, LDAP3Connection | |
226 | if the bind was successful | |
227 | Returns False, None | |
228 | if an error occurred |
229 | """ | |
230 | ||
231 | try: | |
232 | # bind with the local user's ldap credentials |
233 | bind_dn = "{prop}={value},{base}".format( | |
234 | prop=self.ldap_attributes['uid'], | |
235 | value=localpart, | |
236 | base=self.ldap_base | |
237 | ) | |
238 | conn = ldap3.Connection(server, bind_dn, password) | |
239 | logger.debug( | |
240 | "Established LDAP connection in simple bind mode: %s", | |
241 | conn | |
242 | ) | |
243 | ||
244 | if self.ldap_start_tls: | |
245 | conn.start_tls() | |
246 | logger.debug( | |
247 | "Upgraded LDAP connection in simple bind mode through StartTLS: %s", | |
248 | conn | |
249 | ) | |
250 | ||
251 | if conn.bind(): | |
252 | # GOOD: bind okay | |
253 | logger.debug("LDAP Bind successful in simple bind mode.") | |
254 | return True, conn | |
255 | ||
256 | # BAD: bind failed | |
257 | logger.info( | |
258 | "Binding against LDAP failed for '%s' failed: %s", | |
259 | localpart, conn.result['description'] | |
260 | ) | |
261 | conn.unbind() | |
262 | return False, None | |
263 | ||
264 | except ldap3.core.exceptions.LDAPException as e: | |
265 | logger.warn("Error during LDAP authentication: %s", e) | |
266 | return False, None | |
267 | ||
268 | def _ldap_authenticated_search(self, server, localpart, password): | |
269 | """ Attempt to login with the preconfigured bind_dn | |
270 | and then continue searching and filtering within | |
271 | the base_dn | |
272 | ||
273 | Returns (True, LDAP3Connection) | |
274 | if a single matching DN within the base was found | |
275 | that matched the filter expression, and with which | |
276 | a successful bind was achieved | |
277 | ||
278 | The LDAP3Connection returned is the instance that was used to | |
279 | verify the password, not the one using the configured bind_dn. |
280 | Returns (False, None) | |
281 | if an error occurred |
282 | """ | |
283 | ||
284 | try: | |
285 | conn = ldap3.Connection( | |
286 | server, | |
287 | self.ldap_bind_dn, | |
288 | self.ldap_bind_password | |
289 | ) | |
290 | logger.debug( | |
291 | "Established LDAP connection in search mode: %s", | |
292 | conn | |
293 | ) | |
294 | ||
295 | if self.ldap_start_tls: | |
296 | conn.start_tls() | |
297 | logger.debug( | |
298 | "Upgraded LDAP connection in search mode through StartTLS: %s", | |
299 | conn | |
300 | ) | |
301 | ||
302 | if not conn.bind(): | |
303 | logger.warn( | |
304 | "Binding against LDAP with `bind_dn` failed: %s", | |
305 | conn.result['description'] | |
306 | ) | |
307 | conn.unbind() | |
308 | return False, None | |
309 | ||
310 | # construct search_filter like (uid=localpart) | |
311 | query = "({prop}={value})".format( | |
312 | prop=self.ldap_attributes['uid'], | |
313 | value=localpart | |
314 | ) | |
315 | if self.ldap_filter: | |
316 | # combine with the AND expression | |
317 | query = "(&{query}{filter})".format( | |
318 | query=query, | |
319 | filter=self.ldap_filter | |
320 | ) | |
321 | logger.debug( | |
322 | "LDAP search filter: %s", | |
323 | query | |
324 | ) | |
325 | conn.search( | |
326 | search_base=self.ldap_base, | |
327 | search_filter=query | |
328 | ) | |
329 | ||
330 | if len(conn.response) == 1: | |
331 | # GOOD: found exactly one result | |
332 | user_dn = conn.response[0]['dn'] | |
333 | logger.debug('LDAP search found dn: %s', user_dn) | |
334 | ||
335 | # unbind and simple bind with user_dn to verify the password | |
336 | # Note: do not use rebind(), for some reason it did not verify | |
337 | # the password for me! | |
338 | conn.unbind() | |
339 | return self._ldap_simple_bind(server, localpart, password) | |
340 | else: | |
341 | # BAD: found 0 or > 1 results, abort! | |
342 | if len(conn.response) == 0: | |
343 | logger.info( | |
344 | "LDAP search returned no results for '%s'", | |
345 | localpart | |
346 | ) | |
347 | else: | |
348 | logger.info( | |
349 | "LDAP search returned too many (%s) results for '%s'", | |
350 | len(conn.response), localpart | |
351 | ) | |
352 | conn.unbind() | |
353 | return False, None | |
354 | ||
355 | except ldap3.core.exceptions.LDAPException as e: | |
356 | logger.warn("Error during LDAP authentication: %s", e) | |
357 | return False, None | |
358 | ||
359 | ||
360 | def _require_keys(config, required): | |
361 | missing = [key for key in required if key not in config] | |
362 | if missing: | |
363 | raise ConfigError( | |
364 | "LDAP enabled but missing required config values: {}".format( | |
365 | ", ".join(missing) | |
366 | ) | |
367 | ) |
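`parse_config` above boils down to one rule: SEARCH mode if and only if `bind_dn` is present (which then also makes `bind_password` mandatory), SIMPLE otherwise. A minimal standalone sketch of that decision with the key checks inlined (not the provider's actual code):

```python
class ConfigError(Exception):
    """Stand-in for synapse.config._base.ConfigError."""


def select_ldap_mode(config):
    """Return 'simple' or 'search', mirroring parse_config's mode logic."""
    def require(keys):
        missing = [k for k in keys if k not in config]
        if missing:
            raise ConfigError(
                "LDAP enabled but missing required config values: "
                + ", ".join(missing)
            )

    require(["uri", "base", "attributes"])  # always mandatory
    if "bind_dn" in config:
        # Presence of bind_dn switches the provider into search mode.
        require(["bind_dn", "bind_password"])
        return "search"
    return "simple"
```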
19 | 19 | from synapse.api.auth import Auth |
20 | 20 | from synapse.api.errors import AuthError |
21 | 21 | from synapse.types import UserID |
22 | from tests.utils import setup_test_homeserver | |
22 | from tests.utils import setup_test_homeserver, mock_getRawHeaders | |
23 | 23 | |
24 | 24 | import pymacaroons |
25 | 25 | |
50 | 50 | |
51 | 51 | request = Mock(args={}) |
52 | 52 | request.args["access_token"] = [self.test_token] |
53 | request.requestHeaders.getRawHeaders = Mock(return_value=[""]) | |
53 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
54 | 54 | requester = yield self.auth.get_user_by_req(request) |
55 | 55 | self.assertEquals(requester.user.to_string(), self.test_user) |
56 | 56 | |
60 | 60 | |
61 | 61 | request = Mock(args={}) |
62 | 62 | request.args["access_token"] = [self.test_token] |
63 | request.requestHeaders.getRawHeaders = Mock(return_value=[""]) | |
63 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
64 | 64 | d = self.auth.get_user_by_req(request) |
65 | 65 | self.failureResultOf(d, AuthError) |
66 | 66 | |
73 | 73 | self.store.get_user_by_access_token = Mock(return_value=user_info) |
74 | 74 | |
75 | 75 | request = Mock(args={}) |
76 | request.requestHeaders.getRawHeaders = Mock(return_value=[""]) | |
76 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
77 | 77 | d = self.auth.get_user_by_req(request) |
78 | 78 | self.failureResultOf(d, AuthError) |
79 | 79 | |
85 | 85 | |
86 | 86 | request = Mock(args={}) |
87 | 87 | request.args["access_token"] = [self.test_token] |
88 | request.requestHeaders.getRawHeaders = Mock(return_value=[""]) | |
88 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
89 | 89 | requester = yield self.auth.get_user_by_req(request) |
90 | 90 | self.assertEquals(requester.user.to_string(), self.test_user) |
91 | 91 | |
95 | 95 | |
96 | 96 | request = Mock(args={}) |
97 | 97 | request.args["access_token"] = [self.test_token] |
98 | request.requestHeaders.getRawHeaders = Mock(return_value=[""]) | |
98 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
99 | 99 | d = self.auth.get_user_by_req(request) |
100 | 100 | self.failureResultOf(d, AuthError) |
101 | 101 | |
105 | 105 | self.store.get_user_by_access_token = Mock(return_value=None) |
106 | 106 | |
107 | 107 | request = Mock(args={}) |
108 | request.requestHeaders.getRawHeaders = Mock(return_value=[""]) | |
108 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
109 | 109 | d = self.auth.get_user_by_req(request) |
110 | 110 | self.failureResultOf(d, AuthError) |
111 | 111 | |
120 | 120 | request = Mock(args={}) |
121 | 121 | request.args["access_token"] = [self.test_token] |
122 | 122 | request.args["user_id"] = [masquerading_user_id] |
123 | request.requestHeaders.getRawHeaders = Mock(return_value=[""]) | |
123 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
124 | 124 | requester = yield self.auth.get_user_by_req(request) |
125 | 125 | self.assertEquals(requester.user.to_string(), masquerading_user_id) |
126 | 126 | |
134 | 134 | request = Mock(args={}) |
135 | 135 | request.args["access_token"] = [self.test_token] |
136 | 136 | request.args["user_id"] = [masquerading_user_id] |
137 | request.requestHeaders.getRawHeaders = Mock(return_value=[""]) | |
137 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
138 | 138 | d = self.auth.get_user_by_req(request) |
139 | 139 | self.failureResultOf(d, AuthError) |
140 | 140 |
16 | 16 | from .. import unittest |
17 | 17 | |
18 | 18 | from synapse.handlers.register import RegistrationHandler |
19 | from synapse.types import UserID | |
19 | from synapse.types import UserID, create_requester | |
20 | 20 | |
21 | 21 | from tests.utils import setup_test_homeserver |
22 | 22 | |
56 | 56 | local_part = "someone" |
57 | 57 | display_name = "someone" |
58 | 58 | user_id = "@someone:test" |
59 | requester = create_requester("@as:test") | |
59 | 60 | result_user_id, result_token = yield self.handler.get_or_create_user( |
60 | local_part, display_name, duration_ms) | |
61 | requester, local_part, display_name, duration_ms) | |
61 | 62 | self.assertEquals(result_user_id, user_id) |
62 | 63 | self.assertEquals(result_token, 'secret') |
63 | 64 | |
73 | 74 | local_part = "frank" |
74 | 75 | display_name = "Frank" |
75 | 76 | user_id = "@frank:test" |
77 | requester = create_requester("@as:test") | |
76 | 78 | result_user_id, result_token = yield self.handler.get_or_create_user( |
77 | local_part, display_name, duration_ms) | |
79 | requester, local_part, display_name, duration_ms) | |
78 | 80 | self.assertEquals(result_user_id, user_id) |
79 | 81 | self.assertEquals(result_token, 'secret') |
218 | 218 | "user_id": self.u_onion.to_string(), |
219 | 219 | "typing": True, |
220 | 220 | } |
221 | ) | |
221 | ), | |
222 | federation_auth=True, | |
222 | 223 | ) |
223 | 224 | |
224 | 225 | self.on_new_event.assert_has_calls([ |
41 | 41 | @defer.inlineCallbacks |
42 | 42 | def replicate(self): |
43 | 43 | streams = self.slaved_store.stream_positions() |
44 | result = yield self.replication.replicate(streams, 100) | |
44 | writer = yield self.replication.replicate(streams, 100) | |
45 | result = writer.finish() | |
45 | 46 | yield self.slaved_store.process_replication(result) |
46 | 47 | |
47 | 48 | @defer.inlineCallbacks |
119 | 119 | self.hs.clock.advance_time_msec(1) |
120 | 120 | code, body = yield get |
121 | 121 | self.assertEquals(code, 200) |
122 | self.assertEquals(body, {}) | |
122 | self.assertEquals(body.get("rows", []), []) | |
123 | 123 | test_timeout.__name__ = "test_timeout_%s" % (stream) |
124 | 124 | return test_timeout |
125 | 125 | |
194 | 194 | self.assertIn("field_names", stream) |
195 | 195 | field_names = stream["field_names"] |
196 | 196 | self.assertIn("rows", stream) |
197 | self.assertTrue(stream["rows"]) | |
198 | 197 | for row in stream["rows"]: |
199 | 198 | self.assertEquals( |
200 | 199 | len(row), len(field_names), |
16 | 16 | from twisted.internet import defer |
17 | 17 | from mock import Mock |
18 | 18 | from tests import unittest |
19 | from tests.utils import mock_getRawHeaders | |
19 | 20 | import json |
20 | 21 | |
21 | 22 | |
29 | 30 | path='/_matrix/client/api/v1/createUser' |
30 | 31 | ) |
31 | 32 | self.request.args = {} |
33 | self.request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
32 | 34 | |
33 | self.appservice = None | |
34 | self.auth = Mock(get_appservice_by_req=Mock( | |
35 | side_effect=lambda x: defer.succeed(self.appservice)) | |
35 | self.registration_handler = Mock() | |
36 | ||
37 | self.appservice = Mock(sender="@as:test") | |
38 | self.datastore = Mock( | |
39 | get_app_service_by_token=Mock(return_value=self.appservice) | |
36 | 40 | ) |
37 | 41 | |
38 | self.auth_result = (False, None, None, None) | |
39 | self.auth_handler = Mock( | |
40 | check_auth=Mock(side_effect=lambda x, y, z: self.auth_result), | |
41 | get_session_data=Mock(return_value=None) | |
42 | ) | |
43 | self.registration_handler = Mock() | |
44 | self.identity_handler = Mock() | |
45 | self.login_handler = Mock() | |
46 | ||
47 | # do the dance to hook it up to the hs global | |
48 | self.handlers = Mock( | |
49 | auth_handler=self.auth_handler, | |
42 | # do the dance to hook things up to the hs global | |
43 | handlers = Mock( | |
50 | 44 | registration_handler=self.registration_handler, |
51 | identity_handler=self.identity_handler, | |
52 | login_handler=self.login_handler | |
53 | 45 | ) |
54 | 46 | self.hs = Mock() |
55 | self.hs.hostname = "supergbig~testing~thing.com" | |
56 | self.hs.get_auth = Mock(return_value=self.auth) | |
57 | self.hs.get_handlers = Mock(return_value=self.handlers) | |
58 | self.hs.config.enable_registration = True | |
59 | # init the thing we're testing | |
47 | self.hs.hostname = "superbig~testing~thing.com" | |
48 | self.hs.get_datastore = Mock(return_value=self.datastore) | |
49 | self.hs.get_handlers = Mock(return_value=handlers) | |
60 | 50 | self.servlet = CreateUserRestServlet(self.hs) |
61 | 51 | |
62 | 52 | @defer.inlineCallbacks |
14 | 14 | |
15 | 15 | from twisted.internet import defer |
16 | 16 | |
17 | from . import V2AlphaRestTestCase | |
17 | from tests import unittest | |
18 | 18 | |
19 | 19 | from synapse.rest.client.v2_alpha import filter |
20 | 20 | |
21 | from synapse.api.errors import StoreError | |
21 | from synapse.api.errors import Codes | |
22 | ||
23 | import synapse.types | |
24 | ||
25 | from synapse.types import UserID | |
26 | ||
27 | from ....utils import MockHttpResource, setup_test_homeserver | |
28 | ||
29 | PATH_PREFIX = "/_matrix/client/v2_alpha" | |
22 | 30 | |
23 | 31 | |
24 | class FilterTestCase(V2AlphaRestTestCase): | |
32 | class FilterTestCase(unittest.TestCase): | |
33 | ||
25 | 34 | USER_ID = "@apple:test" |
35 | EXAMPLE_FILTER = {"type": ["m.*"]} | |
36 | EXAMPLE_FILTER_JSON = '{"type": ["m.*"]}' | |
26 | 37 | TO_REGISTER = [filter] |
27 | 38 | |
28 | def make_datastore_mock(self): | |
29 | datastore = super(FilterTestCase, self).make_datastore_mock() | |
39 | @defer.inlineCallbacks | |
40 | def setUp(self): | |
41 | self.mock_resource = MockHttpResource(prefix=PATH_PREFIX) | |
30 | 42 | |
31 | self._user_filters = {} | |
43 | self.hs = yield setup_test_homeserver( | |
44 | http_client=None, | |
45 | resource_for_client=self.mock_resource, | |
46 | resource_for_federation=self.mock_resource, | |
47 | ) | |
32 | 48 | |
33 | def add_user_filter(user_localpart, definition): | |
34 | filters = self._user_filters.setdefault(user_localpart, []) | |
35 | filter_id = len(filters) | |
36 | filters.append(definition) | |
37 | return defer.succeed(filter_id) | |
38 | datastore.add_user_filter = add_user_filter | |
49 | self.auth = self.hs.get_auth() | |
39 | 50 | |
40 | def get_user_filter(user_localpart, filter_id): | |
41 | if user_localpart not in self._user_filters: | |
42 | raise StoreError(404, "No user") | |
43 | filters = self._user_filters[user_localpart] | |
44 | if filter_id >= len(filters): | |
45 | raise StoreError(404, "No filter") | |
46 | return defer.succeed(filters[filter_id]) | |
47 | datastore.get_user_filter = get_user_filter | |
51 | def get_user_by_access_token(token=None, allow_guest=False): | |
52 | return { | |
53 | "user": UserID.from_string(self.USER_ID), | |
54 | "token_id": 1, | |
55 | "is_guest": False, | |
56 | } | |
48 | 57 | |
49 | return datastore | |
58 | def get_user_by_req(request, allow_guest=False, rights="access"): | |
59 | return synapse.types.create_requester( | |
60 | UserID.from_string(self.USER_ID), 1, False, None) | |
61 | ||
62 | self.auth.get_user_by_access_token = get_user_by_access_token | |
63 | self.auth.get_user_by_req = get_user_by_req | |
64 | ||
65 | self.store = self.hs.get_datastore() | |
66 | self.filtering = self.hs.get_filtering() | |
67 | ||
68 | for r in self.TO_REGISTER: | |
69 | r.register_servlets(self.hs, self.mock_resource) | |
50 | 70 | |
51 | 71 | @defer.inlineCallbacks |
52 | 72 | def test_add_filter(self): |
53 | 73 | (code, response) = yield self.mock_resource.trigger( |
54 | "POST", "/user/%s/filter" % (self.USER_ID), '{"type": ["m.*"]}' | |
74 | "POST", "/user/%s/filter" % (self.USER_ID), self.EXAMPLE_FILTER_JSON | |
55 | 75 | ) |
56 | 76 | self.assertEquals(200, code) |
57 | 77 | self.assertEquals({"filter_id": "0"}, response) |
78 | filter = yield self.store.get_user_filter( | |
79 | user_localpart='apple', | |
80 | filter_id=0, | |
81 | ) | |
82 | self.assertEquals(filter, self.EXAMPLE_FILTER) | |
58 | 83 | |
59 | self.assertIn("apple", self._user_filters) | |
60 | self.assertEquals(len(self._user_filters["apple"]), 1) | |
61 | self.assertEquals({"type": ["m.*"]}, self._user_filters["apple"][0]) | |
84 | @defer.inlineCallbacks | |
85 | def test_add_filter_for_other_user(self): | |
86 | (code, response) = yield self.mock_resource.trigger( | |
87 | "POST", "/user/%s/filter" % ('@watermelon:test'), self.EXAMPLE_FILTER_JSON | |
88 | ) | |
89 | self.assertEquals(403, code) | |
90 | self.assertEquals(response['errcode'], Codes.FORBIDDEN) | |
91 | ||
92 | @defer.inlineCallbacks | |
93 | def test_add_filter_non_local_user(self): | |
94 | _is_mine = self.hs.is_mine | |
95 | self.hs.is_mine = lambda target_user: False | |
96 | (code, response) = yield self.mock_resource.trigger( | |
97 | "POST", "/user/%s/filter" % (self.USER_ID), self.EXAMPLE_FILTER_JSON | |
98 | ) | |
99 | self.hs.is_mine = _is_mine | |
100 | self.assertEquals(403, code) | |
101 | self.assertEquals(response['errcode'], Codes.FORBIDDEN) | |
62 | 102 | |
63 | 103 | @defer.inlineCallbacks |
64 | 104 | def test_get_filter(self): |
65 | self._user_filters["apple"] = [ | |
66 | {"type": ["m.*"]} | |
67 | ] | |
68 | ||
105 | filter_id = yield self.filtering.add_user_filter( | |
106 | user_localpart='apple', | |
107 | user_filter=self.EXAMPLE_FILTER | |
108 | ) | |
69 | 109 | (code, response) = yield self.mock_resource.trigger_get( |
70 | "/user/%s/filter/0" % (self.USER_ID) | |
110 | "/user/%s/filter/%s" % (self.USER_ID, filter_id) | |
71 | 111 | ) |
72 | 112 | self.assertEquals(200, code) |
73 | self.assertEquals({"type": ["m.*"]}, response) | |
113 | self.assertEquals(self.EXAMPLE_FILTER, response) | |
74 | 114 | |
75 | 115 | @defer.inlineCallbacks |
116 | def test_get_filter_non_existent(self): | |
117 | (code, response) = yield self.mock_resource.trigger_get( | |
118 | "/user/%s/filter/12382148321" % (self.USER_ID) | |
119 | ) | |
120 | self.assertEquals(400, code) | |
121 | self.assertEquals(response['errcode'], Codes.NOT_FOUND) | |
122 | ||
123 | # Currently invalid params do not have an appropriate errcode | |
124 | # in errors.py | |
125 | @defer.inlineCallbacks | |
126 | def test_get_filter_invalid_id(self): | |
127 | (code, response) = yield self.mock_resource.trigger_get( | |
128 | "/user/%s/filter/foobar" % (self.USER_ID) | |
129 | ) | |
130 | self.assertEquals(400, code) | |
131 | ||
132 | # Omitting the filter ID also returns an invalid_id error | |
133 | @defer.inlineCallbacks | |
76 | 134 | def test_get_filter_no_id(self): |
77 | self._user_filters["apple"] = [ | |
78 | {"type": ["m.*"]} | |
79 | ] | |
80 | ||
81 | 135 | (code, response) = yield self.mock_resource.trigger_get( |
82 | "/user/%s/filter/2" % (self.USER_ID) | |
136 | "/user/%s/filter/" % (self.USER_ID) | |
83 | 137 | ) |
84 | self.assertEquals(404, code) | |
85 | ||
86 | @defer.inlineCallbacks | |
87 | def test_get_filter_no_user(self): | |
88 | (code, response) = yield self.mock_resource.trigger_get( | |
89 | "/user/%s/filter/0" % (self.USER_ID) | |
90 | ) | |
91 | self.assertEquals(404, code) | |
138 | self.assertEquals(400, code) |
2 | 2 | from twisted.internet import defer |
3 | 3 | from mock import Mock |
4 | 4 | from tests import unittest |
5 | from tests.utils import mock_getRawHeaders | |
5 | 6 | import json |
6 | 7 | |
7 | 8 | |
15 | 16 | path='/_matrix/api/v2_alpha/register' |
16 | 17 | ) |
17 | 18 | self.request.args = {} |
19 | self.request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
18 | 20 | |
19 | 21 | self.appservice = None |
20 | 22 | self.auth = Mock(get_appservice_by_req=Mock( |
21 | side_effect=lambda x: defer.succeed(self.appservice)) | |
23 | side_effect=lambda x: self.appservice) | |
22 | 24 | ) |
23 | 25 | |
24 | 26 | self.auth_result = (False, None, None, None) |
36 | 36 | config = Mock( |
37 | 37 | app_service_config_files=self.as_yaml_files, |
38 | 38 | event_cache_size=1, |
39 | password_providers=[], | |
39 | 40 | ) |
40 | 41 | hs = yield setup_test_homeserver(config=config) |
41 | 42 | |
70 | 71 | outfile.write(yaml.dump(as_yaml)) |
71 | 72 | self.as_yaml_files.append(as_token) |
72 | 73 | |
73 | @defer.inlineCallbacks | |
74 | 74 | def test_retrieve_unknown_service_token(self): |
75 | service = yield self.store.get_app_service_by_token("invalid_token") | |
75 | service = self.store.get_app_service_by_token("invalid_token") | |
76 | 76 | self.assertEquals(service, None) |
77 | 77 | |
78 | @defer.inlineCallbacks | |
79 | 78 | def test_retrieval_of_service(self): |
80 | stored_service = yield self.store.get_app_service_by_token( | |
79 | stored_service = self.store.get_app_service_by_token( | |
81 | 80 | self.as_token |
82 | 81 | ) |
83 | 82 | self.assertEquals(stored_service.token, self.as_token) |
96 | 95 | [] |
97 | 96 | ) |
98 | 97 | |
99 | @defer.inlineCallbacks | |
100 | 98 | def test_retrieval_of_all_services(self): |
101 | services = yield self.store.get_app_services() | |
99 | services = self.store.get_app_services() | |
102 | 100 | self.assertEquals(len(services), 3) |
103 | 101 | |
104 | 102 | |
111 | 109 | config = Mock( |
112 | 110 | app_service_config_files=self.as_yaml_files, |
113 | 111 | event_cache_size=1, |
112 | password_providers=[], | |
114 | 113 | ) |
115 | 114 | hs = yield setup_test_homeserver(config=config) |
116 | 115 | self.db_pool = hs.get_db_pool() |
439 | 438 | f1 = self._write_config(suffix="1") |
440 | 439 | f2 = self._write_config(suffix="2") |
441 | 440 | |
442 | config = Mock(app_service_config_files=[f1, f2], event_cache_size=1) | |
441 | config = Mock( | |
442 | app_service_config_files=[f1, f2], event_cache_size=1, | |
443 | password_providers=[] | |
444 | ) | |
443 | 445 | hs = yield setup_test_homeserver(config=config, datastore=Mock()) |
444 | 446 | |
445 | 447 | ApplicationServiceStore(hs) |
449 | 451 | f1 = self._write_config(id="id", suffix="1") |
450 | 452 | f2 = self._write_config(id="id", suffix="2") |
451 | 453 | |
452 | config = Mock(app_service_config_files=[f1, f2], event_cache_size=1) | |
454 | config = Mock( | |
455 | app_service_config_files=[f1, f2], event_cache_size=1, | |
456 | password_providers=[] | |
457 | ) | |
453 | 458 | hs = yield setup_test_homeserver(config=config, datastore=Mock()) |
454 | 459 | |
455 | 460 | with self.assertRaises(ConfigError) as cm: |
465 | 470 | f1 = self._write_config(as_token="as_token", suffix="1") |
466 | 471 | f2 = self._write_config(as_token="as_token", suffix="2") |
467 | 472 | |
468 | config = Mock(app_service_config_files=[f1, f2], event_cache_size=1) | |
473 | config = Mock( | |
474 | app_service_config_files=[f1, f2], event_cache_size=1, | |
475 | password_providers=[] | |
476 | ) | |
469 | 477 | hs = yield setup_test_homeserver(config=config, datastore=Mock()) |
470 | 478 | |
471 | 479 | with self.assertRaises(ConfigError) as cm: |
51 | 51 | config.server_name = name |
52 | 52 | config.trusted_third_party_id_servers = [] |
53 | 53 | config.room_invite_state_types = [] |
54 | config.password_providers = [] | |
54 | 55 | |
55 | 56 | config.use_frozen_dicts = True |
56 | 57 | config.database_config = {"name": "sqlite3"} |
114 | 115 | return getcallargs(pattern_func, *invoked_args, **invoked_kargs) |
115 | 116 | |
116 | 117 | |
118 | def mock_getRawHeaders(headers=None): | |
119 | headers = headers if headers is not None else {} | |
120 | ||
121 | def getRawHeaders(name, default=None): | |
122 | return headers.get(name, default) | |
123 | ||
124 | return getRawHeaders | |
125 | ||
126 | ||
117 | 127 | # This is a mock /resource/ not an entire server |
118 | 128 | class MockHttpResource(HttpServer): |
119 | 129 | |
126 | 136 | |
127 | 137 | @patch('twisted.web.http.Request') |
128 | 138 | @defer.inlineCallbacks |
129 | def trigger(self, http_method, path, content, mock_request): | |
139 | def trigger(self, http_method, path, content, mock_request, federation_auth=False): | |
130 | 140 | """ Fire an HTTP event. |
131 | 141 | |
132 | 142 | Args: |
154 | 164 | |
155 | 165 | mock_request.getClientIP.return_value = "-" |
156 | 166 | |
157 | mock_request.requestHeaders.getRawHeaders.return_value = [ | |
158 | "X-Matrix origin=test,key=,sig=" | |
159 | ] | |
167 | headers = {} | |
168 | if federation_auth: | |
169 | headers["Authorization"] = ["X-Matrix origin=test,key=,sig="] | |
170 | mock_request.requestHeaders.getRawHeaders = mock_getRawHeaders(headers) | |
160 | 171 | |
161 | 172 | # return the right path if the event requires it |
162 | 173 | mock_request.path = path |
187 | 198 | ) |
188 | 199 | defer.returnValue((code, response)) |
189 | 200 | except CodeMessageException as e: |
190 | defer.returnValue((e.code, cs_error(e.msg))) | |
201 | defer.returnValue((e.code, cs_error(e.msg, code=e.errcode))) | |
191 | 202 | |
192 | 203 | raise KeyError("No event can handle %s" % path) |
193 | 204 |
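The diff above replaces the hard-coded federation header stub in `MockHttpResource.trigger` with the new `mock_getRawHeaders` helper from `tests/utils.py`, which backs Twisted's header lookup with a plain dict. A minimal standalone sketch of that pattern (the helper body matches the diff; the `Mock`-based usage is illustrative):

```python
from unittest.mock import Mock


def mock_getRawHeaders(headers=None):
    # Stand-in for Request.requestHeaders.getRawHeaders that looks
    # headers up in a plain dict instead of a real Headers object.
    headers = headers if headers is not None else {}

    def getRawHeaders(name, default=None):
        return headers.get(name, default)

    return getRawHeaders


# Usage mirroring the trigger() change: the Authorization header is
# only present when federation auth is wanted.
request = Mock()
federation_auth = True
headers = {}
if federation_auth:
    headers["Authorization"] = ["X-Matrix origin=test,key=,sig="]
request.requestHeaders.getRawHeaders = mock_getRawHeaders(headers)

print(request.requestHeaders.getRawHeaders("Authorization"))
# → ['X-Matrix origin=test,key=,sig=']
```

Because the helper returns `default` (normally `None`) for absent headers, tests that never set `Authorization` now exercise the unauthenticated path instead of always seeing the old hard-coded X-Matrix value.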