matrix-synapse / 1093536
Imported Upstream version 0.18.2 Erik Johnston
57 changed file(s) with 1568 addition(s) and 809 deletion(s).
2323 .coverage
2424 htmlcov
2525
26 demo/*.db
27 demo/*.log
28 demo/*.log.*
29 demo/*.pid
26 demo/*/*.db
27 demo/*/*.log
28 demo/*/*.log.*
29 demo/*/*.pid
3030 demo/media_store.*
3131 demo/etc
3232
0 Changes in synapse v0.18.1 (2016-10-0)
0 Changes in synapse v0.18.2 (2016-11-01)
1 =======================================
2
3 No changes since v0.18.2-rc5
4
5
6 Changes in synapse v0.18.2-rc5 (2016-10-28)
7 ===========================================
8
9 Bug fixes:
10
11 * Fix prometheus process metrics in worker processes (PR #1184)
12
13
14 Changes in synapse v0.18.2-rc4 (2016-10-27)
15 ===========================================
16
17 Bug fixes:
18
19 * Fix ``user_threepids`` schema delta, which in some instances prevented
20 startup after upgrade (PR #1183)
21
22
23 Changes in synapse v0.18.2-rc3 (2016-10-27)
24 ===========================================
25
26 Changes:
27
28 * Allow clients to supply access tokens as headers (PR #1098)
29 * Clarify error codes for GET /filter/, thanks to Alexander Maznev (PR #1164)
30 * Make password reset email field case insensitive (PR #1170)
31 * Reduce redundant database work in email pusher (PR #1174)
32 * Allow configurable rate limiting per AS (PR #1175)
33 * Check whether to ratelimit sooner to avoid work (PR #1176)
34 * Standardise prometheus metrics (PR #1177)
35
36
37 Bug fixes:
38
39 * Fix incredibly slow back pagination query (PR #1178)
40 * Fix infinite typing bug (PR #1179)
41
42
43 Changes in synapse v0.18.2-rc2 (2016-10-25)
44 ===========================================
45
46 (This release did not include the changes advertised and was identical to RC1)
47
48
49 Changes in synapse v0.18.2-rc1 (2016-10-17)
50 ===========================================
51
52 Changes:
53
54 * Remove redundant event_auth index (PR #1113)
55 * Reduce DB hits for replication (PR #1141)
56 * Implement pluggable password auth (PR #1155)
57 * Remove rate limiting from app service senders and fix get_or_create_user
58 requester, thanks to Patrik Oldsberg (PR #1157)
59 * window.postmessage for Interactive Auth fallback (PR #1159)
60 * Use sys.executable instead of hardcoded python, thanks to Pedro Larroy
61 (PR #1162)
62 * Add config option for adding additional TLS fingerprints (PR #1167)
63 * User-interactive auth on delete device (PR #1168)
64
65
66 Bug fixes:
67
68 * Fix not being allowed to set your own state_key, thanks to Patrik Oldsberg
69 (PR #1150)
70 * Fix interactive auth to return 401 for incorrect password (PR #1160,
71 #1166)
72 * Fix email push notifs being dropped (PR #1169)
73
74
75
76 Changes in synapse v0.18.1 (2016-10-05)
177 ======================================
278
379 No changes since v0.18.1-rc1
1414
1515 Restart synapse
1616
17 3: Check out synapse-prometheus-config
18 https://github.com/matrix-org/synapse-prometheus-config
17 3: Add a prometheus target for synapse. It needs to set the ``metrics_path``
18 to a non-default value::
1919
20 4: Add ``synapse.html`` and ``synapse.rules``
21 The ``.html`` file needs to appear in prometheus's ``consoles`` directory,
22 and the ``.rules`` file needs to be invoked somewhere in the main config
23 file. A symlink to each from the git checkout into the prometheus directory
24 might be easiest to ensure ``git pull`` keeps it updated.
20   - job_name: "synapse"
21     metrics_path: "/_synapse/metrics"
22     static_configs:
23       - targets:
24           "my.server.here:9092"
2525
26 5: Add a prometheus target for synapse
27 This is easiest if prometheus runs on the same machine as synapse, as it can
28 then just use localhost::
26 Standard Metric Names
27 ---------------------
2928
30 global: {
31 rule_file: "synapse.rules"
32 }
29 As of synapse version 0.18.2, the format of the process-wide metrics has been
30 changed to fit prometheus standard naming conventions. Additionally, the units
31 have been changed from milliseconds to seconds.
3332
34 job: {
35 name: "synapse"
33 ================================== =============================
34 New name                           Old name
35 ---------------------------------- -----------------------------
36 process_cpu_user_seconds_total     process_resource_utime / 1000
37 process_cpu_system_seconds_total   process_resource_stime / 1000
38 process_open_fds (no 'type' label) process_fds
39 ================================== =============================
3640
37 target_group: {
38 target: "http://localhost:9092/"
39 }
40 }
41 The python-specific counts of garbage collector performance have been renamed.
4142
42 6: Start prometheus::
43 =========================== ======================
44 New name                    Old name
45 --------------------------- ----------------------
46 python_gc_time              reactor_gc_time
47 python_gc_unreachable_total reactor_gc_unreachable
48 python_gc_counts            reactor_gc_counts
49 =========================== ======================
4350
44 ./prometheus -config.file=prometheus.conf
51 The twisted-specific reactor metrics have been renamed.
4552
46 7: Wait a few seconds for it to start and perform the first scrape,
47 then visit the console:
48
49 http://server-where-prometheus-runs:9090/consoles/synapse.html
53 ==================================== =====================
54 New name                             Old name
55 ------------------------------------ ---------------------
56 python_twisted_reactor_pending_calls reactor_pending_calls
57 python_twisted_reactor_tick_time     reactor_tick_time
58 ==================================== =====================
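For anyone updating dashboards or alert rules, the renames above can be expressed as a simple lookup table. The helper below is illustrative only, not part of synapse; note that the process CPU metrics also changed units from milliseconds to seconds.

```python
# Illustrative mapping from pre-0.18.2 metric names to the new
# prometheus-style names for synapse 0.18.2.
OLD_TO_NEW = {
    "process_resource_utime": "process_cpu_user_seconds_total",
    "process_resource_stime": "process_cpu_system_seconds_total",
    "process_fds": "process_open_fds",
    "reactor_gc_time": "python_gc_time",
    "reactor_gc_unreachable": "python_gc_unreachable_total",
    "reactor_gc_counts": "python_gc_counts",
    "reactor_tick_time": "python_twisted_reactor_tick_time",
}

def rename_metric(old_name):
    """Return the new metric name, or the input unchanged if not renamed."""
    return OLD_TO_NEW.get(old_name, old_name)

def cpu_ms_to_seconds(old_value_ms):
    """The process CPU metrics changed units from milliseconds to seconds."""
    return old_value_ms / 1000.0
```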
1717 <div class="summarytext">{{ summary_text }}</div>
1818 </td>
1919 <td class="logo">
20 {% if app_name == "Vector" %}
20 {% if app_name == "Riot" %}
21 <img src="http://matrix.org/img/riot-logo-email.png" width="83" height="83" alt="[Riot]"/>
22 {% elif app_name == "Vector" %}
2123 <img src="http://matrix.org/img/vector-logo-email.png" width="64" height="83" alt="[Vector]"/>
2224 {% else %}
2325 <img src="http://matrix.org/img/matrix-120x51.png" width="120" height="51" alt="[matrix]"/>
1515 """ This is a reference implementation of a Matrix home server.
1616 """
1717
18 __version__ = "0.18.1"
18 __version__ = "0.18.2"
602602 """
603603 # Can optionally look elsewhere in the request (e.g. headers)
604604 try:
605 user_id = yield self._get_appservice_user_id(request)
605 user_id, app_service = yield self._get_appservice_user_id(request)
606606 if user_id:
607607 request.authenticated_entity = user_id
608 defer.returnValue(synapse.types.create_requester(user_id))
608 defer.returnValue(
609 synapse.types.create_requester(user_id, app_service=app_service)
610 )
609611
610612 access_token = get_access_token_from_request(
611613 request, self.TOKEN_NOT_FOUND_HTTP_STATUS
643645 request.authenticated_entity = user.to_string()
644646
645647 defer.returnValue(synapse.types.create_requester(
646 user, token_id, is_guest, device_id))
648 user, token_id, is_guest, device_id, app_service=app_service)
649 )
647650 except KeyError:
648651 raise AuthError(
649652 self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.",
652655
653656 @defer.inlineCallbacks
654657 def _get_appservice_user_id(self, request):
655 app_service = yield self.store.get_app_service_by_token(
658 app_service = self.store.get_app_service_by_token(
656659 get_access_token_from_request(
657660 request, self.TOKEN_NOT_FOUND_HTTP_STATUS
658661 )
659662 )
660663 if app_service is None:
661 defer.returnValue(None)
664 defer.returnValue((None, None))
662665
663666 if "user_id" not in request.args:
664 defer.returnValue(app_service.sender)
667 defer.returnValue((app_service.sender, app_service))
665668
666669 user_id = request.args["user_id"][0]
667670 if app_service.sender == user_id:
668 defer.returnValue(app_service.sender)
671 defer.returnValue((app_service.sender, app_service))
669672
670673 if not app_service.is_interested_in_user(user_id):
671674 raise AuthError(
677680 403,
678681 "Application service has not registered this user"
679682 )
680 defer.returnValue(user_id)
683 defer.returnValue((user_id, app_service))
681684
682685 @defer.inlineCallbacks
683686 def get_user_by_access_token(self, token, rights="access"):
854857 }
855858 defer.returnValue(user_info)
856859
857 @defer.inlineCallbacks
858860 def get_appservice_by_req(self, request):
859861 try:
860862 token = get_access_token_from_request(
861863 request, self.TOKEN_NOT_FOUND_HTTP_STATUS
862864 )
863 service = yield self.store.get_app_service_by_token(token)
865 service = self.store.get_app_service_by_token(token)
864866 if not service:
865867 logger.warn("Unrecognised appservice access token: %s" % (token,))
866868 raise AuthError(
869871 errcode=Codes.UNKNOWN_TOKEN
870872 )
871873 request.authenticated_entity = service.sender
872 defer.returnValue(service)
874 return defer.succeed(service)
873875 except KeyError:
874876 raise AuthError(
875877 self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token."
10011003 403,
10021004 "You are not allowed to set others state"
10031005 )
1004 else:
1005 sender_domain = UserID.from_string(
1006 event.user_id
1007 ).domain
1008
1009 if sender_domain != event.state_key:
1010 raise AuthError(
1011 403,
1012 "You are not allowed to set others state"
1013 )
10141006
10151007 return True
10161008
11771169 bool: False if no access_token was given, True otherwise.
11781170 """
11791171 query_params = request.args.get("access_token")
1180 return bool(query_params)
1172 auth_headers = request.requestHeaders.getRawHeaders("Authorization")
1173 return bool(query_params) or bool(auth_headers)
11811174
11821175
11831176 def get_access_token_from_request(request, token_not_found_http_status=401):
11951188 Raises:
11961189 AuthError: If there isn't an access_token in the request.
11971190 """
1191
1192 auth_headers = request.requestHeaders.getRawHeaders("Authorization")
11981193 query_params = request.args.get("access_token")
1199 # Try to get the access_token from the query params.
1200 if not query_params:
1201 raise AuthError(
1202 token_not_found_http_status,
1203 "Missing access token.",
1204 errcode=Codes.MISSING_TOKEN
1205 )
1206
1207 return query_params[0]
1194 if auth_headers:
1195 # Try to get the access_token from an "Authorization: Bearer"
1196 # header
1197 if query_params is not None:
1198 raise AuthError(
1199 token_not_found_http_status,
1200 "Mixing Authorization headers and access_token query parameters.",
1201 errcode=Codes.MISSING_TOKEN,
1202 )
1203 if len(auth_headers) > 1:
1204 raise AuthError(
1205 token_not_found_http_status,
1206 "Too many Authorization headers.",
1207 errcode=Codes.MISSING_TOKEN,
1208 )
1209 parts = auth_headers[0].split(" ")
1210 if parts[0] == "Bearer" and len(parts) == 2:
1211 return parts[1]
1212 else:
1213 raise AuthError(
1214 token_not_found_http_status,
1215 "Invalid Authorization header.",
1216 errcode=Codes.MISSING_TOKEN,
1217 )
1218 else:
1219 # Try to get the access_token from the query params.
1220 if not query_params:
1221 raise AuthError(
1222 token_not_found_http_status,
1223 "Missing access token.",
1224 errcode=Codes.MISSING_TOKEN
1225 )
1226
1227 return query_params[0]
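The token-extraction rules added above can be exercised in isolation. This stand-alone sketch mirrors the same precedence logic on plain lists, with AuthError simplified to ValueError; the function name and inputs are illustrative, not synapse APIs.

```python
def extract_access_token(auth_headers, query_params):
    """Sketch of the extraction rules: auth_headers is the list of
    "Authorization" header values, query_params the list of access_token
    query values. Mixing the two, or sending multiple headers, is rejected."""
    if auth_headers:
        if query_params:
            raise ValueError(
                "Mixing Authorization headers and access_token query parameters."
            )
        if len(auth_headers) > 1:
            raise ValueError("Too many Authorization headers.")
        parts = auth_headers[0].split(" ")
        if parts[0] == "Bearer" and len(parts) == 2:
            return parts[1]
        raise ValueError("Invalid Authorization header.")
    # Fall back to the query parameter.
    if not query_params:
        raise ValueError("Missing access token.")
    return query_params[0]
```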
2222 def __init__(self):
2323 self.message_counts = collections.OrderedDict()
2424
25 def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count):
25 def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count, update=True):
2626 """Can the user send a message?
2727 Args:
2828 user_id: The user sending a message.
3131 second.
3232 burst_count: How many messages the user can send before being
3333 limited.
34 update (bool): Whether to update the message rates or not. This is
35 useful for checking if a message would be allowed to be sent before
36 it is actually sent.
3437 Returns:
3538 A pair of a bool indicating if they can send a message now and a
3639 time in seconds of when they can next send a message.
3740 """
3841 self.prune_message_counts(time_now_s)
39 message_count, time_start, _ignored = self.message_counts.pop(
42 message_count, time_start, _ignored = self.message_counts.get(
4043 user_id, (0., time_now_s, None),
4144 )
4245 time_delta = time_now_s - time_start
5154 allowed = True
5255 message_count += 1
5356
54 self.message_counts[user_id] = (
55 message_count, time_start, msg_rate_hz
56 )
57 if update:
58 self.message_counts[user_id] = (
59 message_count, time_start, msg_rate_hz
60 )
5761
5862 if msg_rate_hz > 0:
5963 time_allowed = (
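The new `update=` flag lets callers peek at whether a message would be allowed without consuming any budget, then commit once the send actually happens. A simplified stand-in (the real Ratelimiter also tracks time windows and rates; this sketch only counts messages against a burst limit):

```python
class TinyLimiter(object):
    """Toy burst limiter illustrating the check-then-commit pattern."""

    def __init__(self, burst_count):
        self.burst_count = burst_count
        self.counts = {}

    def send_message(self, user_id, update=True):
        count = self.counts.get(user_id, 0) + 1
        allowed = count <= self.burst_count
        if update:
            # Only record the send when the caller commits to it.
            self.counts[user_id] = count
        return allowed
```

Calling `send_message(user_id, update=False)` answers "would this be allowed?" without charging the user, which lets expensive work be skipped before the message is actually sent.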
196196 yield start_pusher(user_id, app_id, pushkey)
197197
198198 stream = results.get("events")
199 if stream:
199 if stream and stream["rows"]:
200200 min_stream_id = stream["rows"][0][0]
201201 max_stream_id = stream["position"]
202202 preserve_fn(pusher_pool.on_new_notifications)(
204204 )
205205
206206 stream = results.get("receipts")
207 if stream:
207 if stream and stream["rows"]:
208208 rows = stream["rows"]
209209 affected_room_ids = set(row[1] for row in rows)
210210 min_stream_id = rows[0][0]
2323 import sys
2424 import yaml
2525
26 SYNAPSE = ["python", "-B", "-m", "synapse.app.homeserver"]
26 SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
2727
2828 GREEN = "\x1b[1;32m"
2929 RED = "\x1b[1;31m"
8080 NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
8181
8282 def __init__(self, token, url=None, namespaces=None, hs_token=None,
83 sender=None, id=None, protocols=None):
83 sender=None, id=None, protocols=None, rate_limited=True):
8484 self.token = token
8585 self.url = url
8686 self.hs_token = hs_token
9393 self.protocols = set(protocols)
9494 else:
9595 self.protocols = set()
96
97 self.rate_limited = rate_limited
9698
9799 def _check_namespaces(self, namespaces):
98100 # Sanity check that it is of the form:
233235 def is_exclusive_room(self, room_id):
234236 return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)
235237
238 def is_rate_limited(self):
239 return self.rate_limited
240
236241 def __str__(self):
237242 return "ApplicationService: %s" % (self.__dict__,)
109109 user = UserID(localpart, hostname)
110110 user_id = user.to_string()
111111
112 # Rate limiting for users of this AS is on by default (excludes sender)
113 rate_limited = True
114 if isinstance(as_info.get("rate_limited"), bool):
115 rate_limited = as_info.get("rate_limited")
116
112117 # namespace checks
113118 if not isinstance(as_info.get("namespaces"), dict):
114119 raise KeyError("Requires 'namespaces' object.")
154159 sender=user_id,
155160 id=as_info["id"],
156161 protocols=protocols,
162 rate_limited=rate_limited
157163 )
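The defaulting behaviour of the new `rate_limited` registration option, shown above, is worth spelling out: rate limiting stays on unless the registration file contains a genuine boolean `false`; any non-boolean value is ignored rather than trusted. A minimal sketch of just that rule:

```python
def parse_rate_limited(as_info):
    """Rate limiting for users of an AS is on by default; only a boolean
    "rate_limited" key in the registration file can turn it off."""
    rate_limited = True
    if isinstance(as_info.get("rate_limited"), bool):
        rate_limited = as_info.get("rate_limited")
    return rate_limited
```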
2929 from .cas import CasConfig
3030 from .password import PasswordConfig
3131 from .jwt import JWTConfig
32 from .ldap import LDAPConfig
32 from .password_auth_providers import PasswordAuthProviderConfig
3333 from .emailconfig import EmailConfig
3434 from .workers import WorkerConfig
3535
3838 RatelimitConfig, ContentRepositoryConfig, CaptchaConfig,
3939 VoipConfig, RegistrationConfig, MetricsConfig, ApiConfig,
4040 AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
41 JWTConfig, LDAPConfig, PasswordConfig, EmailConfig,
42 WorkerConfig,):
41 JWTConfig, PasswordConfig, EmailConfig,
42 WorkerConfig, PasswordAuthProviderConfig,):
4343 pass
4444
4545
synapse/config/ldap.py (deleted: 0 additions, 100 deletions)
0 # -*- coding: utf-8 -*-
1 # Copyright 2015 Niklas Riekenbrauck
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from ._base import Config, ConfigError
16
17
18 MISSING_LDAP3 = (
19 "Missing ldap3 library. This is required for LDAP Authentication."
20 )
21
22
23 class LDAPMode(object):
24 SIMPLE = "simple",
25 SEARCH = "search",
26
27 LIST = (SIMPLE, SEARCH)
28
29
30 class LDAPConfig(Config):
31 def read_config(self, config):
32 ldap_config = config.get("ldap_config", {})
33
34 self.ldap_enabled = ldap_config.get("enabled", False)
35
36 if self.ldap_enabled:
37 # verify dependencies are available
38 try:
39 import ldap3
40 ldap3 # to stop unused lint
41 except ImportError:
42 raise ConfigError(MISSING_LDAP3)
43
44 self.ldap_mode = LDAPMode.SIMPLE
45
46 # verify config sanity
47 self.require_keys(ldap_config, [
48 "uri",
49 "base",
50 "attributes",
51 ])
52
53 self.ldap_uri = ldap_config["uri"]
54 self.ldap_start_tls = ldap_config.get("start_tls", False)
55 self.ldap_base = ldap_config["base"]
56 self.ldap_attributes = ldap_config["attributes"]
57
58 if "bind_dn" in ldap_config:
59 self.ldap_mode = LDAPMode.SEARCH
60 self.require_keys(ldap_config, [
61 "bind_dn",
62 "bind_password",
63 ])
64
65 self.ldap_bind_dn = ldap_config["bind_dn"]
66 self.ldap_bind_password = ldap_config["bind_password"]
67 self.ldap_filter = ldap_config.get("filter", None)
68
69 # verify attribute lookup
70 self.require_keys(ldap_config['attributes'], [
71 "uid",
72 "name",
73 "mail",
74 ])
75
76 def require_keys(self, config, required):
77 missing = [key for key in required if key not in config]
78 if missing:
79 raise ConfigError(
80 "LDAP enabled but missing required config values: {}".format(
81 ", ".join(missing)
82 )
83 )
84
85 def default_config(self, **kwargs):
86 return """\
87 # ldap_config:
88 # enabled: true
89 # uri: "ldap://ldap.example.com:389"
90 # start_tls: true
91 # base: "ou=users,dc=example,dc=com"
92 # attributes:
93 # uid: "cn"
94 # mail: "email"
95 # name: "givenName"
96 # #bind_dn:
97 # #bind_password:
98 # #filter: "(objectClass=posixAccount)"
99 """
0 # -*- coding: utf-8 -*-
1 # Copyright 2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from ._base import Config
16
17 import importlib
18
19
20 class PasswordAuthProviderConfig(Config):
21 def read_config(self, config):
22 self.password_providers = []
23
24 # We want to be backwards compatible with the old `ldap_config`
25 # param.
26 ldap_config = config.get("ldap_config", {})
27 self.ldap_enabled = ldap_config.get("enabled", False)
28 if self.ldap_enabled:
29 from synapse.util.ldap_auth_provider import LdapAuthProvider
30 parsed_config = LdapAuthProvider.parse_config(ldap_config)
31 self.password_providers.append((LdapAuthProvider, parsed_config))
32
33 providers = config.get("password_providers", [])
34 for provider in providers:
35 # We need to import the module, and then pick the class out of
36 # that, so we split based on the last dot.
37 module, clz = provider['module'].rsplit(".", 1)
38 module = importlib.import_module(module)
39 provider_class = getattr(module, clz)
40
41 provider_config = provider_class.parse_config(provider["config"])
42 self.password_providers.append((provider_class, provider_config))
43
44 def default_config(self, **kwargs):
45 return """\
46 # password_providers:
47 # - module: "synapse.util.ldap_auth_provider.LdapAuthProvider"
48 # config:
49 # enabled: true
50 # uri: "ldap://ldap.example.com:389"
51 # start_tls: true
52 # base: "ou=users,dc=example,dc=com"
53 # attributes:
54 # uid: "cn"
55 # mail: "email"
56 # name: "givenName"
57 # #bind_dn:
58 # #bind_password:
59 # #filter: "(objectClass=posixAccount)"
60 """
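The config loader above implies the interface a pluggable password provider must offer: a `parse_config()` classmethod, a constructor taking `config` and `account_handler`, and a `check_password()` method (which the auth handler calls; in synapse proper it may return a Deferred). A minimal, hypothetical provider, purely for illustration:

```python
class DummyPasswordProvider(object):
    """Hypothetical provider sketch; checks passwords against a static
    {user_id: password} dict supplied in its config block."""

    def __init__(self, config, account_handler):
        self.users = config
        self.account_handler = account_handler

    @classmethod
    def parse_config(cls, config):
        # Validate/normalise the raw dict from the YAML config here.
        return dict(config or {})

    def check_password(self, user_id, password):
        return self.users.get(user_id) == password
```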
1717 from OpenSSL import crypto
1818 import subprocess
1919 import os
20
21 from hashlib import sha256
22 from unpaddedbase64 import encode_base64
2023
2124 GENERATE_DH_PARAMS = False
2225
4043 self.tls_dh_params_path = self.check_file(
4144 config.get("tls_dh_params_path"), "tls_dh_params"
4245 )
46
47 self.tls_fingerprints = config["tls_fingerprints"]
48
49 # Check that our own certificate is included in the list of fingerprints
50 # and include it if it is not.
51 x509_certificate_bytes = crypto.dump_certificate(
52 crypto.FILETYPE_ASN1,
53 self.tls_certificate
54 )
55 sha256_fingerprint = encode_base64(sha256(x509_certificate_bytes).digest())
56 sha256_fingerprints = set(f["sha256"] for f in self.tls_fingerprints)
57 if sha256_fingerprint not in sha256_fingerprints:
58 self.tls_fingerprints.append({u"sha256": sha256_fingerprint})
4359
4460 # This config option applies to non-federation HTTP clients
4561 # (e.g. for talking to recaptcha, identity servers, and such)
7288
7389 # Don't bind to the https port
7490 no_tls: False
91
92 # List of allowed TLS fingerprints for this server to publish along
93 # with the signing keys for this server. Other matrix servers that
94 # make HTTPS requests to this server will check that the TLS
95 # certificates returned by this server match one of the fingerprints.
96 #
97 # Synapse automatically adds the fingerprint of its own certificate
98 # to the list. So if federation traffic is handled directly by synapse
99 # then no modification to the list is required.
100 #
101 # If synapse is run behind a load balancer that handles the TLS then it
102 # will be necessary to add the fingerprints of the certificates used by
103 # the loadbalancers to this list if they are different to the one
104 # synapse is using.
105 #
106 # Homeservers are permitted to cache the list of TLS fingerprints
107 # returned in the key responses up to the "valid_until_ts" returned
108 # in the key response. It may be necessary to publish the fingerprints
109 # of a new certificate and wait until the "valid_until_ts" of the
110 # previous key responses have passed before deploying it.
111 tls_fingerprints: []
112 # tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]
75113 """ % locals()
76114
77115 def read_tls_certificate(self, cert_path):
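The fingerprint format used above is the unpadded base64 encoding of the SHA-256 digest of the DER (FILETYPE_ASN1) certificate bytes. A stdlib-only sketch, assuming (as the code above does) that `unpaddedbase64.encode_base64` is standard base64 with the "=" padding stripped:

```python
import base64
from hashlib import sha256

def tls_fingerprint(der_cert_bytes):
    """Compute a synapse-style sha256 fingerprint for DER certificate bytes:
    unpadded base64 of the SHA-256 digest."""
    digest = sha256(der_cert_bytes).digest()
    return base64.b64encode(digest).rstrip(b"=").decode("ascii")
```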
5454
5555 def ratelimit(self, requester):
5656 time_now = self.clock.time()
57 user_id = requester.user.to_string()
58
59 # The AS user itself is never rate limited.
60 app_service = self.store.get_app_service_by_user_id(user_id)
61 if app_service is not None:
62 return # do not ratelimit app service senders
63
64 # Disable rate limiting of users belonging to any AS that is configured
65 # not to be rate limited in its registration file (rate_limited: true|false).
66 if requester.app_service and not requester.app_service.is_rate_limited():
67 return
68
5769 allowed, time_allowed = self.ratelimiter.send_message(
58 requester.user.to_string(), time_now,
70 user_id, time_now,
5971 msg_rate_hz=self.hs.config.rc_messages_per_second,
6072 burst_count=self.hs.config.rc_message_burst_count,
6173 )
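The handler change above adds two separate exemptions: the AS sender user itself is never rate limited, and users of an AS registered with `rate_limited: false` are exempt as well. A sketch of just the decision, with the store lookup modelled as a plain dict and the app service as a minimal stand-in class:

```python
class StubAppService(object):
    """Stand-in for an ApplicationService with a rate_limited flag."""

    def __init__(self, rate_limited=True):
        self.rate_limited = rate_limited

    def is_rate_limited(self):
        return self.rate_limited

def should_ratelimit(user_id, as_by_user, requester_app_service):
    # The AS user itself is never rate limited.
    if as_by_user.get(user_id) is not None:
        return False
    # An AS can opt its users out via rate_limited: false.
    if requester_app_service is not None and not requester_app_service.is_rate_limited():
        return False
    return True
```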
5858 Args:
5959 current_id(int): The current maximum ID.
6060 """
61 services = yield self.store.get_app_services()
61 services = self.store.get_app_services()
6262 if not services or not self.notify_appservices:
6363 return
6464
141141 association can be found.
142142 """
143143 room_alias_str = room_alias.to_string()
144 services = yield self.store.get_app_services()
144 services = self.store.get_app_services()
145145 alias_query_services = [
146146 s for s in services if (
147147 s.is_interested_in_alias(room_alias_str)
176176
177177 @defer.inlineCallbacks
178178 def get_3pe_protocols(self, only_protocol=None):
179 services = yield self.store.get_app_services()
179 services = self.store.get_app_services()
180180 protocols = {}
181181
182182 # Collect up all the individual protocol responses out of the ASes
223223 list<ApplicationService>: A list of services interested in this
224224 event based on the service regex.
225225 """
226 services = yield self.store.get_app_services()
226 services = self.store.get_app_services()
227227 interested_list = [
228228 s for s in services if (
229229 yield s.is_interested(event, self.store)
231231 ]
232232 defer.returnValue(interested_list)
233233
234 @defer.inlineCallbacks
235234 def _get_services_for_user(self, user_id):
236 services = yield self.store.get_app_services()
235 services = self.store.get_app_services()
237236 interested_list = [
238237 s for s in services if (
239238 s.is_interested_in_user(user_id)
240239 )
241240 ]
242 defer.returnValue(interested_list)
243
244 @defer.inlineCallbacks
241 return defer.succeed(interested_list)
242
245243 def _get_services_for_3pn(self, protocol):
246 services = yield self.store.get_app_services()
244 services = self.store.get_app_services()
247245 interested_list = [
248246 s for s in services if s.is_interested_in_protocol(protocol)
249247 ]
250 defer.returnValue(interested_list)
248 return defer.succeed(interested_list)
251249
252250 @defer.inlineCallbacks
253251 def _is_unknown_user(self, user_id):
263261 return
264262
265263 # user not found; could be the AS though, so check.
266 services = yield self.store.get_app_services()
264 services = self.store.get_app_services()
267265 service_list = [s for s in services if s.sender == user_id]
268266 defer.returnValue(len(service_list) == 0)
269267
1919 from synapse.types import UserID
2020 from synapse.api.errors import AuthError, LoginError, Codes, StoreError, SynapseError
2121 from synapse.util.async import run_on_reactor
22 from synapse.config.ldap import LDAPMode
2322
2423 from twisted.web.client import PartialDownloadError
2524
2726 import bcrypt
2827 import pymacaroons
2928 import simplejson
30
31 try:
32 import ldap3
33 import ldap3.core.exceptions
34 except ImportError:
35 ldap3 = None
36 pass
3729
3830 import synapse.util.stringutils as stringutils
3931
5850 }
5951 self.bcrypt_rounds = hs.config.bcrypt_rounds
6052 self.sessions = {}
61 self.INVALID_TOKEN_HTTP_STATUS = 401
62
63 self.ldap_enabled = hs.config.ldap_enabled
64 if self.ldap_enabled:
65 if not ldap3:
66 raise RuntimeError(
67 'Missing ldap3 library. This is required for LDAP Authentication.'
68 )
69 self.ldap_mode = hs.config.ldap_mode
70 self.ldap_uri = hs.config.ldap_uri
71 self.ldap_start_tls = hs.config.ldap_start_tls
72 self.ldap_base = hs.config.ldap_base
73 self.ldap_attributes = hs.config.ldap_attributes
74 if self.ldap_mode == LDAPMode.SEARCH:
75 self.ldap_bind_dn = hs.config.ldap_bind_dn
76 self.ldap_bind_password = hs.config.ldap_bind_password
77 self.ldap_filter = hs.config.ldap_filter
53
54 account_handler = _AccountHandler(
55 hs, check_user_exists=self.check_user_exists
56 )
57
58 self.password_providers = [
59 module(config=config, account_handler=account_handler)
60 for module, config in hs.config.password_providers
61 ]
7862
7963 self.hs = hs # FIXME better possibility to access registrationHandler later?
8064 self.device_handler = hs.get_device_handler()
148132 creds = session['creds']
149133
150134 # check auth type currently being presented
135 errordict = {}
151136 if 'type' in authdict:
152 if authdict['type'] not in self.checkers:
137 login_type = authdict['type']
138 if login_type not in self.checkers:
153139 raise LoginError(400, "", Codes.UNRECOGNIZED)
154 result = yield self.checkers[authdict['type']](authdict, clientip)
155 if result:
156 creds[authdict['type']] = result
157 self._save_session(session)
140 try:
141 result = yield self.checkers[login_type](authdict, clientip)
142 if result:
143 creds[login_type] = result
144 self._save_session(session)
145 except LoginError as e:
146 if login_type == LoginType.EMAIL_IDENTITY:
147 # riot used to have a bug where it would request a new
148 # validation token (thus sending a new email) each time it
149 # got a 401 with a 'flows' field.
150 # (https://github.com/vector-im/vector-web/issues/2447).
151 #
152 # Grandfather in the old behaviour for now to avoid
153 # breaking old riot deployments.
154 raise e
155
156 # this step failed. Merge the error dict into the response
157 # so that the client can have another go.
158 errordict = e.error_dict()
158159
159160 for f in flows:
160161 if len(set(f) - set(creds.keys())) == 0:
163164
164165 ret = self._auth_dict_for_flows(flows, session)
165166 ret['completed'] = creds.keys()
167 ret.update(errordict)
166168 defer.returnValue((False, ret, clientdict, session['id']))
167169
168170 @defer.inlineCallbacks
430432 defer.Deferred: (str) canonical_user_id, or None if zero or
431433 multiple matches
432434 """
433 try:
434 res = yield self._find_user_id_and_pwd_hash(user_id)
435 res = yield self._find_user_id_and_pwd_hash(user_id)
436 if res is not None:
435437 defer.returnValue(res[0])
436 except LoginError:
437 defer.returnValue(None)
438 defer.returnValue(None)
438439
439440 @defer.inlineCallbacks
440441 def _find_user_id_and_pwd_hash(self, user_id):
441442 """Checks to see if a user with the given id exists. Will check case
442 insensitively, but will throw if there are multiple inexact matches.
443 insensitively, but will return None if there are multiple inexact
444 matches.
443445
444446 Returns:
445447 tuple: A 2-tuple of `(canonical_user_id, password_hash)`
448 None: if there is not exactly one match
446449 """
447450 user_infos = yield self.store.get_users_by_id_case_insensitive(user_id)
451
452 result = None
448453 if not user_infos:
449454 logger.warn("Attempted to login as %s but they do not exist", user_id)
450 raise LoginError(403, "", errcode=Codes.FORBIDDEN)
451
452 if len(user_infos) > 1:
453 if user_id not in user_infos:
454 logger.warn(
455 "Attempted to login as %s but it matches more than one user "
456 "inexactly: %r",
457 user_id, user_infos.keys()
458 )
459 raise LoginError(403, "", errcode=Codes.FORBIDDEN)
460
461 defer.returnValue((user_id, user_infos[user_id]))
455 elif len(user_infos) == 1:
456 # a single match (possibly not exact)
457 result = user_infos.popitem()
458 elif user_id in user_infos:
459 # multiple matches, but one is exact
460 result = (user_id, user_infos[user_id])
462461 else:
463 defer.returnValue(user_infos.popitem())
462 # multiple matches, none of them exact
463 logger.warn(
464 "Attempted to login as %s but it matches more than one user "
465 "inexactly: %r",
466 user_id, user_infos.keys()
467 )
468 defer.returnValue(result)
464469
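The matching policy implemented above reduces to three cases over the case-insensitive lookup results: a single match (possibly inexact) wins, an exact match wins among several, and multiple inexact matches are rejected. A sketch on a plain `{user_id: password_hash}` dict (the function name is illustrative):

```python
def pick_user(requested_user_id, user_infos):
    """Return (canonical_user_id, password_hash), or None if there is not
    exactly one acceptable match."""
    if not user_infos:
        return None
    if len(user_infos) == 1:
        # a single match (possibly not exact)
        return list(user_infos.items())[0]
    if requested_user_id in user_infos:
        # multiple matches, but one is exact
        return (requested_user_id, user_infos[requested_user_id])
    # multiple matches, none of them exact
    return None
```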
465470 @defer.inlineCallbacks
466471 def _check_password(self, user_id, password):
474479 Returns:
475480 (str) the canonical_user_id
476481 Raises:
477 LoginError if the password was incorrect
478 """
479 valid_ldap = yield self._check_ldap_password(user_id, password)
480 if valid_ldap:
481 defer.returnValue(user_id)
482
483 result = yield self._check_local_password(user_id, password)
484 defer.returnValue(result)
482 LoginError if login fails
483 """
484 for provider in self.password_providers:
485 is_valid = yield provider.check_password(user_id, password)
486 if is_valid:
487 defer.returnValue(user_id)
488
489 canonical_user_id = yield self._check_local_password(user_id, password)
490
491 if canonical_user_id:
492 defer.returnValue(canonical_user_id)
493
494 # unknown username or invalid password. We raise a 403 here, but note
495 # that if we're doing user-interactive login, it turns all LoginErrors
496 # into a 401 anyway.
497 raise LoginError(
498 403, "Invalid password",
499 errcode=Codes.FORBIDDEN
500 )
485501
486502 @defer.inlineCallbacks
487503 def _check_local_password(self, user_id, password):
488504 """Authenticate a user against the local password database.
489505
490 user_id is checked case insensitively, but will throw if there are
506 user_id is checked case insensitively, but will return None if there are
491507 multiple inexact matches.
492508
493509 Args:
494510 user_id (str): complete @user:id
495511 Returns:
496 (str) the canonical_user_id
497 Raises:
498 LoginError if the password was incorrect
499 """
500 user_id, password_hash = yield self._find_user_id_and_pwd_hash(user_id)
512 (str) the canonical_user_id, or None if unknown user / bad password
513 """
514 lookupres = yield self._find_user_id_and_pwd_hash(user_id)
515 if not lookupres:
516 defer.returnValue(None)
517 (user_id, password_hash) = lookupres
501518 result = self.validate_hash(password, password_hash)
502519 if not result:
503520 logger.warn("Failed password login for user %s", user_id)
504 raise LoginError(403, "", errcode=Codes.FORBIDDEN)
521 defer.returnValue(None)
505522 defer.returnValue(user_id)
506
507 def _ldap_simple_bind(self, server, localpart, password):
508 """ Attempt a simple bind with the credentials
509 given by the user against the LDAP server.
510
511 Returns True, LDAP3Connection
512 if the bind was successful
513 Returns False, None
514 if an error occured
515 """
516
517 try:
518 # bind with the the local users ldap credentials
519 bind_dn = "{prop}={value},{base}".format(
520 prop=self.ldap_attributes['uid'],
521 value=localpart,
522 base=self.ldap_base
523 )
524 conn = ldap3.Connection(server, bind_dn, password)
525 logger.debug(
526 "Established LDAP connection in simple bind mode: %s",
527 conn
528 )
529
530 if self.ldap_start_tls:
531 conn.start_tls()
532 logger.debug(
533 "Upgraded LDAP connection in simple bind mode through StartTLS: %s",
534 conn
535 )
536
537 if conn.bind():
538 # GOOD: bind okay
539 logger.debug("LDAP Bind successful in simple bind mode.")
540 return True, conn
541
542 # BAD: bind failed
543 logger.info(
544 "Binding against LDAP failed for '%s' failed: %s",
545 localpart, conn.result['description']
546 )
547 conn.unbind()
548 return False, None
549
550 except ldap3.core.exceptions.LDAPException as e:
551 logger.warn("Error during LDAP authentication: %s", e)
552 return False, None
553
554 def _ldap_authenticated_search(self, server, localpart, password):
555 """ Attempt to login with the preconfigured bind_dn
556 and then continue searching and filtering within
557 the base_dn
558
559 Returns (True, LDAP3Connection)
560 if a single matching DN within the base was found
561 that matched the filter expression, and with which
562 a successful bind was achieved
563
564 The LDAP3Connection returned is the instance that was used to
565 verify the password not the one using the configured bind_dn.
566 Returns (False, None)
567 if an error occured
568 """
569
570 try:
571 conn = ldap3.Connection(
572 server,
573 self.ldap_bind_dn,
574 self.ldap_bind_password
575 )
576 logger.debug(
577 "Established LDAP connection in search mode: %s",
578 conn
579 )
580
581 if self.ldap_start_tls:
582 conn.start_tls()
583 logger.debug(
584 "Upgraded LDAP connection in search mode through StartTLS: %s",
585 conn
586 )
587
588 if not conn.bind():
589 logger.warn(
590 "Binding against LDAP with `bind_dn` failed: %s",
591 conn.result['description']
592 )
593 conn.unbind()
594 return False, None
595
596 # construct search_filter like (uid=localpart)
597 query = "({prop}={value})".format(
598 prop=self.ldap_attributes['uid'],
599 value=localpart
600 )
601 if self.ldap_filter:
602 # combine with the AND expression
603 query = "(&{query}{filter})".format(
604 query=query,
605 filter=self.ldap_filter
606 )
607 logger.debug(
608 "LDAP search filter: %s",
609 query
610 )
611 conn.search(
612 search_base=self.ldap_base,
613 search_filter=query
614 )
615
616 if len(conn.response) == 1:
617 # GOOD: found exactly one result
618 user_dn = conn.response[0]['dn']
619 logger.debug('LDAP search found dn: %s', user_dn)
620
621 # unbind and simple bind with user_dn to verify the password
622 # Note: do not use rebind(), for some reason it did not verify
623 # the password for me!
624 conn.unbind()
625 return self._ldap_simple_bind(server, localpart, password)
626 else:
627 # BAD: found 0 or > 1 results, abort!
628 if len(conn.response) == 0:
629 logger.info(
630 "LDAP search returned no results for '%s'",
631 localpart
632 )
633 else:
634 logger.info(
635 "LDAP search returned too many (%s) results for '%s'",
636 len(conn.response), localpart
637 )
638 conn.unbind()
639 return False, None
640
641 except ldap3.core.exceptions.LDAPException as e:
642 logger.warn("Error during LDAP authentication: %s", e)
643 return False, None
644
645 @defer.inlineCallbacks
646 def _check_ldap_password(self, user_id, password):
647 """ Attempt to authenticate a user against an LDAP Server
648 and register an account if none exists.
649
650 Returns:
651 True if authentication against LDAP was successful
652 """
653
654 if not ldap3 or not self.ldap_enabled:
655 defer.returnValue(False)
656
657 localpart = UserID.from_string(user_id).localpart
658
659 try:
660 server = ldap3.Server(self.ldap_uri)
661 logger.debug(
662 "Attempting LDAP connection with %s",
663 self.ldap_uri
664 )
665
666 if self.ldap_mode == LDAPMode.SIMPLE:
667 result, conn = self._ldap_simple_bind(
668 server=server, localpart=localpart, password=password
669 )
670 logger.debug(
671 'LDAP authentication method simple bind returned: %s (conn: %s)',
672 result,
673 conn
674 )
675 if not result:
676 defer.returnValue(False)
677 elif self.ldap_mode == LDAPMode.SEARCH:
678 result, conn = self._ldap_authenticated_search(
679 server=server, localpart=localpart, password=password
680 )
681 logger.debug(
682 'LDAP auth method authenticated search returned: %s (conn: %s)',
683 result,
684 conn
685 )
686 if not result:
687 defer.returnValue(False)
688 else:
689 raise RuntimeError(
690 'Invalid LDAP mode specified: {mode}'.format(
691 mode=self.ldap_mode
692 )
693 )
694
695 try:
696 logger.info(
697 "User authenticated against LDAP server: %s",
698 conn
699 )
700 except NameError:
701 logger.warn("Authentication method yielded no LDAP connection, aborting!")
702 defer.returnValue(False)
703
704 # check if user with user_id exists
705 if (yield self.check_user_exists(user_id)):
706 # exists, authentication complete
707 conn.unbind()
708 defer.returnValue(True)
709
710 else:
711 # does not exist, fetch metadata for account creation from
712 # existing ldap connection
713 query = "({prop}={value})".format(
714 prop=self.ldap_attributes['uid'],
715 value=localpart
716 )
717
718 if self.ldap_mode == LDAPMode.SEARCH and self.ldap_filter:
719 query = "(&{filter}{user_filter})".format(
720 filter=query,
721 user_filter=self.ldap_filter
722 )
723 logger.debug(
724 "ldap registration filter: %s",
725 query
726 )
727
728 conn.search(
729 search_base=self.ldap_base,
730 search_filter=query,
731 attributes=[
732 self.ldap_attributes['name'],
733 self.ldap_attributes['mail']
734 ]
735 )
736
737 if len(conn.response) == 1:
738 attrs = conn.response[0]['attributes']
739 mail = attrs[self.ldap_attributes['mail']][0]
740 name = attrs[self.ldap_attributes['name']][0]
741
742 # create account
743 registration_handler = self.hs.get_handlers().registration_handler
744 user_id, access_token = (
745 yield registration_handler.register(localpart=localpart)
746 )
747
748 # TODO: bind email, set displayname with data from ldap directory
749
750 logger.info(
751 "Registration based on LDAP data was successful: %d: %s (%s, %)",
752 user_id,
753 localpart,
754 name,
755 mail
756 )
757
758 defer.returnValue(True)
759 else:
760 if len(conn.response) == 0:
761 logger.warn("LDAP registration failed, no result.")
762 else:
763 logger.warn(
764 "LDAP registration failed, too many results (%s)",
765 len(conn.response)
766 )
767
768 defer.returnValue(False)
769
770 defer.returnValue(False)
771
772 except ldap3.core.exceptions.LDAPException as e:
773 logger.warn("Error during ldap authentication: %s", e)
774 defer.returnValue(False)
775523
776524 @defer.inlineCallbacks
777525 def issue_access_token(self, user_id, device_id=None):
862610
863611 @defer.inlineCallbacks
864612 def add_threepid(self, user_id, medium, address, validated_at):
613 # 'Canonicalise' email addresses down to lower case.
614 # We're now moving towards the Home Server being the entity that
615 # is responsible for validating threepids used for resetting passwords
616 # on accounts, so in future Synapse will gain knowledge of specific
617 # types (mediums) of threepid. For now, we still use the existing
618 # infrastructure, but this is the start of synapse gaining knowledge
619 # of specific types of threepid (and fixes the fact that checking
620 # for the presence of an email address during password reset was
621 # case sensitive).
622 if medium == 'email':
623 address = address.lower()
624
865625 yield self.store.user_add_threepid(
866626 user_id, medium, address, validated_at,
867627 self.hs.get_clock().time_msec()
910670 stored_hash.encode('utf-8')) == stored_hash
911671 else:
912672 return False
673
674
675 class _AccountHandler(object):
676 """A proxy object that gets passed to password auth providers so they
677 can register new users etc if necessary.
678 """
679 def __init__(self, hs, check_user_exists):
680 self.hs = hs
681
682 self._check_user_exists = check_user_exists
683
684 def check_user_exists(self, user_id):
685 """Check if user exissts.
686
687 Returns:
688 Deferred(bool)
689 """
690 return self._check_user_exists(user_id)
691
692 def register(self, localpart):
693 """Registers a new user with given localpart
694
695 Returns:
696 Deferred: a 2-tuple of (user_id, access_token)
697 """
698 reg = self.hs.get_handlers().registration_handler
699 return reg.register(localpart=localpart)
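The `_AccountHandler` proxy above gives password auth providers just enough surface to check for and register accounts. An illustrative sketch of how a provider could use it to auto-provision on first successful login (`AccountHandlerStub` and `LdapLikeProvider` are hypothetical stand-ins, and everything here is synchronous where the real API returns Deferreds):

```python
# Hypothetical provider using an _AccountHandler-like proxy to create
# an account on first login. All names here are illustrative.

class AccountHandlerStub:
    """Dict-backed stand-in for the _AccountHandler proxy."""
    def __init__(self):
        self.users = {}

    def check_user_exists(self, user_id):
        return user_id in self.users

    def register(self, localpart):
        user_id = "@%s:example.com" % localpart
        access_token = "token_for_%s" % localpart
        self.users[user_id] = access_token
        return user_id, access_token

class LdapLikeProvider:
    def __init__(self, account_handler):
        self.account_handler = account_handler

    def check_password(self, user_id, password):
        if password != "correct":  # pretend this is a directory bind
            return False
        # Auto-provision the account the first time this user logs in.
        if not self.account_handler.check_user_exists(user_id):
            localpart = user_id[1:].split(":", 1)[0]
            self.account_handler.register(localpart)
        return True

handler = AccountHandlerStub()
provider = LdapLikeProvider(handler)
```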
287287 result = yield as_handler.query_room_alias_exists(room_alias)
288288 defer.returnValue(result)
289289
290 @defer.inlineCallbacks
291290 def can_modify_alias(self, alias, user_id=None):
292291 # Any application service "interested" in an alias they are regexing on
293292 # can modify the alias.
294293 # Users can only modify the alias if ALL the interested services have
295294 # non-exclusive locks on the alias (or there are no interested services)
296 services = yield self.store.get_app_services()
295 services = self.store.get_app_services()
297296 interested_services = [
298297 s for s in services if s.is_interested_in_alias(alias.to_string())
299298 ]
301300 for service in interested_services:
302301 if user_id == service.sender:
303302 # this user IS the app service so they can do whatever they like
304 defer.returnValue(True)
305 return
303 return defer.succeed(True)
306304 elif service.is_exclusive_alias(alias.to_string()):
307305 # another service has an exclusive lock on this alias.
308 defer.returnValue(False)
309 return
306 return defer.succeed(False)
310307 # either no interested services, or no service with an exclusive lock
311 defer.returnValue(True)
308 return defer.succeed(True)
312309
313310 @defer.inlineCallbacks
314311 def _user_can_delete_alias(self, alias, user_id):
1515 from twisted.internet import defer
1616
1717 from synapse.api.constants import EventTypes, Membership
18 from synapse.api.errors import AuthError, Codes, SynapseError
18 from synapse.api.errors import AuthError, Codes, SynapseError, LimitExceededError
1919 from synapse.crypto.event_signing import add_hashes_and_signatures
2020 from synapse.events.utils import serialize_event
2121 from synapse.events.validator import EventValidator
8181 room_token = pagin_config.from_token.room_key
8282 else:
8383 pagin_config.from_token = (
84 yield self.hs.get_event_sources().get_current_token(
85 direction='b'
84 yield self.hs.get_event_sources().get_current_token_for_room(
85 room_id=room_id
8686 )
8787 )
8888 room_token = pagin_config.from_token.room_key
236236 raise SynapseError(
237237 500,
238238 "Tried to send member event through non-member codepath"
239 )
240
241 # We check here if we are currently being rate limited, so that we
242 # don't do unnecessary work. We check again just before we actually
243 # send the event.
244 time_now = self.clock.time()
245 allowed, time_allowed = self.ratelimiter.send_message(
246 event.sender, time_now,
247 msg_rate_hz=self.hs.config.rc_messages_per_second,
248 burst_count=self.hs.config.rc_message_burst_count,
249 update=False,
250 )
251 if not allowed:
252 raise LimitExceededError(
253 retry_after_ms=int(1000 * (time_allowed - time_now)),
239254 )
240255
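The pre-send check above passes `update=False` so a request that is going to be rate limited is rejected before any expensive work, while only the later check (just before the event is actually sent) consumes quota. A minimal token-bucket sketch of that two-phase pattern (a stand-in for synapse's `Ratelimiter`, which differs in detail):

```python
# Two-phase ratelimit sketch: peek with update=False, commit later.
# This token bucket is an illustrative stand-in, not synapse's code.

class Bucket:
    def __init__(self, rate_hz, burst_count):
        self.rate_hz = rate_hz
        self.burst_count = burst_count
        self.tokens = float(burst_count)
        self.last = 0.0

    def send_message(self, now, update=True):
        # Refill tokens for the elapsed time, capped at the burst size.
        tokens = min(self.burst_count,
                     self.tokens + (now - self.last) * self.rate_hz)
        if tokens < 1:
            # Not allowed: report when the next message would be.
            time_allowed = now + (1 - tokens) / self.rate_hz
            return False, time_allowed
        if update:
            self.tokens = tokens - 1
            self.last = now
        return True, now

b = Bucket(rate_hz=1.0, burst_count=2)
```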
241256 user = UserID.from_string(event.sender)
6464 defer.returnValue(result["displayname"])
6565
6666 @defer.inlineCallbacks
67 def set_displayname(self, target_user, requester, new_displayname):
67 def set_displayname(self, target_user, requester, new_displayname, by_admin=False):
6868 """target_user is the user whose displayname is to be changed;
6969 auth_user is the user attempting to make this change."""
7070 if not self.hs.is_mine(target_user):
7171 raise SynapseError(400, "User is not hosted on this Home Server")
7272
73 if target_user != requester.user:
73 if not by_admin and target_user != requester.user:
7474 raise AuthError(400, "Cannot set another user's displayname")
7575
7676 if new_displayname == '':
110110 defer.returnValue(result["avatar_url"])
111111
112112 @defer.inlineCallbacks
113 def set_avatar_url(self, target_user, requester, new_avatar_url):
113 def set_avatar_url(self, target_user, requester, new_avatar_url, by_admin=False):
114114 """target_user is the user whose avatar_url is to be changed;
115115 auth_user is the user attempting to make this change."""
116116 if not self.hs.is_mine(target_user):
117117 raise SynapseError(400, "User is not hosted on this Home Server")
118118
119 if target_user != requester.user:
119 if not by_admin and target_user != requester.user:
120120 raise AuthError(400, "Cannot set another user's avatar_url")
121121
122122 yield self.store.set_profile_avatar_url(
1818
1919 from twisted.internet import defer
2020
21 import synapse.types
2221 from synapse.api.errors import (
2322 AuthError, Codes, SynapseError, RegistrationError, InvalidCaptchaError
2423 )
193192 def appservice_register(self, user_localpart, as_token):
194193 user = UserID(user_localpart, self.hs.hostname)
195194 user_id = user.to_string()
196 service = yield self.store.get_app_service_by_token(as_token)
195 service = self.store.get_app_service_by_token(as_token)
197196 if not service:
198197 raise AuthError(403, "Invalid application service token.")
199198 if not service.is_interested_in_user(user_id):
304303 # XXX: This should be a deferred list, shouldn't it?
305304 yield identity_handler.bind_threepid(c, user_id)
306305
307 @defer.inlineCallbacks
308306 def check_user_id_not_appservice_exclusive(self, user_id, allowed_appservice=None):
309307 # valid user IDs must not clash with any user ID namespaces claimed by
310308 # application services.
311 services = yield self.store.get_app_services()
309 services = self.store.get_app_services()
312310 interested_services = [
313311 s for s in services
314312 if s.is_interested_in_user(user_id)
370368 defer.returnValue(data)
371369
372370 @defer.inlineCallbacks
373 def get_or_create_user(self, localpart, displayname, duration_in_ms,
371 def get_or_create_user(self, requester, localpart, displayname, duration_in_ms,
374372 password_hash=None):
375373 """Creates a new user if the user does not exist,
376374 else revokes all previous access tokens and generates a new one.
417415 if displayname is not None:
418416 logger.info("setting user display name: %s -> %s", user_id, displayname)
419417 profile_handler = self.hs.get_handlers().profile_handler
420 requester = synapse.types.create_requester(user)
421418 yield profile_handler.set_displayname(
422 user, requester, displayname
419 user, requester, displayname, by_admin=True,
423420 )
424421
425422 defer.returnValue((user_id, token))
436436 logger.warn("Stream has topological part!!!! %r", from_key)
437437 from_key = "s%s" % (from_token.stream,)
438438
439 app_service = yield self.store.get_app_service_by_user_id(
439 app_service = self.store.get_app_service_by_user_id(
440440 user.to_string()
441441 )
442442 if app_service:
474474
475475 defer.returnValue((events, end_key))
476476
477 def get_current_key(self, direction='f'):
478 return self.store.get_room_events_max_id(direction)
477 def get_current_key(self):
478 return self.store.get_room_events_max_id()
479
480 def get_current_key_for_room(self, room_id):
481 return self.store.get_room_events_max_id(room_id)
479482
480483 @defer.inlineCallbacks
481484 def get_pagination_rows(self, user, config, key):
787787
788788 assert since_token
789789
790 app_service = yield self.store.get_app_service_by_user_id(user_id)
790 app_service = self.store.get_app_service_by_user_id(user_id)
791791 if app_service:
792792 rooms = yield self.store.get_app_service_rooms(app_service)
793793 joined_room_ids = set(r.room_id for r in rooms)
8787 continue
8888
8989 until = self._member_typing_until.get(member, None)
90 if not until or until < now:
90 if not until or until <= now:
9191 logger.info("Timing out typing for: %s", member.user_id)
9292 preserve_fn(self._stopped_typing)(member)
9393 continue
9696 # user.
9797 if self.hs.is_mine_id(member.user_id):
9898 last_fed_poke = self._member_last_federation_poke.get(member, None)
99 if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL < now:
99 if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL <= now:
100100 preserve_fn(self._push_remote)(
101101 member=member,
102102 typing=True
103103 )
104
105 # Add a paranoia timer to ensure that we always have a timer for
106 # each person typing.
107 self.wheel_timer.insert(
108 now=now,
109 obj=member,
110 then=now + 60 * 1000,
111 )
104112
105113 def is_typing(self, member):
106114 return member.user_id in self._room_typing.get(member.room_id, [])
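The paranoia timer added above relies on the wheel timer's bucketed expiry: every typing member always has a pending entry, and fetching at time `now` pops every bucket that has come due, using `<=` just like the timeout comparisons fixed in this hunk. A minimal sketch of that structure (synapse's real `WheelTimer` differs in detail):

```python
# Minimal wheel-timer sketch: entries bucketed by expiry time,
# fetch(now) pops every bucket whose time has come (note <=).

class WheelTimer:
    def __init__(self, bucket_size_ms=100):
        self.bucket_size_ms = bucket_size_ms
        self.buckets = {}  # bucket index -> list of entries

    def insert(self, now, obj, then):
        self.buckets.setdefault(then // self.bucket_size_ms, []).append(obj)

    def fetch(self, now):
        due = sorted(k for k in self.buckets
                     if k * self.bucket_size_ms <= now)
        expired = []
        for k in due:
            expired.extend(self.buckets.pop(k))
        return expired

timer = WheelTimer()
timer.insert(now=0, obj="@alice:example.com", then=60 * 1000)
```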
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 # Because otherwise 'resource' collides with synapse.metrics.resource
16 from __future__ import absolute_import
17
1815 import logging
19 from resource import getrusage, RUSAGE_SELF
2016 import functools
21 import os
22 import stat
2317 import time
2418 import gc
2519
2923 CounterMetric, CallbackMetric, DistributionMetric, CacheMetric,
3024 MemoryUsageMetric,
3125 )
26 from .process_collector import register_process_collector
3227
3328
3429 logger = logging.getLogger(__name__)
3530
3631
3732 all_metrics = []
33 all_collectors = []
3834
3935
4036 class Metrics(object):
4440
4541 def __init__(self, name):
4642 self.name_prefix = name
43
44 def make_subspace(self, name):
45 return Metrics("%s_%s" % (self.name_prefix, name))
46
47 def register_collector(self, func):
48 all_collectors.append(func)
4749
4850 def _register(self, metric_class, name, *args, **kwargs):
4951 full_name = "%s_%s" % (self.name_prefix, name)
9395 def render_all():
9496 strs = []
9597
96 # TODO(paul): Internal hack
97 update_resource_metrics()
98 for collector in all_collectors:
99 collector()
98100
99101 for metric in all_metrics:
100102 try:
108110 return "\n".join(strs)
109111
110112
111 # Now register some standard process-wide state metrics, to give indications of
112 # process resource usage
113
114 rusage = None
115
116
117 def update_resource_metrics():
118 global rusage
119 rusage = getrusage(RUSAGE_SELF)
120
121 resource_metrics = get_metrics_for("process.resource")
122
123 # msecs
124 resource_metrics.register_callback("utime", lambda: rusage.ru_utime * 1000)
125 resource_metrics.register_callback("stime", lambda: rusage.ru_stime * 1000)
126
127 # kilobytes
128 resource_metrics.register_callback("maxrss", lambda: rusage.ru_maxrss * 1024)
129
130 TYPES = {
131 stat.S_IFSOCK: "SOCK",
132 stat.S_IFLNK: "LNK",
133 stat.S_IFREG: "REG",
134 stat.S_IFBLK: "BLK",
135 stat.S_IFDIR: "DIR",
136 stat.S_IFCHR: "CHR",
137 stat.S_IFIFO: "FIFO",
138 }
139
140
141 def _process_fds():
142 counts = {(k,): 0 for k in TYPES.values()}
143 counts[("other",)] = 0
144
145 # Not every OS will have a /proc/self/fd directory
146 if not os.path.exists("/proc/self/fd"):
147 return counts
148
149 for fd in os.listdir("/proc/self/fd"):
150 try:
151 s = os.stat("/proc/self/fd/%s" % (fd))
152 fmt = stat.S_IFMT(s.st_mode)
153 if fmt in TYPES:
154 t = TYPES[fmt]
155 else:
156 t = "other"
157
158 counts[(t,)] += 1
159 except OSError:
160 # the dirh itself used by listdir() is usually missing by now
161 pass
162
163 return counts
164
165 get_metrics_for("process").register_callback("fds", _process_fds, labels=["type"])
166
167113 reactor_metrics = get_metrics_for("reactor")
168114 tick_time = reactor_metrics.register_distribution("tick_time")
169115 pending_calls_metric = reactor_metrics.register_distribution("pending_calls")
174120 reactor_metrics.register_callback(
175121 "gc_counts", lambda: {(i,): v for i, v in enumerate(gc.get_count())}, labels=["gen"]
176122 )
123
124 register_process_collector(get_metrics_for("process"))
177125
178126
179127 def runUntilCurrentTimer(func):
9797 value = self.callback()
9898
9999 if self.is_scalar():
100 return ["%s %d" % (self.name, value)]
100 return ["%s %.12g" % (self.name, value)]
101101
102 return ["%s%s %d" % (self.name, self._render_key(k), value[k])
102 return ["%s%s %.12g" % (self.name, self._render_key(k), value[k])
103103 for k in sorted(value.keys())]
104104
105105
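The rendering change above swaps `%d` for `%.12g` because many of the new process metrics (CPU seconds, for instance) are fractional, and `%d` silently truncates them:

```python
# Why the exposition format moved from %d to %.12g: %d truncates
# fractional metric values, while %.12g keeps them and still renders
# integral values compactly.

cpu_seconds = 12.34
as_int = "%d" % cpu_seconds      # fraction silently dropped
as_g = "%.12g" % cpu_seconds     # full value, no trailing zeros
whole = "%.12g" % 1500.0         # integral floats stay compact
```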
0 # -*- coding: utf-8 -*-
1 # Copyright 2015, 2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Because otherwise 'resource' collides with synapse.metrics.resource
16 from __future__ import absolute_import
17
18 import os
19 import stat
20 from resource import getrusage, RUSAGE_SELF
21
22
23 TICKS_PER_SEC = 100
24 BYTES_PER_PAGE = 4096
25
26 HAVE_PROC_STAT = os.path.exists("/proc/stat")
27 HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
28 HAVE_PROC_SELF_LIMITS = os.path.exists("/proc/self/limits")
29 HAVE_PROC_SELF_FD = os.path.exists("/proc/self/fd")
30
31 TYPES = {
32 stat.S_IFSOCK: "SOCK",
33 stat.S_IFLNK: "LNK",
34 stat.S_IFREG: "REG",
35 stat.S_IFBLK: "BLK",
36 stat.S_IFDIR: "DIR",
37 stat.S_IFCHR: "CHR",
38 stat.S_IFIFO: "FIFO",
39 }
40
41 # Field indexes from /proc/self/stat, taken from the proc(5) manpage
42 STAT_FIELDS = {
43 "utime": 14,
44 "stime": 15,
45 "starttime": 22,
46 "vsize": 23,
47 "rss": 24,
48 }
49
50
51 rusage = None
52 stats = {}
53 fd_counts = None
54
55 # In order to report process_start_time_seconds we need to know the
56 # machine's boot time, because the value in /proc/self/stat is relative to
57 # this
58 boot_time = None
59 if HAVE_PROC_STAT:
60 with open("/proc/stat") as _procstat:
61 for line in _procstat:
62 if line.startswith("btime "):
63 boot_time = int(line.split()[1])
64
65
66 def update_resource_metrics():
67 global rusage
68 rusage = getrusage(RUSAGE_SELF)
69
70 if HAVE_PROC_SELF_STAT:
71 global stats
72 with open("/proc/self/stat") as s:
73 line = s.read()
74 # line is PID (command) more stats go here ...
75 raw_stats = line.split(") ", 1)[1].split(" ")
76
77 for (name, index) in STAT_FIELDS.iteritems():
78 # subtract 3 from the index, because proc(5) is 1-based, and
79 # we've lost the first two fields in PID and COMMAND above
80 stats[name] = int(raw_stats[index - 3])
81
82 global fd_counts
83 fd_counts = _process_fds()
84
85
86 def _process_fds():
87 counts = {(k,): 0 for k in TYPES.values()}
88 counts[("other",)] = 0
89
90 # Not every OS will have a /proc/self/fd directory
91 if not HAVE_PROC_SELF_FD:
92 return counts
93
94 for fd in os.listdir("/proc/self/fd"):
95 try:
96 s = os.stat("/proc/self/fd/%s" % (fd))
97 fmt = stat.S_IFMT(s.st_mode)
98 if fmt in TYPES:
99 t = TYPES[fmt]
100 else:
101 t = "other"
102
103 counts[(t,)] += 1
104 except OSError:
105 # the directory handle used by listdir() is usually missing by now
106 pass
107
108 return counts
109
110
111 def register_process_collector(process_metrics):
112 # Legacy synapse-invented metric names
113
114 resource_metrics = process_metrics.make_subspace("resource")
115
116 resource_metrics.register_collector(update_resource_metrics)
117
118 # msecs
119 resource_metrics.register_callback("utime", lambda: rusage.ru_utime * 1000)
120 resource_metrics.register_callback("stime", lambda: rusage.ru_stime * 1000)
121
122 # kilobytes
123 resource_metrics.register_callback("maxrss", lambda: rusage.ru_maxrss * 1024)
124
125 process_metrics.register_callback("fds", _process_fds, labels=["type"])
126
127 # New prometheus-standard metric names
128
129 if HAVE_PROC_SELF_STAT:
130 process_metrics.register_callback(
131 "cpu_user_seconds_total",
132 lambda: float(stats["utime"]) / TICKS_PER_SEC
133 )
134 process_metrics.register_callback(
135 "cpu_system_seconds_total",
136 lambda: float(stats["stime"]) / TICKS_PER_SEC
137 )
138 process_metrics.register_callback(
139 "cpu_seconds_total",
140 lambda: (float(stats["utime"] + stats["stime"])) / TICKS_PER_SEC
141 )
142
143 process_metrics.register_callback(
144 "virtual_memory_bytes",
145 lambda: int(stats["vsize"])
146 )
147 process_metrics.register_callback(
148 "resident_memory_bytes",
149 lambda: int(stats["rss"]) * BYTES_PER_PAGE
150 )
151
152 process_metrics.register_callback(
153 "start_time_seconds",
154 lambda: boot_time + int(stats["starttime"]) / TICKS_PER_SEC
155 )
156
157 if HAVE_PROC_SELF_FD:
158 process_metrics.register_callback(
159 "open_fds",
160 lambda: sum(fd_counts.values())
161 )
162
163 if HAVE_PROC_SELF_LIMITS:
164 def _get_max_fds():
165 with open("/proc/self/limits") as limits:
166 for line in limits:
167 if not line.startswith("Max open files "):
168 continue
169 # Line is Max open files $SOFT $HARD
170 return int(line.split()[3])
171 return None
172
173 process_metrics.register_callback(
174 "max_fds",
175 lambda: _get_max_fds()
176 )
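The `/proc/self/stat` parsing above has a subtlety worth spelling out: the second field (the command name) is parenthesised and may itself contain spaces, so the line is split on the closing `") "` first, and the proc(5) field indexes are then shifted by 3 (they are 1-based, and the pid and comm fields are consumed by the split). A standalone sketch in Python 3 syntax, with a fabricated sample line:

```python
# Sketch of the /proc/self/stat field extraction above. The sample
# line is fabricated so that each numeric field holds its own proc(5)
# field number, which makes the index arithmetic easy to check.

STAT_FIELDS = {
    "utime": 14,
    "stime": 15,
    "rss": 24,
}

def parse_stat(line):
    # line is: "<pid> (<comm>) <state> <ppid> ..."
    raw_stats = line.split(") ", 1)[1].split(" ")
    return {
        # subtract 3: proc(5) is 1-based, and pid/comm are gone
        name: int(raw_stats[index - 3])
        for name, index in STAT_FIELDS.items()
    }

sample = "1234 (web worker) R " + " ".join(str(n) for n in range(4, 44))
stats = parse_stat(sample)
```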
149149
150150 soonest_due_at = None
151151
152 if not unprocessed:
153 yield self.save_last_stream_ordering_and_success(self.max_stream_ordering)
154 return
155
152156 for push_action in unprocessed:
153157 received_at = push_action['received_ts']
154158 if received_at is None:
327327 return messagevars
328328
329329 @defer.inlineCallbacks
330 def make_summary_text(self, notifs_by_room, state_by_room,
330 def make_summary_text(self, notifs_by_room, room_state_ids,
331331 notif_events, user_id, reason):
332332 if len(notifs_by_room) == 1:
333333 # Only one room has new stuff
337337 # want the generated-from-names one here otherwise we'll
338338 # end up with, "new message from Bob in the Bob room"
339339 room_name = yield calculate_room_name(
340 self.store, state_by_room[room_id], user_id, fallback_to_members=False
340 self.store, room_state_ids[room_id], user_id, fallback_to_members=False
341341 )
342342
343 my_member_event = state_by_room[room_id][("m.room.member", user_id)]
343 my_member_event_id = room_state_ids[room_id][("m.room.member", user_id)]
344 my_member_event = yield self.store.get_event(my_member_event_id)
344345 if my_member_event.content["membership"] == "invite":
345 inviter_member_event = state_by_room[room_id][
346 inviter_member_event_id = room_state_ids[room_id][
346347 ("m.room.member", my_member_event.sender)
347348 ]
349 inviter_member_event = yield self.store.get_event(
350 inviter_member_event_id
351 )
348352 inviter_name = name_from_member_event(inviter_member_event)
349353
350354 if room_name is None:
363367 if len(notifs_by_room[room_id]) == 1:
364368 # There is just the one notification, so give some detail
365369 event = notif_events[notifs_by_room[room_id][0]["event_id"]]
366 if ("m.room.member", event.sender) in state_by_room[room_id]:
367 state_event = state_by_room[room_id][("m.room.member", event.sender)]
370 if ("m.room.member", event.sender) in room_state_ids[room_id]:
371 state_event_id = room_state_ids[room_id][
372 ("m.room.member", event.sender)
373 ]
374 state_event = yield self.store.get_event(state_event_id)
368375 sender_name = name_from_member_event(state_event)
369376
370377 if sender_name is not None and room_name is not None:
394401 for n in notifs_by_room[room_id]
395402 ]))
396403
404 member_events = yield self.store.get_events([
405 room_state_ids[room_id][("m.room.member", s)]
406 for s in sender_ids
407 ])
408
397409 defer.returnValue(MESSAGES_FROM_PERSON % {
398 "person": descriptor_from_member_events([
399 state_by_room[room_id][("m.room.member", s)]
400 for s in sender_ids
401 ]),
410 "person": descriptor_from_member_events(member_events.values()),
402411 "app": self.app_name,
403412 })
404413 else:
418427 for n in notifs_by_room[reason['room_id']]
419428 ]))
420429
430 member_events = yield self.store.get_events([
431 room_state_ids[room_id][("m.room.member", s)]
432 for s in sender_ids
433 ])
434
421435 defer.returnValue(MESSAGES_FROM_PERSON_AND_OTHERS % {
422 "person": descriptor_from_member_events([
423 state_by_room[reason['room_id']][("m.room.member", s)]
424 for s in sender_ids
425 ]),
436 "person": descriptor_from_member_events(member_events.values()),
426437 "app": self.app_name,
427438 })
428439
1616 from synapse.http.server import request_handler, finish_request
1717 from synapse.replication.pusher_resource import PusherResource
1818 from synapse.replication.presence_resource import PresenceResource
19 from synapse.api.errors import SynapseError
1920
2021 from twisted.web.resource import Resource
2122 from twisted.web.server import NOT_DONE_YET
165166 def replicate():
166167 return self.replicate(request_streams, limit)
167168
168 result = yield self.notifier.wait_for_replication(replicate, timeout)
169 writer = yield self.notifier.wait_for_replication(replicate, timeout)
170 result = writer.finish()
169171
170172 for stream_name, stream_content in result.items():
171173 logger.info(
184186 writer = _Writer()
185187 current_token = yield self.current_replication_token()
186188 logger.debug("Replicating up to %r", current_token)
189
190 if limit == 0:
191 raise SynapseError(400, "Limit cannot be 0")
187192
188193 yield self.account_data(writer, current_token, limit, request_streams)
189194 yield self.events(writer, current_token, limit, request_streams)
199204 self.streams(writer, current_token, request_streams)
200205
201206 logger.debug("Replicated %d rows", writer.total)
202 defer.returnValue(writer.finish())
207 defer.returnValue(writer)
203208
204209 def streams(self, writer, current_token, request_streams):
205210 request_token = request_streams.get("streams")
236241 request_events = current_token.events
237242 if request_backfill is None:
238243 request_backfill = current_token.backfill
244
245 no_new_tokens = (
246 request_events == current_token.events
247 and request_backfill == current_token.backfill
248 )
249 if no_new_tokens:
250 return
251
239252 res = yield self.store.get_all_new_events(
240253 request_backfill, request_events,
241254 current_token.backfill, current_token.events,
242255 limit
243256 )
244 writer.write_header_and_rows("events", res.new_forward_events, (
245 "position", "internal", "json", "state_group"
246 ))
247 writer.write_header_and_rows("backfill", res.new_backfill_events, (
248 "position", "internal", "json", "state_group"
249 ))
257
258 upto_events_token = _position_from_rows(
259 res.new_forward_events, current_token.events
260 )
261
262 upto_backfill_token = _position_from_rows(
263 res.new_backfill_events, current_token.backfill
264 )
265
266 if request_events != upto_events_token:
267 writer.write_header_and_rows("events", res.new_forward_events, (
268 "position", "internal", "json", "state_group"
269 ), position=upto_events_token)
270
271 if request_backfill != upto_backfill_token:
272 writer.write_header_and_rows("backfill", res.new_backfill_events, (
273 "position", "internal", "json", "state_group",
274 ), position=upto_backfill_token)
275
250276 writer.write_header_and_rows(
251277 "forward_ex_outliers", res.forward_ex_outliers,
252 ("position", "event_id", "state_group")
278 ("position", "event_id", "state_group"),
253279 )
254280 writer.write_header_and_rows(
255281 "backward_ex_outliers", res.backward_ex_outliers,
256 ("position", "event_id", "state_group")
282 ("position", "event_id", "state_group"),
257283 )
258284 writer.write_header_and_rows(
259 "state_resets", res.state_resets, ("position",)
285 "state_resets", res.state_resets, ("position",),
260286 )
261287
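Several of the stream methods in this resource now skip the storage call entirely when the requested token already matches the current position. A minimal sketch of that guard, with illustrative names not taken from the source:

```python
def should_fetch(request_token, current_token):
    # Only hit the storage layer when the client supplied a token for
    # this stream and that token is not already at the current position.
    return request_token is not None and request_token != current_token
```

This is the same shape as the new `no_new_tokens` early returns: an unchanged token means no work, saving a redundant database round-trip per poll.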
262288 @defer.inlineCallbacks
265291
266292 request_presence = request_streams.get("presence")
267293
268 if request_presence is not None:
294 if request_presence is not None and request_presence != current_position:
269295 presence_rows = yield self.presence_handler.get_all_presence_updates(
270296 request_presence, current_position
271297 )
298 upto_token = _position_from_rows(presence_rows, current_position)
272299 writer.write_header_and_rows("presence", presence_rows, (
273300 "position", "user_id", "state", "last_active_ts",
274301 "last_federation_update_ts", "last_user_sync_ts",
275302 "status_msg", "currently_active",
276 ))
303 ), position=upto_token)
277304
278305 @defer.inlineCallbacks
279306 def typing(self, writer, current_token, request_streams):
281308
282309 request_typing = request_streams.get("typing")
283310
284 if request_typing is not None:
311 if request_typing is not None and request_typing != current_position:
285312 # If they have a higher token than current max, we can assume that
286313 # they had been talking to a previous instance of the master. Since
287314 # we reset the token on restart, the best (but hacky) thing we can
292319 typing_rows = yield self.typing_handler.get_all_typing_updates(
293320 request_typing, current_position
294321 )
322 upto_token = _position_from_rows(typing_rows, current_position)
295323 writer.write_header_and_rows("typing", typing_rows, (
296324 "position", "room_id", "typing"
297 ))
325 ), position=upto_token)
298326
299327 @defer.inlineCallbacks
300328 def receipts(self, writer, current_token, limit, request_streams):
302330
303331 request_receipts = request_streams.get("receipts")
304332
305 if request_receipts is not None:
333 if request_receipts is not None and request_receipts != current_position:
306334 receipts_rows = yield self.store.get_all_updated_receipts(
307335 request_receipts, current_position, limit
308336 )
337 upto_token = _position_from_rows(receipts_rows, current_position)
309338 writer.write_header_and_rows("receipts", receipts_rows, (
310339 "position", "room_id", "receipt_type", "user_id", "event_id", "data"
311 ))
340 ), position=upto_token)
312341
313342 @defer.inlineCallbacks
314343 def account_data(self, writer, current_token, limit, request_streams):
323352 user_account_data = current_position
324353 if room_account_data is None:
325354 room_account_data = current_position
355
356 no_new_tokens = (
357 user_account_data == current_position
358 and room_account_data == current_position
359 )
360 if no_new_tokens:
361 return
362
326363 user_rows, room_rows = yield self.store.get_all_updated_account_data(
327364 user_account_data, room_account_data, current_position, limit
328365 )
366
367 upto_users_token = _position_from_rows(user_rows, current_position)
368 upto_rooms_token = _position_from_rows(room_rows, current_position)
369
329370 writer.write_header_and_rows("user_account_data", user_rows, (
330371 "position", "user_id", "type", "content"
331 ))
372 ), position=upto_users_token)
332373 writer.write_header_and_rows("room_account_data", room_rows, (
333374 "position", "user_id", "room_id", "type", "content"
334 ))
375 ), position=upto_rooms_token)
335376
336377 if tag_account_data is not None:
337378 tag_rows = yield self.store.get_all_updated_tags(
338379 tag_account_data, current_position, limit
339380 )
381 upto_tag_token = _position_from_rows(tag_rows, current_position)
340382 writer.write_header_and_rows("tag_account_data", tag_rows, (
341383 "position", "user_id", "room_id", "tags"
342 ))
384 ), position=upto_tag_token)
343385
344386 @defer.inlineCallbacks
345387 def push_rules(self, writer, current_token, limit, request_streams):
347389
348390 push_rules = request_streams.get("push_rules")
349391
350 if push_rules is not None:
392 if push_rules is not None and push_rules != current_position:
351393 rows = yield self.store.get_all_push_rule_updates(
352394 push_rules, current_position, limit
353395 )
396 upto_token = _position_from_rows(rows, current_position)
354397 writer.write_header_and_rows("push_rules", rows, (
355398 "position", "event_stream_ordering", "user_id", "rule_id", "op",
356399 "priority_class", "priority", "conditions", "actions"
357 ))
400 ), position=upto_token)
358401
359402 @defer.inlineCallbacks
360403 def pushers(self, writer, current_token, limit, request_streams):
362405
363406 pushers = request_streams.get("pushers")
364407
365 if pushers is not None:
408 if pushers is not None and pushers != current_position:
366409 updated, deleted = yield self.store.get_all_updated_pushers(
367410 pushers, current_position, limit
368411 )
412 upto_token = _position_from_rows(updated, current_position)
369413 writer.write_header_and_rows("pushers", updated, (
370414 "position", "user_id", "access_token", "profile_tag", "kind",
371415 "app_id", "app_display_name", "device_display_name", "pushkey",
372416 "ts", "lang", "data"
373 ))
417 ), position=upto_token)
374418 writer.write_header_and_rows("deleted_pushers", deleted, (
375419 "position", "user_id", "app_id", "pushkey"
376 ))
420 ), position=upto_token)
377421
378422 @defer.inlineCallbacks
379423 def caches(self, writer, current_token, limit, request_streams):
381425
382426 caches = request_streams.get("caches")
383427
384 if caches is not None:
428 if caches is not None and caches != current_position:
385429 updated_caches = yield self.store.get_all_updated_caches(
386430 caches, current_position, limit
387431 )
432 upto_token = _position_from_rows(updated_caches, current_position)
388433 writer.write_header_and_rows("caches", updated_caches, (
389434 "position", "cache_func", "keys", "invalidation_ts"
390 ))
435 ), position=upto_token)
391436
392437 @defer.inlineCallbacks
393438 def to_device(self, writer, current_token, limit, request_streams):
395440
396441 to_device = request_streams.get("to_device")
397442
398 if to_device is not None:
443 if to_device is not None and to_device != current_position:
399444 to_device_rows = yield self.store.get_all_new_device_messages(
400445 to_device, current_position, limit
401446 )
447 upto_token = _position_from_rows(to_device_rows, current_position)
402448 writer.write_header_and_rows("to_device", to_device_rows, (
403449 "position", "user_id", "device_id", "message_json"
404 ))
450 ), position=upto_token)
405451
406452 @defer.inlineCallbacks
407453 def public_rooms(self, writer, current_token, limit, request_streams):
409455
410456 public_rooms = request_streams.get("public_rooms")
411457
412 if public_rooms is not None:
458 if public_rooms is not None and public_rooms != current_position:
413459 public_rooms_rows = yield self.store.get_all_new_public_rooms(
414460 public_rooms, current_position, limit
415461 )
462 upto_token = _position_from_rows(public_rooms_rows, current_position)
416463 writer.write_header_and_rows("public_rooms", public_rooms_rows, (
417464 "position", "room_id", "visibility"
418 ))
465 ), position=upto_token)
419466
420467
421468 class _Writer(object):
425472 self.total = 0
426473
427474 def write_header_and_rows(self, name, rows, fields, position=None):
428 if not rows:
429 return
430
431475 if position is None:
432 position = rows[-1][0]
476 if rows:
477 position = rows[-1][0]
478 else:
479 return
433480
434481 self.streams[name] = {
435482 "position": position if type(position) is int else str(position),
438485 }
439486
440487 self.total += len(rows)
488
489 def __nonzero__(self):
490 return bool(self.total)
441491
442492 def finish(self):
443493 return self.streams
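The replication resource now returns the writer object itself rather than `writer.finish()`, and the new `__nonzero__` hook lets the caller test whether any rows were written at all. A minimal sketch of that truthiness behaviour, using the same fields as `_Writer` (`__bool__` is added here only for Python 3 compatibility; the source targets Python 2):

```python
class Writer(object):
    def __init__(self):
        self.streams = {}
        self.total = 0

    def __nonzero__(self):      # Python 2: truth value is "wrote any rows?"
        return bool(self.total)

    __bool__ = __nonzero__      # Python 3 equivalent of __nonzero__
```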
460510
461511 def __str__(self):
462512 return "_".join(str(value) for value in self)
513
514
515 def _position_from_rows(rows, current_position):
516 """Calculates a position to return for a stream. Ideally we want to return the
517 position of the last row, as that will be the most correct. However, if there
518 are no rows we fall back to using the current position to stop us from
519 repeatedly hitting the storage layer unnecessarily thinking there are updates.
520 (Not all advances of the token correspond to an actual update)
521
522 We can't just always return the current position, as we often limit the
523 number of rows we replicate, and so the stream may lag. The assumption is
524 that if the storage layer returns no new rows then we are not lagging and
525 we are at the `current_position`.
526 """
527 if rows:
528 return rows[-1][0]
529 return current_position
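The behaviour described in the docstring can be pinned down with a couple of examples, assuming each row is a tuple whose first element is its stream position:

```python
def position_from_rows(rows, current_position):
    # Prefer the position of the last row returned; with no rows we are
    # caught up, so report the current position to avoid re-polling.
    if rows:
        return rows[-1][0]
    return current_position
```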
2121 from .base import ClientV1RestServlet, client_path_patterns
2222 import synapse.util.stringutils as stringutils
2323 from synapse.http.servlet import parse_json_object_from_request
24 from synapse.types import create_requester
2425
2526 from synapse.util.async import run_on_reactor
2627
390391 user_json = parse_json_object_from_request(request)
391392
392393 access_token = get_access_token_from_request(request)
393 app_service = yield self.store.get_app_service_by_token(
394 app_service = self.store.get_app_service_by_token(
394395 access_token
395396 )
396397 if not app_service:
397398 raise SynapseError(403, "Invalid application service token.")
398399
400 requester = create_requester(app_service.sender)
401
399402 logger.debug("creating user: %s", user_json)
400
401 response = yield self._do_create(user_json)
403 response = yield self._do_create(requester, user_json)
402404
403405 defer.returnValue((200, response))
404406
406408 return 403, {}
407409
408410 @defer.inlineCallbacks
409 def _do_create(self, user_json):
411 def _do_create(self, requester, user_json):
410412 yield run_on_reactor()
411413
412414 if "localpart" not in user_json:
432434
433435 handler = self.handlers.registration_handler
434436 user_id, token = yield handler.get_or_create_user(
437 requester=requester,
435438 localpart=localpart,
436439 displayname=displayname,
437440 duration_in_ms=(duration_seconds * 1000),
7676 user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
7777 <link rel="stylesheet" href="/_matrix/static/client/register/style.css">
7878 <script>
79 if (window.onAuthDone != undefined) {
79 if (window.onAuthDone) {
8080 window.onAuthDone();
81 } else if (window.opener && window.opener.postMessage) {
82 window.opener.postMessage("authDone", "*");
8183 }
8284 </script>
8385 </head>
1616
1717 from twisted.internet import defer
1818
19 from synapse.api import constants, errors
1920 from synapse.http import servlet
2021 from ._base import client_v2_patterns
2122
5758 self.hs = hs
5859 self.auth = hs.get_auth()
5960 self.device_handler = hs.get_device_handler()
61 self.auth_handler = hs.get_auth_handler()
6062
6163 @defer.inlineCallbacks
6264 def on_GET(self, request, device_id):
6971
7072 @defer.inlineCallbacks
7173 def on_DELETE(self, request, device_id):
72 # XXX: it's not completely obvious we want to expose this endpoint.
73 # It allows the client to delete access tokens, which feels like a
74 # thing which merits extra auth. But if we want to do the interactive-
75 # auth dance, we should really make it possible to delete more than one
76 # device at a time.
74 try:
75 body = servlet.parse_json_object_from_request(request)
76
77 except errors.SynapseError as e:
78 if e.errcode == errors.Codes.NOT_JSON:
79 # deal with older clients which didn't pass a JSON dict
80 # the same as those that pass an empty dict
81 body = {}
82 else:
83 raise
84
85 authed, result, params, _ = yield self.auth_handler.check_auth([
86 [constants.LoginType.PASSWORD],
87 ], body, self.hs.get_ip_from_request(request))
88
89 if not authed:
90 defer.returnValue((401, result))
91
7792 requester = yield self.auth.get_user_by_req(request)
7893 yield self.device_handler.delete_device(
7994 requester.user.to_string(),
1414
1515 from twisted.internet import defer
1616
17 from synapse.api.errors import AuthError, SynapseError
17 from synapse.api.errors import AuthError, SynapseError, StoreError, Codes
1818 from synapse.http.servlet import RestServlet, parse_json_object_from_request
1919 from synapse.types import UserID
2020
4444 raise AuthError(403, "Cannot get filters for other users")
4545
4646 if not self.hs.is_mine(target_user):
47 raise SynapseError(400, "Can only get filters for local users")
47 raise AuthError(403, "Can only get filters for local users")
4848
4949 try:
5050 filter_id = int(filter_id)
5858 )
5959
6060 defer.returnValue((200, filter.get_filter_json()))
61 except KeyError:
62 raise SynapseError(400, "No such filter")
61 except (KeyError, StoreError):
62 raise SynapseError(400, "No such filter", errcode=Codes.NOT_FOUND)
6363
6464
6565 class CreateFilterRestServlet(RestServlet):
7373
7474 @defer.inlineCallbacks
7575 def on_POST(self, request, user_id):
76
7677 target_user = UserID.from_string(user_id)
7778 requester = yield self.auth.get_user_by_req(request)
7879
8081 raise AuthError(403, "Cannot create filters for other users")
8182
8283 if not self.hs.is_mine(target_user):
83 raise SynapseError(400, "Can only create filters for local users")
84 raise AuthError(403, "Can only create filters for local users")
8485
8586 content = parse_json_object_from_request(request)
86
8787 filter_id = yield self.filtering.add_user_filter(
8888 user_localpart=target_user.localpart,
8989 user_filter=content,
1818 from signedjson.sign import sign_json
1919 from unpaddedbase64 import encode_base64
2020 from canonicaljson import encode_canonical_json
21 from hashlib import sha256
22 from OpenSSL import crypto
2321 import logging
2422
2523
4745 "expired_ts": # integer posix timestamp when the key expired.
4846 "key": # base64 encoded NACL verification key.
4947 }
50 }
51 "tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
48 },
49 "tls_fingerprints": [ # Fingerprints of the TLS certs this server uses.
50 {
51 "sha256": # base64 encoded sha256 fingerprint of the X509 cert
52 },
53 ],
5254 "signatures": {
5355 "this.server.example.com": {
5456 "algorithm:version": # NACL signature for this server
8991 u"expired_ts": key.expired,
9092 }
9193
92 x509_certificate_bytes = crypto.dump_certificate(
93 crypto.FILETYPE_ASN1,
94 self.config.tls_certificate
95 )
96
97 sha256_fingerprint = sha256(x509_certificate_bytes).digest()
94 tls_fingerprints = self.config.tls_fingerprints
9895
9996 json_object = {
10097 u"valid_until_ts": self.valid_until_ts,
10198 u"server_name": self.config.server_name,
10299 u"verify_keys": verify_keys,
103100 u"old_verify_keys": old_verify_keys,
104 u"tls_fingerprints": [{
105 u"sha256": encode_base64(sha256_fingerprint),
106 }]
101 u"tls_fingerprints": tls_fingerprints,
107102 }
108103 for key in self.config.signing_key:
109104 json_object = sign_json(
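The key server no longer dumps the raw certificate; it publishes fingerprints in the `"sha256"` format documented above. A sketch of how one such fingerprint entry could be computed from the DER-encoded certificate bytes (this version uses only the stdlib and strips the base64 padding by hand, where the source relies on the `unpaddedbase64` helper):

```python
import base64
import hashlib

def sha256_fingerprint(der_bytes):
    # Unpadded base64 of the SHA-256 digest of the DER-encoded X.509
    # certificate, as served under "tls_fingerprints" in the key response.
    digest = hashlib.sha256(der_bytes).digest()
    return {"sha256": base64.b64encode(digest).rstrip(b"=").decode("ascii")}
```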
8484 sql_logger.debug("[SQL] {%s} %s", self.name, sql)
8585
8686 sql = self.database_engine.convert_param_style(sql)
87
8887 if args:
8988 try:
9089 sql_logger.debug(
3636 )
3737
3838 def get_app_services(self):
39 return defer.succeed(self.services_cache)
39 return self.services_cache
4040
4141 def get_app_service_by_user_id(self, user_id):
4242 """Retrieve an application service from their user ID.
5353 """
5454 for service in self.services_cache:
5555 if service.sender == user_id:
56 return defer.succeed(service)
57 return defer.succeed(None)
56 return service
57 return None
5858
5959 def get_app_service_by_token(self, token):
6060 """Get the application service with the given appservice token.
6666 """
6767 for service in self.services_cache:
6868 if service.token == token:
69 return defer.succeed(service)
70 return defer.succeed(None)
69 return service
70 return None
7171
7272 def get_app_service_rooms(self, service):
7373 """Get a list of RoomsForUser for this application service.
162162 ["as_id"]
163163 )
164164 # NB: This assumes this class is linked with ApplicationServiceStore
165 as_list = yield self.get_app_services()
165 as_list = self.get_app_services()
166166 services = []
167167
168168 for res in results:
602602 "rejections",
603603 "redactions",
604604 "room_memberships",
605 "state_events",
606605 "topics"
607606 ):
608607 txn.executemany(
2424
2525 # Remember to update this number every time a change is made to database
2626 # schema files, so the users will be informed on server restarts.
27 SCHEMA_VERSION = 36
27 SCHEMA_VERSION = 37
2828
2929 dir_path = os.path.abspath(os.path.dirname(__file__))
3030
319319 txn.execute(sql, (prev_id, current_id, limit,))
320320 return txn.fetchall()
321321
322 if prev_id == current_id:
323 return defer.succeed([])
324
322325 return self.runInteraction(
323326 "get_all_new_public_rooms", get_all_new_public_rooms
324327 )
0 # Copyright 2016 OpenMarket Ltd
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from synapse.storage.prepare_database import get_statements
15 from synapse.storage.engines import PostgresEngine
16
17 import logging
18
19 logger = logging.getLogger(__name__)
20
21 DROP_INDICES = """
22 -- We only ever query based on event_id
23 DROP INDEX IF EXISTS state_events_room_id;
24 DROP INDEX IF EXISTS state_events_type;
25 DROP INDEX IF EXISTS state_events_state_key;
26
27 -- room_id is indexed elsewhere
28 DROP INDEX IF EXISTS current_state_events_room_id;
29 DROP INDEX IF EXISTS current_state_events_state_key;
30 DROP INDEX IF EXISTS current_state_events_type;
31
32 DROP INDEX IF EXISTS transactions_have_ref;
33
34 -- (topological_ordering, stream_ordering, room_id) seems like a strange index,
35 -- and is used incredibly rarely.
36 DROP INDEX IF EXISTS events_order_topo_stream_room;
37
38 DROP INDEX IF EXISTS event_search_ev_idx;
39 """
40
41 POSTGRES_DROP_CONSTRAINT = """
42 ALTER TABLE event_auth DROP CONSTRAINT IF EXISTS event_auth_event_id_auth_id_room_id_key;
43 """
44
45 SQLITE_DROP_CONSTRAINT = """
46 DROP INDEX IF EXISTS evauth_edges_id;
47
48 CREATE TABLE IF NOT EXISTS event_auth_new(
49 event_id TEXT NOT NULL,
50 auth_id TEXT NOT NULL,
51 room_id TEXT NOT NULL
52 );
53
54 INSERT INTO event_auth_new
55 SELECT event_id, auth_id, room_id
56 FROM event_auth;
57
58 DROP TABLE event_auth;
59
60 ALTER TABLE event_auth_new RENAME TO event_auth;
61
62 CREATE INDEX evauth_edges_id ON event_auth(event_id);
63 """
64
65
66 def run_create(cur, database_engine, *args, **kwargs):
67 for statement in get_statements(DROP_INDICES.splitlines()):
68 cur.execute(statement)
69
70 if isinstance(database_engine, PostgresEngine):
71 drop_constraint = POSTGRES_DROP_CONSTRAINT
72 else:
73 drop_constraint = SQLITE_DROP_CONSTRAINT
74
75 for statement in get_statements(drop_constraint.splitlines()):
76 cur.execute(statement)
77
78
79 def run_upgrade(cur, database_engine, *args, **kwargs):
80 pass
0 /* Copyright 2016 OpenMarket Ltd
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 /*
16 * Update any email addresses that were stored with mixed case into all
17 * lowercase
18 */
19
20 -- There may be "duplicate" emails (with different case) already in the table,
21 -- so we find them and move all but the most recently created account.
22 UPDATE user_threepids
23 SET medium = 'email_old'
24 WHERE medium = 'email'
25 AND address IN (
26 -- We select all the addresses that are linked to the user_id that is NOT
27 -- the most recently created.
28 SELECT u.address
29 FROM
30 user_threepids AS u,
31 -- `duplicate_addresses` is a table of all the email addresses that
32 -- appear multiple times and when the binding was created
33 (
34 SELECT lower(u1.address) AS address, max(u1.added_at) AS max_ts
35 FROM user_threepids AS u1
36 INNER JOIN user_threepids AS u2 ON u1.medium = u2.medium AND lower(u1.address) = lower(u2.address) AND u1.address != u2.address
37 WHERE u1.medium = 'email' AND u2.medium = 'email'
38 GROUP BY lower(u1.address)
39 ) AS duplicate_addresses
40 WHERE
41 lower(u.address) = duplicate_addresses.address
42 AND u.added_at != max_ts -- NOT the most recently created
43 );
44
45
46 -- This update is now safe since we've removed the duplicate addresses.
47 UPDATE user_threepids SET address = LOWER(address) WHERE medium = 'email';
48
49
50 /* Add an index for the select we do on password reset */
51 CREATE INDEX user_threepids_medium_address on user_threepids (medium, address);
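The SQL above keeps, for each address that differs only in case, the binding with the greatest `added_at`, and demotes the rest to `email_old`. The same selection sketched in Python over `(address, added_at)` pairs (the helper name is illustrative, not from the source):

```python
def split_duplicate_emails(rows):
    # rows: list of (address, added_at). Keep the most recently added
    # binding per lowercased address; everything older gets demoted.
    latest = {}
    for address, added_at in rows:
        key = address.lower()
        if key not in latest or added_at > latest[key]:
            latest[key] = added_at
    keep, demote = [], []
    for address, added_at in rows:
        (keep if added_at == latest[address.lower()] else demote).append(address)
    return keep, demote
```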
520520 )
521521
522522 @defer.inlineCallbacks
523 def get_room_events_max_id(self, direction='f'):
523 def get_room_events_max_id(self, room_id=None):
524 """Returns the current token for rooms stream.
525
526 By default, it returns the current global stream token. Specifying a
527 `room_id` causes it to return the current room specific topological
528 token.
529 """
524530 token = yield self._stream_id_gen.get_current_token()
525 if direction != 'b':
531 if room_id is None:
526532 defer.returnValue("s%d" % (token,))
527533 else:
528534 topo = yield self.runInteraction(
529 "_get_max_topological_txn", self._get_max_topological_txn
535 "_get_max_topological_txn", self._get_max_topological_txn,
536 room_id,
530537 )
531538 defer.returnValue("t%d-%d" % (topo, token))
532539
578585 lambda r: r[0][0] if r else 0
579586 )
580587
581 def _get_max_topological_txn(self, txn):
588 def _get_max_topological_txn(self, txn, room_id):
582589 txn.execute(
583590 "SELECT MAX(topological_ordering) FROM events"
584 " WHERE outlier = ?",
585 (False,)
591 " WHERE room_id = ?",
592 (room_id,)
586593 )
587594
588595 rows = txn.fetchall()
4040 self.store = hs.get_datastore()
4141
4242 @defer.inlineCallbacks
43 def get_current_token(self, direction='f'):
43 def get_current_token(self):
4444 push_rules_key, _ = self.store.get_push_rules_stream_token()
4545 to_device_key = self.store.get_to_device_stream_token()
4646
4747 token = StreamToken(
4848 room_key=(
49 yield self.sources["room"].get_current_key(direction)
49 yield self.sources["room"].get_current_key()
5050 ),
5151 presence_key=(
5252 yield self.sources["presence"].get_current_key()
6464 to_device_key=to_device_key,
6565 )
6666 defer.returnValue(token)
67
68 @defer.inlineCallbacks
69 def get_current_token_for_room(self, room_id):
70 push_rules_key, _ = self.store.get_push_rules_stream_token()
71 to_device_key = self.store.get_to_device_stream_token()
72
73 token = StreamToken(
74 room_key=(
75 yield self.sources["room"].get_current_key_for_room(room_id)
76 ),
77 presence_key=(
78 yield self.sources["presence"].get_current_key()
79 ),
80 typing_key=(
81 yield self.sources["typing"].get_current_key()
82 ),
83 receipt_key=(
84 yield self.sources["receipt"].get_current_key()
85 ),
86 account_data_key=(
87 yield self.sources["account_data"].get_current_key()
88 ),
89 push_rules_key=push_rules_key,
90 to_device_key=to_device_key,
91 )
92 defer.returnValue(token)
1717 from collections import namedtuple
1818
1919
20 Requester = namedtuple("Requester",
21 ["user", "access_token_id", "is_guest", "device_id"])
20 Requester = namedtuple("Requester", [
21 "user", "access_token_id", "is_guest", "device_id", "app_service",
22 ])
2223 """
2324 Represents the user making a request
2425
2829 request, or None if it came via the appservice API or similar
2930 is_guest (bool): True if the user making this request is a guest user
3031 device_id (str|None): device_id which was set at authentication time
32 app_service (ApplicationService|None): the AS requesting on behalf of the user
3133 """
3234
3335
3436 def create_requester(user_id, access_token_id=None, is_guest=False,
35 device_id=None):
37 device_id=None, app_service=None):
3638 """
3739 Create a new ``Requester`` object
3840
4244 request, or None if it came via the appservice API or similar
4345 is_guest (bool): True if the user making this request is a guest user
4446 device_id (str|None): device_id which was set at authentication time
47 app_service (ApplicationService|None): the AS requesting on behalf of the user
4548
4649 Returns:
4750 Requester
4851 """
4952 if not isinstance(user_id, UserID):
5053 user_id = UserID.from_string(user_id)
51 return Requester(user_id, access_token_id, is_guest, device_id)
54 return Requester(user_id, access_token_id, is_guest, device_id, app_service)
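With the new `app_service` field, a requester built for an ordinary user simply carries `None` there. A usage sketch of the extended tuple, reproduced here minus the `UserID` string conversion:

```python
from collections import namedtuple

Requester = namedtuple("Requester", [
    "user", "access_token_id", "is_guest", "device_id", "app_service",
])

def create_requester(user, access_token_id=None, is_guest=False,
                     device_id=None, app_service=None):
    # Defaults mirror the real helper: no token, not a guest, no device,
    # and no application service acting on the user's behalf.
    return Requester(user, access_token_id, is_guest, device_id, app_service)

requester = create_requester("@alice:example.com")
```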
5255
5356
5457 def get_domain_from_id(string):
0
1 from twisted.internet import defer
2
3 from synapse.config._base import ConfigError
4 from synapse.types import UserID
5
8
9 import logging
10
11 try:
12 import ldap3
13 import ldap3.core.exceptions
14 except ImportError:
15 ldap3 = None
17
18
19 logger = logging.getLogger(__name__)
20
21
22 class LDAPMode(object):
23 SIMPLE = "simple"
24 SEARCH = "search"
25
26 LIST = (SIMPLE, SEARCH)
27
28
29 class LdapAuthProvider(object):
30 __version__ = "0.1"
31
32 def __init__(self, config, account_handler):
33 self.account_handler = account_handler
34
35 if not ldap3:
36 raise RuntimeError(
37 'Missing ldap3 library. This is required for LDAP Authentication.'
38 )
39
40 self.ldap_mode = config.mode
41 self.ldap_uri = config.uri
42 self.ldap_start_tls = config.start_tls
43 self.ldap_base = config.base
44 self.ldap_attributes = config.attributes
45 if self.ldap_mode == LDAPMode.SEARCH:
46 self.ldap_bind_dn = config.bind_dn
47 self.ldap_bind_password = config.bind_password
48 self.ldap_filter = config.filter
49
50 @defer.inlineCallbacks
51 def check_password(self, user_id, password):
52 """ Attempt to authenticate a user against an LDAP Server
53 and register an account if none exists.
54
55 Returns:
56 True if authentication against LDAP was successful
57 """
58 localpart = UserID.from_string(user_id).localpart
59
60 try:
61 server = ldap3.Server(self.ldap_uri)
62 logger.debug(
63 "Attempting LDAP connection with %s",
64 self.ldap_uri
65 )
66
67 if self.ldap_mode == LDAPMode.SIMPLE:
68 result, conn = self._ldap_simple_bind(
69 server=server, localpart=localpart, password=password
70 )
71 logger.debug(
72 'LDAP authentication method simple bind returned: %s (conn: %s)',
73 result,
74 conn
75 )
76 if not result:
77 defer.returnValue(False)
78 elif self.ldap_mode == LDAPMode.SEARCH:
79 result, conn = self._ldap_authenticated_search(
80 server=server, localpart=localpart, password=password
81 )
82 logger.debug(
83 'LDAP auth method authenticated search returned: %s (conn: %s)',
84 result,
85 conn
86 )
87 if not result:
88 defer.returnValue(False)
89 else:
90 raise RuntimeError(
91 'Invalid LDAP mode specified: {mode}'.format(
92 mode=self.ldap_mode
93 )
94 )
95
96 try:
97 logger.info(
98 "User authenticated against LDAP server: %s",
99 conn
100 )
101 except NameError:
102 logger.warn(
103 "Authentication method yielded no LDAP connection, aborting!"
104 )
105 defer.returnValue(False)
106
107 # check if user with user_id exists
108 if (yield self.account_handler.check_user_exists(user_id)):
109 # exists, authentication complete
110 conn.unbind()
111 defer.returnValue(True)
112
113 else:
114 # does not exist, fetch metadata for account creation from
115 # existing ldap connection
116 query = "({prop}={value})".format(
117 prop=self.ldap_attributes['uid'],
118 value=localpart
119 )
120
121 if self.ldap_mode == LDAPMode.SEARCH and self.ldap_filter:
122 query = "(&{filter}{user_filter})".format(
123 filter=query,
124 user_filter=self.ldap_filter
125 )
126 logger.debug(
127 "ldap registration filter: %s",
128 query
129 )
130
131 conn.search(
132 search_base=self.ldap_base,
133 search_filter=query,
134 attributes=[
135 self.ldap_attributes['name'],
136 self.ldap_attributes['mail']
137 ]
138 )
139
140 if len(conn.response) == 1:
141 attrs = conn.response[0]['attributes']
142 mail = attrs[self.ldap_attributes['mail']][0]
143 name = attrs[self.ldap_attributes['name']][0]
144
145 # create account
146 user_id, access_token = (
147 yield self.account_handler.register(localpart=localpart)
148 )
149
150 # TODO: bind email, set displayname with data from ldap directory
151
152 logger.info(
153 "Registration based on LDAP data was successful: %s: %s (%s, %s)",
154 user_id,
155 localpart,
156 name,
157 mail
158 )
159
160 defer.returnValue(True)
161 else:
162 if len(conn.response) == 0:
163 logger.warn("LDAP registration failed, no result.")
164 else:
165 logger.warn(
166 "LDAP registration failed, too many results (%s)",
167 len(conn.response)
168 )
169
170 defer.returnValue(False)
171
172 defer.returnValue(False)
173
174 except ldap3.core.exceptions.LDAPException as e:
175 logger.warn("Error during ldap authentication: %s", e)
176 defer.returnValue(False)
177
178 @staticmethod
179 def parse_config(config):
180 class _LdapConfig(object):
181 pass
182
183 ldap_config = _LdapConfig()
184
185 ldap_config.enabled = config.get("enabled", False)
186
187 ldap_config.mode = LDAPMode.SIMPLE
188
189 # verify config sanity
190 _require_keys(config, [
191 "uri",
192 "base",
193 "attributes",
194 ])
195
196 ldap_config.uri = config["uri"]
197 ldap_config.start_tls = config.get("start_tls", False)
198 ldap_config.base = config["base"]
199 ldap_config.attributes = config["attributes"]
200
201 if "bind_dn" in config:
202 ldap_config.mode = LDAPMode.SEARCH
203 _require_keys(config, [
204 "bind_dn",
205 "bind_password",
206 ])
207
208 ldap_config.bind_dn = config["bind_dn"]
209 ldap_config.bind_password = config["bind_password"]
210 ldap_config.filter = config.get("filter", None)
211
212 # verify attribute lookup
213 _require_keys(config['attributes'], [
214 "uid",
215 "name",
216 "mail",
217 ])
218
219 return ldap_config
220
221 def _ldap_simple_bind(self, server, localpart, password):
222 """ Attempt a simple bind with the credentials
223 given by the user against the LDAP server.
224
225 Returns True, LDAP3Connection
226 if the bind was successful
227 Returns False, None
228 if an error occurred
229 """
230
231 try:
232 # bind with the local user's ldap credentials
233 bind_dn = "{prop}={value},{base}".format(
234 prop=self.ldap_attributes['uid'],
235 value=localpart,
236 base=self.ldap_base
237 )
238 conn = ldap3.Connection(server, bind_dn, password)
239 logger.debug(
240 "Established LDAP connection in simple bind mode: %s",
241 conn
242 )
243
244 if self.ldap_start_tls:
245 conn.start_tls()
246 logger.debug(
247 "Upgraded LDAP connection in simple bind mode through StartTLS: %s",
248 conn
249 )
250
251 if conn.bind():
252 # GOOD: bind okay
253 logger.debug("LDAP Bind successful in simple bind mode.")
254 return True, conn
255
256 # BAD: bind failed
257 logger.info(
258 "Binding against LDAP failed for '%s' failed: %s",
259 localpart, conn.result['description']
260 )
261 conn.unbind()
262 return False, None
263
264 except ldap3.core.exceptions.LDAPException as e:
265 logger.warn("Error during LDAP authentication: %s", e)
266 return False, None
267
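In simple bind mode the DN is assembled from the configured uid attribute, the user's localpart, and the base DN. A sketch of that string construction (the function name is illustrative):

```python
def build_bind_dn(uid_attribute, localpart, base_dn):
    # e.g. uid=alice,ou=users,dc=example,dc=com
    return "{prop}={value},{base}".format(
        prop=uid_attribute, value=localpart, base=base_dn,
    )
```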
268 def _ldap_authenticated_search(self, server, localpart, password):
269 """ Attempt to login with the preconfigured bind_dn
270 and then continue searching and filtering within
271 the base_dn
272
273 Returns (True, LDAP3Connection)
274 if a single matching DN within the base was found
275 that matched the filter expression, and with which
276 a successful bind was achieved
277
278 The LDAP3Connection returned is the instance that was used to
279 verify the password, not the one using the configured bind_dn.
280 Returns (False, None)
281 if an error occurred
282 """
283
284 try:
285 conn = ldap3.Connection(
286 server,
287 self.ldap_bind_dn,
288 self.ldap_bind_password
289 )
290 logger.debug(
291 "Established LDAP connection in search mode: %s",
292 conn
293 )
294
295 if self.ldap_start_tls:
296 conn.start_tls()
297 logger.debug(
298 "Upgraded LDAP connection in search mode through StartTLS: %s",
299 conn
300 )
301
302 if not conn.bind():
303 logger.warn(
304 "Binding against LDAP with `bind_dn` failed: %s",
305 conn.result['description']
306 )
307 conn.unbind()
308 return False, None
309
310 # construct search_filter like (uid=localpart)
311 query = "({prop}={value})".format(
312 prop=self.ldap_attributes['uid'],
313 value=localpart
314 )
315 if self.ldap_filter:
316 # combine with the AND expression
317 query = "(&{query}{filter})".format(
318 query=query,
319 filter=self.ldap_filter
320 )
321 logger.debug(
322 "LDAP search filter: %s",
323 query
324 )
325 conn.search(
326 search_base=self.ldap_base,
327 search_filter=query
328 )
329
330 if len(conn.response) == 1:
331 # GOOD: found exactly one result
332 user_dn = conn.response[0]['dn']
333 logger.debug('LDAP search found dn: %s', user_dn)
334
335 # unbind and simple bind with user_dn to verify the password
336 # Note: do not use rebind(); in testing it did not actually
337 # verify the password.
338 conn.unbind()
339 return self._ldap_simple_bind(server, localpart, password)
340 else:
341 # BAD: found 0 or > 1 results, abort!
342 if len(conn.response) == 0:
343 logger.info(
344 "LDAP search returned no results for '%s'",
345 localpart
346 )
347 else:
348 logger.info(
349 "LDAP search returned too many (%s) results for '%s'",
350 len(conn.response), localpart
351 )
352 conn.unbind()
353 return False, None
354
355 except ldap3.core.exceptions.LDAPException as e:
356 logger.warn("Error during LDAP authentication: %s", e)
357 return False, None
358
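The search filter built in `_ldap_authenticated_search` starts from `(uid=localpart)` and, when a filter is configured, wraps both in an LDAP AND expression. A minimal sketch of that logic (the `(objectClass=person)` filter below is a made-up example):

```python
# Sketch of the filter construction in _ldap_authenticated_search above.
def build_search_filter(uid_attr, localpart, extra_filter=None):
    # construct search_filter like (uid=localpart)
    query = "({prop}={value})".format(prop=uid_attr, value=localpart)
    if extra_filter:
        # combine with the configured filter using an AND expression
        query = "(&{query}{filter})".format(query=query, filter=extra_filter)
    return query
```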
359
360 def _require_keys(config, required):
361 missing = [key for key in required if key not in config]
362 if missing:
363 raise ConfigError(
364 "LDAP enabled but missing required config values: {}".format(
365 ", ".join(missing)
366 )
367 )
1919 from synapse.api.auth import Auth
2020 from synapse.api.errors import AuthError
2121 from synapse.types import UserID
22 from tests.utils import setup_test_homeserver
22 from tests.utils import setup_test_homeserver, mock_getRawHeaders
2323
2424 import pymacaroons
2525
5050
5151 request = Mock(args={})
5252 request.args["access_token"] = [self.test_token]
53 request.requestHeaders.getRawHeaders = Mock(return_value=[""])
53 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
5454 requester = yield self.auth.get_user_by_req(request)
5555 self.assertEquals(requester.user.to_string(), self.test_user)
5656
6060
6161 request = Mock(args={})
6262 request.args["access_token"] = [self.test_token]
63 request.requestHeaders.getRawHeaders = Mock(return_value=[""])
63 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
6464 d = self.auth.get_user_by_req(request)
6565 self.failureResultOf(d, AuthError)
6666
7373 self.store.get_user_by_access_token = Mock(return_value=user_info)
7474
7575 request = Mock(args={})
76 request.requestHeaders.getRawHeaders = Mock(return_value=[""])
76 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
7777 d = self.auth.get_user_by_req(request)
7878 self.failureResultOf(d, AuthError)
7979
8585
8686 request = Mock(args={})
8787 request.args["access_token"] = [self.test_token]
88 request.requestHeaders.getRawHeaders = Mock(return_value=[""])
88 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
8989 requester = yield self.auth.get_user_by_req(request)
9090 self.assertEquals(requester.user.to_string(), self.test_user)
9191
9595
9696 request = Mock(args={})
9797 request.args["access_token"] = [self.test_token]
98 request.requestHeaders.getRawHeaders = Mock(return_value=[""])
98 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
9999 d = self.auth.get_user_by_req(request)
100100 self.failureResultOf(d, AuthError)
101101
105105 self.store.get_user_by_access_token = Mock(return_value=None)
106106
107107 request = Mock(args={})
108 request.requestHeaders.getRawHeaders = Mock(return_value=[""])
108 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
109109 d = self.auth.get_user_by_req(request)
110110 self.failureResultOf(d, AuthError)
111111
120120 request = Mock(args={})
121121 request.args["access_token"] = [self.test_token]
122122 request.args["user_id"] = [masquerading_user_id]
123 request.requestHeaders.getRawHeaders = Mock(return_value=[""])
123 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
124124 requester = yield self.auth.get_user_by_req(request)
125125 self.assertEquals(requester.user.to_string(), masquerading_user_id)
126126
134134 request = Mock(args={})
135135 request.args["access_token"] = [self.test_token]
136136 request.args["user_id"] = [masquerading_user_id]
137 request.requestHeaders.getRawHeaders = Mock(return_value=[""])
137 request.requestHeaders.getRawHeaders = mock_getRawHeaders()
138138 d = self.auth.get_user_by_req(request)
139139 self.failureResultOf(d, AuthError)
140140
1616 from .. import unittest
1717
1818 from synapse.handlers.register import RegistrationHandler
19 from synapse.types import UserID
19 from synapse.types import UserID, create_requester
2020
2121 from tests.utils import setup_test_homeserver
2222
5656 local_part = "someone"
5757 display_name = "someone"
5858 user_id = "@someone:test"
59 requester = create_requester("@as:test")
5960 result_user_id, result_token = yield self.handler.get_or_create_user(
60 local_part, display_name, duration_ms)
61 requester, local_part, display_name, duration_ms)
6162 self.assertEquals(result_user_id, user_id)
6263 self.assertEquals(result_token, 'secret')
6364
7374 local_part = "frank"
7475 display_name = "Frank"
7576 user_id = "@frank:test"
77 requester = create_requester("@as:test")
7678 result_user_id, result_token = yield self.handler.get_or_create_user(
77 local_part, display_name, duration_ms)
79 requester, local_part, display_name, duration_ms)
7880 self.assertEquals(result_user_id, user_id)
7981 self.assertEquals(result_token, 'secret')
218218 "user_id": self.u_onion.to_string(),
219219 "typing": True,
220220 }
221 )
221 ),
222 federation_auth=True,
222223 )
223224
224225 self.on_new_event.assert_has_calls([
4141 @defer.inlineCallbacks
4242 def replicate(self):
4343 streams = self.slaved_store.stream_positions()
44 result = yield self.replication.replicate(streams, 100)
44 writer = yield self.replication.replicate(streams, 100)
45 result = writer.finish()
4546 yield self.slaved_store.process_replication(result)
4647
4748 @defer.inlineCallbacks
119119 self.hs.clock.advance_time_msec(1)
120120 code, body = yield get
121121 self.assertEquals(code, 200)
122 self.assertEquals(body, {})
122 self.assertEquals(body.get("rows", []), [])
123123 test_timeout.__name__ = "test_timeout_%s" % (stream)
124124 return test_timeout
125125
194194 self.assertIn("field_names", stream)
195195 field_names = stream["field_names"]
196196 self.assertIn("rows", stream)
197 self.assertTrue(stream["rows"])
198197 for row in stream["rows"]:
199198 self.assertEquals(
200199 len(row), len(field_names),
1616 from twisted.internet import defer
1717 from mock import Mock
1818 from tests import unittest
19 from tests.utils import mock_getRawHeaders
1920 import json
2021
2122
2930 path='/_matrix/client/api/v1/createUser'
3031 )
3132 self.request.args = {}
33 self.request.requestHeaders.getRawHeaders = mock_getRawHeaders()
3234
33 self.appservice = None
34 self.auth = Mock(get_appservice_by_req=Mock(
35 side_effect=lambda x: defer.succeed(self.appservice))
35 self.registration_handler = Mock()
36
37 self.appservice = Mock(sender="@as:test")
38 self.datastore = Mock(
39 get_app_service_by_token=Mock(return_value=self.appservice)
3640 )
3741
38 self.auth_result = (False, None, None, None)
39 self.auth_handler = Mock(
40 check_auth=Mock(side_effect=lambda x, y, z: self.auth_result),
41 get_session_data=Mock(return_value=None)
42 )
43 self.registration_handler = Mock()
44 self.identity_handler = Mock()
45 self.login_handler = Mock()
46
47 # do the dance to hook it up to the hs global
48 self.handlers = Mock(
49 auth_handler=self.auth_handler,
42 # do the dance to hook things up to the hs global
43 handlers = Mock(
5044 registration_handler=self.registration_handler,
51 identity_handler=self.identity_handler,
52 login_handler=self.login_handler
5345 )
5446 self.hs = Mock()
55 self.hs.hostname = "supergbig~testing~thing.com"
56 self.hs.get_auth = Mock(return_value=self.auth)
57 self.hs.get_handlers = Mock(return_value=self.handlers)
58 self.hs.config.enable_registration = True
59 # init the thing we're testing
47 self.hs.hostname = "superbig~testing~thing.com"
48 self.hs.get_datastore = Mock(return_value=self.datastore)
49 self.hs.get_handlers = Mock(return_value=handlers)
6050 self.servlet = CreateUserRestServlet(self.hs)
6151
6252 @defer.inlineCallbacks
1414
1515 from twisted.internet import defer
1616
17 from . import V2AlphaRestTestCase
17 from tests import unittest
1818
1919 from synapse.rest.client.v2_alpha import filter
2020
21 from synapse.api.errors import StoreError
21 from synapse.api.errors import Codes
22
23 import synapse.types
24
25 from synapse.types import UserID
26
27 from ....utils import MockHttpResource, setup_test_homeserver
28
29 PATH_PREFIX = "/_matrix/client/v2_alpha"
2230
2331
24 class FilterTestCase(V2AlphaRestTestCase):
32 class FilterTestCase(unittest.TestCase):
33
2534 USER_ID = "@apple:test"
35 EXAMPLE_FILTER = {"type": ["m.*"]}
36 EXAMPLE_FILTER_JSON = '{"type": ["m.*"]}'
2637 TO_REGISTER = [filter]
2738
28 def make_datastore_mock(self):
29 datastore = super(FilterTestCase, self).make_datastore_mock()
39 @defer.inlineCallbacks
40 def setUp(self):
41 self.mock_resource = MockHttpResource(prefix=PATH_PREFIX)
3042
31 self._user_filters = {}
43 self.hs = yield setup_test_homeserver(
44 http_client=None,
45 resource_for_client=self.mock_resource,
46 resource_for_federation=self.mock_resource,
47 )
3248
33 def add_user_filter(user_localpart, definition):
34 filters = self._user_filters.setdefault(user_localpart, [])
35 filter_id = len(filters)
36 filters.append(definition)
37 return defer.succeed(filter_id)
38 datastore.add_user_filter = add_user_filter
49 self.auth = self.hs.get_auth()
3950
40 def get_user_filter(user_localpart, filter_id):
41 if user_localpart not in self._user_filters:
42 raise StoreError(404, "No user")
43 filters = self._user_filters[user_localpart]
44 if filter_id >= len(filters):
45 raise StoreError(404, "No filter")
46 return defer.succeed(filters[filter_id])
47 datastore.get_user_filter = get_user_filter
51 def get_user_by_access_token(token=None, allow_guest=False):
52 return {
53 "user": UserID.from_string(self.USER_ID),
54 "token_id": 1,
55 "is_guest": False,
56 }
4857
49 return datastore
58 def get_user_by_req(request, allow_guest=False, rights="access"):
59 return synapse.types.create_requester(
60 UserID.from_string(self.USER_ID), 1, False, None)
61
62 self.auth.get_user_by_access_token = get_user_by_access_token
63 self.auth.get_user_by_req = get_user_by_req
64
65 self.store = self.hs.get_datastore()
66 self.filtering = self.hs.get_filtering()
67
68 for r in self.TO_REGISTER:
69 r.register_servlets(self.hs, self.mock_resource)
5070
5171 @defer.inlineCallbacks
5272 def test_add_filter(self):
5373 (code, response) = yield self.mock_resource.trigger(
54 "POST", "/user/%s/filter" % (self.USER_ID), '{"type": ["m.*"]}'
74 "POST", "/user/%s/filter" % (self.USER_ID), self.EXAMPLE_FILTER_JSON
5575 )
5676 self.assertEquals(200, code)
5777 self.assertEquals({"filter_id": "0"}, response)
78 filter = yield self.store.get_user_filter(
79 user_localpart='apple',
80 filter_id=0,
81 )
82 self.assertEquals(filter, self.EXAMPLE_FILTER)
5883
59 self.assertIn("apple", self._user_filters)
60 self.assertEquals(len(self._user_filters["apple"]), 1)
61 self.assertEquals({"type": ["m.*"]}, self._user_filters["apple"][0])
84 @defer.inlineCallbacks
85 def test_add_filter_for_other_user(self):
86 (code, response) = yield self.mock_resource.trigger(
87 "POST", "/user/%s/filter" % ('@watermelon:test'), self.EXAMPLE_FILTER_JSON
88 )
89 self.assertEquals(403, code)
90 self.assertEquals(response['errcode'], Codes.FORBIDDEN)
91
92 @defer.inlineCallbacks
93 def test_add_filter_non_local_user(self):
94 _is_mine = self.hs.is_mine
95 self.hs.is_mine = lambda target_user: False
96 (code, response) = yield self.mock_resource.trigger(
97 "POST", "/user/%s/filter" % (self.USER_ID), self.EXAMPLE_FILTER_JSON
98 )
99 self.hs.is_mine = _is_mine
100 self.assertEquals(403, code)
101 self.assertEquals(response['errcode'], Codes.FORBIDDEN)
62102
63103 @defer.inlineCallbacks
64104 def test_get_filter(self):
65 self._user_filters["apple"] = [
66 {"type": ["m.*"]}
67 ]
68
105 filter_id = yield self.filtering.add_user_filter(
106 user_localpart='apple',
107 user_filter=self.EXAMPLE_FILTER
108 )
69109 (code, response) = yield self.mock_resource.trigger_get(
70 "/user/%s/filter/0" % (self.USER_ID)
110 "/user/%s/filter/%s" % (self.USER_ID, filter_id)
71111 )
72112 self.assertEquals(200, code)
73 self.assertEquals({"type": ["m.*"]}, response)
113 self.assertEquals(self.EXAMPLE_FILTER, response)
74114
75115 @defer.inlineCallbacks
116 def test_get_filter_non_existant(self):
117 (code, response) = yield self.mock_resource.trigger_get(
118 "/user/%s/filter/12382148321" % (self.USER_ID)
119 )
120 self.assertEquals(400, code)
121 self.assertEquals(response['errcode'], Codes.NOT_FOUND)
122
123 # Currently invalid params do not have an appropriate errcode
124 # in errors.py
125 @defer.inlineCallbacks
126 def test_get_filter_invalid_id(self):
127 (code, response) = yield self.mock_resource.trigger_get(
128 "/user/%s/filter/foobar" % (self.USER_ID)
129 )
130 self.assertEquals(400, code)
131
132 # No ID also returns an invalid_id error
133 @defer.inlineCallbacks
76134 def test_get_filter_no_id(self):
77 self._user_filters["apple"] = [
78 {"type": ["m.*"]}
79 ]
80
81135 (code, response) = yield self.mock_resource.trigger_get(
82 "/user/%s/filter/2" % (self.USER_ID)
136 "/user/%s/filter/" % (self.USER_ID)
83137 )
84 self.assertEquals(404, code)
85
86 @defer.inlineCallbacks
87 def test_get_filter_no_user(self):
88 (code, response) = yield self.mock_resource.trigger_get(
89 "/user/%s/filter/0" % (self.USER_ID)
90 )
91 self.assertEquals(404, code)
138 self.assertEquals(400, code)
22 from twisted.internet import defer
33 from mock import Mock
44 from tests import unittest
5 from tests.utils import mock_getRawHeaders
56 import json
67
78
1516 path='/_matrix/api/v2_alpha/register'
1617 )
1718 self.request.args = {}
19 self.request.requestHeaders.getRawHeaders = mock_getRawHeaders()
1820
1921 self.appservice = None
2022 self.auth = Mock(get_appservice_by_req=Mock(
21 side_effect=lambda x: defer.succeed(self.appservice))
23 side_effect=lambda x: self.appservice)
2224 )
2325
2426 self.auth_result = (False, None, None, None)
3636 config = Mock(
3737 app_service_config_files=self.as_yaml_files,
3838 event_cache_size=1,
39 password_providers=[],
3940 )
4041 hs = yield setup_test_homeserver(config=config)
4142
7071 outfile.write(yaml.dump(as_yaml))
7172 self.as_yaml_files.append(as_token)
7273
73 @defer.inlineCallbacks
7474 def test_retrieve_unknown_service_token(self):
75 service = yield self.store.get_app_service_by_token("invalid_token")
75 service = self.store.get_app_service_by_token("invalid_token")
7676 self.assertEquals(service, None)
7777
78 @defer.inlineCallbacks
7978 def test_retrieval_of_service(self):
80 stored_service = yield self.store.get_app_service_by_token(
79 stored_service = self.store.get_app_service_by_token(
8180 self.as_token
8281 )
8382 self.assertEquals(stored_service.token, self.as_token)
9695 []
9796 )
9897
99 @defer.inlineCallbacks
10098 def test_retrieval_of_all_services(self):
101 services = yield self.store.get_app_services()
99 services = self.store.get_app_services()
102100 self.assertEquals(len(services), 3)
103101
104102
111109 config = Mock(
112110 app_service_config_files=self.as_yaml_files,
113111 event_cache_size=1,
112 password_providers=[],
114113 )
115114 hs = yield setup_test_homeserver(config=config)
116115 self.db_pool = hs.get_db_pool()
439438 f1 = self._write_config(suffix="1")
440439 f2 = self._write_config(suffix="2")
441440
442 config = Mock(app_service_config_files=[f1, f2], event_cache_size=1)
441 config = Mock(
442 app_service_config_files=[f1, f2], event_cache_size=1,
443 password_providers=[]
444 )
443445 hs = yield setup_test_homeserver(config=config, datastore=Mock())
444446
445447 ApplicationServiceStore(hs)
449451 f1 = self._write_config(id="id", suffix="1")
450452 f2 = self._write_config(id="id", suffix="2")
451453
452 config = Mock(app_service_config_files=[f1, f2], event_cache_size=1)
454 config = Mock(
455 app_service_config_files=[f1, f2], event_cache_size=1,
456 password_providers=[]
457 )
453458 hs = yield setup_test_homeserver(config=config, datastore=Mock())
454459
455460 with self.assertRaises(ConfigError) as cm:
465470 f1 = self._write_config(as_token="as_token", suffix="1")
466471 f2 = self._write_config(as_token="as_token", suffix="2")
467472
468 config = Mock(app_service_config_files=[f1, f2], event_cache_size=1)
473 config = Mock(
474 app_service_config_files=[f1, f2], event_cache_size=1,
475 password_providers=[]
476 )
469477 hs = yield setup_test_homeserver(config=config, datastore=Mock())
470478
471479 with self.assertRaises(ConfigError) as cm:
5151 config.server_name = name
5252 config.trusted_third_party_id_servers = []
5353 config.room_invite_state_types = []
54 config.password_providers = []
5455
5556 config.use_frozen_dicts = True
5657 config.database_config = {"name": "sqlite3"}
114115 return getcallargs(pattern_func, *invoked_args, **invoked_kargs)
115116
116117
118 def mock_getRawHeaders(headers=None):
119 headers = headers if headers is not None else {}
120
121 def getRawHeaders(name, default=None):
122 return headers.get(name, default)
123
124 return getRawHeaders
125
126
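The `mock_getRawHeaders` helper above mimics Twisted's `Headers.getRawHeaders`: it returns the list of raw values for a header name, or the supplied default when the header is absent. Reproduced standalone with an example `Authorization` header so its behaviour can be exercised directly:

```python
# The mock_getRawHeaders helper above, reproduced standalone.
def mock_getRawHeaders(headers=None):
    headers = headers if headers is not None else {}

    def getRawHeaders(name, default=None):
        # return the list of raw values, or the default when absent
        return headers.get(name, default)

    return getRawHeaders

# usage: build a getter preloaded with a federation-style auth header
get_headers = mock_getRawHeaders(
    {"Authorization": ["X-Matrix origin=test,key=,sig="]}
)
```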
117127 # This is a mock /resource/ not an entire server
118128 class MockHttpResource(HttpServer):
119129
126136
127137 @patch('twisted.web.http.Request')
128138 @defer.inlineCallbacks
129 def trigger(self, http_method, path, content, mock_request):
139 def trigger(self, http_method, path, content, mock_request, federation_auth=False):
130140 """ Fire an HTTP event.
131141
132142 Args:
154164
155165 mock_request.getClientIP.return_value = "-"
156166
157 mock_request.requestHeaders.getRawHeaders.return_value = [
158 "X-Matrix origin=test,key=,sig="
159 ]
167 headers = {}
168 if federation_auth:
169 headers["Authorization"] = ["X-Matrix origin=test,key=,sig="]
170 mock_request.requestHeaders.getRawHeaders = mock_getRawHeaders(headers)
160171
161172 # return the right path if the event requires it
162173 mock_request.path = path
187198 )
188199 defer.returnValue((code, response))
189200 except CodeMessageException as e:
190 defer.returnValue((e.code, cs_error(e.msg)))
201 defer.returnValue((e.code, cs_error(e.msg, code=e.errcode)))
191202
192203 raise KeyError("No event can handle %s" % path)
193204