Codebase list matrix-synapse / da91dde
New upstream version 0.33.9 Richard van der Hoff 5 years ago
95 changed file(s) with 2881 addition(s) and 1004 deletion(s).
2222 - develop
2323 - /^release-v/
2424
25 # When running the tox environments that call Twisted Trial, we can pass the -j
26 # flag to run the tests concurrently. We set this to 2 for CPU bound tests
27 # (SQLite) and 4 for I/O bound tests (PostgreSQL).
2528 matrix:
2629 fast_finish: true
2730 include:
3235 env: TOX_ENV="pep8,check_isort"
3336
3437 - python: 2.7
35 env: TOX_ENV=py27
38 env: TOX_ENV=py27 TRIAL_FLAGS="-j 2"
3639
3740 - python: 2.7
38 env: TOX_ENV=py27-old
41 env: TOX_ENV=py27-old TRIAL_FLAGS="-j 2"
3942
4043 - python: 2.7
4144 env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
4346 - postgresql
4447
4548 - python: 3.5
46 env: TOX_ENV=py35
49 env: TOX_ENV=py35 TRIAL_FLAGS="-j 2"
4750
4851 - python: 3.6
49 env: TOX_ENV=py36
52 env: TOX_ENV=py36 TRIAL_FLAGS="-j 2"
5053
5154 - python: 3.6
5255 env: TOX_ENV=py36-postgres TRIAL_FLAGS="-j 4"
0 Synapse 0.33.9 (2018-11-19)
1 ===========================
2
3 No significant changes.
4
5
6 Synapse 0.33.9rc1 (2018-11-14)
7 ==============================
8
9 Features
10 --------
11
12 - Include flags to optionally add `m.login.terms` to the registration flow when consent tracking is enabled. ([\#4004](https://github.com/matrix-org/synapse/issues/4004), [\#4133](https://github.com/matrix-org/synapse/issues/4133), [\#4142](https://github.com/matrix-org/synapse/issues/4142), [\#4184](https://github.com/matrix-org/synapse/issues/4184))
13 - Support for replacing rooms with new ones ([\#4091](https://github.com/matrix-org/synapse/issues/4091), [\#4099](https://github.com/matrix-org/synapse/issues/4099), [\#4100](https://github.com/matrix-org/synapse/issues/4100), [\#4101](https://github.com/matrix-org/synapse/issues/4101))
14
15
16 Bugfixes
17 --------
18
19 - Fix exceptions when using the email mailer on Python 3. ([\#4095](https://github.com/matrix-org/synapse/issues/4095))
20 - Fix e2e key backup with more than 9 backup versions ([\#4113](https://github.com/matrix-org/synapse/issues/4113))
21 - Searches that request profile info now no longer fail with a 500. ([\#4122](https://github.com/matrix-org/synapse/issues/4122))
22 - Fix return code of empty key backups ([\#4123](https://github.com/matrix-org/synapse/issues/4123))
23 - If the typing stream ID goes backwards (as on a worker when the master restarts), the worker's typing handler will no longer erroneously report rooms containing new typing events. ([\#4127](https://github.com/matrix-org/synapse/issues/4127))
24 - Fix table lock of device_lists_remote_cache which could freeze the application ([\#4132](https://github.com/matrix-org/synapse/issues/4132))
25 - Fix exception when using state res v2 algorithm ([\#4135](https://github.com/matrix-org/synapse/issues/4135))
26 - Generating the user consent URI no longer fails on Python 3. ([\#4140](https://github.com/matrix-org/synapse/issues/4140), [\#4163](https://github.com/matrix-org/synapse/issues/4163))
27 - Loading URL previews from the DB cache on Postgres will no longer cause Unicode type errors when responding to the request, and URL previews will no longer fail if the remote server returns a Content-Type header with the chartype in quotes. ([\#4157](https://github.com/matrix-org/synapse/issues/4157))
28 - The hash_password script now works on Python 3. ([\#4161](https://github.com/matrix-org/synapse/issues/4161))
29 - Fix noop checks when updating device keys, reducing spurious device list update notifications. ([\#4164](https://github.com/matrix-org/synapse/issues/4164))
30
31
32 Deprecations and Removals
33 -------------------------
34
35 - The disused and un-specced identicon generator has been removed. ([\#4106](https://github.com/matrix-org/synapse/issues/4106))
36 - The obsolete and non-functional /pull federation endpoint has been removed. ([\#4118](https://github.com/matrix-org/synapse/issues/4118))
37 - The deprecated v1 key exchange endpoints have been removed. ([\#4119](https://github.com/matrix-org/synapse/issues/4119))
38 - Synapse will no longer fetch keys using the fallback deprecated v1 key exchange method and will now always use v2. ([\#4120](https://github.com/matrix-org/synapse/issues/4120))
39
40
41 Internal Changes
42 ----------------
43
44 - Fix build of Docker image with docker-compose ([\#3778](https://github.com/matrix-org/synapse/issues/3778))
45 - Delete unreferenced state groups during history purge ([\#4006](https://github.com/matrix-org/synapse/issues/4006))
46 - The "Received rdata" log messages on workers are now logged at DEBUG, not INFO. ([\#4108](https://github.com/matrix-org/synapse/issues/4108))
47 - Reduce replication traffic for device lists ([\#4109](https://github.com/matrix-org/synapse/issues/4109))
48 - Fix `synapse_replication_tcp_protocol_*_commands` metric label to be full command name, rather than just the first character ([\#4110](https://github.com/matrix-org/synapse/issues/4110))
49 - Log some bits about room creation ([\#4121](https://github.com/matrix-org/synapse/issues/4121))
50 - Fix `tox` failure on old systems ([\#4124](https://github.com/matrix-org/synapse/issues/4124))
51 - Add STATE_V2_TEST room version ([\#4128](https://github.com/matrix-org/synapse/issues/4128))
52 - Clean up event accesses and tests ([\#4137](https://github.com/matrix-org/synapse/issues/4137))
53 - The default logging config will now set an explicit log file encoding of UTF-8. ([\#4138](https://github.com/matrix-org/synapse/issues/4138))
54 - Add helpers functions for getting prev and auth events of an event ([\#4139](https://github.com/matrix-org/synapse/issues/4139))
55 - Add some tests for the HTTP pusher. ([\#4149](https://github.com/matrix-org/synapse/issues/4149))
56 - Add purge_history.sh and purge_remote_media.sh scripts to contrib/ ([\#4155](https://github.com/matrix-org/synapse/issues/4155))
57 - HTTP tests have been refactored to contain less boilerplate. ([\#4156](https://github.com/matrix-org/synapse/issues/4156))
58 - Drop incoming events from federation for unknown rooms ([\#4165](https://github.com/matrix-org/synapse/issues/4165))
59
60
061 Synapse 0.33.8 (2018-11-01)
162 ===========================
263
55 services:
66
77 synapse:
8 build: ../..
8 build:
9 context: ../..
10 dockerfile: docker/Dockerfile
911 image: docker.io/matrixdotorg/synapse:latest
10 # Since snyapse does not retry to connect to the database, restart upon
12 # Since synapse does not retry to connect to the database, restart upon
1113 # failure
1214 restart: unless-stopped
1315 # See the readme for a full documentation of the environment settings
4648 # You may store the database tables in a local folder..
4749 - ./schemas:/var/lib/postgresql/data
4850 # .. or store them on some high performance storage for better results
49 # - /path/to/ssd/storage:/var/lib/postfesql/data
51 # - /path/to/ssd/storage:/var/lib/postgresql/data
0 Purge history API examples
1 ==========================
2
3 # `purge_history.sh`
4
5 A bash script that uses the [purge history API](/docs/admin_api/README.rst) to
6 purge all messages in a list of rooms up to a certain event. You can select a
7 timeframe or a number of messages that you want to keep in the room.
8
9 Just configure the variables DOMAIN, ADMIN, ROOMS_ARRAY and TIME at the top of
10 the script.
11
12 # `purge_remote_media.sh`
13
14 A bash script that uses the [admin API](/docs/admin_api/README.rst) to
15 purge all old cached remote media.
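
Both scripts ultimately boil down to authenticated admin API calls. As a rough illustration (not part of the scripts themselves), the media purge call could be made from Python like this; the endpoint path and `before_ts` parameter mirror `purge_remote_media.sh`, while `build_purge_url`, `purge_remote_media` and the example hostname are hypothetical helpers:

```python
import json
from urllib import request


def build_purge_url(api_url, before_ts_ms):
    """Build the purge_media_cache admin endpoint URL; before_ts_ms is a
    Unix timestamp in milliseconds - media cached before it is purged."""
    return "%s/admin/purge_media_cache?before_ts=%d" % (api_url, before_ts_ms)


def purge_remote_media(api_url, token, before_ts_ms):
    """POST to the endpoint with a bearer token, as the shell script does
    with curl, and return the parsed JSON response."""
    req = request.Request(
        build_purge_url(api_url, before_ts_ms),
        data=b"{}",
        method="POST",
        headers={"Authorization": "Bearer %s" % token},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `purge_remote_media("http://yourserver.tld:8008/_matrix/client/r0", TOKEN, 1478400000000)` would purge remote media cached before that timestamp.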
0 #!/bin/bash
1
2 # this script will use the api:
3 # https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst
4 #
5 # It will purge all messages in a list of rooms up to a certain event
6
7 ###################################################################################################
8 # define your domain and admin user
9 ###################################################################################################
10 # your homeserver's domain:
11 DOMAIN=yourserver.tld
12 # add this user as admin in your home server:
13 ADMIN="@your_admin_username:$DOMAIN"
14
15 API_URL="$DOMAIN:8008/_matrix/client/r0"
16
17 ###################################################################################################
18 # choose the rooms to prune old messages from (you can append a free-form comment after '#' at the end of each entry)
19 ###################################################################################################
20 # you can get the room IDs e.g. from the "View Source" button on each message in your Riot client
21 ROOMS_ARRAY=(
22 '!DgvjtOljKujDBrxyHk:matrix.org#riot:matrix.org'
23 '!QtykxKocfZaZOUrTwp:matrix.org#Matrix HQ'
24 )
25
26 # ALTERNATIVELY:
27 # you can select all the rooms that are not encrypted and loop over the result:
28 # SELECT room_id FROM rooms WHERE room_id NOT IN (SELECT DISTINCT room_id FROM events WHERE type ='m.room.encrypted')
29 # or
30 # select all rooms with at least 100 members:
31 # SELECT q.room_id FROM (select count(*) as numberofusers, room_id FROM current_state_events WHERE type ='m.room.member'
32 # GROUP BY room_id) AS q LEFT JOIN room_aliases a ON q.room_id=a.room_id WHERE q.numberofusers > 100 ORDER BY numberofusers desc
33
34 ###################################################################################################
35 # determine the EVENT_ID before which messages should be pruned
36 ###################################################################################################
37 # choose a time before which the messages should be pruned:
38 TIME='12 months ago'
39 # ALTERNATIVELY:
40 # a certain time:
41 # TIME='2016-08-31 23:59:59'
42
43 # creates a timestamp from the given time string:
44 UNIX_TIMESTAMP=$(date +%s%3N --date='TZ="UTC+2" '"$TIME")
45
46 # ALTERNATIVELY:
47 # prune all messages that are older than 1000 messages ago:
48 # LAST_MESSAGES=1000
49 # SQL_GET_EVENT="SELECT event_id from events WHERE type='m.room.message' AND room_id ='$ROOM' ORDER BY received_ts DESC LIMIT 1 offset $(($LAST_MESSAGES - 1))"
50
51 # ALTERNATIVELY:
52 # select the EVENT_ID manually:
53 #EVENT_ID='$1471814088343495zpPNI:matrix.org' # an example event from 21st of Aug 2016 by Matthew
54
55 ###################################################################################################
56 # make the admin user a server admin in the database with
57 ###################################################################################################
58 # psql -A -t --dbname=synapse -c "UPDATE users SET admin=1 WHERE name LIKE '$ADMIN'"
59
60 ###################################################################################################
61 # database function
62 ###################################################################################################
63 sql (){
64 # for sqlite3:
65 #sqlite3 homeserver.db "pragma busy_timeout=20000;$1" | awk '{print $2}'
66 # for postgres:
67 psql -A -t --dbname=synapse -c "$1" | grep -v 'Pager'
68 }
69
70 ###################################################################################################
71 # get an access token
72 ###################################################################################################
73 # e.g. obtain it externally by watching Riot in your browser's network inspector,
74 # or query it locally on the server like this:
75 TOKEN=$(sql "SELECT token FROM access_tokens WHERE user_id='$ADMIN' ORDER BY id DESC LIMIT 1")
76 AUTH="Authorization: Bearer $TOKEN"
77
78 ###################################################################################################
79 # check that your TOKEN works. For example, this should succeed:
80 ###################################################################################################
81 # $ curl --header "$AUTH" "$API_URL/rooms/$ROOM/state/m.room.power_levels"
82
83 ###################################################################################################
84 # finally start pruning the room:
85 ###################################################################################################
86 POSTDATA='{"delete_local_events": true}' # this really deletes local events, so the messages disappear from the room unless they are restored via remote federation
87
88 for ROOM in "${ROOMS_ARRAY[@]}"; do
89 echo "########################################### $(date) ################# "
90 echo "pruning room: $ROOM ..."
91 ROOM=${ROOM%#*}
92 #set -x
93 echo "check for alias in db..."
94 # for postgres:
95 sql "SELECT * FROM room_aliases WHERE room_id='$ROOM'"
96 echo "get event..."
97 # for postgres:
98 EVENT_ID=$(sql "SELECT event_id FROM events WHERE type='m.room.message' AND received_ts<'$UNIX_TIMESTAMP' AND room_id='$ROOM' ORDER BY received_ts DESC LIMIT 1;")
99 if [ "$EVENT_ID" == "" ]; then
100 echo "no event $TIME"
101 else
102 echo "event: $EVENT_ID"
103 SLEEP=2
104 set -x
105 # call purge
106 OUT=$(curl --header "$AUTH" -s -X POST -d "$POSTDATA" "$API_URL/admin/purge_history/$ROOM/$EVENT_ID")
107 PURGE_ID=$(echo "$OUT" |grep purge_id|cut -d'"' -f4 )
108 if [ "$PURGE_ID" == "" ]; then
109 # probably the history purge is already in progress for $ROOM
110 : "continuing with next room"
111 else
112 while : ; do
113 # get status of purge and sleep longer each time if still active
114 sleep $SLEEP
115 STATUS=$(curl --header "$AUTH" -s -X GET "$API_URL/admin/purge_history_status/$PURGE_ID" | grep status | cut -d'"' -f4)
116 : "$ROOM --> Status: $STATUS"
117 [[ "$STATUS" == "active" ]] || break
118 SLEEP=$((SLEEP + 1))
119 done
120 fi
121 set +x
122 sleep 1
123 fi
124 done
125
126
127 ###################################################################################################
128 # additionally
129 ###################################################################################################
130 # to benefit from pruning large amounts of data, you need to call VACUUM to free the unused space.
131 # This can take a very long time (hours) and the server has to be stopped while you do so:
132 # $ synctl stop
133 # $ sqlite3 -line homeserver.db "vacuum;"
134 # $ synctl start
135
136 # Alternatively, this pragma can be set so you don't need to vacuum manually after deleting rows:
137 # $ sqlite3 homeserver.db "PRAGMA auto_vacuum = FULL;"
138 # be cautious: it can make the database somewhat slow if there are a lot of deletions
139
140 exit
0 #!/bin/bash
1
2 DOMAIN=yourserver.tld
3 # add this user as admin in your home server:
4 ADMIN="@your_admin_username:$DOMAIN"
5
6 API_URL="$DOMAIN:8008/_matrix/client/r0"
7
8 # choose a time before which the messages should be pruned:
9 # TIME='2016-08-31 23:59:59'
10 TIME='12 months ago'
11
12 # creates a timestamp from the given time string:
13 UNIX_TIMESTAMP=$(date +%s%3N --date='TZ="UTC+2" '"$TIME")
14
15
16 ###################################################################################################
17 # database function
18 ###################################################################################################
19 sql (){
20 # for sqlite3:
21 #sqlite3 homeserver.db "pragma busy_timeout=20000;$1" | awk '{print $2}'
22 # for postgres:
23 psql -A -t --dbname=synapse -c "$1" | grep -v 'Pager'
24 }
25
26 ###############################################################################
27 # make the admin user a server admin in the database with
28 ###############################################################################
29 # sql "UPDATE users SET admin=1 WHERE name LIKE '$ADMIN'"
30
31 ###############################################################################
32 # get an access token
33 ###############################################################################
34 # e.g. obtain it externally by watching Riot in your browser's network inspector,
35 # or query it locally on the server like this:
36 TOKEN=$(sql "SELECT token FROM access_tokens WHERE user_id='$ADMIN' ORDER BY id DESC LIMIT 1")
37
38 ###############################################################################
39 # check that your TOKEN works. For example, this should succeed:
40 ###############################################################################
41 # curl --header "Authorization: Bearer $TOKEN" "$API_URL/rooms/$ROOM/state/m.room.power_levels"
42
43 ###############################################################################
44 # optional check size before
45 ###############################################################################
46 # echo calculate used storage before ...
47 # du -shc ../.synapse/media_store/*
48
49 ###############################################################################
50 # finally start pruning media:
51 ###############################################################################
52 set -x # for debugging the generated string
53 curl --header "Authorization: Bearer $TOKEN" -v -X POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"
3030 template - currently this must always be `en` (for "English");
3131 internationalisation support is intended for the future.
3232
33 The template for the policy itself should be versioned and named according to
3434 the version: for example `1.0.html`. The version of the policy which the user
3535 has agreed to is stored in the database.
3636
8484 an error "Missing string query parameter 'u'". It is now possible to manually
8585 construct URIs where users can give their consent.
8686
87 ### Enabling consent tracking at registration
88
89 1. Add the following to your configuration:
90
91 ```yaml
92 user_consent:
93 require_at_registration: true
94 policy_name: "Privacy Policy" # or whatever you'd like to call the policy
95 ```
96
97 2. In your consent templates, make use of the `public_version` variable to
98 see if an unauthenticated user is viewing the page. This is typically
99 wrapped around the form that would be used to actually agree to the document:
100
101 ```
102 {% if not public_version %}
103 <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
104 <form method="post" action="consent">
105 <input type="hidden" name="v" value="{{version}}"/>
106 <input type="hidden" name="u" value="{{user}}"/>
107 <input type="hidden" name="h" value="{{userhmac}}"/>
108 <input type="submit" value="Sure thing!"/>
109 </form>
110 {% endif %}
111 ```
112
113 3. Restart Synapse to apply the changes.
114
115 Visiting `https://<server>/_matrix/consent` should now give you a view of the privacy
116 document. This is what users will be able to see when registering for accounts.
117
87118 ### Constructing the consent URI
88119
89120 It may be useful to manually construct the "consent URI" for a given user - for
103134
104135 This should result in a URI which looks something like:
105136 `https://<server>/_matrix/consent?u=<user>&h=68a152465a4d...`.
137
138
139 Note that not providing a `u` parameter will be interpreted as wanting to view
140 the document from an unauthenticated perspective, such as prior to registration.
141 Therefore, the `h` parameter is not required in this scenario. To enable this
142 behaviour, set `require_at_registration` to `true` in your `user_consent` config.
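
Putting the pieces together, a consent URI can be computed with a few lines of Python. This is a sketch under the assumption that `h` is a hex-encoded HMAC-SHA256 of the user id, keyed with the homeserver's `form_secret` (check your Synapse version's documentation for the exact scheme); `build_consent_uri` is a hypothetical helper, not part of Synapse:

```python
import hashlib
import hmac
from urllib.parse import urlencode


def build_consent_uri(base_url, user_id, form_secret):
    """Build a URI of the form https://<server>/_matrix/consent?u=<user>&h=<hmac>.

    Assumes the `h` parameter is a hex-encoded HMAC-SHA256 of the user id,
    keyed with the homeserver's form_secret.
    """
    mac = hmac.new(
        key=form_secret.encode("utf8"),
        msg=user_id.encode("utf8"),
        digestmod=hashlib.sha256,
    ).hexdigest()
    return "%s/_matrix/consent?%s" % (
        base_url.rstrip("/"),
        urlencode({"u": user_id, "h": mac}),
    )
```

Note that `urlencode` takes care of escaping the `@` and `:` in the Matrix user id, which is easy to forget when building the query string by hand.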
106143
107144
108145 Sending users a server notice asking them to agree to the policy
1111 <p>
1212 All your base are belong to us.
1313 </p>
14 <form method="post" action="consent">
15 <input type="hidden" name="v" value="{{version}}"/>
16 <input type="hidden" name="u" value="{{user}}"/>
17 <input type="hidden" name="h" value="{{userhmac}}"/>
18 <input type="submit" value="Sure thing!"/>
19 </form>
14 {% if not public_version %}
15 <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
16 <form method="post" action="consent">
17 <input type="hidden" name="v" value="{{version}}"/>
18 <input type="hidden" name="u" value="{{user}}"/>
19 <input type="hidden" name="h" value="{{userhmac}}"/>
20 <input type="submit" value="Sure thing!"/>
21 </form>
22 {% endif %}
2023 {% endif %}
2124 </body>
2225 </html>
1313
1414 # set up the virtualenv
1515 tox -e py27 --notest -v
16
17 TOX_BIN=$TOX_DIR/py27/bin
18
19 # cryptography 2.2 requires setuptools >= 18.5.
20 #
21 # older versions of virtualenv (?) give us a virtualenv with the same version
22 # of setuptools as is installed on the system python (and tox runs virtualenv
23 # under python3, so we get the version of setuptools that is installed on that).
24 #
25 # anyway, make sure that we have a recent enough setuptools.
26 $TOX_BIN/pip install 'setuptools>=18.5'
27
28 # we also need a semi-recent version of pip, because old ones fail to install
29 # the "enum34" dependency of cryptography.
30 $TOX_BIN/pip install 'pip>=10'
31
32 { python synapse/python_dependencies.py
33 echo lxml
34 } | xargs $TOX_BIN/pip install
22 import argparse
33 import getpass
44 import sys
5 import unicodedata
56
67 import bcrypt
78 import yaml
89
9 bcrypt_rounds=12
10 bcrypt_rounds = 12
1011 password_pepper = ""
12
1113
1214 def prompt_for_pass():
1315 password = getpass.getpass("Password: ")
2224
2325 return password
2426
27
2528 if __name__ == "__main__":
2629 parser = argparse.ArgumentParser(
27 description="Calculate the hash of a new password, so that passwords"
28 " can be reset")
30 description=(
31 "Calculate the hash of a new password, so that passwords can be reset"
32 )
33 )
2934 parser.add_argument(
30 "-p", "--password",
35 "-p",
36 "--password",
3137 default=None,
3238 help="New password for user. Will prompt if omitted.",
3339 )
3440 parser.add_argument(
35 "-c", "--config",
41 "-c",
42 "--config",
3643 type=argparse.FileType('r'),
37 help="Path to server config file. Used to read in bcrypt_rounds and password_pepper.",
44 help=(
45 "Path to server config file. "
46 "Used to read in bcrypt_rounds and password_pepper."
47 ),
3848 )
3949
4050 args = parser.parse_args()
4858 if not password:
4959 password = prompt_for_pass()
5060
51 print bcrypt.hashpw(password + password_pepper, bcrypt.gensalt(bcrypt_rounds))
61 # On Python 2, make sure we decode it to Unicode before we normalise it
62 if isinstance(password, bytes):
63 try:
64 password = password.decode(sys.stdin.encoding)
65 except UnicodeDecodeError:
66 print(
67 "ERROR! Your password is not decodable using your terminal encoding (%s)."
68 % (sys.stdin.encoding,)
69 )
70
71 pw = unicodedata.normalize("NFKC", password)
72
73 hashed = bcrypt.hashpw(
74 pw.encode('utf8') + password_pepper.encode("utf8"),
75 bcrypt.gensalt(bcrypt_rounds),
76 ).decode('ascii')
77
78 print(hashed)
153153 s = requests.Session()
154154 s.mount("matrix://", MatrixConnectionAdapter())
155155
156 headers = {"Host": destination, "Authorization": authorization_headers[0]}
157
158 if method == "POST":
159 headers["Content-Type"] = "application/json"
160
156161 result = s.request(
157162 method=method,
158163 url=dest,
159 headers={"Host": destination, "Authorization": authorization_headers[0]},
164 headers=headers,
160165 verify=False,
161166 data=content,
162167 )
202207 parser.add_argument(
203208 "-X",
204209 "--method",
205 help="HTTP method to use for the request. Defaults to GET if --data is"
210 help="HTTP method to use for the request. Defaults to GET if --body is "
206211 "unspecified, POST if it is.",
207212 )
208213
scripts-dev/make_identicons.pl deleted (+0, -39)
0 #!/usr/bin/env perl
1
2 use strict;
3 use warnings;
4
5 use DBI;
6 use DBD::SQLite;
7 use JSON;
8 use Getopt::Long;
9
10 my $db; # = "homeserver.db";
11 my $server = "http://localhost:8008";
12 my $size = 320;
13
14 GetOptions("db|d=s", \$db,
15 "server|s=s", \$server,
16 "width|w=i", \$size) or usage();
17
18 usage() unless $db;
19
20 my $dbh = DBI->connect("dbi:SQLite:dbname=$db","","") || die $DBI::errstr;
21
22 my $res = $dbh->selectall_arrayref("select token, name from access_tokens, users where access_tokens.user_id = users.id group by user_id") || die $DBI::errstr;
23
24 foreach (@$res) {
25 my ($token, $mxid) = ($_->[0], $_->[1]);
26 my ($user_id) = ($mxid =~ m/@(.*):/);
27 my ($url) = $dbh->selectrow_array("select avatar_url from profiles where user_id=?", undef, $user_id);
28 if (!$url || $url =~ /#auto$/) {
29 `curl -s -o tmp.png "$server/_matrix/media/v1/identicon?name=${mxid}&width=$size&height=$size"`;
30 my $json = `curl -s -X POST -H "Content-Type: image/png" -T "tmp.png" $server/_matrix/media/v1/upload?access_token=$token`;
31 my $content_uri = from_json($json)->{content_uri};
32 `curl -X PUT -H "Content-Type: application/json" --data '{ "avatar_url": "${content_uri}#auto"}' $server/_matrix/client/api/v1/profile/${mxid}/avatar_url?access_token=$token`;
33 }
34 }
35
36 sub usage {
37 die "usage: ./make-identicons.pl\n\t-d database [e.g. homeserver.db]\n\t-s homeserver (default: http://localhost:8008)\n\t-w identicon size in pixels (default 320)";
38 }
2626 except ImportError:
2727 pass
2828
29 __version__ = "0.33.8"
29 __version__ = "0.33.9"
5050 EMAIL_IDENTITY = u"m.login.email.identity"
5151 MSISDN = u"m.login.msisdn"
5252 RECAPTCHA = u"m.login.recaptcha"
53 TERMS = u"m.login.terms"
5354 DUMMY = u"m.login.dummy"
5455
5556 # Only for C/S API v1
6061 class EventTypes(object):
6162 Member = "m.room.member"
6263 Create = "m.room.create"
64 Tombstone = "m.room.tombstone"
6365 JoinRules = "m.room.join_rules"
6466 PowerLevels = "m.room.power_levels"
6567 Aliases = "m.room.aliases"
100102 class RoomVersions(object):
101103 V1 = "1"
102104 VDH_TEST = "vdh-test-version"
105 STATE_V2_TEST = "state-v2-test"
103106
104107
105108 # the version we will give rooms which are created on this server
107110
108111 # vdh-test-version is a placeholder to get room versioning support working and tested
109112 # until we have a working v2.
110 KNOWN_ROOM_VERSIONS = {RoomVersions.V1, RoomVersions.VDH_TEST}
113 KNOWN_ROOM_VERSIONS = {
114 RoomVersions.V1,
115 RoomVersions.VDH_TEST,
116 RoomVersions.STATE_V2_TEST,
117 }
111118
112119 ServerNoticeMsgType = "m.server_notice"
113120 ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"
2727 STATIC_PREFIX = "/_matrix/static"
2828 WEB_CLIENT_PREFIX = "/_matrix/client"
2929 CONTENT_REPO_PREFIX = "/_matrix/content"
30 SERVER_KEY_PREFIX = "/_matrix/key/v1"
3130 SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
3231 MEDIA_PREFIX = "/_matrix/media/r0"
3332 LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
3636 FEDERATION_PREFIX,
3737 LEGACY_MEDIA_PREFIX,
3838 MEDIA_PREFIX,
39 SERVER_KEY_PREFIX,
4039 SERVER_KEY_V2_PREFIX,
4140 STATIC_PREFIX,
4241 WEB_CLIENT_PREFIX,
5857 from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
5958 from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
6059 from synapse.rest import ClientRestResource
61 from synapse.rest.key.v1.server_key_resource import LocalKey
6260 from synapse.rest.key.v2 import KeyApiV2Resource
6361 from synapse.rest.media.v0.content_repository import ContentRepoResource
6462 from synapse.server import HomeServer
235233 )
236234
237235 if name in ["keys", "federation"]:
238 resources.update({
239 SERVER_KEY_PREFIX: LocalKey(self),
240 SERVER_KEY_V2_PREFIX: KeyApiV2Resource(self),
241 })
236 resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
242237
243238 if name == "webclient":
244239 resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
225225 class SynchrotronTyping(object):
226226 def __init__(self, hs):
227227 self._latest_room_serial = 0
228 self._reset()
229
230 def _reset(self):
231 """
232 Reset the typing handler's data caches.
233 """
234 # map room IDs to serial numbers
228235 self._room_serials = {}
236 # map room IDs to sets of users currently typing
229237 self._room_typing = {}
230238
231239 def stream_positions(self):
235243 return {"typing": self._latest_room_serial}
236244
237245 def process_replication_rows(self, token, rows):
246 if self._latest_room_serial > token:
247 # The master has gone backwards. To prevent inconsistent data, just
248 # clear everything.
249 self._reset()
250
251 # Set the latest serial token to whatever the server gave us.
238252 self._latest_room_serial = token
239253
240254 for row in rows:
4141 # until the user consents to the privacy policy. The value of the setting is
4242 # used as the text of the error.
4343 #
44 # 'require_at_registration', if enabled, will add a step to the registration
45 # process, similar to how captcha works. Users will be required to accept the
46 # policy before their account is created.
47 #
48 # 'policy_name' is the display name of the policy users will see when registering
49 # for an account. Has no effect unless `require_at_registration` is enabled.
50 # Defaults to "Privacy Policy".
51 #
4452 # user_consent:
4553 # template_dir: res/templates/privacy
4654 # version: 1.0
5361 # block_events_error: >-
5462 # To continue using this homeserver you must review and agree to the
5563 # terms and conditions at %(consent_uri)s
64 # require_at_registration: False
65 # policy_name: Privacy Policy
5666 #
5767 """
5868
6676 self.user_consent_server_notice_content = None
6777 self.user_consent_server_notice_to_guests = False
6878 self.block_events_without_consent_error = None
79 self.user_consent_at_registration = False
80 self.user_consent_policy_name = "Privacy Policy"
6981
7082 def read_config(self, config):
7183 consent_config = config.get("user_consent")
8294 self.user_consent_server_notice_to_guests = bool(consent_config.get(
8395 "send_server_notice_to_guests", False,
8496 ))
97 self.user_consent_at_registration = bool(consent_config.get(
98 "require_at_registration", False,
99 ))
100 self.user_consent_policy_name = consent_config.get(
101 "policy_name", "Privacy Policy",
102 )
85103
86104 def default_config(self, **kwargs):
87105 return DEFAULT_CONFIG
4949 maxBytes: 104857600
5050 backupCount: 10
5151 filters: [context]
52 encoding: utf8
5253 console:
5354 class: logging.StreamHandler
5455 formatter: precise
1414
1515 import logging
1616
17 from six.moves import urllib
18
1719 from canonicaljson import json
1820
1921 from twisted.internet import defer, reactor
2729
2830 logger = logging.getLogger(__name__)
2931
30 KEY_API_V1 = b"/_matrix/key/v1/"
32 KEY_API_V2 = "/_matrix/key/v2/server/%s"
3133
3234
3335 @defer.inlineCallbacks
34 def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
36 def fetch_server_key(server_name, tls_client_options_factory, key_id):
3537 """Fetch the keys for a remote server."""
3638
3739 factory = SynapseKeyClientFactory()
38 factory.path = path
40 factory.path = KEY_API_V2 % (urllib.parse.quote(key_id), )
3941 factory.host = server_name
4042 endpoint = matrix_federation_endpoint(
4143 reactor, server_name, tls_client_options_factory, timeout=30
00 # -*- coding: utf-8 -*-
11 # Copyright 2014-2016 OpenMarket Ltd
2 # Copyright 2017 New Vector Ltd.
2 # Copyright 2017, 2018 New Vector Ltd.
33 #
44 # Licensed under the Apache License, Version 2.0 (the "License");
55 # you may not use this file except in compliance with the License.
1616 import hashlib
1717 import logging
1818 from collections import namedtuple
19
20 from six.moves import urllib
2119
2220 from signedjson.key import (
2321 decode_verify_key_bytes,
394392
395393 @defer.inlineCallbacks
396394 def get_keys_from_server(self, server_name_and_key_ids):
397 @defer.inlineCallbacks
398 def get_key(server_name, key_ids):
399 keys = None
400 try:
401 keys = yield self.get_server_verify_key_v2_direct(
402 server_name, key_ids
403 )
404 except Exception as e:
405 logger.info(
406 "Unable to get key %r for %r directly: %s %s",
407 key_ids, server_name,
408 type(e).__name__, str(e),
409 )
410
411 if not keys:
412 keys = yield self.get_server_verify_key_v1_direct(
413 server_name, key_ids
414 )
415
416 keys = {server_name: keys}
417
418 defer.returnValue(keys)
419
420395 results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
421396 [
422 run_in_background(get_key, server_name, key_ids)
397 run_in_background(
398 self.get_server_verify_key_v2_direct,
399 server_name,
400 key_ids,
401 )
423402 for server_name, key_ids in server_name_and_key_ids
424403 ],
425404 consumeErrors=True,
524503 continue
525504
526505 (response, tls_certificate) = yield fetch_server_key(
527 server_name, self.hs.tls_client_options_factory,
528 path=("/_matrix/key/v2/server/%s" % (
529 urllib.parse.quote(requested_key_id),
530 )).encode("ascii"),
506 server_name, self.hs.tls_client_options_factory, requested_key_id
531507 )
532508
533509 if (u"signatures" not in response
655631 results[server_name] = response_keys
656632
657633 defer.returnValue(results)
658
659 @defer.inlineCallbacks
660 def get_server_verify_key_v1_direct(self, server_name, key_ids):
661 """Finds a verification key for the server with one of the key ids.
662 Args:
663 server_name (str): The name of the server to fetch a key for.
664 keys_ids (list of str): The key_ids to check for.
665 """
666
667 # Try to fetch the key from the remote server.
668
669 (response, tls_certificate) = yield fetch_server_key(
670 server_name, self.hs.tls_client_options_factory
671 )
672
673 # Check the response.
674
675 x509_certificate_bytes = crypto.dump_certificate(
676 crypto.FILETYPE_ASN1, tls_certificate
677 )
678
679 if ("signatures" not in response
680 or server_name not in response["signatures"]):
681 raise KeyLookupError("Key response not signed by remote server")
682
683 if "tls_certificate" not in response:
684 raise KeyLookupError("Key response missing TLS certificate")
685
686 tls_certificate_b64 = response["tls_certificate"]
687
688 if encode_base64(x509_certificate_bytes) != tls_certificate_b64:
689 raise KeyLookupError("TLS certificate doesn't match")
690
691 # Cache the result in the datastore.
692
693 time_now_ms = self.clock.time_msec()
694
695 verify_keys = {}
696 for key_id, key_base64 in response["verify_keys"].items():
697 if is_signing_algorithm_supported(key_id):
698 key_bytes = decode_base64(key_base64)
699 verify_key = decode_verify_key_bytes(key_id, key_bytes)
700 verify_key.time_added = time_now_ms
701 verify_keys[key_id] = verify_key
702
703 for key_id in response["signatures"][server_name]:
704 if key_id not in response["verify_keys"]:
705 raise KeyLookupError(
706 "Key response must include verification keys for all"
707 " signatures"
708 )
709 if key_id in verify_keys:
710 verify_signed_json(
711 response,
712 server_name,
713 verify_keys[key_id]
714 )
715
716 yield self.store.store_server_certificate(
717 server_name,
718 server_name,
719 time_now_ms,
720 tls_certificate,
721 )
722
723 yield self.store_keys(
724 server_name=server_name,
725 from_server=server_name,
726 verify_keys=verify_keys,
727 )
728
729 defer.returnValue(verify_keys)
730634
731635 def store_keys(self, server_name, from_server, verify_keys):
732636 """Store a collection of verify keys for a given server
199199 membership = event.content["membership"]
200200
201201 # Check if this is the room creator joining:
202 if len(event.prev_events) == 1 and Membership.JOIN == membership:
202 if len(event.prev_event_ids()) == 1 and Membership.JOIN == membership:
203203 # Get room creation event:
204204 key = (EventTypes.Create, "", )
205205 create = auth_events.get(key)
206 if create and event.prev_events[0][0] == create.event_id:
206 if create and event.prev_event_ids()[0] == create.event_id:
207207 if create.content["creator"] == event.state_key:
208208 return
209209
158158 def keys(self):
159159 return six.iterkeys(self._event_dict)
160160
161 def prev_event_ids(self):
162 """Returns the list of prev event IDs. The order matches the order
163 specified in the event, though there is no meaning to it.
164
165 Returns:
166 list[str]: The list of event IDs of this event's prev_events
167 """
168 return [e for e, _ in self.prev_events]
169
170 def auth_event_ids(self):
171 """Returns the list of auth event IDs. The order matches the order
172 specified in the event, though there is no meaning to it.
173
174 Returns:
175 list[str]: The list of event IDs of this event's auth_events
176 """
177 return [e for e, _ in self.auth_events]
178
161179
162180 class FrozenEvent(EventBase):
163181 def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):
161161 p["age_ts"] = request_time - int(p["age"])
162162 del p["age"]
163163
164 # We try and pull out an event ID so that if later checks fail we
165 # can log something sensible. We don't mandate an event ID here in
166 # case future event formats get rid of the key.
167 possible_event_id = p.get("event_id", "<Unknown>")
168
169 # Now we get the room ID so that we can check that we know the
170 # version of the room.
171 room_id = p.get("room_id")
172 if not room_id:
173 logger.info(
174 "Ignoring PDU as it does not have a room_id. Event ID: %s",
175 possible_event_id,
176 )
177 continue
178
179 try:
180 # In future we will actually use the room version to parse the
181 # PDU into an event.
182 yield self.store.get_room_version(room_id)
183 except NotFoundError:
184 logger.info("Ignoring PDU for unknown room_id: %s", room_id)
185 continue
186
164187 event = event_from_pdu_json(p)
165 room_id = event.room_id
166188 pdus_by_room.setdefault(room_id, []).append(event)
167189
168190 pdu_results = {}
321343 )
322344 else:
323345 defer.returnValue((404, ""))
324
325 @defer.inlineCallbacks
326 @log_function
327 def on_pull_request(self, origin, versions):
328 raise NotImplementedError("Pull transactions not implemented")
329346
330347 @defer.inlineCallbacks
331348 def on_query_request(self, query_type, args):
182182 # banned then it won't receive the event because it won't
183183 # be in the room after the ban.
184184 destinations = yield self.state.get_current_hosts_in_room(
185 event.room_id, latest_event_ids=[
186 prev_id for prev_id, _ in event.prev_events
187 ],
185 event.room_id, latest_event_ids=event.prev_event_ids(),
188186 )
189187 except Exception:
190188 logger.exception(
359359 raise
360360
361361 defer.returnValue((code, response))
362
363
364 class FederationPullServlet(BaseFederationServlet):
365 PATH = "/pull/"
366
367 # This is for when someone asks us for everything since version X
368 def on_GET(self, origin, content, query):
369 return self.handler.on_pull_request(query["origin"][0], query["v"])
370362
371363
372364 class FederationEventServlet(BaseFederationServlet):
12601252
12611253 FEDERATION_SERVLET_CLASSES = (
12621254 FederationSendServlet,
1263 FederationPullServlet,
12641255 FederationEventServlet,
12651256 FederationStateServlet,
12661257 FederationStateIdsServlet,
116116 "Require 'transaction_id' to construct a Transaction"
117117 )
118118
119 for p in pdus:
120 p.transaction_id = kwargs["transaction_id"]
121
122119 kwargs["pdus"] = [p.get_pdu_json() for p in pdus]
123120
124121 return Transaction(**kwargs)
5858 LoginType.EMAIL_IDENTITY: self._check_email_identity,
5959 LoginType.MSISDN: self._check_msisdn,
6060 LoginType.DUMMY: self._check_dummy_auth,
61 LoginType.TERMS: self._check_terms_auth,
6162 }
6263 self.bcrypt_rounds = hs.config.bcrypt_rounds
6364
430431 def _check_dummy_auth(self, authdict, _):
431432 return defer.succeed(True)
432433
434 def _check_terms_auth(self, authdict, _):
435 return defer.succeed(True)
436
433437 @defer.inlineCallbacks
434438 def _check_threepid(self, medium, authdict):
435439 if 'threepid_creds' not in authdict:
461465 def _get_params_recaptcha(self):
462466 return {"public_key": self.hs.config.recaptcha_public_key}
463467
468 def _get_params_terms(self):
469 return {
470 "policies": {
471 "privacy_policy": {
472 "version": self.hs.config.user_consent_version,
473 "en": {
474 "name": self.hs.config.user_consent_policy_name,
475 "url": "%s/_matrix/consent?v=%s" % (
476 self.hs.config.public_baseurl,
477 self.hs.config.user_consent_version,
478 ),
479 },
480 },
481 },
482 }
483
464484 def _auth_dict_for_flows(self, flows, session):
465485 public_flows = []
466486 for f in flows:
468488
469489 get_params = {
470490 LoginType.RECAPTCHA: self._get_params_recaptcha,
491 LoginType.TERMS: self._get_params_terms,
471492 }
472493
473494 params = {}
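The `_get_params_terms` hunk above builds the parameters returned to clients for the new `m.login.terms` UI auth stage. A hedged sketch of the payload shape, using illustrative placeholder config values (not real homeserver config):

```python
# Placeholder values standing in for hs.config settings:
user_consent_version = "1.0"
user_consent_policy_name = "Privacy Policy"
public_baseurl = "https://example.com"

# Same structure as _get_params_terms in the diff above:
params = {
    "policies": {
        "privacy_policy": {
            "version": user_consent_version,
            "en": {
                "name": user_consent_policy_name,
                "url": "%s/_matrix/consent?v=%s" % (
                    public_baseurl,
                    user_consent_version,
                ),
            },
        },
    },
}

assert params["policies"]["privacy_policy"]["version"] == "1.0"
assert (params["policies"]["privacy_policy"]["en"]["url"]
        == "https://example.com/_matrix/consent?v=1.0")
```

Clients presenting the terms stage read the localised policy name and URL from this structure.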
137137 )
138138
139139 @defer.inlineCallbacks
140 def delete_association(self, requester, room_alias):
141 # association deletion for human users
142
140 def delete_association(self, requester, room_alias, send_event=True):
141 """Remove an alias from the directory
142
143 (this is only meant for human users; AS users should call
144 delete_appservice_association)
145
146 Args:
147 requester (Requester):
148 room_alias (RoomAlias):
149 send_event (bool): Whether to send an updated m.room.aliases event.
150 Note that, if we delete the canonical alias, we will always attempt
151 to send an m.room.canonical_alias event
152
153 Returns:
154 Deferred[unicode]: room id that the alias used to point to
155
156 Raises:
157 NotFoundError: if the alias doesn't exist
158
159 AuthError: if the user doesn't have perms to delete the alias (ie, the user
160 is neither the creator of the alias, nor a server admin).
161
162 SynapseError: if the alias belongs to an AS
163 """
143164 user_id = requester.user.to_string()
144165
145166 try:
167188 room_id = yield self._delete_association(room_alias)
168189
169190 try:
170 yield self.send_room_alias_update_event(
171 requester,
172 room_id
173 )
191 if send_event:
192 yield self.send_room_alias_update_event(
193 requester,
194 room_id
195 )
174196
175197 yield self._update_canonical_alias(
176198 requester,
1818
1919 from twisted.internet import defer
2020
21 from synapse.api.errors import RoomKeysVersionError, StoreError, SynapseError
21 from synapse.api.errors import NotFoundError, RoomKeysVersionError, StoreError
2222 from synapse.util.async_helpers import Linearizer
2323
2424 logger = logging.getLogger(__name__)
5454 room_id(string): room ID to get keys for, or None to get keys for all rooms
5555 session_id(string): session ID to get keys for, or None to get keys for all
5656 sessions
57 Raises:
58 NotFoundError: if the backup version does not exist
5759 Returns:
5860 A deferred list of dicts giving the session_data and message metadata for
5961 these room keys.
6264 # we deliberately take the lock to get keys so that changing the version
6365 # works atomically
6466 with (yield self._upload_linearizer.queue(user_id)):
67 # make sure the backup version exists
68 try:
69 yield self.store.get_e2e_room_keys_version_info(user_id, version)
70 except StoreError as e:
71 if e.code == 404:
72 raise NotFoundError("Unknown backup version")
73 else:
74 raise
75
6576 results = yield self.store.get_e2e_room_keys(
6677 user_id, version, room_id, session_id
6778 )
68
69 if results['rooms'] == {}:
70 raise SynapseError(404, "No room_keys found")
7179
7280 defer.returnValue(results)
7381
119127 }
120128
121129 Raises:
122 SynapseError: with code 404 if there are no versions defined
130 NotFoundError: if there are no versions defined
123131 RoomKeysVersionError: if the uploaded version is not the current version
124132 """
125133
133141 version_info = yield self.store.get_e2e_room_keys_version_info(user_id)
134142 except StoreError as e:
135143 if e.code == 404:
136 raise SynapseError(404, "Version '%s' not found" % (version,))
144 raise NotFoundError("Version '%s' not found" % (version,))
137145 else:
138146 raise
139147
147155 raise RoomKeysVersionError(current_version=version_info['version'])
148156 except StoreError as e:
149157 if e.code == 404:
150 raise SynapseError(404, "Version '%s' not found" % (version,))
158 raise NotFoundError("Version '%s' not found" % (version,))
151159 else:
152160 raise
153161
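The e2e_room_keys changes above replace `SynapseError(404, ...)` with `NotFoundError` whenever a `StoreError` with code 404 bubbles up. A minimal sketch of that translation pattern, with stand-in exception classes (not Synapse's actual `synapse.api.errors` types):

```python
class StoreError(Exception):
    """Stand-in for synapse.api.errors.StoreError."""
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code

class NotFoundError(Exception):
    """Stand-in for synapse.api.errors.NotFoundError (an M_NOT_FOUND error)."""

def get_version_info(lookup, user_id, version):
    # Translate a storage-layer 404 into the API-layer NotFoundError,
    # re-raising anything else unchanged -- the pattern used in the diff.
    try:
        return lookup(user_id, version)
    except StoreError as e:
        if e.code == 404:
            raise NotFoundError("Version '%s' not found" % (version,))
        raise

def missing(user_id, version):
    raise StoreError(404, "no such version")

try:
    get_version_info(missing, "@alice:example.org", "1")
    raised = False
except NotFoundError:
    raised = True
assert raised
```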
201201 self.room_queues[room_id].append((pdu, origin))
202202 return
203203
204 # If we're no longer in the room just ditch the event entirely. This
205 # is probably an old server that has come back and thinks we're still
206 # in the room (or we've been rejoined to the room by a state reset).
204 # If we're not in the room just ditch the event entirely. This is
205 # probably an old server that has come back and thinks we're still in
206 # the room (or we've been rejoined to the room by a state reset).
207207 #
208 # If we were never in the room then maybe our database got vaped and
209 # we should check if we *are* in fact in the room. If we are then we
210 # can magically rejoin the room.
208 # Note that if we were never in the room then we would have already
209 # dropped the event, since we wouldn't know the room version.
211210 is_in_room = yield self.auth.check_host_in_room(
212211 room_id,
213212 self.server_name
214213 )
215214 if not is_in_room:
216 was_in_room = yield self.store.was_host_joined(
217 pdu.room_id, self.server_name,
218 )
219 if was_in_room:
220 logger.info(
221 "[%s %s] Ignoring PDU from %s as we've left the room",
222 room_id, event_id, origin,
223 )
224 defer.returnValue(None)
215 logger.info(
216 "[%s %s] Ignoring PDU from %s as we're not in the room",
217 room_id, event_id, origin,
218 )
219 defer.returnValue(None)
225220
226221 state = None
227222 auth_chain = []
238233 room_id, event_id, min_depth,
239234 )
240235
241 prevs = {e_id for e_id, _ in pdu.prev_events}
236 prevs = set(pdu.prev_event_ids())
242237 seen = yield self.store.have_seen_events(prevs)
243238
244239 if min_depth and pdu.depth < min_depth:
556551 room_id, event_id, event,
557552 )
558553
559 # FIXME (erikj): Awful hack to make the case where we are not currently
560 # in the room work
561 # If state and auth_chain are None, then we don't need to do this check
562 # as we already know we have enough state in the DB to handle this
563 # event.
564 if state and auth_chain and not event.internal_metadata.is_outlier():
565 is_in_room = yield self.auth.check_host_in_room(
566 room_id,
567 self.server_name
568 )
569 else:
570 is_in_room = True
571
572 if not is_in_room:
554 event_ids = set()
555 if state:
556 event_ids |= {e.event_id for e in state}
557 if auth_chain:
558 event_ids |= {e.event_id for e in auth_chain}
559
560 seen_ids = yield self.store.have_seen_events(event_ids)
561
562 if state and auth_chain is not None:
563 # If we have any state or auth_chain given to us by the replication
564 # layer, then we should handle them (if we haven't before.)
565
566 event_infos = []
567
568 for e in itertools.chain(auth_chain, state):
569 if e.event_id in seen_ids:
570 continue
571 e.internal_metadata.outlier = True
572 auth_ids = e.auth_event_ids()
573 auth = {
574 (e.type, e.state_key): e for e in auth_chain
575 if e.event_id in auth_ids or e.type == EventTypes.Create
576 }
577 event_infos.append({
578 "event": e,
579 "auth_events": auth,
580 })
581 seen_ids.add(e.event_id)
582
573583 logger.info(
574 "[%s %s] Got event for room we're not in",
575 room_id, event_id,
576 )
577
578 try:
579 yield self._persist_auth_tree(
580 origin, auth_chain, state, event
581 )
582 except AuthError as e:
583 raise FederationError(
584 "ERROR",
585 e.code,
586 e.msg,
587 affected=event_id,
588 )
589
590 else:
591 event_ids = set()
592 if state:
593 event_ids |= {e.event_id for e in state}
594 if auth_chain:
595 event_ids |= {e.event_id for e in auth_chain}
596
597 seen_ids = yield self.store.have_seen_events(event_ids)
598
599 if state and auth_chain is not None:
600 # If we have any state or auth_chain given to us by the replication
601 # layer, then we should handle them (if we haven't before.)
602
603 event_infos = []
604
605 for e in itertools.chain(auth_chain, state):
606 if e.event_id in seen_ids:
607 continue
608 e.internal_metadata.outlier = True
609 auth_ids = [e_id for e_id, _ in e.auth_events]
610 auth = {
611 (e.type, e.state_key): e for e in auth_chain
612 if e.event_id in auth_ids or e.type == EventTypes.Create
613 }
614 event_infos.append({
615 "event": e,
616 "auth_events": auth,
617 })
618 seen_ids.add(e.event_id)
619
620 logger.info(
621 "[%s %s] persisting newly-received auth/state events %s",
622 room_id, event_id, [e["event"].event_id for e in event_infos]
623 )
624 yield self._handle_new_events(origin, event_infos)
625
626 try:
627 context = yield self._handle_new_event(
628 origin,
629 event,
630 state=state,
631 )
632 except AuthError as e:
633 raise FederationError(
634 "ERROR",
635 e.code,
636 e.msg,
637 affected=event.event_id,
638 )
584 "[%s %s] persisting newly-received auth/state events %s",
585 room_id, event_id, [e["event"].event_id for e in event_infos]
586 )
587 yield self._handle_new_events(origin, event_infos)
588
589 try:
590 context = yield self._handle_new_event(
591 origin,
592 event,
593 state=state,
594 )
595 except AuthError as e:
596 raise FederationError(
597 "ERROR",
598 e.code,
599 e.msg,
600 affected=event.event_id,
601 )
639602
640603 room = yield self.store.get_room(room_id)
641604
725688 edges = [
726689 ev.event_id
727690 for ev in events
728 if set(e_id for e_id, _ in ev.prev_events) - event_ids
691 if set(ev.prev_event_ids()) - event_ids
729692 ]
730693
731694 logger.info(
752715 required_auth = set(
753716 a_id
754717 for event in events + list(state_events.values()) + list(auth_events.values())
755 for a_id, _ in event.auth_events
718 for a_id in event.auth_event_ids()
756719 )
757720 auth_events.update({
758721 e_id: event_map[e_id] for e_id in required_auth if e_id in event_map
768731 auth_events.update(ret_events)
769732
770733 required_auth.update(
771 a_id for event in ret_events.values() for a_id, _ in event.auth_events
734 a_id for event in ret_events.values() for a_id in event.auth_event_ids()
772735 )
773736 missing_auth = required_auth - set(auth_events)
774737
795758 required_auth.update(
796759 a_id
797760 for event in results if event
798 for a_id, _ in event.auth_events
761 for a_id in event.auth_event_ids()
799762 )
800763 missing_auth = required_auth - set(auth_events)
801764
815778 "auth_events": {
816779 (auth_events[a_id].type, auth_events[a_id].state_key):
817780 auth_events[a_id]
818 for a_id, _ in a.auth_events
781 for a_id in a.auth_event_ids()
819782 if a_id in auth_events
820783 }
821784 })
827790 "auth_events": {
828791 (auth_events[a_id].type, auth_events[a_id].state_key):
829792 auth_events[a_id]
830 for a_id, _ in event_map[e_id].auth_events
793 for a_id in event_map[e_id].auth_event_ids()
831794 if a_id in auth_events
832795 }
833796 })
10401003 Raises:
10411004 SynapseError if the event does not pass muster
10421005 """
1043 if len(ev.prev_events) > 20:
1006 if len(ev.prev_event_ids()) > 20:
10441007 logger.warn("Rejecting event %s which has %i prev_events",
1045 ev.event_id, len(ev.prev_events))
1008 ev.event_id, len(ev.prev_event_ids()))
10461009 raise SynapseError(
10471010 http_client.BAD_REQUEST,
10481011 "Too many prev_events",
10491012 )
10501013
1051 if len(ev.auth_events) > 10:
1014 if len(ev.auth_event_ids()) > 10:
10521015 logger.warn("Rejecting event %s which has %i auth_events",
1053 ev.event_id, len(ev.auth_events))
1016 ev.event_id, len(ev.auth_event_ids()))
10541017 raise SynapseError(
10551018 http_client.BAD_REQUEST,
10561019 "Too many auth_events",
10751038 def on_event_auth(self, event_id):
10761039 event = yield self.store.get_event(event_id)
10771040 auth = yield self.store.get_auth_chain(
1078 [auth_id for auth_id, _ in event.auth_events],
1041 event.auth_event_ids(),
10791042 include_given=True
10801043 )
10811044 defer.returnValue([e for e in auth])
16971660
16981661 missing_auth_events = set()
16991662 for e in itertools.chain(auth_events, state, [event]):
1700 for e_id, _ in e.auth_events:
1663 for e_id in e.auth_event_ids():
17011664 if e_id not in event_map:
17021665 missing_auth_events.add(e_id)
17031666
17161679 for e in itertools.chain(auth_events, state, [event]):
17171680 auth_for_e = {
17181681 (event_map[e_id].type, event_map[e_id].state_key): event_map[e_id]
1719 for e_id, _ in e.auth_events
1682 for e_id in e.auth_event_ids()
17201683 if e_id in event_map
17211684 }
17221685 if create_event:
17841747
17851748 # This is a hack to fix some old rooms where the initial join event
17861749 # didn't reference the create event in its auth events.
1787 if event.type == EventTypes.Member and not event.auth_events:
1788 if len(event.prev_events) == 1 and event.depth < 5:
1750 if event.type == EventTypes.Member and not event.auth_event_ids():
1751 if len(event.prev_event_ids()) == 1 and event.depth < 5:
17891752 c = yield self.store.get_event(
1790 event.prev_events[0][0],
1753 event.prev_event_ids()[0],
17911754 allow_none=True,
17921755 )
17931756 if c and c.type == EventTypes.Create:
18341797
18351798 # Now get the current auth_chain for the event.
18361799 local_auth_chain = yield self.store.get_auth_chain(
1837 [auth_id for auth_id, _ in event.auth_events],
1800 event.auth_event_ids(),
18381801 include_given=True
18391802 )
18401803
18901853 """
18911854 # Check if we have all the auth events.
18921855 current_state = set(e.event_id for e in auth_events.values())
1893 event_auth_events = set(e_id for e_id, _ in event.auth_events)
1856 event_auth_events = set(event.auth_event_ids())
18941857
18951858 if event.is_state():
18961859 event_key = (event.type, event.state_key)
19341897 continue
19351898
19361899 try:
1937 auth_ids = [e_id for e_id, _ in e.auth_events]
1900 auth_ids = e.auth_event_ids()
19381901 auth = {
19391902 (e.type, e.state_key): e for e in remote_auth_chain
19401903 if e.event_id in auth_ids or e.type == EventTypes.Create
19551918 pass
19561919
19571920 have_events = yield self.store.get_seen_events_with_rejections(
1958 [e_id for e_id, _ in event.auth_events]
1921 event.auth_event_ids()
19591922 )
19601923 seen_events = set(have_events.keys())
19611924 except Exception:
20572020 continue
20582021
20592022 try:
2060 auth_ids = [e_id for e_id, _ in ev.auth_events]
2023 auth_ids = ev.auth_event_ids()
20612024 auth = {
20622025 (e.type, e.state_key): e
20632026 for e in result["auth_chain"]
22492212 missing_remote_ids = [e.event_id for e in missing_remotes]
22502213 base_remote_rejected = list(missing_remotes)
22512214 for e in missing_remotes:
2252 for e_id, _ in e.auth_events:
2215 for e_id in e.auth_event_ids():
22532216 if e_id in missing_remote_ids:
22542217 try:
22552218 base_remote_rejected.remove(e)
426426
427427 if event.is_state():
428428 prev_state = yield self.deduplicate_state_event(event, context)
429432 if prev_state is not None:
430 logger.info(
431 "Not bothering to persist duplicate state event %s", event.event_id,
432 )
433434 defer.returnValue(prev_state)
431434
4949 self._auth_handler = hs.get_auth_handler()
5050 self.profile_handler = hs.get_profile_handler()
5151 self.user_directory_handler = hs.get_user_directory_handler()
52 self.room_creation_handler = self.hs.get_room_creation_handler()
5352 self.captcha_client = CaptchaServerHttpClient(hs)
5453
5554 self._next_generated_user_id = None
240239 else:
241240 # create room expects the localpart of the room alias
242241 room_alias_localpart = room_alias.localpart
243 yield self.room_creation_handler.create_room(
242
243 # getting the RoomCreationHandler during init gives a dependency
244 # loop
245 yield self.hs.get_room_creation_handler().create_room(
244246 fake_requester,
245247 config={
246248 "preset": "public_chat",
253255 except Exception as e:
254256 logger.error("Failed to join new user to %r: %r", r, e)
255257
256 # We used to generate default identicons here, but nowadays
257 # we want clients to generate their own as part of their branding
258 # rather than there being consistent matrix-wide ones, so we don't.
259258 defer.returnValue((user_id, token))
260259
261260 @defer.inlineCallbacks
2020 import string
2121 from collections import OrderedDict
2222
23 from six import string_types
23 from six import iteritems, string_types
2424
2525 from twisted.internet import defer
2626
3131 JoinRules,
3232 RoomCreationPreset,
3333 )
34 from synapse.api.errors import AuthError, Codes, StoreError, SynapseError
34 from synapse.api.errors import AuthError, Codes, NotFoundError, StoreError, SynapseError
3535 from synapse.storage.state import StateFilter
3636 from synapse.types import RoomAlias, RoomID, RoomStreamToken, StreamToken, UserID
3737 from synapse.util import stringutils
38 from synapse.util.async_helpers import Linearizer
3839 from synapse.visibility import filter_events_for_client
3940
4041 from ._base import BaseHandler
7273
7374 self.spam_checker = hs.get_spam_checker()
7475 self.event_creation_handler = hs.get_event_creation_handler()
76 self.room_member_handler = hs.get_room_member_handler()
77
78 # linearizer to stop two upgrades happening at once
79 self._upgrade_linearizer = Linearizer("room_upgrade_linearizer")
80
81 @defer.inlineCallbacks
82 def upgrade_room(self, requester, old_room_id, new_version):
83 """Replace a room with a new room with a different version
84
85 Args:
86 requester (synapse.types.Requester): the user requesting the upgrade
87 old_room_id (unicode): the id of the room to be replaced
88 new_version (unicode): the new room version to use
89
90 Returns:
91 Deferred[unicode]: the new room id
92 """
93 yield self.ratelimit(requester)
94
95 user_id = requester.user.to_string()
96
97 with (yield self._upgrade_linearizer.queue(old_room_id)):
98 # start by allocating a new room id
99 r = yield self.store.get_room(old_room_id)
100 if r is None:
101 raise NotFoundError("Unknown room id %s" % (old_room_id,))
102 new_room_id = yield self._generate_room_id(
103 creator_id=user_id, is_public=r["is_public"],
104 )
105
106 logger.info("Creating new room %s to replace %s", new_room_id, old_room_id)
107
108 # we create and auth the tombstone event before properly creating the new
109 # room, to check our user has perms in the old room.
110 tombstone_event, tombstone_context = (
111 yield self.event_creation_handler.create_event(
112 requester, {
113 "type": EventTypes.Tombstone,
114 "state_key": "",
115 "room_id": old_room_id,
116 "sender": user_id,
117 "content": {
118 "body": "This room has been replaced",
119 "replacement_room": new_room_id,
120 }
121 },
122 token_id=requester.access_token_id,
123 )
124 )
125 yield self.auth.check_from_context(tombstone_event, tombstone_context)
126
127 yield self.clone_exiting_room(
128 requester,
129 old_room_id=old_room_id,
130 new_room_id=new_room_id,
131 new_room_version=new_version,
132 tombstone_event_id=tombstone_event.event_id,
133 )
134
135 # now send the tombstone
136 yield self.event_creation_handler.send_nonmember_event(
137 requester, tombstone_event, tombstone_context,
138 )
139
140 old_room_state = yield tombstone_context.get_current_state_ids(self.store)
141
142 # update any aliases
143 yield self._move_aliases_to_new_room(
144 requester, old_room_id, new_room_id, old_room_state,
145 )
146
147 # and finally, shut down the PLs in the old room, and update them in the new
148 # room.
149 yield self._update_upgraded_room_pls(
150 requester, old_room_id, new_room_id, old_room_state,
151 )
152
153 defer.returnValue(new_room_id)
154
155 @defer.inlineCallbacks
156 def _update_upgraded_room_pls(
157 self, requester, old_room_id, new_room_id, old_room_state,
158 ):
159 """Send updated power levels in both rooms after an upgrade
160
161 Args:
162 requester (synapse.types.Requester): the user requesting the upgrade
163 old_room_id (unicode): the id of the room to be replaced
164 new_room_id (unicode): the id of the replacement room
165 old_room_state (dict[tuple[str, str], str]): the state map for the old room
166
167 Returns:
168 Deferred
169 """
170 old_room_pl_event_id = old_room_state.get((EventTypes.PowerLevels, ""))
171
172 if old_room_pl_event_id is None:
173 logger.warning(
174 "Not supported: upgrading a room with no PL event. Not setting PLs "
175 "in old room.",
176 )
177 return
178
179 old_room_pl_state = yield self.store.get_event(old_room_pl_event_id)
180
181 # we try to stop regular users from speaking by setting the PL required
182 # to send regular events and invites to 'Moderator' level. That's normally
183 # 50, but if the default PL in a room is 50 or more, then we set the
184 # required PL above that.
185
186 pl_content = dict(old_room_pl_state.content)
187 users_default = int(pl_content.get("users_default", 0))
188 restricted_level = max(users_default + 1, 50)
189
190 updated = False
191 for v in ("invite", "events_default"):
192 current = int(pl_content.get(v, 0))
193 if current < restricted_level:
194 logger.info(
195 "Setting level for %s in %s to %i (was %i)",
196 v, old_room_id, restricted_level, current,
197 )
198 pl_content[v] = restricted_level
199 updated = True
200 else:
201 logger.info(
202 "Not setting level for %s (already %i)",
203 v, current,
204 )
205
206 if updated:
207 try:
208 yield self.event_creation_handler.create_and_send_nonmember_event(
209 requester, {
210 "type": EventTypes.PowerLevels,
211 "state_key": '',
212 "room_id": old_room_id,
213 "sender": requester.user.to_string(),
214 "content": pl_content,
215 }, ratelimit=False,
216 )
217 except AuthError as e:
218 logger.warning("Unable to update PLs in old room: %s", e)
219
220 logger.info("Setting correct PLs in new room")
221 yield self.event_creation_handler.create_and_send_nonmember_event(
222 requester, {
223 "type": EventTypes.PowerLevels,
224 "state_key": '',
225 "room_id": new_room_id,
226 "sender": requester.user.to_string(),
227 "content": old_room_pl_state.content,
228 }, ratelimit=False,
229 )
230
231 @defer.inlineCallbacks
232 def clone_exiting_room(
233 self, requester, old_room_id, new_room_id, new_room_version,
234 tombstone_event_id,
235 ):
236 """Populate a new room based on an old room
237
238 Args:
239 requester (synapse.types.Requester): the user requesting the upgrade
240 old_room_id (unicode): the id of the room to be replaced
241 new_room_id (unicode): the id to give the new room (should already have been
242 created with _generate_room_id())
243 new_room_version (unicode): the new room version to use
244 tombstone_event_id (unicode|str): the ID of the tombstone event in the old
245 room.
246 Returns:
247 Deferred[None]
248 """
249 user_id = requester.user.to_string()
250
251 if not self.spam_checker.user_may_create_room(user_id):
252 raise SynapseError(403, "You are not permitted to create rooms")
253
254 creation_content = {
255 "room_version": new_room_version,
256 "predecessor": {
257 "room_id": old_room_id,
258 "event_id": tombstone_event_id,
259 }
260 }
261
262 initial_state = dict()
263
264 types_to_copy = (
265 (EventTypes.JoinRules, ""),
266 (EventTypes.Name, ""),
267 (EventTypes.Topic, ""),
268 (EventTypes.RoomHistoryVisibility, ""),
269 (EventTypes.GuestAccess, ""),
270 (EventTypes.RoomAvatar, ""),
271 )
272
273 old_room_state_ids = yield self.store.get_filtered_current_state_ids(
274 old_room_id, StateFilter.from_types(types_to_copy),
275 )
276 # map from event_id to BaseEvent
277 old_room_state_events = yield self.store.get_events(old_room_state_ids.values())
278
279 for k, old_event_id in iteritems(old_room_state_ids):
280 old_event = old_room_state_events.get(old_event_id)
281 if old_event:
282 initial_state[k] = old_event.content
283
284 yield self._send_events_for_new_room(
285 requester,
286 new_room_id,
287
288 # we expect to override all the presets with initial_state, so this is
289 # somewhat arbitrary.
290 preset_config=RoomCreationPreset.PRIVATE_CHAT,
291
292 invite_list=[],
293 initial_state=initial_state,
294 creation_content=creation_content,
295 )
296
297 # XXX invites/joins
298 # XXX 3pid invites
299
300 @defer.inlineCallbacks
301 def _move_aliases_to_new_room(
302 self, requester, old_room_id, new_room_id, old_room_state,
303 ):
304 directory_handler = self.hs.get_handlers().directory_handler
305
306 aliases = yield self.store.get_aliases_for_room(old_room_id)
307
308 # check to see if we have a canonical alias.
309 canonical_alias = None
310 canonical_alias_event_id = old_room_state.get((EventTypes.CanonicalAlias, ""))
311 if canonical_alias_event_id:
312 canonical_alias_event = yield self.store.get_event(canonical_alias_event_id)
313 if canonical_alias_event:
314 canonical_alias = canonical_alias_event.content.get("alias", "")
315
316 # first we try to remove the aliases from the old room (we suppress sending
317 # the room_aliases event until the end).
318 #
 319 # Note that we'll only be able to remove aliases that (a) aren't owned by an AS,
 320 # and (b) were created by this user (unless the user is a server admin).
321 #
322 # This is probably correct - given we don't allow such aliases to be deleted
323 # normally, it would be odd to allow it in the case of doing a room upgrade -
324 # but it makes the upgrade less effective, and you have to wonder why a room
325 # admin can't remove aliases that point to that room anyway.
326 # (cf https://github.com/matrix-org/synapse/issues/2360)
327 #
328 removed_aliases = []
329 for alias_str in aliases:
330 alias = RoomAlias.from_string(alias_str)
331 try:
332 yield directory_handler.delete_association(
333 requester, alias, send_event=False,
334 )
335 removed_aliases.append(alias_str)
336 except SynapseError as e:
337 logger.warning(
338 "Unable to remove alias %s from old room: %s",
339 alias, e,
340 )
341
 342 # if we didn't find any aliases, or couldn't remove any, we can skip the rest
343 # of this.
344 if not removed_aliases:
345 return
346
347 try:
348 # this can fail if, for some reason, our user doesn't have perms to send
349 # m.room.aliases events in the old room (note that we've already checked that
350 # they have perms to send a tombstone event, so that's not terribly likely).
351 #
352 # If that happens, it's regrettable, but we should carry on: it's the same
353 # as when you remove an alias from the directory normally - it just means that
354 # the aliases event gets out of sync with the directory
355 # (cf https://github.com/vector-im/riot-web/issues/2369)
356 yield directory_handler.send_room_alias_update_event(
357 requester, old_room_id,
358 )
359 except AuthError as e:
360 logger.warning(
361 "Failed to send updated alias event on old room: %s", e,
362 )
363
364 # we can now add any aliases we successfully removed to the new room.
365 for alias in removed_aliases:
366 try:
367 yield directory_handler.create_association(
368 requester, RoomAlias.from_string(alias),
369 new_room_id, servers=(self.hs.hostname, ),
370 send_event=False,
371 )
372 logger.info("Moved alias %s to new room", alias)
373 except SynapseError as e:
374 # I'm not really expecting this to happen, but it could if the spam
375 # checking module decides it shouldn't, or similar.
376 logger.error(
377 "Error adding alias %s to new room: %s",
378 alias, e,
379 )
380
381 try:
382 if canonical_alias and (canonical_alias in removed_aliases):
383 yield self.event_creation_handler.create_and_send_nonmember_event(
384 requester,
385 {
386 "type": EventTypes.CanonicalAlias,
387 "state_key": "",
388 "room_id": new_room_id,
389 "sender": requester.user.to_string(),
390 "content": {"alias": canonical_alias, },
391 },
392 ratelimit=False
393 )
394
395 yield directory_handler.send_room_alias_update_event(
396 requester, new_room_id,
397 )
398 except SynapseError as e:
399 # again I'm not really expecting this to fail, but if it does, I'd rather
400 # we returned the new room to the client at this point.
401 logger.error(
402 "Unable to send updated alias events in new room: %s", e,
403 )
75404
76405 @defer.inlineCallbacks
77406 def create_room(self, requester, config, ratelimit=True,
164493 visibility = config.get("visibility", None)
165494 is_public = visibility == "public"
166495
167 # autogen room IDs and try to create it. We may clash, so just
168 # try a few times till one goes through, giving up eventually.
169 attempts = 0
170 room_id = None
171 while attempts < 5:
172 try:
173 random_string = stringutils.random_string(18)
174 gen_room_id = RoomID(
175 random_string,
176 self.hs.hostname,
177 )
178 yield self.store.store_room(
179 room_id=gen_room_id.to_string(),
180 room_creator_user_id=user_id,
181 is_public=is_public
182 )
183 room_id = gen_room_id.to_string()
184 break
185 except StoreError:
186 attempts += 1
187 if not room_id:
188 raise StoreError(500, "Couldn't generate a room ID.")
496 room_id = yield self._generate_room_id(creator_id=user_id, is_public=is_public)
189497
190498 if room_alias:
191499 directory_handler = self.hs.get_handlers().directory_handler
215523 # override any attempt to set room versions via the creation_content
216524 creation_content["room_version"] = room_version
217525
218 room_member_handler = self.hs.get_room_member_handler()
219
220526 yield self._send_events_for_new_room(
221527 requester,
222528 room_id,
223 room_member_handler,
224529 preset_config=preset_config,
225530 invite_list=invite_list,
226531 initial_state=initial_state,
227532 creation_content=creation_content,
228533 room_alias=room_alias,
229 power_level_content_override=config.get("power_level_content_override", {}),
534 power_level_content_override=config.get("power_level_content_override"),
230535 creator_join_profile=creator_join_profile,
231536 )
232537
262567 if is_direct:
263568 content["is_direct"] = is_direct
264569
265 yield room_member_handler.update_membership(
570 yield self.room_member_handler.update_membership(
266571 requester,
267572 UserID.from_string(invitee),
268573 room_id,
300605 self,
301606 creator, # A Requester object.
302607 room_id,
303 room_member_handler,
304608 preset_config,
305609 invite_list,
306610 initial_state,
307611 creation_content,
308 room_alias,
309 power_level_content_override,
310 creator_join_profile,
612 room_alias=None,
613 power_level_content_override=None,
614 creator_join_profile=None,
311615 ):
312616 def create(etype, content, **kwargs):
313617 e = {
323627 @defer.inlineCallbacks
324628 def send(etype, content, **kwargs):
325629 event = create(etype, content, **kwargs)
630 logger.info("Sending %s in new room", etype)
326631 yield self.event_creation_handler.create_and_send_nonmember_event(
327632 creator,
328633 event,
345650 content=creation_content,
346651 )
347652
348 yield room_member_handler.update_membership(
653 logger.info("Sending %s in new room", EventTypes.Member)
654 yield self.room_member_handler.update_membership(
349655 creator,
350656 creator.user,
351657 room_id,
387693 for invitee in invite_list:
388694 power_level_content["users"][invitee] = 100
389695
390 power_level_content.update(power_level_content_override)
696 if power_level_content_override:
697 power_level_content.update(power_level_content_override)
391698
392699 yield send(
393700 etype=EventTypes.PowerLevels,
425732 state_key=state_key,
426733 content=content,
427734 )
735
736 @defer.inlineCallbacks
737 def _generate_room_id(self, creator_id, is_public):
 738 # autogenerate room IDs and try to store one. We may clash, so just
 739 # try a few times till one goes through, giving up eventually.
740 attempts = 0
741 while attempts < 5:
742 try:
743 random_string = stringutils.random_string(18)
744 gen_room_id = RoomID(
745 random_string,
746 self.hs.hostname,
747 ).to_string()
748 if isinstance(gen_room_id, bytes):
749 gen_room_id = gen_room_id.decode('utf-8')
750 yield self.store.store_room(
751 room_id=gen_room_id,
752 room_creator_user_id=creator_id,
753 is_public=is_public,
754 )
755 defer.returnValue(gen_room_id)
756 except StoreError:
757 attempts += 1
758 raise StoreError(500, "Couldn't generate a room ID.")
428759
429760
430761 class RoomContextHandler(object):
6262 self._member_typing_until = {} # clock time we expect to stop
6363 self._member_last_federation_poke = {}
6464
65 # map room IDs to serial numbers
66 self._room_serials = {}
6765 self._latest_room_serial = 0
68 # map room IDs to sets of users currently typing
69 self._room_typing = {}
66 self._reset()
7067
7168 # caches which room_ids changed at which serials
7269 self._typing_stream_change_cache = StreamChangeCache(
7774 self._handle_timeouts,
7875 5000,
7976 )
77
78 def _reset(self):
79 """
80 Reset the typing handler's data caches.
81 """
82 # map room IDs to serial numbers
83 self._room_serials = {}
84 # map room IDs to sets of users currently typing
85 self._room_typing = {}
8086
8187 def _handle_timeouts(self):
8288 logger.info("Checking for typing timeouts")
467467 Args:
468468 request (twisted.web.http.Request): The http request to add CORs to.
469469 """
470 request.setHeader("Access-Control-Allow-Origin", "*")
470 request.setHeader(b"Access-Control-Allow-Origin", b"*")
471471 request.setHeader(
472 "Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS"
472 b"Access-Control-Allow-Methods", b"GET, POST, PUT, DELETE, OPTIONS"
473473 )
474474 request.setHeader(
475 "Access-Control-Allow-Headers",
476 "Origin, X-Requested-With, Content-Type, Accept, Authorization"
475 b"Access-Control-Allow-Headers",
476 b"Origin, X-Requested-With, Content-Type, Accept, Authorization"
477477 )
478478
479479
120120
121121 Args:
122122 request: the twisted HTTP request.
123 name (bytes/unicode): the name of the query parameter.
124 default (bytes/unicode|None): value to use if the parameter is absent,
123 name (bytes|unicode): the name of the query parameter.
124 default (bytes|unicode|None): value to use if the parameter is absent,
125125 defaults to None. Must be bytes if encoding is None.
126126 required (bool): whether to raise a 400 SynapseError if the
127127 parameter is absent, defaults to False.
128 allowed_values (list[bytes/unicode]): List of allowed values for the
128 allowed_values (list[bytes|unicode]): List of allowed values for the
129129 string, or None if any value is allowed, defaults to None. Must be
130130 the same type as name, if given.
131 encoding: The encoding to decode the name to, and decode the string
132 content with.
131 encoding (str|None): The encoding to decode the string content with.
133132
134133 Returns:
135134 bytes/unicode|None: A string value or the default. Unicode if encoding
8484 self.timed_call = None
8585
8686 def on_new_notifications(self, min_stream_ordering, max_stream_ordering):
87 self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
87 if self.max_stream_ordering:
88 self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
89 else:
90 self.max_stream_ordering = max_stream_ordering
8891 self._start_processing()
8992
9093 def on_new_receipts(self, min_stream_id, max_stream_id):
310310 ]
311311 }
312312 }
313 if event.type == 'm.room.member':
313 if event.type == 'm.room.member' and event.is_state():
314314 d['notification']['membership'] = event.content['membership']
315315 d['notification']['user_is_target'] = event.state_key == self.user_id
316 if self.hs.config.push_include_content and 'content' in event:
316 if self.hs.config.push_include_content and event.content:
317317 d['notification']['content'] = event.content
318318
319319 # We no longer send aliases separately, instead, we send the human
2525 import jinja2
2626
2727 from twisted.internet import defer
28 from twisted.mail.smtp import sendmail
2928
3029 from synapse.api.constants import EventTypes
3130 from synapse.api.errors import StoreError
8483 self.notif_template_html = notif_template_html
8584 self.notif_template_text = notif_template_text
8685
86 self.sendmail = self.hs.get_sendmail()
8787 self.store = self.hs.get_datastore()
8888 self.macaroon_gen = self.hs.get_macaroon_generator()
8989 self.state_handler = self.hs.get_state_handler()
190190 multipart_msg.attach(html_part)
191191
192192 logger.info("Sending email push notification to %s" % email_address)
193 # logger.debug(html_text)
194
195 yield sendmail(
193
194 yield self.sendmail(
196195 self.hs.config.email_smtp_host,
197 raw_from, raw_to, multipart_msg.as_string(),
196 raw_from, raw_to, multipart_msg.as_string().encode('utf8'),
197 reactor=self.hs.get_reactor(),
198198 port=self.hs.config.email_smtp_port,
199199 requireAuthentication=self.hs.config.email_smtp_user is not None,
200200 username=self.hs.config.email_smtp_user,
332332 notif_events, user_id, reason):
333333 if len(notifs_by_room) == 1:
334334 # Only one room has new stuff
335 room_id = notifs_by_room.keys()[0]
335 room_id = list(notifs_by_room.keys())[0]
336336
337337 # If the room has some kind of name, use it, but we don't
338338 # want the generated-from-names one here otherwise we'll
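The `list(notifs_by_room.keys())[0]` change above is a Python 3 compatibility fix: `dict.keys()` returns a non-indexable view object on Python 3. A minimal sketch:

```python
# On Python 2, dict.keys() returned a list and could be indexed directly;
# on Python 3 it returns a view, so wrap it in list() first.
notifs_by_room = {"!room:example.org": ["a notification"]}
room_id = list(notifs_by_room.keys())[0]
assert room_id == "!room:example.org"
```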
123123
124124 # XXX: optimisation: cache our pattern regexps
125125 if condition['key'] == 'content.body':
126 body = self._event["content"].get("body", None)
126 body = self._event.content.get("body", None)
127127 if not body:
128128 return False
129129
139139 if not display_name:
140140 return False
141141
142 body = self._event["content"].get("body", None)
142 body = self._event.content.get("body", None)
143143 if not body:
144144 return False
145145
5050 "daemonize>=2.3.1": ["daemonize"],
5151 "bcrypt>=3.1.0": ["bcrypt>=3.1.0"],
5252 "pillow>=3.1.2": ["PIL"],
53 "pydenticon>=0.2": ["pydenticon"],
5453 "sortedcontainers>=1.4.4": ["sortedcontainers"],
5554 "psutil>=2.0.0": ["psutil>=2.0.0"],
5655 "pysaml2>=3.0.0": ["saml2"],
105105
106106 Can be overridden in subclasses to handle more.
107107 """
108 logger.info("Received rdata %s -> %s", stream_name, token)
108 logger.debug("Received rdata %s -> %s", stream_name, token)
109109 return self.store.process_replication_rows(stream_name, token, rows)
110110
111111 def on_position(self, stream_name, token):
655655 "",
656656 ["command", "name"],
657657 lambda: {
658 (k[0], p.name,): count
658 (k, p.name,): count
659659 for p in connected_connections
660660 for k, count in iteritems(p.inbound_commands_counter)
661661 },
666666 "",
667667 ["command", "name"],
668668 lambda: {
669 (k[0], p.name,): count
669 (k, p.name,): count
670670 for p in connected_connections
671671 for k, count in iteritems(p.outbound_commands_counter)
672672 },
4646 register,
4747 report_event,
4848 room_keys,
49 room_upgrade_rest_servlet,
4950 sendtodevice,
5051 sync,
5152 tags,
115116 sendtodevice.register_servlets(hs, client_resource)
116117 user_directory.register_servlets(hs, client_resource)
117118 groups.register_servlets(hs, client_resource)
119 room_upgrade_rest_servlet.register_servlets(hs, client_resource)
6767 </html>
6868 """
6969
70 TERMS_TEMPLATE = """
71 <html>
72 <head>
73 <title>Authentication</title>
74 <meta name='viewport' content='width=device-width, initial-scale=1,
75 user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
76 <link rel="stylesheet" href="/_matrix/static/client/register/style.css">
77 </head>
78 <body>
79 <form id="registrationForm" method="post" action="%(myurl)s">
80 <div>
81 <p>
82 Please click the button below if you agree to the
 83 <a href="%(terms_url)s">privacy policy of this homeserver</a>.
84 </p>
85 <input type="hidden" name="session" value="%(session)s" />
86 <input type="submit" value="Agree" />
87 </div>
88 </form>
89 </body>
90 </html>
91 """
92
7093 SUCCESS_TEMPLATE = """
7194 <html>
7295 <head>
132155 request.write(html_bytes)
133156 finish_request(request)
134157 defer.returnValue(None)
158 elif stagetype == LoginType.TERMS:
159 session = request.args['session'][0]
160
161 html = TERMS_TEMPLATE % {
162 'session': session,
163 'terms_url': "%s/_matrix/consent?v=%s" % (
164 self.hs.config.public_baseurl,
165 self.hs.config.user_consent_version,
166 ),
167 'myurl': "%s/auth/%s/fallback/web" % (
168 CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
169 ),
170 }
171 html_bytes = html.encode("utf8")
172 request.setResponseCode(200)
173 request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
174 request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
175
176 request.write(html_bytes)
177 finish_request(request)
178 defer.returnValue(None)
135179 else:
136180 raise SynapseError(404, "Unknown auth stage type")
137181
138182 @defer.inlineCallbacks
139183 def on_POST(self, request, stagetype):
140184 yield
141 if stagetype == "m.login.recaptcha":
185 if stagetype == LoginType.RECAPTCHA:
142186 if ('g-recaptcha-response' not in request.args or
143187 len(request.args['g-recaptcha-response'])) == 0:
144188 raise SynapseError(400, "No captcha response supplied")
178222 finish_request(request)
179223
180224 defer.returnValue(None)
225 elif stagetype == LoginType.TERMS:
 226 if ('session' not in request.args or
 227 len(request.args['session']) == 0):
228 raise SynapseError(400, "No session supplied")
229
230 session = request.args['session'][0]
231 authdict = {'session': session}
232
233 success = yield self.auth_handler.add_oob_auth(
234 LoginType.TERMS,
235 authdict,
236 self.hs.get_ip_from_request(request)
237 )
238
239 if success:
240 html = SUCCESS_TEMPLATE
241 else:
242 html = TERMS_TEMPLATE % {
243 'session': session,
244 'terms_url': "%s/_matrix/consent?v=%s" % (
245 self.hs.config.public_baseurl,
246 self.hs.config.user_consent_version,
247 ),
248 'myurl': "%s/auth/%s/fallback/web" % (
249 CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
250 ),
251 }
252 html_bytes = html.encode("utf8")
253 request.setResponseCode(200)
254 request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
255 request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
256
257 request.write(html_bytes)
258 finish_request(request)
259 defer.returnValue(None)
181260 else:
182261 raise SynapseError(404, "Unknown auth stage type")
183262
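A note on the argument checks in these fallback handlers: the `('x' not in request.args or len(request.args['x'])) == 0` pattern parses as `(A or B) == 0`, not `A or (B == 0)`, so when the key is missing the `or` short-circuits to `True` and `True == 0` is `False`, meaning the guard never fires. A minimal demonstration:

```python
# Operator-precedence pitfall: the parenthesised `or` is compared to 0
# as a whole, so a missing key evaluates to True == 0, which is False.
args = {}
buggy = ('session' not in args or len(args.get('session', []))) == 0
correct = 'session' not in args or len(args['session']) == 0
assert buggy is False
assert correct is True
```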
358358 [LoginType.MSISDN, LoginType.EMAIL_IDENTITY]
359359 ])
360360
361 # Append m.login.terms to all flows if we're requiring consent
362 if self.hs.config.user_consent_at_registration:
 363 for flow in flows:
 364 flow.append(LoginType.TERMS)
367
361368 auth_result, params, session_id = yield self.auth_handler.check_auth(
362369 flows, body, self.hs.get_ip_from_request(request)
363370 )
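The consent change above appends `m.login.terms` to each registration flow in place; a minimal sketch, with illustrative flow contents:

```python
# Append the terms stage to every flow in place (flow contents illustrative).
flows = [["m.login.recaptcha"], ["m.login.email.identity"]]
for flow in flows:
    flow.append("m.login.terms")
assert all(flow[-1] == "m.login.terms" for flow in flows)
```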
442449 yield self._register_msisdn_threepid(
443450 registered_user_id, threepid, return_dict["access_token"],
444451 params.get("bind_msisdn")
452 )
453
454 if auth_result and LoginType.TERMS in auth_result:
455 logger.info("%s has consented to the privacy policy" % registered_user_id)
456 yield self.store.user_set_consent_version(
457 registered_user_id, self.hs.config.user_consent_version,
445458 )
446459
447460 defer.returnValue((200, return_dict))
1616
1717 from twisted.internet import defer
1818
19 from synapse.api.errors import Codes, SynapseError
19 from synapse.api.errors import Codes, NotFoundError, SynapseError
2020 from synapse.http.servlet import (
2121 RestServlet,
2222 parse_json_object_from_request,
207207 user_id, version, room_id, session_id
208208 )
209209
210 # Convert room_keys to the right format to return.
210211 if session_id:
211 room_keys = room_keys['rooms'][room_id]['sessions'][session_id]
212 # If the client requests a specific session, but that session was
213 # not backed up, then return an M_NOT_FOUND.
214 if room_keys['rooms'] == {}:
215 raise NotFoundError("No room_keys found")
216 else:
217 room_keys = room_keys['rooms'][room_id]['sessions'][session_id]
212218 elif room_id:
213 room_keys = room_keys['rooms'][room_id]
219 # If the client requests all sessions from a room, but no sessions
220 # are found, then return an empty result rather than an error, so
221 # that clients don't have to handle an error condition, and an
222 # empty result is valid. (Similarly if the client requests all
223 # sessions from the backup, but in that case, room_keys is already
224 # in the right format, so we don't need to do anything about it.)
225 if room_keys['rooms'] == {}:
226 room_keys = {'sessions': {}}
227 else:
228 room_keys = room_keys['rooms'][room_id]
214229
215230 defer.returnValue((200, room_keys))
216231
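The response shaping above distinguishes a missing specific session (an error) from an empty room listing (a valid empty result). A sketch of that logic, with `KeyError` standing in for the `M_NOT_FOUND` error:

```python
# A specific session that was never backed up is an error; an empty room
# listing is a valid empty result; a whole-backup query passes through.
def shape_response(room_keys, room_id=None, session_id=None):
    if session_id:
        if room_keys['rooms'] == {}:
            raise KeyError("No room_keys found")
        return room_keys['rooms'][room_id]['sessions'][session_id]
    if room_id:
        if room_keys['rooms'] == {}:
            return {'sessions': {}}
        return room_keys['rooms'][room_id]
    return room_keys

assert shape_response({'rooms': {}}, room_id="!r:hs") == {'sessions': {}}
```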
0 # -*- coding: utf-8 -*-
1 # Copyright 2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16
17 from twisted.internet import defer
18
19 from synapse.api.constants import KNOWN_ROOM_VERSIONS
20 from synapse.api.errors import Codes, SynapseError
21 from synapse.http.servlet import (
22 RestServlet,
23 assert_params_in_dict,
24 parse_json_object_from_request,
25 )
26
27 from ._base import client_v2_patterns
28
29 logger = logging.getLogger(__name__)
30
31
32 class RoomUpgradeRestServlet(RestServlet):
33 """Handler for room uprade requests.
34
35 Handles requests of the form:
36
37 POST /_matrix/client/r0/rooms/$roomid/upgrade HTTP/1.1
38 Content-Type: application/json
39
40 {
41 "new_version": "2",
42 }
43
44 Creates a new room and shuts down the old one. Returns the ID of the new room.
45
46 Args:
47 hs (synapse.server.HomeServer):
48 """
49 PATTERNS = client_v2_patterns(
50 # /rooms/$roomid/upgrade
51 "/rooms/(?P<room_id>[^/]*)/upgrade$",
52 v2_alpha=False,
53 )
54
55 def __init__(self, hs):
56 super(RoomUpgradeRestServlet, self).__init__()
57 self._hs = hs
58 self._room_creation_handler = hs.get_room_creation_handler()
59 self._auth = hs.get_auth()
60
61 @defer.inlineCallbacks
62 def on_POST(self, request, room_id):
63 requester = yield self._auth.get_user_by_req(request)
64
65 content = parse_json_object_from_request(request)
66 assert_params_in_dict(content, ("new_version", ))
67 new_version = content["new_version"]
68
69 if new_version not in KNOWN_ROOM_VERSIONS:
70 raise SynapseError(
71 400,
72 "Your homeserver does not support this room version",
73 Codes.UNSUPPORTED_ROOM_VERSION,
74 )
75
76 new_room_id = yield self._room_creation_handler.upgrade_room(
77 requester, room_id, new_version
78 )
79
80 ret = {
81 "replacement_room": new_room_id,
82 }
83
84 defer.returnValue((200, ret))
85
86
87 def register_servlets(hs, http_server):
88 RoomUpgradeRestServlet(hs).register(http_server)
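The body validation the servlet performs can be mirrored in isolation; this is a hypothetical sketch, and the version set here is illustrative rather than Synapse's actual `KNOWN_ROOM_VERSIONS`:

```python
# Hypothetical request-body validation mirroring RoomUpgradeRestServlet.
KNOWN_ROOM_VERSIONS = {"1", "2"}  # illustrative placeholder set

def validate_upgrade_body(content):
    if "new_version" not in content:
        raise ValueError("Missing params: new_version")
    if content["new_version"] not in KNOWN_ROOM_VERSIONS:
        raise ValueError("Your homeserver does not support this room version")
    return content["new_version"]

assert validate_upgrade_body({"new_version": "2"}) == "2"
```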
136136 request (twisted.web.http.Request):
137137 """
138138
139 version = parse_string(request, "v",
140 default=self._default_consent_version)
141 username = parse_string(request, "u", required=True)
142 userhmac = parse_string(request, "h", required=True, encoding=None)
143
144 self._check_hash(username, userhmac)
145
146 if username.startswith('@'):
147 qualified_user_id = username
148 else:
149 qualified_user_id = UserID(username, self.hs.hostname).to_string()
150
151 u = yield self.store.get_user_by_id(qualified_user_id)
152 if u is None:
153 raise NotFoundError("Unknown user")
139 version = parse_string(request, "v", default=self._default_consent_version)
140 username = parse_string(request, "u", required=False, default="")
141 userhmac = None
142 has_consented = False
143 public_version = username == ""
144 if not public_version:
145 userhmac_bytes = parse_string(request, "h", required=True, encoding=None)
146
147 self._check_hash(username, userhmac_bytes)
148
149 if username.startswith('@'):
150 qualified_user_id = username
151 else:
152 qualified_user_id = UserID(username, self.hs.hostname).to_string()
153
154 u = yield self.store.get_user_by_id(qualified_user_id)
155 if u is None:
156 raise NotFoundError("Unknown user")
157
158 has_consented = u["consent_version"] == version
159 userhmac = userhmac_bytes.decode("ascii")
154160
155161 try:
156162 self._render_template(
157163 request, "%s.html" % (version,),
158 user=username, userhmac=userhmac, version=version,
159 has_consented=(u["consent_version"] == version),
164 user=username,
165 userhmac=userhmac,
166 version=version,
167 has_consented=has_consented,
168 public_version=public_version,
160169 )
161170 except TemplateNotFound:
162171 raise NotFoundError("Unknown policy version")
222231 key=self._hmac_secret,
223232 msg=userid.encode('utf-8'),
224233 digestmod=sha256,
225 ).hexdigest()
234 ).hexdigest().encode('ascii')
226235
227236 if not compare_digest(want_mac, userhmac):
228237 raise SynapseError(http_client.FORBIDDEN, "HMAC incorrect")
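The `.encode('ascii')` added to the MAC computation above matters because `compare_digest` needs both operands to be the same type. A minimal sketch, with an illustrative placeholder secret:

```python
import hmac
from hashlib import sha256

# Sketch of the consent-URL MAC check; the secret is a placeholder.
# Encoding the hex digest to bytes keeps both compare_digest operands
# the same type under Python 3.
secret = b"form_secret"
userid = "@user:example.com"
want_mac = hmac.new(
    key=secret,
    msg=userid.encode('utf-8'),
    digestmod=sha256,
).hexdigest().encode('ascii')

assert hmac.compare_digest(want_mac, want_mac)
```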
+0
-14
synapse/rest/key/v1/__init__.py
0 # -*- coding: utf-8 -*-
1 # Copyright 2015, 2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
+0
-92
synapse/rest/key/v1/server_key_resource.py
0 # -*- coding: utf-8 -*-
1 # Copyright 2014-2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import logging
17
18 from canonicaljson import encode_canonical_json
19 from signedjson.sign import sign_json
20 from unpaddedbase64 import encode_base64
21
22 from OpenSSL import crypto
23 from twisted.web.resource import Resource
24
25 from synapse.http.server import respond_with_json_bytes
26
27 logger = logging.getLogger(__name__)
28
29
30 class LocalKey(Resource):
31 """HTTP resource containing encoding the TLS X.509 certificate and NACL
32 signature verification keys for this server::
33
34 GET /key HTTP/1.1
35
36 HTTP/1.1 200 OK
37 Content-Type: application/json
38 {
39 "server_name": "this.server.example.com"
40 "verify_keys": {
41 "algorithm:version": # base64 encoded NACL verification key.
42 },
43 "tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
44 "signatures": {
45 "this.server.example.com": {
46 "algorithm:version": # NACL signature for this server.
47 }
48 }
49 }
50 """
51
52 def __init__(self, hs):
53 self.response_body = encode_canonical_json(
54 self.response_json_object(hs.config)
55 )
56 Resource.__init__(self)
57
58 @staticmethod
59 def response_json_object(server_config):
60 verify_keys = {}
61 for key in server_config.signing_key:
62 verify_key_bytes = key.verify_key.encode()
63 key_id = "%s:%s" % (key.alg, key.version)
64 verify_keys[key_id] = encode_base64(verify_key_bytes)
65
66 x509_certificate_bytes = crypto.dump_certificate(
67 crypto.FILETYPE_ASN1,
68 server_config.tls_certificate
69 )
70 json_object = {
71 u"server_name": server_config.server_name,
72 u"verify_keys": verify_keys,
73 u"tls_certificate": encode_base64(x509_certificate_bytes)
74 }
75 for key in server_config.signing_key:
76 json_object = sign_json(
77 json_object,
78 server_config.server_name,
79 key,
80 )
81
82 return json_object
83
84 def render_GET(self, request):
85 return respond_with_json_bytes(
86 request, 200, self.response_body,
87 )
88
89 def getChild(self, name, request):
90 if name == b'':
91 return self
+0
-68
synapse/rest/media/v1/identicon_resource.py
0 # Copyright 2015, 2016 OpenMarket Ltd
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from pydenticon import Generator
15
16 from twisted.web.resource import Resource
17
18 from synapse.http.servlet import parse_integer
19
20 FOREGROUND = [
21 "rgb(45,79,255)",
22 "rgb(254,180,44)",
23 "rgb(226,121,234)",
24 "rgb(30,179,253)",
25 "rgb(232,77,65)",
26 "rgb(49,203,115)",
27 "rgb(141,69,170)"
28 ]
29
30 BACKGROUND = "rgb(224,224,224)"
31 SIZE = 5
32
33
34 class IdenticonResource(Resource):
35 isLeaf = True
36
37 def __init__(self):
38 Resource.__init__(self)
39 self.generator = Generator(
40 SIZE, SIZE, foreground=FOREGROUND, background=BACKGROUND,
41 )
42
43 def generate_identicon(self, name, width, height):
44 v_padding = width % SIZE
45 h_padding = height % SIZE
46 top_padding = v_padding // 2
47 left_padding = h_padding // 2
48 bottom_padding = v_padding - top_padding
49 right_padding = h_padding - left_padding
50 width -= v_padding
51 height -= h_padding
52 padding = (top_padding, bottom_padding, left_padding, right_padding)
53 identicon = self.generator.generate(
54 name, width, height, padding=padding
55 )
56 return identicon
57
58 def render_GET(self, request):
59 name = "/".join(request.postpath)
60 width = parse_integer(request, "width", default=96)
61 height = parse_integer(request, "height", default=96)
62 identicon_bytes = self.generate_identicon(name, width, height)
63 request.setHeader(b"Content-Type", b"image/png")
64 request.setHeader(
65 b"Cache-Control", b"public,max-age=86400,s-maxage=86400"
66 )
67 return identicon_bytes
4444 from .config_resource import MediaConfigResource
4545 from .download_resource import DownloadResource
4646 from .filepath import MediaFilePaths
47 from .identicon_resource import IdenticonResource
4847 from .media_storage import MediaStorage
4948 from .preview_url_resource import PreviewUrlResource
5049 from .storage_provider import StorageProviderWrapper
768767 self.putChild(b"thumbnail", ThumbnailResource(
769768 hs, media_repo, media_repo.media_storage,
770769 ))
771 self.putChild(b"identicon", IdenticonResource())
772770 if hs.config.url_preview_enabled:
773771 self.putChild(b"preview_url", PreviewUrlResource(
774772 hs, media_repo, media_repo.media_storage,
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14
1415 import cgi
1516 import datetime
1617 import errno
2324 import sys
2425 import traceback
2526
27 import six
2628 from six import string_types
2729 from six.moves import urllib_parse as urlparse
2830
9799 # XXX: if get_user_by_req fails, what should we do in an async render?
98100 requester = yield self.auth.get_user_by_req(request)
99101 url = parse_string(request, "url")
100 if "ts" in request.args:
102 if b"ts" in request.args:
101103 ts = parse_integer(request, "ts")
102104 else:
103105 ts = self.clock.time_msec()
179181 cache_result["expires_ts"] > ts and
180182 cache_result["response_code"] / 100 == 2
181183 ):
182 defer.returnValue(cache_result["og"])
 184 # Some databases (such as PostgreSQL) may store it as text rather than
 185 # bytes. If so, encode it back to bytes before handing it on.
186 og = cache_result["og"]
187 if isinstance(og, six.text_type):
188 og = og.encode('utf8')
189 defer.returnValue(og)
183190 return
184191
185192 media_info = yield self._download_url(url, user)
212219 elif _is_html(media_info['media_type']):
213220 # TODO: somehow stop a big HTML tree from exploding synapse's RAM
214221
215 file = open(media_info['filename'])
216 body = file.read()
217 file.close()
222 with open(media_info['filename'], 'rb') as file:
223 body = file.read()
218224
219225 # clobber the encoding from the content-type, or default to utf-8
220226 # XXX: this overrides any <meta/> or XML charset headers in the body
221227 # which may pose problems, but so far seems to work okay.
222 match = re.match(r'.*; *charset=(.*?)(;|$)', media_info['media_type'], re.I)
228 match = re.match(
229 r'.*; *charset="?(.*?)"?(;|$)',
230 media_info['media_type'],
231 re.I
232 )
223233 encoding = match.group(1) if match else "utf-8"
224234
225235 og = decode_and_calc_og(body, media_info['uri'], encoding)
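The widened regex above now tolerates quoted charset values in `Content-Type` headers (`charset="utf-8"` as well as `charset=utf-8`). The matching logic can be sketched in isolation (function name is illustrative, not from the patch):

```python
import re


def extract_charset(media_type, default="utf-8"):
    """Pull the charset parameter out of a Content-Type value,
    tolerating both quoted and unquoted forms, e.g.
    'text/html; charset=utf-8' and 'text/html; charset="utf-8"'."""
    match = re.match(r'.*; *charset="?(.*?)"?(;|$)', media_type, re.I)
    return match.group(1) if match else default
```

The lazy inner group plus the optional quotes means the same pattern handles both header styles without a second regex.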
2222 import logging
2323
2424 from twisted.enterprise import adbapi
25 from twisted.mail.smtp import sendmail
2526 from twisted.web.client import BrowserLikePolicyForHTTPS
2627
2728 from synapse.api.auth import Auth
173174 'message_handler',
174175 'pagination_handler',
175176 'room_context_handler',
177 'sendmail',
176178 ]
177179
178180 # This is overridden in derived application classes
267269
268270 def build_room_creation_handler(self):
269271 return RoomCreationHandler(self)
272
273 def build_sendmail(self):
274 return sendmail
270275
271276 def build_state_handler(self):
272277 return StateHandler(self)
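Registering `sendmail` in `DEPENDENCIES` with a `build_sendmail` method makes it an injectable homeserver dependency, which the email pusher test later in this diff exploits by passing a stub. A minimal sketch of that lazy `build_*` pattern, using hypothetical names rather than Synapse's actual `HomeServer` machinery:

```python
class DependencyContainer:
    """Minimal sketch of the build_* pattern: each dependency is
    resolved lazily via a build_<name> method and then cached, and
    tests can inject a replacement up front."""

    def __init__(self, **overrides):
        self._cache = dict(overrides)  # test doubles go here

    def get(self, name):
        if name not in self._cache:
            self._cache[name] = getattr(self, "build_" + name)()
        return self._cache[name]

    def build_sendmail(self):
        # stand-in for twisted.mail.smtp.sendmail
        def sendmail(*args, **kwargs):
            return ("sent", args, kwargs)
        return sendmail
```

With this shape, production code calls `get("sendmail")` and never imports the SMTP implementation directly.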
66 import synapse.handlers.deactivate_account
77 import synapse.handlers.device
88 import synapse.handlers.e2e_keys
9 import synapse.handlers.room
10 import synapse.handlers.room_member
11 import synapse.handlers.message
912 import synapse.handlers.set_password
1013 import synapse.rest.media.v1.media_repository
1114 import synapse.server_notices.server_notices_manager
4952 def get_room_creation_handler(self) -> synapse.handlers.room.RoomCreationHandler:
5053 pass
5154
55 def get_room_member_handler(self) -> synapse.handlers.room_member.RoomMemberHandler:
56 pass
57
5258 def get_event_creation_handler(self) -> synapse.handlers.message.EventCreationHandler:
5359 pass
5460
260260 logger.debug("calling resolve_state_groups from compute_event_context")
261261
262262 entry = yield self.resolve_state_groups_for_events(
263 event.room_id, [e for e, _ in event.prev_events],
263 event.room_id, event.prev_event_ids(),
264264 )
265265
266266 prev_state_ids = entry.state
606606 return v1.resolve_events_with_store(
607607 state_sets, event_map, state_res_store.get_events,
608608 )
609 elif room_version == RoomVersions.VDH_TEST:
609 elif room_version in (RoomVersions.VDH_TEST, RoomVersions.STATE_V2_TEST):
610610 return v2.resolve_events_with_store(
611611 state_sets, event_map, state_res_store,
612612 )
5252
5353 logger.debug("Computing conflicted state")
5454
55 # We use event_map as a cache, so if it's None we need to initialize it

56 if event_map is None:
57 event_map = {}
58
5559 # First split up the un/conflicted state
5660 unconflicted_state, conflicted_state = _seperate(state_sets)
5761
154158 event = yield _get_event(event_id, event_map, state_res_store)
155159
156160 pl = None
157 for aid, _ in event.auth_events:
161 for aid in event.auth_event_ids():
158162 aev = yield _get_event(aid, event_map, state_res_store)
159163 if (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
160164 pl = aev
162166
163167 if pl is None:
164168 # Couldn't find power level. Check if they're the creator of the room
165 for aid, _ in event.auth_events:
169 for aid in event.auth_event_ids():
166170 aev = yield _get_event(aid, event_map, state_res_store)
167171 if (aev.type, aev.state_key) == (EventTypes.Create, ""):
168172 if aev.content.get("creator") == event.sender:
294298 graph.setdefault(eid, set())
295299
296300 event = yield _get_event(eid, event_map, state_res_store)
297 for aid, _ in event.auth_events:
301 for aid in event.auth_event_ids():
298302 if aid in auth_diff:
299303 if aid not in graph:
300304 state.append(aid)
364368 event = event_map[event_id]
365369
366370 auth_events = {}
367 for aid, _ in event.auth_events:
371 for aid in event.auth_event_ids():
368372 ev = yield _get_event(aid, event_map, state_res_store)
369373
370374 if ev.rejected_reason is None:
412416 while pl:
413417 mainline.append(pl)
414418 pl_ev = yield _get_event(pl, event_map, state_res_store)
415 auth_events = pl_ev.auth_events
419 auth_events = pl_ev.auth_event_ids()
416420 pl = None
417 for aid, _ in auth_events:
421 for aid in auth_events:
418422 ev = yield _get_event(aid, event_map, state_res_store)
419423 if (ev.type, ev.state_key) == (EventTypes.PowerLevels, ""):
420424 pl = aid
459463 if depth is not None:
460464 defer.returnValue(depth)
461465
462 auth_events = event.auth_events
466 auth_events = event.auth_event_ids()
463467 event = None
464468
465 for aid, _ in auth_events:
469 for aid in auth_events:
466470 aev = yield _get_event(aid, event_map, state_res_store)
467471 if (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
468472 event = aev
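The loops above switch from unpacking `(event_id, hash)` pairs out of `event.auth_events` to calling `event.auth_event_ids()`, hiding the tuple layout behind an accessor. A hedged sketch of why that helps (the two event classes below are illustrative stand-ins, not Synapse's real event types):

```python
class EventV1:
    """Older-style event: auth_events is a list of (event_id, hashes) pairs."""
    def __init__(self, auth_events):
        self.auth_events = auth_events

    def auth_event_ids(self):
        return [eid for eid, _ in self.auth_events]


class EventV3:
    """Hypothetical later format: auth_events is already a plain list of IDs."""
    def __init__(self, auth_events):
        self.auth_events = auth_events

    def auth_event_ids(self):
        return list(self.auth_events)


def power_level_event_id(event, type_by_id):
    # Callers iterate IDs without caring about the underlying format.
    for aid in event.auth_event_ids():
        if type_by_id.get(aid) == "m.room.power_levels":
            return aid
    return None


lookup = {"$pl": "m.room.power_levels", "$create": "m.room.create"}
v1_result = power_level_event_id(EventV1([("$pl", {}), ("$create", {})]), lookup)
v3_result = power_level_event_id(EventV3(["$pl", "$create"]), lookup)
```

The same calling code then works unchanged if a future room version changes how auth events are referenced.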
2121
2222 from synapse.api.errors import StoreError
2323 from synapse.metrics.background_process_metrics import run_as_background_process
24 from synapse.storage.background_updates import BackgroundUpdateStore
2425 from synapse.util.caches.descriptors import cached, cachedInlineCallbacks, cachedList
2526
26 from ._base import Cache, SQLBaseStore, db_to_json
27 from ._base import Cache, db_to_json
2728
2829 logger = logging.getLogger(__name__)
2930
30
31 class DeviceStore(SQLBaseStore):
31 DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES = (
32 "drop_device_list_streams_non_unique_indexes"
33 )
34
35
36 class DeviceStore(BackgroundUpdateStore):
3237 def __init__(self, db_conn, hs):
3338 super(DeviceStore, self).__init__(db_conn, hs)
3439
4954 index_name="device_lists_stream_user_id",
5055 table="device_lists_stream",
5156 columns=["user_id", "device_id"],
57 )
58
59 # create a unique index on device_lists_remote_cache
60 self.register_background_index_update(
61 "device_lists_remote_cache_unique_idx",
62 index_name="device_lists_remote_cache_unique_id",
63 table="device_lists_remote_cache",
64 columns=["user_id", "device_id"],
65 unique=True,
66 )
67
68 # And one on device_lists_remote_extremeties
69 self.register_background_index_update(
70 "device_lists_remote_extremeties_unique_idx",
71 index_name="device_lists_remote_extremeties_unique_idx",
72 table="device_lists_remote_extremeties",
73 columns=["user_id"],
74 unique=True,
75 )
76
77 # once they complete, we can remove the old non-unique indexes.
78 self.register_background_update_handler(
79 DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES,
80 self._drop_device_list_streams_non_unique_indexes,
5281 )
5382
5483 @defer.inlineCallbacks
238267
239268 def update_remote_device_list_cache_entry(self, user_id, device_id, content,
240269 stream_id):
241 """Updates a single user's device in the cache.
270 """Updates a single device in the cache of a remote user's devicelist.
271
272 Note: assumes that we are the only thread that can be updating this user's
273 device list.
274
275 Args:
276 user_id (str): User to update device list for
277 device_id (str): ID of the device being updated
278 content (dict): new data on this device
279 stream_id (int): the version of the device list
280
281 Returns:
282 Deferred[None]
242283 """
243284 return self.runInteraction(
244285 "update_remote_device_list_cache_entry",
271312 },
272313 values={
273314 "content": json.dumps(content),
274 }
315 },
316
317 # we don't need to lock, because we assume we are the only thread
318 # updating this user's devices.
319 lock=False,
275320 )
276321
277322 txn.call_after(self._get_cached_user_device.invalidate, (user_id, device_id,))
288333 },
289334 values={
290335 "stream_id": stream_id,
291 }
336 },
337
338 # again, we can assume we are the only thread updating this user's
339 # extremity.
340 lock=False,
292341 )
293342
294343 def update_remote_device_list_cache(self, user_id, devices, stream_id):
295 """Replace the cache of the remote user's devices.
344 """Replace the entire cache of the remote user's devices.
345
346 Note: assumes that we are the only thread that can be updating this user's
347 device list.
348
349 Args:
350 user_id (str): User to update device list for
351 devices (list[dict]): list of device objects supplied over federation
352 stream_id (int): the version of the device list
353
354 Returns:
355 Deferred[None]
296356 """
297357 return self.runInteraction(
298358 "update_remote_device_list_cache",
337397 },
338398 values={
339399 "stream_id": stream_id,
340 }
400 },
401
402 # we don't need to lock, because we can assume we are the only thread
403 # updating this user's extremity.
404 lock=False,
341405 )
342406
343407 def get_devices_by_remote(self, destination, from_stream_id):
588652 combined list of changes to devices, and which destinations need to be
589653 poked. `destination` may be None if no destinations need to be poked.
590654 """
655 # We do a group by here as there can be a large number of duplicate
656 # entries, since we throw away device IDs.
591657 sql = """
592 SELECT stream_id, user_id, destination FROM device_lists_stream
658 SELECT MAX(stream_id) AS stream_id, user_id, destination
659 FROM device_lists_stream
593660 LEFT JOIN device_lists_outbound_pokes USING (stream_id, user_id, device_id)
594661 WHERE ? < stream_id AND stream_id <= ?
662 GROUP BY user_id, destination
595663 """
596664 return self._execute(
597665 "get_all_device_list_changes_for_remotes", None,
717785 "_prune_old_outbound_device_pokes",
718786 _prune_txn,
719787 )
788
789 @defer.inlineCallbacks
790 def _drop_device_list_streams_non_unique_indexes(self, progress, batch_size):
791 def f(conn):
792 txn = conn.cursor()
793 txn.execute(
794 "DROP INDEX IF EXISTS device_lists_remote_cache_id"
795 )
796 txn.execute(
797 "DROP INDEX IF EXISTS device_lists_remote_extremeties_id"
798 )
799 txn.close()
800
801 yield self.runWithConnection(f)
802 yield self._end_background_update(DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES)
803 defer.returnValue(1)
117117 these room keys.
118118 """
119119
120 try:
121 version = int(version)
122 except ValueError:
123 defer.returnValue({'rooms': {}})
124
120125 keyvalues = {
121126 "user_id": user_id,
122127 "version": version,
211216 Raises:
212217 StoreError: with code 404 if there are no e2e_room_keys_versions present
213218 Returns:
214 A deferred dict giving the info metadata for this backup version
219 A deferred dict giving the info metadata for this backup version, with
220 fields including:
221 version(str)
222 algorithm(str)
223 auth_data(object): opaque dict supplied by the client
215224 """
216225
217226 def _get_e2e_room_keys_version_info_txn(txn):
218227 if version is None:
219228 this_version = self._get_current_version(txn, user_id)
220229 else:
221 this_version = version
230 try:
231 this_version = int(version)
232 except ValueError:
233 # Our versions are all ints so if we can't convert it to an integer,
234 # it isn't there.
235 raise StoreError(404, "No row found")
222236
223237 result = self._simple_select_one_txn(
224238 txn,
235249 ),
236250 )
237251 result["auth_data"] = json.loads(result["auth_data"])
252 result["version"] = str(result["version"])
238253 return result
239254
240255 return self.runInteraction(
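Since backup versions are now stored as integers, a version string that cannot be parsed as an int is treated exactly like a missing row. A sketch of that lookup logic, with a stand-in `StoreError` and an in-memory dict in place of the real table:

```python
class StoreError(Exception):
    """Stand-in for synapse.api.errors.StoreError."""
    def __init__(self, code, msg):
        Exception.__init__(self, msg)
        self.code = code


def get_version_info(versions, version):
    """Look up backup metadata; `versions` maps int -> info dict.

    Versions are all ints in the store, so a string that can't be
    converted is reported as not-found rather than hitting the DB
    with a type it can't compare.
    """
    try:
        this_version = int(version)
    except ValueError:
        raise StoreError(404, "No row found")
    if this_version not in versions:
        raise StoreError(404, "No row found")
    result = dict(versions[this_version])
    result["version"] = str(this_version)  # clients see versions as strings
    return result


# a bogus version string is reported as missing:
try:
    get_version_info({1: {"algorithm": "m.megolm_backup.v1"}}, "bogus_version")
    not_found_code = None
except StoreError as e:
    not_found_code = e.code
```

Note the round-trip: integers in storage (so `MAX()` works), strings at the API boundary.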
3939 allow_none=True,
4040 )
4141
42 new_key_json = encode_canonical_json(device_keys)
42 # In py3 we need old_key_json to match new_key_json type. The DB
43 # returns unicode while encode_canonical_json returns bytes.
44 new_key_json = encode_canonical_json(device_keys).decode("utf-8")
45
4346 if old_key_json == new_key_json:
4447 return False
4548
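The fix above matters because on Python 3 `encode_canonical_json` returns bytes while the database hands back text, so the `old_key_json == new_key_json` comparison would never succeed and every re-upload of identical keys would look like a change. Decoding to text restores the comparison; a sketch using `json.dumps` with sorted keys as a stand-in for canonical JSON:

```python
import json


def canonical_json_text(obj):
    """Stand-in for encode_canonical_json(...).decode("utf-8"): a
    deterministic text encoding comparable with a TEXT column value."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":"))


def key_changed(old_key_json, device_keys):
    # old_key_json is str (as returned by the DB); encode the new keys
    # to the same type before comparing.
    return old_key_json != canonical_json_text(device_keys)


stored = canonical_json_text({"b": 2, "a": 1})
```

Comparing like types is the whole fix; the encoding itself is unchanged.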
476476 "is_state": False,
477477 }
478478 for ev in events
479 for e_id, _ in ev.prev_events
479 for e_id in ev.prev_event_ids()
480480 ],
481481 )
482482
509509
510510 txn.executemany(query, [
511511 (e_id, ev.room_id, e_id, ev.room_id, e_id, ev.room_id, False)
512 for ev in events for e_id, _ in ev.prev_events
512 for ev in events for e_id in ev.prev_event_ids()
513513 if not ev.internal_metadata.is_outlier()
514514 ])
515515
3737 from synapse.storage.background_updates import BackgroundUpdateStore
3838 from synapse.storage.event_federation import EventFederationStore
3939 from synapse.storage.events_worker import EventsWorkerStore
40 from synapse.storage.state import StateGroupWorkerStore
4041 from synapse.types import RoomStreamToken, get_domain_from_id
4142 from synapse.util import batch_iter
4243 from synapse.util.async_helpers import ObservableDeferred
204205
205206 # inherits from EventFederationStore so that we can call _update_backward_extremities
206207 # and _handle_mult_prev_events (though arguably those could both be moved in here)
207 class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore):
208 class EventsStore(StateGroupWorkerStore, EventFederationStore, EventsWorkerStore,
209 BackgroundUpdateStore):
208210 EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
209211 EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
210212
413415 )
414416 if len_1:
415417 all_single_prev_not_state = all(
416 len(event.prev_events) == 1
418 len(event.prev_event_ids()) == 1
417419 and not event.is_state()
418420 for event, ctx in ev_ctx_rm
419421 )
437439 # guess this by looking at the prev_events and checking
438440 # if they match the current forward extremities.
439441 for ev, _ in ev_ctx_rm:
440 prev_event_ids = set(e for e, _ in ev.prev_events)
442 prev_event_ids = set(ev.prev_event_ids())
441443 if latest_event_ids == prev_event_ids:
442444 state_delta_reuse_delta_counter.inc()
443445 break
548550 result.difference_update(
549551 e_id
550552 for event in new_events
551 for e_id, _ in event.prev_events
553 for e_id in event.prev_event_ids()
552554 )
553555
554556 # Finally, remove any events which are prev_events of any existing events.
866868 "auth_id": auth_id,
867869 }
868870 for event, _ in events_and_contexts
869 for auth_id, _ in event.auth_events
871 for auth_id in event.auth_event_ids()
870872 if event.is_state()
871873 ],
872874 )
20332035
20342036 logger.info("[purge] finding redundant state groups")
20352037
2036 # Get all state groups that are only referenced by events that are
2037 # to be deleted.
2038 # This works by first getting state groups that we may want to delete,
2039 # joining against event_to_state_groups to get events that use that
2040 # state group, then left joining against events_to_purge again. Any
2041 # state group where the left join produce *no nulls* are referenced
2042 # only by events that are going to be purged.
2038 # Get all state groups that are referenced by events that are to be
2039 # deleted. We then go and check if they are referenced by other events
2040 # or state groups, and if not we delete them.
20432041 txn.execute("""
2044 SELECT state_group FROM
2045 (
2046 SELECT DISTINCT state_group FROM events_to_purge
2047 INNER JOIN event_to_state_groups USING (event_id)
2048 ) AS sp
2049 INNER JOIN event_to_state_groups USING (state_group)
2050 LEFT JOIN events_to_purge AS ep USING (event_id)
2051 GROUP BY state_group
2052 HAVING SUM(CASE WHEN ep.event_id IS NULL THEN 1 ELSE 0 END) = 0
2042 SELECT DISTINCT state_group FROM events_to_purge
2043 INNER JOIN event_to_state_groups USING (event_id)
20532044 """)
20542045
2055 state_rows = txn.fetchall()
2056 logger.info("[purge] found %i redundant state groups", len(state_rows))
2057
2058 # make a set of the redundant state groups, so that we can look them up
2059 # efficiently
2060 state_groups_to_delete = set([sg for sg, in state_rows])
2061
2062 # Now we get all the state groups that rely on these state groups
2063 logger.info("[purge] finding state groups which depend on redundant"
2064 " state groups")
2065 remaining_state_groups = []
2066 for i in range(0, len(state_rows), 100):
2067 chunk = [sg for sg, in state_rows[i:i + 100]]
2068 # look for state groups whose prev_state_group is one we are about
2069 # to delete
2070 rows = self._simple_select_many_txn(
2071 txn,
2072 table="state_group_edges",
2073 column="prev_state_group",
2074 iterable=chunk,
2075 retcols=["state_group"],
2076 keyvalues={},
2077 )
2078 remaining_state_groups.extend(
2079 row["state_group"] for row in rows
2080
2081 # exclude state groups we are about to delete: no point in
2082 # updating them
2083 if row["state_group"] not in state_groups_to_delete
2084 )
2046 referenced_state_groups = set(sg for sg, in txn)
2047 logger.info(
2048 "[purge] found %i referenced state groups",
2049 len(referenced_state_groups),
2050 )
2051
2052 logger.info("[purge] finding state groups that can be deleted")
2053
2054 state_groups_to_delete, remaining_state_groups = (
2055 self._find_unreferenced_groups_during_purge(
2056 txn, referenced_state_groups,
2057 )
2058 )
2059
2060 logger.info(
2061 "[purge] found %i state groups to delete",
2062 len(state_groups_to_delete),
2063 )
2064
2065 logger.info(
2066 "[purge] de-delta-ing %i remaining state groups",
2067 len(remaining_state_groups),
2068 )
20852069
20862070 # Now we turn the state groups that reference to-be-deleted state
20872071 # groups to non delta versions.
21262110 logger.info("[purge] removing redundant state groups")
21272111 txn.executemany(
21282112 "DELETE FROM state_groups_state WHERE state_group = ?",
2129 state_rows
2113 ((sg,) for sg in state_groups_to_delete),
21302114 )
21312115 txn.executemany(
21322116 "DELETE FROM state_groups WHERE id = ?",
2133 state_rows
2117 ((sg,) for sg in state_groups_to_delete),
21342118 )
21352119
21362120 logger.info("[purge] removing events from event_to_state_groups")
22262210
22272211 logger.info("[purge] done")
22282212
2213 def _find_unreferenced_groups_during_purge(self, txn, state_groups):
2214 """Used when purging history to figure out which state groups can be
2215 deleted and which need to be de-delta'ed (due to one of its prev groups
2216 being scheduled for deletion).
2217
2218 Args:
2219 txn
2220 state_groups (set[int]): Set of state groups referenced by events
2221 that are going to be deleted.
2222
2223 Returns:
2224 tuple[set[int], set[int]]: The set of state groups that can be
2225 deleted and the set of state groups that need to be de-delta'ed
2226 """
2227 # Graph of state group -> previous group
2228 graph = {}
2229
2230 # Set of events that we have found to be referenced by events
2231 referenced_groups = set()
2232
2233 # Set of state groups we've already seen
2234 state_groups_seen = set(state_groups)
2235
2236 # Set of state groups to handle next.
2237 next_to_search = set(state_groups)
2238 while next_to_search:
2239 # We bound size of groups we're looking up at once, to stop the
2240 # SQL query getting too big
2241 if len(next_to_search) < 100:
2242 current_search = next_to_search
2243 next_to_search = set()
2244 else:
2245 current_search = set(itertools.islice(next_to_search, 100))
2246 next_to_search -= current_search
2247
2248 # Check if state groups are referenced
2249 sql = """
2250 SELECT DISTINCT state_group FROM event_to_state_groups
2251 LEFT JOIN events_to_purge AS ep USING (event_id)
2252 WHERE state_group IN (%s) AND ep.event_id IS NULL
2253 """ % (",".join("?" for _ in current_search),)
2254 txn.execute(sql, list(current_search))
2255
2256 referenced = set(sg for sg, in txn)
2257 referenced_groups |= referenced
2258
2259 # We don't continue iterating up the state group graphs for state
2260 # groups that are referenced.
2261 current_search -= referenced
2262
2263 rows = self._simple_select_many_txn(
2264 txn,
2265 table="state_group_edges",
2266 column="prev_state_group",
2267 iterable=current_search,
2268 keyvalues={},
2269 retcols=("prev_state_group", "state_group",),
2270 )
2271
2272 prevs = set(row["state_group"] for row in rows)
2273 # We don't bother re-handling groups we've already seen
2274 prevs -= state_groups_seen
2275 next_to_search |= prevs
2276 state_groups_seen |= prevs
2277
2278 for row in rows:
2279 # Note: Each state group can have at most one prev group
2280 graph[row["state_group"]] = row["prev_state_group"]
2281
2282 to_delete = state_groups_seen - referenced_groups
2283
2284 to_dedelta = set()
2285 for sg in referenced_groups:
2286 prev_sg = graph.get(sg)
2287 if prev_sg and prev_sg in to_delete:
2288 to_dedelta.add(sg)
2289
2290 return to_delete, to_dedelta
2291
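`_find_unreferenced_groups_during_purge` above walks the state-group graph outwards from the groups touched by the purge, stopping at any group still referenced by surviving events. The traversal can be sketched with in-memory sets; `children` is the inverse of `state_group_edges` (which maps each group to its single previous group), and the function name is illustrative:

```python
def find_unreferenced_groups(start_groups, children, referenced):
    """Sketch of the purge-time search. `children` maps a state group to
    the set of groups that are deltas over it; `referenced` is the set
    of groups still used by surviving events."""
    seen = set(start_groups)
    to_search = set(start_groups)
    referenced_found = set()
    prev_of = {}  # group -> the previous group it is a delta over
    while to_search:
        group = to_search.pop()
        if group in referenced:
            # still used by surviving events: keep it, don't walk further
            referenced_found.add(group)
            continue
        for child in children.get(group, ()):
            prev_of[child] = group  # each group has at most one prev
            if child not in seen:
                seen.add(child)
                to_search.add(child)
    to_delete = seen - referenced_found
    # kept groups whose prev is being dropped must be de-delta'ed first
    to_dedelta = {g for g in referenced_found if prev_of.get(g) in to_delete}
    return to_delete, to_dedelta
```

The real implementation does the same thing in batches of 100 so the `IN (...)` queries stay bounded.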
22292292 @defer.inlineCallbacks
22302293 def is_event_after(self, event_id1, event_id2):
22312294 """Returns True if event_id1 is after event_id2 in the stream
2424
2525 # Remember to update this number every time a change is made to database
2626 # schema files, so the users will be informed on server restarts.
27 SCHEMA_VERSION = 51
27 SCHEMA_VERSION = 52
2828
2929 dir_path = os.path.abspath(os.path.dirname(__file__))
3030
4646 Args:
4747 room_id (str): The ID of the room to retrieve.
4848 Returns:
49 A namedtuple containing the room information, or an empty list.
49 A dict containing the room information, or None if the room is unknown.
5050 """
5151 return self._simple_select_one(
5252 table="rooms",
1919 content TEXT NOT NULL
2020 );
2121
22 CREATE INDEX device_lists_remote_cache_id ON device_lists_remote_cache(user_id, device_id);
23
24
2522 -- The last update we got for a user. Empty if we're not receiving updates for
2623 -- that user.
2724 CREATE TABLE device_lists_remote_extremeties (
2926 stream_id TEXT NOT NULL
3027 );
3128
32 CREATE INDEX device_lists_remote_extremeties_id ON device_lists_remote_extremeties(user_id, stream_id);
29 -- we used to create non-unique indexes on these tables, but as of update 52 we create
30 -- unique indexes concurrently:
31 --
32 -- CREATE INDEX device_lists_remote_cache_id ON device_lists_remote_cache(user_id, device_id);
33 -- CREATE INDEX device_lists_remote_extremeties_id ON device_lists_remote_extremeties(user_id, stream_id);
3334
3435
3536 -- Stream of device lists updates. Includes both local and remotes
0 /* Copyright 2018 New Vector Ltd
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 -- This is needed to efficiently check for unreferenced state groups during
16 -- purge. Adds an event_to_state_groups(state_group) index.
17 INSERT into background_updates (update_name, progress_json)
18 VALUES ('event_to_state_groups_sg_index', '{}');
0 /* Copyright 2018 New Vector Ltd
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 -- register a background update which will create a unique index on
16 -- device_lists_remote_cache
17 INSERT into background_updates (update_name, progress_json)
18 VALUES ('device_lists_remote_cache_unique_idx', '{}');
19
20 -- and one on device_lists_remote_extremeties
21 INSERT into background_updates (update_name, progress_json, depends_on)
22 VALUES (
23 'device_lists_remote_extremeties_unique_idx', '{}',
24
25 -- doesn't really depend on this, but we need to make sure both happen
26 -- before we drop the old indexes.
27 'device_lists_remote_cache_unique_idx'
28 );
29
30 -- once they complete, we can drop the old indexes.
31 INSERT into background_updates (update_name, progress_json, depends_on)
32 VALUES (
33 'drop_device_list_streams_non_unique_indexes', '{}',
34 'device_lists_remote_extremeties_unique_idx'
35 );
0 /* Copyright 2018 New Vector Ltd
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 /* Change version column to an integer so we can do MAX() sensibly
16 */
17 CREATE TABLE e2e_room_keys_versions_new (
18 user_id TEXT NOT NULL,
19 version BIGINT NOT NULL,
20 algorithm TEXT NOT NULL,
21 auth_data TEXT NOT NULL,
22 deleted SMALLINT DEFAULT 0 NOT NULL
23 );
24
25 INSERT INTO e2e_room_keys_versions_new
26 SELECT user_id, CAST(version as BIGINT), algorithm, auth_data, deleted FROM e2e_room_keys_versions;
27
28 DROP TABLE e2e_room_keys_versions;
29 ALTER TABLE e2e_room_keys_versions_new RENAME TO e2e_room_keys_versions;
30
31 CREATE UNIQUE INDEX e2e_room_keys_versions_idx ON e2e_room_keys_versions(user_id, version);
32
33 /* Change e2e_rooms_keys to match
34 */
35 CREATE TABLE e2e_room_keys_new (
36 user_id TEXT NOT NULL,
37 room_id TEXT NOT NULL,
38 session_id TEXT NOT NULL,
39 version BIGINT NOT NULL,
40 first_message_index INT,
41 forwarded_count INT,
42 is_verified BOOLEAN,
43 session_data TEXT NOT NULL
44 );
45
46 INSERT INTO e2e_room_keys_new
47 SELECT user_id, room_id, session_id, CAST(version as BIGINT), first_message_index, forwarded_count, is_verified, session_data FROM e2e_room_keys;
48
49 DROP TABLE e2e_room_keys;
50 ALTER TABLE e2e_room_keys_new RENAME TO e2e_room_keys;
51
52 CREATE UNIQUE INDEX e2e_room_keys_idx ON e2e_room_keys(user_id, room_id, session_id);
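The migration above uses the standard SQLite recipe for changing a column's type: create a new table with the desired schema, copy the rows across with a `CAST`, drop the old table, and rename. In miniature, showing why the change matters (`MAX()` over TEXT versions compares lexicographically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE e2e_room_keys_versions (user_id TEXT, version TEXT)")
conn.executemany(
    "INSERT INTO e2e_room_keys_versions VALUES (?, ?)",
    [("@u:hs", "9"), ("@u:hs", "10")],
)

# With TEXT versions, MAX() compares as strings and picks '9' over '10'.
before = conn.execute(
    "SELECT MAX(version) FROM e2e_room_keys_versions"
).fetchone()[0]

# The recipe: new table with the right type, copy with a CAST, swap names.
conn.executescript("""
    CREATE TABLE e2e_room_keys_versions_new (user_id TEXT, version BIGINT);
    INSERT INTO e2e_room_keys_versions_new
        SELECT user_id, CAST(version AS BIGINT) FROM e2e_room_keys_versions;
    DROP TABLE e2e_room_keys_versions;
    ALTER TABLE e2e_room_keys_versions_new RENAME TO e2e_room_keys_versions;
""")
after = conn.execute(
    "SELECT MAX(version) FROM e2e_room_keys_versions"
).fetchone()[0]
```

After the rebuild, `MAX()` compares numerically, which is what `_get_current_version` needs.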
12561256 STATE_GROUP_DEDUPLICATION_UPDATE_NAME = "state_group_state_deduplication"
12571257 STATE_GROUP_INDEX_UPDATE_NAME = "state_group_state_type_index"
12581258 CURRENT_STATE_INDEX_UPDATE_NAME = "current_state_members_idx"
1259 EVENT_STATE_GROUP_INDEX_UPDATE_NAME = "event_to_state_groups_sg_index"
12591260
12601261 def __init__(self, db_conn, hs):
12611262 super(StateStore, self).__init__(db_conn, hs)
12731274 table="current_state_events",
12741275 columns=["state_key"],
12751276 where_clause="type='m.room.member'",
1277 )
1278 self.register_background_index_update(
1279 self.EVENT_STATE_GROUP_INDEX_UPDATE_NAME,
1280 index_name="event_to_state_groups_sg_index",
1281 table="event_to_state_groups",
1282 columns=["state_group"],
12761283 )
12771284
12781285 def _store_event_state_mappings_txn(self, txn, events_and_contexts):
168168 self.assertEqual(res, 404)
169169
170170 @defer.inlineCallbacks
171 def test_get_missing_backup(self):
172 """Check that we get a 404 on querying missing backup
173 """
174 res = None
175 try:
176 yield self.handler.get_room_keys(self.local_user, "bogus_version")
177 except errors.SynapseError as e:
178 res = e.code
179 self.assertEqual(res, 404)
180
181 @defer.inlineCallbacks
171182 def test_get_missing_room_keys(self):
172 """Check that we get a 404 on querying missing room_keys
173 """
174 res = None
175 try:
176 yield self.handler.get_room_keys(self.local_user, "bogus_version")
177 except errors.SynapseError as e:
178 res = e.code
179 self.assertEqual(res, 404)
180
181 # check we also get a 404 even if the version is valid
182 version = yield self.handler.create_version(self.local_user, {
183 "algorithm": "m.megolm_backup.v1",
184 "auth_data": "first_version_auth_data",
185 })
186 self.assertEqual(version, "1")
187
188 res = None
189 try:
190 yield self.handler.get_room_keys(self.local_user, version)
191 except errors.SynapseError as e:
192 res = e.code
193 self.assertEqual(res, 404)
183 """Check we get an empty response from an empty backup
184 """
185 version = yield self.handler.create_version(self.local_user, {
186 "algorithm": "m.megolm_backup.v1",
187 "auth_data": "first_version_auth_data",
188 })
189 self.assertEqual(version, "1")
190
191 res = yield self.handler.get_room_keys(self.local_user, version)
192 self.assertDictEqual(res, {
193 "rooms": {}
194 })
194195
195196 # TODO: test the locking semantics when uploading room_keys,
196197 # although this is probably best done in sytest
344345 # check for bulk-delete
345346 yield self.handler.upload_room_keys(self.local_user, version, room_keys)
346347 yield self.handler.delete_room_keys(self.local_user, version)
347 res = None
348 try:
349 yield self.handler.get_room_keys(
350 self.local_user,
351 version,
352 room_id="!abc:matrix.org",
353 session_id="c0ff33",
354 )
355 except errors.SynapseError as e:
356 res = e.code
357 self.assertEqual(res, 404)
348 res = yield self.handler.get_room_keys(
349 self.local_user,
350 version,
351 room_id="!abc:matrix.org",
352 session_id="c0ff33",
353 )
354 self.assertDictEqual(res, {
355 "rooms": {}
356 })
358357
359358 # check for bulk-delete per room
360359 yield self.handler.upload_room_keys(self.local_user, version, room_keys)
363362 version,
364363 room_id="!abc:matrix.org",
365364 )
366 res = None
367 try:
368 yield self.handler.get_room_keys(
369 self.local_user,
370 version,
371 room_id="!abc:matrix.org",
372 session_id="c0ff33",
373 )
374 except errors.SynapseError as e:
375 res = e.code
376 self.assertEqual(res, 404)
365 res = yield self.handler.get_room_keys(
366 self.local_user,
367 version,
368 room_id="!abc:matrix.org",
369 session_id="c0ff33",
370 )
371 self.assertDictEqual(res, {
372 "rooms": {}
373 })
377374
378375 # check for bulk-delete per session
379376 yield self.handler.upload_room_keys(self.local_user, version, room_keys)
383380 room_id="!abc:matrix.org",
384381 session_id="c0ff33",
385382 )
386 res = None
387 try:
388 yield self.handler.get_room_keys(
389 self.local_user,
390 version,
391 room_id="!abc:matrix.org",
392 session_id="c0ff33",
393 )
394 except errors.SynapseError as e:
395 res = e.code
396 self.assertEqual(res, 404)
383 res = yield self.handler.get_room_keys(
384 self.local_user,
385 version,
386 room_id="!abc:matrix.org",
387 session_id="c0ff33",
388 )
389 self.assertDictEqual(res, {
390 "rooms": {}
391 })
(New empty file)
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 import pkg_resources
18
19 from twisted.internet.defer import Deferred
20
21 from synapse.rest.client.v1 import admin, login, room
22
23 from tests.unittest import HomeserverTestCase
24
25 try:
26 from synapse.push.mailer import load_jinja2_templates
27 except Exception:
28 load_jinja2_templates = None
29
30
31 class EmailPusherTests(HomeserverTestCase):
32
33 skip = "No Jinja installed" if not load_jinja2_templates else None
34 servlets = [
35 admin.register_servlets,
36 room.register_servlets,
37 login.register_servlets,
38 ]
39 user_id = True
40 hijack_auth = False
41
42 def make_homeserver(self, reactor, clock):
43
44 # List[Tuple[Deferred, args, kwargs]]
45 self.email_attempts = []
46
47 def sendmail(*args, **kwargs):
48 d = Deferred()
49 self.email_attempts.append((d, args, kwargs))
50 return d
51
52 config = self.default_config()
53 config.email_enable_notifs = True
54 config.start_pushers = True
55
56 config.email_template_dir = os.path.abspath(
57 pkg_resources.resource_filename('synapse', 'res/templates')
58 )
59 config.email_notif_template_html = "notif_mail.html"
60 config.email_notif_template_text = "notif_mail.txt"
61 config.email_smtp_host = "127.0.0.1"
62 config.email_smtp_port = 20
63 config.require_transport_security = False
64 config.email_smtp_user = None
65 config.email_app_name = "Matrix"
66 config.email_notif_from = "test@example.com"
67
68 hs = self.setup_test_homeserver(config=config, sendmail=sendmail)
69
70 return hs
71
72 def test_sends_email(self):
73
74 # Register the user who gets notified
75 user_id = self.register_user("user", "pass")
76 access_token = self.login("user", "pass")
77
78 # Register the user who sends the message
79 other_user_id = self.register_user("otheruser", "pass")
80 other_access_token = self.login("otheruser", "pass")
81
82 # Register the pusher
83 user_tuple = self.get_success(
84 self.hs.get_datastore().get_user_by_access_token(access_token)
85 )
86 token_id = user_tuple["token_id"]
87
88 self.get_success(
89 self.hs.get_pusherpool().add_pusher(
90 user_id=user_id,
91 access_token=token_id,
92 kind="email",
93 app_id="m.email",
94 app_display_name="Email Notifications",
95 device_display_name="a@example.com",
96 pushkey="a@example.com",
97 lang=None,
98 data={},
99 )
100 )
101
102 # Create a room
103 room = self.helper.create_room_as(user_id, tok=access_token)
104
105 # Invite the other person
106 self.helper.invite(room=room, src=user_id, tok=access_token, targ=other_user_id)
107
108 # The other user joins
109 self.helper.join(room=room, user=other_user_id, tok=other_access_token)
110
111 # The other user sends some messages
112 self.helper.send(room, body="Hi!", tok=other_access_token)
113 self.helper.send(room, body="There!", tok=other_access_token)
114
115 # Get the stream ordering before the email is sent
116 pushers = self.get_success(
117 self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
118 )
119 self.assertEqual(len(pushers), 1)
120 last_stream_ordering = pushers[0]["last_stream_ordering"]
121
122 # Advance time a bit, so the pusher will register that something has happened
123 self.pump(100)
124
125 # It hasn't succeeded yet, so the stream ordering shouldn't have moved
126 pushers = self.get_success(
127 self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
128 )
129 self.assertEqual(len(pushers), 1)
130 self.assertEqual(last_stream_ordering, pushers[0]["last_stream_ordering"])
131
132 # One email attempt was made
133 self.assertEqual(len(self.email_attempts), 1)
134
135 # Make the email succeed
136 self.email_attempts[0][0].callback(True)
137 self.pump()
138
139 # Still just the one attempt; the send succeeded, so nothing was retried
140 self.assertEqual(len(self.email_attempts), 1)
141
142 # The stream ordering has increased
143 pushers = self.get_success(
144 self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
145 )
146 self.assertEqual(len(pushers), 1)
147 self.assertTrue(pushers[0]["last_stream_ordering"] > last_stream_ordering)
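The `sendmail` stub above captures each attempted send as a pending Deferred that the test resolves by hand. The same "capture and resolve later" pattern can be sketched without Twisted; the class and method names below are illustrative stand-ins, not Synapse or Twisted APIs:

```python
class PendingCall:
    """Records a call and defers its outcome until the test resolves it."""
    def __init__(self, args, kwargs):
        self.args = args
        self.kwargs = kwargs
        self.result = None
        self.resolved = False

    def callback(self, result):
        # The test calls this to simulate the send succeeding.
        self.result = result
        self.resolved = True


class RecordingMailer:
    """Stub mailer: every send is captured instead of hitting the network."""
    def __init__(self):
        self.attempts = []

    def sendmail(self, *args, **kwargs):
        call = PendingCall(args, kwargs)
        self.attempts.append(call)
        return call


mailer = RecordingMailer()
mailer.sendmail("127.0.0.1", to="a@example.com")

# Exactly one attempt was captured, and it is still pending.
assert len(mailer.attempts) == 1
assert not mailer.attempts[0].resolved

# The test decides when (and whether) the send "succeeds".
mailer.attempts[0].callback(True)
assert mailer.attempts[0].result is True
```

Keeping the outcome in the test's hands is what lets the assertions above check the "nothing has moved yet" state before firing the callback.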
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from mock import Mock
16
17 from twisted.internet.defer import Deferred
18
19 from synapse.rest.client.v1 import admin, login, room
20
21 from tests.unittest import HomeserverTestCase
22
23 try:
24 from synapse.push.mailer import load_jinja2_templates
25 except Exception:
26 load_jinja2_templates = None
27
28
29 class HTTPPusherTests(HomeserverTestCase):
30
31 skip = "No Jinja installed" if not load_jinja2_templates else None
32 servlets = [
33 admin.register_servlets,
34 room.register_servlets,
35 login.register_servlets,
36 ]
37 user_id = True
38 hijack_auth = False
39
40 def make_homeserver(self, reactor, clock):
41
42 self.push_attempts = []
43
44 m = Mock()
45
46 def post_json_get_json(url, body):
47 d = Deferred()
48 self.push_attempts.append((d, url, body))
49 return d
50
51 m.post_json_get_json = post_json_get_json
52
53 config = self.default_config()
54 config.start_pushers = True
55
56 hs = self.setup_test_homeserver(config=config, simple_http_client=m)
57
58 return hs
59
60 def test_sends_http(self):
61 """
62 The HTTP pusher will send pushes for each message to a HTTP endpoint
63 when configured to do so.
64 """
65 # Register the user who gets notified
66 user_id = self.register_user("user", "pass")
67 access_token = self.login("user", "pass")
68
69 # Register the user who sends the message
70 other_user_id = self.register_user("otheruser", "pass")
71 other_access_token = self.login("otheruser", "pass")
72
73 # Register the pusher
74 user_tuple = self.get_success(
75 self.hs.get_datastore().get_user_by_access_token(access_token)
76 )
77 token_id = user_tuple["token_id"]
78
79 self.get_success(
80 self.hs.get_pusherpool().add_pusher(
81 user_id=user_id,
82 access_token=token_id,
83 kind="http",
84 app_id="m.http",
85 app_display_name="HTTP Push Notifications",
86 device_display_name="pushy push",
87 pushkey="a@example.com",
88 lang=None,
89 data={"url": "example.com"},
90 )
91 )
92
93 # Create a room
94 room = self.helper.create_room_as(user_id, tok=access_token)
95
96 # Invite the other person
97 self.helper.invite(room=room, src=user_id, tok=access_token, targ=other_user_id)
98
99 # The other user joins
100 self.helper.join(room=room, user=other_user_id, tok=other_access_token)
101
102 # The other user sends some messages
103 self.helper.send(room, body="Hi!", tok=other_access_token)
104 self.helper.send(room, body="There!", tok=other_access_token)
105
106 # Get the stream ordering before the push is sent
107 pushers = self.get_success(
108 self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
109 )
110 self.assertEqual(len(pushers), 1)
111 last_stream_ordering = pushers[0]["last_stream_ordering"]
112
113 # Advance time a bit, so the pusher will register that something has happened
114 self.pump()
115
116 # It hasn't succeeded yet, so the stream ordering shouldn't have moved
117 pushers = self.get_success(
118 self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
119 )
120 self.assertEqual(len(pushers), 1)
121 self.assertEqual(last_stream_ordering, pushers[0]["last_stream_ordering"])
122
123 # One push attempt has been made -- it will be for the first message
124 self.assertEqual(len(self.push_attempts), 1)
125 self.assertEqual(self.push_attempts[0][1], "example.com")
126 self.assertEqual(
127 self.push_attempts[0][2]["notification"]["content"]["body"], "Hi!"
128 )
129
130 # Make the push succeed
131 self.push_attempts[0][0].callback({})
132 self.pump()
133
134 # The stream ordering has increased
135 pushers = self.get_success(
136 self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
137 )
138 self.assertEqual(len(pushers), 1)
139 self.assertTrue(pushers[0]["last_stream_ordering"] > last_stream_ordering)
140 last_stream_ordering = pushers[0]["last_stream_ordering"]
141
142 # The pusher will now try to send the second message
143 self.assertEqual(len(self.push_attempts), 2)
144 self.assertEqual(self.push_attempts[1][1], "example.com")
145 self.assertEqual(
146 self.push_attempts[1][2]["notification"]["content"]["body"], "There!"
147 )
148
149 # Make the second push succeed
150 self.push_attempts[1][0].callback({})
151 self.pump()
152
153 # The stream ordering has increased, again
154 pushers = self.get_success(
155 self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
156 )
157 self.assertEqual(len(pushers), 1)
158 self.assertTrue(pushers[0]["last_stream_ordering"] > last_stream_ordering)
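The assertions in this test pivot on one invariant: `last_stream_ordering` only advances after a push is acknowledged. A toy model of that bookkeeping (hypothetical names, not Synapse's pusher implementation) looks like:

```python
class ToyPusher:
    """Sends one message at a time; advances its cursor only on success."""
    def __init__(self):
        self.queue = []               # (stream_ordering, body) pairs
        self.last_stream_ordering = 0
        self.in_flight = None

    def enqueue(self, stream_ordering, body):
        self.queue.append((stream_ordering, body))

    def pump(self):
        # Start the next send if nothing is currently in flight.
        if self.in_flight is None and self.queue:
            self.in_flight = self.queue.pop(0)
        return self.in_flight

    def on_success(self):
        # Only now does the cursor move past the delivered message.
        ordering, _ = self.in_flight
        self.last_stream_ordering = ordering
        self.in_flight = None


p = ToyPusher()
p.enqueue(1, "Hi!")
p.enqueue(2, "There!")

p.pump()
assert p.last_stream_ordering == 0   # nothing acknowledged yet

p.on_success()                       # first push succeeds
assert p.last_stream_ordering == 1

p.pump()
p.on_success()                       # second push succeeds
assert p.last_stream_ordering == 2
```

Advancing the cursor only on acknowledgement is what makes delivery retryable: a failed or unresolved push leaves the cursor where it was, exactly as the "stream ordering shouldn't have moved" checks assert.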
2727
2828
2929 def dict_equals(self, other):
30 me = encode_canonical_json(self._event_dict)
31 them = encode_canonical_json(other._event_dict)
30 me = encode_canonical_json(self.get_pdu_json())
31 them = encode_canonical_json(other.get_pdu_json())
3232 return me == them
3333
3434
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 from synapse.api.urls import ConsentURIBuilder
18 from synapse.rest.client.v1 import admin, login, room
19 from synapse.rest.consent import consent_resource
20
21 from tests import unittest
22 from tests.server import render
23
24 try:
25 from synapse.push.mailer import load_jinja2_templates
26 except Exception:
27 load_jinja2_templates = None
28
29
30 class ConsentResourceTestCase(unittest.HomeserverTestCase):
31 skip = "No Jinja installed" if not load_jinja2_templates else None
32 servlets = [
33 admin.register_servlets,
34 room.register_servlets,
35 login.register_servlets,
36 ]
37 user_id = True
38 hijack_auth = False
39
40 def make_homeserver(self, reactor, clock):
41
42 config = self.default_config()
43 config.user_consent_version = "1"
44 config.public_baseurl = ""
45 config.form_secret = "123abc"
46
47 # Make some temporary templates...
48 temp_consent_path = self.mktemp()
49 os.mkdir(temp_consent_path)
50 os.mkdir(os.path.join(temp_consent_path, 'en'))
51 config.user_consent_template_dir = os.path.abspath(temp_consent_path)
52
53 with open(os.path.join(temp_consent_path, "en/1.html"), 'w') as f:
54 f.write("{{version}},{{has_consented}}")
55
56 with open(os.path.join(temp_consent_path, "en/success.html"), 'w') as f:
57 f.write("yay!")
58
59 hs = self.setup_test_homeserver(config=config)
60 return hs
61
62 def test_render_public_consent(self):
63 """You can observe the terms form without specifying a user"""
64 resource = consent_resource.ConsentResource(self.hs)
65 request, channel = self.make_request("GET", "/consent?v=1", shorthand=False)
66 render(request, resource, self.reactor)
67 self.assertEqual(channel.code, 200)
68
69 def test_accept_consent(self):
70 """
71 A user can use the consent form to accept the terms.
72 """
73 uri_builder = ConsentURIBuilder(self.hs.config)
74 resource = consent_resource.ConsentResource(self.hs)
75
76 # Register a user
77 user_id = self.register_user("user", "pass")
78 access_token = self.login("user", "pass")
79
80 # Fetch the consent page, to get the consent version
81 consent_uri = (
82 uri_builder.build_user_consent_uri(user_id).replace("_matrix/", "")
83 + "&u=user"
84 )
85 request, channel = self.make_request(
86 "GET", consent_uri, access_token=access_token, shorthand=False
87 )
88 render(request, resource, self.reactor)
89 self.assertEqual(channel.code, 200)
90
91 # Get the version from the body, and whether we've consented
92 version, consented = channel.result["body"].decode('ascii').split(",")
93 self.assertEqual(consented, "False")
94
95 # POST to the consent page, saying we've agreed
96 request, channel = self.make_request(
97 "POST",
98 consent_uri + "&v=" + version,
99 access_token=access_token,
100 shorthand=False,
101 )
102 render(request, resource, self.reactor)
103 self.assertEqual(channel.code, 200)
104
105 # Fetch the consent page, to get the consent version -- it should have
106 # changed
107 request, channel = self.make_request(
108 "GET", consent_uri, access_token=access_token, shorthand=False
109 )
110 render(request, resource, self.reactor)
111 self.assertEqual(channel.code, 200)
112
113 # Get the version from the body, and check that it's the version we
114 # agreed to, and that we've consented to it.
115 version, consented = channel.result["body"].decode('ascii').split(",")
116 self.assertEqual(consented, "True")
117 self.assertEqual(version, "1")
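The consent flow above renders `{{version}},{{has_consented}}` and flips the second field once the user agrees. A minimal sketch of that per-user consent tracking (illustrative only; Synapse stores this in the database, not a dict):

```python
class ConsentTracker:
    """Toy model of per-user consent tracking against a policy version."""
    def __init__(self, current_version):
        self.current_version = current_version
        self.consented = {}  # user_id -> version the user agreed to

    def has_consented(self, user_id):
        return self.consented.get(user_id) == self.current_version

    def record_consent(self, user_id, version):
        self.consented[user_id] = version

    def render(self, user_id):
        # Mirrors the "{{version}},{{has_consented}}" template in the test.
        return "%s,%s" % (self.current_version, self.has_consented(user_id))


tracker = ConsentTracker("1")
assert tracker.render("@user:test") == "1,False"

tracker.record_consent("@user:test", "1")
assert tracker.render("@user:test") == "1,True"
```

Tracking the agreed *version* rather than a boolean is the point: bumping `user_consent_version` in the config makes everyone unconsented again.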
1818
1919 from mock import Mock
2020
21 from synapse.http.server import JsonResource
2221 from synapse.rest.client.v1.admin import register_servlets
23 from synapse.util import Clock
2422
2523 from tests import unittest
26 from tests.server import (
27 ThreadedMemoryReactorClock,
28 make_request,
29 render,
30 setup_test_homeserver,
31 )
32
33
34 class UserRegisterTestCase(unittest.TestCase):
35 def setUp(self):
36
37 self.clock = ThreadedMemoryReactorClock()
38 self.hs_clock = Clock(self.clock)
24
25
26 class UserRegisterTestCase(unittest.HomeserverTestCase):
27
28 servlets = [register_servlets]
29
30 def make_homeserver(self, reactor, clock):
31
3932 self.url = "/_matrix/client/r0/admin/register"
4033
4134 self.registration_handler = Mock()
4942
5043 self.secrets = Mock()
5144
52 self.hs = setup_test_homeserver(
53 self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.clock
54 )
45 self.hs = self.setup_test_homeserver()
5546
5647 self.hs.config.registration_shared_secret = u"shared"
5748
5849 self.hs.get_media_repository = Mock()
5950 self.hs.get_deactivate_account_handler = Mock()
6051
61 self.resource = JsonResource(self.hs)
62 register_servlets(self.hs, self.resource)
52 return self.hs
6353
6454 def test_disabled(self):
6555 """
6858 """
6959 self.hs.config.registration_shared_secret = None
7060
71 request, channel = make_request("POST", self.url, b'{}')
72 render(request, self.resource, self.clock)
61 request, channel = self.make_request("POST", self.url, b'{}')
62 self.render(request)
7363
7464 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
7565 self.assertEqual(
8676
8777 self.hs.get_secrets = Mock(return_value=secrets)
8878
89 request, channel = make_request("GET", self.url)
90 render(request, self.resource, self.clock)
79 request, channel = self.make_request("GET", self.url)
80 self.render(request)
9181
9282 self.assertEqual(channel.json_body, {"nonce": "abcd"})
9383
9686 Calling GET on the endpoint will return a randomised nonce, which will
9787 only last for SALT_TIMEOUT (60s).
9888 """
99 request, channel = make_request("GET", self.url)
100 render(request, self.resource, self.clock)
89 request, channel = self.make_request("GET", self.url)
90 self.render(request)
10191 nonce = channel.json_body["nonce"]
10292
10393 # 59 seconds
104 self.clock.advance(59)
94 self.reactor.advance(59)
10595
10696 body = json.dumps({"nonce": nonce})
107 request, channel = make_request("POST", self.url, body.encode('utf8'))
108 render(request, self.resource, self.clock)
97 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
98 self.render(request)
10999
110100 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
111101 self.assertEqual('username must be specified', channel.json_body["error"])
112102
113103 # 61 seconds
114 self.clock.advance(2)
115
116 request, channel = make_request("POST", self.url, body.encode('utf8'))
117 render(request, self.resource, self.clock)
104 self.reactor.advance(2)
105
106 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
107 self.render(request)
118108
119109 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
120110 self.assertEqual('unrecognised nonce', channel.json_body["error"])
123113 """
124114 Only the provided nonce can be used, as it's checked in the MAC.
125115 """
126 request, channel = make_request("GET", self.url)
127 render(request, self.resource, self.clock)
116 request, channel = self.make_request("GET", self.url)
117 self.render(request)
128118 nonce = channel.json_body["nonce"]
129119
130120 want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
140130 "mac": want_mac,
141131 }
142132 )
143 request, channel = make_request("POST", self.url, body.encode('utf8'))
144 render(request, self.resource, self.clock)
133 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
134 self.render(request)
145135
146136 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
147137 self.assertEqual("HMAC incorrect", channel.json_body["error"])
151141 When the correct nonce is provided, and the right key is provided, the
152142 user is registered.
153143 """
154 request, channel = make_request("GET", self.url)
155 render(request, self.resource, self.clock)
144 request, channel = self.make_request("GET", self.url)
145 self.render(request)
156146 nonce = channel.json_body["nonce"]
157147
158148 want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
168158 "mac": want_mac,
169159 }
170160 )
171 request, channel = make_request("POST", self.url, body.encode('utf8'))
172 render(request, self.resource, self.clock)
161 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
162 self.render(request)
173163
174164 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
175165 self.assertEqual("@bob:test", channel.json_body["user_id"])
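The `want_mac` being built in these tests is an HMAC-SHA1 over the registration fields, keyed with the shared secret. A sketch of computing it client-side follows; the NUL-joined field order (nonce, username, password, admin flag) is stated here as an assumption about the scheme these tests exercise, not something spelled out in this diff:

```python
import hashlib
import hmac


def registration_mac(shared_secret, nonce, username, password, admin=False):
    """Compute a MAC for shared-secret registration.

    Assumes the fields are NUL-joined in the order shown; treat this as a
    sketch of the scheme, not authoritative Synapse documentation.
    """
    mac = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
    mac.update(nonce.encode("utf8"))
    mac.update(b"\x00")
    mac.update(username.encode("utf8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()


mac = registration_mac(b"shared", "some-nonce", "bob", "abc123")
assert len(mac) == 40  # hex-encoded SHA-1 digest
```

Because the nonce is folded into the MAC, a captured request cannot be replayed with a different nonce, which is what `test_register_incorrect_nonce` checks.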
178168 """
179169 A valid unrecognised nonce.
180170 """
181 request, channel = make_request("GET", self.url)
182 render(request, self.resource, self.clock)
171 request, channel = self.make_request("GET", self.url)
172 self.render(request)
183173 nonce = channel.json_body["nonce"]
184174
185175 want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
195185 "mac": want_mac,
196186 }
197187 )
198 request, channel = make_request("POST", self.url, body.encode('utf8'))
199 render(request, self.resource, self.clock)
188 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
189 self.render(request)
200190
201191 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
202192 self.assertEqual("@bob:test", channel.json_body["user_id"])
203193
204194 # Now, try and reuse it
205 request, channel = make_request("POST", self.url, body.encode('utf8'))
206 render(request, self.resource, self.clock)
195 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
196 self.render(request)
207197
208198 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
209199 self.assertEqual('unrecognised nonce', channel.json_body["error"])
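The two nonce tests above assert a 60-second lifetime and single use. Both behaviours can be modelled in a few lines; this is a toy driven by a fake clock (as the tests do with `reactor.advance`), not Synapse's implementation:

```python
import time

NONCE_TIMEOUT = 60  # seconds; matches the 60s SALT_TIMEOUT the tests exercise


class NonceStore:
    """Nonces that are single-use and expire after NONCE_TIMEOUT seconds."""
    def __init__(self, now=time.monotonic):
        self.now = now
        self.nonces = {}  # nonce -> time issued

    def issue(self, nonce):
        self.nonces[nonce] = self.now()

    def check_and_consume(self, nonce):
        issued = self.nonces.pop(nonce, None)
        if issued is None or self.now() - issued > NONCE_TIMEOUT:
            return "unrecognised nonce"
        return "ok"


# Drive it with a fake clock, as the tests do with reactor.advance().
clock = [0.0]
store = NonceStore(now=lambda: clock[0])

store.issue("abcd")
clock[0] += 59                     # 59 seconds: still within the window
assert store.check_and_consume("abcd") == "ok"

store.issue("efgh")
clock[0] += 61                     # 61 seconds later: expired
assert store.check_and_consume("efgh") == "unrecognised nonce"

# Single use: a consumed nonce cannot be replayed.
assert store.check_and_consume("abcd") == "unrecognised nonce"
```

Injecting the clock as a callable is the same trick that makes the Trial tests deterministic: time only moves when the test says so.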
216206 """
217207
218208 def nonce():
219 request, channel = make_request("GET", self.url)
220 render(request, self.resource, self.clock)
209 request, channel = self.make_request("GET", self.url)
210 self.render(request)
221211 return channel.json_body["nonce"]
222212
223213 #
226216
227217 # Must be present
228218 body = json.dumps({})
229 request, channel = make_request("POST", self.url, body.encode('utf8'))
230 render(request, self.resource, self.clock)
219 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
220 self.render(request)
231221
232222 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
233223 self.assertEqual('nonce must be specified', channel.json_body["error"])
238228
239229 # Must be present
240230 body = json.dumps({"nonce": nonce()})
241 request, channel = make_request("POST", self.url, body.encode('utf8'))
242 render(request, self.resource, self.clock)
231 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
232 self.render(request)
243233
244234 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
245235 self.assertEqual('username must be specified', channel.json_body["error"])
246236
247237 # Must be a string
248238 body = json.dumps({"nonce": nonce(), "username": 1234})
249 request, channel = make_request("POST", self.url, body.encode('utf8'))
250 render(request, self.resource, self.clock)
239 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
240 self.render(request)
251241
252242 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
253243 self.assertEqual('Invalid username', channel.json_body["error"])
254244
255245 # Must not have null bytes
256246 body = json.dumps({"nonce": nonce(), "username": u"abcd\u0000"})
257 request, channel = make_request("POST", self.url, body.encode('utf8'))
258 render(request, self.resource, self.clock)
247 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
248 self.render(request)
259249
260250 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
261251 self.assertEqual('Invalid username', channel.json_body["error"])
262252
263253 # Must not be too long
264254 body = json.dumps({"nonce": nonce(), "username": "a" * 1000})
265 request, channel = make_request("POST", self.url, body.encode('utf8'))
266 render(request, self.resource, self.clock)
255 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
256 self.render(request)
267257
268258 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
269259 self.assertEqual('Invalid username', channel.json_body["error"])
274264
275265 # Must be present
276266 body = json.dumps({"nonce": nonce(), "username": "a"})
277 request, channel = make_request("POST", self.url, body.encode('utf8'))
278 render(request, self.resource, self.clock)
267 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
268 self.render(request)
279269
280270 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
281271 self.assertEqual('password must be specified', channel.json_body["error"])
282272
283273 # Must be a string
284274 body = json.dumps({"nonce": nonce(), "username": "a", "password": 1234})
285 request, channel = make_request("POST", self.url, body.encode('utf8'))
286 render(request, self.resource, self.clock)
275 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
276 self.render(request)
287277
288278 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
289279 self.assertEqual('Invalid password', channel.json_body["error"])
292282 body = json.dumps(
293283 {"nonce": nonce(), "username": "a", "password": u"abcd\u0000"}
294284 )
295 request, channel = make_request("POST", self.url, body.encode('utf8'))
296 render(request, self.resource, self.clock)
285 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
286 self.render(request)
297287
298288 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
299289 self.assertEqual('Invalid password', channel.json_body["error"])
300290
301291 # Super long
302292 body = json.dumps({"nonce": nonce(), "username": "a", "password": "A" * 1000})
303 request, channel = make_request("POST", self.url, body.encode('utf8'))
304 render(request, self.resource, self.clock)
293 request, channel = self.make_request("POST", self.url, body.encode('utf8'))
294 self.render(request)
305295
306296 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
307297 self.assertEqual('Invalid password', channel.json_body["error"])
4444 )
4545
4646 handlers = Mock(registration_handler=self.registration_handler)
47 self.clock = MemoryReactorClock()
48 self.hs_clock = Clock(self.clock)
47 self.reactor = MemoryReactorClock()
48 self.hs_clock = Clock(self.reactor)
4949
5050 self.hs = setup_test_homeserver(
51 self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.clock
51 self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.reactor
5252 )
5353 self.hs.get_datastore = Mock(return_value=self.datastore)
5454 self.hs.get_handlers = Mock(return_value=handlers)
7575 return_value=(user_id, token)
7676 )
7777
78 request, channel = make_request(b"POST", url, request_data)
79 render(request, res, self.clock)
78 request, channel = make_request(self.reactor, b"POST", url, request_data)
79 render(request, res, self.reactor)
8080
8181 self.assertEquals(channel.result["code"], b"200")
8282
168168 path = path + "?access_token=%s" % tok
169169
170170 request, channel = make_request(
171 "POST", path, json.dumps(content).encode('utf8')
171 self.hs.get_reactor(), "POST", path, json.dumps(content).encode('utf8')
172172 )
173173 render(request, self.resource, self.hs.get_reactor())
174174
216216
217217 data = {"membership": membership}
218218
219 request, channel = make_request("PUT", path, json.dumps(data).encode('utf8'))
219 request, channel = make_request(
220 self.hs.get_reactor(), "PUT", path, json.dumps(data).encode('utf8')
221 )
220222
221223 render(request, self.resource, self.hs.get_reactor())
222224
226228 )
227229
228230 self.auth_user_id = temp_id
229
230 @defer.inlineCallbacks
231 def register(self, user_id):
232 (code, response) = yield self.mock_resource.trigger(
233 "POST",
234 "/_matrix/client/r0/register",
235 json.dumps(
236 {"user": user_id, "password": "test", "type": "m.login.password"}
237 ),
238 )
239 self.assertEquals(200, code)
240 defer.returnValue(response)
241231
242232 def send(self, room_id, body=None, txn_id=None, tok=None, expect_code=200):
243233 if txn_id is None:
250240 if tok:
251241 path = path + "?access_token=%s" % tok
252242
253 request, channel = make_request("PUT", path, json.dumps(content).encode('utf8'))
243 request, channel = make_request(
244 self.hs.get_reactor(), "PUT", path, json.dumps(content).encode('utf8')
245 )
254246 render(request, self.resource, self.hs.get_reactor())
255247
256248 assert int(channel.result["code"]) == expect_code, (
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 import synapse.types
1615 from synapse.api.errors import Codes
17 from synapse.http.server import JsonResource
1816 from synapse.rest.client.v2_alpha import filter
19 from synapse.types import UserID
20 from synapse.util import Clock
2117
2218 from tests import unittest
23 from tests.server import (
24 ThreadedMemoryReactorClock as MemoryReactorClock,
25 make_request,
26 render,
27 setup_test_homeserver,
28 )
2919
3020 PATH_PREFIX = "/_matrix/client/v2_alpha"
3121
3222
33 class FilterTestCase(unittest.TestCase):
23 class FilterTestCase(unittest.HomeserverTestCase):
3424
35 USER_ID = "@apple:test"
25 user_id = "@apple:test"
26 hijack_auth = True
3627 EXAMPLE_FILTER = {"room": {"timeline": {"types": ["m.room.message"]}}}
3728 EXAMPLE_FILTER_JSON = b'{"room": {"timeline": {"types": ["m.room.message"]}}}'
38 TO_REGISTER = [filter]
29 servlets = [filter.register_servlets]
3930
40 def setUp(self):
41 self.clock = MemoryReactorClock()
42 self.hs_clock = Clock(self.clock)
43
44 self.hs = setup_test_homeserver(
45 self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.clock
46 )
47
48 self.auth = self.hs.get_auth()
49
50 def get_user_by_access_token(token=None, allow_guest=False):
51 return {
52 "user": UserID.from_string(self.USER_ID),
53 "token_id": 1,
54 "is_guest": False,
55 }
56
57 def get_user_by_req(request, allow_guest=False, rights="access"):
58 return synapse.types.create_requester(
59 UserID.from_string(self.USER_ID), 1, False, None
60 )
61
62 self.auth.get_user_by_access_token = get_user_by_access_token
63 self.auth.get_user_by_req = get_user_by_req
64
65 self.store = self.hs.get_datastore()
66 self.filtering = self.hs.get_filtering()
67 self.resource = JsonResource(self.hs)
68
69 for r in self.TO_REGISTER:
70 r.register_servlets(self.hs, self.resource)
31 def prepare(self, reactor, clock, hs):
32 self.filtering = hs.get_filtering()
33 self.store = hs.get_datastore()
7134
7235 def test_add_filter(self):
73 request, channel = make_request(
36 request, channel = self.make_request(
7437 "POST",
75 "/_matrix/client/r0/user/%s/filter" % (self.USER_ID),
38 "/_matrix/client/r0/user/%s/filter" % (self.user_id),
7639 self.EXAMPLE_FILTER_JSON,
7740 )
78 render(request, self.resource, self.clock)
41 self.render(request)
7942
8043 self.assertEqual(channel.result["code"], b"200")
8144 self.assertEqual(channel.json_body, {"filter_id": "0"})
8245 filter = self.store.get_user_filter(user_localpart="apple", filter_id=0)
83 self.clock.advance(0)
46 self.pump()
8447 self.assertEquals(filter.result, self.EXAMPLE_FILTER)
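The endpoint returns `{"filter_id": "0"}` for a user's first filter, i.e. IDs are assigned per user, sequentially from zero. An in-memory sketch of that storage contract (illustrative names; Synapse backs this with the datastore):

```python
class FilterStore:
    """In-memory per-user filter storage with sequential IDs from 0."""
    def __init__(self):
        self.filters = {}  # user localpart -> list of filter definitions

    def add_user_filter(self, user_localpart, user_filter):
        fs = self.filters.setdefault(user_localpart, [])
        fs.append(user_filter)
        return len(fs) - 1  # the new filter's ID

    def get_user_filter(self, user_localpart, filter_id):
        return self.filters[user_localpart][filter_id]


store = FilterStore()
example = {"room": {"timeline": {"types": ["m.room.message"]}}}

fid = store.add_user_filter("apple", example)
assert fid == 0
assert store.get_user_filter("apple", 0) == example
```

Because IDs are per-user, a second user's first filter is also `0`, which is why the tests key everything on the localpart.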
8548
8649 def test_add_filter_for_other_user(self):
87 request, channel = make_request(
50 request, channel = self.make_request(
8851 "POST",
8952 "/_matrix/client/r0/user/%s/filter" % ("@watermelon:test"),
9053 self.EXAMPLE_FILTER_JSON,
9154 )
92 render(request, self.resource, self.clock)
55 self.render(request)
9356
9457 self.assertEqual(channel.result["code"], b"403")
9558 self.assertEquals(channel.json_body["errcode"], Codes.FORBIDDEN)
9760 def test_add_filter_non_local_user(self):
9861 _is_mine = self.hs.is_mine
9962 self.hs.is_mine = lambda target_user: False
100 request, channel = make_request(
63 request, channel = self.make_request(
10164 "POST",
102 "/_matrix/client/r0/user/%s/filter" % (self.USER_ID),
65 "/_matrix/client/r0/user/%s/filter" % (self.user_id),
10366 self.EXAMPLE_FILTER_JSON,
10467 )
105 render(request, self.resource, self.clock)
68 self.render(request)
10669
10770 self.hs.is_mine = _is_mine
10871 self.assertEqual(channel.result["code"], b"403")
11275 filter_id = self.filtering.add_user_filter(
11376 user_localpart="apple", user_filter=self.EXAMPLE_FILTER
11477 )
115 self.clock.advance(1)
78 self.reactor.advance(1)
11679 filter_id = filter_id.result
117 request, channel = make_request(
118 "GET", "/_matrix/client/r0/user/%s/filter/%s" % (self.USER_ID, filter_id)
80 request, channel = self.make_request(
81 "GET", "/_matrix/client/r0/user/%s/filter/%s" % (self.user_id, filter_id)
11982 )
120 render(request, self.resource, self.clock)
83 self.render(request)
12184
12285 self.assertEqual(channel.result["code"], b"200")
12386 self.assertEquals(channel.json_body, self.EXAMPLE_FILTER)
12487
12588 def test_get_filter_non_existant(self):
126 request, channel = make_request(
127 "GET", "/_matrix/client/r0/user/%s/filter/12382148321" % (self.USER_ID)
89 request, channel = self.make_request(
90 "GET", "/_matrix/client/r0/user/%s/filter/12382148321" % (self.user_id)
12891 )
129 render(request, self.resource, self.clock)
92 self.render(request)
13093
13194 self.assertEqual(channel.result["code"], b"400")
13295 self.assertEquals(channel.json_body["errcode"], Codes.NOT_FOUND)
13497 # Currently invalid params do not have an appropriate errcode
13598 # in errors.py
13699 def test_get_filter_invalid_id(self):
137 request, channel = make_request(
138 "GET", "/_matrix/client/r0/user/%s/filter/foobar" % (self.USER_ID)
100 request, channel = self.make_request(
101 "GET", "/_matrix/client/r0/user/%s/filter/foobar" % (self.user_id)
139102 )
140 render(request, self.resource, self.clock)
103 self.render(request)
141104
142105 self.assertEqual(channel.result["code"], b"400")
143106
144107 # No ID also returns an invalid_id error
145108 def test_get_filter_no_id(self):
146 request, channel = make_request(
147 "GET", "/_matrix/client/r0/user/%s/filter/" % (self.USER_ID)
109 request, channel = self.make_request(
110 "GET", "/_matrix/client/r0/user/%s/filter/" % (self.user_id)
148111 )
149 render(request, self.resource, self.clock)
112 self.render(request)
150113
151114 self.assertEqual(channel.result["code"], b"400")
22 from mock import Mock
33
44 from twisted.python import failure
5 from twisted.test.proto_helpers import MemoryReactorClock
65
76 from synapse.api.errors import InteractiveAuthIncompleteError
8 from synapse.http.server import JsonResource
97 from synapse.rest.client.v2_alpha.register import register_servlets
10 from synapse.util import Clock
118
129 from tests import unittest
13 from tests.server import make_request, render, setup_test_homeserver
1410
1511
16 class RegisterRestServletTestCase(unittest.TestCase):
17 def setUp(self):
12 class RegisterRestServletTestCase(unittest.HomeserverTestCase):
1813
19 self.clock = MemoryReactorClock()
20 self.hs_clock = Clock(self.clock)
14 servlets = [register_servlets]
15
16 def make_homeserver(self, reactor, clock):
17
2118 self.url = b"/_matrix/client/r0/register"
2219
2320 self.appservice = None
4542 identity_handler=self.identity_handler,
4643 login_handler=self.login_handler,
4744 )
48 self.hs = setup_test_homeserver(
49 self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.clock
50 )
45 self.hs = self.setup_test_homeserver()
5146 self.hs.get_auth = Mock(return_value=self.auth)
5247 self.hs.get_handlers = Mock(return_value=self.handlers)
5348 self.hs.get_auth_handler = Mock(return_value=self.auth_handler)
5752 self.hs.config.registrations_require_3pid = []
5853 self.hs.config.auto_join_rooms = []
5954
60 self.resource = JsonResource(self.hs)
61 register_servlets(self.hs, self.resource)
55 return self.hs
6256
6357 def test_POST_appservice_registration_valid(self):
6458 user_id = "@kermit:muppet"
6862 self.auth_handler.get_access_token_for_user_id = Mock(return_value=token)
6963 request_data = json.dumps({"username": "kermit"})
7064
71 request, channel = make_request(
65 request, channel = self.make_request(
7266 b"POST", self.url + b"?access_token=i_am_an_app_service", request_data
7367 )
74 render(request, self.resource, self.clock)
68 self.render(request)
7569
7670 self.assertEquals(channel.result["code"], b"200", channel.result)
7771 det_data = {
8478 def test_POST_appservice_registration_invalid(self):
8579 self.appservice = None # no application service exists
8680 request_data = json.dumps({"username": "kermit"})
87 request, channel = make_request(
81 request, channel = self.make_request(
8882 b"POST", self.url + b"?access_token=i_am_an_app_service", request_data
8983 )
90 render(request, self.resource, self.clock)
84 self.render(request)
9185
9286 self.assertEquals(channel.result["code"], b"401", channel.result)
9387
9488 def test_POST_bad_password(self):
9589 request_data = json.dumps({"username": "kermit", "password": 666})
96 request, channel = make_request(b"POST", self.url, request_data)
97 render(request, self.resource, self.clock)
90 request, channel = self.make_request(b"POST", self.url, request_data)
91 self.render(request)
9892
9993 self.assertEquals(channel.result["code"], b"400", channel.result)
10094 self.assertEquals(channel.json_body["error"], "Invalid password")
10195
10296 def test_POST_bad_username(self):
10397 request_data = json.dumps({"username": 777, "password": "monkey"})
104 request, channel = make_request(b"POST", self.url, request_data)
105 render(request, self.resource, self.clock)
98 request, channel = self.make_request(b"POST", self.url, request_data)
99 self.render(request)
106100
107101 self.assertEquals(channel.result["code"], b"400", channel.result)
108102 self.assertEquals(channel.json_body["error"], "Invalid username")
120114 self.auth_handler.get_access_token_for_user_id = Mock(return_value=token)
121115 self.device_handler.check_device_registered = Mock(return_value=device_id)
122116
123 request, channel = make_request(b"POST", self.url, request_data)
124 render(request, self.resource, self.clock)
117 request, channel = self.make_request(b"POST", self.url, request_data)
118 self.render(request)
125119
126120 det_data = {
127121 "user_id": user_id,
142136 self.auth_result = (None, {"username": "kermit", "password": "monkey"}, None)
143137 self.registration_handler.register = Mock(return_value=("@user:id", "t"))
144138
145 request, channel = make_request(b"POST", self.url, request_data)
146 render(request, self.resource, self.clock)
139 request, channel = self.make_request(b"POST", self.url, request_data)
140 self.render(request)
147141
148142 self.assertEquals(channel.result["code"], b"403", channel.result)
149143 self.assertEquals(channel.json_body["error"], "Registration has been disabled")
154148 self.hs.config.allow_guest_access = True
155149 self.registration_handler.register = Mock(return_value=(user_id, None))
156150
157 request, channel = make_request(b"POST", self.url + b"?kind=guest", b"{}")
158 render(request, self.resource, self.clock)
151 request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
152 self.render(request)
159153
160154 det_data = {
161155 "user_id": user_id,
168162 def test_POST_disabled_guest_registration(self):
169163 self.hs.config.allow_guest_access = False
170164
171 request, channel = make_request(b"POST", self.url + b"?kind=guest", b"{}")
172 render(request, self.resource, self.clock)
165 request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
166 self.render(request)
173167
174168 self.assertEquals(channel.result["code"], b"403", channel.result)
175169 self.assertEquals(channel.json_body["error"], "Guest access is disabled")
1414
1515 from mock import Mock
1616
17 from synapse.rest.client.v1 import admin, login, room
1718 from synapse.rest.client.v2_alpha import sync
1819
1920 from tests import unittest
21 from tests.server import TimedOutException
2022
2123
2224 class FilterTestCase(unittest.HomeserverTestCase):
6466 ["next_batch", "rooms", "account_data", "to_device", "device_lists"]
6567 ).issubset(set(channel.json_body.keys()))
6668 )
69
70
71 class SyncTypingTests(unittest.HomeserverTestCase):
72
73 servlets = [
74 admin.register_servlets,
75 room.register_servlets,
76 login.register_servlets,
77 sync.register_servlets,
78 ]
79 user_id = True
80 hijack_auth = False
81
82 def test_sync_backwards_typing(self):
83 """
84 If the typing serial goes backwards and the typing handler is then reset
85 (such as when the master restarts and sets the typing serial to 0), we
86 do not incorrectly return typing information that had a serial greater
87 than the now-reset serial.
88 """
89 typing_url = "/rooms/%s/typing/%s?access_token=%s"
90 sync_url = "/sync?timeout=3000000&access_token=%s&since=%s"
91
92 # Register the user who gets notified
93 user_id = self.register_user("user", "pass")
94 access_token = self.login("user", "pass")
95
96 # Register the user who sends the message
97 other_user_id = self.register_user("otheruser", "pass")
98 other_access_token = self.login("otheruser", "pass")
99
100 # Create a room
101 room = self.helper.create_room_as(user_id, tok=access_token)
102
103 # Invite the other person
104 self.helper.invite(room=room, src=user_id, tok=access_token, targ=other_user_id)
105
106 # The other user joins
107 self.helper.join(room=room, user=other_user_id, tok=other_access_token)
108
109 # The other user sends some messages
110 self.helper.send(room, body="Hi!", tok=other_access_token)
111 self.helper.send(room, body="There!", tok=other_access_token)
112
113 # Start typing.
114 request, channel = self.make_request(
115 "PUT",
116 typing_url % (room, other_user_id, other_access_token),
117 b'{"typing": true, "timeout": 30000}',
118 )
119 self.render(request)
120 self.assertEquals(200, channel.code)
121
122 request, channel = self.make_request(
123 "GET", "/sync?access_token=%s" % (access_token,)
124 )
125 self.render(request)
126 self.assertEquals(200, channel.code)
127 next_batch = channel.json_body["next_batch"]
128
129 # Stop typing.
130 request, channel = self.make_request(
131 "PUT",
132 typing_url % (room, other_user_id, other_access_token),
133 b'{"typing": false}',
134 )
135 self.render(request)
136 self.assertEquals(200, channel.code)
137
138 # Start typing.
139 request, channel = self.make_request(
140 "PUT",
141 typing_url % (room, other_user_id, other_access_token),
142 b'{"typing": true, "timeout": 30000}',
143 )
144 self.render(request)
145 self.assertEquals(200, channel.code)
146
147 # Should return immediately
148 request, channel = self.make_request(
149 "GET", sync_url % (access_token, next_batch)
150 )
151 self.render(request)
152 self.assertEquals(200, channel.code)
153 next_batch = channel.json_body["next_batch"]
154
155 # Reset typing serial back to 0, as if the master had.
156 typing = self.hs.get_typing_handler()
157 typing._latest_room_serial = 0
158
159 # Since it checks the state token, we need some state to update to
160 # invalidate the stream token.
161 self.helper.send(room, body="There!", tok=other_access_token)
162
163 request, channel = self.make_request(
164 "GET", sync_url % (access_token, next_batch)
165 )
166 self.render(request)
167 self.assertEquals(200, channel.code)
168 next_batch = channel.json_body["next_batch"]
169
170 # This should time out! But it does not, because our stream token is
171 # ahead, and therefore it's saying the typing (that we've actually
172 # already seen) is new, since it's got a token above our new, now-reset
173 # stream token.
174 request, channel = self.make_request(
175 "GET", sync_url % (access_token, next_batch)
176 )
177 self.render(request)
178 self.assertEquals(200, channel.code)
179 next_batch = channel.json_body["next_batch"]
180
181 # Clear the typing information, so that it doesn't think everything is
182 # in the future.
183 typing._reset()
184
185 # Now it SHOULD fail as it never completes!
186 request, channel = self.make_request(
187 "GET", sync_url % (access_token, next_batch)
188 )
189 self.assertRaises(TimedOutException, self.render, request)
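The regression test above hinges on the typing handler's serial being monotonic: after the serial is reset to 0 without clearing state, cached entries keep serials above any freshly issued token, so already-seen typing looks "new". A minimal pure-Python model of that serial-gated cache (illustrative only; `TypingCache` and its methods are invented here, not Synapse's actual handler):

```python
class TypingCache:
    """Toy model of a serial-gated typing cache (illustrative only)."""

    def __init__(self):
        self.latest_serial = 0
        self._room_serials = {}  # room_id -> serial of last typing change

    def set_typing(self, room_id):
        self.latest_serial += 1
        self._room_serials[room_id] = self.latest_serial

    def get_new_typing(self, since_serial):
        # Anything with a serial above the client's token counts as "new".
        return [r for r, s in self._room_serials.items() if s > since_serial]

    def reset(self, clear_state):
        self.latest_serial = 0
        if clear_state:
            self._room_serials.clear()


cache = TypingCache()
cache.set_typing("!room:test")      # serial becomes 1
cache.reset(clear_state=False)      # serial goes backwards, state kept
stale = cache.get_new_typing(0)     # the old entry (serial 1) looks new again

cache.set_typing("!room:test")
cache.reset(clear_state=True)       # clearing state alongside the serial fixes it
fixed = cache.get_new_typing(0)
```

This is why the test calls `typing._reset()` rather than only zeroing `_latest_room_serial`.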
0 # -*- coding: utf-8 -*-
1 # Copyright 2018 New Vector Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 from mock import Mock
18
19 from twisted.internet.defer import Deferred
20
21 from synapse.config.repository import MediaStorageProviderConfig
22 from synapse.util.module_loader import load_module
23
24 from tests import unittest
25
26
27 class URLPreviewTests(unittest.HomeserverTestCase):
28
29 hijack_auth = True
30 user_id = "@test:user"
31
32 def make_homeserver(self, reactor, clock):
33
34 self.storage_path = self.mktemp()
35 os.mkdir(self.storage_path)
36
37 config = self.default_config()
38 config.url_preview_enabled = True
39 config.max_spider_size = 9999999
40 config.url_preview_url_blacklist = []
41 config.media_store_path = self.storage_path
42
43 provider_config = {
44 "module": "synapse.rest.media.v1.storage_provider.FileStorageProviderBackend",
45 "store_local": True,
46 "store_synchronous": False,
47 "store_remote": True,
48 "config": {"directory": self.storage_path},
49 }
50
51 loaded = list(load_module(provider_config)) + [
52 MediaStorageProviderConfig(False, False, False)
53 ]
54
55 config.media_storage_providers = [loaded]
56
57 hs = self.setup_test_homeserver(config=config)
58
59 return hs
60
61 def prepare(self, reactor, clock, hs):
62
63 self.fetches = []
64
65 def get_file(url, output_stream, max_size):
66 """
67 Returns tuple[int,dict,str,int] of file length, response headers,
68 absolute URI, and response code.
69 """
70
71 def write_to(r):
72 data, response = r
73 output_stream.write(data)
74 return response
75
76 d = Deferred()
77 d.addCallback(write_to)
78 self.fetches.append((d, url))
79 return d
80
81 client = Mock()
82 client.get_file = get_file
83
84 self.media_repo = hs.get_media_repository_resource()
85 preview_url = self.media_repo.children[b'preview_url']
86 preview_url.client = client
87 self.preview_url = preview_url
88
89 def test_cache_returns_correct_type(self):
90
91 request, channel = self.make_request(
92 "GET", "url_preview?url=matrix.org", shorthand=False
93 )
94 request.render(self.preview_url)
95 self.pump()
96
97 # We've made one fetch
98 self.assertEqual(len(self.fetches), 1)
99
100 end_content = (
101 b'<html><head>'
102 b'<meta property="og:title" content="~matrix~" />'
103 b'<meta property="og:description" content="hi" />'
104 b'</head></html>'
105 )
106
107 self.fetches[0][0].callback(
108 (
109 end_content,
110 (
111 len(end_content),
112 {
113 b"Content-Length": [b"%d" % (len(end_content))],
114 b"Content-Type": [b'text/html; charset="utf8"'],
115 },
116 "https://example.com",
117 200,
118 ),
119 )
120 )
121
122 self.pump()
123 self.assertEqual(channel.code, 200)
124 self.assertEqual(
125 channel.json_body, {"og:title": "~matrix~", "og:description": "hi"}
126 )
127
128 # Check the cache returns the correct response
129 request, channel = self.make_request(
130 "GET", "url_preview?url=matrix.org", shorthand=False
131 )
132 request.render(self.preview_url)
133 self.pump()
134
135 # Only one fetch, still, since we'll lean on the cache
136 self.assertEqual(len(self.fetches), 1)
137
138 # Check the cache response has the same content
139 self.assertEqual(channel.code, 200)
140 self.assertEqual(
141 channel.json_body, {"og:title": "~matrix~", "og:description": "hi"}
142 )
143
144 # Clear the in-memory cache
145 self.assertIn("matrix.org", self.preview_url._cache)
146 self.preview_url._cache.pop("matrix.org")
147 self.assertNotIn("matrix.org", self.preview_url._cache)
148
149 # Check the database cache returns the correct response
150 request, channel = self.make_request(
151 "GET", "url_preview?url=matrix.org", shorthand=False
152 )
153 request.render(self.preview_url)
154 self.pump()
155
156 # Only one fetch, still, since we'll lean on the cache
157 self.assertEqual(len(self.fetches), 1)
158
159 # Check the cache response has the same content
160 self.assertEqual(channel.code, 200)
161 self.assertEqual(
162 channel.json_body, {"og:title": "~matrix~", "og:description": "hi"}
163 )
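The `get_file` stub in `prepare` above follows a common pattern: record each outgoing fetch as a pending Deferred, and let the test resolve it later with `(data, response)` so the registered callback writes the body into the output stream. A self-contained sketch of that pattern (here `MiniDeferred` is a toy stand-in, not Twisted's `Deferred`):

```python
class MiniDeferred:
    """Tiny stand-in for a Twisted Deferred (illustrative only)."""

    def __init__(self):
        self._callbacks = []
        self.result = None

    def addCallback(self, fn):
        self._callbacks.append(fn)
        return self

    def callback(self, value):
        # Fire the chain; each callback's return value feeds the next.
        for fn in self._callbacks:
            value = fn(value)
        self.result = value


fetches = []


def get_file(url, output_stream):
    """Record the request; the test resolves the deferred later."""
    d = MiniDeferred()

    def write_to(r):
        data, response = r
        output_stream.extend(data)
        return response

    d.addCallback(write_to)
    fetches.append((d, url))
    return d


buf = bytearray()
d = get_file("http://example.com", buf)
# The test now controls exactly when (and with what) the "fetch" completes:
fetches[0][0].callback((b"<html/>", 200))
```

Because the test holds the deferreds, it can assert on `len(self.fetches)` to prove the cache prevented a second network hit.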
2020 from tests.utils import setup_test_homeserver as _sth
2121
2222
23 class TimedOutException(Exception):
24 """
25 A web query timed out.
26 """
27
28
2329 @attr.s
2430 class FakeChannel(object):
2531 """
2733 wire).
2834 """
2935
36 _reactor = attr.ib()
3037 result = attr.ib(default=attr.Factory(dict))
3138 _producer = None
3239
4956 self.result["headers"] = headers
5057
5158 def write(self, content):
59 assert isinstance(content, bytes), "Should be bytes! " + repr(content)
60
5261 if "body" not in self.result:
5362 self.result["body"] = b""
5463
5665
5766 def registerProducer(self, producer, streaming):
5867 self._producer = producer
68 self.producerStreaming = streaming
69
70 def _produce():
71 if self._producer:
72 self._producer.resumeProducing()
73 self._reactor.callLater(0.1, _produce)
74
75 if not streaming:
76 self._reactor.callLater(0.0, _produce)
5977
6078 def unregisterProducer(self):
6179 if self._producer is None:
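`FakeChannel.registerProducer` above mimics how a real transport drives a pull (non-streaming) producer: each `resumeProducing()` call yields one chunk, so the fake reactor must keep rescheduling the pump until the producer unregisters. A standalone sketch of that loop (class names here are illustrative, not Twisted's):

```python
class FakeReactor:
    """Collects callLater callbacks; run_all() fires one scheduled batch."""

    def __init__(self):
        self._calls = []

    def callLater(self, _delay, fn):
        self._calls.append(fn)

    def run_all(self):
        calls, self._calls = self._calls, []
        for fn in calls:
            fn()


class FakeTransport:
    def __init__(self, reactor):
        self._reactor = reactor
        self._producer = None
        self.received = []

    def registerProducer(self, producer, streaming):
        self._producer = producer

        def _produce():
            if self._producer:
                self._producer.resumeProducing()
                self._reactor.callLater(0.1, _produce)  # keep pumping

        if not streaming:
            self._reactor.callLater(0.0, _produce)

    def unregisterProducer(self):
        self._producer = None


class PullProducer:
    """A non-streaming producer: emits one chunk per resumeProducing()."""

    def __init__(self, chunks, transport):
        self._chunks = list(chunks)
        self._transport = transport

    def resumeProducing(self):
        if self._chunks:
            self._transport.received.append(self._chunks.pop(0))
        else:
            self._transport.unregisterProducer()


reactor = FakeReactor()
transport = FakeTransport(reactor)
transport.registerProducer(PullProducer([b"a", b"b"], transport), streaming=False)
for _ in range(5):  # pump until the producer runs dry
    reactor.run_all()
```

Rescheduling only while a producer is registered is what lets the test reactor drain a response body without spinning forever.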
97115 return FakeLogger()
98116
99117
100 def make_request(method, path, content=b"", access_token=None, request=SynapseRequest):
118 def make_request(
119 reactor,
120 method,
121 path,
122 content=b"",
123 access_token=None,
124 request=SynapseRequest,
125 shorthand=True,
126 ):
101127 """
102128 Make a web request using the given method and path, feed it the
103129 content, and return the Request and the Channel underneath.
130
131 Args:
132 method (bytes/unicode): The HTTP request method ("verb").
133 path (bytes/unicode): The HTTP path, suitably URL encoded (e.g.
134 escaped UTF-8 & spaces and such).
135 content (bytes or dict): The body of the request. JSON-encoded, if
136 a dict.
137 shorthand: Whether to try and be helpful and prefix the given URL
138 with the usual REST API path, if it doesn't contain it.
139
140 Returns:
141 A synapse.http.site.SynapseRequest.
104142 """
105143 if not isinstance(method, bytes):
106144 method = method.encode('ascii')
108146 if not isinstance(path, bytes):
109147 path = path.encode('ascii')
110148
111 # Decorate it to be the full path
112 if not path.startswith(b"/_matrix"):
149 # Decorate it to be the full path, if we're using shorthand
150 if shorthand and not path.startswith(b"/_matrix"):
113151 path = b"/_matrix/client/r0/" + path
114152 path = path.replace(b"//", b"/")
115153
117155 content = content.encode('utf8')
118156
119157 site = FakeSite()
120 channel = FakeChannel()
158 channel = FakeChannel(reactor)
121159
122160 req = request(site, channel)
123161 req.process = lambda: b""
124162 req.content = BytesIO(content)
125163
126164 if access_token:
127 req.requestHeaders.addRawHeader(b"Authorization", b"Bearer " + access_token)
165 req.requestHeaders.addRawHeader(
166 b"Authorization", b"Bearer " + access_token.encode('ascii')
167 )
128168
129169 if content:
130170 req.requestHeaders.addRawHeader(b"Content-Type", b"application/json")
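The new `shorthand` handling in `make_request` can be isolated into a few lines; a simplified, bytes-only sketch of the prefixing rule (`normalize_path` is a name invented here for illustration):

```python
def normalize_path(path, shorthand=True):
    """Prefix bare paths with the REST API root, as make_request does above."""
    if shorthand and not path.startswith(b"/_matrix"):
        path = b"/_matrix/client/r0/" + path
    # Collapse the double slash produced when the caller passed b"/foo".
    return path.replace(b"//", b"/")
```

So `b"/sync"` becomes `b"/_matrix/client/r0/sync"`, while full paths (or any path with `shorthand=False`, as the media tests need) pass through untouched.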
150190 x += 1
151191
152192 if x > timeout:
153 raise Exception("Timed out waiting for request to finish.")
193 raise TimedOutException("Timed out waiting for request to finish.")
154194
155195 clock.advance(0.1)
156196
33
44 from synapse.api.constants import EventTypes, ServerNoticeMsgType
55 from synapse.api.errors import ResourceLimitError
6 from synapse.handlers.auth import AuthHandler
76 from synapse.server_notices.resource_limits_server_notices import (
87 ResourceLimitsServerNotices,
98 )
1211 from tests.utils import setup_test_homeserver
1312
1413
15 class AuthHandlers(object):
16 def __init__(self, hs):
17 self.auth_handler = AuthHandler(hs)
18
19
2014 class TestResourceLimitsServerNotices(unittest.TestCase):
2115 @defer.inlineCallbacks
2216 def setUp(self):
23 self.hs = yield setup_test_homeserver(self.addCleanup, handlers=None)
24 self.hs.handlers = AuthHandlers(self.hs)
25 self.auth_handler = self.hs.handlers.auth_handler
17 self.hs = yield setup_test_homeserver(self.addCleanup)
2618 self.server_notices_sender = self.hs.get_server_notices_sender()
2719
2820 # relying on [1] is far from ideal, but the only case where
543543 state_res_store=TestStateResolutionStore(event_map),
544544 )
545545
546 self.assertTrue(state_d.called)
547 state_before = state_d.result
546 state_before = self.successResultOf(state_d)
548547
549548 state_after = dict(state_before)
550549 if fake_event.state_key is not None:
598597 self.assertEqual(["o", "l", "n", "m", "p"], res)
599598
600599
600 class SimpleParamStateTestCase(unittest.TestCase):
601 def setUp(self):
602 # We build up a simple DAG.
603
604 event_map = {}
605
606 create_event = FakeEvent(
607 id="CREATE",
608 sender=ALICE,
609 type=EventTypes.Create,
610 state_key="",
611 content={"creator": ALICE},
612 ).to_event([], [])
613 event_map[create_event.event_id] = create_event
614
615 alice_member = FakeEvent(
616 id="IMA",
617 sender=ALICE,
618 type=EventTypes.Member,
619 state_key=ALICE,
620 content=MEMBERSHIP_CONTENT_JOIN,
621 ).to_event([create_event.event_id], [create_event.event_id])
622 event_map[alice_member.event_id] = alice_member
623
624 join_rules = FakeEvent(
625 id="IJR",
626 sender=ALICE,
627 type=EventTypes.JoinRules,
628 state_key="",
629 content={"join_rule": JoinRules.PUBLIC},
630 ).to_event(
631 auth_events=[create_event.event_id, alice_member.event_id],
632 prev_events=[alice_member.event_id],
633 )
634 event_map[join_rules.event_id] = join_rules
635
636 # Bob and Charlie join at the same time, so there is a fork
637 bob_member = FakeEvent(
638 id="IMB",
639 sender=BOB,
640 type=EventTypes.Member,
641 state_key=BOB,
642 content=MEMBERSHIP_CONTENT_JOIN,
643 ).to_event(
644 auth_events=[create_event.event_id, join_rules.event_id],
645 prev_events=[join_rules.event_id],
646 )
647 event_map[bob_member.event_id] = bob_member
648
649 charlie_member = FakeEvent(
650 id="IMC",
651 sender=CHARLIE,
652 type=EventTypes.Member,
653 state_key=CHARLIE,
654 content=MEMBERSHIP_CONTENT_JOIN,
655 ).to_event(
656 auth_events=[create_event.event_id, join_rules.event_id],
657 prev_events=[join_rules.event_id],
658 )
659 event_map[charlie_member.event_id] = charlie_member
660
661 self.event_map = event_map
662 self.create_event = create_event
663 self.alice_member = alice_member
664 self.join_rules = join_rules
665 self.bob_member = bob_member
666 self.charlie_member = charlie_member
667
668 self.state_at_bob = {
669 (e.type, e.state_key): e.event_id
670 for e in [create_event, alice_member, join_rules, bob_member]
671 }
672
673 self.state_at_charlie = {
674 (e.type, e.state_key): e.event_id
675 for e in [create_event, alice_member, join_rules, charlie_member]
676 }
677
678 self.expected_combined_state = {
679 (e.type, e.state_key): e.event_id
680 for e in [create_event, alice_member, join_rules, bob_member, charlie_member]
681 }
682
683 def test_event_map_none(self):
684 # Test that we correctly handle passing `None` as the event_map
685
686 state_d = resolve_events_with_store(
687 [self.state_at_bob, self.state_at_charlie],
688 event_map=None,
689 state_res_store=TestStateResolutionStore(self.event_map),
690 )
691
692 state = self.successResultOf(state_d)
693
694 self.assert_dict(self.expected_combined_state, state)
695
696
601697 def pairwise(iterable):
602698 "s -> (s0,s1), (s1,s2), (s2, s3), ..."
603699 a, b = itertools.tee(iterable)
656752 result.add(event_id)
657753
658754 event = self.event_map[event_id]
659 for aid, _ in event.auth_events:
755 for aid in event.auth_event_ids():
660756 stack.append(aid)
661757
662758 return list(result)
4444 self.assertDictContainsSubset({"keys": json, "device_display_name": None}, dev)
4545
4646 @defer.inlineCallbacks
47 def test_reupload_key(self):
48 now = 1470174257070
49 json = {"key": "value"}
50
51 yield self.store.store_device("user", "device", None)
52
53 changed = yield self.store.set_e2e_device_keys("user", "device", now, json)
54 self.assertTrue(changed)
55
56 # If we try to upload the same key then we should be told nothing
57 # changed
58 changed = yield self.store.set_e2e_device_keys("user", "device", now, json)
59 self.assertFalse(changed)
60
61 @defer.inlineCallbacks
4762 def test_get_key_with_device_name(self):
4863 now = 1470174257070
4964 json = {"key": "value"}
111111 "origin_server_ts": 1,
112112 "type": "m.room.message",
113113 "origin": "test.serv",
114 "content": "hewwo?",
114 "content": {"body": "hewwo?"},
115115 "auth_events": [],
116116 "prev_events": [("two:test.serv", {}), (most_recent, {})],
117117 }
2020
2121 from synapse.api.constants import LoginType
2222 from synapse.api.errors import Codes, HttpResponseException, SynapseError
23 from synapse.http.server import JsonResource
2423 from synapse.rest.client.v2_alpha import register, sync
25 from synapse.util import Clock
2624
2725 from tests import unittest
28 from tests.server import (
29 ThreadedMemoryReactorClock,
30 make_request,
31 render,
32 setup_test_homeserver,
33 )
34
35
36 class TestMauLimit(unittest.TestCase):
37 def setUp(self):
38 self.reactor = ThreadedMemoryReactorClock()
39 self.clock = Clock(self.reactor)
40
41 self.hs = setup_test_homeserver(
42 self.addCleanup,
26
27
28 class TestMauLimit(unittest.HomeserverTestCase):
29
30 servlets = [register.register_servlets, sync.register_servlets]
31
32 def make_homeserver(self, reactor, clock):
33
34 self.hs = self.setup_test_homeserver(
4335 "red",
4436 http_client=None,
45 clock=self.clock,
46 reactor=self.reactor,
4737 federation_client=Mock(),
4838 ratelimiter=NonCallableMock(spec_set=["send_message"]),
4939 )
6252 self.hs.config.server_notices_mxid_display_name = None
6353 self.hs.config.server_notices_mxid_avatar_url = None
6454 self.hs.config.server_notices_room_name = "Test Server Notice Room"
65
66 self.resource = JsonResource(self.hs)
67 register.register_servlets(self.hs, self.resource)
68 sync.register_servlets(self.hs, self.resource)
55 return self.hs
6956
7057 def test_simple_deny_mau(self):
7158 # Create and sync so that the MAU counts get updated
192179 }
193180 )
194181
195 request, channel = make_request("POST", "/register", request_data)
196 render(request, self.resource, self.reactor)
182 request, channel = self.make_request("POST", "/register", request_data)
183 self.render(request)
197184
198185 if channel.code != 200:
199186 raise HttpResponseException(
205192 return access_token
206193
207194 def do_sync_for_user(self, token):
208 request, channel = make_request(
209 "GET", "/sync", access_token=token.encode('ascii')
195 request, channel = self.make_request(
196 "GET", "/sync", access_token=token
210197 )
211 render(request, self.resource, self.reactor)
198 self.render(request)
212199
213200 if channel.code != 200:
214201 raise HttpResponseException(
5656 "GET", [re.compile("^/_matrix/foo/(?P<room_id>[^/]*)$")], _callback
5757 )
5858
59 request, channel = make_request(b"GET", b"/_matrix/foo/%E2%98%83?a=%E2%98%83")
59 request, channel = make_request(
60 self.reactor, b"GET", b"/_matrix/foo/%E2%98%83?a=%E2%98%83"
61 )
6062 render(request, res, self.reactor)
6163
6264 self.assertEqual(request.args, {b'a': [u"\N{SNOWMAN}".encode('utf8')]})
7476 res = JsonResource(self.homeserver)
7577 res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)
7678
77 request, channel = make_request(b"GET", b"/_matrix/foo")
79 request, channel = make_request(self.reactor, b"GET", b"/_matrix/foo")
7880 render(request, res, self.reactor)
7981
8082 self.assertEqual(channel.result["code"], b'500')
9799 res = JsonResource(self.homeserver)
98100 res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)
99101
100 request, channel = make_request(b"GET", b"/_matrix/foo")
102 request, channel = make_request(self.reactor, b"GET", b"/_matrix/foo")
101103 render(request, res, self.reactor)
102104
103105 self.assertEqual(channel.result["code"], b'500')
114116 res = JsonResource(self.homeserver)
115117 res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)
116118
117 request, channel = make_request(b"GET", b"/_matrix/foo")
119 request, channel = make_request(self.reactor, b"GET", b"/_matrix/foo")
118120 render(request, res, self.reactor)
119121
120122 self.assertEqual(channel.result["code"], b'403')
135137 res = JsonResource(self.homeserver)
136138 res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)
137139
138 request, channel = make_request(b"GET", b"/_matrix/foobar")
140 request, channel = make_request(self.reactor, b"GET", b"/_matrix/foobar")
139141 render(request, res, self.reactor)
140142
141143 self.assertEqual(channel.result["code"], b'400')
0 # Copyright 2018 New Vector Ltd
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 import json
15
16 import six
17 from mock import Mock
18
19 from twisted.test.proto_helpers import MemoryReactorClock
20
21 from synapse.rest.client.v2_alpha.register import register_servlets
22 from synapse.util import Clock
23
24 from tests import unittest
25
26
27 class TermsTestCase(unittest.HomeserverTestCase):
28 servlets = [register_servlets]
29
30 def prepare(self, reactor, clock, hs):
31 self.clock = MemoryReactorClock()
32 self.hs_clock = Clock(self.clock)
33 self.url = "/_matrix/client/r0/register"
34 self.registration_handler = Mock()
35 self.auth_handler = Mock()
36 self.device_handler = Mock()
37 hs.config.enable_registration = True
38 hs.config.registrations_require_3pid = []
39 hs.config.auto_join_rooms = []
40 hs.config.enable_registration_captcha = False
41
42 def test_ui_auth(self):
43 self.hs.config.user_consent_at_registration = True
44 self.hs.config.user_consent_policy_name = "My Cool Privacy Policy"
45 self.hs.config.public_baseurl = "https://example.org"
46 self.hs.config.user_consent_version = "1.0"
47
48 # Do a UI auth request
49 request, channel = self.make_request(b"POST", self.url, b"{}")
50 self.render(request)
51
52 self.assertEquals(channel.result["code"], b"401", channel.result)
53
54 self.assertTrue(channel.json_body is not None)
55 self.assertIsInstance(channel.json_body["session"], six.text_type)
56
57 self.assertIsInstance(channel.json_body["flows"], list)
58 for flow in channel.json_body["flows"]:
59 self.assertIsInstance(flow["stages"], list)
60 self.assertTrue(len(flow["stages"]) > 0)
61 self.assertEquals(flow["stages"][-1], "m.login.terms")
62
63 expected_params = {
64 "m.login.terms": {
65 "policies": {
66 "privacy_policy": {
67 "en": {
68 "name": "My Cool Privacy Policy",
69 "url": "https://example.org/_matrix/consent?v=1.0",
70 },
71 "version": "1.0"
72 },
73 },
74 },
75 }
76 self.assertIsInstance(channel.json_body["params"], dict)
77 self.assertDictContainsSubset(channel.json_body["params"], expected_params)
78
79 # We have to complete the dummy auth stage before completing the terms stage
80 request_data = json.dumps(
81 {
82 "username": "kermit",
83 "password": "monkey",
84 "auth": {
85 "session": channel.json_body["session"],
86 "type": "m.login.dummy",
87 },
88 }
89 )
90
91 self.registration_handler.check_username = Mock(return_value=True)
92
93 request, channel = self.make_request(b"POST", self.url, request_data)
94 self.render(request)
95
96 # We don't bother checking that the response is correct - we'll leave that to
97 # other tests. We just want to make sure we're on the right path.
98 self.assertEquals(channel.result["code"], b"401", channel.result)
99
100 # Finish the UI auth for terms
101 request_data = json.dumps(
102 {
103 "username": "kermit",
104 "password": "monkey",
105 "auth": {
106 "session": channel.json_body["session"],
107 "type": "m.login.terms",
108 },
109 }
110 )
111 request, channel = self.make_request(b"POST", self.url, request_data)
112 self.render(request)
113
114 # We're interested in getting a response that looks like a successful
115 # registration, not so much that the details are exactly what we want.
116
117 self.assertEquals(channel.result["code"], b"200", channel.result)
118
119 self.assertTrue(channel.json_body is not None)
120 self.assertIsInstance(channel.json_body["user_id"], six.text_type)
121 self.assertIsInstance(channel.json_body["access_token"], six.text_type)
122 self.assertIsInstance(channel.json_body["device_id"], six.text_type)
145145 return target
146146
147147
148 def INFO(target):
149 """A decorator to set the .loglevel attribute to logging.INFO.
150 Can apply to either a TestCase or an individual test method."""
151 target.loglevel = logging.INFO
152 return target
153
154
148155 class HomeserverTestCase(TestCase):
149156 """
150157 A base TestCase that reduces boilerplate for HomeServer-using test cases.
181188 for servlet in self.servlets:
182189 servlet(self.hs, self.resource)
183190
191 from tests.rest.client.v1.utils import RestHelper
192
193 self.helper = RestHelper(self.hs, self.resource, getattr(self, "user_id", None))
194
184195 if hasattr(self, "user_id"):
185 from tests.rest.client.v1.utils import RestHelper
186
187 self.helper = RestHelper(self.hs, self.resource, self.user_id)
188
189196 if self.hijack_auth:
190197
191198 def get_user_by_access_token(token=None, allow_guest=False):
250257 """
251258
252259 def make_request(
253 self, method, path, content=b"", access_token=None, request=SynapseRequest
260 self,
261 method,
262 path,
263 content=b"",
264 access_token=None,
265 request=SynapseRequest,
266 shorthand=True,
254267 ):
255268 """
256269 Create a SynapseRequest at the path using the method and containing the
262275 escaped UTF-8 & spaces and such).
263276 content (bytes or dict): The body of the request. JSON-encoded, if
264277 a dict.
278 shorthand: Whether to try and be helpful and prefix the given URL
279 with the usual REST API path, if it doesn't contain it.
265280
266281 Returns:
267282 A synapse.http.site.SynapseRequest.
269284 if isinstance(content, dict):
270285 content = json.dumps(content).encode('utf8')
271286
272 return make_request(method, path, content, access_token, request)
287 return make_request(
288 self.reactor, method, path, content, access_token, request, shorthand
289 )
273290
274291 def render(self, request):
275292 """
372389 self.render(request)
373390 self.assertEqual(channel.code, 200)
374391
375 access_token = channel.json_body["access_token"].encode('ascii')
392 access_token = channel.json_body["access_token"]
376393 return access_token
122122 config.user_directory_search_all_users = False
123123 config.user_consent_server_notice_content = None
124124 config.block_events_without_consent_error = None
125 config.user_consent_at_registration = False
126 config.user_consent_policy_name = "Privacy Policy"
125127 config.media_storage_providers = []
126128 config.autocreate_auto_join_rooms = True
127129 config.auto_join_rooms = []
99
1010 # needed by some of the tests
1111 lxml
12
13 # cryptography 2.2 requires setuptools >= 18.5
14 #
15 # older versions of virtualenv (?) give us a virtualenv with the same
16 # version of setuptools as is installed on the system python (and tox runs
17 # virtualenv under python3, so we get the version of setuptools that is
18 # installed on that).
19 #
20 # anyway, make sure that we have a recent enough setuptools.
21 setuptools>=18.5
22
23 # we also need a semi-recent version of pip, because old ones fail to
24 # install the "enum34" dependency of cryptography.
25 pip>=10
1226
1327 setenv =
1428 PYTHONDONTWRITEBYTECODE = no_byte_code
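The `TRIAL_FLAGS` variable set in `.travis.yml` only takes effect if the tox environments forward it to trial via tox's environment-variable substitution. A hedged sketch of what that wiring typically looks like in a `tox.ini` (simplified, not the exact file):

```ini
[testenv]
# Let the Travis-set variable through to the test environment...
passenv = TRIAL_FLAGS
# ...and splice it into the trial invocation (empty default if unset).
commands = trial {env:TRIAL_FLAGS:} {posargs:tests}
```

With `TRIAL_FLAGS="-j 2"` exported, trial then runs the suite across two workers.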
107121 basepython = python3.6
108122 deps =
109123 flake8
110 commands = /bin/sh -c "flake8 synapse tests scripts scripts-dev scripts/register_new_matrix_user scripts/synapse_port_db synctl {env:PEP8SUFFIX:}"
124 commands = /bin/sh -c "flake8 synapse tests scripts scripts-dev scripts/hash_password scripts/register_new_matrix_user scripts/synapse_port_db synctl {env:PEP8SUFFIX:}"
111125
112126 [testenv:check_isort]
113127 skip_install = True