aioredis / fc07d33
Update upstream source from tag 'upstream/1.3.1': update to upstream version '1.3.1', with Debian dir 27afbabe1a4623f247f1200c34af49c52653326d (Piotr Ożarowski, 4 years ago)
82 changed file(s) with 3442 addition(s) and 3127 deletion(s).
00 Changes
11 -------
2
3 .. towncrier release notes start
4
5 1.3.1 (2019-12-02)
6 ^^^^^^^^^^^^^^^^^^
7 Bugfixes
8 ~~~~~~~~
9
10 - Fix transaction data decoding
11 (see `#657 <https://github.com/aio-libs/aioredis/issues/657>`_);
12 - Fix duplicate calls to ``pool.wait_closed()`` upon ``create_pool()`` exception.
13 (see `#671 <https://github.com/aio-libs/aioredis/issues/671>`_);
14
15 Deprecations and Removals
16 ~~~~~~~~~~~~~~~~~~~~~~~~~
17
18 - Drop explicit loop requirement in API.
19 Deprecate ``loop`` argument.
20 Throw warning in Python 3.8+ if explicit ``loop`` is passed to methods.
21 (see `#666 <https://github.com/aio-libs/aioredis/issues/666>`_);
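
A minimal sketch of the new loop-free calling convention (an editor's illustration, not part of the upstream changelog), assuming a Redis server on localhost; on Python 3.8+ passing an explicit ``loop`` now only emits a ``DeprecationWarning``:

.. code:: python

    import asyncio
    import aioredis

    async def main():
        # No ``loop`` argument: the running event loop is picked up internally.
        redis = await aioredis.create_redis_pool('redis://localhost')
        await redis.set('my-key', 'value')
        print(await redis.get('my-key', encoding='utf-8'))
        redis.close()
        await redis.wait_closed()

    asyncio.run(main())
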
22
23 Misc
24 ~~~~
25
26 - `#643 <https://github.com/aio-libs/aioredis/issues/643>`_,
27 `#646 <https://github.com/aio-libs/aioredis/issues/646>`_,
28 `#648 <https://github.com/aio-libs/aioredis/issues/648>`_;
29
30
31 1.3.0 (2019-09-24)
32 ^^^^^^^^^^^^^^^^^^
33 Features
34 ~~~~~~~~
35
36 - Added ``xdel`` and ``xtrim`` methods, which were missing from ``commands/streams.py``, along with unit tests for them
37 (see `#438 <https://github.com/aio-libs/aioredis/issues/438>`_);
38 - Add ``count`` argument to ``spop`` command
39 (see `#485 <https://github.com/aio-libs/aioredis/issues/485>`_);
40 - Add support for ``zpopmax`` and ``zpopmin`` redis commands
41 (see `#550 <https://github.com/aio-libs/aioredis/issues/550>`_);
42 - Add ``towncrier``: change notes are now stored in ``CHANGES.txt``
43 (see `#576 <https://github.com/aio-libs/aioredis/issues/576>`_);
44 - Type hints for the library
45 (see `#584 <https://github.com/aio-libs/aioredis/issues/584>`_);
46 - A few additions to the sorted set commands:
47
48 - the blocking pop commands: ``BZPOPMAX`` and ``BZPOPMIN``
49
50 - the ``CH`` and ``INCR`` options of the ``ZADD`` command
51
52 (see `#618 <https://github.com/aio-libs/aioredis/issues/618>`_);
53 - Added ``no_ack`` parameter to ``xread_group`` streams method in ``commands/streams.py``
54 (see `#625 <https://github.com/aio-libs/aioredis/issues/625>`_);
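
A brief usage sketch of the commands listed above (an editor's addition, not from the upstream changelog), assuming a local Redis server of version 5.0 or newer:

.. code:: python

    import asyncio
    import aioredis

    async def demo():
        redis = await aioredis.create_redis_pool('redis://localhost')

        # SPOP with the new ``count`` argument removes up to two random members.
        await redis.sadd('tags', 'a', 'b', 'c')
        print(await redis.spop('tags', 2))

        # ZADD with the new ``changed`` (CH) flag, then ZPOPMAX / ZPOPMIN.
        await redis.zadd('scores', 1, 'alice', 2, 'bob', changed=True)
        print(await redis.zpopmax('scores'))
        print(await redis.zpopmin('scores'))

        redis.close()
        await redis.wait_closed()

    asyncio.run(demo())
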
55
56 Bugfixes
57 ~~~~~~~~
58
59 - Fix for sensitive logging
60 (see `#459 <https://github.com/aio-libs/aioredis/issues/459>`_);
61 - Fix slow memory leak in ``wait_closed`` implementation
62 (see `#498 <https://github.com/aio-libs/aioredis/issues/498>`_);
63 - Fix handling of instances where Redis returns null fields for a stream message
64 (see `#605 <https://github.com/aio-libs/aioredis/issues/605>`_);
65
66 Improved Documentation
67 ~~~~~~~~~~~~~~~~~~~~~~
68
69 - Rewrite "Getting started" documentation.
70 (see `#641 <https://github.com/aio-libs/aioredis/issues/641>`_);
71
72 Misc
73 ~~~~
74
75 - `#585 <https://github.com/aio-libs/aioredis/issues/585>`_,
76 `#611 <https://github.com/aio-libs/aioredis/issues/611>`_,
77 `#612 <https://github.com/aio-libs/aioredis/issues/612>`_,
78 `#619 <https://github.com/aio-libs/aioredis/issues/619>`_,
79 `#620 <https://github.com/aio-libs/aioredis/issues/620>`_,
80 `#642 <https://github.com/aio-libs/aioredis/issues/642>`_;
81
282
383 1.2.0 (2018-10-24)
484 ^^^^^^^^^^^^^^^^^^
348428 * Fixed cancellation of wait_closed
349429 (see `#118 <https://github.com/aio-libs/aioredis/issues/118>`_);
350430
351 * Fixed ``time()`` convertion to float
431 * Fixed ``time()`` conversion to float
352432 (see `#126 <https://github.com/aio-libs/aioredis/issues/126>`_);
353433
354434 * Fixed ``hmset()`` method to return bool instead of ``b'OK'``
77 Alexander Shorin
88 Aliaksei Urbanski
99 Andrew Svetlov
10 Anton Salii
1011 Anton Verinov
12 Artem Mazur
1113 <cynecx>
1214 David Francos
1315 Dima Kruk
1517 Hugo <hugovk>
1618 Ihor Gorobets
1719 Ihor Liubymov
20 Ilya Samartsev
1821 James Hilliard
1922 Jan Špaček
2023 Jeff Moser
2427 Marek Szapiel
2528 Marijn Giesen
2629 Martin <the-panda>
30 Maxim Dodonchuk
2731 Michael Käufl
2832 Nickolai Novik
33 Oleg Butuzov
34 Oleksandr Tykhonruk
2935 Pau Freixes
3036 Paul Colomiets
3137 Samuel Colvin
3238 Samuel Dion-Girardeau
39 Sergey Miletskiy
3340 SeungHyun Hwang
3441 Taku Fukada
3542 Taras Voinarovskyi
3643 Thanos Lefteris
3744 Thomas Steinacher
3845 Volodymyr Hotsyk
46 Youngmin Koo <youngminz>
47 Dima Kit
48 <curiouscod3>
49 Dmitry Vasilishin <dmvass>
00 Metadata-Version: 1.1
11 Name: aioredis
2 Version: 1.2.0
2 Version: 1.3.1
33 Summary: asyncio (PEP 3156) Redis support
44 Home-page: https://github.com/aio-libs/aioredis
55 Author: Alexey Popravka
3434 Sentinel support Yes
3535 Redis Cluster support WIP
3636 Trollius (python 2.7) No
37 Tested CPython versions `3.5, 3.6 3.7 <travis_>`_ [2]_
38 Tested PyPy3 versions `5.9.0 <travis_>`_
39 Tested for Redis server `2.6, 2.8, 3.0, 3.2, 4.0 <travis_>`_
37 Tested CPython versions `3.5.3, 3.6, 3.7 <travis_>`_ [1]_
38 Tested PyPy3 versions `pypy3.5-7.0 pypy3.6-7.1.1 <travis_>`_
39 Tested for Redis server `2.6, 2.8, 3.0, 3.2, 4.0 5.0 <travis_>`_
4040 Support for dev Redis server through low-level API
4141 ================================ ==============================
4242
43
44 .. [2] For Python 3.3, 3.4 support use aioredis v0.3.
43 .. [1] For Python 3.3, 3.4 support use aioredis v0.3.
4544
4645 Documentation
4746 -------------
4847
4948 http://aioredis.readthedocs.io/
5049
51 Usage examples
52 --------------
53
54 Simple low-level interface:
50 Usage example
51 -------------
52
53 Simple high-level interface with connections pool:
5554
5655 .. code:: python
5756
5857 import asyncio
5958 import aioredis
6059
61 loop = asyncio.get_event_loop()
62
6360 async def go():
64 conn = await aioredis.create_connection(
65 'redis://localhost', loop=loop)
66 await conn.execute('set', 'my-key', 'value')
67 val = await conn.execute('get', 'my-key')
68 print(val)
69 conn.close()
70 await conn.wait_closed()
71 loop.run_until_complete(go())
72 # will print 'value'
73
74 Simple high-level interface:
75
76 .. code:: python
77
78 import asyncio
79 import aioredis
80
81 loop = asyncio.get_event_loop()
82
83 async def go():
84 redis = await aioredis.create_redis(
85 'redis://localhost', loop=loop)
61 redis = await aioredis.create_redis_pool(
62 'redis://localhost')
8663 await redis.set('my-key', 'value')
87 val = await redis.get('my-key')
64 val = await redis.get('my-key', encoding='utf-8')
8865 print(val)
8966 redis.close()
9067 await redis.wait_closed()
91 loop.run_until_complete(go())
92 # will print 'value'
93
94 Connections pool:
95
96 .. code:: python
97
98 import asyncio
99 import aioredis
100
101 loop = asyncio.get_event_loop()
102
103 async def go():
104 pool = await aioredis.create_pool(
105 'redis://localhost',
106 minsize=5, maxsize=10,
107 loop=loop)
108 await pool.execute('set', 'my-key', 'value')
109 print(await pool.execute('get', 'my-key'))
110 # graceful shutdown
111 pool.close()
112 await pool.wait_closed()
113
114 loop.run_until_complete(go())
115
116 Simple high-level interface with connections pool:
117
118 .. code:: python
119
120 import asyncio
121 import aioredis
122
123 loop = asyncio.get_event_loop()
124
125 async def go():
126 redis = await aioredis.create_redis_pool(
127 'redis://localhost',
128 minsize=5, maxsize=10,
129 loop=loop)
130 await redis.set('my-key', 'value')
131 val = await redis.get('my-key')
132 print(val)
133 redis.close()
134 await redis.wait_closed()
135 loop.run_until_complete(go())
68
69 asyncio.run(go())
13670 # will print 'value'
13771
13872 Requirements
170104
171105 Changes
172106 -------
107
108 .. towncrier release notes start
109
110 1.3.1 (2019-12-02)
111 ^^^^^^^^^^^^^^^^^^
112 Bugfixes
113 ~~~~~~~~
114
115 - Fix transaction data decoding
116 (see `#657 <https://github.com/aio-libs/aioredis/issues/657>`_);
117 - Fix duplicate calls to ``pool.wait_closed()`` upon ``create_pool()`` exception.
118 (see `#671 <https://github.com/aio-libs/aioredis/issues/671>`_);
119
120 Deprecations and Removals
121 ~~~~~~~~~~~~~~~~~~~~~~~~~
122
123 - Drop explicit loop requirement in API.
124 Deprecate ``loop`` argument.
125 Throw warning in Python 3.8+ if explicit ``loop`` is passed to methods.
126 (see `#666 <https://github.com/aio-libs/aioredis/issues/666>`_);
127
128 Misc
129 ~~~~
130
131 - `#643 <https://github.com/aio-libs/aioredis/issues/643>`_,
132 `#646 <https://github.com/aio-libs/aioredis/issues/646>`_,
133 `#648 <https://github.com/aio-libs/aioredis/issues/648>`_;
134
135
136 1.3.0 (2019-09-24)
137 ^^^^^^^^^^^^^^^^^^
138 Features
139 ~~~~~~~~
140
141 - Added ``xdel`` and ``xtrim`` methods, which were missing from ``commands/streams.py``, along with unit tests for them
142 (see `#438 <https://github.com/aio-libs/aioredis/issues/438>`_);
143 - Add ``count`` argument to ``spop`` command
144 (see `#485 <https://github.com/aio-libs/aioredis/issues/485>`_);
145 - Add support for ``zpopmax`` and ``zpopmin`` redis commands
146 (see `#550 <https://github.com/aio-libs/aioredis/issues/550>`_);
147 - Add ``towncrier``: change notes are now stored in ``CHANGES.txt``
148 (see `#576 <https://github.com/aio-libs/aioredis/issues/576>`_);
149 - Type hints for the library
150 (see `#584 <https://github.com/aio-libs/aioredis/issues/584>`_);
151 - A few additions to the sorted set commands:
152
153 - the blocking pop commands: ``BZPOPMAX`` and ``BZPOPMIN``
154
155 - the ``CH`` and ``INCR`` options of the ``ZADD`` command
156
157 (see `#618 <https://github.com/aio-libs/aioredis/issues/618>`_);
158 - Added ``no_ack`` parameter to ``xread_group`` streams method in ``commands/streams.py``
159 (see `#625 <https://github.com/aio-libs/aioredis/issues/625>`_);
160
161 Bugfixes
162 ~~~~~~~~
163
164 - Fix for sensitive logging
165 (see `#459 <https://github.com/aio-libs/aioredis/issues/459>`_);
166 - Fix slow memory leak in ``wait_closed`` implementation
167 (see `#498 <https://github.com/aio-libs/aioredis/issues/498>`_);
168 - Fix handling of instances where Redis returns null fields for a stream message
169 (see `#605 <https://github.com/aio-libs/aioredis/issues/605>`_);
170
171 Improved Documentation
172 ~~~~~~~~~~~~~~~~~~~~~~
173
174 - Rewrite "Getting started" documentation.
175 (see `#641 <https://github.com/aio-libs/aioredis/issues/641>`_);
176
177 Misc
178 ~~~~
179
180 - `#585 <https://github.com/aio-libs/aioredis/issues/585>`_,
181 `#611 <https://github.com/aio-libs/aioredis/issues/611>`_,
182 `#612 <https://github.com/aio-libs/aioredis/issues/612>`_,
183 `#619 <https://github.com/aio-libs/aioredis/issues/619>`_,
184 `#620 <https://github.com/aio-libs/aioredis/issues/620>`_,
185 `#642 <https://github.com/aio-libs/aioredis/issues/642>`_;
186
173187
174188 1.2.0 (2018-10-24)
175189 ^^^^^^^^^^^^^^^^^^
519533 * Fixed cancellation of wait_closed
520534 (see `#118 <https://github.com/aio-libs/aioredis/issues/118>`_);
521535
522 * Fixed ``time()`` convertion to float
536 * Fixed ``time()`` conversion to float
523537 (see `#126 <https://github.com/aio-libs/aioredis/issues/126>`_);
524538
525539 * Fixed ``hmset()`` method to return bool instead of ``b'OK'``
2626 Sentinel support Yes
2727 Redis Cluster support WIP
2828 Trollius (python 2.7) No
29 Tested CPython versions `3.5, 3.6 3.7 <travis_>`_ [2]_
30 Tested PyPy3 versions `5.9.0 <travis_>`_
31 Tested for Redis server `2.6, 2.8, 3.0, 3.2, 4.0 <travis_>`_
29 Tested CPython versions `3.5.3, 3.6, 3.7 <travis_>`_ [1]_
30 Tested PyPy3 versions `pypy3.5-7.0 pypy3.6-7.1.1 <travis_>`_
31 Tested for Redis server `2.6, 2.8, 3.0, 3.2, 4.0 5.0 <travis_>`_
3232 Support for dev Redis server through low-level API
3333 ================================ ==============================
3434
35
36 .. [2] For Python 3.3, 3.4 support use aioredis v0.3.
35 .. [1] For Python 3.3, 3.4 support use aioredis v0.3.
3736
3837 Documentation
3938 -------------
4039
4140 http://aioredis.readthedocs.io/
4241
43 Usage examples
44 --------------
45
46 Simple low-level interface:
47
48 .. code:: python
49
50 import asyncio
51 import aioredis
52
53 loop = asyncio.get_event_loop()
54
55 async def go():
56 conn = await aioredis.create_connection(
57 'redis://localhost', loop=loop)
58 await conn.execute('set', 'my-key', 'value')
59 val = await conn.execute('get', 'my-key')
60 print(val)
61 conn.close()
62 await conn.wait_closed()
63 loop.run_until_complete(go())
64 # will print 'value'
65
66 Simple high-level interface:
67
68 .. code:: python
69
70 import asyncio
71 import aioredis
72
73 loop = asyncio.get_event_loop()
74
75 async def go():
76 redis = await aioredis.create_redis(
77 'redis://localhost', loop=loop)
78 await redis.set('my-key', 'value')
79 val = await redis.get('my-key')
80 print(val)
81 redis.close()
82 await redis.wait_closed()
83 loop.run_until_complete(go())
84 # will print 'value'
85
86 Connections pool:
87
88 .. code:: python
89
90 import asyncio
91 import aioredis
92
93 loop = asyncio.get_event_loop()
94
95 async def go():
96 pool = await aioredis.create_pool(
97 'redis://localhost',
98 minsize=5, maxsize=10,
99 loop=loop)
100 await pool.execute('set', 'my-key', 'value')
101 print(await pool.execute('get', 'my-key'))
102 # graceful shutdown
103 pool.close()
104 await pool.wait_closed()
105
106 loop.run_until_complete(go())
42 Usage example
43 -------------
10744
10845 Simple high-level interface with connections pool:
10946
11249 import asyncio
11350 import aioredis
11451
115 loop = asyncio.get_event_loop()
116
11752 async def go():
11853 redis = await aioredis.create_redis_pool(
119 'redis://localhost',
120 minsize=5, maxsize=10,
121 loop=loop)
54 'redis://localhost')
12255 await redis.set('my-key', 'value')
123 val = await redis.get('my-key')
56 val = await redis.get('my-key', encoding='utf-8')
12457 print(val)
12558 redis.close()
12659 await redis.wait_closed()
127 loop.run_until_complete(go())
60
61 asyncio.run(go())
12862 # will print 'value'
12963
13064 Requirements
2727 )
2828
2929
30 __version__ = '1.2.0'
30 __version__ = '1.3.1'
3131
3232 __all__ = [
3333 # Factories
22 These are intended to be used for implementing custom connection managers.
33 """
44 import abc
5 import asyncio
6
7 from abc import ABC
85
96
107 __all__ = [
1411 ]
1512
1613
17 class AbcConnection(ABC):
14 class AbcConnection(abc.ABC):
1815 """Abstract connection interface."""
1916
2017 @abc.abstractmethod
2926 def close(self):
3027 """Perform connection(s) close and resources cleanup."""
3128
32 @asyncio.coroutine
3329 @abc.abstractmethod
34 def wait_closed(self):
30 async def wait_closed(self):
3531 """
3632 Coroutine waiting until all resources are closed/released/cleaned up.
3733 """
8379 """
8480
8581 @abc.abstractmethod
86 def get_connection(self): # TODO: arguments
82 def get_connection(self, command, args=()):
8783 """
8884 Gets free connection from pool in a sync way.
8985
9086 If no connection available — returns None.
9187 """
9288
93 @asyncio.coroutine
9489 @abc.abstractmethod
95 def acquire(self): # TODO: arguments
90 async def acquire(self, command=None, args=()):
9691 """Acquires connection from pool."""
9792
9893 @abc.abstractmethod
99 def release(self, conn): # TODO: arguments
94 def release(self, conn):
10095 """Releases connection to pool.
10196
10297 :param AbcConnection conn: Owned connection to be released.
108103 """Connection address or None."""
109104
110105
111 class AbcChannel(ABC):
106 class AbcChannel(abc.ABC):
112107 """Abstract Pub/Sub Channel interface."""
113108
114109 @property
127122 """Flag indicating that channel has unreceived messages
128123 and not marked as closed."""
129124
130 @asyncio.coroutine
131125 @abc.abstractmethod
132 def get(self):
126 async def get(self):
133127 """Wait and return new message.
134128
135129 Will raise ``ChannelClosedError`` if channel is not active.
118118 return self.execute('QUIT')
119119
120120 def select(self, db):
121 """Change the selected database for the current connection.
122
123 This method wraps call to :meth:`aioredis.RedisConnection.select()`
124 """
121 """Change the selected database."""
125122 return self._pool_or_conn.select(db)
126123
127124 def swapdb(self, from_index, to_index):
142142 """Returns the kind of internal representation used in order
143143 to store the value associated with a key (OBJECT ENCODING).
144144 """
145 # TODO: set default encoding to 'utf-8'
146 return self.execute(b'OBJECT', b'ENCODING', key)
145 return self.execute(b'OBJECT', b'ENCODING', key, encoding='utf-8')
147146
148147 def object_idletime(self, key):
149148 """Returns the number of seconds since the object is not requested
00 from collections import namedtuple
11
22 from aioredis.util import wait_ok, wait_convert, wait_make_dict, _NOTSET
3 from aioredis.log import logger
43
54
65 class ServerCommandsMixin:
205204 else:
206205 return self.execute(b'SHUTDOWN')
207206
208 def slaveof(self, host=_NOTSET, port=None):
207 def slaveof(self, host, port=None):
209208 """Make the server a slave of another instance,
210209 or promote it as master.
211210
215214 ``slaveof()`` form deprecated
216215 in favour of explicit ``slaveof(None)``.
217216 """
218 if host is _NOTSET:
219 logger.warning("slaveof() form is deprecated!"
220 " Use slaveof(None) to turn redis into a MASTER.")
221 host = None
222 # TODO: drop in 0.3.0
223217 if host is None and port is None:
224218 return self.execute(b'SLAVEOF', b'NO', b'ONE')
225219 return self.execute(b'SLAVEOF', host, port)
4242 """Move a member from one set to another."""
4343 return self.execute(b'SMOVE', sourcekey, destkey, member)
4444
45 def spop(self, key, *, encoding=_NOTSET):
46 """Remove and return a random member from a set."""
47 return self.execute(b'SPOP', key, encoding=encoding)
45 def spop(self, key, count=None, *, encoding=_NOTSET):
46 """Remove and return one or multiple random members from a set."""
47 args = [key]
48 if count is not None:
49 args.append(count)
50 return self.execute(b'SPOP', *args, encoding=encoding)
4851
4952 def srandmember(self, key, count=None, *, encoding=_NOTSET):
5053 """Get one or multiple random members from a set."""
1717 ZSET_IF_NOT_EXIST = 'ZSET_IF_NOT_EXIST' # NX
1818 ZSET_IF_EXIST = 'ZSET_IF_EXIST' # XX
1919
20 def zadd(self, key, score, member, *pairs, exist=None):
20 def bzpopmax(self, key, *keys, timeout=0, encoding=_NOTSET):
21 """Remove and get an element with the highest score in the sorted set,
22 or block until one is available.
23
24 :raises TypeError: if timeout is not int
25 :raises ValueError: if timeout is less than 0
26 """
27 if not isinstance(timeout, int):
28 raise TypeError("timeout argument must be int")
29 if timeout < 0:
30 raise ValueError("timeout must be greater equal 0")
31 args = keys + (timeout,)
32 return self.execute(b'BZPOPMAX', key, *args, encoding=encoding)
33
34 def bzpopmin(self, key, *keys, timeout=0, encoding=_NOTSET):
35 """Remove and get an element with the lowest score in the sorted set,
36 or block until one is available.
37
38 :raises TypeError: if timeout is not int
39 :raises ValueError: if timeout is less than 0
40 """
41 if not isinstance(timeout, int):
42 raise TypeError("timeout argument must be int")
43 if timeout < 0:
44 raise ValueError("timeout must be greater equal 0")
45 args = keys + (timeout,)
46 return self.execute(b'BZPOPMIN', key, *args, encoding=encoding)
47
48 def zadd(self, key, score, member, *pairs, exist=None, changed=False,
49 incr=False):
2150 """Add one or more members to a sorted set or update its score.
2251
2352 :raises TypeError: score not int or float
3766 args.append(b'XX')
3867 elif exist is self.ZSET_IF_NOT_EXIST:
3968 args.append(b'NX')
69
70 if changed:
71 args.append(b'CH')
72
73 if incr:
74 if pairs:
75 raise ValueError('only one score-element pair '
76 'can be specified in this mode')
77 args.append(b'INCR')
4078
4179 args.extend([score, member])
4280 if pairs:
423461 match=match,
424462 count=count))
425463
464 def zpopmin(self, key, count=None, *, encoding=_NOTSET):
465 """Removes and returns up to count members with the lowest scores
466 in the sorted set stored at key.
467
468 :raises TypeError: if count is not int
469 """
470 if count is not None and not isinstance(count, int):
471 raise TypeError("count argument must be int")
472
473 args = []
474 if count is not None:
475 args.extend([count])
476
477 fut = self.execute(b'ZPOPMIN', key, *args, encoding=encoding)
478 return fut
479
480 def zpopmax(self, key, count=None, *, encoding=_NOTSET):
481 """Removes and returns up to count members with the highest scores
482 in the sorted set stored at key.
483
484 :raises TypeError: if count is not int
485 """
486 if count is not None and not isinstance(count, int):
487 raise TypeError("count argument must be int")
488
489 args = []
490 if count is not None:
491 args.extend([count])
492
493 fut = self.execute(b'ZPOPMAX', key, *args, encoding=encoding)
494 return fut
495
426496
427497 def _encode_min_max(flag, min, max):
428498 if flag is SortedSetCommandsMixin.ZSET_EXCLUDE_MIN:
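
A minimal sketch (editor's addition, not part of the diff) exercising the new blocking pops and the ``incr`` mode of ``zadd`` added above, assuming a local Redis 5.0 server:

.. code:: python

    import asyncio
    import aioredis

    async def demo():
        redis = await aioredis.create_redis('redis://localhost')

        # ``incr=True`` switches ZADD into increment mode and the reply
        # is the new score of the single member (returned as bytes).
        new_score = await redis.zadd('queue', 1, 'job-1', incr=True)
        print(new_score)

        # BZPOPMIN blocks up to ``timeout`` seconds waiting for an element.
        popped = await redis.bzpopmin('queue', timeout=1)
        print(popped)  # e.g. [b'queue', b'job-1', b'1'], or None on timeout

        redis.close()
        await redis.wait_closed()

    asyncio.run(demo())
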
3232 """
3333 if messages is None:
3434 return []
35 return [(mid, fields_to_dict(values)) for mid, values in messages]
35
36 messages = (message for message in messages if message is not None)
37 return [
38 (mid, fields_to_dict(values))
39 for mid, values
40 in messages if values is not None
41 ]
3642
3743
3844 def parse_messages_by_stream(messages_by_stream):
7884 class StreamCommandsMixin:
7985 """Stream commands mixin
8086
81 Streams are under development in Redis and
82 not currently released.
87 Streams are available in Redis since v5.0
8388 """
8489
8590 def xadd(self, stream, fields, message_id=b'*', max_len=None,
127132 return wait_convert(fut, parse_messages_by_stream)
128133
129134 def xread_group(self, group_name, consumer_name, streams, timeout=0,
130 count=None, latest_ids=None):
135 count=None, latest_ids=None, no_ack=False):
131136 """Perform a blocking read on the given stream as part of a consumer group
132137
133138 :raises ValueError: if the length of streams and latest_ids do
134139 not match
135140 """
136 args = self._xread(streams, timeout, count, latest_ids)
141 args = self._xread(
142 streams, timeout, count, latest_ids, no_ack
143 )
137144 fut = self.execute(
138145 b'XREADGROUP', b'GROUP', group_name, consumer_name, *args
139146 )
140147 return wait_convert(fut, parse_messages_by_stream)
141148
142 def xgroup_create(self, stream, group_name, latest_id='$'):
149 def xgroup_create(self, stream, group_name, latest_id='$', mkstream=False):
143150 """Create a consumer group"""
144 fut = self.execute(b'XGROUP', b'CREATE', stream, group_name, latest_id)
151 args = [b'CREATE', stream, group_name, latest_id]
152 if mkstream:
153 args.append(b'MKSTREAM')
154 fut = self.execute(b'XGROUP', *args)
145155 return wait_ok(fut)
146156
147157 def xgroup_setid(self, stream, group_name, latest_id='$'):
200210 """Acknowledge a message for a given consumer group"""
201211 return self.execute(b'XACK', stream, group_name, id, *ids)
202212
213 def xdel(self, stream, id):
214 """Removes the specified entries(IDs) from a stream"""
215 return self.execute(b'XDEL', stream, id)
216
217 def xtrim(self, stream, max_len, exact_len=False):
218 """trims the stream to a given number of items, evicting older items"""
219 args = []
220 if exact_len:
221 args.extend((b'MAXLEN', max_len))
222 else:
223 args.extend((b'MAXLEN', b'~', max_len))
224 return self.execute(b'XTRIM', stream, *args)
225
226 def xlen(self, stream):
227 """Returns the number of entries inside a stream"""
228 return self.execute(b'XLEN', stream)
229
203230 def xinfo(self, stream):
204231 """Retrieve information about the given stream.
205232
228255 fut = self.execute(b'XINFO', b'HELP')
229256 return wait_convert(fut, lambda l: b'\n'.join(l))
230257
231 def _xread(self, streams, timeout=0, count=None, latest_ids=None):
258 def _xread(self, streams, timeout=0, count=None, latest_ids=None,
259 no_ack=False):
232260 """Wraps up common functionality between ``xread()``
233261 and ``xread_group()``
234262
245273 count_args = [b'COUNT', count] if count else []
246274 if timeout is None:
247275 block_args = []
276 elif not isinstance(timeout, int):
277 raise TypeError(
278 "timeout argument must be int, not {!r}".format(timeout))
248279 else:
249280 block_args = [b'BLOCK', timeout]
250 return block_args + count_args + [b'STREAMS'] + streams + latest_ids
281
282 noack_args = [b'NOACK'] if no_ack else []
283
284 return count_args + block_args + noack_args + [b'STREAMS'] + streams \
285 + latest_ids
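
A short sketch (editor's addition, not part of the diff) tying together the stream additions shown above: ``xgroup_create`` with ``mkstream``, ``xread_group`` with ``no_ack``, plus ``xdel`` and ``xtrim``. It assumes a local Redis 5.0 server:

.. code:: python

    import asyncio
    import aioredis
    from aioredis.errors import BusyGroupError

    async def demo():
        redis = await aioredis.create_redis('redis://localhost')

        # Create the group up front; MKSTREAM creates the stream if missing,
        # and a BUSYGROUP reply is ignored on re-runs.
        try:
            await redis.xgroup_create('mystream', 'mygroup', latest_id='$',
                                      mkstream=True)
        except BusyGroupError:
            pass

        message_id = await redis.xadd('mystream', {'event': 'ping'})

        # NOACK delivery: messages are not added to the pending entries list.
        messages = await redis.xread_group(
            'mygroup', 'consumer-1', ['mystream'],
            timeout=1000,  # BLOCK for up to ~1 second
            latest_ids=['>'], no_ack=True)
        for stream, mid, fields in messages:
            print(stream, mid, fields)

        # Remove the delivered entry, then trim the stream to ~1000 items.
        await redis.xdel('mystream', message_id)
        await redis.xtrim('mystream', 1000)

        redis.close()
        await redis.wait_closed()

    asyncio.run(demo())
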
0 from itertools import chain
1
02 from aioredis.util import wait_convert, wait_ok, _NOTSET
13
24
135137 """Get the values of all the given keys."""
136138 return self.execute(b'MGET', key, *keys, encoding=encoding)
137139
138 def mset(self, key, value, *pairs):
139 """Set multiple keys to multiple values.
140
141 :raises TypeError: if len of pairs is not event number
142 """
143 if len(pairs) % 2 != 0:
140 def mset(self, *args):
141 """Set multiple keys to multiple values or unpack dict to keys & values.
142
143 :raises TypeError: if len of args is not an even number
144 :raises TypeError: if len of args equals 1 and it is not a dict
145 """
146 data = args
147 if len(args) == 1:
148 if not isinstance(args[0], dict):
149 raise TypeError("if one arg it should be a dict")
150 data = chain.from_iterable(args[0].items())
151 elif len(args) % 2 != 0:
144152 raise TypeError("length of pairs must be even number")
145 fut = self.execute(b'MSET', key, value, *pairs)
153 fut = self.execute(b'MSET', *data)
146154 return wait_ok(fut)
147155
148156 def msetnx(self, key, value, *pairs):
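
A short sketch (editor's addition) of the two calling styles ``mset`` accepts with this change, flat key/value pairs or a single dict:

.. code:: python

    import asyncio
    import aioredis

    async def demo():
        redis = await aioredis.create_redis('redis://localhost')

        # Flat key/value pairs, as before.
        await redis.mset('k1', 'v1', 'k2', 'v2')

        # New form: a single dict is unpacked into key/value pairs.
        await redis.mset({'k3': 'v3', 'k4': 'v4'})

        print(await redis.mget('k1', 'k2', 'k3', 'k4', encoding='utf-8'))

        redis.close()
        await redis.wait_closed()

    asyncio.run(demo())
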
1010 from ..util import (
1111 wait_ok,
1212 _set_exception,
13 get_event_loop,
1314 )
1415
1516
6263 >>> await asyncio.gather(fut1, fut2)
6364 [1, 1]
6465 """
65 return MultiExec(self._pool_or_conn, self.__class__,
66 loop=self._pool_or_conn._loop)
66 return MultiExec(self._pool_or_conn, self.__class__)
6767
6868 def pipeline(self):
6969 """Returns :class:`Pipeline` object to execute bulk of commands.
8989 >>> await asyncio.gather(fut1, fut2)
9090 [2, 2]
9191 """
92 return Pipeline(self._pool_or_conn, self.__class__,
93 loop=self._pool_or_conn._loop)
92 return Pipeline(self._pool_or_conn, self.__class__)
9493
9594
9695 class _RedisBuffer:
9796
9897 def __init__(self, pipeline, *, loop=None):
99 if loop is None:
100 loop = asyncio.get_event_loop()
98 # TODO: deprecation note
99 # if loop is None:
100 # loop = asyncio.get_event_loop()
101101 self._pipeline = pipeline
102 self._loop = loop
103102
104103 def execute(self, cmd, *args, **kw):
105 fut = self._loop.create_future()
104 fut = get_event_loop().create_future()
106105 self._pipeline.append((fut, cmd, args, kw))
107106 return fut
108107
128127
129128 def __init__(self, pool_or_connection, commands_factory=lambda conn: conn,
130129 *, loop=None):
131 if loop is None:
132 loop = asyncio.get_event_loop()
130 # TODO: deprecation note
131 # if loop is None:
132 # loop = asyncio.get_event_loop()
133133 self._pool_or_conn = pool_or_connection
134 self._loop = loop
135134 self._pipeline = []
136135 self._results = []
137 self._buffer = _RedisBuffer(self._pipeline, loop=loop)
136 self._buffer = _RedisBuffer(self._pipeline)
138137 self._redis = commands_factory(self._buffer)
139138 self._done = False
140139
146145 @functools.wraps(attr)
147146 def wrapper(*args, **kw):
148147 try:
149 task = asyncio.ensure_future(attr(*args, **kw),
150 loop=self._loop)
148 task = asyncio.ensure_future(attr(*args, **kw))
151149 except Exception as exc:
152 task = self._loop.create_future()
150 task = get_event_loop().create_future()
153151 task.set_exception(exc)
154152 self._results.append(task)
155153 return task
182180
183181 async def _do_execute(self, conn, *, return_exceptions=False):
184182 await asyncio.gather(*self._send_pipeline(conn),
185 loop=self._loop,
186183 return_exceptions=True)
187184 return await self._gather_result(return_exceptions)
188185
264261 multi = conn.execute('MULTI')
265262 coros = list(self._send_pipeline(conn))
266263 exec_ = conn.execute('EXEC')
267 gather = asyncio.gather(multi, *coros, loop=self._loop,
264 gather = asyncio.gather(multi, *coros,
268265 return_exceptions=True)
269266 last_error = None
270267 try:
271 await asyncio.shield(gather, loop=self._loop)
268 await asyncio.shield(gather)
272269 except asyncio.CancelledError:
273270 await gather
274271 except Exception as err:
00 import types
11 import asyncio
22 import socket
3 import warnings
4 import sys
5
36 from functools import partial
47 from collections import deque
58 from contextlib import contextmanager
1316 coerced_keys_dict,
1417 decode,
1518 parse_url,
19 get_event_loop,
1620 )
1721 from .parser import Reader
1822 from .stream import open_connection, open_unix_connection
7579 """
7680 assert isinstance(address, (tuple, list, str)), "tuple or str expected"
7781 if isinstance(address, str):
78 logger.debug("Parsing Redis URI %r", address)
7982 address, options = parse_url(address)
83 logger.debug("Parsed Redis URI %r", address)
8084 db = options.setdefault('db', db)
8185 password = options.setdefault('password', password)
8286 encoding = options.setdefault('encoding', encoding)
96100 else:
97101 cls = RedisConnection
98102
99 if loop is None:
100 loop = asyncio.get_event_loop()
103 if loop is not None and sys.version_info >= (3, 8, 0):
104 warnings.warn("The loop argument is deprecated",
105 DeprecationWarning)
101106
102107 if isinstance(address, (list, tuple)):
103108 host, port = address
104109 logger.debug("Creating tcp connection to %r", address)
105110 reader, writer = await asyncio.wait_for(open_connection(
106 host, port, limit=MAX_CHUNK_SIZE, ssl=ssl, loop=loop),
107 timeout, loop=loop)
111 host, port, limit=MAX_CHUNK_SIZE, ssl=ssl),
112 timeout)
108113 sock = writer.transport.get_extra_info('socket')
109114 if sock is not None:
110115 sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
113118 else:
114119 logger.debug("Creating unix connection to %r", address)
115120 reader, writer = await asyncio.wait_for(open_unix_connection(
116 address, ssl=ssl, limit=MAX_CHUNK_SIZE, loop=loop),
117 timeout, loop=loop)
121 address, ssl=ssl, limit=MAX_CHUNK_SIZE),
122 timeout)
118123 sock = writer.transport.get_extra_info('socket')
119124 if sock is not None:
120125 address = sock.getpeername()
121126
122127 conn = cls(reader, writer, encoding=encoding,
123 address=address, parser=parser,
124 loop=loop)
128 address=address, parser=parser)
125129
126130 try:
127131 if password is not None:
140144
141145 def __init__(self, reader, writer, *, address, encoding=None,
142146 parser=None, loop=None):
143 if loop is None:
144 loop = asyncio.get_event_loop()
147 if loop is not None and sys.version_info >= (3, 8):
148 warnings.warn("The loop argument is deprecated",
149 DeprecationWarning)
145150 if parser is None:
146151 parser = Reader
147152 assert callable(parser), (
149154 self._reader = reader
150155 self._writer = writer
151156 self._address = address
152 self._loop = loop
153157 self._waiters = deque()
154158 self._reader.set_parser(
155159 parser(protocolError=ProtocolError, replyError=ReplyError)
156160 )
157 self._reader_task = asyncio.ensure_future(self._read_data(),
158 loop=self._loop)
161 self._reader_task = asyncio.ensure_future(self._read_data())
159162 self._close_msg = None
160163 self._db = 0
161164 self._closing = False
162165 self._closed = False
163 self._close_waiter = loop.create_future()
164 self._reader_task.add_done_callback(self._close_waiter.set_result)
166 self._close_state = asyncio.Event()
167 self._reader_task.add_done_callback(lambda x: self._close_state.set())
165168 self._in_transaction = None
166169 self._transaction_error = None # XXX: never used?
167170 self._in_pubsub = 0
211214 else:
212215 self._process_data(obj)
213216 self._closing = True
214 self._loop.call_soon(self._do_close, last_error)
217 get_event_loop().call_soon(self._do_close, last_error)
215218
216219 def _process_data(self, obj):
217220 """Processes command results."""
335338 cb = self._start_transaction
336339 elif command in ('EXEC', b'EXEC'):
337340 cb = partial(self._end_transaction, discard=False)
341 encoding = None
338342 elif command in ('DISCARD', b'DISCARD'):
339343 cb = partial(self._end_transaction, discard=True)
340344 else:
341345 cb = None
342346 if encoding is _NOTSET:
343347 encoding = self._encoding
344 fut = self._loop.create_future()
348 fut = get_event_loop().create_future()
345349 if self._pipeline_buffer is None:
346350 self._writer.write(encode_command(command, *args))
347351 else:
365369 if not len(channels):
366370 raise TypeError("No channels/patterns supplied")
367371 is_pattern = len(command) in (10, 12)
368 mkchannel = partial(Channel, is_pattern=is_pattern, loop=self._loop)
372 mkchannel = partial(Channel, is_pattern=is_pattern)
369373 channels = [ch if isinstance(ch, AbcChannel) else mkchannel(ch)
370374 for ch in channels]
371375 if not all(ch.is_pattern == is_pattern for ch in channels):
374378 cmd = encode_command(command, *(ch.name for ch in channels))
375379 res = []
376380 for ch in channels:
377 fut = self._loop.create_future()
381 fut = get_event_loop().create_future()
378382 res.append(fut)
379383 cb = partial(self._update_pubsub, ch=ch)
380384 self._waiters.append((fut, None, cb))
382386 self._writer.write(cmd)
383387 else:
384388 self._pipeline_buffer.extend(cmd)
385 return asyncio.gather(*res, loop=self._loop)
389 return asyncio.gather(*res)
386390
387391 def close(self):
388392 """Close connection."""
425429 closed = self._closing or self._closed
426430 if not closed and self._reader and self._reader.at_eof():
427431 self._closing = closed = True
428 self._loop.call_soon(self._do_close, None)
432 get_event_loop().call_soon(self._do_close, None)
429433 return closed
430434
431435 async def wait_closed(self):
432436 """Coroutine waiting until connection is closed."""
433 await asyncio.shield(self._close_waiter, loop=self._loop)
437 await self._close_state.wait()
434438
435439 @property
436440 def db(self):
0 from typing import Optional, Sequence # noqa
1
02 __all__ = [
13 'RedisError',
24 'ProtocolError',
2729 class ReplyError(RedisError):
2830 """Raised for redis error replies (-ERR)."""
2931
30 MATCH_REPLY = None
32 MATCH_REPLY = None # type: Optional[Sequence[str]]
3133
3234 def __new__(cls, msg, *args):
3335 for klass in cls.__subclasses__():
4648 class AuthError(ReplyError):
4749 """Raised when authentication errors occurs."""
4850
49 MATCH_REPLY = ("NOAUTH ", "ERR invalid password")
51 MATCH_REPLY = (
52 "NOAUTH ",
53 "ERR invalid password",
54 "ERR Client sent AUTH, but no password is set",
55 )
56
57
58 class BusyGroupError(ReplyError):
59 """Raised if Consumer Group name already exists."""
60
61 MATCH_REPLY = "BUSYGROUP Consumer Group name already exists"
5062
5163
5264 class PipelineError(RedisError):
0 import asyncio
1 import sys
2
03 from asyncio.locks import Lock as _Lock
1 from asyncio import coroutine
2 from asyncio import futures
34
45 # Fixes an issue with all Python versions that leaves pending waiters
56 # without being awakened when the first waiter is canceled.
1011
1112 class Lock(_Lock):
1213
13 @coroutine
14 def acquire(self):
15 """Acquire a lock.
16 This method blocks until the lock is unlocked, then sets it to
17 locked and returns True.
18 """
19 if not self._locked and all(w.cancelled() for w in self._waiters):
20 self._locked = True
21 return True
14 if sys.version_info < (3, 7, 0):
15 async def acquire(self):
16 """Acquire a lock.
17 This method blocks until the lock is unlocked, then sets it to
18 locked and returns True.
19 """
20 if not self._locked and all(w.cancelled() for w in self._waiters):
21 self._locked = True
22 return True
2223
23 fut = self._loop.create_future()
24 fut = self._loop.create_future()
2425
25 self._waiters.append(fut)
26 try:
27 yield from fut
28 self._locked = True
29 return True
30 except futures.CancelledError:
31 if not self._locked: # pragma: no cover
32 self._wake_up_first()
33 raise
34 finally:
35 self._waiters.remove(fut)
26 self._waiters.append(fut)
27 try:
28 await fut
29 self._locked = True
30 return True
31 except asyncio.CancelledError:
32 if not self._locked: # pragma: no cover
33 self._wake_up_first()
34 raise
35 finally:
36 self._waiters.remove(fut)
3637
37 def _wake_up_first(self):
38 """Wake up the first waiter who isn't cancelled."""
39 for fut in self._waiters:
40 if not fut.done():
41 fut.set_result(True)
42 break
38 def _wake_up_first(self):
39 """Wake up the first waiter who isn't cancelled."""
40 for fut in self._waiters:
41 if not fut.done():
42 fut.set_result(True)
43 break
00 from .errors import ProtocolError, ReplyError
1 from typing import Optional, Generator, Callable, Iterator # noqa
12
23 __all__ = [
34 'Reader', 'PyReader',
89 """Pure-Python Redis protocol parser that follows hiredis.Reader
910 interface (except setmaxbuf/getmaxbuf).
1011 """
11 def __init__(self, protocolError=ProtocolError, replyError=ReplyError,
12 encoding=None):
12 def __init__(self, protocolError: Callable = ProtocolError,
13 replyError: Callable = ReplyError,
14 encoding: Optional[str] = None):
1315 if not callable(protocolError):
1416 raise TypeError("Expected a callable")
1517 if not callable(replyError):
1618 raise TypeError("Expected a callable")
1719 self._parser = Parser(protocolError, replyError, encoding)
1820
19 def feed(self, data, o=0, l=-1):
21 def feed(self, data, o: int = 0, l: int = -1):
2022 """Feed data to parser."""
2123 if l == -1:
2224 l = len(data) - o
3436 """
3537 return self._parser.parse_one()
3638
37 def setmaxbuf(self, size):
39 def setmaxbuf(self, size: Optional[int]) -> None:
3840 """No-op."""
3941 pass
4042
41 def getmaxbuf(self):
43 def getmaxbuf(self) -> int:
4244 """No-op."""
4345 return 0
4446
4547
4648 class Parser:
47 def __init__(self, protocolError, replyError, encoding):
48 self.buf = bytearray()
49 self.pos = 0
50 self.protocolError = protocolError
51 self.replyError = replyError
52 self.encoding = encoding
49 def __init__(self, protocolError: Callable,
50 replyError: Callable, encoding: Optional[str]):
51
52 self.buf = bytearray() # type: bytearray
53 self.pos = 0 # type: int
54 self.protocolError = protocolError # type: Callable
55 self.replyError = replyError # type: Callable
56 self.encoding = encoding # type: Optional[str]
5357 self._err = None
54 self._gen = None
58 self._gen = None # type: Optional[Generator]
5559
56 def waitsome(self, size):
60 def waitsome(self, size: int) -> Iterator[bool]:
5761 # keep yielding false until at least `size` bytes added to buf.
5862 while len(self.buf) < self.pos+size:
5963 yield False
6064
61 def waitany(self):
65 def waitany(self) -> Iterator[bool]:
6266 yield from self.waitsome(len(self.buf) + 1)
6367
6468 def readone(self):
65 if not self.buf[self.pos:1]:
69 if not self.buf[self.pos:self.pos + 1]:
6670 yield from self.waitany()
67 val = self.buf[self.pos:1]
71 val = self.buf[self.pos:self.pos + 1]
6872 self.pos += 1
6973 return val
7074
71 def readline(self, size=None):
75 def readline(self, size: Optional[int] = None):
7276 if size is not None:
7377 if len(self.buf) < size + 2 + self.pos:
7478 yield from self.waitsome(size + 2)
9599 self._err = self.protocolError(msg)
96100 return self._err
97101
98 def parse(self, is_bulk=False):
102 def parse(self, is_bulk: bool = False):
99103 if self._err is not None:
100104 raise self._err
101105 ctl = yield from self.readone()
00 import asyncio
11 import collections
22 import types
3 import warnings
4 import sys
35
46 from .connection import create_connection, _PUBSUB_COMMANDS
57 from .log import logger
6 from .util import parse_url
8 from .util import parse_url, CloseEvent
79 from .errors import PoolClosedError
810 from .abc import AbcPool
911 from .locks import Lock
5355 loop=loop)
5456 try:
5557 await pool._fill_free(override_min=False)
56 except Exception as ex:
58 except Exception:
5759 pool.close()
5860 await pool.wait_closed()
5961 raise
7577 "maxsize must be int > 0", maxsize, type(maxsize))
7678 assert minsize <= maxsize, (
7779 "Invalid pool min/max sizes", minsize, maxsize)
78 if loop is None:
79 loop = asyncio.get_event_loop()
80 if loop is not None and sys.version_info >= (3, 8):
81 warnings.warn("The loop argument is deprecated",
82 DeprecationWarning)
8083 self._address = address
8184 self._db = db
8285 self._password = password
8588 self._parser_class = parser
8689 self._minsize = minsize
8790 self._create_connection_timeout = create_connection_timeout
88 self._loop = loop
8991 self._pool = collections.deque(maxlen=maxsize)
9092 self._used = set()
9193 self._acquiring = 0
92 self._cond = asyncio.Condition(lock=Lock(loop=loop), loop=loop)
93 self._close_state = asyncio.Event(loop=loop)
94 self._close_waiter = None
94 self._cond = asyncio.Condition(lock=Lock())
95 self._close_state = CloseEvent(self._do_close)
9596 self._pubsub_conn = None
9697 self._connection_cls = connection_cls
9798
138139 conn = self._pool.popleft()
139140 conn.close()
140141 waiters.append(conn.wait_closed())
141 await asyncio.gather(*waiters, loop=self._loop)
142 await asyncio.gather(*waiters)
142143
143144 async def _do_close(self):
144 await self._close_state.wait()
145145 async with self._cond:
146146 assert not self._acquiring, self._acquiring
147147 waiters = []
152152 for conn in self._used:
153153 conn.close()
154154 waiters.append(conn.wait_closed())
155 await asyncio.gather(*waiters, loop=self._loop)
155 await asyncio.gather(*waiters)
156156 # TODO: close _pubsub_conn connection
157157 logger.debug("Closed %d connection(s)", len(waiters))
158158
160160 """Close all free and in-progress connections and mark pool as closed.
161161 """
162162 if not self._close_state.is_set():
163 self._close_waiter = asyncio.ensure_future(self._do_close(),
164 loop=self._loop)
165163 self._close_state.set()
166164
167165 @property
172170 async def wait_closed(self):
173171 """Wait until pool gets closed."""
174172 await self._close_state.wait()
175 assert self._close_waiter is not None
176 await asyncio.shield(self._close_waiter, loop=self._loop)
177173
178174 @property
179175 def db(self):
286282 async with self._cond:
287283 for i in range(self.freesize):
288284 res = res and (await self._pool[i].select(db))
289 else:
290 self._db = db
285 self._db = db
291286 return res
292287
293288 async def auth(self, password):
367362 else:
368363 conn.close()
369364 # FIXME: check event loop is not closed
370 asyncio.ensure_future(self._wakeup(), loop=self._loop)
365 asyncio.ensure_future(self._wakeup())
371366
372367 def _drop_closed(self):
373368 for i in range(self.freesize):
415410 parser=self._parser_class,
416411 timeout=self._create_connection_timeout,
417412 connection_cls=self._connection_cls,
418 loop=self._loop)
413 )
419414
420415 async def _wakeup(self, closing_conn=None):
421416 async with self._cond:
11 import json
22 import types
33 import collections
4 import warnings
5 import sys
46
57 from .abc import AbcChannel
68 from .util import _converters # , _set_result
2224 """Wrapper around asyncio.Queue."""
2325
2426 def __init__(self, name, is_pattern, loop=None):
25 self._queue = ClosableQueue(loop=loop)
27 if loop is not None and sys.version_info >= (3, 8):
28 warnings.warn("The loop argument is deprecated",
29 DeprecationWarning)
30 self._queue = ClosableQueue()
2631 self._name = _converters[type(name)](name)
2732 self._is_pattern = is_pattern
2833
164169
165170 >>> from aioredis.pubsub import Receiver
166171 >>> from aioredis.abc import AbcChannel
167 >>> mpsc = Receiver(loop=loop)
172 >>> mpsc = Receiver()
168173 >>> async def reader(mpsc):
169174 ... async for channel, msg in mpsc.iter():
170175 ... assert isinstance(channel, AbcChannel)
187192 def __init__(self, loop=None, on_close=None):
188193 assert on_close is None or callable(on_close), (
189194 "on_close must be None or callable", on_close)
190 if loop is None:
191 loop = asyncio.get_event_loop()
195 if loop is not None:
196 warnings.warn("The loop argument is deprecated",
197 DeprecationWarning)
192198 if on_close is None:
193199 on_close = self.check_stop
194 self._queue = ClosableQueue(loop=loop)
200 self._queue = ClosableQueue()
195201 self._refs = {}
196202 self._on_close = on_close
197203
395401
396402 class ClosableQueue:
397403
398 def __init__(self, *, loop=None):
404 def __init__(self):
399405 self._queue = collections.deque()
400 self._event = asyncio.Event(loop=loop)
406 self._event = asyncio.Event()
401407 self._closed = False
402408
403409 async def wait(self):
1414 MasterReplyError,
1515 SlaveReplyError,
1616 )
17 from ..util import CloseEvent
1718
1819
1920 # Address marker for discovery
2829 """Create SentinelPool."""
2930 # FIXME: revise default timeout value
3031 assert isinstance(sentinels, (list, tuple)), sentinels
31 if loop is None:
32 loop = asyncio.get_event_loop()
32 # TODO: deprecation note
33 # if loop is None:
34 # loop = asyncio.get_event_loop()
3335
3436 pool = SentinelPool(sentinels, db=db,
3537 password=password,
5456 def __init__(self, sentinels, *, db=None, password=None, ssl=None,
5557 encoding=None, parser=None, minsize, maxsize, timeout,
5658 loop=None):
57 if loop is None:
58 loop = asyncio.get_event_loop()
59 # TODO: deprecation note
60 # if loop is None:
61 # loop = asyncio.get_event_loop()
5962 # TODO: add connection/discover timeouts;
6063 # and what to do if no master is found:
6164 # (raise error or try forever or try until timeout)
6265
6366 # XXX: _sentinels is unordered
6467 self._sentinels = set(sentinels)
65 self._loop = loop
6668 self._timeout = timeout
6769 self._pools = [] # list of sentinel pools
6870 self._masters = {}
7476 self._redis_encoding = encoding
7577 self._redis_minsize = minsize
7678 self._redis_maxsize = maxsize
77 self._close_state = asyncio.Event(loop=loop)
79 self._close_state = CloseEvent(self._do_close)
7880 self._close_waiter = None
79 self._monitor = monitor = Receiver(loop=loop)
81 self._monitor = monitor = Receiver()
8082
8183 async def echo_events():
8284 try:
8385 while await monitor.wait_message():
84 ch, (ev, data) = await monitor.get(encoding='utf-8')
86 _, (ev, data) = await monitor.get(encoding='utf-8')
8587 ev = ev.decode('utf-8')
8688 _logger.debug("%s: %s", ev, data)
8789 if ev in ('+odown',):
101103 # etc...
102104 except asyncio.CancelledError:
103105 pass
104 self._monitor_task = asyncio.ensure_future(echo_events(), loop=loop)
106 self._monitor_task = asyncio.ensure_future(echo_events())
105107
106108 @property
107109 def discover_timeout(self):
123125 maxsize=self._redis_maxsize,
124126 ssl=self._redis_ssl,
125127 parser=self._parser_class,
126 loop=self._loop)
128 )
127129 return self._masters[service]
128130
129131 def slave_for(self, service):
139141 maxsize=self._redis_maxsize,
140142 ssl=self._redis_ssl,
141143 parser=self._parser_class,
142 loop=self._loop)
144 )
143145 return self._slaves[service]
144146
145147 def execute(self, command, *args, **kwargs):
161163 def close(self):
162164 """Close all controlled connections (both sentinel and redis)."""
163165 if not self._close_state.is_set():
164 self._close_waiter = asyncio.ensure_future(self._do_close(),
165 loop=self._loop)
166166 self._close_state.set()
167167
168168 async def _do_close(self):
169 await self._close_state.wait()
170169 # TODO: lock
171170 tasks = []
172171 task, self._monitor_task = self._monitor_task, None
184183 _, pool = self._slaves.popitem()
185184 pool.close()
186185 tasks.append(pool.wait_closed())
187 await asyncio.gather(*tasks, loop=self._loop)
186 await asyncio.gather(*tasks)
188187
189188 async def wait_closed(self):
190189 """Wait until pool gets closed."""
191190 await self._close_state.wait()
192 assert self._close_waiter is not None
193 await asyncio.shield(self._close_waiter, loop=self._loop)
194191
195192 async def discover(self, timeout=None): # TODO: better name?
196193 """Discover sentinels and all monitored services within given timeout.
209206 pools = []
210207 for addr in self._sentinels: # iterate over unordered set
211208 tasks.append(self._connect_sentinel(addr, timeout, pools))
212 done, pending = await asyncio.wait(tasks, loop=self._loop,
209 done, pending = await asyncio.wait(tasks,
213210 return_when=ALL_COMPLETED)
214211 assert not pending, ("Expected all tasks to complete", done, pending)
215212
235232 connections pool or exception.
236233 """
237234 try:
238 with async_timeout(timeout, loop=self._loop):
235 with async_timeout(timeout):
239236 pool = await create_pool(
240237 address, minsize=1, maxsize=2,
241238 parser=self._parser_class,
242 loop=self._loop)
239 )
243240 pools.append(pool)
244241 return pool
245242 except asyncio.TimeoutError as err:
267264 pools = self._pools[:]
268265 for sentinel in pools:
269266 try:
270 with async_timeout(timeout, loop=self._loop):
267 with async_timeout(timeout):
271268 address = await self._get_masters_address(
272269 sentinel, service)
273270
274271 pool = self._masters[service]
275 with async_timeout(timeout, loop=self._loop), \
272 with async_timeout(timeout), \
276273 contextlib.ExitStack() as stack:
277274 conn = await pool._create_new_connection(address)
278275 stack.callback(conn.close)
290287 except DiscoverError as err:
291288 sentinel_logger.debug("DiscoverError(%r, %s): %r",
292289 sentinel, service, err)
293 await asyncio.sleep(idle_timeout, loop=self._loop)
290 await asyncio.sleep(idle_timeout)
294291 continue
295292 except RedisError as err:
296293 raise MasterReplyError("Service {} error".format(service), err)
297294 except Exception:
298295 # TODO: clear (drop) connections to schedule reconnect
299 await asyncio.sleep(idle_timeout, loop=self._loop)
300 continue
301 else:
302 raise MasterNotFoundError("No master found for {}".format(service))
296 await asyncio.sleep(idle_timeout)
297 continue
298 # Otherwise
299 raise MasterNotFoundError("No master found for {}".format(service))
303300
304301 async def discover_slave(self, service, timeout, **kwargs):
305302 """Perform Slave discovery for specified service."""
309306 pools = self._pools[:]
310307 for sentinel in pools:
311308 try:
312 with async_timeout(timeout, loop=self._loop):
309 with async_timeout(timeout):
313310 address = await self._get_slave_address(
314311 sentinel, service) # add **kwargs
315312 pool = self._slaves[service]
316 with async_timeout(timeout, loop=self._loop), \
313 with async_timeout(timeout), \
317314 contextlib.ExitStack() as stack:
318315 conn = await pool._create_new_connection(address)
319316 stack.callback(conn.close)
325322 except asyncio.TimeoutError:
326323 continue
327324 except DiscoverError:
328 await asyncio.sleep(idle_timeout, loop=self._loop)
325 await asyncio.sleep(idle_timeout)
329326 continue
330327 except RedisError as err:
331328 raise SlaveReplyError("Service {} error".format(service), err)
332329 except Exception:
333 await asyncio.sleep(idle_timeout, loop=self._loop)
330 await asyncio.sleep(idle_timeout)
334331 continue
335332 raise SlaveNotFoundError("No slave found for {}".format(service))
336333
361358 if {'s_down', 'o_down', 'disconnected'} & flags:
362359 continue
363360 return address
364 else:
365 raise BadState(state) # XXX: only last state
361 raise BadState() # XXX: only last state
366362
367363 async def _verify_service_role(self, conn, role):
368364 res = await conn.execute(b'role', encoding='utf-8')
00 import asyncio
1 import warnings
2 import sys
3
4 from .util import get_event_loop
15
26 __all__ = [
37 'open_connection',
1014 limit, loop=None,
1115 parser=None, **kwds):
1216 # XXX: parser is not used (yet)
13 if loop is None:
14 loop = asyncio.get_event_loop()
15 reader = StreamReader(limit=limit, loop=loop)
16 protocol = asyncio.StreamReaderProtocol(reader, loop=loop)
17 transport, _ = await loop.create_connection(
17 if loop is not None and sys.version_info >= (3, 8):
18 warnings.warn("The loop argument is deprecated",
19 DeprecationWarning)
20 reader = StreamReader(limit=limit)
21 protocol = asyncio.StreamReaderProtocol(reader)
22 transport, _ = await get_event_loop().create_connection(
1823 lambda: protocol, host, port, **kwds)
19 writer = asyncio.StreamWriter(transport, protocol, reader, loop)
24 writer = asyncio.StreamWriter(transport, protocol, reader,
25 loop=get_event_loop())
2026 return reader, writer
2127
2228
2430 limit, loop=None,
2531 parser=None, **kwds):
2632 # XXX: parser is not used (yet)
27 if loop is None:
28 loop = asyncio.get_event_loop()
29 reader = StreamReader(limit=limit, loop=loop)
30 protocol = asyncio.StreamReaderProtocol(reader, loop=loop)
31 transport, _ = await loop.create_unix_connection(
33 if loop is not None and sys.version_info >= (3, 8):
34 warnings.warn("The loop argument is deprecated",
35 DeprecationWarning)
36 reader = StreamReader(limit=limit)
37 protocol = asyncio.StreamReaderProtocol(reader)
38 transport, _ = await get_event_loop().create_unix_connection(
3239 lambda: protocol, address, **kwds)
33 writer = asyncio.StreamWriter(transport, protocol, reader, loop)
40 writer = asyncio.StreamWriter(transport, protocol, reader,
41 loop=get_event_loop())
3442 return reader, writer
3543
3644
0 import asyncio
1 import sys
2
03 from urllib.parse import urlparse, parse_qsl
14
25 from .log import logger
36
47 _NOTSET = object()
58
9 IS_PY38 = sys.version_info >= (3, 8)
610
711 # NOTE: never put here anything else;
812 # just this basic types
206210 if 'timeout' in params:
207211 options['timeout'] = float(params['timeout'])
208212 return options
213
214
215 class CloseEvent:
216 def __init__(self, on_close):
217 self._close_init = asyncio.Event()
218 self._close_done = asyncio.Event()
219 self._on_close = on_close
220
221 async def wait(self):
222 await self._close_init.wait()
223 await self._close_done.wait()
224
225 def is_set(self):
226 return self._close_done.is_set() or self._close_init.is_set()
227
228 def set(self):
229 if self._close_init.is_set():
230 return
231
232 task = asyncio.ensure_future(self._on_close())
233 task.add_done_callback(self._cleanup)
234 self._close_init.set()
235
236 def _cleanup(self, task):
237 self._on_close = None
238 self._close_done.set()
239
240
241 get_event_loop = getattr(asyncio, 'get_running_loop', asyncio.get_event_loop)
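
A small illustration (editor's addition) of how the new ``CloseEvent`` helper behaves: ``set()`` schedules the ``on_close`` coroutine exactly once, and ``wait()`` returns only after that coroutine has finished, replacing the old ``_close_waiter`` / ``asyncio.shield`` pattern in the pools. The ``do_close`` coroutine below is a hypothetical stand-in for the pool's ``_do_close()``:

.. code:: python

    import asyncio
    from aioredis.util import CloseEvent

    async def main():
        async def do_close():
            # stand-in for the pool's _do_close() cleanup coroutine
            await asyncio.sleep(0.1)
            print('resources released')

        ev = CloseEvent(do_close)
        ev.set()            # schedules do_close() exactly once
        ev.set()            # subsequent calls are no-ops
        await ev.wait()     # returns after do_close() has completed
        print(ev.is_set())  # True

    asyncio.run(main())
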
00 Metadata-Version: 1.1
11 Name: aioredis
2 Version: 1.2.0
2 Version: 1.3.1
33 Summary: asyncio (PEP 3156) Redis support
44 Home-page: https://github.com/aio-libs/aioredis
55 Author: Alexey Popravka
3434 Sentinel support Yes
3535 Redis Cluster support WIP
3636 Trollius (python 2.7) No
37 Tested CPython versions `3.5, 3.6 3.7 <travis_>`_ [2]_
38 Tested PyPy3 versions `5.9.0 <travis_>`_
39 Tested for Redis server `2.6, 2.8, 3.0, 3.2, 4.0 <travis_>`_
37 Tested CPython versions `3.5.3, 3.6, 3.7 <travis_>`_ [1]_
38 Tested PyPy3 versions `pypy3.5-7.0 pypy3.6-7.1.1 <travis_>`_
39 Tested for Redis server `2.6, 2.8, 3.0, 3.2, 4.0 5.0 <travis_>`_
4040 Support for dev Redis server through low-level API
4141 ================================ ==============================
4242
43
44 .. [2] For Python 3.3, 3.4 support use aioredis v0.3.
43 .. [1] For Python 3.3, 3.4 support use aioredis v0.3.
4544
4645 Documentation
4746 -------------
4847
4948 http://aioredis.readthedocs.io/
5049
51 Usage examples
52 --------------
53
54 Simple low-level interface:
50 Usage example
51 -------------
52
53 Simple high-level interface with connections pool:
5554
5655 .. code:: python
5756
5857 import asyncio
5958 import aioredis
6059
61 loop = asyncio.get_event_loop()
62
6360 async def go():
64 conn = await aioredis.create_connection(
65 'redis://localhost', loop=loop)
66 await conn.execute('set', 'my-key', 'value')
67 val = await conn.execute('get', 'my-key')
68 print(val)
69 conn.close()
70 await conn.wait_closed()
71 loop.run_until_complete(go())
72 # will print 'value'
73
74 Simple high-level interface:
75
76 .. code:: python
77
78 import asyncio
79 import aioredis
80
81 loop = asyncio.get_event_loop()
82
83 async def go():
84 redis = await aioredis.create_redis(
85 'redis://localhost', loop=loop)
61 redis = await aioredis.create_redis_pool(
62 'redis://localhost')
8663 await redis.set('my-key', 'value')
87 val = await redis.get('my-key')
64 val = await redis.get('my-key', encoding='utf-8')
8865 print(val)
8966 redis.close()
9067 await redis.wait_closed()
91 loop.run_until_complete(go())
92 # will print 'value'
93
94 Connections pool:
95
96 .. code:: python
97
98 import asyncio
99 import aioredis
100
101 loop = asyncio.get_event_loop()
102
103 async def go():
104 pool = await aioredis.create_pool(
105 'redis://localhost',
106 minsize=5, maxsize=10,
107 loop=loop)
108 await pool.execute('set', 'my-key', 'value')
109 print(await pool.execute('get', 'my-key'))
110 # graceful shutdown
111 pool.close()
112 await pool.wait_closed()
113
114 loop.run_until_complete(go())
115
116 Simple high-level interface with connections pool:
117
118 .. code:: python
119
120 import asyncio
121 import aioredis
122
123 loop = asyncio.get_event_loop()
124
125 async def go():
126 redis = await aioredis.create_redis_pool(
127 'redis://localhost',
128 minsize=5, maxsize=10,
129 loop=loop)
130 await redis.set('my-key', 'value')
131 val = await redis.get('my-key')
132 print(val)
133 redis.close()
134 await redis.wait_closed()
135 loop.run_until_complete(go())
68
69 asyncio.run(go())
13670 # will print 'value'
13771
13872 Requirements
170104
171105 Changes
172106 -------
107
108 .. towncrier release notes start
109
110 1.3.1 (2019-12-02)
111 ^^^^^^^^^^^^^^^^^^
112 Bugfixes
113 ~~~~~~~~
114
115 - Fix transaction data decoding
116 (see `#657 <https://github.com/aio-libs/aioredis/issues/657>`_);
117 - Fix duplicate calls to ``pool.wait_closed()`` upon ``create_pool()`` exception.
118 (see `#671 <https://github.com/aio-libs/aioredis/issues/671>`_);
119
120 Deprecations and Removals
121 ~~~~~~~~~~~~~~~~~~~~~~~~~
122
123 - Drop explicit loop requirement in API.
124 Deprecate ``loop`` argument.
125 Throw warning in Python 3.8+ if explicit ``loop`` is passed to methods.
126 (see `#666 <https://github.com/aio-libs/aioredis/issues/666>`_);
127
128 Misc
129 ~~~~
130
131 - `#643 <https://github.com/aio-libs/aioredis/issues/643>`_,
132 `#646 <https://github.com/aio-libs/aioredis/issues/646>`_,
133 `#648 <https://github.com/aio-libs/aioredis/issues/648>`_;
134
135
136 1.3.0 (2019-09-24)
137 ^^^^^^^^^^^^^^^^^^
138 Features
139 ~~~~~~~~
140
141 - Added ``xdel`` and ``xtrim`` methods, which were missing in ``commands/streams.py``, and added unit tests for them
142 (see `#438 <https://github.com/aio-libs/aioredis/issues/438>`_);
143 - Add ``count`` argument to ``spop`` command
144 (see `#485 <https://github.com/aio-libs/aioredis/issues/485>`_);
145 - Add support for ``zpopmax`` and ``zpopmin`` redis commands
146 (see `#550 <https://github.com/aio-libs/aioredis/issues/550>`_);
147 - Add ``towncrier``: change notes are now stored in ``CHANGES.txt``
148 (see `#576 <https://github.com/aio-libs/aioredis/issues/576>`_);
149 - Type hints for the library
150 (see `#584 <https://github.com/aio-libs/aioredis/issues/584>`_);
151 - A few additions to the sorted set commands:
152
153 - the blocking pop commands: ``BZPOPMAX`` and ``BZPOPMIN``
154
155 - the ``CH`` and ``INCR`` options of the ``ZADD`` command
156
157 (see `#618 <https://github.com/aio-libs/aioredis/issues/618>`_);
158 - Added ``no_ack`` parameter to ``xread_group`` streams method in ``commands/streams.py``
159 (see `#625 <https://github.com/aio-libs/aioredis/issues/625>`_);
160
161 Bugfixes
162 ~~~~~~~~
163
164 - Fix for sensitive logging
165 (see `#459 <https://github.com/aio-libs/aioredis/issues/459>`_);
166 - Fix slow memory leak in ``wait_closed`` implementation
167 (see `#498 <https://github.com/aio-libs/aioredis/issues/498>`_);
168 - Fix handling of instances where Redis returns null fields for a stream message
169 (see `#605 <https://github.com/aio-libs/aioredis/issues/605>`_);
170
171 Improved Documentation
172 ~~~~~~~~~~~~~~~~~~~~~~
173
174 - Rewrite "Getting started" documentation.
175 (see `#641 <https://github.com/aio-libs/aioredis/issues/641>`_);
176
177 Misc
178 ~~~~
179
180 - `#585 <https://github.com/aio-libs/aioredis/issues/585>`_,
181 `#611 <https://github.com/aio-libs/aioredis/issues/611>`_,
182 `#612 <https://github.com/aio-libs/aioredis/issues/612>`_,
183 `#619 <https://github.com/aio-libs/aioredis/issues/619>`_,
184 `#620 <https://github.com/aio-libs/aioredis/issues/620>`_,
185 `#642 <https://github.com/aio-libs/aioredis/issues/642>`_;
186
173187
174188 1.2.0 (2018-10-24)
175189 ^^^^^^^^^^^^^^^^^^
519533 * Fixed cancellation of wait_closed
520534 (see `#118 <https://github.com/aio-libs/aioredis/issues/118>`_);
521535
522 * Fixed ``time()`` convertion to float
536 * Fixed ``time()`` conversion to float
523537 (see `#126 <https://github.com/aio-libs/aioredis/issues/126>`_);
524538
525539 * Fixed ``hmset()`` method to return bool instead of ``b'OK'``
5353 docs/_build/man/aioredis.1
5454 examples/commands.py
5555 examples/connection.py
56 examples/iscan.py
5756 examples/pipeline.py
5857 examples/pool.py
59 examples/pool2.py
6058 examples/pool_pubsub.py
6159 examples/pubsub.py
6260 examples/pubsub2.py
6462 examples/sentinel.py
6563 examples/transaction.py
6664 examples/transaction2.py
65 examples/getting_started/00_connect.py
66 examples/getting_started/01_decoding.py
67 examples/getting_started/02_decoding.py
68 examples/getting_started/03_multiexec.py
69 examples/getting_started/04_pubsub.py
70 examples/getting_started/05_pubsub.py
71 examples/getting_started/06_sentinel.py
72 tests/_testutils.py
6773 tests/coerced_keys_dict_test.py
6874 tests/conftest.py
6975 tests/connection_commands_test.py
00 .\" Man page generated from reStructuredText.
11 .
2 .TH "AIOREDIS" "1" "Oct 24, 2018" "1.2" "aioredis"
2 .TH "AIOREDIS" "1" "Dec 02, 2019" "1.3" "aioredis"
33 .SH NAME
44 aioredis \- aioredis Documentation
55 .
7878 T{
7979 Sentinel support
8080 T} T{
81 Yes [1]
81 Yes
8282 T}
8383 _
8484 T{
9696 T{
9797 Tested CPython versions
9898 T} T{
99 \fI\%3.5, 3.6\fP [2]
99 \fI\%3.5.3, 3.6, 3.7\fP [1]
100100 T}
101101 _
102102 T{
103103 Tested PyPy3 versions
104104 T} T{
105 \fI\%5.9.0\fP
105 \fI\%pypy3.5\-7.0, pypy3.6\-7.1.1\fP
106106 T}
107107 _
108108 T{
109109 Tested for Redis server
110110 T} T{
111 \fI\%2.6, 2.8, 3.0, 3.2, 4.0\fP
111 \fI\%2.6, 2.8, 3.0, 3.2, 4.0, 5.0\fP
112112 T}
113113 _
114114 T{
119119 _
120120 .TE
121121 .IP [1] 5
122 Sentinel support is available in master branch.
123 This feature is not yet stable and may have some issues.
124 .IP [2] 5
125122 For Python 3.3, 3.4 support use aioredis v0.3.
126123 .SH INSTALLATION
127124 .sp
150147 .INDENT 0.0
151148 .IP \(bu 2
152149 Issue Tracker: \fI\%https://github.com/aio\-libs/aioredis/issues\fP
150 .IP \(bu 2
151 Google Group: \fI\%https://groups.google.com/forum/#!forum/aio\-libs\fP
152 .IP \(bu 2
153 Gitter: \fI\%https://gitter.im/aio\-libs/Lobby\fP
153154 .IP \(bu 2
154155 Source Code: \fI\%https://github.com/aio\-libs/aioredis\fP
155156 .IP \(bu 2
169170 .ce 0
170171 .sp
171172 .SH GETTING STARTED
172 .SS Commands Pipelining
173 .sp
174 Commands pipelining is built\-in.
175 .sp
176 Every command is sent to transport at\-once
177 (ofcourse if no \fBTypeError\fP/\fBValueError\fP was raised)
178 .sp
179 When you making a call with \fBawait\fP / \fByield from\fP you will be waiting result,
180 and then gather results.
181 .sp
182 Simple example show both cases (\fBget source code\fP):
183 .INDENT 0.0
184 .INDENT 3.5
185 .sp
186 .nf
187 .ft C
188 # No pipelining;
189 async def wait_each_command():
190 val = await redis.get(\(aqfoo\(aq) # wait until \(gaval\(ga is available
191 cnt = await redis.incr(\(aqbar\(aq) # wait until \(gacnt\(ga is available
192 return val, cnt
193
194 # Sending multiple commands and then gathering results
195 async def pipelined():
196 fut1 = redis.get(\(aqfoo\(aq) # issue command and return future
197 fut2 = redis.incr(\(aqbar\(aq) # issue command and return future
198 # block until results are available
199 val, cnt = await asyncio.gather(fut1, fut2)
200 return val, cnt
201
173 .SS Installation
174 .INDENT 0.0
175 .INDENT 3.5
176 .sp
177 .nf
178 .ft C
179 $ pip install aioredis
180 .ft P
181 .fi
182 .UNINDENT
183 .UNINDENT
184 .sp
185 This will install aioredis along with its dependencies:
186 .INDENT 0.0
187 .IP \(bu 2
188 hiredis protocol parser;
189 .IP \(bu 2
190 async\-timeout \-\-\- used in Sentinel client.
191 .UNINDENT
192 .SS Without dependencies
193 .sp
194 In some cases [1] you might need to install \fBaioredis\fP without \fBhiredis\fP,
195 which can be done with the following command:
196 .INDENT 0.0
197 .INDENT 3.5
198 .sp
199 .nf
200 .ft C
201 $ pip install \-\-no\-deps aioredis async\-timeout
202 .ft P
203 .fi
204 .UNINDENT
205 .UNINDENT
206 .SS Installing latest version from Git
207 .INDENT 0.0
208 .INDENT 3.5
209 .sp
210 .nf
211 .ft C
212 $ pip install git+https://github.com/aio\-libs/aioredis@master#egg=aioredis
213 .ft P
214 .fi
215 .UNINDENT
216 .UNINDENT
217 .SS Connecting
218 .sp
219 \fBget source code\fP
220 .INDENT 0.0
221 .INDENT 3.5
222 .sp
223 .nf
224 .ft C
225 import asyncio
226 import aioredis
227
228
229 async def main():
230 redis = await aioredis.create_redis_pool(\(aqredis://localhost\(aq)
231 await redis.set(\(aqmy\-key\(aq, \(aqvalue\(aq)
232 value = await redis.get(\(aqmy\-key\(aq, encoding=\(aqutf\-8\(aq)
233 print(value)
234
235 redis.close()
236 await redis.wait_closed()
237
238 asyncio.run(main())
239
240 .ft P
241 .fi
242 .UNINDENT
243 .UNINDENT
244 .sp
245 \fBaioredis.create_redis_pool()\fP creates a Redis client backed by a pool of
246 connections. The only required argument is the address of the Redis server.
247 The Redis server address can be either a host and port tuple
248 (ex: \fB(\(aqlocalhost\(aq, 6379)\fP), or a string which will be parsed into
249 a TCP or UNIX socket address (ex: \fB\(aqunix://var/run/redis.sock\(aq\fP,
250 \fB\(aq//var/run/redis.sock\(aq\fP, \fBredis://redis\-host\-or\-ip:6379/1\fP).
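.sp
A quick sketch, with host, port and socket path as placeholders, showing the
same client created from each of the address forms listed above:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# host and port tuple
redis = await aioredis.create_redis_pool((\(aqlocalhost\(aq, 6379))

# TCP URI selecting DB index 1
redis = await aioredis.create_redis_pool(\(aqredis://localhost:6379/1\(aq)

# UNIX socket URI
redis = await aioredis.create_redis_pool(\(aqunix://var/run/redis.sock\(aq)
.ft P
.fi
.UNINDENT
.UNINDENT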
251 .sp
252 Closing the client. Calling \fBredis.close()\fP and then \fBredis.wait_closed()\fP
253 is strongly encouraged, as these methods will shut down all open connections
254 and clean up resources.
255 .sp
256 See the commands reference for the full list of supported commands.
257 .SS Connecting to specific DB
258 .sp
259 There are several ways you can specify database index to select on connection:
260 .INDENT 0.0
261 .IP 1. 3
262 explicitly pass db index as \fBdb\fP argument:
263 .INDENT 3.0
264 .INDENT 3.5
265 .sp
266 .nf
267 .ft C
268 redis = await aioredis.create_redis_pool(
269 \(aqredis://localhost\(aq, db=1)
270 .ft P
271 .fi
272 .UNINDENT
273 .UNINDENT
274 .IP 2. 3
275 pass db index in URI as path component:
276 .INDENT 3.0
277 .INDENT 3.5
278 .sp
279 .nf
280 .ft C
281 redis = await aioredis.create_redis_pool(
282 \(aqredis://localhost/2\(aq)
202283 .ft P
203284 .fi
204285 .UNINDENT
205286 .UNINDENT
206287 .sp
207288 \fBNOTE:\fP
208 .INDENT 0.0
209 .INDENT 3.5
210 For convenience \fBaioredis\fP provides
211 \fBpipeline()\fP
212 method allowing to execute bulk of commands as one
213 (\fBget source code\fP):
214 .INDENT 0.0
215 .INDENT 3.5
216 .INDENT 0.0
217 .INDENT 3.5
218 .sp
219 .nf
220 .ft C
221 # Explicit pipeline
222 async def explicit_pipeline():
223 pipe = redis.pipeline()
224 fut1 = pipe.get(\(aqfoo\(aq)
225 fut2 = pipe.incr(\(aqbar\(aq)
226 result = await pipe.execute()
227 val, cnt = await asyncio.gather(fut1, fut2)
228 assert result == [val, cnt]
229 return val, cnt
230
231 .ft P
232 .fi
233 .UNINDENT
234 .UNINDENT
235 .UNINDENT
236 .UNINDENT
289 .INDENT 3.0
290 .INDENT 3.5
291 The DB index specified in the URI takes precedence over the
292 \fBdb\fP keyword argument.
293 .UNINDENT
294 .UNINDENT
295 .IP 3. 3
296 call \fBselect()\fP method:
297 .INDENT 3.0
298 .INDENT 3.5
299 .sp
300 .nf
301 .ft C
302 redis = await aioredis.create_redis_pool(
303 \(aqredis://localhost/\(aq)
304 await redis.select(3)
305 .ft P
306 .fi
307 .UNINDENT
308 .UNINDENT
309 .UNINDENT
310 .SS Connecting to password\-protected Redis instance
311 .sp
312 The password can be specified either as a keyword argument or in the address URI:
313 .INDENT 0.0
314 .INDENT 3.5
315 .sp
316 .nf
317 .ft C
318 redis = await aioredis.create_redis_pool(
319 \(aqredis://localhost\(aq, password=\(aqsEcRet\(aq)
320
321 redis = await aioredis.create_redis_pool(
322 \(aqredis://:sEcRet@localhost/\(aq)
323
324 redis = await aioredis.create_redis_pool(
325 \(aqredis://localhost/?password=sEcRet\(aq)
326 .ft P
327 .fi
328 .UNINDENT
329 .UNINDENT
330 .sp
331 \fBNOTE:\fP
332 .INDENT 0.0
333 .INDENT 3.5
334 The password specified in the URI takes precedence over the password keyword.
335 .sp
336 Also, specifying the password both as the authentication component and as a
337 query parameter in the URI is forbidden.
338 .INDENT 0.0
339 .INDENT 3.5
340 .sp
341 .nf
342 .ft C
343 # This will cause assertion error
344 await aioredis.create_redis_pool(
345 \(aqredis://:sEcRet@localhost/?password=SeCreT\(aq)
346 .ft P
347 .fi
348 .UNINDENT
349 .UNINDENT
350 .UNINDENT
351 .UNINDENT
352 .SS Result messages decoding
353 .sp
354 By default \fBaioredis\fP will return \fI\%bytes\fP for most Redis
355 commands that return string replies. Redis error replies are known to be
356 valid UTF\-8 strings so error messages are decoded automatically.
357 .sp
358 If you know that the data in Redis is a valid string, you can tell \fBaioredis\fP
359 to decode the result by passing the keyword\-only argument \fBencoding\fP
360 in a command call:
361 .sp
362 \fBget source code\fP
363 .INDENT 0.0
364 .INDENT 3.5
365 .sp
366 .nf
367 .ft C
368 import asyncio
369 import aioredis
370
371
372 async def main():
373 redis = await aioredis.create_redis_pool(\(aqredis://localhost\(aq)
374 await redis.set(\(aqkey\(aq, \(aqstring\-value\(aq)
375 bin_value = await redis.get(\(aqkey\(aq)
376 assert bin_value == b\(aqstring\-value\(aq
377
378 str_value = await redis.get(\(aqkey\(aq, encoding=\(aqutf\-8\(aq)
379 assert str_value == \(aqstring\-value\(aq
380
381 redis.close()
382 await redis.wait_closed()
383
384 asyncio.run(main())
385
386 .ft P
387 .fi
388 .UNINDENT
389 .UNINDENT
390 .sp
391 \fBaioredis\fP can decode messages for all Redis data types like
392 lists, hashes, sorted sets, etc:
393 .sp
394 \fBget source code\fP
395 .INDENT 0.0
396 .INDENT 3.5
397 .sp
398 .nf
399 .ft C
400 import asyncio
401 import aioredis
402
403
404 async def main():
405 redis = await aioredis.create_redis_pool(\(aqredis://localhost\(aq)
406
407 await redis.hmset_dict(\(aqhash\(aq,
408 key1=\(aqvalue1\(aq,
409 key2=\(aqvalue2\(aq,
410 key3=123)
411
412 result = await redis.hgetall(\(aqhash\(aq, encoding=\(aqutf\-8\(aq)
413 assert result == {
414 \(aqkey1\(aq: \(aqvalue1\(aq,
415 \(aqkey2\(aq: \(aqvalue2\(aq,
416 \(aqkey3\(aq: \(aq123\(aq, # note that Redis returns int as string
417 }
418
419 redis.close()
420 await redis.wait_closed()
421
422 asyncio.run(main())
423
424 .ft P
425 .fi
237426 .UNINDENT
238427 .UNINDENT
239428 .SS Multi/Exec transactions
240429 .sp
241 \fBaioredis\fP provides several ways for executing transactions:
242 .INDENT 0.0
243 .IP \(bu 2
244 when using raw connection you can issue \fBMulti\fP/\fBExec\fP commands
245 manually;
246 .IP \(bu 2
247 when using \fBaioredis.Redis\fP instance you can use
248 \fBmulti_exec()\fP transaction pipeline.
430 \fBget source code\fP
431 .INDENT 0.0
432 .INDENT 3.5
433 .sp
434 .nf
435 .ft C
436 import asyncio
437 import aioredis
438
439
440 async def main():
441 redis = await aioredis.create_redis_pool(\(aqredis://localhost\(aq)
442
443 tr = redis.multi_exec()
444 tr.set(\(aqkey1\(aq, \(aqvalue1\(aq)
445 tr.set(\(aqkey2\(aq, \(aqvalue2\(aq)
446 ok1, ok2 = await tr.execute()
447 assert ok1
448 assert ok2
449
450 asyncio.run(main())
451
452 .ft P
453 .fi
454 .UNINDENT
249455 .UNINDENT
250456 .sp
251457 The \fBmulti_exec()\fP method creates and returns a new
252458 \fBMultiExec\fP object, which is used for buffering commands and
253459 then executing them inside a MULTI/EXEC block.
254460 .sp
255 Here is a simple example
256 (\fBget source code\fP):
257 .INDENT 0.0
258 .INDENT 3.5
259 .sp
260 .nf
261 .ft C
262 async def transaction():
263 tr = redis.multi_exec()
264 future1 = tr.set(\(aqfoo\(aq, \(aq123\(aq)
265 future2 = tr.set(\(aqbar\(aq, \(aq321\(aq)
266 result = await tr.execute()
267 assert result == await asyncio.gather(future1, future2)
268 return result
269
270 .ft P
271 .fi
272 .UNINDENT
273 .UNINDENT
274 .sp
275 As you can notice \fBawait\fP is \fBonly\fP used at line 5 with \fBtr.execute\fP
276 and \fBnot with\fP \fBtr.set(...)\fP calls.
277 .sp
278461 \fBWARNING:\fP
279462 .INDENT 0.0
280463 .INDENT 3.5
299482 .sp
300483 \fBaioredis\fP provides support for Redis Publish/Subscribe messaging.
301484 .sp
302 To switch connection to subscribe mode you must execute \fBsubscribe\fP command
303 by yield\(aqing from \fBsubscribe()\fP it returns a list of
304 \fBChannel\fP objects representing subscribed channels.
305 .sp
306 As soon as connection is switched to subscribed mode the channel will receive
307 and store messages
485 To start listening for messages you must call either the
486 \fBsubscribe()\fP or the
487 \fBpsubscribe()\fP method.
488 Both methods return a list of \fBChannel\fP objects representing the
489 subscribed channels.
490 .sp
491 Right after that the channel will receive and store messages
308492 (the \fBChannel\fP object is basically a wrapper around \fI\%asyncio.Queue\fP).
309493 To read messages from a channel you need to use the \fBget()\fP
310494 or \fBget_json()\fP coroutines.
311495 .sp
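A minimal reading\-loop sketch built on \fBwait_message()\fP and \fBget()\fP
(the channel name and encoding here are only illustrative):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
sub = await aioredis.create_redis_pool(\(aqredis://localhost\(aq)
ch, = await sub.subscribe(\(aqnews\(aq)

while await ch.wait_message():
    msg = await ch.get(encoding=\(aqutf\-8\(aq)
    print(\(aqreceived:\(aq, msg)
.ft P
.fi
.UNINDENT
.UNINDENT
.sp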
312 \fBNOTE:\fP
313 .INDENT 0.0
314 .INDENT 3.5
315 In Pub/Sub mode redis connection can only receive messages or issue
316 (P)SUBSCRIBE / (P)UNSUBSCRIBE commands.
317 .UNINDENT
318 .UNINDENT
319 .sp
320 Pub/Sub example (\fBget source code\fP):
321 .INDENT 0.0
322 .INDENT 3.5
323 .sp
324 .nf
325 .ft C
326 sub = await aioredis.create_redis(
327 \(aqredis://localhost\(aq)
328
329 ch1, ch2 = await sub.subscribe(\(aqchannel:1\(aq, \(aqchannel:2\(aq)
330 assert isinstance(ch1, aioredis.Channel)
331 assert isinstance(ch2, aioredis.Channel)
332
333 async def async_reader(channel):
334 while await channel.wait_message():
335 msg = await channel.get(encoding=\(aqutf\-8\(aq)
336 # ... process message ...
337 print("message in {}: {}".format(channel.name, msg))
338
339 tsk1 = asyncio.ensure_future(async_reader(ch1))
340
341 # Or alternatively:
342
343 async def async_reader2(channel):
344 while True:
345 msg = await channel.get(encoding=\(aqutf\-8\(aq)
346 if msg is None:
347 break
348 # ... process message ...
349 print("message in {}: {}".format(channel.name, msg))
350
351 tsk2 = asyncio.ensure_future(async_reader2(ch2))
352
353 .ft P
354 .fi
355 .UNINDENT
356 .UNINDENT
357 .sp
358 Pub/Sub example (\fBget source code\fP):
359 .INDENT 0.0
360 .INDENT 3.5
361 .sp
362 .nf
363 .ft C
364 async def reader(channel):
365 while (await channel.wait_message()):
366 msg = await channel.get(encoding=\(aqutf\-8\(aq)
367 # ... process message ...
368 print("message in {}: {}".format(channel.name, msg))
369
370 if msg == STOPWORD:
371 return
372
373 with await pool as conn:
374 await conn.execute_pubsub(\(aqsubscribe\(aq, \(aqchannel:1\(aq)
375 channel = conn.pubsub_channels[\(aqchannel:1\(aq]
376 await reader(channel) # wait for reader to complete
377 await conn.execute_pubsub(\(aqunsubscribe\(aq, \(aqchannel:1\(aq)
378
379 # Explicit connection usage
380 conn = await pool.acquire()
381 try:
382 await conn.execute_pubsub(\(aqsubscribe\(aq, \(aqchannel:1\(aq)
383 channel = conn.pubsub_channels[\(aqchannel:1\(aq]
384 await reader(channel) # wait for reader to complete
385 await conn.execute_pubsub(\(aqunsubscribe\(aq, \(aqchannel:1\(aq)
386 finally:
387 pool.release(conn)
388
389 .ft P
390 .fi
391 .UNINDENT
392 .UNINDENT
393 .SS Python 3.5 \fBasync with\fP / \fBasync for\fP support
394 .sp
395 \fBaioredis\fP is compatible with \fI\%PEP 492\fP\&.
396 .sp
397 \fBPool\fP can be used with \fI\%async with\fP
398 (\fBget source code\fP):
399 .INDENT 0.0
400 .INDENT 3.5
401 .sp
402 .nf
403 .ft C
404 pool = await aioredis.create_pool(
405 \(aqredis://localhost\(aq)
406 async with pool.get() as conn:
407 value = await conn.execute(\(aqget\(aq, \(aqmy\-key\(aq)
408 print(\(aqraw value:\(aq, value)
409
410 .ft P
411 .fi
412 .UNINDENT
413 .UNINDENT
414 .sp
415 It also can be used with \fBawait\fP:
416 .INDENT 0.0
417 .INDENT 3.5
418 .sp
419 .nf
420 .ft C
421 pool = await aioredis.create_pool(
422 \(aqredis://localhost\(aq)
423 # This is exactly the same as:
424 # with (yield from pool) as conn:
425 with (await pool) as conn:
426 value = await conn.execute(\(aqget\(aq, \(aqmy\-key\(aq)
427 print(\(aqraw value:\(aq, value)
428
429 .ft P
430 .fi
431 .UNINDENT
432 .UNINDENT
433 .sp
434 New \fBscan\fP\-family commands added with support of \fI\%async for\fP
435 (\fBget source code\fP):
436 .INDENT 0.0
437 .INDENT 3.5
438 .sp
439 .nf
440 .ft C
441 redis = await aioredis.create_redis(
442 \(aqredis://localhost\(aq)
443
444 async for key in redis.iscan(match=\(aqsomething*\(aq):
445 print(\(aqMatched:\(aq, key)
446
447 async for name, val in redis.ihscan(key, match=\(aqsomething*\(aq):
448 print(\(aqMatched:\(aq, name, \(aq\->\(aq, val)
449
450 async for val in redis.isscan(key, match=\(aqsomething*\(aq):
451 print(\(aqMatched:\(aq, val)
452
453 async for val, score in redis.izscan(key, match=\(aqsomething*\(aq):
454 print(\(aqMatched:\(aq, val, \(aq:\(aq, score)
455
456 .ft P
457 .fi
458 .UNINDENT
459 .UNINDENT
460 .SS SSL/TLS support
461 .sp
462 Though Redis server \fI\%does not support data encryption\fP
463 it is still possible to setup Redis server behind SSL proxy. For such cases
464 \fBaioredis\fP library support secure connections through \fI\%asyncio\fP
465 SSL support. See \fI\%BaseEventLoop.create_connection\fP for details.
466 .SH MIGRATING FROM V0.3 TO V1.0
467 .SS API changes and backward incompatible changes:
468 .INDENT 0.0
469 .IP \(bu 2
470 \fI\%aioredis.create_pool\fP
471 .IP \(bu 2
472 \fI\%aioredis.create_reconnecting_redis\fP
473 .IP \(bu 2
474 \fI\%aioredis.Redis\fP
475 .IP \(bu 2
476 \fI\%Blocking operations and connection sharing\fP
477 .IP \(bu 2
478 \fI\%Sorted set commands return values\fP
479 .IP \(bu 2
480 \fI\%Hash hscan command now returns list of tuples\fP
481 .UNINDENT
496 Example subscribing and reading channels:
497 .sp
498 \fBget source code\fP
499 .INDENT 0.0
500 .INDENT 3.5
501 .sp
502 .nf
503 .ft C
504 import asyncio
505 import aioredis
506
507
508 async def main():
509 redis = await aioredis.create_redis_pool(\(aqredis://localhost\(aq)
510
511 ch1, ch2 = await redis.subscribe(\(aqchannel:1\(aq, \(aqchannel:2\(aq)
512 assert isinstance(ch1, aioredis.Channel)
513 assert isinstance(ch2, aioredis.Channel)
514
515 async def reader(channel):
516 async for message in channel.iter():
517 print("Got message:", message)
518 asyncio.get_running_loop().create_task(reader(ch1))
519 asyncio.get_running_loop().create_task(reader(ch2))
520
521 await redis.publish(\(aqchannel:1\(aq, \(aqHello\(aq)
522 await redis.publish(\(aqchannel:2\(aq, \(aqWorld\(aq)
523
524 redis.close()
525 await redis.wait_closed()
526
527 asyncio.run(main())
528
529 .ft P
530 .fi
531 .UNINDENT
532 .UNINDENT
533 .sp
534 Subscribing and reading patterns:
535 .sp
536 \fBget source code\fP
537 .INDENT 0.0
538 .INDENT 3.5
539 .sp
540 .nf
541 .ft C
542 import asyncio
543 import aioredis
544
545
546 async def main():
547 redis = await aioredis.create_redis_pool(\(aqredis://localhost\(aq)
548
549 ch, = await redis.psubscribe(\(aqchannel:*\(aq)
550 assert isinstance(ch, aioredis.Channel)
551
552 async def reader(channel):
553 async for ch, message in channel.iter():
554 print("Got message in channel:", ch, ":", message)
555 asyncio.get_running_loop().create_task(reader(ch))
556
557 await redis.publish(\(aqchannel:1\(aq, \(aqHello\(aq)
558 await redis.publish(\(aqchannel:2\(aq, \(aqWorld\(aq)
559
560 redis.close()
561 await redis.wait_closed()
562
563 asyncio.run(main())
564
565 .ft P
566 .fi
567 .UNINDENT
568 .UNINDENT
569 .SS Sentinel client
570 .sp
571 \fBget source code\fP
572 .INDENT 0.0
573 .INDENT 3.5
574 .sp
575 .nf
576 .ft C
577 import asyncio
578 import aioredis
579
580
581 async def main():
582 sentinel = await aioredis.create_sentinel(
583 [\(aqredis://localhost:26379\(aq, \(aqredis://sentinel2:26379\(aq])
584 redis = sentinel.master_for(\(aqmymaster\(aq)
585
586 ok = await redis.set(\(aqkey\(aq, \(aqvalue\(aq)
587 assert ok
588 val = await redis.get(\(aqkey\(aq, encoding=\(aqutf\-8\(aq)
589 assert val == \(aqvalue\(aq
590
591 asyncio.run(main())
592
593 .ft P
594 .fi
595 .UNINDENT
596 .UNINDENT
597 .sp
598 The Sentinel client requires a list of Redis Sentinel addresses to connect to
599 and start discovering services.
600 .sp
601 Calling the \fBmaster_for()\fP or
602 \fBslave_for()\fP methods will return
603 Redis clients connected to the specified services monitored by Sentinel.
604 .sp
605 Sentinel client will detect failover and reconnect Redis clients automatically.
606 .sp
607 See detailed reference here
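.sp
A complementary sketch: reads can go through a replica obtained with
\fBslave_for()\fP (the service name follows the example above; the
close/wait_closed shutdown pattern is assumed to mirror the plain client):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
replica = sentinel.slave_for(\(aqmymaster\(aq)
val = await replica.get(\(aqkey\(aq, encoding=\(aqutf\-8\(aq)

# release all sentinel\-managed connections when done
sentinel.close()
await sentinel.wait_closed()
.ft P
.fi
.UNINDENT
.UNINDENT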
482608
483609 .sp
484610 .ce
486612
487613 .ce 0
488614 .sp
489 .SS aioredis.create_pool
490 .sp
491 \fBcreate_pool()\fP now returns \fBConnectionsPool\fP
492 instead of \fBRedisPool\fP\&.
493 .sp
494 This means that pool now operates with \fBRedisConnection\fP
495 objects and not \fBRedis\fP\&.
496 .TS
497 center;
498 |l|l|.
499 _
500 T{
501 v0.3
502 T} T{
503 .INDENT 0.0
504 .INDENT 3.5
505 .sp
506 .nf
507 .ft C
508 pool = await aioredis.create_pool((\(aqlocalhost\(aq, 6379))
509
510 with await pool as redis:
511 # calling methods of Redis class
512 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
513 .ft P
514 .fi
515 .UNINDENT
516 .UNINDENT
517 T}
518 _
519 T{
520 v1.0
521 T} T{
522 .INDENT 0.0
523 .INDENT 3.5
524 .sp
525 .nf
526 .ft C
527 pool = await aioredis.create_pool((\(aqlocalhost\(aq, 6379))
528
529 with await pool as conn:
530 # calling conn.lpush will raise AttributeError exception
531 await conn.execute(\(aqlpush\(aq, \(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
532 .ft P
533 .fi
534 .UNINDENT
535 .UNINDENT
536 T}
537 _
538 .TE
539 .SS aioredis.create_reconnecting_redis
540 .sp
541 \fBcreate_reconnecting_redis()\fP has been dropped.
542 .sp
543 \fBcreate_redis_pool()\fP can be used instead of former function.
544 .TS
545 center;
546 |l|l|.
547 _
548 T{
549 v0.3
550 T} T{
551 .INDENT 0.0
552 .INDENT 3.5
553 .sp
554 .nf
555 .ft C
556 redis = await aioredis.create_reconnecting_redis(
557 (\(aqlocalhost\(aq, 6379))
558
559 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
560 .ft P
561 .fi
562 .UNINDENT
563 .UNINDENT
564 T}
565 _
566 T{
567 v1.0
568 T} T{
569 .INDENT 0.0
570 .INDENT 3.5
571 .sp
572 .nf
573 .ft C
574 redis = await aioredis.create_redis_pool(
575 (\(aqlocalhost\(aq, 6379))
576
577 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
578 .ft P
579 .fi
580 .UNINDENT
581 .UNINDENT
582 T}
583 _
584 .TE
585 .sp
586 \fBcreate_redis_pool\fP returns \fBRedis\fP initialized with
587 \fBConnectionsPool\fP which is responsible for reconnecting to server.
588 .sp
589 Also \fBcreate_reconnecting_redis\fP was patching the \fBRedisConnection\fP and
590 breaking \fBclosed\fP property (it was always \fBTrue\fP).
591 .SS aioredis.Redis
592 .sp
593 \fBRedis\fP class now operates with objects implementing
594 \fBaioredis.abc.AbcConnection\fP interface.
595 \fBRedisConnection\fP and \fBConnectionsPool\fP are
596 both implementing \fBAbcConnection\fP so it is become possible to use same API
597 when working with either single connection or connections pool.
598 .TS
599 center;
600 |l|l|.
601 _
602 T{
603 v0.3
604 T} T{
605 .INDENT 0.0
606 .INDENT 3.5
607 .sp
608 .nf
609 .ft C
610 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
611 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
612
613 pool = await aioredis.create_pool((\(aqlocalhost\(aq, 6379))
614 redis = await pool.acquire() # get Redis object
615 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
616 .ft P
617 .fi
618 .UNINDENT
619 .UNINDENT
620 T}
621 _
622 T{
623 v1.0
624 T} T{
625 .INDENT 0.0
626 .INDENT 3.5
627 .sp
628 .nf
629 .ft C
630 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
631 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
632
633 redis = await aioredis.create_redis_pool((\(aqlocalhost\(aq, 6379))
634 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
635 .ft P
636 .fi
637 .UNINDENT
638 .UNINDENT
639 T}
640 _
641 .TE
642 .SS Blocking operations and connection sharing
643 .sp
644 Current implementation of \fBConnectionsPool\fP by default \fBexecute
645 every command on random connection\fP\&. The \fIPros\fP of this is that it allowed
646 implementing \fBAbcConnection\fP interface and hide pool inside \fBRedis\fP class,
647 and also keep pipelining feature (like RedisConnection.execute).
648 The \fICons\fP of this is that \fBdifferent tasks may use same connection and block
649 it\fP with some long\-running command.
650 .sp
651 We can call it \fBShared Mode\fP \-\-\- commands are sent to random connections
652 in pool without need to lock [connection]:
653 .INDENT 0.0
654 .INDENT 3.5
655 .sp
656 .nf
657 .ft C
658 redis = await aioredis.create_redis_pool(
659 (\(aqlocalhost\(aq, 6379),
660 minsize=1,
661 maxsize=1)
662
663 async def task():
664 # Shared mode
665 await redis.set(\(aqkey\(aq, \(aqval\(aq)
666
667 asyncio.ensure_future(task())
668 asyncio.ensure_future(task())
669 # Both tasks will send commands through same connection
670 # without acquiring (locking) it first.
671 .ft P
672 .fi
673 .UNINDENT
674 .UNINDENT
675 .sp
676 Blocking operations (like \fBblpop\fP, \fBbrpop\fP or long\-running LUA scripts)
677 in \fBshared mode\fP mode will block connection and thus may lead to whole
678 program malfunction.
679 .sp
680 This \fIblocking\fP issue can be easily solved by using exclusive connection
681 for such operations:
682 .INDENT 0.0
683 .INDENT 3.5
684 .sp
685 .nf
686 .ft C
687 redis = await aioredis.create_redis_pool(
688 (\(aqlocalhost\(aq, 6379),
689 minsize=1,
690 maxsize=1)
691
692 async def task():
693 # Exclusive mode
694 with await redis as r:
695 await r.set(\(aqkey\(aq, \(aqval\(aq)
696 asyncio.ensure_future(task())
697 asyncio.ensure_future(task())
698 # Both tasks will first acquire connection.
699 .ft P
700 .fi
701 .UNINDENT
702 .UNINDENT
703 .sp
704 We can call this \fBExclusive Mode\fP \-\-\- context manager is used to
705 acquire (lock) exclusive connection from pool and send all commands through it.
706 .sp
707 \fBNOTE:\fP
708 .INDENT 0.0
709 .INDENT 3.5
710 This technique is similar to v0.3 pool usage:
711 .INDENT 0.0
712 .INDENT 3.5
713 .sp
714 .nf
715 .ft C
716 # in aioredis v0.3
717 pool = await aioredis.create_pool((\(aqlocalhost\(aq, 6379))
718 with await pool as redis:
719 # Redis is bound to exclusive connection
720 redis.set(\(aqkey\(aq, \(aqval\(aq)
721 .ft P
722 .fi
723 .UNINDENT
724 .UNINDENT
725 .UNINDENT
726 .UNINDENT
727 .SS Sorted set commands return values
728 .sp
729 Sorted set commands (like \fBzrange\fP, \fBzrevrange\fP and others) that accept
730 \fBwithscores\fP argument now \fBreturn list of tuples\fP instead of plain list.
731 .TS
732 center;
733 |l|l|.
734 _
735 T{
736 v0.3
737 T} T{
738 .INDENT 0.0
739 .INDENT 3.5
740 .sp
741 .nf
742 .ft C
743 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
744 await redis.zadd(\(aqzset\-key\(aq, 1, \(aqone\(aq, 2, \(aqtwo\(aq)
745 res = await redis.zrage(\(aqzset\-key\(aq, withscores=True)
746 assert res == [b\(aqone\(aq, 1, b\(aqtwo\(aq, 2]
747
748 # not an esiest way to make a dict
749 it = iter(res)
750 assert dict(zip(it, it)) == {b\(aqone\(aq: 1, b\(aqtwo\(aq: 2}
751 .ft P
752 .fi
753 .UNINDENT
754 .UNINDENT
755 T}
756 _
757 T{
758 v1.0
759 T} T{
760 .INDENT 0.0
761 .INDENT 3.5
762 .sp
763 .nf
764 .ft C
765 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
766 await redis.zadd(\(aqzset\-key\(aq, 1, \(aqone\(aq, 2, \(aqtwo\(aq)
767 res = await redis.zrage(\(aqzset\-key\(aq, withscores=True)
768 assert res == [(b\(aqone\(aq, 1), (b\(aqtwo\(aq, 2)]
769
770 # now its easier to make a dict of it
771 assert dict(res) == {b\(aqone\(aq: 1, b\(aqtwo\(aq: 2}
772 .ft P
773 .fi
774 .UNINDENT
775 .UNINDENT
776 T}
777 _
778 .TE
779 .SS Hash \fBhscan\fP command now returns list of tuples
780 .sp
781 \fBhscan\fP updated to return a list of tuples instead of plain
782 mixed key/value list.
783 .TS
784 center;
785 |l|l|.
786 _
787 T{
788 v0.3
789 T} T{
790 .INDENT 0.0
791 .INDENT 3.5
792 .sp
793 .nf
794 .ft C
795 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
796 await redis.hmset(\(aqhash\(aq, \(aqone\(aq, 1, \(aqtwo\(aq, 2)
797 cur, data = await redis.hscan(\(aqhash\(aq)
798 assert data == [b\(aqone\(aq, b\(aq1\(aq, b\(aqtwo\(aq, b\(aq2\(aq]
799
800 # not an esiest way to make a dict
801 it = iter(data)
802 assert dict(zip(it, it)) == {b\(aqone\(aq: b\(aq1\(aq, b\(aqtwo\(aq: b\(aq2\(aq}
803 .ft P
804 .fi
805 .UNINDENT
806 .UNINDENT
807 T}
808 _
809 T{
810 v1.0
811 T} T{
812 .INDENT 0.0
813 .INDENT 3.5
814 .sp
815 .nf
816 .ft C
817 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
818 await redis.hmset(\(aqhash\(aq, \(aqone\(aq, 1, \(aqtwo\(aq, 2)
819 cur, data = await redis.hscan(\(aqhash\(aq)
820 assert data == [(b\(aqone\(aq, b\(aq1\(aq), (b\(aqtwo\(aq, b\(aq2\(aq)]
821
822 # now its easier to make a dict of it
823 assert dict(data) == {b\(aqone\(aq: b\(aq1\(aq: b\(aqtwo\(aq: b\(aq2\(aq}
824 .ft P
825 .fi
826 .UNINDENT
827 .UNINDENT
828 T}
829 _
830 .TE
615 .IP [1] 5
616 Celery hiredis issues
617 (\fI\%#197\fP,
618 \fI\%#317\fP)
831619 .SH AIOREDIS --- API REFERENCE
832620 .SS Connection
833621 .sp
845633 import aioredis
846634
847635 async def connect_uri():
848 conn = await aioredis\&.create_connection(
636 conn = await aioredis.create_connection(
849637 \(aqredis://localhost/0\(aq)
850 val = await conn\&.execute(\(aqGET\(aq, \(aqmy\-key\(aq)
638 val = await conn.execute(\(aqGET\(aq, \(aqmy\-key\(aq)
851639
852640 async def connect_tcp():
853 conn = await aioredis\&.create_connection(
641 conn = await aioredis.create_connection(
854642 (\(aqlocalhost\(aq, 6379))
855 val = await conn\&.execute(\(aqGET\(aq, \(aqmy\-key\(aq)
643 val = await conn.execute(\(aqGET\(aq, \(aqmy\-key\(aq)
856644
857645 async def connect_unixsocket():
858 conn = await aioredis\&.create_connection(
646 conn = await aioredis.create_connection(
859647 \(aq/path/to/redis/socket\(aq)
860648 # or uri \(aqunix:///path/to/redis/socket?db=1\(aq
861 val = await conn\&.execute(\(aqGET\(aq, \(aqmy\-key\(aq)
862
863 asyncio\&.get_event_loop()\&.run_until_complete(connect_tcp())
864 asyncio\&.get_event_loop()\&.run_until_complete(connect_unixsocket())
865 .ft P
866 .fi
867 .UNINDENT
868 .UNINDENT
869 .INDENT 0.0
870 .TP
871 .B coroutine aioredis.create_connection(address, *, db=0, password=None, ssl=None, encoding=None, parser=None, loop=None, timeout=None)
649 val = await conn.execute(\(aqGET\(aq, \(aqmy\-key\(aq)
650
651 asyncio.get_event_loop().run_until_complete(connect_tcp())
652 asyncio.get_event_loop().run_until_complete(connect_unixsocket())
653 .ft P
654 .fi
655 .UNINDENT
656 .UNINDENT
657 .INDENT 0.0
658 .TP
659 .B coroutine aioredis.create_connection(address, *, db=0, password=None, ssl=None, encoding=None, parser=None, timeout=None, connection_cls=None)
872660 Creates Redis connection.
873661 .sp
874662 Changed in version v0.3.1: \fBtimeout\fP argument added.
875663
876664 .sp
877665 Changed in version v1.0: \fBparser\fP argument added.
666
667 .sp
668 Deprecated since version v1.3.1: \fBloop\fP argument deprecated for Python 3.8 compatibility.
878669
879670 .INDENT 7.0
880671 .TP
909700 \fBparser\fP (\fIcallable\fP\fI or \fP\fI\%None\fP) \-\- Protocol parser class. Can be used to set custom protocol
910701 reader; expected same interface as \fBhiredis.Reader\fP\&.
911702 .IP \(bu 2
912 \fBloop\fP (\fI\%EventLoop\fP) \-\- An optional \fIevent loop\fP instance
913 (uses \fI\%asyncio.get_event_loop()\fP if not specified).
914 .IP \(bu 2
915703 \fBtimeout\fP (\fIfloat greater than 0\fP\fI or \fP\fI\%None\fP) \-\- Max time to open a connection, otherwise
916704 raise \fI\%asyncio.TimeoutError\fP exception.
917705 \fBNone\fP by default
706 .IP \(bu 2
707 \fBconnection_cls\fP (\fBabc.AbcConnection\fP or None) \-\- Custom connection class. \fBNone\fP by default.
918708 .UNINDENT
919709 .TP
920710 .B Returns
1023813 .sp
1024814 .nf
1025815 .ft C
1026 >>> ch1 = Channel(\(aqA\(aq, is_pattern=False, loop=loop)
816 >>> ch1 = Channel(\(aqA\(aq, is_pattern=False)
1027817 >>> await conn.execute_pubsub(\(aqsubscribe\(aq, ch1)
1028818 [[b\(aqsubscribe\(aq, b\(aqA\(aq, 1]]
1029819 .ft P
1132922 import aioredis
1133923
1134924 async def sample_pool():
1135 pool = await aioredis\&.create_pool(\(aqredis://localhost\(aq)
1136 val = await pool\&.execute(\(aqget\(aq, \(aqmy\-key\(aq)
1137 .ft P
1138 .fi
1139 .UNINDENT
1140 .UNINDENT
1141 .INDENT 0.0
1142 .TP
1143 .B aioredis.create_pool(address, *, db=0, password=None, ssl=None, encoding=None, minsize=1, maxsize=10, parser=None, loop=None, create_connection_timeout=None, pool_cls=None, connection_cls=None)
925 pool = await aioredis.create_pool(\(aqredis://localhost\(aq)
926 val = await pool.execute(\(aqget\(aq, \(aqmy\-key\(aq)
927 .ft P
928 .fi
929 .UNINDENT
930 .UNINDENT
931 .INDENT 0.0
932 .TP
933 .B aioredis.create_pool(address, *, db=0, password=None, ssl=None, encoding=None, minsize=1, maxsize=10, parser=None, create_connection_timeout=None, pool_cls=None, connection_cls=None)
1144934 A \fI\%coroutine\fP that instantiates a pool of
1145935 \fI\%RedisConnection\fP\&.
1146936 .sp
1157947
1158948 .sp
1159949 New in version v1.0: \fBparser\fP, \fBpool_cls\fP and \fBconnection_cls\fP arguments added.
950
951 .sp
952 Deprecated since version v1.3.1: \fBloop\fP argument deprecated for Python 3.8 compatibility.
1160953
1161954 .INDENT 7.0
1162955 .TP
1196989 .IP \(bu 2
1197990 \fBparser\fP (\fIcallable\fP\fI or \fP\fI\%None\fP) \-\- Protocol parser class. Can be used to set custom protocol
1198991 reader; expected same interface as \fBhiredis.Reader\fP\&.
1199 .IP \(bu 2
1200 \fBloop\fP (\fI\%EventLoop\fP) \-\- An optional \fIevent loop\fP instance
1201 (uses \fI\%asyncio.get_event_loop()\fP if not specified).
1202992 .IP \(bu 2
1203993 \fBcreate_connection_timeout\fP (\fIfloat greater than 0\fP\fI or \fP\fI\%None\fP) \-\- Max time to open a connection,
1204994 otherwise raise an \fI\%asyncio.TimeoutError\fP\&. \fBNone\fP by default.
13771167 Wait until pool gets closed (when all connections are closed).
13781168 .sp
13791169 New in version v0.2.8.
1380
1381 .UNINDENT
1382 .UNINDENT
1383
1384 .sp
1385 .ce
1386 ----
1387
1388 .ce 0
1389 .sp
1390 .SS Pub/Sub Channel object
1391 .sp
1392 \fIChannel\fP object is a wrapper around queue for storing received pub/sub messages.
1393 .INDENT 0.0
1394 .TP
1395 .B class aioredis.Channel(name, is_pattern, loop=None)
1396 Bases: \fBabc.AbcChannel\fP
1397 .sp
1398 Object representing Pub/Sub messages queue.
1399 It\(aqs basically a wrapper around \fI\%asyncio.Queue\fP\&.
1400 .INDENT 7.0
1401 .TP
1402 .B name
1403 Holds encoded channel/pattern name.
1404 .UNINDENT
1405 .INDENT 7.0
1406 .TP
1407 .B is_pattern
1408 Set to True for pattern channels.
1409 .UNINDENT
1410 .INDENT 7.0
1411 .TP
1412 .B is_active
1413 Set to True if there are messages in queue and connection is still
1414 subscribed to this channel.
1415 .UNINDENT
1416 .INDENT 7.0
1417 .TP
1418 .B coroutine get(*, encoding=None, decoder=None)
1419 Coroutine that waits for and returns a message.
1420 .sp
1421 Return value is message received or \fBNone\fP signifying that channel has
1422 been unsubscribed and no more messages will be received.
1423 .INDENT 7.0
1424 .TP
1425 .B Parameters
1426 .INDENT 7.0
1427 .IP \(bu 2
1428 \fBencoding\fP (\fI\%str\fP) \-\- If not None used to decode resulting bytes message.
1429 .IP \(bu 2
1430 \fBdecoder\fP (\fIcallable\fP) \-\- If specified used to decode message,
1431 ex. \fI\%json.loads()\fP
1432 .UNINDENT
1433 .TP
1434 .B Raises
1435 \fBaioredis.ChannelClosedError\fP \-\- If channel is unsubscribed and
1436 has no more messages.
1437 .UNINDENT
1438 .UNINDENT
1439 .INDENT 7.0
1440 .TP
1441 .B get_json(*, encoding="utf\-8")
1442 Shortcut to \fBget(encoding="utf\-8", decoder=json.loads)\fP
1443 .UNINDENT
1444 .INDENT 7.0
1445 .TP
1446 .B coroutine wait_message()
1447 Waits for message to become available in channel
1448 or channel is closed (unsubscribed).
1449 .sp
1450 Main idea is to use it in loops:
1451 .sp
1452 .nf
1453 .ft C
1454 >>> ch = redis.channels[\(aqchannel:1\(aq]
1455 >>> while await ch.wait_message():
1456 \&... msg = await ch.get()
1457 .ft P
1458 .fi
1459 .INDENT 7.0
1460 .TP
1461 .B Return type
1462 \fI\%bool\fP
1463 .UNINDENT
1464 .UNINDENT
1465 .INDENT 7.0
1466 .TP
1467 .B coroutine async\-for iter(*, encoding=None, decoder=None)
1468 Same as \fI\%get()\fP method but it is a native coroutine.
1469 .sp
1470 Usage example:
1471 .INDENT 7.0
1472 .INDENT 3.5
1473 .sp
1474 .nf
1475 .ft C
1476 >>> async for msg in ch.iter():
1477 \&... print(msg)
1478 .ft P
1479 .fi
1480 .UNINDENT
1481 .UNINDENT
1482 .sp
1483 New in version 0.2.5: Available for Python 3.5 only
14841170
14851171 .UNINDENT
14861172 .UNINDENT
17241410
17251411 .ce 0
17261412 .sp
1413 .SS Pub/Sub Channel object
1414 .sp
1415 \fIChannel\fP object is a wrapper around queue for storing received pub/sub messages.
1416 .INDENT 0.0
1417 .TP
1418 .B class aioredis.Channel(name, is_pattern)
1419 Bases: \fBabc.AbcChannel\fP
1420 .sp
1421 Object representing Pub/Sub messages queue.
1422 It\(aqs basically a wrapper around \fI\%asyncio.Queue\fP\&.
1423 .INDENT 7.0
1424 .TP
1425 .B name
1426 Holds encoded channel/pattern name.
1427 .UNINDENT
1428 .INDENT 7.0
1429 .TP
1430 .B is_pattern
1431 Set to True for pattern channels.
1432 .UNINDENT
1433 .INDENT 7.0
1434 .TP
1435 .B is_active
1436 Set to True if there are messages in queue and connection is still
1437 subscribed to this channel.
1438 .UNINDENT
1439 .INDENT 7.0
1440 .TP
1441 .B coroutine get(*, encoding=None, decoder=None)
1442 Coroutine that waits for and returns a message.
1443 .sp
1444 Return value is message received or \fBNone\fP signifying that channel has
1445 been unsubscribed and no more messages will be received.
1446 .INDENT 7.0
1447 .TP
1448 .B Parameters
1449 .INDENT 7.0
1450 .IP \(bu 2
1451 \fBencoding\fP (\fI\%str\fP) \-\- If not None used to decode resulting bytes message.
1452 .IP \(bu 2
1453 \fBdecoder\fP (\fIcallable\fP) \-\- If specified used to decode message,
1454 ex. \fI\%json.loads()\fP
1455 .UNINDENT
1456 .TP
1457 .B Raises
1458 \fBaioredis.ChannelClosedError\fP \-\- If channel is unsubscribed and
1459 has no more messages.
1460 .UNINDENT
1461 .UNINDENT
1462 .INDENT 7.0
1463 .TP
1464 .B get_json(*, encoding="utf\-8")
1465 Shortcut to \fBget(encoding="utf\-8", decoder=json.loads)\fP
1466 .UNINDENT
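.sp
A one\-line sketch, assuming the publisher pushed a JSON\-encoded payload to
this channel:
.INDENT 7.0
.INDENT 3.5
.sp
.nf
.ft C
payload = await ch.get_json()   # already decoded via json.loads
.ft P
.fi
.UNINDENT
.UNINDENT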
1467 .INDENT 7.0
1468 .TP
1469 .B coroutine wait_message()
1470 Waits for message to become available in channel
1471 or channel is closed (unsubscribed).
1472 .sp
1473 Main idea is to use it in loops:
1474 .sp
1475 .nf
1476 .ft C
1477 >>> ch = redis.channels[\(aqchannel:1\(aq]
1478 >>> while await ch.wait_message():
1479 \&... msg = await ch.get()
1480 .ft P
1481 .fi
1482 .INDENT 7.0
1483 .TP
1484 .B Return type
1485 \fI\%bool\fP
1486 .UNINDENT
1487 .UNINDENT
1488 .INDENT 7.0
1489 .TP
1490 .B coroutine async\-for iter(*, encoding=None, decoder=None)
1491 Same as \fI\%get()\fP method but it is a native coroutine.
1492 .sp
1493 Usage example:
1494 .INDENT 7.0
1495 .INDENT 3.5
1496 .sp
1497 .nf
1498 .ft C
1499 >>> async for msg in ch.iter():
1500 \&... print(msg)
1501 .ft P
1502 .fi
1503 .UNINDENT
1504 .UNINDENT
1505 .sp
1506 New in version 0.2.5: Available for Python 3.5 only
1507
1508 .UNINDENT
1509 .UNINDENT
1510
1511 .sp
1512 .ce
1513 ----
1514
1515 .ce 0
1516 .sp
17271517 .SS Commands Interface
17281518 .sp
17291519 The library provides high\-level API implementing simple interface
17391529
17401530 # Create Redis client bound to single non\-reconnecting connection.
17411531 async def single_connection():
1742 redis = await aioredis\&.create_redis(
1532 redis = await aioredis.create_redis(
17431533 \(aqredis://localhost\(aq)
1744 val = await redis\&.get(\(aqmy\-key\(aq)
1534 val = await redis.get(\(aqmy\-key\(aq)
17451535
17461536 # Create Redis client bound to connections pool.
17471537 async def pool_of_connections():
1748 redis = await aioredis\&.create_redis_pool(
1538 redis = await aioredis.create_redis_pool(
17491539 \(aqredis://localhost\(aq)
1750 val = await redis\&.get(\(aqmy\-key\(aq)
1540 val = await redis.get(\(aqmy\-key\(aq)
17511541
17521542 # we can also use pub/sub as underlying pool
17531543 # has several free connections:
1754 ch1, ch2 = await redis\&.subscribe(\(aqchan:1\(aq, \(aqchan:2\(aq)
1544 ch1, ch2 = await redis.subscribe(\(aqchan:1\(aq, \(aqchan:2\(aq)
17551545 # publish using free connection
1756 await redis\&.publish(\(aqchan:1\(aq, \(aqHello\(aq)
1757 await ch1\&.get()
1546 await redis.publish(\(aqchan:1\(aq, \(aqHello\(aq)
1547 await ch1.get()
17581548 .ft P
17591549 .fi
17601550 .UNINDENT
17641554 see commands mixins reference\&.
17651555 .INDENT 0.0
17661556 .TP
1767 .B coroutine aioredis.create_redis(address, *, db=0, password=None, ssl=None, encoding=None, commands_factory=Redis, parser=None, timeout=None, connection_cls=None, loop=None)
1557 .B coroutine aioredis.create_redis(address, *, db=0, password=None, ssl=None, encoding=None, commands_factory=Redis, parser=None, timeout=None, connection_cls=None)
17681558 This \fI\%coroutine\fP creates a high\-level Redis
17691559 interface instance bound to a single Redis connection
17701560 (without auto\-reconnect).
17711561 .sp
17721562 New in version v1.0: \fBparser\fP, \fBtimeout\fP and \fBconnection_cls\fP arguments added.
1563
1564 .sp
1565 Deprecated since version v1.3.1: \fBloop\fP argument deprecated for Python 3.8 compatibility.
17731566
17741567 .sp
17751568 See also \fI\%RedisConnection\fP for parameters description.
18061599 \fBconnection_cls\fP (\fIaioredis.abc.AbcConnection\fP) \-\- Can be used to instantiate custom
18071600 connection class. This argument \fBmust be\fP a subclass of
18081601 \fBAbcConnection\fP\&.
1809 .IP \(bu 2
1810 \fBloop\fP (\fI\%EventLoop\fP) \-\- An optional \fIevent loop\fP instance
1811 (uses \fI\%asyncio.get_event_loop()\fP if not specified).
18121602 .UNINDENT
18131603 .TP
18141604 .B Returns
18181608 .UNINDENT
18191609 .INDENT 0.0
18201610 .TP
1821 .B coroutine aioredis.create_redis_pool(address, *, db=0, password=None, ssl=None, encoding=None, commands_factory=Redis, minsize=1, maxsize=10, parser=None, timeout=None, pool_cls=None, connection_cls=None, loop=None)
1611 .B coroutine aioredis.create_redis_pool(address, *, db=0, password=None, ssl=None, encoding=None, commands_factory=Redis, minsize=1, maxsize=10, parser=None, timeout=None, pool_cls=None, connection_cls=None)
18221612 This \fI\%coroutine\fP creates a high\-level Redis client instance
18231613 bound to a connections pool (this allows auto\-reconnect and simple pub/sub
18241614 use).
18271617 .sp
18281618 Changed in version v1.0: \fBparser\fP, \fBtimeout\fP, \fBpool_cls\fP and \fBconnection_cls\fP
18291619 arguments added.
1620
1621 .sp
1622 Deprecated since version v1.3.1: \fBloop\fP argument deprecated for Python 3.8 compatibility.
18301623
18311624 .INDENT 7.0
18321625 .TP
18701663 \fBconnection_cls\fP (\fIaioredis.abc.AbcConnection\fP) \-\- Can be used to make pool instantiate custom
18711664 connection classes. This argument \fBmust be\fP a subclass of
18721665 \fBAbcConnection\fP\&.
1873 .IP \(bu 2
1874 \fBloop\fP (\fI\%EventLoop\fP) \-\- An optional \fIevent loop\fP instance
1875 (uses \fI\%asyncio.get_event_loop()\fP if not specified).
18761666 .UNINDENT
18771667 .TP
18781668 .B Returns
19011691 .UNINDENT
19021692 .INDENT 7.0
19031693 .TP
1904 .B address
1694 .B property address
19051695 Redis connection address (if applicable).
19061696 .UNINDENT
19071697 .INDENT 7.0
19181708 .UNINDENT
19191709 .INDENT 7.0
19201710 .TP
1921 .B closed
1711 .B property closed
19221712 True if connection is closed.
19231713 .UNINDENT
19241714 .INDENT 7.0
19251715 .TP
1926 .B connection
1716 .B property connection
19271717 Either \fBaioredis.RedisConnection\fP,
19281718 or \fBaioredis.ConnectionsPool\fP instance.
19291719 .UNINDENT
19301720 .INDENT 7.0
19311721 .TP
1932 .B db
1722 .B property db
19331723 Currently selected db index.
19341724 .UNINDENT
19351725 .INDENT 7.0
19391729 .UNINDENT
19401730 .INDENT 7.0
19411731 .TP
1942 .B encoding
1732 .B property encoding
19431733 Current set codec or None.
19441734 .UNINDENT
19451735 .INDENT 7.0
19461736 .TP
1947 .B in_transaction
1737 .B property in_transaction
19481738 Set to True when MULTI command was issued.
19491739 .UNINDENT
19501740 .INDENT 7.0
19621752 .INDENT 7.0
19631753 .TP
19641754 .B select(db)
1965 Change the selected database for the current connection.
1966 .sp
1967 This method wraps call to \fBaioredis.RedisConnection.select()\fP
1968 .UNINDENT
1969 .INDENT 7.0
1970 .TP
1971 .B coroutine wait_closed()
1972 Coroutine waiting until underlying connections are closed.
1755 Change the selected database.
19731756 .UNINDENT
19741757 .UNINDENT
19751758 .SS Generic commands
25322315 .UNINDENT
25332316 .INDENT 7.0
25342317 .TP
2535 .B mset(key, value, *pairs)
2536 Set multiple keys to multiple values.
2318 .B mset(*args)
2319 Set multiple keys to multiple values, or unpack a dict into keys and values.
25372320 .INDENT 7.0
25382321 .TP
25392322 .B Raises
2540 \fI\%TypeError\fP \-\- if len of pairs is not event number
2323 .INDENT 7.0
2324 .IP \(bu 2
2325 \fI\%TypeError\fP \-\- if len of args is not an even number
2326 .IP \(bu 2
2327 \fI\%TypeError\fP \-\- if len of args equals 1 and it is not a dict
2328 .UNINDENT
25412329 .UNINDENT
25422330 .UNINDENT
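.sp
A short usage sketch covering both calling conventions (key and value names
are placeholders; the single\-dict form follows the description above):
.INDENT 7.0
.INDENT 3.5
.sp
.nf
.ft C
# flat key/value arguments
await redis.mset(\(aqk1\(aq, \(aqv1\(aq, \(aqk2\(aq, \(aqv2\(aq)

# a single dict argument
await redis.mset({\(aqk1\(aq: \(aqv1\(aq, \(aqk2\(aq: \(aqv2\(aq})
.ft P
.fi
.UNINDENT
.UNINDENT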
25432331 .INDENT 7.0
29982786 .UNINDENT
29992787 .INDENT 7.0
30002788 .TP
3001 .B spop(key, *, encoding=<object object>)
3002 Remove and return a random member from a set.
2789 .B spop(key, count=None, *, encoding=<object object>)
2790 Remove and return one or multiple random members from a set.
30032791 .UNINDENT
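.sp
A small sketch of the new \fBcount\fP argument (set name and count are
placeholders; with \fBcount\fP the reply is assumed to be a list of members):
.INDENT 7.0
.INDENT 3.5
.sp
.nf
.ft C
one = await redis.spop(\(aqtags\(aq)        # single random member
some = await redis.spop(\(aqtags\(aq, 3)    # up to three random members
.ft P
.fi
.UNINDENT
.UNINDENT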
30042792 .INDENT 7.0
30052793 .TP
30362824 For commands details see: \fI\%http://redis.io/commands/#sorted_set\fP
30372825 .INDENT 7.0
30382826 .TP
2827 .B bzpopmax(key, *keys, timeout=0, encoding=<object object>)
2828 Remove and get an element with the highest score in the sorted set,
2829 or block until one is available.
2830 .INDENT 7.0
2831 .TP
2832 .B Raises
2833 .INDENT 7.0
2834 .IP \(bu 2
2835 \fI\%TypeError\fP \-\- if timeout is not int
2836 .IP \(bu 2
2837 \fI\%ValueError\fP \-\- if timeout is less than 0
2838 .UNINDENT
2839 .UNINDENT
2840 .UNINDENT
2841 .INDENT 7.0
2842 .TP
2843 .B bzpopmin(key, *keys, timeout=0, encoding=<object object>)
2844 Remove and get an element with the lowest score in the sorted set,
2845 or block until one is available.
2846 .INDENT 7.0
2847 .TP
2848 .B Raises
2849 .INDENT 7.0
2850 .IP \(bu 2
2851 \fI\%TypeError\fP \-\- if timeout is not int
2852 .IP \(bu 2
2853 \fI\%ValueError\fP \-\- if timeout is less than 0
2854 .UNINDENT
2855 .UNINDENT
2856 .UNINDENT
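.sp
A blocking\-pop sketch (key name and timeout are illustrative; the reply
layout is assumed to follow the Redis BZPOPMIN reply of key, member, score,
with \fBNone\fP on timeout):
.INDENT 7.0
.INDENT 3.5
.sp
.nf
.ft C
# wait up to 5 seconds for the lowest\-scored member of \(aqjobs\(aq
reply = await redis.bzpopmin(\(aqjobs\(aq, timeout=5)
if reply is not None:
    key, member, score = reply
.ft P
.fi
.UNINDENT
.UNINDENT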
2857 .INDENT 7.0
2858 .TP
30392859 .B izscan(key, *, match=None, count=None)
30402860 Incrementally iterate sorted set items using async for.
30412861 .sp
30502870 .UNINDENT
30512871 .INDENT 7.0
30522872 .TP
3053 .B zadd(key, score, member, *pairs, exist=None)
2873 .B zadd(key, score, member, *pairs, exist=None, changed=False, incr=False)
30542874 Add one or more members to a sorted set or update its score.
30552875 .INDENT 7.0
30562876 .TP
31232943 .UNINDENT
31242944 .INDENT 7.0
31252945 .TP
2946 .B zpopmax(key, count=None, *, encoding=<object object>)
2947 Removes and returns up to count members with the highest scores
2948 in the sorted set stored at key.
2949 .INDENT 7.0
2950 .TP
2951 .B Raises
2952 \fI\%TypeError\fP \-\- if count is not int
2953 .UNINDENT
2954 .UNINDENT
2955 .INDENT 7.0
2956 .TP
2957 .B zpopmin(key, count=None, *, encoding=<object object>)
2958 Removes and returns up to count members with the lowest scores
2959 in the sorted set stored at key.
2960 .INDENT 7.0
2961 .TP
2962 .B Raises
2963 \fI\%TypeError\fP \-\- if count is not int
2964 .UNINDENT
2965 .UNINDENT
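.sp
A sketch tying the new sorted\-set additions together (key, members and
scores are placeholders; the \fBincr=True\fP return value is assumed to be
the member\(aqs new score, per ZADD INCR):
.INDENT 7.0
.INDENT 3.5
.sp
.nf
.ft C
await redis.zadd(\(aqboard\(aq, 1, \(aqalice\(aq, 2, \(aqbob\(aq)

# INCR mode for a single member
new_score = await redis.zadd(\(aqboard\(aq, 5, \(aqalice\(aq, incr=True)

# pop the single highest\- and lowest\-scored entries
top = await redis.zpopmax(\(aqboard\(aq)
bottom = await redis.zpopmin(\(aqboard\(aq)
.ft P
.fi
.UNINDENT
.UNINDENT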
2966 .INDENT 7.0
2967 .TP
31262968 .B zrange(key, start=0, stop=\-1, withscores=False, encoding=<object object>)
31272969 Return a range of members in a sorted set, by index.
31282970 .INDENT 7.0
35013343 .UNINDENT
35023344 .INDENT 7.0
35033345 .TP
3504 .B slaveof(host=<object object>, port=None)
3346 .B slaveof(host, port=None)
35053347 Make the server a slave of another instance,
35063348 or promote it as master.
35073349 .sp
36483490 .UNINDENT
36493491 .INDENT 0.0
36503492 .TP
3651 .B class aioredis.commands.Pipeline(connection, commands_factory=lambda conn: conn, *, loop=None)
3493 .B class aioredis.commands.Pipeline(connection, commands_factory=lambda conn: conn)
36523494 Commands pipeline.
36533495 .sp
36543496 Buffers commands for execution in bulk.
36553497 .sp
36563498 This class implements the \fI__getattr__\fP method, allowing methods to be called
36573499 on the instance created with \fBcommands_factory\fP\&.
3500 .sp
3501 Deprecated since version v1.3.1: \fBloop\fP argument deprecated for Python 3.8 compatibility.
3502
36583503 .INDENT 7.0
36593504 .TP
36603505 .B Parameters
36633508 \fBconnection\fP (\fIaioredis.RedisConnection\fP) \-\- Redis connection
36643509 .IP \(bu 2
36653510 \fBcommands_factory\fP (\fIcallable\fP) \-\- Commands factory to get methods from.
3666 .IP \(bu 2
3667 \fBloop\fP (\fI\%EventLoop\fP) \-\- An optional \fIevent loop\fP instance
3668 (uses \fI\%asyncio.get_event_loop()\fP if not specified).
36693511 .UNINDENT
36703512 .UNINDENT
36713513 .INDENT 7.0
36923534 .UNINDENT
36933535 .INDENT 0.0
36943536 .TP
3695 .B class aioredis.commands.MultiExec(connection, commands_factory=lambda conn: conn, *, loop=None)
3537 .B class aioredis.commands.MultiExec(connection, commands_factory=lambda conn: conn)
36963538 Bases: \fI\%Pipeline\fP\&.
36973539 .sp
36983540 Multi/Exec pipeline wrapper.
36993541 .sp
37003542 See \fI\%Pipeline\fP for parameters description.
3543 .sp
3544 Deprecated since version v1.3.1: \fBloop\fP argument deprecated for Python 3.8 compatibility.
3545
37013546 .INDENT 7.0
37023547 .TP
37033548 .B coroutine execute(*, return_exceptions=False)
39563801 .UNINDENT
39573802 .INDENT 7.0
39583803 .TP
3959 .B slaveof(host=<object object>, port=None)
3804 .B slaveof(host, port=None)
39603805 Make the server a slave of another instance,
39613806 or promote it as master.
39623807 .sp
40033848 For commands details see: \fI\%http://redis.io/commands/#pubsub\fP
40043849 .INDENT 7.0
40053850 .TP
4006 .B channels
3851 .B property channels
40073852 Returns read\-only channels dict.
40083853 .sp
40093854 See \fBpubsub_channels\fP
40103855 .UNINDENT
40113856 .INDENT 7.0
40123857 .TP
4013 .B in_pubsub
3858 .B property in_pubsub
40143859 Indicates that connection is in PUB/SUB mode.
40153860 .sp
40163861 Provides the number of subscribed channels.
40173862 .UNINDENT
40183863 .INDENT 7.0
40193864 .TP
4020 .B patterns
3865 .B property patterns
40213866 Returns read\-only patterns dict.
40223867 .sp
40233868 See \fBpubsub_patterns\fP
40903935 \fBWARNING:\fP
40913936 .INDENT 0.0
40923937 .INDENT 3.5
4093 Current release (1.2.0) of the library \fBdoes not support\fP
3938 Current release (1.3.1) of the library \fBdoes not support\fP
40943939 \fI\%Redis Cluster\fP in a full manner.
40953940 It provides only several API methods which may be changed in future.
40963941 .UNINDENT
41013946 .B class aioredis.commands.StreamCommandsMixin
41023947 Stream commands mixin
41033948 .sp
4104 Streams are under development in Redis and
4105 not currently released.
3949 Streams are available in Redis since v5.0
41063950 .INDENT 7.0
41073951 .TP
41083952 .B xack(stream, group_name, id, *ids)
41203964 .UNINDENT
41213965 .INDENT 7.0
41223966 .TP
4123 .B xgroup_create(stream, group_name, latest_id=\(aq$\(aq)
3967 .B xdel(stream, id)
3968 Removes the specified entries(IDs) from a stream
3969 .UNINDENT
3970 .INDENT 7.0
3971 .TP
3972 .B xgroup_create(stream, group_name, latest_id=\(aq$\(aq, mkstream=False)
41243973 Create a consumer group
41253974 .UNINDENT
41263975 .INDENT 7.0
41644013 .TP
41654014 .B xinfo_stream(stream)
41664015 Retrieve information about the given stream.
4016 .UNINDENT
4017 .INDENT 7.0
4018 .TP
4019 .B xlen(stream)
4020 Returns the number of entries inside a stream
41674021 .UNINDENT
41684022 .INDENT 7.0
41694023 .TP
41984052 .UNINDENT
41994053 .INDENT 7.0
42004054 .TP
4201 .B xread_group(group_name, consumer_name, streams, timeout=0, count=None, latest_ids=None)
4055 .B xread_group(group_name, consumer_name, streams, timeout=0, count=None, latest_ids=None, no_ack=False)
42024056 Perform a blocking read on the given stream as part of a consumer group
42034057 .INDENT 7.0
42044058 .TP
42124066 .B xrevrange(stream, start=\(aq+\(aq, stop=\(aq\-\(aq, count=None)
42134067 Retrieve messages from a stream in reverse order.
42144068 .UNINDENT
4069 .INDENT 7.0
4070 .TP
4071 .B xtrim(stream, max_len, exact_len=False)
4072 Trims the stream to a given number of items, evicting older items.
4073 .UNINDENT
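.sp
A brief stream sketch tying the new methods together (stream, group and field
names are placeholders; the \fBxadd\fP call is assumed to take a stream name
and a dict of fields, as elsewhere in this mixin):
.INDENT 7.0
.INDENT 3.5
.sp
.nf
.ft C
await redis.xadd(\(aqevents\(aq, {\(aqtype\(aq: \(aqclick\(aq})
assert await redis.xlen(\(aqevents\(aq) >= 1

# create a consumer group, creating the stream if needed
await redis.xgroup_create(\(aqevents\(aq, \(aqworkers\(aq, mkstream=True)

# keep only the most recent 1000 entries
await redis.xtrim(\(aqevents\(aq, 1000)
.ft P
.fi
.UNINDENT
.UNINDENT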
42154074 .UNINDENT
42164075 .SH AIOREDIS.ABC --- INTERFACES REFERENCE
42174076 .sp
42254084 Abstract connection interface.
42264085 .INDENT 7.0
42274086 .TP
4228 .B address
4087 .B abstract property address
42294088 Connection address.
42304089 .UNINDENT
42314090 .INDENT 7.0
42324091 .TP
4233 .B close()
4092 .B abstract close()
42344093 Perform connection(s) close and resources cleanup.
42354094 .UNINDENT
42364095 .INDENT 7.0
42374096 .TP
4238 .B closed
4097 .B abstract property closed
42394098 Flag indicating if connection is closing or already closed.
42404099 .UNINDENT
42414100 .INDENT 7.0
42424101 .TP
4243 .B db
4102 .B abstract property db
42444103 Current selected DB index.
42454104 .UNINDENT
42464105 .INDENT 7.0
42474106 .TP
4248 .B encoding
4107 .B abstract property encoding
42494108 Current set connection codec.
42504109 .UNINDENT
42514110 .INDENT 7.0
42524111 .TP
4253 .B execute(command, *args, **kwargs)
4112 .B abstract execute(command, *args, **kwargs)
42544113 Execute redis command.
42554114 .UNINDENT
42564115 .INDENT 7.0
42574116 .TP
4258 .B execute_pubsub(command, *args, **kwargs)
4117 .B abstract execute_pubsub(command, *args, **kwargs)
42594118 Execute Redis (p)subscribe/(p)unsubscribe commands.
42604119 .UNINDENT
42614120 .INDENT 7.0
42624121 .TP
4263 .B in_pubsub
4122 .B abstract property in_pubsub
42644123 Returns number of subscribed channels.
42654124 .sp
42664125 Can be tested as bool indicating Pub/Sub mode state.
42674126 .UNINDENT
42684127 .INDENT 7.0
42694128 .TP
4270 .B pubsub_channels
4129 .B abstract property pubsub_channels
42714130 Read\-only channels dict.
42724131 .UNINDENT
42734132 .INDENT 7.0
42744133 .TP
4275 .B pubsub_patterns
4134 .B abstract property pubsub_patterns
42764135 Read\-only patterns dict.
4277 .UNINDENT
4278 .INDENT 7.0
4279 .TP
4280 .B coroutine wait_closed()
4281 Coroutine waiting until all resources are closed/released/cleaned up.
42824136 .UNINDENT
42834137 .UNINDENT
42844138 .INDENT 0.0
42924146 for executing Redis commands.
42934147 .INDENT 7.0
42944148 .TP
4295 .B coroutine acquire()
4296 Acquires connection from pool.
4297 .UNINDENT
4298 .INDENT 7.0
4299 .TP
4300 .B address
4149 .B abstract property address
43014150 Connection address or None.
43024151 .UNINDENT
43034152 .INDENT 7.0
43044153 .TP
4305 .B get_connection()
4154 .B abstract get_connection(command, args=())
43064155 Gets free connection from pool in a sync way.
43074156 .sp
43084157 If no connection is available, returns None.
43094158 .UNINDENT
43104159 .INDENT 7.0
43114160 .TP
4312 .B release(conn)
4161 .B abstract release(conn)
43134162 Releases connection to pool.
43144163 .INDENT 7.0
43154164 .TP
43264175 Abstract Pub/Sub Channel interface.
43274176 .INDENT 7.0
43284177 .TP
4329 .B close(exc=None)
4178 .B abstract close(exc=None)
43304179 Marks the Channel as closed; no more messages will be sent to it.
43314180 .sp
43324181 Called by RedisConnection when channel is unsubscribed
43344183 .UNINDENT
43354184 .INDENT 7.0
43364185 .TP
4337 .B coroutine get()
4338 Wait and return new message.
4339 .sp
4340 Will raise \fBChannelClosedError\fP if channel is not active.
4341 .UNINDENT
4342 .INDENT 7.0
4343 .TP
4344 .B is_active
4186 .B abstract property is_active
43454187 Flag indicating that the channel has unreceived messages
43464188 and is not marked as closed.
43474189 .UNINDENT
43484190 .INDENT 7.0
43494191 .TP
4350 .B is_pattern
4192 .B abstract property is_pattern
43514193 Boolean flag indicating if channel is pattern channel.
43524194 .UNINDENT
43534195 .INDENT 7.0
43544196 .TP
4355 .B name
4197 .B abstract property name
43564198 Encoded channel name or pattern.
43574199 .UNINDENT
43584200 .INDENT 7.0
43594201 .TP
4360 .B put_nowait(data)
4202 .B abstract put_nowait(data)
43614203 Send data to channel.
43624204 .sp
43634205 Called by RedisConnection when new message received.
43854227 .ft C
43864228 >>> from aioredis.pubsub import Receiver
43874229 >>> from aioredis.abc import AbcChannel
4388 >>> mpsc = Receiver(loop=loop)
4230 >>> mpsc = Receiver()
43894231 >>> async def reader(mpsc):
43904232 \&... async for channel, msg in mpsc.iter():
43914233 \&... assert isinstance(channel, AbcChannel)
44354277 .UNINDENT
44364278 .INDENT 7.0
44374279 .TP
4438 .B channels
4280 .B property channels
44394281 Read\-only channels dict.
44404282 .UNINDENT
44414283 .INDENT 7.0
44454287 .UNINDENT
44464288 .INDENT 7.0
44474289 .TP
4448 .B coroutine get(*, encoding=None, decoder=None)
4449 Wait for and return pub/sub message from one of channels.
4450 .sp
4451 Return value is either:
4452 .INDENT 7.0
4453 .IP \(bu 2
4454 tuple of two elements: channel & message;
4455 .IP \(bu 2
4456 tuple of three elements: pattern channel, (target channel & message);
4457 .IP \(bu 2
4458 or None in case Receiver is not active or has just been stopped.
4459 .UNINDENT
4460 .INDENT 7.0
4461 .TP
4462 .B Raises
4463 \fBaioredis.ChannelClosedError\fP \-\- If listener is stopped
4464 and all messages have been received.
4465 .UNINDENT
4466 .UNINDENT
4467 .INDENT 7.0
4468 .TP
4469 .B is_active
4290 .B property is_active
44704291 Returns True if listener has any active subscription.
44714292 .UNINDENT
44724293 .INDENT 7.0
44934314 .UNINDENT
44944315 .INDENT 7.0
44954316 .TP
4496 .B patterns
4317 .B property patterns
44974318 Read\-only patterns dict.
44984319 .UNINDENT
44994320 .INDENT 7.0
45044325 All new messages after this call will be ignored,
45054326 so you must call unsubscribe before stopping this listener.
45064327 .UNINDENT
4507 .INDENT 7.0
4508 .TP
4509 .B coroutine wait_message()
4510 Blocks until new message appear.
4511 .UNINDENT
45124328 .UNINDENT
45134329 .INDENT 0.0
45144330 .TP
45344350 .ft C
45354351 import aioredis
45364352
4537 sentinel = await aioredis\&.create_sentinel(
4353 sentinel = await aioredis.create_sentinel(
45384354 [(\(aqsentinel.host1\(aq, 26379), (\(aqsentinel.host2\(aq, 26379)])
45394355
4540 redis = sentinel\&.master_for(\(aqmymaster\(aq)
4541 assert await redis\&.set(\(aqkey\(aq, \(aqvalue\(aq)
4542 assert await redis\&.get(\(aqkey\(aq, encoding=\(aqutf\-8\(aq) == \(aqvalue\(aq
4356 redis = sentinel.master_for(\(aqmymaster\(aq)
4357 assert await redis.set(\(aqkey\(aq, \(aqvalue\(aq)
4358 assert await redis.get(\(aqkey\(aq, encoding=\(aqutf\-8\(aq) == \(aqvalue\(aq
45434359
45444360 # redis client will reconnect/reconfigure automatically
45454361 # by sentinel client instance
45504366 .SS \fBRedisSentinel\fP
45514367 .INDENT 0.0
45524368 .TP
4553 .B coroutine aioredis.sentinel.create_sentinel(sentinels, *, db=None, password=None, encoding=None, minsize=1, maxsize=10, ssl=None, parser=None, loop=None)
4369 .B coroutine aioredis.sentinel.create_sentinel(sentinels, *, db=None, password=None, encoding=None, minsize=1, maxsize=10, ssl=None, parser=None)
45544370 Creates Redis Sentinel client.
4371 .sp
4372 Deprecated since version v1.3.1: \fBloop\fP argument deprecated for Python 3.8 compatibility.
4373
45554374 .INDENT 7.0
45564375 .TP
45574376 .B Parameters
45784397 .IP \(bu 2
45794398 \fBparser\fP (\fIcallable\fP\fI or \fP\fI\%None\fP) \-\- Protocol parser class. Can be used to set custom protocol
45804399 reader; expected same interface as \fBhiredis.Reader\fP\&.
4581 .IP \(bu 2
4582 \fBloop\fP (\fI\%EventLoop\fP) \-\- An optional \fIevent loop\fP instance
4583 (uses \fI\%asyncio.get_event_loop()\fP if not specified).
45844400 .UNINDENT
45854401 .TP
45864402 .B Return type
48964712 (see for more).
48974713 .sp
48984714 Every example is a correct python program that can be executed.
4899 .SS Low\-level connection usage example
4900 .sp
4901 \fBget source code\fP
4902 .INDENT 0.0
4903 .INDENT 3.5
4904 .sp
4905 .nf
4906 .ft C
4907 import asyncio
4908 import aioredis
4909
4910
4911 async def main():
4912 conn = await aioredis.create_connection(
4913 \(aqredis://localhost\(aq, encoding=\(aqutf\-8\(aq)
4914
4915 ok = await conn.execute(\(aqset\(aq, \(aqmy\-key\(aq, \(aqsome value\(aq)
4916 assert ok == \(aqOK\(aq, ok
4917
4918 str_value = await conn.execute(\(aqget\(aq, \(aqmy\-key\(aq)
4919 raw_value = await conn.execute(\(aqget\(aq, \(aqmy\-key\(aq, encoding=None)
4920 assert str_value == \(aqsome value\(aq
4921 assert raw_value == b\(aqsome value\(aq
4922
4923 print(\(aqstr value:\(aq, str_value)
4924 print(\(aqraw value:\(aq, raw_value)
4925
4926 # optionally close connection
4927 conn.close()
4928 await conn.wait_closed()
4929
4930
4931 if __name__ == \(aq__main__\(aq:
4932 asyncio.get_event_loop().run_until_complete(main())
4933
4934 .ft P
4935 .fi
4936 .UNINDENT
4937 .UNINDENT
4938 .SS Connections pool example
4939 .sp
4940 \fBget source code\fP
4941 .INDENT 0.0
4942 .INDENT 3.5
4943 .sp
4944 .nf
4945 .ft C
4946 import asyncio
4947 import aioredis
4948
4949
4950 async def main():
4951 pool = await aioredis.create_pool(
4952 \(aqredis://localhost\(aq,
4953 minsize=5, maxsize=10)
4954 with await pool as conn: # low\-level redis connection
4955 await conn.execute(\(aqset\(aq, \(aqmy\-key\(aq, \(aqvalue\(aq)
4956 val = await conn.execute(\(aqget\(aq, \(aqmy\-key\(aq)
4957 print(\(aqraw value:\(aq, val)
4958 pool.close()
4959 await pool.wait_closed() # closing all open connections
4960
4961
4962 if __name__ == \(aq__main__\(aq:
4963 asyncio.get_event_loop().run_until_complete(main())
4964
4965 .ft P
4966 .fi
4967 .UNINDENT
4968 .UNINDENT
49694715 .SS Commands example
49704716 .sp
49714717 \fBget source code\fP
50054751
50064752
50074753 if __name__ == \(aq__main__\(aq:
5008 asyncio.get_event_loop().run_until_complete(main())
5009 asyncio.get_event_loop().run_until_complete(redis_pool())
4754 asyncio.run(main())
4755 asyncio.run(redis_pool())
50104756
50114757 .ft P
50124758 .fi
50414787
50424788
50434789 if __name__ == \(aq__main__\(aq:
5044 asyncio.get_event_loop().run_until_complete(main())
4790 asyncio.run(main())
50454791
50464792 .ft P
50474793 .fi
50854831
50864832
50874833 if __name__ == \(aq__main__\(aq:
5088 asyncio.get_event_loop().run_until_complete(main())
4834 asyncio.run(main())
50894835
50904836 .ft P
50914837 .fi
51214867 if __name__ == \(aq__main__\(aq:
51224868 import os
51234869 if \(aqredis_version:2.6\(aq not in os.environ.get(\(aqREDIS_VERSION\(aq, \(aq\(aq):
5124 asyncio.get_event_loop().run_until_complete(main())
4870 asyncio.run(main())
51254871
51264872 .ft P
51274873 .fi
51534899
51544900
51554901 if __name__ == \(aq__main__\(aq:
5156 asyncio.get_event_loop().run_until_complete(main())
4902 asyncio.run(main())
4903
4904 .ft P
4905 .fi
4906 .UNINDENT
4907 .UNINDENT
4908 .SS Low\-level connection usage example
4909 .sp
4910 \fBget source code\fP
4911 .INDENT 0.0
4912 .INDENT 3.5
4913 .sp
4914 .nf
4915 .ft C
4916 import asyncio
4917 import aioredis
4918
4919
4920 async def main():
4921 conn = await aioredis.create_connection(
4922 \(aqredis://localhost\(aq, encoding=\(aqutf\-8\(aq)
4923
4924 ok = await conn.execute(\(aqset\(aq, \(aqmy\-key\(aq, \(aqsome value\(aq)
4925 assert ok == \(aqOK\(aq, ok
4926
4927 str_value = await conn.execute(\(aqget\(aq, \(aqmy\-key\(aq)
4928 raw_value = await conn.execute(\(aqget\(aq, \(aqmy\-key\(aq, encoding=None)
4929 assert str_value == \(aqsome value\(aq
4930 assert raw_value == b\(aqsome value\(aq
4931
4932 print(\(aqstr value:\(aq, str_value)
4933 print(\(aqraw value:\(aq, raw_value)
4934
4935 # optionally close connection
4936 conn.close()
4937 await conn.wait_closed()
4938
4939
4940 if __name__ == \(aq__main__\(aq:
4941 asyncio.run(main())
4942
4943 .ft P
4944 .fi
4945 .UNINDENT
4946 .UNINDENT
4947 .SS Connections pool example
4948 .sp
4949 \fBget source code\fP
4950 .INDENT 0.0
4951 .INDENT 3.5
4952 .sp
4953 .nf
4954 .ft C
4955 import asyncio
4956 import aioredis
4957
4958
4959 async def main():
4960 pool = await aioredis.create_pool(
4961 \(aqredis://localhost\(aq,
4962 minsize=5, maxsize=10)
4963 with await pool as conn: # low\-level redis connection
4964 await conn.execute(\(aqset\(aq, \(aqmy\-key\(aq, \(aqvalue\(aq)
4965 val = await conn.execute(\(aqget\(aq, \(aqmy\-key\(aq)
4966 print(\(aqraw value:\(aq, val)
4967 pool.close()
4968 await pool.wait_closed() # closing all open connections
4969
4970
4971 if __name__ == \(aq__main__\(aq:
4972 asyncio.run(main())
51574973
51584974 .ft P
51594975 .fi
51985014 \fBflake8\fP for code linting;
51995015 .IP \(bu 2
52005016 and few other packages.
5017 .UNINDENT
5018 .sp
5019 Make sure you have provided a \fBtowncrier\fP note.
5020 Just add a short description by running the following command:
5021 .INDENT 0.0
5022 .INDENT 3.5
5023 .sp
5024 .nf
5025 .ft C
5026 $ echo "Short description" > CHANGES/filename.type
5027 .ft P
5028 .fi
5029 .UNINDENT
5030 .UNINDENT
5031 .sp
5032 This will create a new file in the \fBCHANGES\fP directory.
5033 The filename should consist of the ticket ID or another unique identifier.
5034 The five default types are:
5035 .INDENT 0.0
5036 .IP \(bu 2
5037 \&.feature \- signifying new feature
5038 .IP \(bu 2
5039 \&.bugfix \- signifying a bug fix
5040 .IP \(bu 2
5041 \&.doc \- documentation improvement
5042 .IP \(bu 2
5043 \&.removal \- deprecation or removal of public API
5044 .IP \(bu 2
5045 \&.misc \- a ticket has been closed, but not in interest of users
5046 .UNINDENT
5047 .sp
5048 You can check if everything is correct by typing:
5049 .INDENT 0.0
5050 .INDENT 3.5
5051 .sp
5052 .nf
5053 .ft C
5054 $ towncrier \-\-draft
5055 .ft P
5056 .fi
5057 .UNINDENT
5058 .UNINDENT
5059 .sp
5060 To produce the news file:
5061 .INDENT 0.0
5062 .INDENT 3.5
5063 .sp
5064 .nf
5065 .ft C
5066 $ towncrier
5067 .ft P
5068 .fi
5069 .UNINDENT
52015070 .UNINDENT
52025071 .SS Code style
52035072 .sp
52255094 # will run tests in a verbose mode
52265095 $ make test
52275096 # or
5228 $ py.test
5097 $ pytest
5098
5099 # or with particular Redis server
5100 $ pytest \-\-redis\-server=/usr/local/bin/redis\-server tests/errors_test.py
52295101
52305102 # will run tests with coverage report
52315103 $ make cov
52325104 # or
5233 $ py.test \-\-cov
5105 $ pytest \-\-cov
52345106 .ft P
52355107 .fi
52365108 .UNINDENT
52665138 .sp
52675139 .nf
52685140 .ft C
5269 $ py.test \-\-redis\-server=/path/to/custom/redis\-server
5141 $ pytest \-\-redis\-server=/path/to/custom/redis\-server
52705142 .ft P
52715143 .fi
52725144 .UNINDENT
52805152 .nf
52815153 .ft C
52825154 $ pip install uvloop
5283 $ py.test \-\-uvloop
5155 $ pytest \-\-uvloop
52845156 .ft P
52855157 .fi
52865158 .UNINDENT
52985170 \fBaioredis\fP uses pytest tool.
52995171 .sp
53005172 Tests are located under \fB/tests\fP directory.
5301 .sp
5302 Pure Python 3.5 tests (ie the ones using \fBasync\fP/\fBawait\fP syntax) must be
5303 prefixed with \fBpy35_\fP, for instance see:
5304 .INDENT 0.0
5305 .INDENT 3.5
5306 .sp
5307 .nf
5308 .ft C
5309 tests/py35_generic_commands_tests.py
5310 tests/py35_pool_test.py
5311 .ft P
5312 .fi
5313 .UNINDENT
5314 .UNINDENT
53155173 .SS Fixtures
53165174 .sp
53175175 There is a number of fixtures that can be used to write tests:
54395297 \fI\%tuple\fP
54405298 .UNINDENT
54415299 .UNINDENT
5442 .SS Helpers
5443 .sp
5444 \fBaioredis\fP also updates pytest\(aqs namespace with several helpers.
5445 .INDENT 0.0
5446 .TP
5447 .B pytest.redis_version(*version, reason)
5300 .SS \fBredis_version\fP tests helper
5301 .sp
5302 In the \fBtests\fP directory there is a \fB_testutils\fP module with a simple
5303 helper \-\-\- \fBredis_version()\fP \-\-\- a function that adds a pytest mark to a test,
5304 allowing it to run only with the requested Redis server versions.
5305 .INDENT 0.0
5306 .TP
5307 .B _testutils.redis_version(*version, reason)
54485308 Marks test with minimum redis version to run.
54495309 .sp
54505310 Example:
54535313 .sp
54545314 .nf
54555315 .ft C
5456 @pytest.redis_version(3, 2, 0, reason="HSTRLEN new in redis 3.2.0")
5316 from _testutils import redis_version
5317
5318 @redis_version(3, 2, 0, reason="HSTRLEN new in redis 3.2.0")
54575319 def test_hstrlen(redis):
54585320 pass
54595321 .ft P
54615323 .UNINDENT
54625324 .UNINDENT
54635325 .UNINDENT
5464 .INDENT 0.0
5465 .TP
5466 .B pytest.logs(logger, level=None)
5467 Adopted version of \fI\%unittest.TestCase.assertEqual()\fP,
5468 see it for details.
5469 .sp
5470 Example:
5471 .INDENT 7.0
5472 .INDENT 3.5
5473 .sp
5474 .nf
5475 .ft C
5476 def test_logs(create_connection, server):
5477 with pytest.logs(\(aqaioredis\(aq, \(aqDEBUG\(aq) as cm:
5478 conn yield from create_connection(server.tcp_address)
5479 assert cm.output[0].startswith(
5480 \(aqDEBUG:aioredis:Creating tcp connection\(aq)
5481 .ft P
5482 .fi
5483 .UNINDENT
5484 .UNINDENT
5485 .UNINDENT
5486 .INDENT 0.0
5487 .TP
5488 .B pytest.assert_almost_equal(first, second, places=None, msg=None, delta=None)
5489 Adopted version of \fI\%unittest.TestCase.assertAlmostEqual()\fP\&.
5490 .UNINDENT
5491 .INDENT 0.0
5492 .TP
5493 .B pytest.raises_regex(exc_type, message)
5494 Adopted version of \fI\%unittest.TestCase.assertRaisesRegex()\fP\&.
5495 .UNINDENT
5326 .SH MIGRATING FROM V0.3 TO V1.0
5327 .SS API changes and backward incompatible changes:
5328 .INDENT 0.0
5329 .IP \(bu 2
5330 \fI\%aioredis.create_pool\fP
5331 .IP \(bu 2
5332 \fI\%aioredis.create_reconnecting_redis\fP
5333 .IP \(bu 2
5334 \fI\%aioredis.Redis\fP
5335 .IP \(bu 2
5336 \fI\%Blocking operations and connection sharing\fP
5337 .IP \(bu 2
5338 \fI\%Sorted set commands return values\fP
5339 .IP \(bu 2
5340 \fI\%Hash hscan command now returns list of tuples\fP
5341 .UNINDENT
5342
5343 .sp
5344 .ce
5345 ----
5346
5347 .ce 0
5348 .sp
5349 .SS aioredis.create_pool
5350 .sp
5351 \fBcreate_pool()\fP now returns \fBConnectionsPool\fP
5352 instead of \fBRedisPool\fP\&.
5353 .sp
5354 This means that pool now operates with \fBRedisConnection\fP
5355 objects and not \fBRedis\fP\&.
5356 .TS
5357 center;
5358 |l|l|.
5359 _
5360 T{
5361 v0.3
5362 T} T{
5363 .INDENT 0.0
5364 .INDENT 3.5
5365 .sp
5366 .nf
5367 .ft C
5368 pool = await aioredis.create_pool((\(aqlocalhost\(aq, 6379))
5369
5370 with await pool as redis:
5371 # calling methods of Redis class
5372 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
5373 .ft P
5374 .fi
5375 .UNINDENT
5376 .UNINDENT
5377 T}
5378 _
5379 T{
5380 v1.0
5381 T} T{
5382 .INDENT 0.0
5383 .INDENT 3.5
5384 .sp
5385 .nf
5386 .ft C
5387 pool = await aioredis.create_pool((\(aqlocalhost\(aq, 6379))
5388
5389 with await pool as conn:
5390 # calling conn.lpush will raise AttributeError exception
5391 await conn.execute(\(aqlpush\(aq, \(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
5392 .ft P
5393 .fi
5394 .UNINDENT
5395 .UNINDENT
5396 T}
5397 _
5398 .TE
5399 .SS aioredis.create_reconnecting_redis
5400 .sp
5401 \fBcreate_reconnecting_redis()\fP has been dropped.
5402 .sp
5403 \fBcreate_redis_pool()\fP can be used instead of the former function.
5404 .TS
5405 center;
5406 |l|l|.
5407 _
5408 T{
5409 v0.3
5410 T} T{
5411 .INDENT 0.0
5412 .INDENT 3.5
5413 .sp
5414 .nf
5415 .ft C
5416 redis = await aioredis.create_reconnecting_redis(
5417 (\(aqlocalhost\(aq, 6379))
5418
5419 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
5420 .ft P
5421 .fi
5422 .UNINDENT
5423 .UNINDENT
5424 T}
5425 _
5426 T{
5427 v1.0
5428 T} T{
5429 .INDENT 0.0
5430 .INDENT 3.5
5431 .sp
5432 .nf
5433 .ft C
5434 redis = await aioredis.create_redis_pool(
5435 (\(aqlocalhost\(aq, 6379))
5436
5437 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
5438 .ft P
5439 .fi
5440 .UNINDENT
5441 .UNINDENT
5442 T}
5443 _
5444 .TE
5445 .sp
5446 \fBcreate_redis_pool\fP returns \fBRedis\fP initialized with
5447 \fBConnectionsPool\fP which is responsible for reconnecting to server.
5448 .sp
5449 Also, \fBcreate_reconnecting_redis\fP was patching the \fBRedisConnection\fP and
5450 breaking its \fBclosed\fP property (it was always \fBTrue\fP).
5451 .SS aioredis.Redis
5452 .sp
5453 \fBRedis\fP class now operates with objects implementing
5454 \fBaioredis.abc.AbcConnection\fP interface.
5455 \fBRedisConnection\fP and \fBConnectionsPool\fP are
5456 both implementing \fBAbcConnection\fP, so it has become possible to use the same API
5457 when working with either a single connection or a connections pool.
5458 .TS
5459 center;
5460 |l|l|.
5461 _
5462 T{
5463 v0.3
5464 T} T{
5465 .INDENT 0.0
5466 .INDENT 3.5
5467 .sp
5468 .nf
5469 .ft C
5470 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
5471 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
5472
5473 pool = await aioredis.create_pool((\(aqlocalhost\(aq, 6379))
5474 redis = await pool.acquire() # get Redis object
5475 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
5476 .ft P
5477 .fi
5478 .UNINDENT
5479 .UNINDENT
5480 T}
5481 _
5482 T{
5483 v1.0
5484 T} T{
5485 .INDENT 0.0
5486 .INDENT 3.5
5487 .sp
5488 .nf
5489 .ft C
5490 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
5491 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
5492
5493 redis = await aioredis.create_redis_pool((\(aqlocalhost\(aq, 6379))
5494 await redis.lpush(\(aqlist\-key\(aq, \(aqitem1\(aq, \(aqitem2\(aq)
5495 .ft P
5496 .fi
5497 .UNINDENT
5498 .UNINDENT
5499 T}
5500 _
5501 .TE
5502 .SS Blocking operations and connection sharing
5503 .sp
5504 The current implementation of \fBConnectionsPool\fP by default \fBexecutes
5505 every command on a random connection\fP\&. The \fIpro\fP of this is that it allowed
5506 implementing the \fBAbcConnection\fP interface, hiding the pool inside the \fBRedis\fP class,
5507 and keeping the pipelining feature (like RedisConnection.execute).
5508 The \fIcon\fP is that \fBdifferent tasks may use the same connection and block
5509 it\fP with some long\-running command.
5510 .sp
5511 We can call it \fBShared Mode\fP \-\-\- commands are sent to random connections
5512 in the pool without the need to lock (acquire) a connection first:
5513 .INDENT 0.0
5514 .INDENT 3.5
5515 .sp
5516 .nf
5517 .ft C
5518 redis = await aioredis.create_redis_pool(
5519 (\(aqlocalhost\(aq, 6379),
5520 minsize=1,
5521 maxsize=1)
5522
5523 async def task():
5524 # Shared mode
5525 await redis.set(\(aqkey\(aq, \(aqval\(aq)
5526
5527 asyncio.ensure_future(task())
5528 asyncio.ensure_future(task())
5529 # Both tasks will send commands through same connection
5530 # without acquiring (locking) it first.
5531 .ft P
5532 .fi
5533 .UNINDENT
5534 .UNINDENT
5535 .sp
5536 Blocking operations (like \fBblpop\fP, \fBbrpop\fP or long\-running LUA scripts)
5537 in \fBshared mode\fP will block the connection and thus may cause the whole
5538 program to malfunction.
5539 .sp
5540 This \fIblocking\fP issue can easily be solved by using an exclusive connection
5541 for such operations:
5542 .INDENT 0.0
5543 .INDENT 3.5
5544 .sp
5545 .nf
5546 .ft C
5547 redis = await aioredis.create_redis_pool(
5548 (\(aqlocalhost\(aq, 6379),
5549 minsize=1,
5550 maxsize=1)
5551
5552 async def task():
5553 # Exclusive mode
5554 with await redis as r:
5555 await r.set(\(aqkey\(aq, \(aqval\(aq)
5556 asyncio.ensure_future(task())
5557 asyncio.ensure_future(task())
5558 # Both tasks will first acquire connection.
5559 .ft P
5560 .fi
5561 .UNINDENT
5562 .UNINDENT
5563 .sp
5564 We can call this \fBExclusive Mode\fP \-\-\- a context manager is used to
5565 acquire (lock) an exclusive connection from the pool and send all commands through it.
5566 .sp
5567 \fBNOTE:\fP
5568 .INDENT 0.0
5569 .INDENT 3.5
5570 This technique is similar to v0.3 pool usage:
5571 .INDENT 0.0
5572 .INDENT 3.5
5573 .sp
5574 .nf
5575 .ft C
5576 # in aioredis v0.3
5577 pool = await aioredis.create_pool((\(aqlocalhost\(aq, 6379))
5578 with await pool as redis:
5579 # Redis is bound to exclusive connection
5580 redis.set(\(aqkey\(aq, \(aqval\(aq)
5581 .ft P
5582 .fi
5583 .UNINDENT
5584 .UNINDENT
5585 .UNINDENT
5586 .UNINDENT
5587 .SS Sorted set commands return values
5588 .sp
5589 Sorted set commands (like \fBzrange\fP, \fBzrevrange\fP and others) that accept
5590 \fBwithscores\fP argument now \fBreturn list of tuples\fP instead of plain list.
5591 .TS
5592 center;
5593 |l|l|.
5594 _
5595 T{
5596 v0.3
5597 T} T{
5598 .INDENT 0.0
5599 .INDENT 3.5
5600 .sp
5601 .nf
5602 .ft C
5603 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
5604 await redis.zadd(\(aqzset\-key\(aq, 1, \(aqone\(aq, 2, \(aqtwo\(aq)
5605 res = await redis.zrange(\(aqzset\-key\(aq, withscores=True)
5606 assert res == [b\(aqone\(aq, 1, b\(aqtwo\(aq, 2]
5607
5608 # not an easy way to make a dict
5609 it = iter(res)
5610 assert dict(zip(it, it)) == {b\(aqone\(aq: 1, b\(aqtwo\(aq: 2}
5611 .ft P
5612 .fi
5613 .UNINDENT
5614 .UNINDENT
5615 T}
5616 _
5617 T{
5618 v1.0
5619 T} T{
5620 .INDENT 0.0
5621 .INDENT 3.5
5622 .sp
5623 .nf
5624 .ft C
5625 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
5626 await redis.zadd(\(aqzset\-key\(aq, 1, \(aqone\(aq, 2, \(aqtwo\(aq)
5627 res = await redis.zrange(\(aqzset\-key\(aq, withscores=True)
5628 assert res == [(b\(aqone\(aq, 1), (b\(aqtwo\(aq, 2)]
5629
5630 # now it\(aqs easier to make a dict of it
5631 assert dict(res) == {b\(aqone\(aq: 1, b\(aqtwo\(aq: 2}
5632 .ft P
5633 .fi
5634 .UNINDENT
5635 .UNINDENT
5636 T}
5637 _
5638 .TE
5639 .SS Hash \fBhscan\fP command now returns list of tuples
5640 .sp
5641 \fBhscan\fP updated to return a list of tuples instead of plain
5642 mixed key/value list.
5643 .TS
5644 center;
5645 |l|l|.
5646 _
5647 T{
5648 v0.3
5649 T} T{
5650 .INDENT 0.0
5651 .INDENT 3.5
5652 .sp
5653 .nf
5654 .ft C
5655 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
5656 await redis.hmset(\(aqhash\(aq, \(aqone\(aq, 1, \(aqtwo\(aq, 2)
5657 cur, data = await redis.hscan(\(aqhash\(aq)
5658 assert data == [b\(aqone\(aq, b\(aq1\(aq, b\(aqtwo\(aq, b\(aq2\(aq]
5659
5660 # not an easy way to make a dict
5661 it = iter(data)
5662 assert dict(zip(it, it)) == {b\(aqone\(aq: b\(aq1\(aq, b\(aqtwo\(aq: b\(aq2\(aq}
5663 .ft P
5664 .fi
5665 .UNINDENT
5666 .UNINDENT
5667 T}
5668 _
5669 T{
5670 v1.0
5671 T} T{
5672 .INDENT 0.0
5673 .INDENT 3.5
5674 .sp
5675 .nf
5676 .ft C
5677 redis = await aioredis.create_redis((\(aqlocalhost\(aq, 6379))
5678 await redis.hmset(\(aqhash\(aq, \(aqone\(aq, 1, \(aqtwo\(aq, 2)
5679 cur, data = await redis.hscan(\(aqhash\(aq)
5680 assert data == [(b\(aqone\(aq, b\(aq1\(aq), (b\(aqtwo\(aq, b\(aq2\(aq)]
5681
5682 # now it\(aqs easier to make a dict of it
5683 assert dict(data) == {b\(aqone\(aq: b\(aq1\(aq, b\(aqtwo\(aq: b\(aq2\(aq}
5684 .ft P
5685 .fi
5686 .UNINDENT
5687 .UNINDENT
5688 T}
5689 _
5690 .TE
54965691 .SH RELEASES
5692 .SS 1.3.1 (2019\-12\-02)
5693 .SS Bugfixes
5694 .INDENT 0.0
5695 .IP \(bu 2
5696 Fix transaction data decoding
5697 (see \fI\%#657\fP);
5698 .IP \(bu 2
5699 Fix duplicate calls to \fBpool.wait_closed()\fP upon \fBcreate_pool()\fP exception.
5700 (see \fI\%#671\fP);
5701 .UNINDENT
5702 .SS Deprecations and Removals
5703 .INDENT 0.0
5704 .IP \(bu 2
5705 Drop explicit loop requirement in API.
5706 Deprecate \fBloop\fP argument.
5707 Throw warning in Python 3.8+ if explicit \fBloop\fP is passed to methods.
5708 (see \fI\%#666\fP);
5709 .UNINDENT
5710 .SS Misc
5711 .INDENT 0.0
5712 .IP \(bu 2
5713 \fI\%#643\fP,
5714 \fI\%#646\fP,
5715 \fI\%#648\fP;
5716 .UNINDENT
5717 .SS 1.3.0 (2019\-09\-24)
5718 .SS Features
5719 .INDENT 0.0
5720 .IP \(bu 2
5721 Added \fBxdel\fP and \fBxtrim\fP methods which were missing in \fBcommands/streams.py\fP, and also added unit test code for them
5722 (see \fI\%#438\fP);
5723 .IP \(bu 2
5724 Add \fBcount\fP argument to \fBspop\fP command
5725 (see \fI\%#485\fP);
5726 .IP \(bu 2
5727 Add support for \fBzpopmax\fP and \fBzpopmin\fP redis commands
5728 (see \fI\%#550\fP);
5729 .IP \(bu 2
5730 Add \fBtowncrier\fP: change notes are now stored in \fBCHANGES.txt\fP
5731 (see \fI\%#576\fP);
5732 .IP \(bu 2
5733 Type hints for the library
5734 (see \fI\%#584\fP);
5735 .IP \(bu 2
5736 A few additions to the sorted set commands:
5737 .INDENT 2.0
5738 .IP \(bu 2
5739 the blocking pop commands: \fBBZPOPMAX\fP and \fBBZPOPMIN\fP
5740 .IP \(bu 2
5741 the \fBCH\fP and \fBINCR\fP options of the \fBZADD\fP command
5742 .UNINDENT
5743 .sp
5744 (see \fI\%#618\fP);
5745 .IP \(bu 2
5746 Added \fBno_ack\fP parameter to \fBxread_group\fP streams method in \fBcommands/streams.py\fP
5747 (see \fI\%#625\fP);
5748 .UNINDENT
5749 .SS Bugfixes
5750 .INDENT 0.0
5751 .IP \(bu 2
5752 Fix for sensitive logging
5753 (see \fI\%#459\fP);
5754 .IP \(bu 2
5755 Fix slow memory leak in \fBwait_closed\fP implementation
5756 (see \fI\%#498\fP);
5757 .IP \(bu 2
5758 Fix handling of instances where Redis returns null fields for a stream message
5759 (see \fI\%#605\fP);
5760 .UNINDENT
5761 .SS Improved Documentation
5762 .INDENT 0.0
5763 .IP \(bu 2
5764 Rewrite "Getting started" documentation.
5765 (see \fI\%#641\fP);
5766 .UNINDENT
5767 .SS Misc
5768 .INDENT 0.0
5769 .IP \(bu 2
5770 \fI\%#585\fP,
5771 \fI\%#611\fP,
5772 \fI\%#612\fP,
5773 \fI\%#619\fP,
5774 \fI\%#620\fP,
5775 \fI\%#642\fP;
5776 .UNINDENT
54975777 .SS 1.2.0 (2018\-10\-24)
54985778 .sp
54995779 \fBNEW\fP:
58576137 Fixed cancellation of wait_closed
58586138 (see \fI\%#118\fP);
58596139 .IP \(bu 2
5860 Fixed \fBtime()\fP convertion to float
6140 Fixed \fBtime()\fP conversion to float
58616141 (see \fI\%#126\fP);
58626142 .IP \(bu 2
58636143 Fixed \fBhmset()\fP method to return bool instead of \fBb\(aqOK\(aq\fP
61556435 .SH AUTHOR
61566436 Alexey Popravka
61576437 .SH COPYRIGHT
6158 2014-2018, Alexey Popravka
6438 2014-2019, Alexey Popravka
61596439 .\" Generated by docutils manpage writer.
61606440 .
4141
4242
4343 .. cofunction:: create_connection(address, \*, db=0, password=None, ssl=None,\
44 encoding=None, parser=None, loop=None,\
45 timeout=None)
44 encoding=None, parser=None,\
45 timeout=None, connection_cls=None)
4646
4747 Creates Redis connection.
4848
5151
5252 .. versionchanged:: v1.0
5353 ``parser`` argument added.
54
55 .. deprecated:: v1.3.1
56 ``loop`` argument deprecated for Python 3.8 compatibility.
5457
5558 :param address: An address where to connect.
5659 Can be one of the following:
7982 :param parser: Protocol parser class. Can be used to set custom protocol
8083 reader; expected same interface as :class:`hiredis.Reader`.
8184 :type parser: callable or None
82
83 :param loop: An optional *event loop* instance
84 (uses :func:`asyncio.get_event_loop` if not specified).
85 :type loop: :ref:`EventLoop<asyncio-event-loop>`
8685
8786 :param timeout: Max time to open a connection, otherwise
8887 raise :exc:`asyncio.TimeoutError` exception.
8988 ``None`` by default
9089 :type timeout: float greater than 0 or None
90
91 :param connection_cls: Custom connection class. ``None`` by default.
92 :type connection_cls: :class:`abc.AbcConnection` or None
9193
9294 :return: :class:`RedisConnection` instance.
9395
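A short usage sketch for the signature above (a sketch; the address and key names are placeholders, and ``timeout``/``connection_cls`` stay optional as documented):

.. code-block:: python

    import asyncio
    import aioredis

    async def main():
        conn = await aioredis.create_connection(
            'redis://localhost', encoding='utf-8', timeout=10)
        await conn.execute('set', 'my-key', 'some value')
        print(await conn.execute('get', 'my-key'))
        conn.close()
        await conn.wait_closed()

    asyncio.run(main())
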
170172 Method also accept :class:`aioredis.Channel` instances as command
171173 arguments::
172174
173 >>> ch1 = Channel('A', is_pattern=False, loop=loop)
175 >>> ch1 = Channel('A', is_pattern=False)
174176 >>> await conn.execute_pubsub('subscribe', ch1)
175177 [[b'subscribe', b'A', 1]]
176178
251253
252254 .. function:: create_pool(address, \*, db=0, password=None, ssl=None, \
253255 encoding=None, minsize=1, maxsize=10, \
254 parser=None, loop=None, \
256 parser=None, \
255257 create_connection_timeout=None, \
256258 pool_cls=None, connection_cls=None)
257259
275277
276278 .. versionadded:: v1.0
277279 ``parser``, ``pool_cls`` and ``connection_cls`` arguments added.
280
281 .. deprecated:: v1.3.1
282 ``loop`` argument deprecated for Python 3.8 compatibility.
278283
279284 :param address: An address where to connect.
280285 Can be one of the following:
309314 :param parser: Protocol parser class. Can be used to set custom protocol
310315 reader; expected same interface as :class:`hiredis.Reader`.
311316 :type parser: callable or None
312
313 :param loop: An optional *event loop* instance
314 (uses :func:`asyncio.get_event_loop` if not specified).
315 :type loop: :ref:`EventLoop<asyncio-event-loop>`
316317
317318 :param create_connection_timeout: Max time to open a connection,
318319 otherwise raise an :exc:`asyncio.TimeoutError`. ``None`` by default.
448449 Wait until pool gets closed (when all connections are closed).
449450
450451 .. versionadded:: v0.2.8
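
A typical shutdown sequence (a sketch; ``pool`` is assumed to be the result of
:func:`create_pool` described above):

.. code-block:: python

    pool.close()
    await pool.wait_closed()  # returns once every connection is closed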
451
452
453 ----
454
455 .. _aioredis-channel:
456
457 Pub/Sub Channel object
458 ----------------------
459
460 `Channel` object is a wrapper around queue for storing received pub/sub messages.
461
462
463 .. class:: Channel(name, is_pattern, loop=None)
464
465 Bases: :class:`abc.AbcChannel`
466
467 Object representing Pub/Sub messages queue.
468 It's basically a wrapper around :class:`asyncio.Queue`.
469
470 .. attribute:: name
471
472 Holds encoded channel/pattern name.
473
474 .. attribute:: is_pattern
475
476 Set to True for pattern channels.
477
478 .. attribute:: is_active
479
480 Set to True if there are messages in queue and connection is still
481 subscribed to this channel.
482
483 .. comethod:: get(\*, encoding=None, decoder=None)
484
485 Coroutine that waits for and returns a message.
486
487 Return value is message received or ``None`` signifying that channel has
488 been unsubscribed and no more messages will be received.
489
490 :param str encoding: If not None used to decode resulting bytes message.
491
492 :param callable decoder: If specified used to decode message,
493 ex. :func:`json.loads()`
494
495 :raise aioredis.ChannelClosedError: If channel is unsubscribed and
496 has no more messages.
497
498 .. method:: get_json(\*, encoding="utf-8")
499
500 Shortcut to ``get(encoding="utf-8", decoder=json.loads)``
501
502 .. comethod:: wait_message()
503
504 Waits for message to become available in channel
505 or channel is closed (unsubscribed).
506
507 Main idea is to use it in loops:
508
509 >>> ch = redis.channels['channel:1']
510 >>> while await ch.wait_message():
511 ... msg = await ch.get()
512
513 :rtype: bool
514
515 .. comethod:: iter(, \*, encoding=None, decoder=None)
516 :async-for:
517 :coroutine:
518
519 Same as :meth:`~.get` method but it is a native coroutine.
520
521 Usage example::
522
523 >>> async for msg in ch.iter():
524 ... print(msg)
525
526 .. versionadded:: 0.2.5
527 Available for Python 3.5 only
528452
529453 ----
530454
669593 MasterReplyError
670594 SlaveReplyError
671595
596
597 ----
598
599 .. _aioredis-channel:
600
601 Pub/Sub Channel object
602 ----------------------
603
604 `Channel` object is a wrapper around queue for storing received pub/sub messages.
605
606
607 .. class:: Channel(name, is_pattern)
608
609 Bases: :class:`abc.AbcChannel`
610
611 Object representing Pub/Sub messages queue.
612 It's basically a wrapper around :class:`asyncio.Queue`.
613
614 .. attribute:: name
615
616 Holds encoded channel/pattern name.
617
618 .. attribute:: is_pattern
619
620 Set to True for pattern channels.
621
622 .. attribute:: is_active
623
624 Set to True if there are messages in queue and connection is still
625 subscribed to this channel.
626
627 .. comethod:: get(\*, encoding=None, decoder=None)
628
629 Coroutine that waits for and returns a message.
630
631 Return value is message received or ``None`` signifying that channel has
632 been unsubscribed and no more messages will be received.
633
634 :param str encoding: If not None used to decode resulting bytes message.
635
636 :param callable decoder: If specified used to decode message,
637 ex. :func:`json.loads()`
638
639 :raise aioredis.ChannelClosedError: If channel is unsubscribed and
640 has no more messages.
641
642 .. method:: get_json(\*, encoding="utf-8")
643
644 Shortcut to ``get(encoding="utf-8", decoder=json.loads)``
645
646 .. comethod:: wait_message()
647
648 Waits for message to become available in channel
649 or channel is closed (unsubscribed).
650
651 Main idea is to use it in loops:
652
653 >>> ch = redis.channels['channel:1']
654 >>> while await ch.wait_message():
655 ... msg = await ch.get()
656
657 :rtype: bool
658
659 .. comethod:: iter(, \*, encoding=None, decoder=None)
660 :async-for:
661 :coroutine:
662
663 Same as :meth:`~.get` method but it is a native coroutine.
664
665 Usage example::
666
667 >>> async for msg in ch.iter():
668 ... print(msg)
669
670 .. versionadded:: 0.2.5
671 Available for Python 3.5 only
672
673
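Putting the pieces above together, a minimal reader sketch (the channel name
and address are placeholders):

.. code-block:: python

    import asyncio
    import aioredis

    async def reader():
        redis = await aioredis.create_redis_pool('redis://localhost')
        ch, = await redis.subscribe('channel:1')

        while await ch.wait_message():
            msg = await ch.get(encoding='utf-8')
            print('Got message:', msg)

        redis.close()
        await redis.wait_closed()

    asyncio.run(reader())
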
672674 ----
673675
674676 .. _aioredis-redis:
711713 .. cofunction:: create_redis(address, \*, db=0, password=None, ssl=None,\
712714 encoding=None, commands_factory=Redis,\
713715 parser=None, timeout=None,\
714 connection_cls=None, loop=None)
716 connection_cls=None)
715717
716718 This :ref:`coroutine<coroutine>` creates high-level Redis
717719 interface instance bound to single Redis connection
719721
720722 .. versionadded:: v1.0
721723 ``parser``, ``timeout`` and ``connection_cls`` arguments added.
724
725 .. deprecated:: v1.3.1
726 ``loop`` argument deprecated for Python 3.8 compatibility.
722727
723728 See also :class:`~aioredis.RedisConnection` for parameters description.
724729
759764 :class:`~aioredis.abc.AbcConnection`.
760765 :type connection_cls: aioredis.abc.AbcConnection
761766
762 :param loop: An optional *event loop* instance
763 (uses :func:`asyncio.get_event_loop` if not specified).
764 :type loop: :ref:`EventLoop<asyncio-event-loop>`
765
766767 :returns: Redis client (result of ``commands_factory`` call),
767768 :class:`Redis` by default.
768769
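A short usage sketch for :func:`create_redis` (the address is a placeholder):

.. code-block:: python

    import asyncio
    import aioredis

    async def main():
        redis = await aioredis.create_redis('redis://localhost')
        await redis.set('my-key', 'value')
        print(await redis.get('my-key', encoding='utf-8'))
        redis.close()
        await redis.wait_closed()

    asyncio.run(main())
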
772773 minsize=1, maxsize=10,\
773774 parser=None, timeout=None,\
774775 pool_cls=None, connection_cls=None,\
775 loop=None)
776 )
776777
777778 This :ref:`coroutine<coroutine>` create high-level Redis client instance
778779 bound to connections pool (this allows auto-reconnect and simple pub/sub
783784 .. versionchanged:: v1.0
784785 ``parser``, ``timeout``, ``pool_cls`` and ``connection_cls``
785786 arguments added.
787
788 .. deprecated:: v1.3.1
789 ``loop`` argument deprecated for Python 3.8 compatibility.
786790
787791 :param address: An address where to connect. Can be a (host, port) tuple,
788792 unix domain socket path string or a Redis URI string.
830834 :class:`~aioredis.abc.AbcConnection`.
831835 :type connection_cls: aioredis.abc.AbcConnection
832836
833 :param loop: An optional *event loop* instance
834 (uses :func:`asyncio.get_event_loop` if not specified).
835 :type loop: :ref:`EventLoop<asyncio-event-loop>`
836
837837 :returns: Redis client (result of ``commands_factory`` call),
838838 :class:`Redis` by default.
2222 * ``flake8`` for code linting;
2323 * and few other packages.
2424
25 Make sure you have provided a ``towncrier`` note.
26 Just add a short description by running the following command::
27
28 $ echo "Short description" > CHANGES/filename.type
29
30 This will create a new file in the ``CHANGES`` directory.
31 The filename should consist of the ticket ID or another unique identifier.
32 The five default types are:
33
34 * .feature - signifying new feature
35 * .bugfix - signifying a bug fix
36 * .doc - documentation improvement
37 * .removal - deprecation or removal of public API
38 * .misc - a ticket has been closed, but not in interest of users
39
40 You can check if everything is correct by typing::
41
42 $ towncrier --draft
43
44 To produce the news file::
45
46 $ towncrier
47
2548 Code style
2649 ----------
2750
4063 # will run tests in a verbose mode
4164 $ make test
4265 # or
43 $ py.test
66 $ pytest
67
68 # or with particular Redis server
69 $ pytest --redis-server=/usr/local/bin/redis-server tests/errors_test.py
4470
4571 # will run tests with coverage report
4672 $ make cov
4773 # or
48 $ py.test --cov
49
74 $ pytest --cov
5075
5176 SSL tests
5277 ~~~~~~~~~
6893 To run tests against different redises use ``--redis-server`` command line
6994 option::
7095
71 $ py.test --redis-server=/path/to/custom/redis-server
96 $ pytest --redis-server=/path/to/custom/redis-server
7297
7398 UVLoop
7499 ~~~~~~
76101 To run tests with :term:`uvloop`::
77102
78103 $ pip install uvloop
79 $ py.test --uvloop
104 $ pytest --uvloop
80105
81106 .. note:: Until Python 3.5.2 EventLoop has no ``create_future`` method
82107 so aioredis won't benefit from uvloop's futures.
88113 :mod:`aioredis` uses :term:`pytest` tool.
89114
90115 Tests are located under ``/tests`` directory.
91
92 Pure Python 3.5 tests (ie the ones using ``async``/``await`` syntax) must be
93 prefixed with ``py35_``, for instance see::
94
95 tests/py35_generic_commands_tests.py
96 tests/py35_pool_test.py
97116
98117
99118 Fixtures
186205 :rtype: tuple
187206
188207
189 Helpers
190 ~~~~~~~
191
192 :mod:`aioredis` also updates :term:`pytest`'s namespace with several helpers.
193
194 .. function:: pytest.redis_version(\*version, reason)
208 ``redis_version`` tests helper
209 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
210
211 In the ``tests`` directory there is a :mod:`_testutils` module with a simple
212 helper --- :func:`redis_version` --- a function that adds a pytest mark to a test,
213 allowing it to run only with the requested Redis server versions.
214
215 .. function:: _testutils.redis_version(\*version, reason)
195216
196217 Marks test with minimum redis version to run.
197218
199220
200221 .. code-block:: python
201222
202 @pytest.redis_version(3, 2, 0, reason="HSTRLEN new in redis 3.2.0")
223 from _testutils import redis_version
224
225 @redis_version(3, 2, 0, reason="HSTRLEN new in redis 3.2.0")
203226 def test_hstrlen(redis):
204227 pass
205
206
207 .. function:: pytest.logs(logger, level=None)
208
209 Adopted version of :meth:`unittest.TestCase.assertEqual`,
210 see it for details.
211
212 Example:
213
214 .. code-block:: python
215
216 def test_logs(create_connection, server):
217 with pytest.logs('aioredis', 'DEBUG') as cm:
218 conn yield from create_connection(server.tcp_address)
219 assert cm.output[0].startswith(
220 'DEBUG:aioredis:Creating tcp connection')
221
222
223 .. function:: pytest.assert_almost_equal(first, second, places=None, \
224 msg=None, delta=None)
225
226 Adopted version of :meth:`unittest.TestCase.assertAlmostEqual`.
227
228
229 .. function:: pytest.raises_regex(exc_type, message)
230
231 Adopted version of :meth:`unittest.TestCase.assertRaisesRegex`.
55 (see for more).
66
77 Every example is a correct python program that can be executed.
8
9 .. _aioredis-examples-simple:
10
11 Low-level connection usage example
12 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
13
14 :download:`get source code<../examples/connection.py>`
15
16 .. literalinclude:: ../examples/connection.py
17
18
19 Connections pool example
20 ~~~~~~~~~~~~~~~~~~~~~~~~
21
22 :download:`get source code<../examples/pool.py>`
23
24 .. literalinclude:: ../examples/pool.py
258
269
2710 Commands example
6245 :download:`get source code<../examples/sentinel.py>`
6346
6447 .. literalinclude:: ../examples/sentinel.py
48
49 .. _aioredis-examples-simple:
50
51 Low-level connection usage example
52 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
53
54 :download:`get source code<../examples/connection.py>`
55
56 .. literalinclude:: ../examples/connection.py
57
58
59 Connections pool example
60 ~~~~~~~~~~~~~~~~~~~~~~~~
61
62 :download:`get source code<../examples/pool.py>`
63
64 .. literalinclude:: ../examples/pool.py
2121 Connections Pool Yes
2222 Pipelining support Yes
2323 Pub/Sub support Yes
24 Sentinel support Yes [1]_
24 Sentinel support Yes
2525 Redis Cluster support WIP
2626 Trollius (python 2.7) No
27 Tested CPython versions `3.5, 3.6 <travis_>`_ [2]_
28 Tested PyPy3 versions `5.9.0 <travis_>`_
29 Tested for Redis server `2.6, 2.8, 3.0, 3.2, 4.0 <travis_>`_
27 Tested CPython versions `3.5.3, 3.6, 3.7 <travis_>`_ [1]_
28 Tested PyPy3 versions `pypy3.5-7.0, pypy3.6-7.1.1 <travis_>`_
29 Tested for Redis server `2.6, 2.8, 3.0, 3.2, 4.0, 5.0 <travis_>`_
3030 Support for dev Redis server through low-level API
3131 ================================ ==============================
3232
33 .. [1] Sentinel support is available in master branch.
34 This feature is not yet stable and may have some issues.
35
36 .. [2] For Python 3.3, 3.4 support use aioredis v0.3.
33 .. [1] For Python 3.3, 3.4 support use aioredis v0.3.
3734
3835 Installation
3936 ------------
5754 ----------
5855
5956 - Issue Tracker: https://github.com/aio-libs/aioredis/issues
57 - Google Group: https://groups.google.com/forum/#!forum/aio-libs
58 - Gitter: https://gitter.im/aio-libs/Lobby
6059 - Source Code: https://github.com/aio-libs/aioredis
6160 - Contributor's guide: :doc:`devel`
6261
7776 :maxdepth: 3
7877
7978 start
80 migration
8179 api_reference
8280 mixins
8381 abc
8583 sentinel
8684 examples
8785 devel
86 migration
8887 releases
8988 glossary
89
90 .. ::
91 todo insert after start
92 advanced
9093
9194 Indices and tables
9295 ==================
9699 * :ref:`search`
97100
98101 .. _MIT license: https://github.com/aio-libs/aioredis/blob/master/LICENSE
99 .. _travis: https://travis-ci.org/aio-libs/aioredis
102 .. _travis: https://travis-ci.com/aio-libs/aioredis
181181 | | |
182182 | | redis = await aioredis.create_redis(('localhost', 6379)) |
183183 | | await redis.zadd('zset-key', 1, 'one', 2, 'two') |
184 | | res = await redis.zrage('zset-key', withscores=True) |
184 | | res = await redis.zrange('zset-key', withscores=True) |
185185 | | assert res == [b'one', 1, b'two', 2] |
186186 | | |
187 | | # not an esiest way to make a dict |
187 | | # not an easy way to make a dict |
188188 | | it = iter(res) |
189189 | | assert dict(zip(it, it)) == {b'one': 1, b'two': 2} |
190190 | | |
194194 | | |
195195 | | redis = await aioredis.create_redis(('localhost', 6379)) |
196196 | | await redis.zadd('zset-key', 1, 'one', 2, 'two') |
197 | | res = await redis.zrage('zset-key', withscores=True) |
197 | | res = await redis.zrange('zset-key', withscores=True) |
198198 | | assert res == [(b'one', 1), (b'two', 2)] |
199199 | | |
200200 | | # now its easier to make a dict of it |
218218 | | cur, data = await redis.hscan('hash') |
219219 | | assert data == [b'one', b'1', b'two', b'2'] |
220220 | | |
221 | | # not an esiest way to make a dict |
221 | | # not an easy way to make a dict |
222222 | | it = iter(data) |
223223 | | assert dict(zip(it, it)) == {b'one': b'1', b'two': b'2'} |
224224 | | |
118118 .. autoclass:: TransactionsCommandsMixin
119119 :members:
120120
121 .. class:: Pipeline(connection, commands_factory=lambda conn: conn, \*,\
122 loop=None)
121 .. class:: Pipeline(connection, commands_factory=lambda conn: conn)
123122
124123 Commands pipeline.
125124
128127 This class implements `__getattr__` method allowing to call methods
129128 on instance created with ``commands_factory``.
130129
130 .. deprecated:: v1.3.1
131 ``loop`` argument deprecated for Python 3.8 compatibility.
132
131133 :param connection: Redis connection
132134 :type connection: aioredis.RedisConnection
133135
134136 :param callable commands_factory: Commands factory to get methods from.
135
136 :param loop: An optional *event loop* instance
137 (uses :func:`asyncio.get_event_loop` if not specified).
138 :type loop: :ref:`EventLoop<asyncio-event-loop>`
139137
140138 .. comethod:: execute(\*, return_exceptions=False)
141139
153151
154152 :raise aioredis.PipelineError: Raised when any command caused error.
155153
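
A minimal sketch of typical pipeline usage (``redis`` is assumed to be a
:class:`Redis` client and ``pipeline()`` the :class:`TransactionsCommandsMixin`
method that creates a :class:`Pipeline`; keys are illustrative):

.. code-block:: python

    pipe = redis.pipeline()
    fut1 = pipe.incr('foo')   # commands are only buffered here, not awaited
    fut2 = pipe.incr('bar')
    result = await pipe.execute()   # all buffered commands are sent at once
    assert result == [await fut1, await fut2]
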
156 .. class:: MultiExec(connection, commands_factory=lambda conn: conn, \*,\
157 loop=None)
154 .. class:: MultiExec(connection, commands_factory=lambda conn: conn)
158155
159156 Bases: :class:`~Pipeline`.
160157
161158 Multi/Exec pipeline wrapper.
162159
163160 See :class:`~Pipeline` for parameters description.
161
162 .. deprecated:: v1.3.1
163 ``loop`` argument deprecated for Python 3.8 compatibility.
164164
165165 .. comethod:: execute(\*, return_exceptions=False)
166166
2727 .. corofunction:: create_sentinel(sentinels, \*, db=None, password=None,\
2828 encoding=None, minsize=1, maxsize=10,\
2929 ssl=None, parser=None,\
30 loop=None)
30 )
3131
3232 Creates Redis Sentinel client.
33
34 .. deprecated:: v1.3.1
35 ``loop`` argument deprecated for Python 3.8 compatibility.
3336
3437 :param sentinels: A list of Sentinel node addresses.
3538 :type sentinels: list[tuple]
5760 :param parser: Protocol parser class. Can be used to set custom protocol
5861 reader; expected same interface as :class:`hiredis.Reader`.
5962 :type parser: callable or None
60
61 :param loop: An optional *event loop* instance
62 (uses :func:`asyncio.get_event_loop` if not specified).
63 :type loop: :ref:`EventLoop<asyncio-event-loop>`
6463
6564 :rtype: RedisSentinel
6665
33 Getting started
44 ===============
55
6
7 Commands Pipelining
8 -------------------
9
10 Commands pipelining is built-in.
11
12 Every command is sent to transport at-once
13 (ofcourse if no ``TypeError``/``ValueError`` was raised)
14
15 When you making a call with ``await`` / ``yield from`` you will be waiting result,
16 and then gather results.
17
18 Simple example show both cases (:download:`get source code<../examples/pipeline.py>`):
19
20 .. literalinclude:: ../examples/pipeline.py
21 :language: python3
22 :lines: 9-21
23 :dedent: 4
6 Installation
7 ------------
8
9 .. code-block:: bash
10
11 $ pip install aioredis
12
13 This will install aioredis along with its dependencies:
14
15 * hiredis protocol parser;
16
17 * async-timeout --- used in Sentinel client.
18
19 Without dependencies
20 ~~~~~~~~~~~~~~~~~~~~
21
22 In some cases [1]_ you might need to install :mod:`aioredis` without ``hiredis``,
23 it is achievable with the following command:
24
25 .. code-block:: bash
26
27 $ pip install --no-deps aioredis async-timeout
28
29 Installing latest version from Git
30 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
31
32 .. code-block:: bash
33
34 $ pip install git+https://github.com/aio-libs/aioredis@master#egg=aioredis
35
36 Connecting
37 ----------
38
39 :download:`get source code<../examples/getting_started/00_connect.py>`
40
41 .. literalinclude:: ../examples/getting_started/00_connect.py
42 :language: python3
43
44 :func:`aioredis.create_redis_pool` creates a Redis client backed by a pool of
45 connections. The only required argument is the address of Redis server.
46 Redis server address can be either a host and port tuple
47 (ex: ``('localhost', 6379)``), or a string which will be parsed into
48 TCP or UNIX socket address (ex: ``'unix://var/run/redis.sock'``,
49 ``'//var/run/redis.sock'``, ``redis://redis-host-or-ip:6379/1``).
50
51 Closing the client: calling ``redis.close()`` and then ``redis.wait_closed()``
52 is strongly encouraged, as these methods will shut down all open connections
53 and clean up resources.
54
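For example, a minimal shutdown sketch (assuming ``redis`` was created as shown above):

.. code-block:: python

    redis.close()
    await redis.wait_closed()  # waits until all open connections are released
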
55 See the :doc:`commands reference </mixins>` for the full list of supported commands.
56
57 Connecting to specific DB
58 ~~~~~~~~~~~~~~~~~~~~~~~~~
59
60 There are several ways you can specify database index to select on connection:
61
62 #. explicitly pass db index as ``db`` argument:
63
64 .. code-block:: python
65
66 redis = await aioredis.create_redis_pool(
67 'redis://localhost', db=1)
68
69 #. pass db index in URI as path component:
70
71 .. code-block:: python
72
73 redis = await aioredis.create_redis_pool(
74 'redis://localhost/2')
75
76 .. note::
77
78 DB index specified in URI will take precedence over
79 ``db`` keyword argument.
80
81 #. call :meth:`~aioredis.Redis.select` method:
82
83 .. code-block:: python
84
85 redis = await aioredis.create_redis_pool(
86 'redis://localhost/')
87 await redis.select(3)
88
89
90 Connecting to password-protected Redis instance
91 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
92
93 The password can be specified either in keyword argument or in address URI:
94
95 .. code-block:: python
96
97 redis = await aioredis.create_redis_pool(
98 'redis://localhost', password='sEcRet')
99
100 redis = await aioredis.create_redis_pool(
101 'redis://:sEcRet@localhost/')
102
103 redis = await aioredis.create_redis_pool(
104 'redis://localhost/?password=sEcRet')
24105
25106 .. note::
26
27 For convenience :mod:`aioredis` provides
28 :meth:`~TransactionsCommandsMixin.pipeline`
29 method allowing to execute bulk of commands as one
30 (:download:`get source code<../examples/pipeline.py>`):
31
32 .. literalinclude:: ../examples/pipeline.py
33 :language: python3
34 :lines: 23-31
35 :dedent: 4
107 Password specified in URI will take precedence over password keyword.
108
109 Also, specifying the password both as the authentication component and
110 as a query parameter in the URI is forbidden.
111
112 .. code-block:: python
113
114 # This will cause assertion error
115 await aioredis.create_redis_pool(
116 'redis://:sEcRet@localhost/?password=SeCreT')
117
118 Result messages decoding
119 ------------------------
120
121 By default :mod:`aioredis` will return :class:`bytes` for most Redis
122 commands that return string replies. Redis error replies are known to be
123 valid UTF-8 strings so error messages are decoded automatically.
124
125 If you know that the data in Redis is a valid string, you can tell :mod:`aioredis`
126 to decode the result by passing the keyword-only argument ``encoding``
127 in a command call:
128
129 :download:`get source code<../examples/getting_started/01_decoding.py>`
130
131 .. literalinclude:: ../examples/getting_started/01_decoding.py
132 :language: python3
133
134
135 :mod:`aioredis` can decode messages for all Redis data types like
136 lists, hashes, sorted sets, etc:
137
138 :download:`get source code<../examples/getting_started/02_decoding.py>`
139
140 .. literalinclude:: ../examples/getting_started/02_decoding.py
141 :language: python3
36142
37143
38144 Multi/Exec transactions
39145 -----------------------
40146
41 :mod:`aioredis` provides several ways for executing transactions:
42
43 * when using raw connection you can issue ``Multi``/``Exec`` commands
44 manually;
45
46 * when using :class:`aioredis.Redis` instance you can use
47 :meth:`~TransactionsCommandsMixin.multi_exec` transaction pipeline.
147 :download:`get source code<../examples/getting_started/03_multiexec.py>`
148
149 .. literalinclude:: ../examples/getting_started/03_multiexec.py
150 :language: python3
48151
49152 :meth:`~TransactionsCommandsMixin.multi_exec` method creates and returns new
50153 :class:`~aioredis.commands.MultiExec` object which is used for buffering commands and
51154 then executing them inside MULTI/EXEC block.
52155
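The same pattern as an inline sketch (keys and values are illustrative):

.. code-block:: python

    tr = redis.multi_exec()
    tr.set('key1', 'value1')   # commands are only buffered here, not awaited
    tr.set('key2', 'value2')
    ok1, ok2 = await tr.execute()   # the MULTI/EXEC block is sent here
    assert ok1 and ok2
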
53 Here is a simple example
54 (:download:`get source code<../examples/transaction2.py>`):
55
56 .. literalinclude:: ../examples/transaction2.py
57 :language: python3
58 :lines: 9-15
59 :linenos:
60 :emphasize-lines: 5
61 :dedent: 4
62
63 As you can notice ``await`` is **only** used at line 5 with ``tr.execute``
64 and **not with** ``tr.set(...)`` calls.
65
66156 .. warning::
67157
68158 It is very important not to ``await`` buffered command
79169
80170 :mod:`aioredis` provides support for Redis Publish/Subscribe messaging.
81171
82 To switch connection to subscribe mode you must execute ``subscribe`` command
83 by yield'ing from :meth:`~PubSubCommandsMixin.subscribe` it returns a list of
84 :class:`~aioredis.Channel` objects representing subscribed channels.
85
86 As soon as connection is switched to subscribed mode the channel will receive
87 and store messages
172 To start listening for messages you must call either the
173 :meth:`~PubSubCommandsMixin.subscribe` or
174 :meth:`~PubSubCommandsMixin.psubscribe` method.
175 Both methods return a list of :class:`~aioredis.Channel` objects representing
176 the subscribed channels.
177
178 Right after that the channel will receive and store messages
88179 (the ``Channel`` object is basically a wrapper around :class:`asyncio.Queue`).
89180 To read messages from channel you need to use :meth:`~aioredis.Channel.get`
90181 or :meth:`~aioredis.Channel.get_json` coroutines.
91182
92 .. note::
93 In Pub/Sub mode redis connection can only receive messages or issue
94 (P)SUBSCRIBE / (P)UNSUBSCRIBE commands.
95
96 Pub/Sub example (:download:`get source code<../examples/pubsub2.py>`):
97
98 .. literalinclude:: ../examples/pubsub2.py
99 :language: python3
100 :lines: 6-31
101 :dedent: 4
102
103 .. .. warning::
104 Using Pub/Sub mode with :class:`~aioredis.Pool` is possible but
105 only within ``with`` block or by explicitly ``acquiring/releasing``
106 connection. See example below.
107
108 Pub/Sub example (:download:`get source code<../examples/pool_pubsub.py>`):
109
110 .. literalinclude:: ../examples/pool_pubsub.py
111 :language: python3
112 :lines: 13-36
113 :dedent: 4
114
115
116 Python 3.5 ``async with`` / ``async for`` support
117 -------------------------------------------------
118
119 :mod:`aioredis` is compatible with :pep:`492`.
120
121 :class:`~aioredis.Pool` can be used with :ref:`async with<async with>`
122 (:download:`get source code<../examples/pool2.py>`):
123
124 .. literalinclude:: ../examples/pool2.py
125 :language: python3
126 :lines: 7-8,20-22
127 :dedent: 4
128
129
130 It also can be used with ``await``:
131
132 .. literalinclude:: ../examples/pool2.py
133 :language: python3
134 :lines: 7-8,26-30
135 :dedent: 4
136
137
138 New ``scan``-family commands added with support of :ref:`async for<async for>`
139 (:download:`get source code<../examples/iscan.py>`):
140
141 .. literalinclude:: ../examples/iscan.py
142 :language: python3
143 :lines: 7-9,29-31,34-36,39-41,44-45
144 :dedent: 4
145
146
147 SSL/TLS support
183 Example subscribing and reading channels:
184
185 :download:`get source code<../examples/getting_started/04_pubsub.py>`
186
187 .. literalinclude:: ../examples/getting_started/04_pubsub.py
188 :language: python3
189
190 Subscribing and reading patterns:
191
192 :download:`get source code<../examples/getting_started/05_pubsub.py>`
193
194 .. literalinclude:: ../examples/getting_started/05_pubsub.py
195 :language: python3
196
197 Sentinel client
148198 ---------------
149199
150 Though the Redis server `does not support data encryption <data_encryption_>`_,
151 it is still possible to set up a Redis server behind an SSL proxy. For such cases
152 the :mod:`aioredis` library supports secure connections through :mod:`asyncio`
153 SSL support. See `BaseEventLoop.create_connection`_ for details.
154
155 .. _data_encryption: http://redis.io/topics/security#data-encryption-support
156 .. _BaseEventLoop.create_connection: https://docs.python.org/3/library/asyncio-eventloop.html#creating-connections
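A minimal sketch, assuming a TLS-terminating proxy (e.g. stunnel or HAProxy, both
hypothetical choices here) listens on port 6380 and forwards to Redis; the ``ssl``
argument is passed through to the :mod:`asyncio` connection machinery:

.. code-block:: python3

    import ssl

    context = ssl.create_default_context(cafile='proxy-ca.pem')  # hypothetical CA file
    redis = await aioredis.create_redis_pool(
        ('localhost', 6380), ssl=context)
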
200 :download:`get source code<../examples/getting_started/06_sentinel.py>`
201
202 .. literalinclude:: ../examples/getting_started/06_sentinel.py
203 :language: python3
204
205 The Sentinel client requires a list of Redis Sentinel addresses to connect to
206 and start discovering services.
207
208 Calling the :meth:`~aioredis.sentinel.SentinelPool.master_for` or
209 :meth:`~aioredis.sentinel.SentinelPool.slave_for` method returns a Redis
210 client connected to the specified service monitored by Sentinel.
211
212 The Sentinel client detects failover and reconnects Redis clients automatically.
213
214 See the detailed reference :doc:`here <sentinel>`.
215
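For instance, a minimal sketch (the ``mymaster`` service name is the one used in
the example above; replication lag is ignored here):

.. code-block:: python3

    sentinel = await aioredis.create_sentinel(['redis://localhost:26379'])
    master = sentinel.master_for('mymaster')     # read-write client
    replica = sentinel.slave_for('mymaster')     # read-only client
    await master.set('key', 'value')
    print(await replica.get('key', encoding='utf-8'))  # may briefly lag behind
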
216 ----
217
218 .. [1]
219 Celery hiredis issues
220 (`#197 <https://github.com/aio-libs/aioredis/issues/197>`_,
221 `#317 <https://github.com/aio-libs/aioredis/pull/317>`_)
2828
2929
3030 if __name__ == '__main__':
31 asyncio.get_event_loop().run_until_complete(main())
32 asyncio.get_event_loop().run_until_complete(redis_pool())
31 asyncio.run(main())
32 asyncio.run(redis_pool())
2222
2323
2424 if __name__ == '__main__':
25 asyncio.get_event_loop().run_until_complete(main())
25 asyncio.run(main())
0 import asyncio
1 import aioredis
2
3
4 async def main():
5 redis = await aioredis.create_redis_pool('redis://localhost')
6 await redis.set('my-key', 'value')
7 value = await redis.get('my-key', encoding='utf-8')
8 print(value)
9
10 redis.close()
11 await redis.wait_closed()
12
13 asyncio.run(main())
0 import asyncio
1 import aioredis
2
3
4 async def main():
5 redis = await aioredis.create_redis_pool('redis://localhost')
6 await redis.set('key', 'string-value')
7 bin_value = await redis.get('key')
8 assert bin_value == b'string-value'
9
10 str_value = await redis.get('key', encoding='utf-8')
11 assert str_value == 'string-value'
12
13 redis.close()
14 await redis.wait_closed()
15
16 asyncio.run(main())
0 import asyncio
1 import aioredis
2
3
4 async def main():
5 redis = await aioredis.create_redis_pool('redis://localhost')
6
7 await redis.hmset_dict('hash',
8 key1='value1',
9 key2='value2',
10 key3=123)
11
12 result = await redis.hgetall('hash', encoding='utf-8')
13 assert result == {
14 'key1': 'value1',
15 'key2': 'value2',
16 'key3': '123', # note that Redis returns int as string
17 }
18
19 redis.close()
20 await redis.wait_closed()
21
22 asyncio.run(main())
0 import asyncio
1 import aioredis
2
3
4 async def main():
5 redis = await aioredis.create_redis_pool('redis://localhost')
6
7 tr = redis.multi_exec()
8 tr.set('key1', 'value1')
9 tr.set('key2', 'value2')
10 ok1, ok2 = await tr.execute()
11 assert ok1
12 assert ok2
13
14 asyncio.run(main())
0 import asyncio
1 import aioredis
2
3
4 async def main():
5 redis = await aioredis.create_redis_pool('redis://localhost')
6
7 ch1, ch2 = await redis.subscribe('channel:1', 'channel:2')
8 assert isinstance(ch1, aioredis.Channel)
9 assert isinstance(ch2, aioredis.Channel)
10
11 async def reader(channel):
12 async for message in channel.iter():
13 print("Got message:", message)
14 asyncio.get_running_loop().create_task(reader(ch1))
15 asyncio.get_running_loop().create_task(reader(ch2))
16
17 await redis.publish('channel:1', 'Hello')
18 await redis.publish('channel:2', 'World')
19
20 redis.close()
21 await redis.wait_closed()
22
23 asyncio.run(main())
0 import asyncio
1 import aioredis
2
3
4 async def main():
5 redis = await aioredis.create_redis_pool('redis://localhost')
6
7 ch, = await redis.psubscribe('channel:*')
8 assert isinstance(ch, aioredis.Channel)
9
10 async def reader(channel):
11 async for ch, message in channel.iter():
12 print("Got message in channel:", ch, ":", message)
13 asyncio.get_running_loop().create_task(reader(ch))
14
15 await redis.publish('channel:1', 'Hello')
16 await redis.publish('channel:2', 'World')
17
18 redis.close()
19 await redis.wait_closed()
20
21 asyncio.run(main())
0 import asyncio
1 import aioredis
2
3
4 async def main():
5 sentinel = await aioredis.create_sentinel(
6 ['redis://localhost:26379', 'redis://sentinel2:26379'])
7 redis = sentinel.master_for('mymaster')
8
9 ok = await redis.set('key', 'value')
10 assert ok
11 val = await redis.get('key', encoding='utf-8')
12 assert val == 'value'
13
14 asyncio.run(main())
+0
-52
examples/iscan.py less more
0 import asyncio
1 import aioredis
2
3
4 async def main():
5
6 redis = await aioredis.create_redis(
7 'redis://localhost')
8
9 await redis.delete('something:hash',
10 'something:set',
11 'something:zset')
12 await redis.mset('something', 'value',
13 'something:else', 'else')
14 await redis.hmset('something:hash',
15 'something:1', 'value:1',
16 'something:2', 'value:2')
17 await redis.sadd('something:set', 'something:1',
18 'something:2', 'something:else')
19 await redis.zadd('something:zset', 1, 'something:1',
20 2, 'something:2', 3, 'something:else')
21
22 await go(redis)
23 redis.close()
24 await redis.wait_closed()
25
26
27 async def go(redis):
28 async for key in redis.iscan(match='something*'):
29 print('Matched:', key)
30
31 key = 'something:hash'
32
33 async for name, val in redis.ihscan(key, match='something*'):
34 print('Matched:', name, '->', val)
35
36 key = 'something:set'
37
38 async for val in redis.isscan(key, match='something*'):
39 print('Matched:', val)
40
41 key = 'something:zset'
42
43 async for val, score in redis.izscan(key, match='something*'):
44 print('Matched:', val, ':', score)
45
46
47 if __name__ == '__main__':
48 import os
49 if 'redis_version:2.6' not in os.environ.get('REDIS_VERSION', ''):
50 loop = asyncio.get_event_loop()
51 loop.run_until_complete(main())
4141
4242
4343 if __name__ == '__main__':
44 loop = asyncio.get_event_loop()
45 loop.run_until_complete(main())
44 asyncio.run(main())
1414
1515
1616 if __name__ == '__main__':
17 asyncio.get_event_loop().run_until_complete(main())
17 asyncio.run(main())
+0
-35
examples/pool2.py less more
0 import asyncio
1 import aioredis
2
3
4 async def main():
5
6 pool = await aioredis.create_pool(
7 'redis://localhost')
8
9 # async with pool.get() as conn:
10 await pool.execute('set', 'my-key', 'value')
11
12 await async_with(pool)
13 await with_await(pool)
14 pool.close()
15 await pool.wait_closed()
16
17
18 async def async_with(pool):
19 async with pool.get() as conn:
20 value = await conn.execute('get', 'my-key')
21 print('raw value:', value)
22
23
24 async def with_await(pool):
25 # This is exactly the same as:
26 # with (yield from pool) as conn:
27 with (await pool) as conn:
28 value = await conn.execute('get', 'my-key')
29 print('raw value:', value)
30
31
32 if __name__ == '__main__':
33 loop = asyncio.get_event_loop()
34 loop.run_until_complete(main())
2727
2828
2929 if __name__ == '__main__':
30 asyncio.get_event_loop().run_until_complete(main())
30 asyncio.run(main())
4040 for msg in ("Hello", ",", "world!"):
4141 for ch in ('channel:1', 'channel:2'):
4242 await pub.publish(ch, msg)
43 asyncio.get_event_loop().call_soon(pub.close)
44 asyncio.get_event_loop().call_soon(sub.close)
45 await asyncio.sleep(0)
43 await asyncio.sleep(0.1)
44 pub.close()
45 sub.close()
4646 await pub.wait_closed()
4747 await sub.wait_closed()
4848 await asyncio.gather(tsk1, tsk2)
5151 if __name__ == '__main__':
5252 import os
5353 if 'redis_version:2.6' not in os.environ.get('REDIS_VERSION', ''):
54 loop = asyncio.get_event_loop()
55 loop.run_until_complete(pubsub())
54 asyncio.run(pubsub())
1919 if __name__ == '__main__':
2020 import os
2121 if 'redis_version:2.6' not in os.environ.get('REDIS_VERSION', ''):
22 asyncio.get_event_loop().run_until_complete(main())
22 asyncio.run(main())
1515
1616
1717 if __name__ == '__main__':
18 asyncio.get_event_loop().run_until_complete(main())
18 asyncio.run(main())
1818
1919
2020 if __name__ == '__main__':
21 asyncio.get_event_loop().run_until_complete(main())
21 asyncio.run(main())
1919
2020
2121 if __name__ == '__main__':
22 loop = asyncio.get_event_loop()
23 loop.run_until_complete(main())
22 asyncio.run(main())
00 [tool:pytest]
11 minversion = 2.9.1
2 addopts = --cov-report=term --cov-report=html
2 addopts = -r a --cov-report=term --cov-report=html
33 testpaths = tests
44 markers =
5 run_loop: Mark coroutine to be run with asyncio loop.
5 timeout: Set coroutine execution timeout (default is 15 seconds).
66 redis_version(*version, reason): Mark test expecting minimum Redis version
77 skip(reason): Skip test
8 python_files =
9 test_*.py
10 *_test.py
11 _testutils.py
812
913 [coverage:run]
1014 branch = true
2828 match = regexp.match(line)
2929 if match is not None:
3030 return match.group(1)
31 else:
32 raise RuntimeError('Cannot find version in aioredis/__init__.py')
31 raise RuntimeError('Cannot find version in {}'.format(init_py))
3332
3433
3534 classifiers = [
0 import pytest
1
2 __all__ = [
3 'redis_version',
4 ]
5
6
7 def redis_version(*version, reason):
8 assert 1 < len(version) <= 3, version
9 assert all(isinstance(v, int) for v in version), version
10 return pytest.mark.redis_version(version=version, reason=reason)
66 import os
77 import ssl
88 import time
9 import logging
109 import tempfile
1110 import atexit
11 import inspect
1212
1313 from collections import namedtuple
1414 from urllib.parse import urlencode, urlunparse
3333 def loop():
3434 """Creates new event loop."""
3535 loop = asyncio.new_event_loop()
36 asyncio.set_event_loop(None)
36 if sys.version_info < (3, 8):
37 asyncio.set_event_loop(loop)
3738
3839 try:
3940 yield loop
5960
6061
6162 @pytest.fixture
62 def create_connection(_closable, loop):
63 def create_connection(_closable):
6364 """Wrapper around aioredis.create_connection."""
6465
6566 async def f(*args, **kw):
66 kw.setdefault('loop', loop)
6767 conn = await aioredis.create_connection(*args, **kw)
6868 _closable(conn)
6969 return conn
7474 aioredis.create_redis,
7575 aioredis.create_redis_pool],
7676 ids=['single', 'pool'])
77 def create_redis(_closable, loop, request):
77 def create_redis(_closable, request):
7878 """Wrapper around aioredis.create_redis."""
7979 factory = request.param
8080
8181 async def f(*args, **kw):
82 kw.setdefault('loop', loop)
8382 redis = await factory(*args, **kw)
8483 _closable(redis)
8584 return redis
8786
8887
8988 @pytest.fixture
90 def create_pool(_closable, loop):
89 def create_pool(_closable):
9190 """Wrapper around aioredis.create_pool."""
9291
9392 async def f(*args, **kw):
94 kw.setdefault('loop', loop)
9593 redis = await aioredis.create_pool(*args, **kw)
9694 _closable(redis)
9795 return redis
9997
10098
10199 @pytest.fixture
102 def create_sentinel(_closable, loop):
100 def create_sentinel(_closable):
103101 """Helper instantiating RedisSentinel client."""
104102
105103 async def f(*args, **kw):
106 kw.setdefault('loop', loop)
107104 # make it fail fast on slow CIs (if the timeout argument is omitted)
108105 kw.setdefault('timeout', .001)
109106 client = await aioredis.sentinel.create_sentinel(*args, **kw)
115112 @pytest.fixture
116113 def pool(create_pool, server, loop):
117114 """Returns RedisPool instance."""
118 pool = loop.run_until_complete(
119 create_pool(server.tcp_address, loop=loop))
120 return pool
115 return loop.run_until_complete(create_pool(server.tcp_address))
121116
122117
123118 @pytest.fixture
124119 def redis(create_redis, server, loop):
125120 """Returns Redis client instance."""
126121 redis = loop.run_until_complete(
127 create_redis(server.tcp_address, loop=loop))
128 loop.run_until_complete(redis.flushall())
122 create_redis(server.tcp_address))
123
124 async def clear():
125 await redis.flushall()
126 loop.run_until_complete(clear())
129127 return redis
130128
131129
133131 def redis_sentinel(create_sentinel, sentinel, loop):
134132 """Returns Redis Sentinel client instance."""
135133 redis_sentinel = loop.run_until_complete(
136 create_sentinel([sentinel.tcp_address], timeout=2, loop=loop))
137 assert loop.run_until_complete(redis_sentinel.ping()) == b'PONG'
134 create_sentinel([sentinel.tcp_address], timeout=2))
135
136 async def ping():
137 return await redis_sentinel.ping()
138 assert loop.run_until_complete(ping()) == b'PONG'
138139 return redis_sentinel
139140
140141
142143 def _closable(loop):
143144 conns = []
144145
145 try:
146 yield conns.append
147 finally:
146 async def close():
148147 waiters = []
149148 while conns:
150149 conn = conns.pop(0)
151150 conn.close()
152151 waiters.append(conn.wait_closed())
153152 if waiters:
154 loop.run_until_complete(asyncio.gather(*waiters, loop=loop))
153 await asyncio.gather(*waiters)
154 try:
155 yield conns.append
156 finally:
157 loop.run_until_complete(close())
155158
156159
157160 @pytest.fixture(scope='session')
380383 yield True
381384 raise RuntimeError("Redis startup timeout expired")
382385
383 def maker(name, *masters, quorum=1, noslaves=False):
386 def maker(name, *masters, quorum=1, noslaves=False,
387 down_after_milliseconds=3000,
388 failover_timeout=1000):
384389 key = (name,) + masters
385390 if key in sentinels:
386391 return sentinels[key]
409414 for master in masters:
410415 write('sentinel monitor', master.name,
411416 '127.0.0.1', master.tcp_address.port, quorum)
412 write('sentinel down-after-milliseconds', master.name, '3000')
413 write('sentinel failover-timeout', master.name, '3000')
417 write('sentinel down-after-milliseconds', master.name,
418 down_after_milliseconds)
419 write('sentinel failover-timeout', master.name,
420 failover_timeout)
414421 write('sentinel auth-pass', master.name, master.password)
415422
416423 f = open(stdout_file, 'w')
517524
518525
519526 @pytest.mark.tryfirst
520 def pytest_pycollect_makeitem(collector, name, obj):
521 if collector.funcnamefilter(name):
522 if not callable(obj):
523 return
524 item = pytest.Function(name, parent=collector)
525 if item.get_closest_marker('run_loop') is not None:
526 # TODO: re-wrap with asyncio.coroutine if not native coroutine
527 return list(collector._genfunctions(name, obj))
528
529
530 @pytest.mark.tryfirst
531527 def pytest_pyfunc_call(pyfuncitem):
532528 """
533529 Run asyncio marked test functions in an event loop instead of a normal
534530 function call.
535531 """
536 marker = pyfuncitem.get_closest_marker('run_loop')
537 if marker is not None:
532 if inspect.iscoroutinefunction(pyfuncitem.obj):
533 marker = pyfuncitem.get_closest_marker('timeout')
534 if marker is not None and marker.args:
535 timeout = marker.args[0]
536 else:
537 timeout = 15
538
538539 funcargs = pyfuncitem.funcargs
539540 loop = funcargs['loop']
540541 testargs = {arg: funcargs[arg]
541542 for arg in pyfuncitem._fixtureinfo.argnames}
542543
543544 loop.run_until_complete(
544 _wait_coro(pyfuncitem.obj, testargs,
545 timeout=marker.kwargs.get('timeout', 15),
546 loop=loop))
545 _wait_coro(pyfuncitem.obj, testargs, timeout=timeout))
547546 return True
548547
549548
550 async def _wait_coro(corofunc, kwargs, timeout, loop):
551 with async_timeout(timeout, loop=loop):
549 async def _wait_coro(corofunc, kwargs, timeout):
550 with async_timeout(timeout):
552551 return (await corofunc(**kwargs))
553552
554553
555554 def pytest_runtest_setup(item):
556 run_loop = item.get_closest_marker('run_loop')
557 if run_loop and 'loop' not in item.fixturenames:
555 is_coro = inspect.iscoroutinefunction(item.obj)
556 if is_coro and 'loop' not in item.fixturenames:
558557 # inject an event loop fixture for all async tests
559558 item.fixturenames.append('loop')
560559
584583
585584 def pytest_configure(config):
586585 bins = config.getoption('--redis-server')[:]
587 REDIS_SERVERS[:] = bins or ['/usr/bin/redis-server']
586 cmd = 'which redis-server'
587 if not bins:
588 with os.popen(cmd) as pipe:
589 path = pipe.read().rstrip()
590 assert path, (
591 "There is no redis-server on your computer."
592 " Please install it first")
593 REDIS_SERVERS[:] = [path]
594 else:
595 REDIS_SERVERS[:] = bins
596
588597 VERSIONS.update({srv: _read_server_version(srv)
589598 for srv in REDIS_SERVERS})
590599 assert VERSIONS, ("Expected to detect redis versions", REDIS_SERVERS)
607616 raise RuntimeError(
608617 "Can not import uvloop, make sure it is installed")
609618 asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
610
611
612 def logs(logger, level=None):
613 """Catches logs for given logger and level.
614
615 See unittest.TestCase.assertLogs for details.
616 """
617 return _AssertLogsContext(logger, level)
618
619
620 _LoggingWatcher = namedtuple("_LoggingWatcher", ["records", "output"])
621
622
623 class _CapturingHandler(logging.Handler):
624 """
625 A logging handler capturing all (raw and formatted) logging output.
626 """
627
628 def __init__(self):
629 logging.Handler.__init__(self)
630 self.watcher = _LoggingWatcher([], [])
631
632 def flush(self):
633 pass
634
635 def emit(self, record):
636 self.watcher.records.append(record)
637 msg = self.format(record)
638 self.watcher.output.append(msg)
639
640
641 class _AssertLogsContext:
642 """Standard unittest's _AssertLogsContext context manager
643 adopted to raise pytest failure.
644 """
645 LOGGING_FORMAT = "%(levelname)s:%(name)s:%(message)s"
646
647 def __init__(self, logger_name, level):
648 self.logger_name = logger_name
649 if level:
650 self.level = level
651 else:
652 self.level = logging.INFO
653 self.msg = None
654
655 def __enter__(self):
656 if isinstance(self.logger_name, logging.Logger):
657 logger = self.logger = self.logger_name
658 else:
659 logger = self.logger = logging.getLogger(self.logger_name)
660 formatter = logging.Formatter(self.LOGGING_FORMAT)
661 handler = _CapturingHandler()
662 handler.setFormatter(formatter)
663 self.watcher = handler.watcher
664 self.old_handlers = logger.handlers[:]
665 self.old_level = logger.level
666 self.old_propagate = logger.propagate
667 logger.handlers = [handler]
668 logger.setLevel(self.level)
669 logger.propagate = False
670 return handler.watcher
671
672 def __exit__(self, exc_type, exc_value, tb):
673 self.logger.handlers = self.old_handlers
674 self.logger.propagate = self.old_propagate
675 self.logger.setLevel(self.old_level)
676 if exc_type is not None:
677 # let unexpected exceptions pass through
678 return False
679 if len(self.watcher.records) == 0:
680 pytest.fail(
681 "no logs of level {} or higher triggered on {}"
682 .format(logging.getLevelName(self.level), self.logger.name))
683
684
685 def redis_version(*version, reason):
686 assert 1 < len(version) <= 3, version
687 assert all(isinstance(v, int) for v in version), version
688 return pytest.mark.redis_version(version=version, reason=reason)
689
690
691 def assert_almost_equal(first, second, places=None, msg=None, delta=None):
692 assert not (places is None and delta is None), \
693 "Both places and delta are not set, please set one"
694 if delta is not None:
695 assert abs(first - second) <= delta
696 else:
697 assert round(abs(first - second), places) == 0
698
699
700 def pytest_namespace():
701 return {
702 'assert_almost_equal': assert_almost_equal,
703 'redis_version': redis_version,
704 'logs': logs,
705 }
33 from aioredis import ConnectionClosedError, ReplyError
44 from aioredis.pool import ConnectionsPool
55 from aioredis import Redis
6 from _testutils import redis_version
67
78
8 @pytest.mark.run_loop
9 async def test_repr(create_redis, loop, server):
10 redis = await create_redis(
11 server.tcp_address, db=1, loop=loop)
9 async def test_repr(create_redis, server):
10 redis = await create_redis(server.tcp_address, db=1)
1211 assert repr(redis) in {
1312 '<Redis <RedisConnection [db:1]>>',
1413 '<Redis <ConnectionsPool [db:1, size:[1:10], free:1]>>',
1514 }
1615
17 redis = await create_redis(
18 server.tcp_address, db=0, loop=loop)
16 redis = await create_redis(server.tcp_address, db=0)
1917 assert repr(redis) in {
2018 '<Redis <RedisConnection [db:0]>>',
2119 '<Redis <ConnectionsPool [db:0, size:[1:10], free:1]>>',
2220 }
2321
2422
25 @pytest.mark.run_loop
2623 async def test_auth(redis):
2724 expected_message = "ERR Client sent AUTH, but no password is set"
2825 with pytest.raises(ReplyError, match=expected_message):
2926 await redis.auth('')
3027
3128
32 @pytest.mark.run_loop
3329 async def test_echo(redis):
3430 resp = await redis.echo('ECHO')
3531 assert resp == b'ECHO'
3834 await redis.echo(None)
3935
4036
41 @pytest.mark.run_loop
4237 async def test_ping(redis):
4338 assert await redis.ping() == b'PONG'
4439
4540
46 @pytest.mark.run_loop
47 async def test_quit(redis, loop):
41 async def test_quit(redis):
4842 expected = (ConnectionClosedError, ConnectionError)
4943 try:
5044 assert b'OK' == await redis.quit()
6155 assert False, "Cancelled error must not be raised"
6256
6357 # wait one loop iteration until it is surely closed
64 await asyncio.sleep(0, loop=loop)
58 await asyncio.sleep(0)
6559 assert redis.connection.closed
6660
6761 with pytest.raises(ConnectionClosedError):
6862 await redis.ping()
6963
7064
71 @pytest.mark.run_loop
7265 async def test_select(redis):
7366 assert redis.db == 0
7467
7871 assert redis.connection.db == 1
7972
8073
81 @pytest.mark.run_loop
82 async def test_encoding(create_redis, loop, server):
83 redis = await create_redis(
84 server.tcp_address,
85 db=1, encoding='utf-8',
86 loop=loop)
74 async def test_encoding(create_redis, server):
75 redis = await create_redis(server.tcp_address, db=1, encoding='utf-8')
8776 assert redis.encoding == 'utf-8'
8877
8978
90 @pytest.mark.run_loop
91 async def test_yield_from_backwards_compatability(create_redis, server, loop):
92 redis = await create_redis(server.tcp_address, loop=loop)
79 async def test_yield_from_backwards_compatibility(create_redis, server):
80 redis = await create_redis(server.tcp_address)
9381
9482 assert isinstance(redis, Redis)
9583 # TODO: there should be no warning
10088 assert await client.ping()
10189
10290
103 @pytest.redis_version(4, 0, 0, reason="SWAPDB is available since redis>=4.0.0")
104 @pytest.mark.run_loop
105 async def test_swapdb(create_redis, start_server, loop):
91 @redis_version(4, 0, 0, reason="SWAPDB is available since redis>=4.0.0")
92 async def test_swapdb(create_redis, start_server):
10693 server = start_server('swapdb_1')
107 cli1 = await create_redis(server.tcp_address, db=0, loop=loop)
108 cli2 = await create_redis(server.tcp_address, db=1, loop=loop)
94 cli1 = await create_redis(server.tcp_address, db=0)
95 cli2 = await create_redis(server.tcp_address, db=1)
10996
11097 await cli1.flushall()
11198 assert await cli1.set('key', 'val') is True
1313 Channel,
1414 MaxClientsError,
1515 )
16
17
18 @pytest.mark.run_loop
19 async def test_connect_tcp(request, create_connection, loop, server):
20 conn = await create_connection(
21 server.tcp_address, loop=loop)
16 from _testutils import redis_version
17
18
19 async def test_connect_tcp(request, create_connection, server):
20 conn = await create_connection(server.tcp_address)
2221 assert conn.db == 0
2322 assert isinstance(conn.address, tuple)
2423 assert conn.address[0] in ('127.0.0.1', '::1')
2524 assert conn.address[1] == server.tcp_address.port
2625 assert str(conn) == '<RedisConnection [db:0]>'
2726
28 conn = await create_connection(
29 ['localhost', server.tcp_address.port], loop=loop)
27 conn = await create_connection(['localhost', server.tcp_address.port])
3028 assert conn.db == 0
3129 assert isinstance(conn.address, tuple)
3230 assert conn.address[0] in ('127.0.0.1', '::1')
3432 assert str(conn) == '<RedisConnection [db:0]>'
3533
3634
37 @pytest.mark.run_loop
3835 async def test_connect_inject_connection_cls(
3936 request,
4037 create_connection,
41 loop,
4238 server):
4339
4440 class MyConnection(RedisConnection):
4541 pass
4642
4743 conn = await create_connection(
48 server.tcp_address, loop=loop, connection_cls=MyConnection)
44 server.tcp_address, connection_cls=MyConnection)
4945
5046 assert isinstance(conn, MyConnection)
5147
5248
53 @pytest.mark.run_loop
5449 async def test_connect_inject_connection_cls_invalid(
5550 request,
5651 create_connection,
57 loop,
5852 server):
5953
6054 with pytest.raises(AssertionError):
6155 await create_connection(
62 server.tcp_address, loop=loop, connection_cls=type)
63
64
65 @pytest.mark.run_loop
66 async def test_connect_tcp_timeout(request, create_connection, loop, server):
67 with patch.object(loop, 'create_connection') as\
68 open_conn_mock:
69 open_conn_mock.side_effect = lambda *a, **kw: asyncio.sleep(0.2,
70 loop=loop)
56 server.tcp_address, connection_cls=type)
57
58
59 async def test_connect_tcp_timeout(request, create_connection, server):
60 with patch('aioredis.connection.open_connection') as open_conn_mock:
61 open_conn_mock.side_effect = lambda *a, **kw: asyncio.sleep(0.2)
7162 with pytest.raises(asyncio.TimeoutError):
72 await create_connection(
73 server.tcp_address, loop=loop, timeout=0.1)
74
75
76 @pytest.mark.run_loop
63 await create_connection(server.tcp_address, timeout=0.1)
64
65
7766 async def test_connect_tcp_invalid_timeout(
78 request, create_connection, loop, server):
67 request, create_connection, server):
7968 with pytest.raises(ValueError):
8069 await create_connection(
81 server.tcp_address, loop=loop, timeout=0)
82
83
84 @pytest.mark.run_loop
70 server.tcp_address, timeout=0)
71
72
8573 @pytest.mark.skipif(sys.platform == 'win32',
8674 reason="No unixsocket on Windows")
87 async def test_connect_unixsocket(create_connection, loop, server):
88 conn = await create_connection(
89 server.unixsocket, db=0, loop=loop)
75 async def test_connect_unixsocket(create_connection, server):
76 conn = await create_connection(server.unixsocket, db=0)
9077 assert conn.db == 0
9178 assert conn.address == server.unixsocket
9279 assert str(conn) == '<RedisConnection [db:0]>'
9380
9481
95 @pytest.mark.run_loop
9682 @pytest.mark.skipif(sys.platform == 'win32',
9783 reason="No unixsocket on Windows")
98 async def test_connect_unixsocket_timeout(create_connection, loop, server):
99 with patch.object(loop, 'create_unix_connection') as open_conn_mock:
100 open_conn_mock.side_effect = lambda *a, **kw: asyncio.sleep(0.2,
101 loop=loop)
84 async def test_connect_unixsocket_timeout(create_connection, server):
85 with patch('aioredis.connection.open_unix_connection') as open_conn_mock:
86 open_conn_mock.side_effect = lambda *a, **kw: asyncio.sleep(0.2)
10287 with pytest.raises(asyncio.TimeoutError):
103 await create_connection(
104 server.unixsocket, db=0, loop=loop, timeout=0.1)
105
106
107 @pytest.mark.run_loop
108 @pytest.redis_version(2, 8, 0, reason="maxclients config setting")
109 async def test_connect_maxclients(create_connection, loop, start_server):
88 await create_connection(server.unixsocket, db=0, timeout=0.1)
89
90
91 @redis_version(2, 8, 0, reason="maxclients config setting")
92 async def test_connect_maxclients(create_connection, start_server):
11093 server = start_server('server-maxclients')
111 conn = await create_connection(
112 server.tcp_address, loop=loop)
94 conn = await create_connection(server.tcp_address)
11395 await conn.execute(b'CONFIG', b'SET', 'maxclients', 1)
11496
11597 errors = (MaxClientsError, ConnectionClosedError, ConnectionError)
11698 with pytest.raises(errors):
117 conn2 = await create_connection(
118 server.tcp_address, loop=loop)
99 conn2 = await create_connection(server.tcp_address)
119100 await conn2.execute('ping')
120101
121102
122 def test_global_loop(create_connection, loop, server):
123 asyncio.set_event_loop(loop)
124
125 conn = loop.run_until_complete(create_connection(
126 server.tcp_address, db=0))
103 async def test_select_db(create_connection, server):
104 address = server.tcp_address
105 conn = await create_connection(address)
127106 assert conn.db == 0
128 assert conn._loop is loop
129
130
131 @pytest.mark.run_loop
132 async def test_select_db(create_connection, loop, server):
133 address = server.tcp_address
134 conn = await create_connection(address, loop=loop)
135 assert conn.db == 0
136107
137108 with pytest.raises(ValueError):
138 await create_connection(address, db=-1, loop=loop)
139 with pytest.raises(TypeError):
140 await create_connection(address, db=1.0, loop=loop)
141 with pytest.raises(TypeError):
142 await create_connection(
143 address, db='bad value', loop=loop)
144 with pytest.raises(TypeError):
145 conn = await create_connection(
146 address, db=None, loop=loop)
109 await create_connection(address, db=-1)
110 with pytest.raises(TypeError):
111 await create_connection(address, db=1.0)
112 with pytest.raises(TypeError):
113 await create_connection(address, db='bad value')
114 with pytest.raises(TypeError):
115 conn = await create_connection(address, db=None)
147116 await conn.select(None)
148117 with pytest.raises(ReplyError):
149 await create_connection(
150 address, db=100000, loop=loop)
118 await create_connection(address, db=100000)
151119
152120 await conn.select(1)
153121 assert conn.db == 1
159127 assert conn.db == 1
160128
161129
162 @pytest.mark.run_loop
163 async def test_protocol_error(create_connection, loop, server):
164 conn = await create_connection(
165 server.tcp_address, loop=loop)
130 async def test_protocol_error(create_connection, server):
131 conn = await create_connection(server.tcp_address)
166132
167133 reader = conn._reader
168134
174140
175141
176142 def test_close_connection__tcp(create_connection, loop, server):
177 conn = loop.run_until_complete(create_connection(
178 server.tcp_address, loop=loop))
143 conn = loop.run_until_complete(create_connection(server.tcp_address))
179144 conn.close()
180145 with pytest.raises(ConnectionClosedError):
181146 loop.run_until_complete(conn.select(1))
182147
183 conn = loop.run_until_complete(create_connection(
184 server.tcp_address, loop=loop))
148 conn = loop.run_until_complete(create_connection(server.tcp_address))
185149 conn.close()
186150 fut = None
187151 with pytest.raises(ConnectionClosedError):
188152 fut = conn.select(1)
189153 assert fut is None
190154
191 conn = loop.run_until_complete(create_connection(
192 server.tcp_address, loop=loop))
155 conn = loop.run_until_complete(create_connection(server.tcp_address))
193156 conn.close()
194157 with pytest.raises(ConnectionClosedError):
195158 conn.execute_pubsub('subscribe', 'channel:1')
196159
197160
198 @pytest.mark.run_loop
199161 @pytest.mark.skipif(sys.platform == 'win32',
200162 reason="No unixsocket on Windows")
201 async def test_close_connection__socket(create_connection, loop, server):
202 conn = await create_connection(
203 server.unixsocket, loop=loop)
163 async def test_close_connection__socket(create_connection, server):
164 conn = await create_connection(server.unixsocket)
204165 conn.close()
205166 with pytest.raises(ConnectionClosedError):
206167 await conn.select(1)
207168
208 conn = await create_connection(
209 server.unixsocket, loop=loop)
169 conn = await create_connection(server.unixsocket)
210170 conn.close()
211171 with pytest.raises(ConnectionClosedError):
212172 await conn.execute_pubsub('subscribe', 'channel:1')
213173
214174
215 @pytest.mark.run_loop
216175 async def test_closed_connection_with_none_reader(
217 create_connection, loop, server):
176 create_connection, server):
218177 address = server.tcp_address
219 conn = await create_connection(address, loop=loop)
178 conn = await create_connection(address)
220179 stored_reader = conn._reader
221180 conn._reader = None
222181 with pytest.raises(ConnectionClosedError):
224183 conn._reader = stored_reader
225184 conn.close()
226185
227 conn = await create_connection(address, loop=loop)
186 conn = await create_connection(address)
228187 stored_reader = conn._reader
229188 conn._reader = None
230189 with pytest.raises(ConnectionClosedError):
233192 conn.close()
234193
235194
236 @pytest.mark.run_loop
237 async def test_wait_closed(create_connection, loop, server):
195 async def test_wait_closed(create_connection, server):
238196 address = server.tcp_address
239 conn = await create_connection(address, loop=loop)
197 conn = await create_connection(address)
240198 reader_task = conn._reader_task
241199 conn.close()
242200 assert not reader_task.done()
244202 assert reader_task.done()
245203
246204
247 @pytest.mark.run_loop
248205 async def test_cancel_wait_closed(create_connection, loop, server):
249206 # Regression test: Don't throw error if wait_closed() is cancelled.
250207 address = server.tcp_address
251 conn = await create_connection(address, loop=loop)
208 conn = await create_connection(address)
252209 reader_task = conn._reader_task
253210 conn.close()
254 task = asyncio.ensure_future(conn.wait_closed(), loop=loop)
211 task = asyncio.ensure_future(conn.wait_closed())
255212
256213 # Make sure the task is cancelled
257214 # after it has been started by the loop.
261218 assert reader_task.done()
262219
263220
264 @pytest.mark.run_loop
265 async def test_auth(create_connection, loop, server):
266 conn = await create_connection(
267 server.tcp_address, loop=loop)
221 async def test_auth(create_connection, server):
222 conn = await create_connection(server.tcp_address)
268223
269224 res = await conn.execute('CONFIG', 'SET', 'requirepass', 'pass')
270225 assert res == b'OK'
271226
272 conn2 = await create_connection(
273 server.tcp_address, loop=loop)
227 conn2 = await create_connection(server.tcp_address)
274228
275229 with pytest.raises(ReplyError):
276230 await conn2.select(1)
280234 res = await conn2.select(1)
281235 assert res is True
282236
283 conn3 = await create_connection(
284 server.tcp_address, password='pass', loop=loop)
237 conn3 = await create_connection(server.tcp_address, password='pass')
285238
286239 res = await conn3.select(1)
287240 assert res is True
290243 assert res == b'OK'
291244
292245
293 @pytest.mark.run_loop
294 async def test_decoding(create_connection, loop, server):
295 conn = await create_connection(
296 server.tcp_address, encoding='utf-8', loop=loop)
246 async def test_decoding(create_connection, server):
247 conn = await create_connection(server.tcp_address, encoding='utf-8')
297248 assert conn.encoding == 'utf-8'
298249 res = await conn.execute('set', '{prefix}:key1', 'value')
299250 assert res == 'OK'
314265 await conn.execute('set', '{prefix}:key1', 'значение')
315266 await conn.execute('get', '{prefix}:key1', encoding='ascii')
316267
317 conn2 = await create_connection(
318 server.tcp_address, loop=loop)
268 conn2 = await create_connection(server.tcp_address)
319269 res = await conn2.execute('get', '{prefix}:key1', encoding='utf-8')
320270 assert res == 'значение'
321271
322272
323 @pytest.mark.run_loop
324 async def test_execute_exceptions(create_connection, loop, server):
325 conn = await create_connection(
326 server.tcp_address, loop=loop)
273 async def test_execute_exceptions(create_connection, server):
274 conn = await create_connection(server.tcp_address)
327275 with pytest.raises(TypeError):
328276 await conn.execute(None)
329277 with pytest.raises(TypeError):
333281 assert len(conn._waiters) == 0
334282
335283
336 @pytest.mark.run_loop
337 async def test_subscribe_unsubscribe(create_connection, loop, server):
338 conn = await create_connection(
339 server.tcp_address, loop=loop)
284 async def test_subscribe_unsubscribe(create_connection, server):
285 conn = await create_connection(server.tcp_address)
340286
341287 assert conn.in_pubsub == 0
342288
364310 assert conn.in_pubsub == 1
365311
366312
367 @pytest.mark.run_loop
368 async def test_psubscribe_punsubscribe(create_connection, loop, server):
369 conn = await create_connection(
370 server.tcp_address, loop=loop)
313 async def test_psubscribe_punsubscribe(create_connection, server):
314 conn = await create_connection(server.tcp_address)
371315 res = await conn.execute('psubscribe', 'chan:*')
372316 assert res == [[b'psubscribe', b'chan:*', 1]]
373317 assert conn.in_pubsub == 1
374318
375319
376 @pytest.mark.run_loop
377 async def test_bad_command_in_pubsub(create_connection, loop, server):
378 conn = await create_connection(
379 server.tcp_address, loop=loop)
320 async def test_bad_command_in_pubsub(create_connection, server):
321 conn = await create_connection(server.tcp_address)
380322
381323 res = await conn.execute('subscribe', 'chan:1')
382324 assert res == [[b'subscribe', b'chan:1', 1]]
388330 conn.execute('get')
389331
390332
391 @pytest.mark.run_loop
392 async def test_pubsub_messages(create_connection, loop, server):
393 sub = await create_connection(
394 server.tcp_address, loop=loop)
395 pub = await create_connection(
396 server.tcp_address, loop=loop)
333 async def test_pubsub_messages(create_connection, server):
334 sub = await create_connection(server.tcp_address)
335 pub = await create_connection(server.tcp_address)
397336 res = await sub.execute('subscribe', 'chan:1')
398337 assert res == [[b'subscribe', b'chan:1', 1]]
399338
425364 assert msg == b'Hello!'
426365
427366
428 @pytest.mark.run_loop
429 async def test_multiple_subscribe_unsubscribe(create_connection, loop, server):
430 sub = await create_connection(server.tcp_address, loop=loop)
367 async def test_multiple_subscribe_unsubscribe(create_connection, server):
368 sub = await create_connection(server.tcp_address)
431369
432370 res = await sub.execute_pubsub('subscribe', 'chan:1')
433371 ch = sub.pubsub_channels['chan:1']
455393 assert res == [[b'punsubscribe', b'chan:*', 0]]
456394
457395
458 @pytest.mark.run_loop
459 async def test_execute_pubsub_errors(create_connection, loop, server):
460 sub = await create_connection(
461 server.tcp_address, loop=loop)
396 async def test_execute_pubsub_errors(create_connection, server):
397 sub = await create_connection(server.tcp_address)
462398
463399 with pytest.raises(TypeError):
464400 sub.execute_pubsub('subscribe', "chan:1", None)
467403 with pytest.raises(ValueError):
468404 sub.execute_pubsub(
469405 'subscribe',
470 Channel('chan:1', is_pattern=True, loop=loop))
406 Channel('chan:1', is_pattern=True))
471407 with pytest.raises(ValueError):
472408 sub.execute_pubsub(
473409 'unsubscribe',
474 Channel('chan:1', is_pattern=True, loop=loop))
410 Channel('chan:1', is_pattern=True))
475411 with pytest.raises(ValueError):
476412 sub.execute_pubsub(
477413 'psubscribe',
478 Channel('chan:1', is_pattern=False, loop=loop))
414 Channel('chan:1', is_pattern=False))
479415 with pytest.raises(ValueError):
480416 sub.execute_pubsub(
481417 'punsubscribe',
482 Channel('chan:1', is_pattern=False, loop=loop))
483
484
485 @pytest.mark.run_loop
486 async def test_multi_exec(create_connection, loop, server):
487 conn = await create_connection(server.tcp_address, loop=loop)
418 Channel('chan:1', is_pattern=False))
419
420
421 async def test_multi_exec(create_connection, server):
422 conn = await create_connection(server.tcp_address)
488423
489424 ok = await conn.execute('set', 'foo', 'bar')
490425 assert ok == b'OK'
504439 assert res == b'OK'
505440
506441
507 @pytest.mark.run_loop
508 async def test_multi_exec__enc(create_connection, loop, server):
509 conn = await create_connection(
510 server.tcp_address, loop=loop, encoding='utf-8')
442 async def test_multi_exec__enc(create_connection, server):
443 conn = await create_connection(server.tcp_address, encoding='utf-8')
511444
512445 ok = await conn.execute('set', 'foo', 'bar')
513446 assert ok == 'OK'
527460 assert res == 'OK'
528461
529462
530 @pytest.mark.run_loop
531 async def test_connection_parser_argument(create_connection, server, loop):
463 async def test_connection_parser_argument(create_connection, server):
532464 klass = mock.MagicMock()
533465 klass.return_value = reader = mock.Mock()
534 conn = await create_connection(server.tcp_address,
535 parser=klass, loop=loop)
466 conn = await create_connection(server.tcp_address, parser=klass)
536467
537468 assert klass.mock_calls == [
538469 mock.call(protocolError=ProtocolError, replyError=ReplyError),
548479 assert b'+PONG\r\n' == await conn.execute('ping')
549480
550481
551 @pytest.mark.run_loop
552 async def test_connection_idle_close(create_connection, start_server, loop):
482 async def test_connection_idle_close(create_connection, start_server):
553483 server = start_server('idle')
554 conn = await create_connection(server.tcp_address, loop=loop)
484 conn = await create_connection(server.tcp_address)
555485 ok = await conn.execute("config", "set", "timeout", 1)
556486 assert ok == b'OK'
557487
558 await asyncio.sleep(3, loop=loop)
488 await asyncio.sleep(3)
559489
560490 with pytest.raises(ConnectionClosedError):
561491 assert await conn.execute('ping') is None
566496 {'db': 1},
567497 {'encoding': 'utf-8'},
568498 ], ids=repr)
569 @pytest.mark.run_loop
570499 async def test_create_connection__tcp_url(
571 create_connection, server_tcp_url, loop, kwargs):
500 create_connection, server_tcp_url, kwargs):
572501 url = server_tcp_url(**kwargs)
573502 db = kwargs.get('db', 0)
574503 enc = kwargs.get('encoding', None)
575 conn = await create_connection(url, loop=loop)
504 conn = await create_connection(url)
576505 pong = b'PONG' if not enc else b'PONG'.decode(enc)
577506 assert await conn.execute('ping') == pong
578507 assert conn.db == db
586515 {'db': 1},
587516 {'encoding': 'utf-8'},
588517 ], ids=repr)
589 @pytest.mark.run_loop
590518 async def test_create_connection__unix_url(
591 create_connection, server_unix_url, loop, kwargs):
519 create_connection, server_unix_url, kwargs):
592520 url = server_unix_url(**kwargs)
593521 db = kwargs.get('db', 0)
594522 enc = kwargs.get('encoding', None)
595 conn = await create_connection(url, loop=loop)
523 conn = await create_connection(url)
596524 pong = b'PONG' if not enc else b'PONG'.decode(enc)
597525 assert await conn.execute('ping') == pong
598526 assert conn.db == db
66 from unittest import mock
77
88 from aioredis import ReplyError
9 from _testutils import redis_version
910
1011
1112 async def add(redis, key, value):
1314 assert ok == b'OK'
1415
1516
16 @pytest.mark.run_loop
1717 async def test_delete(redis):
1818 await add(redis, 'my-key', 123)
1919 await add(redis, 'other-key', 123)
3131 await redis.delete('my-key', 'my-key', None)
3232
3333
34 @pytest.mark.run_loop
3534 async def test_dump(redis):
3635 await add(redis, 'my-key', 123)
3736
4746 await redis.dump(None)
4847
4948
50 @pytest.mark.run_loop
5149 async def test_exists(redis, server):
5250 await add(redis, 'my-key', 123)
5351
6664 await redis.exists('key-1', 'key-2')
6765
6866
69 @pytest.redis_version(
67 @redis_version(
7068 3, 0, 3, reason='Multi-key EXISTS available since redis>=2.8.0')
71 @pytest.mark.run_loop
7269 async def test_exists_multiple(redis):
7370 await add(redis, 'my-key', 123)
7471
8582 assert res == 0
8683
8784
88 @pytest.mark.run_loop
8985 async def test_expire(redis):
9086 await add(redis, 'my-key', 132)
9187
114110 await redis.expire('my-key', 'timeout')
115111
116112
117 @pytest.mark.run_loop
118113 async def test_expireat(redis):
119114 await add(redis, 'my-key', 123)
120115 now = math.ceil(time.time())
151146 await redis.expireat('my-key', 'timestamp')
152147
153148
154 @pytest.mark.run_loop
155149 async def test_keys(redis):
156150 res = await redis.keys('*pattern*')
157151 assert res == []
176170 await redis.keys(None)
177171
178172
179 @pytest.mark.run_loop
180 async def test_migrate(create_redis, loop, server, serverB):
173 async def test_migrate(create_redis, server, serverB):
181174 redisA = await create_redis(server.tcp_address)
182175 redisB = await create_redis(serverB.tcp_address, db=2)
183176
209202 await redisA.migrate('host', 6379, 'key', 1, -1000)
210203
211204
212 @pytest.redis_version(
205 @redis_version(
213206 3, 0, 0, reason="Copy/Replace flags available since Redis 3.0")
214 @pytest.mark.run_loop
215 async def test_migrate_copy_replace(create_redis, loop, server, serverB):
207 async def test_migrate_copy_replace(create_redis, server, serverB):
216208 redisA = await create_redis(server.tcp_address)
217209 redisB = await create_redis(serverB.tcp_address, db=0)
218210
232224 assert (await redisB.get('my-key'))
233225
234226
235 @pytest.redis_version(
227 @redis_version(
236228 3, 0, 6, reason="MIGRATE…KEYS available since Redis 3.0.6")
237229 @pytest.mark.skipif(
238230 sys.platform == 'win32', reason="Seems to be unavailable in win32 build")
239 @pytest.mark.run_loop
240 async def test_migrate_keys(create_redis, loop, server, serverB):
231 async def test_migrate_keys(create_redis, server, serverB):
241232 redisA = await create_redis(server.tcp_address)
242233 redisB = await create_redis(serverB.tcp_address, db=0)
243234
292283 assert (await redisA.get('key3')) is None
293284
294285
295 @pytest.mark.run_loop
296 async def test_migrate__exceptions(redis, loop, server, unused_port):
286 async def test_migrate__exceptions(redis, server, unused_port):
297287 await add(redis, 'my-key', 123)
298288
299289 assert (await redis.exists('my-key'))
304294 'my-key', dest_db=30, timeout=10))
305295
306296
307 @pytest.redis_version(
297 @redis_version(
308298 3, 0, 6, reason="MIGRATE…KEYS available since Redis 3.0.6")
309299 @pytest.mark.skipif(
310300 sys.platform == 'win32', reason="Seems to be unavailable in win32 build")
311 @pytest.mark.run_loop
312301 async def test_migrate_keys__errors(redis):
313302 with pytest.raises(TypeError, match="host .* str"):
314303 await redis.migrate_keys(None, 1234, 'key', 1, 23)
328317 await redis.migrate_keys('host', '1234', (), 2, 123)
329318
330319
331 @pytest.mark.run_loop
332320 async def test_move(redis):
333321 await add(redis, 'my-key', 123)
334322
346334 await redis.move('my-key', 'not db')
347335
348336
349 @pytest.mark.run_loop
350337 async def test_object_refcount(redis):
351338 await add(redis, 'foo', 'bar')
352339
359346 await redis.object_refcount(None)
360347
361348
362 @pytest.mark.run_loop
363349 async def test_object_encoding(redis, server):
364350 await add(redis, 'foo', 'bar')
365351
366352 res = await redis.object_encoding('foo')
367353
368354 if server.version < (3, 0, 0):
369 assert res == b'raw'
355 assert res == 'raw'
370356 else:
371 assert res == b'embstr'
357 assert res == 'embstr'
372358
373359 res = await redis.incr('key')
374360 assert res == 1
375361 res = await redis.object_encoding('key')
376 assert res == b'int'
362 assert res == 'int'
377363 res = await redis.object_encoding('non-existent-key')
378364 assert res is None
379365
381367 await redis.object_encoding(None)
382368
383369
384 @pytest.mark.run_loop(timeout=20)
385 async def test_object_idletime(redis, loop, server):
370 @redis_version(
371 3, 0, 0, reason="Older Redis version has lower idle time resolution")
372 @pytest.mark.timeout(20)
373 async def test_object_idletime(redis, server):
386374 await add(redis, 'foo', 'bar')
387375
388376 res = await redis.object_idletime('foo')
392380 res = 0
393381 while not res:
394382 res = await redis.object_idletime('foo')
395 await asyncio.sleep(.5, loop=loop)
383 await asyncio.sleep(.5)
396384 assert res >= 1
397385
398386 res = await redis.object_idletime('non-existent-key')
402390 await redis.object_idletime(None)
403391
404392
405 @pytest.mark.run_loop
406393 async def test_persist(redis):
407394 await add(redis, 'my-key', 123)
408395 res = await redis.expire('my-key', 10)
418405 await redis.persist(None)
419406
420407
421 @pytest.mark.run_loop
422 async def test_pexpire(redis, loop):
408 async def test_pexpire(redis):
423409 await add(redis, 'my-key', 123)
424410 res = await redis.pexpire('my-key', 100)
425411 assert res is True
434420 assert res is True
435421
436422 # XXX: this test now looks strange to me.
437 await asyncio.sleep(.2, loop=loop)
423 await asyncio.sleep(.2)
438424
439425 res = await redis.exists('my-key')
440426 assert not res
445431 await redis.pexpire('my-key', 1.0)
446432
447433
448 @pytest.mark.run_loop
449434 async def test_pexpireat(redis):
450435 await add(redis, 'my-key', 123)
451 now = math.ceil((await redis.time()) * 1000)
436 now = int((await redis.time()) * 1000)
452437 fut1 = redis.pexpireat('my-key', now + 2000)
453438 fut2 = redis.ttl('my-key')
454439 fut3 = redis.pttl('my-key')
455 assert (await fut1) is True
456 assert (await fut2) == 2
457 pytest.assert_almost_equal((await fut3), 2000, -3)
440 assert await fut1 is True
441 assert await fut2 == 2
442 assert 1000 < await fut3 <= 2000
458443
459444 with pytest.raises(TypeError):
460445 await redis.pexpireat(None, 1234)
464449 await redis.pexpireat('key', 1000.0)
465450
466451
467 @pytest.mark.run_loop
468452 async def test_pttl(redis, server):
469453 await add(redis, 'key', 'val')
470454 res = await redis.pttl('key')
477461
478462 await redis.pexpire('key', 500)
479463 res = await redis.pttl('key')
480 pytest.assert_almost_equal(res, 500, -2)
464 assert 400 < res <= 500
481465
482466 with pytest.raises(TypeError):
483467 await redis.pttl(None)
484468
485469
486 @pytest.mark.run_loop
487470 async def test_randomkey(redis):
488471 await add(redis, 'key:1', 123)
489472 await add(redis, 'key:2', 123)
501484 assert res is None
502485
503486
504 @pytest.mark.run_loop
505487 async def test_rename(redis, server):
506488 await add(redis, 'foo', 'bar')
507489 await redis.delete('bar')
523505 await redis.rename('bar', b'bar')
524506
525507
526 @pytest.mark.run_loop
527508 async def test_renamenx(redis, server):
528509 await redis.delete('foo', 'bar')
529510 await add(redis, 'foo', 123)
549530 await redis.renamenx('foo', b'foo')
550531
551532
552 @pytest.mark.run_loop
553533 async def test_restore(redis):
554534 ok = await redis.set('key', 'value')
555535 assert ok
561541 assert (await redis.get('key')) == b'value'
562542
563543
564 @pytest.redis_version(2, 8, 0, reason='SCAN is available since redis>=2.8.0')
565 @pytest.mark.run_loop
544 @redis_version(2, 8, 0, reason='SCAN is available since redis>=2.8.0')
566545 async def test_scan(redis):
567546 for i in range(1, 11):
568547 foo_or_bar = 'bar' if i % 3 else 'foo'
602581 assert len(test_values) == 10
603582
604583
605 @pytest.mark.run_loop
606584 async def test_sort(redis):
607585 async def _make_list(key, items):
608586 await redis.delete(key)
659637 assert res == [b'10', b'30', b'20']
660638
661639
662 @pytest.redis_version(3, 2, 1, reason="TOUCH is available since redis>=3.2.1")
663 @pytest.mark.run_loop(timeout=20)
664 async def test_touch(redis, loop):
640 @redis_version(3, 2, 1, reason="TOUCH is available since redis>=3.2.1")
641 @pytest.mark.timeout(20)
642 async def test_touch(redis):
665643 await add(redis, 'key', 'val')
666644 res = 0
667645 while not res:
668646 res = await redis.object_idletime('key')
669 await asyncio.sleep(.5, loop=loop)
647 await asyncio.sleep(.5)
670648 assert res > 0
671649 assert await redis.touch('key', 'key', 'key') == 3
672650 res2 = await redis.object_idletime('key')
673651 assert 0 <= res2 < res
674652
675653
676 @pytest.mark.run_loop
677654 async def test_ttl(redis, server):
678655 await add(redis, 'key', 'val')
679656 res = await redis.ttl('key')
692669 await redis.ttl(None)
693670
694671
695 @pytest.mark.run_loop
696672 async def test_type(redis):
697673 await add(redis, 'key', 'val')
698674 res = await redis.type('key')
715691 await redis.type(None)
716692
717693
718 @pytest.redis_version(2, 8, 0, reason='SCAN is available since redis>=2.8.0')
719 @pytest.mark.run_loop
694 @redis_version(2, 8, 0, reason='SCAN is available since redis>=2.8.0')
720695 async def test_iscan(redis):
721696 full = set()
722697 foo = set()
760735 assert set(ret) == full
761736
762737
763 @pytest.redis_version(4, 0, 0, reason="UNLINK is available since redis>=4.0.0")
764 @pytest.mark.run_loop
738 @redis_version(4, 0, 0, reason="UNLINK is available since redis>=4.0.0")
765739 async def test_unlink(redis):
766740 await add(redis, 'my-key', 123)
767741 await add(redis, 'other-key', 123)
779753 await redis.unlink('my-key', 'my-key', None)
780754
781755
782 @pytest.redis_version(3, 0, 0, reason="WAIT is available since redis>=3.0.0")
783 @pytest.mark.run_loop
784 async def test_wait(redis, loop):
756 @redis_version(3, 0, 0, reason="WAIT is available since redis>=3.0.0")
757 async def test_wait(redis):
785758 await add(redis, 'key', 'val1')
786759 start = await redis.time()
787760 res = await redis.wait(1, 400)
00 import pytest
11
22 from aioredis import GeoPoint, GeoMember
3
4
5 @pytest.mark.run_loop
6 @pytest.redis_version(
3 from _testutils import redis_version
4
5
6 @redis_version(
77 3, 2, 0, reason='GEOADD is available since redis >= 3.2.0')
88 async def test_geoadd(redis):
99 res = await redis.geoadd('geodata', 13.361389, 38.115556, 'Palermo')
1717 assert res == 2
1818
1919
20 @pytest.mark.run_loop
21 @pytest.redis_version(
20 @redis_version(
2221 3, 2, 0, reason='GEODIST is available since redis >= 3.2.0')
2322 async def test_geodist(redis):
2423 res = await redis.geoadd(
3534 assert res == 166.2742
3635
3736
38 @pytest.mark.run_loop
39 @pytest.redis_version(
37 @redis_version(
4038 3, 2, 0, reason='GEOHASH is available since redis >= 3.2.0')
4139 async def test_geohash(redis):
4240 res = await redis.geoadd(
5755 assert res == ['sqc8b49rny0', 'sqdtr74hyu0']
5856
5957
60 @pytest.mark.run_loop
61 @pytest.redis_version(
58 @redis_version(
6259 3, 2, 0, reason='GEOPOS is available since redis >= 3.2.0')
6360 async def test_geopos(redis):
6461 res = await redis.geoadd(
8077 ]
8178
8279
83 @pytest.mark.run_loop
84 @pytest.redis_version(
80 @redis_version(
8581 3, 2, 0, reason='GEO* is available since redis >= 3.2.0')
8682 async def test_geo_not_exist_members(redis):
8783 res = await redis.geoadd('geodata', 13.361389, 38.115556, 'Palermo')
115111 ]
116112
117113
118 @pytest.mark.run_loop
119 @pytest.redis_version(
114 @redis_version(
120115 3, 2, 0, reason='GEORADIUS is available since redis >= 3.2.0')
121116 async def test_georadius_validation(redis):
122117 res = await redis.geoadd(
143138 )
144139
145140
146 @pytest.mark.run_loop
147 @pytest.redis_version(
141 @redis_version(
148142 3, 2, 0, reason='GEORADIUS is available since redis >= 3.2.0')
149143 async def test_georadius(redis):
150144 res = await redis.geoadd(
262256 ]
263257
264258
265 @pytest.mark.run_loop
266 @pytest.redis_version(
259 @redis_version(
267260 3, 2, 0, reason='GEORADIUSBYMEMBER is available since redis >= 3.2.0')
268261 async def test_georadiusbymember(redis):
269262 res = await redis.geoadd(
316309 ]
317310
318311
319 @pytest.mark.run_loop
320 @pytest.redis_version(
312 @redis_version(
321313 3, 2, 0, reason='GEOHASH is available since redis >= 3.2.0')
322314 async def test_geohash_binary(redis):
323315 res = await redis.geoadd(
338330 assert res == [b'sqc8b49rny0', b'sqdtr74hyu0']
339331
340332
341 @pytest.mark.run_loop
342 @pytest.redis_version(
333 @redis_version(
343334 3, 2, 0, reason='GEORADIUS is available since redis >= 3.2.0')
344335 async def test_georadius_binary(redis):
345336 res = await redis.geoadd(
457448 ]
458449
459450
460 @pytest.mark.run_loop
461 @pytest.redis_version(
451 @redis_version(
462452 3, 2, 0, reason='GEORADIUSBYMEMBER is available since redis >= 3.2.0')
463453 async def test_georadiusbymember_binary(redis):
464454 res = await redis.geoadd(
00 import pytest
11
22 from aioredis import ReplyError
3 from _testutils import redis_version
34
45
56 async def add(redis, key, field, value):
89 assert ok == 1
910
1011
11 @pytest.mark.run_loop
1212 async def test_hdel(redis):
1313 key, field, value = b'key:hdel', b'bar', b'zap'
1414 await add(redis, key, field, value)
2323 await redis.hdel(None, field)
2424
2525
26 @pytest.mark.run_loop
2726 async def test_hexists(redis):
2827 key, field, value = b'key:hexists', b'bar', b'zap'
2928 await add(redis, key, field, value)
4140 await redis.hexists(None, field)
4241
4342
44 @pytest.mark.run_loop
4543 async def test_hget(redis):
4644
4745 key, field, value = b'key:hget', b'bar', b'zap'
6462 await redis.hget(None, field)
6563
6664
67 @pytest.mark.run_loop
6865 async def test_hgetall(redis):
6966 await add(redis, 'key:hgetall', 'foo', 'baz')
7067 await add(redis, 'key:hgetall', 'bar', 'zap')
8582 await redis.hgetall(None)
8683
8784
88 @pytest.mark.run_loop
8985 async def test_hincrby(redis):
9086 key, field, value = b'key:hincrby', b'bar', 1
9187 await add(redis, key, field, value)
120116 await redis.hincrby(None, field, 2)
121117
122118
123 @pytest.mark.run_loop
124119 async def test_hincrbyfloat(redis):
125120 key, field, value = b'key:hincrbyfloat', b'bar', 2.71
126121 await add(redis, key, field, value)
145140 await redis.hincrbyfloat(None, field, 2)
146141
147142
148 @pytest.mark.run_loop
149143 async def test_hkeys(redis):
150144 key = b'key:hkeys'
151145 field1, field2 = b'foo', b'bar'
166160 await redis.hkeys(None)
167161
168162
169 @pytest.mark.run_loop
170163 async def test_hlen(redis):
171164 key = b'key:hlen'
172165 field1, field2 = b'foo', b'bar'
184177 await redis.hlen(None)
185178
186179
187 @pytest.mark.run_loop
188180 async def test_hmget(redis):
189181 key = b'key:hmget'
190182 field1, field2 = b'foo', b'bar'
209201 await redis.hmget(None, field1, field2)
210202
211203
212 @pytest.mark.run_loop
213204 async def test_hmset(redis):
214205 key, field, value = b'key:hmset', b'bar', b'zap'
215206 await add(redis, key, field, value)
247238 await redis.hmset(key)
248239
249240
250 @pytest.mark.run_loop
251241 async def test_hmset_dict(redis):
252242 key = 'key:hmset'
253243
299289 await redis.hmset_dict(key, {'a': 1}, {'b': 2}, 'c', 3, d=4)
300290
301291
302 @pytest.mark.run_loop
303292 async def test_hset(redis):
304293 key, field, value = b'key:hset', b'bar', b'zap'
305294 test_value = await redis.hset(key, field, value)
318307 await redis.hset(None, field, value)
319308
320309
321 @pytest.mark.run_loop
322310 async def test_hsetnx(redis):
323311 key, field, value = b'key:hsetnx', b'bar', b'zap'
324312 # field does not exist, operation should be successful
338326 await redis.hsetnx(None, field, value)
339327
340328
341 @pytest.mark.run_loop
342329 async def test_hvals(redis):
343330 key = b'key:hvals'
344331 field1, field2 = b'foo', b'bar'
358345 await redis.hvals(None)
359346
360347
361 @pytest.redis_version(2, 8, 0, reason='HSCAN is available since redis>=2.8.0')
362 @pytest.mark.run_loop
348 @redis_version(2, 8, 0, reason='HSCAN is available since redis>=2.8.0')
363349 async def test_hscan(redis):
364350 key = b'key:hscan'
365351 # set up initial values: 3 "field:foo:*" items and 7 "field:bar:*" items
403389 await redis.hscan(None)
404390
405391
406 @pytest.mark.run_loop
407 async def test_hgetall_enc(create_redis, loop, server):
408 redis = await create_redis(
409 server.tcp_address, loop=loop, encoding='utf-8')
392 async def test_hgetall_enc(create_redis, server):
393 redis = await create_redis(server.tcp_address, encoding='utf-8')
410394 TEST_KEY = 'my-key-nx'
411395 await redis.hmset(TEST_KEY, 'foo', 'bar', 'baz', 'bad')
412396
416400 assert res == [{'foo': 'bar', 'baz': 'bad'}]
417401
418402
419 @pytest.mark.run_loop
420 @pytest.redis_version(3, 2, 0, reason="HSTRLEN new in redis 3.2.0")
403 @redis_version(3, 2, 0, reason="HSTRLEN new in redis 3.2.0")
421404 async def test_hstrlen(redis):
422405 ok = await redis.hset('myhash', 'str_field', 'some value')
423406 assert ok == 1
441424 assert l == 0
442425
443426
444 @pytest.redis_version(2, 8, 0, reason='HSCAN is available since redis>=2.8.0')
445 @pytest.mark.run_loop
427 @redis_version(2, 8, 0, reason='HSCAN is available since redis>=2.8.0')
446428 async def test_ihscan(redis):
447429 key = b'key:hscan'
448430 # set up initial values: 3 "field:foo:*" items and 7 "field:bar:*" items
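Across these test modules the version gate changes from the `pytest.redis_version` plugin attribute to a plain `redis_version` helper imported from `_testutils`. A minimal sketch of what such a helper can look like, assuming the suite keeps using a `redis_version` pytest mark that a fixture later compares against the running server (the exact upstream implementation may differ):

    import pytest

    def redis_version(*version, reason):
        # Attach a `redis_version` mark carrying the minimum server version;
        # a fixture or plugin hook reads the mark, compares it with the
        # version reported by the running Redis server, and skips the test
        # when the server is too old.
        assert 1 < len(version) <= 3, version
        assert all(isinstance(v, int) for v in version), version
        return pytest.mark.redis_version(version=version, reason=reason)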
00 import pytest
11
2 from _testutils import redis_version
23
3 pytestmark = pytest.redis_version(
4 pytestmark = redis_version(
45 2, 8, 9, reason='HyperLogLog works only with redis>=2.8.9')
56
67
7 @pytest.mark.run_loop
88 async def test_pfcount(redis):
99 key = 'hll_pfcount'
1010 other_key = 'some-other-hll'
4141 await redis.pfcount(key, key, None)
4242
4343
44 @pytest.mark.run_loop
4544 async def test_pfadd(redis):
4645 key = 'hll_pfadd'
4746 values = ['a', 's', 'y', 'n', 'c', 'i', 'o']
5352 assert is_changed == 0
5453
5554
56 @pytest.mark.run_loop
5755 async def test_pfadd_wrong_input(redis):
5856 with pytest.raises(TypeError):
5957 await redis.pfadd(None, 'value')
6058
6159
62 @pytest.mark.run_loop
6360 async def test_pfmerge(redis):
6461 key = 'hll_asyncio'
6562 key_other = 'hll_aioredis'
9592 await redis.pfmerge(key_dest, key, None)
9693
9794
98 @pytest.mark.run_loop
9995 async def test_pfmerge_wrong_input(redis):
10096 with pytest.raises(TypeError):
10197 await redis.pfmerge(None, 'value')
44
55
66 @pytest.fixture
7 def pool_or_redis(_closable, server, loop):
7 def pool_or_redis(_closable, server):
88 version = tuple(map(int, aioredis.__version__.split('.')[:2]))
99 if version >= (1, 0):
1010 factory = aioredis.create_redis_pool
1212 factory = aioredis.create_pool
1313
1414 async def redis_factory(maxsize):
15 redis = await factory(server.tcp_address, loop=loop,
15 redis = await factory(server.tcp_address,
1616 minsize=1, maxsize=maxsize)
1717 _closable(redis)
1818 return redis
1919 return redis_factory
2020
2121
22 async def simple_get_set(pool, idx, loop):
22 async def simple_get_set(pool, idx):
2323 """A simple test to make sure Redis(pool) can be used as old Pool(Redis).
2424 """
2525 val = 'val:{}'.format(idx)
2828 await redis.get('key', encoding='utf-8')
2929
3030
31 async def pipeline(pool, val, loop):
31 async def pipeline(pool, val):
3232 val = 'val:{}'.format(val)
3333 with await pool as redis:
3434 f1 = redis.set('key', val)
3535 f2 = redis.get('key', encoding='utf-8')
36 ok, res = await asyncio.gather(f1, f2, loop=loop)
36 ok, res = await asyncio.gather(f1, f2)
3737
3838
39 async def transaction(pool, val, loop):
39 async def transaction(pool, val):
4040 val = 'val:{}'.format(val)
4141 with await pool as redis:
4242 tr = redis.multi_exec()
4747 assert res == val
4848
4949
50 async def blocking_pop(pool, val, loop):
50 async def blocking_pop(pool, val):
5151
5252 async def lpush():
5353 with await pool as redis:
5454 # here v0.3 has a bound connection, v1.0 does not;
55 await asyncio.sleep(.1, loop=loop)
55 await asyncio.sleep(.1)
5656 await redis.lpush('list-key', 'val')
5757
5858 async def blpop():
6161 res = await redis.blpop(
6262 'list-key', timeout=2, encoding='utf-8')
6363 assert res == ['list-key', 'val'], res
64 await asyncio.gather(blpop(), lpush(), loop=loop)
64 await asyncio.gather(blpop(), lpush())
6565
6666
67 @pytest.mark.run_loop
6867 @pytest.mark.parametrize('test_case,pool_size', [
6968 (simple_get_set, 1),
7069 (pipeline, 1),
7978 (transaction, 10),
8079 (blocking_pop, 10),
8180 ], ids=lambda o: getattr(o, '__name__', repr(o)))
82 async def test_operations(pool_or_redis, test_case, pool_size, loop):
81 async def test_operations(pool_or_redis, test_case, pool_size):
8382 repeat = 100
8483 redis = await pool_or_redis(pool_size)
8584 done, pending = await asyncio.wait(
86 [asyncio.ensure_future(test_case(redis, i, loop), loop=loop)
87 for i in range(repeat)], loop=loop)
85 [asyncio.ensure_future(test_case(redis, i))
86 for i in range(repeat)])
8887
8988 assert not pending
9089 success = 0
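The dominant change in this file, as in the rest of the suite, is dropping the explicit `loop` plumbing deprecated in 1.3.x: helpers lose their `loop` parameter, and `asyncio.gather`, `asyncio.ensure_future` and `asyncio.sleep` are called without `loop=`. The new shape of the pattern, as a small sketch assuming an already-connected `redis` client:

    import asyncio

    async def get_set(redis, idx):
        # Pre-1.3.x code threaded loop= through every call; the coroutine
        # now simply uses the loop it is already running on.
        val = 'val:{}'.format(idx)
        f1 = redis.set('key', val)
        f2 = redis.get('key', encoding='utf-8')
        ok, res = await asyncio.gather(f1, f2)  # no loop= argument
        return ok, res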
33 from aioredis import ReplyError
44
55
6 async def push_data_with_sleep(redis, loop, key, *values):
7 await asyncio.sleep(0.2, loop=loop)
6 async def push_data_with_sleep(redis, key, *values):
7 await asyncio.sleep(0.2)
88 result = await redis.lpush(key, *values)
99 return result
1010
1111
12 @pytest.mark.run_loop
1312 async def test_blpop(redis):
1413 key1, value1 = b'key:blpop:1', b'blpop:value:1'
1514 key2, value2 = b'key:blpop:2', b'blpop:value:2'
3938 assert test_value == ['key:blpop:2', 'blpop:value:1']
4039
4140
42 @pytest.mark.run_loop
43 async def test_blpop_blocking_features(redis, create_redis, loop, server):
41 async def test_blpop_blocking_features(redis, create_redis, server):
4442 key1, key2 = b'key:blpop:1', b'key:blpop:2'
4543 value = b'blpop:value:2'
4644
47 other_redis = await create_redis(
48 server.tcp_address, loop=loop)
45 other_redis = await create_redis(server.tcp_address)
4946
5047 # create blocking task in separate connection
5148 consumer = other_redis.blpop(key1, key2)
5249
53 producer_task = asyncio.Task(
54 push_data_with_sleep(redis, loop, key2, value), loop=loop)
55 results = await asyncio.gather(
56 consumer, producer_task, loop=loop)
50 producer_task = asyncio.ensure_future(
51 push_data_with_sleep(redis, key2, value))
52 results = await asyncio.gather(consumer, producer_task)
5753
5854 assert results[0] == [key2, value]
5955 assert results[1] == 1
6662 other_redis.close()
6763
6864
69 @pytest.mark.run_loop
7065 async def test_brpop(redis):
7166 key1, value1 = b'key:brpop:1', b'brpop:value:1'
7267 key2, value2 = b'key:brpop:2', b'brpop:value:2'
9691 assert test_value == ['key:brpop:2', 'brpop:value:1']
9792
9893
99 @pytest.mark.run_loop
100 async def test_brpop_blocking_features(redis, create_redis, server, loop):
94 async def test_brpop_blocking_features(redis, create_redis, server):
10195 key1, key2 = b'key:brpop:1', b'key:brpop:2'
10296 value = b'brpop:value:2'
10397
10498 other_redis = await create_redis(
105 server.tcp_address, loop=loop)
99 server.tcp_address)
106100 # create blocking task in separate connection
107101 consumer_task = other_redis.brpop(key1, key2)
108102
109 producer_task = asyncio.Task(
110 push_data_with_sleep(redis, loop, key2, value), loop=loop)
111
112 results = await asyncio.gather(
113 consumer_task, producer_task, loop=loop)
103 producer_task = asyncio.ensure_future(
104 push_data_with_sleep(redis, key2, value))
105
106 results = await asyncio.gather(consumer_task, producer_task)
114107
115108 assert results[0] == [key2, value]
116109 assert results[1] == 1
122115 assert test_value is None
123116
124117
125 @pytest.mark.run_loop
126118 async def test_brpoplpush(redis):
127119 key = b'key:brpoplpush:1'
128120 value1, value2 = b'brpoplpush:value:1', b'brpoplpush:value:2'
161153 assert result == 'brpoplpush:value:2'
162154
163155
164 @pytest.mark.run_loop
165 async def test_brpoplpush_blocking_features(redis, create_redis, server, loop):
156 async def test_brpoplpush_blocking_features(redis, create_redis, server):
166157 source = b'key:brpoplpush:12'
167158 value = b'brpoplpush:value:2'
168159 destkey = b'destkey:brpoplpush:2'
169160 other_redis = await create_redis(
170 server.tcp_address, loop=loop)
161 server.tcp_address)
171162 # create blocking task
172163 consumer_task = other_redis.brpoplpush(source, destkey)
173 producer_task = asyncio.Task(
174 push_data_with_sleep(redis, loop, source, value), loop=loop)
175 results = await asyncio.gather(
176 consumer_task, producer_task, loop=loop)
164 producer_task = asyncio.ensure_future(
165 push_data_with_sleep(redis, source, value))
166 results = await asyncio.gather(consumer_task, producer_task)
177167 assert results[0] == value
178168 assert results[1] == 1
179169
189179 other_redis.close()
190180
191181
192 @pytest.mark.run_loop
193182 async def test_lindex(redis):
194183 key, value = b'key:lindex:1', 'value:{}'
195184 # setup list
222211 await redis.lindex(key, b'one')
223212
224213
225 @pytest.mark.run_loop
226214 async def test_linsert(redis):
227215 key = b'key:linsert:1'
228216 value1, value2, value3, value4 = b'Hello', b'World', b'foo', b'bar'
251239 await redis.linsert(None, value1, value3)
252240
253241
254 @pytest.mark.run_loop
255242 async def test_llen(redis):
256243 key = b'key:llen:1'
257244 value1, value2 = b'Hello', b'World'
267254 await redis.llen(None)
268255
269256
270 @pytest.mark.run_loop
271257 async def test_lpop(redis):
272258 key = b'key:lpop:1'
273259 value1, value2 = b'lpop:value:1', b'lpop:value:2'
294280 await redis.lpop(None)
295281
296282
297 @pytest.mark.run_loop
298283 async def test_lpush(redis):
299284 key = b'key:lpush'
300285 value1, value2 = b'value:1', b'value:2'
315300 await redis.lpush(None, value1)
316301
317302
318 @pytest.mark.run_loop
319303 async def test_lpushx(redis):
320304 key = b'key:lpushx'
321305 value1, value2 = b'value:1', b'value:2'
339323 await redis.lpushx(None, value1)
340324
341325
342 @pytest.mark.run_loop
343326 async def test_lrange(redis):
344327 key, value = b'key:lrange:1', 'value:{}'
345328 values = [value.format(i).encode('utf-8') for i in range(0, 10)]
368351 await redis.lrange(key, 0, b'one')
369352
370353
371 @pytest.mark.run_loop
372354 async def test_lrem(redis):
373355 key, value = b'key:lrem:1', 'value:{}'
374356 values = [value.format(i % 2).encode('utf-8') for i in range(0, 10)]
403385 await redis.lrem(key, b'ten', b'value:0')
404386
405387
406 @pytest.mark.run_loop
407388 async def test_lset(redis):
408389 key, value = b'key:lset', 'value:{}'
409390 values = [value.format(i).encode('utf-8') for i in range(0, 3)]
426407 await redis.lset(key, b'one', b'value:0')
427408
428409
429 @pytest.mark.run_loop
430410 async def test_ltrim(redis):
431411 key, value = b'key:ltrim', 'value:{}'
432412 values = [value.format(i).encode('utf-8') for i in range(0, 10)]
457437 await redis.ltrim(key, 0, b'one')
458438
459439
460 @pytest.mark.run_loop
461440 async def test_rpop(redis):
462441 key = b'key:rpop:1'
463442 value1, value2 = b'rpop:value:1', b'rpop:value:2'
484463 await redis.rpop(None)
485464
486465
487 @pytest.mark.run_loop
488466 async def test_rpoplpush(redis):
489467 key = b'key:rpoplpush:1'
490468 value1, value2 = b'rpoplpush:value:1', b'rpoplpush:value:2'
516494 await redis.rpoplpush(key, None)
517495
518496
519 @pytest.mark.run_loop
520497 async def test_rpush(redis):
521498 key = b'key:rpush'
522499 value1, value2 = b'value:1', b'value:2'
533510 await redis.rpush(None, value1)
534511
535512
536 @pytest.mark.run_loop
537513 async def test_rpushx(redis):
538514 key = b'key:rpushx'
539515 value1, value2 = b'value:1', b'value:2'
00 import asyncio
1 import pytest
21
32 from aioredis.locks import Lock
43
54
6 @pytest.mark.run_loop
7 async def test_finished_waiter_cancelled(loop):
8 lock = Lock(loop=loop)
5 async def test_finished_waiter_cancelled():
6 lock = Lock()
97
10 ta = asyncio.ensure_future(lock.acquire(), loop=loop)
11 await asyncio.sleep(0, loop=loop)
8 ta = asyncio.ensure_future(lock.acquire())
9 await asyncio.sleep(0)
1210 assert lock.locked()
1311
14 tb = asyncio.ensure_future(lock.acquire(), loop=loop)
15 await asyncio.sleep(0, loop=loop)
12 tb = asyncio.ensure_future(lock.acquire())
13 await asyncio.sleep(0)
1614 assert len(lock._waiters) == 1
1715
1816 # Create a second waiter, wake up the first, and cancel it.
1917 # Without the fix, the second would not be woken up and the lock
2018 # would never be locked
21 asyncio.ensure_future(lock.acquire(), loop=loop)
22 await asyncio.sleep(0, loop=loop)
19 asyncio.ensure_future(lock.acquire())
20 await asyncio.sleep(0)
2321 lock.release()
2422 tb.cancel()
2523
26 await asyncio.sleep(0, loop=loop)
24 await asyncio.sleep(0)
2725 assert ta.done()
2826 assert tb.cancelled()
2927
30 await asyncio.sleep(0, loop=loop)
28 await asyncio.sleep(0)
3129 assert lock.locked()
2222 asyncio.set_event_loop(loop)
2323
2424 tr = MultiExec(conn, commands_factory=Redis)
25 assert tr._loop is loop
25 # assert tr._loop is loop
2626
2727 def make_fut(cmd, *args, **kw):
2828 fut = asyncio.get_event_loop().create_future()
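With the `loop` argument deprecated, this test no longer asserts on the transaction's stored loop (the check above is commented out). Basic `multi_exec` usage without any loop handling looks roughly like this (a sketch assuming a connected `redis` client):

    async def set_and_get(redis, value):
        tr = redis.multi_exec()                 # start MULTI
        fut1 = tr.set('key', value)             # queued commands return futures
        fut2 = tr.get('key', encoding='utf-8')
        await tr.execute()                      # EXEC resolves the queued futures
        ok, res = await fut1, await fut2
        assert ok and res == value
        return res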
00 import asyncio
11 import pytest
22 import async_timeout
3 import logging
4 import sys
35
46 from unittest.mock import patch
57
1012 ConnectionsPool,
1113 MaxClientsError,
1214 )
15 from _testutils import redis_version
16
17 BPO_34638 = sys.version_info >= (3, 8)
1318
1419
1520 def _assert_defaults(pool):
1823 assert pool.maxsize == 10
1924 assert pool.size == 1
2025 assert pool.freesize == 1
21 assert pool._close_waiter is None
26 assert not pool._close_state.is_set()
2227
2328
2429 def test_connect(pool):
2530 _assert_defaults(pool)
2631
2732
28 def test_global_loop(create_pool, loop, server):
29 asyncio.set_event_loop(loop)
30
31 pool = loop.run_until_complete(create_pool(
32 server.tcp_address))
33 _assert_defaults(pool)
34
35
36 @pytest.mark.run_loop
3733 async def test_clear(pool):
3834 _assert_defaults(pool)
3935
4137 assert pool.freesize == 0
4238
4339
44 @pytest.mark.run_loop
4540 @pytest.mark.parametrize('minsize', [None, -100, 0.0, 100])
46 async def test_minsize(minsize, create_pool, loop, server):
41 async def test_minsize(minsize, create_pool, server):
4742
4843 with pytest.raises(AssertionError):
4944 await create_pool(
5045 server.tcp_address,
51 minsize=minsize, maxsize=10, loop=loop)
52
53
54 @pytest.mark.run_loop
46 minsize=minsize, maxsize=10)
47
48
5549 @pytest.mark.parametrize('maxsize', [None, -100, 0.0, 1])
56 async def test_maxsize(maxsize, create_pool, loop, server):
50 async def test_maxsize(maxsize, create_pool, server):
5751
5852 with pytest.raises(AssertionError):
5953 await create_pool(
6054 server.tcp_address,
61 minsize=2, maxsize=maxsize, loop=loop)
62
63
64 @pytest.mark.run_loop
65 async def test_create_connection_timeout(create_pool, loop, server):
66 with patch.object(loop, 'create_connection') as\
55 minsize=2, maxsize=maxsize)
56
57
58 async def test_create_connection_timeout(create_pool, server):
59 with patch('aioredis.connection.open_connection') as\
6760 open_conn_mock:
68 open_conn_mock.side_effect = lambda *a, **kw: asyncio.sleep(0.2,
69 loop=loop)
61 open_conn_mock.side_effect = lambda *a, **kw: asyncio.sleep(0.2)
7062 with pytest.raises(asyncio.TimeoutError):
7163 await create_pool(
72 server.tcp_address, loop=loop,
64 server.tcp_address,
7365 create_connection_timeout=0.1)
7466
7567
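The connection-timeout test above now patches `aioredis.connection.open_connection` rather than the event loop's `create_connection`, so no loop reference is needed there either. The underlying trick is giving the mock a coroutine-producing `side_effect` so callers can still `await` the patched call; a self-contained sketch with illustrative names:

    import asyncio
    from unittest.mock import patch

    async def demo():
        with patch('asyncio.open_connection') as open_conn_mock:
            # The mock returns whatever side_effect returns, so a lambda
            # producing asyncio.sleep(0.2) gives the caller an awaitable
            # that outlives the 0.1 s timeout below.
            open_conn_mock.side_effect = lambda *a, **kw: asyncio.sleep(0.2)
            try:
                await asyncio.wait_for(asyncio.open_connection('host', 6379),
                                       timeout=0.1)
            except asyncio.TimeoutError:
                print('timed out as expected')

    asyncio.run(demo())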
7971 pass # pragma: no cover
8072
8173
82 @pytest.mark.run_loop
83 async def test_simple_command(create_pool, loop, server):
84 pool = await create_pool(
85 server.tcp_address,
86 minsize=10, loop=loop)
74 async def test_simple_command(create_pool, server):
75 pool = await create_pool(
76 server.tcp_address,
77 minsize=10)
8778
8879 with (await pool) as conn:
8980 msg = await conn.execute('echo', 'hello')
9485 assert pool.freesize == 10
9586
9687
97 @pytest.mark.run_loop
98 async def test_create_new(create_pool, loop, server):
99 pool = await create_pool(
100 server.tcp_address,
101 minsize=1, loop=loop)
88 async def test_create_new(create_pool, server):
89 pool = await create_pool(
90 server.tcp_address,
91 minsize=1)
10292 assert pool.size == 1
10393 assert pool.freesize == 1
10494
114104 assert pool.freesize == 2
115105
116106
117 @pytest.mark.run_loop
118 async def test_create_constraints(create_pool, loop, server):
119 pool = await create_pool(
120 server.tcp_address,
121 minsize=1, maxsize=1, loop=loop)
107 async def test_create_constraints(create_pool, server):
108 pool = await create_pool(
109 server.tcp_address,
110 minsize=1, maxsize=1)
122111 assert pool.size == 1
123112 assert pool.freesize == 1
124113
128117
129118 with pytest.raises(asyncio.TimeoutError):
130119 await asyncio.wait_for(pool.acquire(),
131 timeout=0.2,
132 loop=loop)
133
134
135 @pytest.mark.run_loop
136 async def test_create_no_minsize(create_pool, loop, server):
137 pool = await create_pool(
138 server.tcp_address,
139 minsize=0, maxsize=1, loop=loop)
120 timeout=0.2)
121
122
123 async def test_create_no_minsize(create_pool, server):
124 pool = await create_pool(
125 server.tcp_address,
126 minsize=0, maxsize=1)
140127 assert pool.size == 0
141128 assert pool.freesize == 0
142129
146133
147134 with pytest.raises(asyncio.TimeoutError):
148135 await asyncio.wait_for(pool.acquire(),
149 timeout=0.2,
150 loop=loop)
151 assert pool.size == 1
152 assert pool.freesize == 1
153
154
155 @pytest.mark.run_loop
156 async def test_create_pool_cls(create_pool, loop, server):
136 timeout=0.2)
137 assert pool.size == 1
138 assert pool.freesize == 1
139
140
141 async def test_create_pool_cls(create_pool, server):
157142
158143 class MyPool(ConnectionsPool):
159144 pass
160145
161146 pool = await create_pool(
162147 server.tcp_address,
163 loop=loop,
164148 pool_cls=MyPool)
165149
166150 assert isinstance(pool, MyPool)
167151
168152
169 @pytest.mark.run_loop
170 async def test_create_pool_cls_invalid(create_pool, loop, server):
153 async def test_create_pool_cls_invalid(create_pool, server):
171154 with pytest.raises(AssertionError):
172155 await create_pool(
173156 server.tcp_address,
174 loop=loop,
175157 pool_cls=type)
176158
177159
178 @pytest.mark.run_loop
179 async def test_release_closed(create_pool, loop, server):
180 pool = await create_pool(
181 server.tcp_address,
182 minsize=1, loop=loop)
160 async def test_release_closed(create_pool, server):
161 pool = await create_pool(
162 server.tcp_address,
163 minsize=1)
183164 assert pool.size == 1
184165 assert pool.freesize == 1
185166
190171 assert pool.freesize == 0
191172
192173
193 @pytest.mark.run_loop
194 async def test_release_pending(create_pool, loop, server):
195 pool = await create_pool(
196 server.tcp_address,
197 minsize=1, loop=loop)
198 assert pool.size == 1
199 assert pool.freesize == 1
200
201 with pytest.logs('aioredis', 'WARNING') as cm:
174 async def test_release_pending(create_pool, server, caplog):
175 pool = await create_pool(
176 server.tcp_address,
177 minsize=1)
178 assert pool.size == 1
179 assert pool.freesize == 1
180
181 caplog.clear()
182 with caplog.at_level('WARNING', 'aioredis'):
202183 with (await pool) as conn:
203184 try:
204185 await asyncio.wait_for(
206187 b'blpop',
207188 b'somekey:not:exists',
208189 b'0'),
209 0.1,
210 loop=loop)
190 0.05,
191 )
211192 except asyncio.TimeoutError:
212193 pass
213194 assert pool.size == 0
214195 assert pool.freesize == 0
215 assert cm.output == [
216 'WARNING:aioredis:Connection <RedisConnection [db:0]>'
217 ' has pending commands, closing it.'
196 assert caplog.record_tuples == [
197 ('aioredis', logging.WARNING, 'Connection <RedisConnection [db:0]>'
198 ' has pending commands, closing it.'),
218199 ]
219200
220201
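The logging assertions in this file migrate from the project's `pytest.logs` helper to pytest's built-in `caplog` fixture, comparing `(logger, level, message)` tuples via `caplog.record_tuples`. The pattern in isolation, as a generic sketch not tied to aioredis internals:

    import logging

    def test_warning_recorded(caplog):
        caplog.clear()
        with caplog.at_level(logging.WARNING, logger='aioredis'):
            logging.getLogger('aioredis').warning('pending commands, closing')
        assert caplog.record_tuples == [
            ('aioredis', logging.WARNING, 'pending commands, closing'),
        ]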
221 @pytest.mark.run_loop
222 async def test_release_bad_connection(create_pool, create_redis, loop, server):
223 pool = await create_pool(
224 server.tcp_address,
225 loop=loop)
202 async def test_release_bad_connection(create_pool, create_redis, server):
203 pool = await create_pool(server.tcp_address)
226204 conn = await pool.acquire()
227205 assert conn.address[0] in ('127.0.0.1', '::1')
228206 assert conn.address[1] == server.tcp_address.port
229 other_conn = await create_redis(
230 server.tcp_address,
231 loop=loop)
207 other_conn = await create_redis(server.tcp_address)
232208 with pytest.raises(AssertionError):
233209 pool.release(other_conn)
234210
237213 await other_conn.wait_closed()
238214
239215
240 @pytest.mark.run_loop
241 async def test_select_db(create_pool, loop, server):
242 pool = await create_pool(
243 server.tcp_address,
244 loop=loop)
216 async def test_select_db(create_pool, server):
217 pool = await create_pool(server.tcp_address)
245218
246219 await pool.select(1)
247220 with (await pool) as conn:
248221 assert conn.db == 1
249222
250223
251 @pytest.mark.run_loop
252 async def test_change_db(create_pool, loop, server):
253 pool = await create_pool(
254 server.tcp_address,
255 minsize=1, db=0,
256 loop=loop)
224 async def test_change_db(create_pool, server):
225 pool = await create_pool(server.tcp_address, minsize=1, db=0)
257226 assert pool.size == 1
258227 assert pool.freesize == 1
259228
275244 assert pool.db == 1
276245
277246
278 @pytest.mark.run_loop
279 async def test_change_db_errors(create_pool, loop, server):
280 pool = await create_pool(
281 server.tcp_address,
282 minsize=1, db=0,
283 loop=loop)
247 async def test_change_db_errors(create_pool, server):
248 pool = await create_pool(server.tcp_address, minsize=1, db=0)
284249
285250 with pytest.raises(TypeError):
286251 await pool.select(None)
303268
304269
305270 @pytest.mark.xfail(reason="Need to refactor this test")
306 @pytest.mark.run_loop
307 async def test_select_and_create(create_pool, loop, server):
271 async def test_select_and_create(create_pool, server):
308272 # trying to model situation when select and acquire
309273 # called simultaneously
310274 # but acquire freezes on _wait_select and
311 # then continues with propper db
275 # then continues with proper db
312276
313277 # TODO: refactor this test as there's no _wait_select any more.
314 with async_timeout.timeout(10, loop=loop):
278 with async_timeout.timeout(10):
315279 pool = await create_pool(
316280 server.tcp_address,
317281 minsize=1, db=0,
318 loop=loop)
282 )
319283 db = 0
320284 while True:
321285 db = (db + 1) & 1
322286 _, conn = await asyncio.gather(pool.select(db),
323 pool.acquire(),
324 loop=loop)
287 pool.acquire())
325288 assert pool.db == db
326289 pool.release(conn)
327290 if conn.db == db:
329292 # await asyncio.wait_for(test(), 3, loop=loop)
330293
331294
332 @pytest.mark.run_loop
333 async def test_response_decoding(create_pool, loop, server):
334 pool = await create_pool(
335 server.tcp_address,
336 encoding='utf-8', loop=loop)
295 async def test_response_decoding(create_pool, server):
296 pool = await create_pool(server.tcp_address, encoding='utf-8')
337297
338298 assert pool.encoding == 'utf-8'
339299 with (await pool) as conn:
343303 assert res == 'value'
344304
345305
346 @pytest.mark.run_loop
347 async def test_hgetall_response_decoding(create_pool, loop, server):
348 pool = await create_pool(
349 server.tcp_address,
350 encoding='utf-8', loop=loop)
306 async def test_hgetall_response_decoding(create_pool, server):
307 pool = await create_pool(server.tcp_address, encoding='utf-8')
351308
352309 assert pool.encoding == 'utf-8'
353310 with (await pool) as conn:
359316 assert res == ['foo', 'bar', 'baz', 'zap']
360317
361318
362 @pytest.mark.run_loop
363 async def test_crappy_multiexec(create_pool, loop, server):
364 pool = await create_pool(
365 server.tcp_address,
366 encoding='utf-8', loop=loop,
319 async def test_crappy_multiexec(create_pool, server):
320 pool = await create_pool(
321 server.tcp_address,
322 encoding='utf-8',
367323 minsize=1, maxsize=1)
368324
369325 with (await pool) as conn:
376332 assert value == 'def'
377333
378334
379 @pytest.mark.run_loop
380 async def test_pool_size_growth(create_pool, server, loop):
381 pool = await create_pool(
382 server.tcp_address,
383 loop=loop,
335 async def test_pool_size_growth(create_pool, server):
336 pool = await create_pool(
337 server.tcp_address,
384338 minsize=1, maxsize=1)
385339
386340 done = set()
390344 with (await pool):
391345 assert pool.size <= pool.maxsize
392346 assert pool.freesize == 0
393 await asyncio.sleep(0.2, loop=loop)
347 await asyncio.sleep(0.2)
394348 done.add(i)
395349
396350 async def task2():
400354 assert done == {0, 1}
401355
402356 for _ in range(2):
403 tasks.append(asyncio.ensure_future(task1(_), loop=loop))
404 tasks.append(asyncio.ensure_future(task2(), loop=loop))
405 await asyncio.gather(*tasks, loop=loop)
406
407
408 @pytest.mark.run_loop
409 async def test_pool_with_closed_connections(create_pool, server, loop):
410 pool = await create_pool(
411 server.tcp_address,
412 loop=loop,
357 tasks.append(asyncio.ensure_future(task1(_)))
358 tasks.append(asyncio.ensure_future(task2()))
359 await asyncio.gather(*tasks)
360
361
362 async def test_pool_with_closed_connections(create_pool, server):
363 pool = await create_pool(
364 server.tcp_address,
413365 minsize=1, maxsize=2)
414366 assert 1 == pool.freesize
415367 conn1 = pool._pool[0]
421373 assert conn1 is not conn2
422374
423375
424 @pytest.mark.run_loop
425 async def test_pool_close(create_pool, server, loop):
426 pool = await create_pool(
427 server.tcp_address, loop=loop)
376 async def test_pool_close(create_pool, server):
377 pool = await create_pool(server.tcp_address)
428378
429379 assert pool.closed is False
430380
440390 assert (await conn.execute('ping')) == b'PONG'
441391
442392
443 @pytest.mark.run_loop
444 async def test_pool_close__used(create_pool, server, loop):
445 pool = await create_pool(
446 server.tcp_address, loop=loop)
393 async def test_pool_close__used(create_pool, server):
394 pool = await create_pool(server.tcp_address)
447395
448396 assert pool.closed is False
449397
456404 await conn.execute('ping')
457405
458406
459 @pytest.mark.run_loop
460 @pytest.redis_version(2, 8, 0, reason="maxclients config setting")
407 @redis_version(2, 8, 0, reason="maxclients config setting")
461408 async def test_pool_check_closed_when_exception(
462 create_pool, create_redis, start_server, loop):
409 create_pool, create_redis, start_server, caplog):
463410 server = start_server('server-small')
464 redis = await create_redis(server.tcp_address, loop=loop)
411 redis = await create_redis(server.tcp_address)
465412 await redis.config_set('maxclients', 2)
466413
467414 errors = (MaxClientsError, ConnectionClosedError, ConnectionError)
468 with pytest.logs('aioredis', 'DEBUG') as cm:
415 caplog.clear()
416 with caplog.at_level('DEBUG', 'aioredis'):
469417 with pytest.raises(errors):
470418 await create_pool(address=tuple(server.tcp_address),
471 minsize=3, loop=loop)
472
473 assert len(cm.output) >= 3
474 connect_msg = (
475 "DEBUG:aioredis:Creating tcp connection"
476 " to ('localhost', {})".format(server.tcp_address.port))
477 assert cm.output[:2] == [connect_msg, connect_msg]
478 assert cm.output[-1] == "DEBUG:aioredis:Closed 1 connection(s)"
479
480
481 @pytest.mark.run_loop
482 async def test_pool_get_connection(create_pool, server, loop):
483 pool = await create_pool(server.tcp_address, minsize=1, maxsize=2,
484 loop=loop)
419 minsize=3)
420
421 assert len(caplog.record_tuples) >= 3
422 connect_msg = "Creating tcp connection to ('localhost', {})".format(
423 server.tcp_address.port)
424 assert caplog.record_tuples[:2] == [
425 ('aioredis', logging.DEBUG, connect_msg),
426 ('aioredis', logging.DEBUG, connect_msg),
427 ]
428 assert caplog.record_tuples[-1] == (
429 'aioredis', logging.DEBUG, 'Closed 1 connection(s)'
430 )
431
432
433 async def test_pool_get_connection(create_pool, server):
434 pool = await create_pool(server.tcp_address, minsize=1, maxsize=2)
485435 res = await pool.execute("set", "key", "val")
486436 assert res == b'OK'
487437
498448 assert res == b'value'
499449
500450
501 @pytest.mark.run_loop
502 async def test_pool_get_connection_with_pipelining(create_pool, server, loop):
503 pool = await create_pool(server.tcp_address, minsize=1, maxsize=2,
504 loop=loop)
451 async def test_pool_get_connection_with_pipelining(create_pool, server):
452 pool = await create_pool(server.tcp_address, minsize=1, maxsize=2)
505453 fut1 = pool.execute('set', 'key', 'val')
506454 fut2 = pool.execute_pubsub("subscribe", "channel:1")
507455 fut3 = pool.execute('getset', 'key', 'next')
519467 assert res == b'next'
520468
521469
522 @pytest.mark.run_loop
523 async def test_pool_idle_close(create_pool, start_server, loop):
470 @pytest.mark.skipif(sys.platform == "win32", reason="flaky on windows")
471 async def test_pool_idle_close(create_pool, start_server, caplog):
524472 server = start_server('idle')
525 conn = await create_pool(server.tcp_address, minsize=2, loop=loop)
473 conn = await create_pool(server.tcp_address, minsize=2)
526474 ok = await conn.execute("config", "set", "timeout", 1)
527475 assert ok == b'OK'
528476
529 await asyncio.sleep(2, loop=loop)
530
477 caplog.clear()
478 with caplog.at_level('DEBUG', 'aioredis'):
479 # wait until either the disconnection is logged or the test timeout is reached.
480 while len(caplog.record_tuples) < 2:
481 await asyncio.sleep(.5)
482 expected = [
483 ('aioredis', logging.DEBUG,
484 'Connection has been closed by server, response: None'),
485 ('aioredis', logging.DEBUG,
486 'Connection has been closed by server, response: None'),
487 ]
488 if BPO_34638:
489 expected += [
490 ('asyncio', logging.ERROR,
491 'An open stream object is being garbage collected; '
492 'call "stream.close()" explicitly.'),
493 ('asyncio', logging.ERROR,
494 'An open stream object is being garbage collected; '
495 'call "stream.close()" explicitly.')]
496 # The order in which logs are collected differs each time.
497 assert sorted(caplog.record_tuples) == sorted(expected)
498
499 # On CI this test fails from time to time.
500 # It is possible to pick an 'unclosed' connection and send a command;
501 # however, on the same loop iteration it gets closed and an exception is raised
531502 assert (await conn.execute('ping')) == b'PONG'
532503
533504
534 @pytest.mark.run_loop
535 async def test_await(create_pool, server, loop):
536 pool = await create_pool(
537 server.tcp_address,
538 minsize=10, loop=loop)
505 async def test_await(create_pool, server):
506 pool = await create_pool(server.tcp_address, minsize=10)
539507
540508 with (await pool) as conn:
541509 msg = await conn.execute('echo', 'hello')
542510 assert msg == b'hello'
543511
544512
545 @pytest.mark.run_loop
546 async def test_async_with(create_pool, server, loop):
547 pool = await create_pool(
548 server.tcp_address,
549 minsize=10, loop=loop)
513 async def test_async_with(create_pool, server):
514 pool = await create_pool(server.tcp_address, minsize=10)
550515
551516 async with pool.get() as conn:
552517 msg = await conn.execute('echo', 'hello')
553518 assert msg == b'hello'
554519
555520
556 @pytest.mark.run_loop
557 async def test_pool__drop_closed(create_pool, server, loop):
558 pool = await create_pool(server.tcp_address,
559 minsize=3,
560 maxsize=3,
561 loop=loop)
521 async def test_pool__drop_closed(create_pool, server):
522 pool = await create_pool(server.tcp_address, minsize=3, maxsize=3)
562523 assert pool.size == 3
563524 assert pool.freesize == 3
564525 assert not pool._pool[0].closed
00 import asyncio
11 import pytest
22 import aioredis
3
4 from _testutils import redis_version
35
46
57 async def _reader(channel, output, waiter, conn):
1113 await output.put(msg)
1214
1315
14 @pytest.mark.run_loop
1516 async def test_publish(create_connection, redis, server, loop):
16 out = asyncio.Queue(loop=loop)
17 out = asyncio.Queue()
1718 fut = loop.create_future()
18 conn = await create_connection(
19 server.tcp_address, loop=loop)
20 sub = asyncio.ensure_future(_reader('chan:1', out, fut, conn), loop=loop)
19 conn = await create_connection(server.tcp_address)
20 sub = asyncio.ensure_future(_reader('chan:1', out, fut, conn))
2121
2222 await fut
2323 await redis.publish('chan:1', 'Hello')
2727 sub.cancel()
2828
2929
30 @pytest.mark.run_loop
3130 async def test_publish_json(create_connection, redis, server, loop):
32 out = asyncio.Queue(loop=loop)
31 out = asyncio.Queue()
3332 fut = loop.create_future()
34 conn = await create_connection(
35 server.tcp_address, loop=loop)
36 sub = asyncio.ensure_future(_reader('chan:1', out, fut, conn), loop=loop)
33 conn = await create_connection(server.tcp_address)
34 sub = asyncio.ensure_future(_reader('chan:1', out, fut, conn))
3735
3836 await fut
3937
4038 res = await redis.publish_json('chan:1', {"Hello": "world"})
41 assert res == 1 # recievers
39 assert res == 1 # receivers
4240
4341 msg = await out.get()
4442 assert msg == b'{"Hello": "world"}'
4543 sub.cancel()
4644
4745
48 @pytest.mark.run_loop
4946 async def test_subscribe(redis):
5047 res = await redis.subscribe('chan:1', 'chan:2')
5148 assert redis.in_pubsub == 2
6562 @pytest.mark.parametrize('create_redis', [
6663 pytest.param(aioredis.create_redis_pool, id='pool'),
6764 ])
68 @pytest.mark.run_loop
69 async def test_subscribe_empty_pool(create_redis, server, loop, _closable):
70 redis = await create_redis(server.tcp_address, loop=loop)
65 async def test_subscribe_empty_pool(create_redis, server, _closable):
66 redis = await create_redis(server.tcp_address)
7167 _closable(redis)
7268 await redis.connection.clear()
7369
8682 [b'unsubscribe', b'chan:2', 0]]
8783
8884
89 @pytest.mark.run_loop
90 async def test_psubscribe(redis, create_redis, server, loop):
85 async def test_psubscribe(redis, create_redis, server):
9186 sub = redis
9287 res = await sub.psubscribe('patt:*', 'chan:*')
9388 assert sub.in_pubsub == 2
9691 pat2 = sub.patterns['chan:*']
9792 assert res == [pat1, pat2]
9893
99 pub = await create_redis(
100 server.tcp_address, loop=loop)
94 pub = await create_redis(server.tcp_address)
10195 await pub.publish_json('chan:123', {"Hello": "World"})
10296 res = await pat2.get_json()
10397 assert res == (b'chan:123', {"Hello": "World"})
112106 @pytest.mark.parametrize('create_redis', [
113107 pytest.param(aioredis.create_redis_pool, id='pool'),
114108 ])
115 @pytest.mark.run_loop
116 async def test_psubscribe_empty_pool(create_redis, server, loop, _closable):
117 sub = await create_redis(server.tcp_address, loop=loop)
118 pub = await create_redis(server.tcp_address, loop=loop)
109 async def test_psubscribe_empty_pool(create_redis, server, _closable):
110 sub = await create_redis(server.tcp_address)
111 pub = await create_redis(server.tcp_address)
119112 _closable(sub)
120113 _closable(pub)
121114 await sub.connection.clear()
137130 ]
138131
139132
140 @pytest.redis_version(
133 @redis_version(
141134 2, 8, 0, reason='PUBSUB CHANNELS is available since redis>=2.8.0')
142 @pytest.mark.run_loop
143 async def test_pubsub_channels(create_redis, server, loop):
144 redis = await create_redis(
145 server.tcp_address, loop=loop)
135 async def test_pubsub_channels(create_redis, server):
136 redis = await create_redis(server.tcp_address)
146137 res = await redis.pubsub_channels()
147138 assert res == []
148139
149140 res = await redis.pubsub_channels('chan:*')
150141 assert res == []
151142
152 sub = await create_redis(
153 server.tcp_address, loop=loop)
143 sub = await create_redis(server.tcp_address)
154144 await sub.subscribe('chan:1')
155145
156146 res = await redis.pubsub_channels()
166156 assert res == []
167157
168158
169 @pytest.redis_version(
159 @redis_version(
170160 2, 8, 0, reason='PUBSUB NUMSUB is available since redis>=2.8.0')
171 @pytest.mark.run_loop
172 async def test_pubsub_numsub(create_redis, server, loop):
173 redis = await create_redis(
174 server.tcp_address, loop=loop)
161 async def test_pubsub_numsub(create_redis, server):
162 redis = await create_redis(server.tcp_address)
175163 res = await redis.pubsub_numsub()
176164 assert res == {}
177165
178166 res = await redis.pubsub_numsub('chan:1')
179167 assert res == {b'chan:1': 0}
180168
181 sub = await create_redis(
182 server.tcp_address, loop=loop)
169 sub = await create_redis(server.tcp_address)
183170 await sub.subscribe('chan:1')
184171
185172 res = await redis.pubsub_numsub()
201188 assert res == {}
202189
203190
204 @pytest.redis_version(
191 @redis_version(
205192 2, 8, 0, reason='PUBSUB NUMPAT is available since redis>=2.8.0')
206 @pytest.mark.run_loop
207 async def test_pubsub_numpat(create_redis, server, loop, redis):
208 sub = await create_redis(
209 server.tcp_address, loop=loop)
193 async def test_pubsub_numpat(create_redis, server, redis):
194 sub = await create_redis(server.tcp_address)
210195
211196 res = await redis.pubsub_numpat()
212197 assert res == 0
220205 assert res == 1
221206
222207
223 @pytest.mark.run_loop
224 async def test_close_pubsub_channels(redis, loop):
208 async def test_close_pubsub_channels(redis):
225209 ch, = await redis.subscribe('chan:1')
226210
227211 async def waiter(ch):
228212 assert not await ch.wait_message()
229213
230 tsk = asyncio.ensure_future(waiter(ch), loop=loop)
214 tsk = asyncio.ensure_future(waiter(ch))
231215 redis.close()
232216 await redis.wait_closed()
233217 await tsk
234218
235219
236 @pytest.mark.run_loop
237 async def test_close_pubsub_patterns(redis, loop):
220 async def test_close_pubsub_patterns(redis):
238221 ch, = await redis.psubscribe('chan:*')
239222
240223 async def waiter(ch):
241224 assert not await ch.wait_message()
242225
243 tsk = asyncio.ensure_future(waiter(ch), loop=loop)
226 tsk = asyncio.ensure_future(waiter(ch))
244227 redis.close()
245228 await redis.wait_closed()
246229 await tsk
247230
248231
249 @pytest.mark.run_loop
250 async def test_close_cancelled_pubsub_channel(redis, loop):
232 async def test_close_cancelled_pubsub_channel(redis):
251233 ch, = await redis.subscribe('chan:1')
252234
253235 async def waiter(ch):
254236 with pytest.raises(asyncio.CancelledError):
255237 await ch.wait_message()
256238
257 tsk = asyncio.ensure_future(waiter(ch), loop=loop)
258 await asyncio.sleep(0, loop=loop)
239 tsk = asyncio.ensure_future(waiter(ch))
240 await asyncio.sleep(0)
259241 tsk.cancel()
260242
261243
262 @pytest.mark.run_loop
263244 async def test_channel_get_after_close(create_redis, loop, server):
264 sub = await create_redis(
265 server.tcp_address, loop=loop)
266 pub = await create_redis(
267 server.tcp_address, loop=loop)
245 sub = await create_redis(server.tcp_address)
246 pub = await create_redis(server.tcp_address)
268247 ch, = await sub.subscribe('chan:1')
269248
270249 await pub.publish('chan:1', 'message')
275254 assert await ch.get()
276255
277256
278 @pytest.mark.run_loop
279 async def test_subscribe_concurrency(create_redis, server, loop):
280 sub = await create_redis(
281 server.tcp_address, loop=loop)
282 pub = await create_redis(
283 server.tcp_address, loop=loop)
257 async def test_subscribe_concurrency(create_redis, server):
258 sub = await create_redis(server.tcp_address)
259 pub = await create_redis(server.tcp_address)
284260
285261 async def subscribe(*args):
286262 return await sub.subscribe(*args)
287263
288264 async def publish(*args):
289 await asyncio.sleep(0, loop=loop)
265 await asyncio.sleep(0)
290266 return await pub.publish(*args)
291267
292268 res = await asyncio.gather(
293269 subscribe('channel:0'),
294270 publish('channel:0', 'Hello'),
295271 subscribe('channel:1'),
296 loop=loop)
272 )
297273 (ch1,), subs, (ch2,) = res
298274
299275 assert ch1.name == b'channel:0'
301277 assert ch2.name == b'channel:1'
302278
303279
304 @pytest.redis_version(
280 @redis_version(
305281 3, 2, 0, reason='PUBSUB PING is available since redis>=3.2.0')
306 @pytest.mark.run_loop
307282 async def test_pubsub_ping(redis):
308283 await redis.subscribe('chan:1', 'chan:2')
309284
317292 await redis.unsubscribe('chan:1', 'chan:2')
318293
319294
320 @pytest.mark.run_loop
321 async def test_pubsub_channel_iter(create_redis, server, loop):
322 sub = await create_redis(server.tcp_address, loop=loop)
323 pub = await create_redis(server.tcp_address, loop=loop)
295 async def test_pubsub_channel_iter(create_redis, server):
296 sub = await create_redis(server.tcp_address)
297 pub = await create_redis(server.tcp_address)
324298
325299 ch, = await sub.subscribe('chan:1')
326300
330304 lst.append(msg)
331305 return lst
332306
333 tsk = asyncio.ensure_future(coro(ch), loop=loop)
307 tsk = asyncio.ensure_future(coro(ch))
334308 await pub.publish_json('chan:1', {'Hello': 'World'})
335309 await pub.publish_json('chan:1', ['message'])
336 await asyncio.sleep(0, loop=loop)
310 await asyncio.sleep(0.1)
337311 ch.close()
338312 assert await tsk == [b'{"Hello": "World"}', b'["message"]']
313
314
315 @redis_version(
316 2, 8, 12, reason="extended `client kill` format required")
317 async def test_pubsub_disconnection_notification(create_redis, server):
318 sub = await create_redis(server.tcp_address)
319 pub = await create_redis(server.tcp_address)
320
321 async def coro(ch):
322 lst = []
323 async for msg in ch.iter():
324 assert ch.is_active
325 lst.append(msg)
326 return lst
327
328 ch, = await sub.subscribe('chan:1')
329 tsk = asyncio.ensure_future(coro(ch))
330 assert ch.is_active
331 await pub.publish_json('chan:1', {'Hello': 'World'})
332 assert ch.is_active
333 assert await pub.execute('client', 'kill', 'type', 'pubsub') >= 1
334 assert await pub.publish_json('chan:1', ['message']) == 0
335 assert await tsk == [b'{"Hello": "World"}']
336 assert not ch.is_active
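The new disconnection test added above drives a subscriber through the channel's async-iterator interface and then kills the pub/sub clients server-side. The consumer half of that pattern, as a standalone sketch assuming an already-subscribed channel object `ch`:

    async def drain(ch):
        # Iterate until the channel is closed (for example because the
        # server killed the pub/sub connection); ch.iter() then stops.
        received = []
        async for msg in ch.iter():
            received.append(msg)
        return received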
11 import asyncio
22 import json
33 import sys
4 import logging
45
56 from unittest import mock
67
910 from aioredis.pubsub import Receiver, _Sender
1011
1112
12 def test_listener_channel(loop):
13 mpsc = Receiver(loop=loop)
13 def test_listener_channel():
14 mpsc = Receiver()
1415 assert not mpsc.is_active
1516
1617 ch_a = mpsc.channel("channel:1")
3536 assert dict(mpsc.patterns) == {}
3637
3738
38 def test_listener_pattern(loop):
39 mpsc = Receiver(loop=loop)
39 def test_listener_pattern():
40 mpsc = Receiver()
4041 assert not mpsc.is_active
4142
4243 ch_a = mpsc.pattern("*")
6162 assert dict(mpsc.patterns) == {b'*': ch}
6263
6364
64 @pytest.mark.run_loop
65 async def test_sender(loop):
65 async def test_sender():
6666 receiver = mock.Mock()
6767
6868 sender = _Sender(receiver, 'name', is_pattern=False)
9494 assert receiver.mock_calls == []
9595
9696
97 @pytest.mark.run_loop
98 async def test_subscriptions(create_connection, server, loop):
99 sub = await create_connection(server.tcp_address, loop=loop)
100 pub = await create_connection(server.tcp_address, loop=loop)
101
102 mpsc = Receiver(loop=loop)
97 async def test_subscriptions(create_connection, server):
98 sub = await create_connection(server.tcp_address)
99 pub = await create_connection(server.tcp_address)
100
101 mpsc = Receiver()
103102 await sub.execute_pubsub('subscribe',
104103 mpsc.channel('channel:1'),
105104 mpsc.channel('channel:3'))
120119 assert msg == b"Hello world"
121120
122121
123 @pytest.mark.run_loop
124 async def test_unsubscribe(create_connection, server, loop):
125 sub = await create_connection(server.tcp_address, loop=loop)
126 pub = await create_connection(server.tcp_address, loop=loop)
127
128 mpsc = Receiver(loop=loop)
122 async def test_unsubscribe(create_connection, server):
123 sub = await create_connection(server.tcp_address)
124 pub = await create_connection(server.tcp_address)
125
126 mpsc = Receiver()
129127 await sub.execute_pubsub('subscribe',
130128 mpsc.channel('channel:1'),
131129 mpsc.channel('channel:3'))
158156 assert not ch.is_pattern
159157 assert msg == b"message"
160158
161 waiter = asyncio.ensure_future(mpsc.get(), loop=loop)
159 waiter = asyncio.ensure_future(mpsc.get())
162160 await sub.execute_pubsub('unsubscribe', 'channel:3')
163161 assert not mpsc.is_active
164162 assert await waiter is None
165163
166164
167 @pytest.mark.run_loop
168 async def test_stopped(create_connection, server, loop):
169 sub = await create_connection(server.tcp_address, loop=loop)
170 pub = await create_connection(server.tcp_address, loop=loop)
171
172 mpsc = Receiver(loop=loop)
165 async def test_stopped(create_connection, server, caplog):
166 sub = await create_connection(server.tcp_address)
167 pub = await create_connection(server.tcp_address)
168
169 mpsc = Receiver()
173170 await sub.execute_pubsub('subscribe', mpsc.channel('channel:1'))
174171 assert mpsc.is_active
175172 mpsc.stop()
176173
177 with pytest.logs('aioredis', 'DEBUG') as cm:
174 caplog.clear()
175 with caplog.at_level('DEBUG', 'aioredis'):
178176 await pub.execute('publish', 'channel:1', b'Hello')
179 await asyncio.sleep(0, loop=loop)
180
181 assert len(cm.output) == 1
177 await asyncio.sleep(0)
178
179 assert len(caplog.record_tuples) == 1
182180 # Receiver must have 1 EndOfStream message
183 warn_messaege = (
184 "WARNING:aioredis:Pub/Sub listener message after stop: "
181 message = (
182 "Pub/Sub listener message after stop: "
185183 "sender: <_Sender name:b'channel:1', is_pattern:False, receiver:"
186184 "<Receiver is_active:False, senders:1, qsize:0>>, data: b'Hello'"
187185 )
188 assert cm.output == [warn_messaege]
186 assert caplog.record_tuples == [
187 ('aioredis', logging.WARNING, message),
188 ]
189189
190190 # assert (await mpsc.get()) is None
191191 with pytest.raises(ChannelClosedError):
194194 assert res is False
195195
196196
197 @pytest.mark.run_loop
198 async def test_wait_message(create_connection, server, loop):
199 sub = await create_connection(server.tcp_address, loop=loop)
200 pub = await create_connection(server.tcp_address, loop=loop)
201
202 mpsc = Receiver(loop=loop)
197 async def test_wait_message(create_connection, server):
198 sub = await create_connection(server.tcp_address)
199 pub = await create_connection(server.tcp_address)
200
201 mpsc = Receiver()
203202 await sub.execute_pubsub('subscribe', mpsc.channel('channel:1'))
204 fut = asyncio.ensure_future(mpsc.wait_message(), loop=loop)
203 fut = asyncio.ensure_future(mpsc.wait_message())
205204 assert not fut.done()
206 await asyncio.sleep(0, loop=loop)
205 await asyncio.sleep(0)
207206 assert not fut.done()
208207
209208 await pub.execute('publish', 'channel:1', 'hello')
210 await asyncio.sleep(0, loop=loop) # read in connection
211 await asyncio.sleep(0, loop=loop) # call Future.set_result
209 await asyncio.sleep(0) # read in connection
210 await asyncio.sleep(0) # call Future.set_result
212211 assert fut.done()
213212 res = await fut
214213 assert res is True
215214
216215
217 @pytest.mark.run_loop
218 async def test_decode_message(loop):
219 mpsc = Receiver(loop)
216 async def test_decode_message():
217 mpsc = Receiver()
220218 ch = mpsc.channel('channel:1')
221219 ch.put_nowait(b'Some data')
222220
237235
238236 @pytest.mark.skipif(sys.version_info >= (3, 6),
239237 reason="json.loads accept bytes since Python 3.6")
240 @pytest.mark.run_loop
241 async def test_decode_message_error(loop):
242 mpsc = Receiver(loop)
238 async def test_decode_message_error():
239 mpsc = Receiver()
243240 ch = mpsc.channel('channel:1')
244241
245242 ch.put_nowait(b'{"hello": "world"}')
254251 assert (await mpsc.get(decoder=json.loads)) == unexpected
255252
256253
257 @pytest.mark.run_loop
258 async def test_decode_message_for_pattern(loop):
259 mpsc = Receiver(loop)
254 async def test_decode_message_for_pattern():
255 mpsc = Receiver()
260256 ch = mpsc.pattern('*')
261257 ch.put_nowait((b'channel', b'Some data'))
262258
275271 assert res[1] == (b'channel', {'hello': 'world'})
276272
277273
278 @pytest.mark.run_loop
279274 async def test_pubsub_receiver_iter(create_redis, server, loop):
280 sub = await create_redis(server.tcp_address, loop=loop)
281 pub = await create_redis(server.tcp_address, loop=loop)
282
283 mpsc = Receiver(loop=loop)
275 sub = await create_redis(server.tcp_address)
276 pub = await create_redis(server.tcp_address)
277
278 mpsc = Receiver()
284279
285280 async def coro(mpsc):
286281 lst = []
288283 lst.append(msg)
289284 return lst
290285
291 tsk = asyncio.ensure_future(coro(mpsc), loop=loop)
286 tsk = asyncio.ensure_future(coro(mpsc))
292287 snd1, = await sub.subscribe(mpsc.channel('chan:1'))
293288 snd2, = await sub.subscribe(mpsc.channel('chan:2'))
294289 snd3, = await sub.psubscribe(mpsc.pattern('chan:*'))
298293 subscribers = await pub.publish_json('chan:2', ['message'])
299294 assert subscribers > 1
300295 loop.call_later(0, mpsc.stop)
301 # await asyncio.sleep(0, loop=loop)
296 await asyncio.sleep(0.01)
302297 assert await tsk == [
303298 (snd1, b'{"Hello": "World"}'),
304299 (snd3, (b'chan:1', b'{"Hello": "World"}')),
308303 assert not mpsc.is_active
309304
310305
311 @pytest.mark.run_loop(timeout=5)
306 @pytest.mark.timeout(5)
312307 async def test_pubsub_receiver_call_stop_with_empty_queue(
313308 create_redis, server, loop):
314 sub = await create_redis(server.tcp_address, loop=loop)
315
316 mpsc = Receiver(loop=loop)
309 sub = await create_redis(server.tcp_address)
310
311 mpsc = Receiver()
317312
318313 # FIXME: currently at least one subscriber is needed
319314 snd1, = await sub.subscribe(mpsc.channel('chan:1'))
327322 assert not mpsc.is_active
328323
329324
330 @pytest.mark.run_loop
331 async def test_pubsub_receiver_stop_on_disconnect(create_redis, server, loop):
332 pub = await create_redis(server.tcp_address, loop=loop)
333 sub = await create_redis(server.tcp_address, loop=loop)
325 async def test_pubsub_receiver_stop_on_disconnect(create_redis, server):
326 pub = await create_redis(server.tcp_address)
327 sub = await create_redis(server.tcp_address)
334328 sub_name = 'sub-{:X}'.format(id(sub))
335329 await sub.client_setname(sub_name)
336330 for sub_info in await pub.client_list():
338332 break
339333 assert sub_info.name == sub_name
340334
341 mpsc = Receiver(loop=loop)
335 mpsc = Receiver()
342336 await sub.subscribe(mpsc.channel('channel:1'))
343337 await sub.subscribe(mpsc.channel('channel:2'))
344338 await sub.psubscribe(mpsc.pattern('channel:*'))
345339
346 q = asyncio.Queue(loop=loop)
340 q = asyncio.Queue()
347341 EOF = object()
348342
349343 async def reader():
351345 await q.put((ch.name, msg))
352346 await q.put(EOF)
353347
354 tsk = asyncio.ensure_future(reader(), loop=loop)
348 tsk = asyncio.ensure_future(reader())
355349 await pub.publish_json('channel:1', ['hello'])
356350 await pub.publish_json('channel:2', ['hello'])
357351 # receive all messages
362356
363357 # XXX: need to implement `client kill`
364358 assert await pub.execute('client', 'kill', sub_info.addr) in (b'OK', 1)
365 await asyncio.wait_for(tsk, timeout=1, loop=loop)
359 await asyncio.wait_for(tsk, timeout=1)
366360 assert await q.get() is EOF
33 from aioredis import ReplyError
44
55
6 @pytest.mark.run_loop
76 async def test_eval(redis):
87 await redis.delete('key:eval', 'value:eval')
98
3736 await redis.eval(None)
3837
3938
40 @pytest.mark.run_loop
4139 async def test_evalsha(redis):
4240 script = b"return 42"
4341 sha_hash = await redis.script_load(script)
6159 await redis.evalsha(None)
6260
6361
64 @pytest.mark.run_loop
6562 async def test_script_exists(redis):
6663 sha_hash1 = await redis.script_load(b'return 1')
6764 sha_hash2 = await redis.script_load(b'return 2')
8178 await redis.script_exists('123', None)
8279
8380
84 @pytest.mark.run_loop
8581 async def test_script_flush(redis):
8682 sha_hash1 = await redis.script_load(b'return 1')
8783 assert len(sha_hash1) == 40
9389 assert res == [0]
9490
9591
96 @pytest.mark.run_loop
9792 async def test_script_load(redis):
9893 sha_hash1 = await redis.script_load(b'return 1')
9994 sha_hash2 = await redis.script_load(b'return 2')
10398 assert res == [1, 1]
10499
105100
106 @pytest.mark.run_loop
107 async def test_script_kill(create_redis, loop, server, redis):
101 async def test_script_kill(create_redis, server, redis):
108102 script = "while (1) do redis.call('TIME') end"
109103
110 other_redis = await create_redis(
111 server.tcp_address, loop=loop)
104 other_redis = await create_redis(server.tcp_address)
112105
113106 ok = await redis.set('key1', 'value')
114107 assert ok is True
115108
116109 fut = other_redis.eval(script, keys=['non-existent-key'], args=[10])
117 await asyncio.sleep(0.1, loop=loop)
110 await asyncio.sleep(0.1)
118111 resp = await redis.script_kill()
119112 assert resp is True
120113
00 import asyncio
11 import pytest
22 import sys
3 import logging
34
45 from aioredis import RedisError, ReplyError, PoolClosedError
56 from aioredis.errors import MasterReplyError
67 from aioredis.sentinel.commands import RedisSentinel
78 from aioredis.abc import AbcPool
8
9 pytestmark = pytest.redis_version(2, 8, 12, reason="Sentinel v2 required")
9 from _testutils import redis_version
10
11 pytestmark = redis_version(2, 8, 12, reason="Sentinel v2 required")
1012 if sys.platform == 'win32':
1113 pytestmark = pytest.mark.skip(reason="unstable on windows")
1214
1315 BPO_30399 = sys.version_info >= (3, 7, 0, 'alpha', 3)
1416
1517
16 @pytest.mark.run_loop
1718 async def test_client_close(redis_sentinel):
1819 assert isinstance(redis_sentinel, RedisSentinel)
1920 assert not redis_sentinel.closed
2627 await redis_sentinel.wait_closed()
2728
2829
29 @pytest.mark.run_loop
30 async def test_global_loop(sentinel, create_sentinel, loop):
31 asyncio.set_event_loop(loop)
32
33 # force global loop
34 client = await create_sentinel([sentinel.tcp_address],
35 timeout=1, loop=None)
36 assert client._pool._loop is loop
37
38 asyncio.set_event_loop(None)
39
40
41 @pytest.mark.run_loop
4230 async def test_ping(redis_sentinel):
4331 assert b'PONG' == (await redis_sentinel.ping())
4432
4533
46 @pytest.mark.run_loop
4734 async def test_master_info(redis_sentinel, sentinel):
4835 info = await redis_sentinel.master('master-no-fail')
4936 assert isinstance(info, dict)
8168 assert 'link-refcount' in info
8269
8370
84 @pytest.mark.run_loop
85 async def test_master__auth(create_sentinel, start_sentinel,
86 start_server, loop):
71 async def test_master__auth(create_sentinel, start_sentinel, start_server):
8772 master = start_server('master_1', password='123')
8873 start_server('slave_1', slaveof=master, password='123')
8974
9075 sentinel = start_sentinel('auth_sentinel_1', master)
9176 client1 = await create_sentinel(
92 [sentinel.tcp_address], password='123', timeout=1, loop=loop)
77 [sentinel.tcp_address], password='123', timeout=1)
9378
9479 client2 = await create_sentinel(
95 [sentinel.tcp_address], password='111', timeout=1, loop=loop)
96
97 client3 = await create_sentinel(
98 [sentinel.tcp_address], timeout=1, loop=loop)
80 [sentinel.tcp_address], password='111', timeout=1)
81
82 client3 = await create_sentinel([sentinel.tcp_address], timeout=1)
9983
10084 m1 = client1.master_for(master.name)
10185 await m1.set('mykey', 'myval')
116100 await m3.set('mykey', 'myval')
117101
118102
119 @pytest.mark.run_loop
120 async def test_master__no_auth(create_sentinel, sentinel, loop):
103 async def test_master__no_auth(create_sentinel, sentinel):
121104 client = await create_sentinel(
122 [sentinel.tcp_address], password='123', timeout=1, loop=loop)
105 [sentinel.tcp_address], password='123', timeout=1)
123106
124107 master = client.master_for('masterA')
125108 with pytest.raises(MasterReplyError):
126109 await master.set('mykey', 'myval')
127110
128111
129 @pytest.mark.run_loop
130112 async def test_master__unknown(redis_sentinel):
131113 with pytest.raises(ReplyError):
132114 await redis_sentinel.master('unknown-master')
133115
134116
135 @pytest.mark.run_loop
136117 async def test_master_address(redis_sentinel, sentinel):
137118 _, port = await redis_sentinel.master_address('master-no-fail')
138119 assert port == sentinel.masters['master-no-fail'].tcp_address.port
139120
140121
141 @pytest.mark.run_loop
142122 async def test_master_address__unknown(redis_sentinel):
143123 res = await redis_sentinel.master_address('unknown-master')
144124 assert res is None
145125
146126
147 @pytest.mark.run_loop
148127 async def test_masters(redis_sentinel):
149128 masters = await redis_sentinel.masters()
150129 assert isinstance(masters, dict)
153132 assert isinstance(masters['master-no-fail'], dict)
154133
155134
156 @pytest.mark.run_loop
157135 async def test_slave_info(sentinel, redis_sentinel):
158136 info = await redis_sentinel.slaves('master-no-fail')
159137 assert len(info) == 1
195173 assert not missing
196174
197175
198 @pytest.mark.run_loop
199176 async def test_slave__unknown(redis_sentinel):
200177 with pytest.raises(ReplyError):
201178 await redis_sentinel.slaves('unknown-master')
202179
203180
204 @pytest.mark.run_loop
205181 async def test_sentinels_empty(redis_sentinel):
206182 res = await redis_sentinel.sentinels('master-no-fail')
207183 assert res == []
210186 await redis_sentinel.sentinels('unknown-master')
211187
212188
213 @pytest.mark.run_loop(timeout=30)
189 @pytest.mark.timeout(30)
214190 async def test_sentinels__exist(create_sentinel, start_sentinel,
215 start_server, loop):
191 start_server):
216192 m1 = start_server('master-two-sentinels')
217193 s1 = start_sentinel('peer-sentinel-1', m1, quorum=2, noslaves=True)
218194 s2 = start_sentinel('peer-sentinel-2', m1, quorum=2, noslaves=True)
225201 info = await redis_sentinel.master('master-two-sentinels')
226202 if info['num-other-sentinels'] > 0:
227203 break
228 await asyncio.sleep(.2, loop=loop)
204 await asyncio.sleep(.2)
229205 info = await redis_sentinel.sentinels('master-two-sentinels')
230206 assert len(info) == 1
231207 assert 'sentinel' in info[0]['flags']
232208 assert info[0]['port'] in (s1.tcp_address.port, s2.tcp_address.port)
233209
234210
235 @pytest.mark.run_loop
236211 async def test_ckquorum(redis_sentinel):
237212 assert (await redis_sentinel.check_quorum('master-no-fail'))
238213
247222 assert (await redis_sentinel.check_quorum('master-no-fail'))
248223
249224
250 @pytest.mark.run_loop
251225 async def test_set_option(redis_sentinel):
252226 assert (await redis_sentinel.set('master-no-fail', 'quorum', 10))
253227 master = await redis_sentinel.master('master-no-fail')
261235 await redis_sentinel.set('masterA', 'foo', 'bar')
262236
263237
264 @pytest.mark.run_loop
265 async def test_sentinel_role(sentinel, create_redis, loop):
266 redis = await create_redis(sentinel.tcp_address, loop=loop)
238 async def test_sentinel_role(sentinel, create_redis):
239 redis = await create_redis(sentinel.tcp_address)
267240 info = await redis.role()
268241 assert info.role == 'sentinel'
269242 assert isinstance(info.masters, list)
270243 assert 'master-no-fail' in info.masters
271244
272245
273 @pytest.mark.run_loop(timeout=30)
274 async def test_remove(redis_sentinel, start_server, loop):
246 @pytest.mark.timeout(30)
247 async def test_remove(redis_sentinel, start_server):
275248 m1 = start_server('master-to-remove')
276249 ok = await redis_sentinel.monitor(
277250 m1.name, '127.0.0.1', m1.tcp_address.port, 1)
284257 await redis_sentinel.remove('unknown-master')
285258
286259
287 @pytest.mark.run_loop(timeout=30)
288 async def test_monitor(redis_sentinel, start_server, loop, unused_port):
260 @pytest.mark.timeout(30)
261 async def test_monitor(redis_sentinel, start_server, unused_port):
289262 m1 = start_server('master-to-monitor')
290263 ok = await redis_sentinel.monitor(
291264 m1.name, '127.0.0.1', m1.tcp_address.port, 1)
295268 assert port == m1.tcp_address.port
296269
297270
298 @pytest.mark.run_loop(timeout=5)
299 async def test_sentinel_master_pool_size(sentinel, create_sentinel):
271 @pytest.mark.timeout(5)
272 async def test_sentinel_master_pool_size(sentinel, create_sentinel, caplog):
300273 redis_s = await create_sentinel([sentinel.tcp_address], timeout=1,
301274 minsize=10, maxsize=10)
302275 master = redis_s.master_for('master-no-fail')
303276 assert isinstance(master.connection, AbcPool)
304277 assert master.connection.size == 0
305278
306 with pytest.logs('aioredis.sentinel', 'DEBUG') as cm:
279 caplog.clear()
280 with caplog.at_level('DEBUG', 'aioredis.sentinel'):
307281 assert await master.ping()
308 assert len(cm.output) == 1
309 assert cm.output == [
310 "DEBUG:aioredis.sentinel:Discoverred new address {}"
311 " for master-no-fail".format(master.address),
282 assert len(caplog.record_tuples) == 1
283 assert caplog.record_tuples == [
284 ('aioredis.sentinel', logging.DEBUG,
285 "Discoverred new address {} for master-no-fail".format(
286 master.address)
287 ),
312288 ]
313289 assert master.connection.size == 10
314290 assert master.connection.freesize == 10
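The ``test_sentinel_master_pool_size`` hunk above replaces the project's old ``pytest.logs`` helper with pytest's built-in ``caplog`` fixture. A self-contained sketch of the same assertion pattern against an ordinary logger; the logger name and message here are illustrative, not taken from the repository:

import logging

def test_caplog_pattern(caplog):
    log = logging.getLogger('demo.sentinel')
    caplog.clear()
    with caplog.at_level(logging.DEBUG, 'demo.sentinel'):
        log.debug('Discovered new address %s', ('127.0.0.1', 6379))
    # record_tuples holds (logger_name, level, rendered message) triples.
    assert caplog.record_tuples == [
        ('demo.sentinel', logging.DEBUG,
         "Discovered new address ('127.0.0.1', 6379)"),
    ]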
55 SlaveNotFoundError,
66 ReadOnlyError,
77 )
8
9
10 pytestmark = pytest.redis_version(2, 8, 12, reason="Sentinel v2 required")
8 from _testutils import redis_version
9
10
11 pytestmark = redis_version(2, 8, 12, reason="Sentinel v2 required")
1112 if sys.platform == 'win32':
1213 pytestmark = pytest.mark.skip(reason="unstable on windows")
1314
1415
15 @pytest.mark.xfail
16 @pytest.mark.run_loop(timeout=40)
16 @pytest.mark.timeout(40)
1717 async def test_auto_failover(start_sentinel, start_server,
18 create_sentinel, create_connection, loop):
18 create_sentinel, create_connection):
1919 server1 = start_server('master-failover', ['slave-read-only yes'])
2020 start_server('slave-failover1', ['slave-read-only yes'], slaveof=server1)
2121 start_server('slave-failover2', ['slave-read-only yes'], slaveof=server1)
2222
23 sentinel1 = start_sentinel('sentinel-failover1', server1, quorum=2)
24 sentinel2 = start_sentinel('sentinel-failover2', server1, quorum=2)
23 sentinel1 = start_sentinel('sentinel-failover1', server1, quorum=2,
24 down_after_milliseconds=300,
25 failover_timeout=1000)
26 sentinel2 = start_sentinel('sentinel-failover2', server1, quorum=2,
27 down_after_milliseconds=300,
28 failover_timeout=1000)
29 # Wait a bit for sentinels to sync
30 await asyncio.sleep(3)
2531
2632 sp = await create_sentinel([sentinel1.tcp_address,
2733 sentinel2.tcp_address],
3844
3945 # wait failover
4046 conn = await create_connection(server1.tcp_address)
41 await conn.execute("debug", "sleep", 6)
42 await asyncio.sleep(3, loop=loop)
47 await conn.execute("debug", "sleep", 2)
4348
4449 # _, new_port = await sp.master_address(server1.name)
4550 # assert new_port != old_port
4954 assert master.address[1] != old_port
5055
5156
52 @pytest.mark.run_loop
5357 async def test_sentinel_normal(sentinel, create_sentinel):
5458 redis_sentinel = await create_sentinel([sentinel.tcp_address], timeout=1)
5559 redis = redis_sentinel.master_for('masterA')
7074
7175
7276 @pytest.mark.xfail(reason="same sentinel; single master;")
73 @pytest.mark.run_loop
7477 async def test_sentinel_slave(sentinel, create_sentinel):
7578 redis_sentinel = await create_sentinel([sentinel.tcp_address], timeout=1)
7679 redis = redis_sentinel.slave_for('masterA')
9093
9194
9295 @pytest.mark.xfail(reason="Need proper sentinel configuration")
93 @pytest.mark.run_loop # (timeout=600)
94 async def test_sentinel_slave_fail(sentinel, create_sentinel, loop):
96 async def test_sentinel_slave_fail(sentinel, create_sentinel):
9597 redis_sentinel = await create_sentinel([sentinel.tcp_address], timeout=1)
9698
9799 key, field, value = b'key:hset', b'bar', b'zap'
107109
108110 ret = await redis_sentinel.failover('masterA')
109111 assert ret is True
110 await asyncio.sleep(2, loop=loop)
112 await asyncio.sleep(2)
111113
112114 with pytest.raises(ReadOnlyError):
113115 await redis.hset(key, field, value)
114116
115117 ret = await redis_sentinel.failover('masterA')
116118 assert ret is True
117 await asyncio.sleep(2, loop=loop)
119 await asyncio.sleep(2)
118120 while True:
119121 try:
120 await asyncio.sleep(1, loop=loop)
122 await asyncio.sleep(1)
121123 await redis.hset(key, field, value)
122124 except SlaveNotFoundError:
123125 continue
126128
127129
128130 @pytest.mark.xfail(reason="Need proper sentinel configuration")
129 @pytest.mark.run_loop
130 async def test_sentinel_normal_fail(sentinel, create_sentinel, loop):
131 async def test_sentinel_normal_fail(sentinel, create_sentinel):
131132 redis_sentinel = await create_sentinel([sentinel.tcp_address], timeout=1)
132133
133134 key, field, value = b'key:hset', b'bar', b'zap'
141142 assert ret == 1
142143 ret = await redis_sentinel.failover('masterA')
143144 assert ret is True
144 await asyncio.sleep(2, loop=loop)
145 await asyncio.sleep(2)
145146 ret = await redis.hset(key, field, value)
146147 assert ret == 0
147148 ret = await redis_sentinel.failover('masterA')
148149 assert ret is True
149 await asyncio.sleep(2, loop=loop)
150 await asyncio.sleep(2)
150151 redis = redis_sentinel.slave_for('masterA')
151152 while True:
152153 try:
153154 await redis.hset(key, field, value)
154 await asyncio.sleep(1, loop=loop)
155 await asyncio.sleep(1)
155156 # redis = await get_slave_connection()
156157 except ReadOnlyError:
157158 break
158159
159160
160 @pytest.mark.xfail(reason="same sentinel; single master;")
161 @pytest.mark.run_loop
162 async def test_failover_command(sentinel, create_sentinel, loop):
163 master_name = 'masterA'
164 redis_sentinel = await create_sentinel([sentinel.tcp_address], timeout=1)
165
166 orig_master = await redis_sentinel.master_address(master_name)
167 ret = await redis_sentinel.failover(master_name)
168 assert ret is True
169 await asyncio.sleep(2, loop=loop)
170
171 new_master = await redis_sentinel.master_address(master_name)
161 @pytest.mark.timeout(30)
162 async def test_failover_command(start_server, start_sentinel,
163 create_sentinel):
164 server = start_server('master-failover-cmd', ['slave-read-only yes'])
165 start_server('slave-failover-cmd', ['slave-read-only yes'], slaveof=server)
166
167 sentinel = start_sentinel('sentinel-failover-cmd', server, quorum=1,
168 down_after_milliseconds=300,
169 failover_timeout=1000)
170
171 name = 'master-failover-cmd'
172 redis_sentinel = await create_sentinel([sentinel.tcp_address], timeout=1)
173 # Wait a bit for sentinels to sync
174 await asyncio.sleep(3)
175
176 orig_master = await redis_sentinel.master_address(name)
177 assert await redis_sentinel.failover(name) is True
178 await asyncio.sleep(2)
179
180 new_master = await redis_sentinel.master_address(name)
172181 assert orig_master != new_master
173182
174 ret = await redis_sentinel.failover(master_name)
175 assert ret is True
176 await asyncio.sleep(2, loop=loop)
177
178 new_master = await redis_sentinel.master_address(master_name)
183 ret = await redis_sentinel.failover(name)
184 assert ret is True
185 await asyncio.sleep(2)
186
187 new_master = await redis_sentinel.master_address(name)
179188 assert orig_master == new_master
180189
181 redis = redis_sentinel.slave_for(master_name)
182 key, field, value = b'key:hset', b'bar', b'zap'
183 while True:
184 try:
185 await asyncio.sleep(1, loop=loop)
186 await redis.hset(key, field, value)
187 except SlaveNotFoundError:
188 pass
189 except ReadOnlyError:
190 break
190 # This part takes almost 10 seconds (waiting for '+convert-to-slave').
191	    # Disabled for the time being.
192
193 # redis = redis_sentinel.slave_for(name)
194 # while True:
195 # try:
196 # await asyncio.sleep(.2)
197 # await redis.set('foo', 'bar')
198 # except SlaveNotFoundError:
199 # pass
200 # except ReadOnlyError:
201 # break
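Both sentinel test files above now import ``redis_version`` from ``_testutils`` instead of using ``pytest.redis_version``. The helper itself is not shown in this part of the diff; the following is only a plausible sketch of such a marker factory, not the repository's actual code:

import pytest

def redis_version(*version, reason):
    """Build a marker that a fixture or hook can later inspect to skip
    tests when the connected Redis server is older than `version`."""
    assert 1 < len(version) <= 3, version
    assert all(isinstance(v, int) for v in version), version
    return pytest.mark.redis_version(version=version, reason=reason)

A conftest hook would still have to read this marker and call ``pytest.skip(reason)`` against the actual server version.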
44 from unittest import mock
55
66 from aioredis import ReplyError
7
8
9 @pytest.mark.run_loop
7 from _testutils import redis_version
8
9
1010 async def test_client_list(redis, server, request):
1111 name = request.node.callspec.id
1212 assert (await redis.client_setname(name))
3939 assert expected in res
4040
4141
42 @pytest.mark.run_loop
4342 @pytest.mark.skipif(sys.platform == 'win32',
4443 reason="No unixsocket on Windows")
45 async def test_client_list__unixsocket(create_redis, loop, server, request):
46 redis = await create_redis(server.unixsocket, loop=loop)
44 async def test_client_list__unixsocket(create_redis, server, request):
45 redis = await create_redis(server.unixsocket)
4746 name = request.node.callspec.id
4847 assert (await redis.client_setname(name))
4948 res = await redis.client_list()
7473 assert expected in info
7574
7675
77 @pytest.mark.run_loop
78 @pytest.redis_version(
76 @redis_version(
7977 2, 9, 50, reason='CLIENT PAUSE is available since redis >= 2.9.50')
8078 async def test_client_pause(redis):
81 ts = time.time()
82 res = await redis.client_pause(2000)
83 assert res is True
84 await redis.ping()
85 assert int(time.time() - ts) >= 2
79 tr = redis.pipeline()
80 tr.time()
81 tr.client_pause(100)
82 tr.time()
83 t1, ok, t2 = await tr.execute()
84 assert ok
85 assert t2 - t1 >= .1
8686
8787 with pytest.raises(TypeError):
8888 await redis.client_pause(2.0)
9090 await redis.client_pause(-1)
9191
9292
93 @pytest.mark.run_loop
9493 async def test_client_getname(redis):
9594 res = await redis.client_getname()
9695 assert res is None
103102 assert res == 'TestClient'
104103
105104
106 @pytest.redis_version(2, 8, 13, reason="available since Redis 2.8.13")
107 @pytest.mark.run_loop
105 @redis_version(2, 8, 13, reason="available since Redis 2.8.13")
108106 async def test_command(redis):
109107 res = await redis.command()
110108 assert isinstance(res, list)
111109 assert len(res) > 0
112110
113111
114 @pytest.redis_version(2, 8, 13, reason="available since Redis 2.8.13")
115 @pytest.mark.run_loop
112 @redis_version(2, 8, 13, reason="available since Redis 2.8.13")
116113 async def test_command_count(redis):
117114 res = await redis.command_count()
118115 assert res > 0
119116
120117
121 @pytest.redis_version(3, 0, 0, reason="available since Redis 3.0.0")
122 @pytest.mark.run_loop
118 @redis_version(3, 0, 0, reason="available since Redis 3.0.0")
123119 async def test_command_getkeys(redis):
124120 res = await redis.command_getkeys('get', 'key')
125121 assert res == ['key']
136132 assert not (await redis.command_getkeys(None))
137133
138134
139 @pytest.redis_version(2, 8, 13, reason="available since Redis 2.8.13")
140 @pytest.mark.run_loop
135 @redis_version(2, 8, 13, reason="available since Redis 2.8.13")
141136 async def test_command_info(redis):
142137 res = await redis.command_info('get')
143138 assert res == [
150145 assert res == [None, None]
151146
152147
153 @pytest.mark.run_loop
154148 async def test_config_get(redis, server):
155149 res = await redis.config_get('port')
156150 assert res == {'port': str(server.tcp_address.port)}
165159 await redis.config_get(b'port')
166160
167161
168 @pytest.mark.run_loop
169162 async def test_config_rewrite(redis):
170163 with pytest.raises(ReplyError):
171164 await redis.config_rewrite()
172165
173166
174 @pytest.mark.run_loop
175167 async def test_config_set(redis):
176168 cur_value = await redis.config_get('slave-read-only')
177169 res = await redis.config_set('slave-read-only', 'no')
186178 await redis.config_set(100, 'databases')
187179
188180
189 # @pytest.mark.run_loop
190181 # @pytest.mark.skip("Not implemented")
191182 # def test_config_resetstat():
192183 # pass
193184
194 @pytest.mark.run_loop
195185 async def test_debug_object(redis):
196186 with pytest.raises(ReplyError):
197187 assert (await redis.debug_object('key')) is None
202192 assert res is not None
203193
204194
205 @pytest.mark.run_loop
206195 async def test_debug_sleep(redis):
207196 t1 = await redis.time()
208 ok = await redis.debug_sleep(2)
197 ok = await redis.debug_sleep(.2)
209198 assert ok
210199 t2 = await redis.time()
211 assert t2 - t1 >= 2
212
213
214 @pytest.mark.run_loop
200 assert t2 - t1 >= .2
201
202
215203 async def test_dbsize(redis):
216204 res = await redis.dbsize()
217205 assert res == 0
229217 assert res == 1
230218
231219
232 @pytest.mark.run_loop
233220 async def test_info(redis):
234221 res = await redis.info()
235222 assert isinstance(res, dict)
241228 await redis.info('')
242229
243230
244 @pytest.mark.run_loop
245231 async def test_lastsave(redis):
246232 res = await redis.lastsave()
247233 assert res > 0
248234
249235
250 @pytest.mark.run_loop
251 @pytest.redis_version(2, 8, 12, reason='ROLE is available since redis>=2.8.12')
236 @redis_version(2, 8, 12, reason='ROLE is available since redis>=2.8.12')
252237 async def test_role(redis):
253238 res = await redis.role()
254239 assert dict(res._asdict()) == {
258243 }
259244
260245
261 @pytest.mark.run_loop
262246 async def test_save(redis):
263247 res = await redis.dbsize()
264248 assert res == 0
269253 assert t2 >= t1
270254
271255
272 @pytest.mark.run_loop
273 async def test_time(redis):
256 @pytest.mark.parametrize('encoding', [
257 pytest.param(None, id='no decoding'),
258 pytest.param('utf-8', id='with decoding'),
259 ])
260 async def test_time(create_redis, server, encoding):
261	    redis = await create_redis(server.tcp_address, encoding=encoding)
262 now = time.time()
274263 res = await redis.time()
275264 assert isinstance(res, float)
276 pytest.assert_almost_equal(int(res), int(time.time()), delta=10)
277
278
279 @pytest.mark.run_loop
280 async def test_time_with_encoding(create_redis, server, loop):
281 redis = await create_redis(server.tcp_address, loop=loop,
282 encoding='utf-8')
283 res = await redis.time()
284 assert isinstance(res, float)
285 pytest.assert_almost_equal(int(res), int(time.time()), delta=10)
286
287
288 @pytest.mark.run_loop
265 assert res == pytest.approx(now, abs=10)
266
267
289268 async def test_slowlog_len(redis):
290269 res = await redis.slowlog_len()
291270 assert res >= 0
292271
293272
294 @pytest.mark.run_loop
295273 async def test_slowlog_get(redis):
296274 res = await redis.slowlog_get()
297275 assert isinstance(res, list)
307285 assert not (await redis.slowlog_get('1'))
308286
309287
310 @pytest.mark.run_loop
311288 async def test_slowlog_reset(redis):
312289 ok = await redis.slowlog_reset()
313290 assert ok is True
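The rewritten ``test_time`` above compares the server clock to the local clock with ``pytest.approx`` instead of the removed ``pytest.assert_almost_equal`` helper. A tiny standalone illustration of the same tolerance-based comparison (the values are made up):

import time
import pytest

def test_approx_tolerance():
    now = time.time()
    # Stand-in for a value returned by Redis TIME; anything within
    # +/- 10 seconds of `now` satisfies the assertion.
    server_time = now + 3.2
    assert server_time == pytest.approx(now, abs=10)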
00 import pytest
1
2 from aioredis import ReplyError
3 from _testutils import redis_version
14
25
36 async def add(redis, key, members):
58 assert ok == 1
69
710
8 @pytest.mark.run_loop
911 async def test_sadd(redis):
1012 key, member = b'key:sadd', b'hello'
1113 # add member to the set, expected result: 1
2426 await redis.sadd(None, 10)
2527
2628
27 @pytest.mark.run_loop
2829 async def test_scard(redis):
2930 key, member = b'key:scard', b'hello'
3031
4344 await redis.scard(None)
4445
4546
46 @pytest.mark.run_loop
4747 async def test_sdiff(redis):
4848 key1 = b'key:sdiff:1'
4949 key2 = b'key:sdiff:2'
7171 await redis.sdiff(key1, None)
7272
7373
74 @pytest.mark.run_loop
7574 async def test_sdiffstore(redis):
7675 key1 = b'key:sdiffstore:1'
7776 key2 = b'key:sdiffstore:2'
103102 await redis.sdiffstore(destkey, key1, None)
104103
105104
106 @pytest.mark.run_loop
107105 async def test_sinter(redis):
108106 key1 = b'key:sinter:1'
109107 key2 = b'key:sinter:2'
131129 await redis.sinter(key1, None)
132130
133131
134 @pytest.mark.run_loop
135132 async def test_sinterstore(redis):
136133 key1 = b'key:sinterstore:1'
137134 key2 = b'key:sinterstore:2'
163160 await redis.sinterstore(destkey, key1, None)
164161
165162
166 @pytest.mark.run_loop
167163 async def test_sismember(redis):
168164 key, member = b'key:sismember', b'hello'
169165 # add member to the set, expected result: 1
181177 await redis.sismember(None, b'world')
182178
183179
184 @pytest.mark.run_loop
185180 async def test_smembers(redis):
186181 key = b'key:smembers'
187182 member1 = b'hello'
206201 await redis.smembers(None)
207202
208203
209 @pytest.mark.run_loop
210204 async def test_smove(redis):
211205 key1 = b'key:smove:1'
212206 key2 = b'key:smove:2'
246240 await redis.smove(key1, None, member1)
247241
248242
249 @pytest.mark.run_loop
250243 async def test_spop(redis):
251244 key = b'key:spop:1'
252245 members = b'one', b'two', b'three'
276269 await redis.spop(None)
277270
278271
279 @pytest.mark.run_loop
272 @redis_version(
273 3, 2, 0,
274 reason="The count argument in SPOP is available since redis>=3.2.0"
275 )
276 async def test_spop_count(redis):
277 key = b'key:spop:1'
278 members1 = b'one', b'two', b'three'
279 await redis.sadd(key, *members1)
280
281 # fetch 3 random members
282 test_result1 = await redis.spop(key, 3)
283 assert len(test_result1) == 3
284 assert set(test_result1).issubset(members1) is True
285
286 members2 = 'four', 'five', 'six'
287 await redis.sadd(key, *members2)
288
289 # test with encoding, fetch 3 random members
290 test_result2 = await redis.spop(key, 3, encoding='utf-8')
291 assert len(test_result2) == 3
292 assert set(test_result2).issubset(members2) is True
293
294 # try to pop data from empty set
295 test_result = await redis.spop(b'not:' + key, 2)
296 assert len(test_result) == 0
297
298 # test with negative counter
299 with pytest.raises(ReplyError):
300 await redis.spop(key, -2)
301
302	    # test with a counter of zero
303 test_result3 = await redis.spop(key, 0)
304 assert len(test_result3) == 0
305
306
280307 async def test_srandmember(redis):
281308 key = b'key:srandmember:1'
282309 members = b'one', b'two', b'three', b'four', b'five', b'six', b'seven'
314341 await redis.srandmember(None)
315342
316343
317 @pytest.mark.run_loop
318344 async def test_srem(redis):
319345 key = b'key:srem:1'
320346 members = b'one', b'two', b'three', b'four', b'five', b'six', b'seven'
339365 await redis.srem(None, members)
340366
341367
342 @pytest.mark.run_loop
343368 async def test_sunion(redis):
344369 key1 = b'key:sunion:1'
345370 key2 = b'key:sunion:2'
367392 await redis.sunion(key1, None)
368393
369394
370 @pytest.mark.run_loop
371395 async def test_sunionstore(redis):
372396 key1 = b'key:sunionstore:1'
373397 key2 = b'key:sunionstore:2'
399423 await redis.sunionstore(destkey, key1, None)
400424
401425
402 @pytest.redis_version(2, 8, 0, reason='SSCAN is available since redis>=2.8.0')
403 @pytest.mark.run_loop
426 @redis_version(2, 8, 0, reason='SSCAN is available since redis>=2.8.0')
404427 async def test_sscan(redis):
405428 key = b'key:sscan'
406429 for i in range(1, 11):
430453 await redis.sscan(None)
431454
432455
433 @pytest.redis_version(2, 8, 0, reason='SSCAN is available since redis>=2.8.0')
434 @pytest.mark.run_loop
456 @redis_version(2, 8, 0, reason='SSCAN is available since redis>=2.8.0')
435457 async def test_isscan(redis):
436458 key = b'key:sscan'
437459 for i in range(1, 11):
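``test_spop_count`` above exercises the ``count`` argument added to ``spop`` in 1.3.0 (issue #485). A minimal sketch, assuming a Redis server at redis://localhost:

import asyncio
import aioredis

async def main():
    redis = await aioredis.create_redis('redis://localhost')
    await redis.sadd('colors', 'red', 'green', 'blue', 'cyan')
    # Pop up to three random members in a single round trip.
    popped = await redis.spop('colors', 3, encoding='utf-8')
    print(popped)            # e.g. ['blue', 'red', 'cyan']
    redis.close()
    await redis.wait_closed()

asyncio.run(main())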
00 import itertools
1
12 import pytest
23
3
4 @pytest.mark.run_loop
4 from _testutils import redis_version
5
6
7 @redis_version(5, 0, 0, reason='BZPOPMAX is available since redis>=5.0.0')
8 async def test_bzpopmax(redis):
9 key1 = b'key:zpopmax:1'
10 key2 = b'key:zpopmax:2'
11
12 pairs = [
13 (0, b'a'), (5, b'c'), (2, b'd'), (8, b'e'), (9, b'f'), (3, b'g')
14 ]
15 await redis.zadd(key1, *pairs[0])
16 await redis.zadd(key2, *itertools.chain.from_iterable(pairs))
17
18 res = await redis.bzpopmax(key1, timeout=0)
19 assert res == [key1, b'a', b'0']
20 res = await redis.bzpopmax(key1, key2, timeout=0)
21 assert res == [key2, b'f', b'9']
22
23 with pytest.raises(TypeError):
24 await redis.bzpopmax(key1, timeout=b'one')
25 with pytest.raises(ValueError):
26 await redis.bzpopmax(key2, timeout=-10)
27
28
29 @redis_version(5, 0, 0, reason='BZPOPMIN is available since redis>=5.0.0')
30 async def test_bzpopmin(redis):
31 key1 = b'key:zpopmin:1'
32 key2 = b'key:zpopmin:2'
33
34 pairs = [
35 (0, b'a'), (5, b'c'), (2, b'd'), (8, b'e'), (9, b'f'), (3, b'g')
36 ]
37 await redis.zadd(key1, *pairs[0])
38 await redis.zadd(key2, *itertools.chain.from_iterable(pairs))
39
40 res = await redis.bzpopmin(key1, timeout=0)
41 assert res == [key1, b'a', b'0']
42 res = await redis.bzpopmin(key1, key2, timeout=0)
43 assert res == [key2, b'a', b'0']
44
45 with pytest.raises(TypeError):
46 await redis.bzpopmin(key1, timeout=b'one')
47 with pytest.raises(ValueError):
48 await redis.bzpopmin(key2, timeout=-10)
49
50
551 async def test_zadd(redis):
652 key = b'key:zadd'
753 res = await redis.zadd(key, 1, b'one')
2874 await redis.zadd(key, 3, b'three', 'four', 4)
2975
3076
31 @pytest.redis_version(
77 @redis_version(
3278 3, 0, 2, reason='ZADD options is available since redis>=3.0.2',
3379 )
34 @pytest.mark.run_loop
3580 async def test_zadd_options(redis):
3681 key = b'key:zaddopt'
3782
65110 res = await redis.zrange(key, 0, -1, withscores=False)
66111 assert res == [b'one', b'two']
67112
68
69 @pytest.mark.run_loop
113 res = await redis.zadd(key, 1, b'two', changed=True)
114 assert res == 1
115
116 res = await redis.zadd(key, 1, b'two', incr=True)
117 assert int(res) == 2
118
119 with pytest.raises(ValueError):
120 await redis.zadd(key, 1, b'one', 2, b'two', incr=True)
121
122
70123 async def test_zcard(redis):
71124 key = b'key:zcard'
72125 pairs = [1, b'one', 2, b'two', 3, b'three']
83136 await redis.zcard(None)
84137
85138
86 @pytest.mark.run_loop
87139 async def test_zcount(redis):
88140 key = b'key:zcount'
89141 pairs = [1, b'one', 1, b'uno', 2.5, b'two', 3, b'three', 7, b'seven']
127179 await redis.zcount(key, 10, 1)
128180
129181
130 @pytest.mark.run_loop
131182 async def test_zincrby(redis):
132183 key = b'key:zincrby'
133184 pairs = [1, b'one', 1, b'uno', 2.5, b'two', 3, b'three']
147198 await redis.zincrby(key, 'one', 5)
148199
149200
150 @pytest.mark.run_loop
151201 async def test_zinterstore(redis):
152202 zset1 = [2, 'one', 2, 'two']
153203 zset2 = [3, 'one', 3, 'three']
195245 assert res == [(b'one', 10)]
196246
197247
198 @pytest.redis_version(
248 @redis_version(
199249 2, 8, 9, reason='ZLEXCOUNT is available since redis>=2.8.9')
200 @pytest.mark.run_loop
201250 async def test_zlexcount(redis):
202251 key = b'key:zlexcount'
203252 pairs = [0, b'a', 0, b'b', 0, b'c', 0, b'd', 0, b'e']
221270
222271
223272 @pytest.mark.parametrize('encoding', [None, 'utf-8'])
224 @pytest.mark.run_loop
225273 async def test_zrange(redis, encoding):
226274 key = b'key:zrange'
227275 scores = [1, 1, 2.5, 3, 7]
252300 await redis.zrange(key, 0, 'last')
253301
254302
255 @pytest.redis_version(
303 @redis_version(
256304 2, 8, 9, reason='ZRANGEBYLEX is available since redis>=2.8.9')
257 @pytest.mark.run_loop
258305 async def test_zrangebylex(redis):
259306 key = b'key:zrangebylex'
260307 scores = [0] * 5
298345 offset=1, count='one')
299346
300347
301 @pytest.mark.run_loop
302348 async def test_zrank(redis):
303349 key = b'key:zrank'
304350 scores = [1, 1, 2.5, 3, 7]
320366
321367
322368 @pytest.mark.parametrize('encoding', [None, 'utf-8'])
323 @pytest.mark.run_loop
324369 async def test_zrangebyscore(redis, encoding):
325370 key = b'key:zrangebyscore'
326371 scores = [1, 1, 2.5, 3, 7]
364409 await redis.zrangebyscore(key, 1, 7, offset=1, count='one')
365410
366411
367 @pytest.mark.run_loop
368412 async def test_zrem(redis):
369413 key = b'key:zrem'
370414 scores = [1, 1, 2.5, 3, 7]
390434 await redis.zrem(None, b'one')
391435
392436
393 @pytest.redis_version(
437 @redis_version(
394438 2, 8, 9, reason='ZREMRANGEBYLEX is available since redis>=2.8.9')
395 @pytest.mark.run_loop
396439 async def test_zremrangebylex(redis):
397440 key = b'key:zremrangebylex'
398441 members = [b'aaaa', b'b', b'c', b'd', b'e', b'foo', b'zap', b'zip',
431474 await redis.zremrangebylex(key, b'a', 20)
432475
433476
434 @pytest.mark.run_loop
435477 async def test_zremrangebyrank(redis):
436478 key = b'key:zremrangebyrank'
437479 scores = [0, 1, 2, 3, 4, 5]
458500 await redis.zremrangebyrank(key, 0, 'last')
459501
460502
461 @pytest.mark.run_loop
462503 async def test_zremrangebyscore(redis):
463504 key = b'key:zremrangebyscore'
464505 scores = [1, 1, 2.5, 3, 7]
493534
494535
495536 @pytest.mark.parametrize('encoding', [None, 'utf-8'])
496 @pytest.mark.run_loop
497537 async def test_zrevrange(redis, encoding):
498538 key = b'key:zrevrange'
499539 scores = [1, 1, 2.5, 3, 7]
528568 await redis.zrevrange(key, 0, 'last')
529569
530570
531 @pytest.mark.run_loop
532571 async def test_zrevrank(redis):
533572 key = b'key:zrevrank'
534573 scores = [1, 1, 2.5, 3, 7]
549588 await redis.zrevrank(None, b'one')
550589
551590
552 @pytest.mark.run_loop
553591 async def test_zscore(redis):
554592 key = b'key:zscore'
555593 scores = [1, 1, 2.5, 3, 7]
569607 assert res is None
570608
571609
572 @pytest.mark.run_loop
573610 async def test_zunionstore(redis):
574611 zset1 = [2, 'one', 2, 'two']
575612 zset2 = [3, 'one', 3, 'three']
618655
619656
620657 @pytest.mark.parametrize('encoding', [None, 'utf-8'])
621 @pytest.mark.run_loop
622658 async def test_zrevrangebyscore(redis, encoding):
623659 key = b'key:zrevrangebyscore'
624660 scores = [1, 1, 2.5, 3, 7]
663699 await redis.zrevrangebyscore(key, 1, 7, offset=1, count='one')
664700
665701
666 @pytest.redis_version(
702 @redis_version(
667703 2, 8, 9, reason='ZREVRANGEBYLEX is available since redis>=2.8.9')
668 @pytest.mark.run_loop
669704 async def test_zrevrangebylex(redis):
670705 key = b'key:zrevrangebylex'
671706 scores = [0] * 5
711746 offset=1, count='one')
712747
713748
714 @pytest.redis_version(2, 8, 0, reason='ZSCAN is available since redis>=2.8.0')
715 @pytest.mark.run_loop
749 @redis_version(2, 8, 0, reason='ZSCAN is available since redis>=2.8.0')
716750 async def test_zscan(redis):
717751 key = b'key:zscan'
718752 scores, members = [], []
745779 await redis.zscan(None)
746780
747781
748 @pytest.redis_version(2, 8, 0, reason='ZSCAN is available since redis>=2.8.0')
749 @pytest.mark.run_loop
782 @redis_version(2, 8, 0, reason='ZSCAN is available since redis>=2.8.0')
750783 async def test_izscan(redis):
751784 key = b'key:zscan'
752785 scores, members = [], []
783816
784817 with pytest.raises(TypeError):
785818 await redis.izscan(None)
819
820
821 @redis_version(5, 0, 0, reason='ZPOPMAX is available since redis>=5.0.0')
822 async def test_zpopmax(redis):
823 key = b'key:zpopmax'
824
825 pairs = [
826 (0, b'a'), (5, b'c'), (2, b'd'), (8, b'e'), (9, b'f'), (3, b'g')
827 ]
828 await redis.zadd(key, *itertools.chain.from_iterable(pairs))
829
830 assert await redis.zpopmax(key) == [b'f', b'9']
831 assert await redis.zpopmax(key, 3) == [b'e', b'8', b'c', b'5', b'g', b'3']
832
833 with pytest.raises(TypeError):
834 await redis.zpopmax(key, b'b')
835
836
837 @redis_version(5, 0, 0, reason='ZPOPMIN is available since redis>=5.0.0')
838 async def test_zpopmin(redis):
839 key = b'key:zpopmin'
840
841 pairs = [
842 (0, b'a'), (5, b'c'), (2, b'd'), (8, b'e'), (9, b'f'), (3, b'g')
843 ]
844 await redis.zadd(key, *itertools.chain.from_iterable(pairs))
845
846 assert await redis.zpopmin(key) == [b'a', b'0']
847 assert await redis.zpopmin(key, 3) == [b'd', b'2', b'g', b'3', b'c', b'5']
848
849 with pytest.raises(TypeError):
850 await redis.zpopmin(key, b'b')
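The new tests above cover ``zpopmax``/``zpopmin`` and their blocking counterparts added in 1.3.0 (issues #550 and #618). A short sketch of the non-blocking calls, assuming a local server:

import asyncio
import aioredis

async def main():
    redis = await aioredis.create_redis('redis://localhost')
    await redis.zadd('scores', 5, 'carol', 8, 'erin', 9, 'frank')
    # Replies interleave member and score, highest (or lowest) first.
    print(await redis.zpopmax('scores'))       # [b'frank', b'9']
    print(await redis.zpopmin('scores', 2))    # [b'carol', b'5', b'erin', b'8']
    redis.close()
    await redis.wait_closed()

asyncio.run(main())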
0 import pytest
10
2
3 @pytest.mark.run_loop
4 async def test_ssl_connection(create_connection, loop, server, ssl_proxy):
1 async def test_ssl_connection(create_connection, server, ssl_proxy):
52 ssl_port, ssl_ctx = ssl_proxy(server.tcp_address.port)
63
74 conn = await create_connection(
8 ('localhost', ssl_port), ssl=ssl_ctx, loop=loop)
5 ('localhost', ssl_port), ssl=ssl_ctx)
96 res = await conn.execute('ping')
107 assert res == b'PONG'
118
129
13 @pytest.mark.run_loop
14 async def test_ssl_redis(create_redis, loop, server, ssl_proxy):
10 async def test_ssl_redis(create_redis, server, ssl_proxy):
1511 ssl_port, ssl_ctx = ssl_proxy(server.tcp_address.port)
1612
1713 redis = await create_redis(
18 ('localhost', ssl_port), ssl=ssl_ctx, loop=loop)
14 ('localhost', ssl_port), ssl=ssl_ctx)
1915 res = await redis.ping()
2016 assert res == b'PONG'
2117
2218
23 @pytest.mark.run_loop
24 async def test_ssl_pool(create_pool, server, loop, ssl_proxy):
19 async def test_ssl_pool(create_pool, server, ssl_proxy):
2520 ssl_port, ssl_ctx = ssl_proxy(server.tcp_address.port)
2621
2722 pool = await create_pool(
28 ('localhost', ssl_port), ssl=ssl_ctx, loop=loop)
23 ('localhost', ssl_port), ssl=ssl_ctx)
2924 with (await pool) as conn:
3025 res = await conn.execute('PING')
3126 assert res == b'PONG'
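The SSL tests above now pass the ``ssl`` context directly to ``create_connection``/``create_redis``/``create_pool`` without a ``loop`` argument. A sketch assuming a TLS-enabled Redis endpoint on localhost:6380; the port and certificate path are placeholders:

import asyncio
import ssl
import aioredis

async def main():
    # Hypothetical self-signed certificate for the local TLS endpoint.
    ctx = ssl.create_default_context(cafile='test.crt')
    ctx.check_hostname = False
    redis = await aioredis.create_redis(('localhost', 6380), ssl=ctx)
    print(await redis.ping())    # b'PONG'
    redis.close()
    await redis.wait_closed()

asyncio.run(main())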
33 from collections import OrderedDict
44 from unittest import mock
55
6 from aioredis import ReplyError
7
8
9 @asyncio.coroutine
10 async def add_message_with_sleep(redis, loop, stream, fields):
11 await asyncio.sleep(0.2, loop=loop)
6 from aioredis.commands.streams import parse_messages
7 from aioredis.errors import BusyGroupError
8 from _testutils import redis_version
9
10 pytestmark = redis_version(
11 5, 0, 0, reason="Streams only available since Redis 5.0.0")
12
13
14 async def add_message_with_sleep(redis, stream, fields):
15 await asyncio.sleep(0.2)
1216 result = await redis.xadd(stream, fields)
1317 return result
1418
1519
16 @pytest.mark.run_loop
17 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
18 "unstable branch")
1920 async def test_xadd(redis, server_bin):
2021 fields = OrderedDict((
2122 (b'field1', b'value1'),
4041 )
4142
4243
43 @pytest.mark.run_loop
44 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
45 "unstable branch")
4644 async def test_xadd_maxlen_exact(redis, server_bin):
4745 message_id1 = await redis.xadd('test_stream', {'f1': 'v1'}) # noqa
4846
6967 assert message3[1] == OrderedDict([(b'f3', b'v3')])
7068
7169
72 @pytest.mark.run_loop
73 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
74 "unstable branch")
7570 async def test_xadd_manual_message_ids(redis, server_bin):
7671 await redis.xadd('test_stream', {'f1': 'v1'}, message_id='1515958771000-0')
7772 await redis.xadd('test_stream', {'f1': 'v1'}, message_id='1515958771000-1')
8681 ]
8782
8883
89 @pytest.mark.run_loop
90 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
91 "unstable branch")
9284 async def test_xadd_maxlen_inexact(redis, server_bin):
9385 await redis.xadd('test_stream', {'f1': 'v1'})
9486 # Ensure the millisecond-based message ID increments
110102 assert len(messages) < 1000
111103
112104
113 @pytest.mark.run_loop
114 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
115 "unstable branch")
116105 async def test_xrange(redis, server_bin):
117106 stream = 'test_stream'
118107 fields = OrderedDict((
166155 assert len(messages) == 2
167156
168157
169 @pytest.mark.run_loop
170 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
171 "unstable branch")
172158 async def test_xrevrange(redis, server_bin):
173159 stream = 'test_stream'
174160 fields = OrderedDict((
222208 assert len(messages) == 2
223209
224210
225 @pytest.mark.run_loop
226 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
227 "unstable branch")
228211 async def test_xread_selection(redis, server_bin):
229212 """Test use of counts and starting IDs"""
230213 stream = 'test_stream'
257240 assert len(messages) == 2
258241
259242
260 @pytest.mark.run_loop
261 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
262 "unstable branch")
263 async def test_xread_blocking(redis, create_redis, loop, server, server_bin):
243 async def test_xread_blocking(redis, create_redis, server, server_bin):
264244 """Test the blocking read features"""
265245 fields = OrderedDict((
266246 (b'field1', b'value1'),
267247 (b'field2', b'value2'),
268248 ))
269249 other_redis = await create_redis(
270 server.tcp_address, loop=loop)
250 server.tcp_address)
271251
272252 # create blocking task in separate connection
273253 consumer = other_redis.xread(['test_stream'], timeout=1000)
274254
275255 producer_task = asyncio.Task(
276 add_message_with_sleep(redis, loop, 'test_stream', fields), loop=loop)
277 results = await asyncio.gather(
278 consumer, producer_task, loop=loop)
256 add_message_with_sleep(redis, 'test_stream', fields))
257 results = await asyncio.gather(consumer, producer_task)
279258
280259 received_messages, sent_message_id = results
281260 assert len(received_messages) == 1
295274 other_redis.close()
296275
297276
298 @pytest.mark.run_loop
299 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
300 "unstable branch")
301277 async def test_xgroup_create(redis, server_bin):
302278 # Also tests xinfo_groups()
303 # TODO: Remove xadd() if resolved:
304 # https://github.com/antirez/redis/issues/4824
305279 await redis.xadd('test_stream', {'a': 1})
306280 await redis.xgroup_create('test_stream', 'test_group')
307281 info = await redis.xinfo_groups('test_stream')
313287 }]
314288
315289
316 @pytest.mark.run_loop
317 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
318 "unstable branch")
290 async def test_xgroup_create_mkstream(redis, server_bin):
291 await redis.xgroup_create('test_stream', 'test_group', mkstream=True)
292 info = await redis.xinfo_groups('test_stream')
293 assert info == [{
294 b'name': b'test_group',
295 b'last-delivered-id': mock.ANY,
296 b'pending': 0,
297 b'consumers': 0
298 }]
299
300
319301 async def test_xgroup_create_already_exists(redis, server_bin):
320302 await redis.xadd('test_stream', {'a': 1})
321303 await redis.xgroup_create('test_stream', 'test_group')
322 with pytest.raises(ReplyError):
304 with pytest.raises(BusyGroupError):
323305 await redis.xgroup_create('test_stream', 'test_group')
324306
325307
326 @pytest.mark.run_loop
327 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
328 "unstable branch")
329308 async def test_xgroup_setid(redis, server_bin):
330309 await redis.xadd('test_stream', {'a': 1})
331310 await redis.xgroup_create('test_stream', 'test_group')
332311 await redis.xgroup_setid('test_stream', 'test_group', '$')
333312
334313
335 @pytest.mark.run_loop
336 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
337 "unstable branch")
338314 async def test_xgroup_destroy(redis, server_bin):
339315 await redis.xadd('test_stream', {'a': 1})
340316 await redis.xgroup_create('test_stream', 'test_group')
343319 assert not info
344320
345321
346 @pytest.mark.run_loop
347 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
348 "unstable branch")
349322 async def test_xread_group(redis):
350323 await redis.xadd('test_stream', {'a': 1})
351324 await redis.xgroup_create('test_stream', 'test_group', latest_id='0')
352325
326 # read all pending messages
353327 messages = await redis.xread_group(
354328 'test_group', 'test_consumer', ['test_stream'],
355 timeout=1000, latest_ids=[0]
329 timeout=1000, latest_ids=['>']
356330 )
357331 assert len(messages) == 1
358332 stream, message_id, fields = messages[0]
361335 assert fields == {b'a': b'1'}
362336
363337
364 @pytest.mark.run_loop
365 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
366 "unstable branch")
338 async def test_xread_group_with_no_ack(redis):
339 await redis.xadd('test_stream', {'a': 1})
340 await redis.xgroup_create('test_stream', 'test_group', latest_id='0')
341
342 # read all pending messages
343 messages = await redis.xread_group(
344 'test_group', 'test_consumer', ['test_stream'],
345 timeout=1000, latest_ids=['>'], no_ack=True
346 )
347 assert len(messages) == 1
348 stream, message_id, fields = messages[0]
349 assert stream == b'test_stream'
350 assert message_id
351 assert fields == {b'a': b'1'}
352
353
367354 async def test_xack_and_xpending(redis):
368355 # Test a full xread -> xack cycle, using xpending to check the status
369356 message_id = await redis.xadd('test_stream', {'a': 1})
377364 # Read the message
378365 await redis.xread_group(
379366 'test_group', 'test_consumer', ['test_stream'],
380 timeout=1000, latest_ids=[0]
367 timeout=1000, latest_ids=['>']
381368 )
382369
383370 # It is now pending
397384 assert pending_count == 0
398385
399386
400 @pytest.mark.run_loop
401 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
402 "unstable branch")
403387 async def test_xpending_get_messages(redis):
404388 # Like test_xack_and_xpending(), but using the start/end xpending()
405389 # params to get the messages
407391 await redis.xgroup_create('test_stream', 'test_group', latest_id='0')
408392 await redis.xread_group(
409393 'test_group', 'test_consumer', ['test_stream'],
410 timeout=1000, latest_ids=[0]
394 timeout=1000, latest_ids=['>']
411395 )
412396 await asyncio.sleep(0.05)
413397
425409 assert num_deliveries == 1
426410
427411
428 @pytest.mark.run_loop
429 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
430 "unstable branch")
431412 async def test_xpending_start_of_zero(redis):
432413 await redis.xadd('test_stream', {'a': 1})
433414 await redis.xgroup_create('test_stream', 'test_group', latest_id='0')
435416 await redis.xpending('test_stream', 'test_group', 0, '+', 10)
436417
437418
438 @pytest.mark.run_loop
439 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
440 "unstable branch")
441419 async def test_xclaim_simple(redis):
442	420	    # Put a message in a pending state, then reclaim it using XCLAIM
443421 message_id = await redis.xadd('test_stream', {'a': 1})
444422 await redis.xgroup_create('test_stream', 'test_group', latest_id='0')
445423 await redis.xread_group(
446424 'test_group', 'test_consumer', ['test_stream'],
447 timeout=1000, latest_ids=[0]
425 timeout=1000, latest_ids=['>']
448426 )
449427
450428 # Message is now pending
468446 assert pel == [[b'new_consumer', b'1']]
469447
470448
471 @pytest.mark.run_loop
472 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
473 "unstable branch")
474449 async def test_xclaim_min_idle_time_includes_messages(redis):
475450 message_id = await redis.xadd('test_stream', {'a': 1})
476451 await redis.xgroup_create('test_stream', 'test_group', latest_id='0')
477452 await redis.xread_group(
478453 'test_group', 'test_consumer', ['test_stream'],
479 timeout=1000, latest_ids=[0]
454 timeout=1000, latest_ids=['>']
480455 )
481456
482457 # Message is now pending. Wait 100ms
488463 assert result
489464
490465
491 @pytest.mark.run_loop
492 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
493 "unstable branch")
494466 async def test_xclaim_min_idle_time_excludes_messages(redis):
495467 message_id = await redis.xadd('test_stream', {'a': 1})
496468 await redis.xgroup_create('test_stream', 'test_group', latest_id='0')
497469 await redis.xread_group(
498470 'test_group', 'test_consumer', ['test_stream'],
499 timeout=1000, latest_ids=[0]
471 timeout=1000, latest_ids=['>']
500472 )
501473 # Message is now pending. Wait no time at all
502474
507479 assert not result
508480
509481
510 @pytest.mark.run_loop
511 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
512 "unstable branch")
513482 async def test_xgroup_delconsumer(redis, create_redis, server):
514483 await redis.xadd('test_stream', {'a': 1})
515484 await redis.xgroup_create('test_stream', 'test_group')
530499 assert not info
531500
532501
533 @pytest.mark.run_loop
534 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
535 "unstable branch")
502 async def test_xdel_stream(redis):
503 message_id = await redis.xadd('test_stream', {'a': 1})
504 response = await redis.xdel('test_stream', id=message_id)
505 assert response >= 0
506
507
508 async def test_xtrim_stream(redis):
509 await redis.xadd('test_stream', {'a': 1})
510 await redis.xadd('test_stream', {'b': 1})
511 await redis.xadd('test_stream', {'c': 1})
512 response = await redis.xtrim('test_stream', max_len=1, exact_len=False)
513 assert response >= 0
514
515
516 async def test_xlen_stream(redis):
517 await redis.xadd('test_stream', {'a': 1})
518 response = await redis.xlen('test_stream')
519 assert response >= 0
520
521
536522 async def test_xinfo_consumers(redis):
537523 await redis.xadd('test_stream', {'a': 1})
538524 await redis.xgroup_create('test_stream', 'test_group')
550536 assert isinstance(info[0], dict)
551537
552538
553 @pytest.mark.run_loop
554 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
555 "unstable branch")
556539 async def test_xinfo_stream(redis):
557540 await redis.xadd('test_stream', {'a': 1})
558541 await redis.xgroup_create('test_stream', 'test_group')
574557 assert isinstance(info, dict)
575558
576559
577 @pytest.mark.run_loop
578 @pytest.redis_version(999, 999, 999, reason="Streams only available on redis "
579 "unstable branch")
580560 async def test_xinfo_help(redis):
581561 info = await redis.xinfo_help()
582562 assert info
563
564
565 @pytest.mark.parametrize('param', [0.1, '1'])
566 async def test_xread_param_types(redis, param):
567 with pytest.raises(TypeError):
568 await redis.xread(
569 ["system_event_stream"],
570 timeout=param, latest_ids=[0]
571 )
572
573
574 def test_parse_messages_ok():
575 message = [(b'123', [b'f1', b'v1', b'f2', b'v2'])]
576 assert parse_messages(message) == [(b'123', {b'f1': b'v1', b'f2': b'v2'})]
577
578
579 def test_parse_messages_null_fields():
580 # Redis can sometimes respond with a fields value of 'null',
581 # so ensure we handle that sensibly
582 message = [(b'123', None)]
583 assert parse_messages(message) == []
584
585
586 def test_parse_messages_null_message():
580	    # Redis can sometimes respond with a message value of 'null',
588 # so ensure we handle that sensibly
589 message = [None]
590 assert parse_messages(message) == []
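The stream hunks above switch ``latest_ids`` to the special ``'>'`` ID, add ``mkstream=True`` to ``xgroup_create`` and ``no_ack`` to ``xread_group``, and introduce ``xdel``/``xtrim``/``xlen``. A hedged sketch of a consumer-group read/ack cycle built from those calls, assuming a local server; the ``xack`` signature used here (stream, group, message id) is inferred from the surrounding tests rather than shown in this diff:

import asyncio
import aioredis

async def main():
    redis = await aioredis.create_redis('redis://localhost')
    # mkstream=True creates the stream together with the group.
    await redis.xgroup_create('events', 'workers', latest_id='$',
                              mkstream=True)
    await redis.xadd('events', {'type': 'signup', 'user': 'alice'})

    # '>' requests only messages never delivered to this group before.
    messages = await redis.xread_group(
        'workers', 'worker-1', ['events'],
        timeout=1000, latest_ids=['>'])
    for stream, message_id, fields in messages:
        print(stream, message_id, fields)
        await redis.xack('events', 'workers', message_id)

    redis.close()
    await redis.wait_closed()

asyncio.run(main())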
1616 return reader
1717
1818
19 @pytest.mark.run_loop
2019 async def test_feed_and_parse(reader):
2120 reader.feed_data(b'+PONG\r\n')
2221 assert (await reader.readobj()) == b'PONG'
2322
2423
25 @pytest.mark.run_loop
2624 async def test_buffer_available_after_RST(reader):
2725 reader.feed_data(b'+PONG\r\n')
2826 reader.set_exception(Exception())
4543 'read_method',
4644 ['read', 'readline', 'readuntil', 'readexactly']
4745 )
48 @pytest.mark.run_loop
4946 async def test_read_flavors_not_supported(reader, read_method):
5047 with pytest.raises(RuntimeError):
5148 await getattr(reader, read_method)()
11 import pytest
22
33 from aioredis import ReplyError
4 from _testutils import redis_version
45
56
67 async def add(redis, key, value):
89 assert ok is True
910
1011
11 @pytest.mark.run_loop
1212 async def test_append(redis):
1313 len_ = await redis.append('my-key', 'Hello')
1414 assert len_ == 5
2424 await redis.append('none-key', None)
2525
2626
27 @pytest.mark.run_loop
2827 async def test_bitcount(redis):
2928 await add(redis, 'my-key', b'\x00\x10\x01')
3029
5554 await redis.bitcount('my-key', 2, None)
5655
5756
58 @pytest.mark.run_loop
5957 async def test_bitop_and(redis):
6058 key1, value1 = b'key:bitop:and:1', 5
6159 key2, value2 = b'key:bitop:and:2', 7
7775 await redis.bitop_and(destkey, key1, None)
7876
7977
80 @pytest.mark.run_loop
8178 async def test_bitop_or(redis):
8279 key1, value1 = b'key:bitop:or:1', 5
8380 key2, value2 = b'key:bitop:or:2', 7
9996 await redis.bitop_or(destkey, key1, None)
10097
10198
102 @pytest.mark.run_loop
10399 async def test_bitop_xor(redis):
104100 key1, value1 = b'key:bitop:xor:1', 5
105101 key2, value2 = b'key:bitop:xor:2', 7
121117 await redis.bitop_xor(destkey, key1, None)
122118
123119
124 @pytest.mark.run_loop
125120 async def test_bitop_not(redis):
126121 key1, value1 = b'key:bitop:not:1', 5
127122 await add(redis, key1, value1)
138133 await redis.bitop_not(destkey, None)
139134
140135
141 @pytest.redis_version(2, 8, 0, reason='BITPOS is available since redis>=2.8.0')
142 @pytest.mark.run_loop
136 @redis_version(2, 8, 0, reason='BITPOS is available since redis>=2.8.0')
143137 async def test_bitpos(redis):
144138 key, value = b'key:bitop', b'\xff\xf0\x00'
145139 await add(redis, key, value)
172166 test_value = await redis.bitpos(key, 7)
173167
174168
175 @pytest.mark.run_loop
176169 async def test_decr(redis):
177170 await redis.delete('key')
178171
191184 await redis.decr(None)
192185
193186
194 @pytest.mark.run_loop
195187 async def test_decrby(redis):
196188 await redis.delete('key')
197189
214206 await redis.decrby('key', None)
215207
216208
217 @pytest.mark.run_loop
218209 async def test_get(redis):
219210 await add(redis, 'my-key', 'value')
220211 ret = await redis.get('my-key')
231222 await redis.get(None)
232223
233224
234 @pytest.mark.run_loop
235225 async def test_getbit(redis):
236226 key, value = b'key:getbit', 10
237227 await add(redis, key, value)
259249 await redis.getbit(key, -7)
260250
261251
262 @pytest.mark.run_loop
263252 async def test_getrange(redis):
264253 key, value = b'key:getrange', b'This is a string'
265254 await add(redis, key, value)
293282 await redis.getrange(key, 0, b'seven')
294283
295284
296 @pytest.mark.run_loop
297285 async def test_getset(redis):
298286 key, value = b'key:getset', b'hello'
299287 await add(redis, key, value)
318306 await redis.getset(None, b'asyncio')
319307
320308
321 @pytest.mark.run_loop
322309 async def test_incr(redis):
323310 await redis.delete('key')
324311
337324 await redis.incr(None)
338325
339326
340 @pytest.mark.run_loop
341327 async def test_incrby(redis):
342328 await redis.delete('key')
343329
360346 await redis.incrby('key', None)
361347
362348
363 @pytest.mark.run_loop
364349 async def test_incrbyfloat(redis):
365350 await redis.delete('key')
366351
387372 await redis.incrbyfloat('key', '1.0')
388373
389374
390 @pytest.mark.run_loop
391375 async def test_mget(redis):
392376 key1, value1 = b'foo', b'bar'
393377 key2, value2 = b'baz', b'bzz'
412396 await redis.mget(key1, None)
413397
414398
415 @pytest.mark.run_loop
416399 async def test_mset(redis):
417400 key1, value1 = b'key:mset:1', b'hello'
418401 key2, value2 = b'key:mset:2', b'world'
432415 await redis.mset(key1, value1, key1)
433416
434417
435 @pytest.mark.run_loop
418 async def test_mset_with_dict(redis):
419 array = [str(n) for n in range(10)]
420 _dict = dict.fromkeys(array, 'default value', )
421
422 await redis.mset(_dict)
423
424 test_values = await redis.mget(*_dict.keys())
425 assert test_values == [str.encode(val) for val in _dict.values()]
426
427 with pytest.raises(TypeError):
428 await redis.mset('param', )
429
430
436431 async def test_msetnx(redis):
437432 key1, value1 = b'key:msetnx:1', b'Hello'
438433 key2, value2 = b'key:msetnx:2', b'there'
453448 await redis.msetnx(key1, value1, key2)
454449
455450
456 @pytest.mark.run_loop
457 async def test_psetex(redis, loop):
451 async def test_psetex(redis):
458452 key, value = b'key:psetex:1', b'Hello'
459453 # test expiration in milliseconds
460454 tr = redis.multi_exec()
465459 test_value = await fut2
466460 assert test_value == value
467461
468 await asyncio.sleep(0.050, loop=loop)
462 await asyncio.sleep(0.050)
469463 test_value = await redis.get(key)
470464 assert test_value is None
471465
475469 await redis.psetex(key, 7.5, value)
476470
477471
478 @pytest.mark.run_loop
479472 async def test_set(redis):
480473 ok = await redis.set('my-key', 'value')
481474 assert ok is True
490483 await redis.set(None, 'value')
491484
492485
493 @pytest.mark.run_loop
494 async def test_set_expire(redis, loop):
486 async def test_set_expire(redis):
495487 key, value = b'key:set:expire', b'foo'
496488 # test expiration in milliseconds
497489 tr = redis.multi_exec()
501493 await fut1
502494 result_1 = await fut2
503495 assert result_1 == value
504 await asyncio.sleep(0.050, loop=loop)
496 await asyncio.sleep(0.050)
505497 result_2 = await redis.get(key)
506498 assert result_2 is None
507499
513505 await fut1
514506 result_3 = await fut2
515507 assert result_3 == value
516 await asyncio.sleep(1.050, loop=loop)
508 await asyncio.sleep(1.050)
517509 result_4 = await redis.get(key)
518510 assert result_4 is None
519511
520512
521 @pytest.mark.run_loop
522513 async def test_set_only_if_not_exists(redis):
523514 key, value = b'key:set:only_if_not_exists', b'foo'
524515 await redis.set(
534525 assert result_2 == value
535526
536527
537 @pytest.mark.run_loop
538528 async def test_set_only_if_exists(redis):
539529 key, value = b'key:set:only_if_exists', b'only_if_exists:foo'
540	530	    # ensure that such key does not exist and the value is not set
550540 assert result_2 == b'foo'
551541
552542
553 @pytest.mark.run_loop
554543 async def test_set_wrong_input(redis):
555544 key, value = b'key:set:', b'foo'
556545
562551 await redis.set(key, value, pexpire=7.8)
563552
564553
565 @pytest.mark.run_loop
566554 async def test_setbit(redis):
567555 key = b'key:setbit'
568556 result = await redis.setbit(key, 7, 1)
580568 await redis.setbit(key, 1, 7)
581569
582570
583 @pytest.mark.run_loop
584 async def test_setex(redis, loop):
571 async def test_setex(redis):
585572 key, value = b'key:setex:1', b'Hello'
586573 tr = redis.multi_exec()
587574 fut1 = tr.setex(key, 1, value)
590577 await fut1
591578 test_value = await fut2
592579 assert test_value == value
593 await asyncio.sleep(1.050, loop=loop)
580 await asyncio.sleep(1.050)
594581 test_value = await redis.get(key)
595582 assert test_value is None
596583
601588 await fut1
602589 test_value = await fut2
603590 assert test_value == value
604 await asyncio.sleep(0.50, loop=loop)
591 await asyncio.sleep(0.50)
605592 test_value = await redis.get(key)
606593 assert test_value is None
607594
611598 await redis.setex(key, b'one', value)
612599
613600
614 @pytest.mark.run_loop
615601 async def test_setnx(redis):
616602 key, value = b'key:setnx:1', b'Hello'
617603 # set fresh new value
633619 await redis.setnx(None, value)
634620
635621
636 @pytest.mark.run_loop
637622 async def test_setrange(redis):
638623 key, value = b'key:setrange', b'Hello World'
639624 await add(redis, key, value)
655640 await redis.setrange(key, -1, b'Redis')
656641
657642
658 @pytest.mark.run_loop
659643 async def test_strlen(redis):
660644 key, value = b'key:strlen', b'asyncio'
661645 await add(redis, key, value)
669653 await redis.strlen(None)
670654
671655
672 @pytest.mark.run_loop
673656 async def test_cancel_hang(redis):
674657 exists_coro = redis.execute("EXISTS", b"key:test1")
675658 exists_coro.cancel()
677660 assert not exists_check
678661
679662
680 @pytest.mark.run_loop
681 async def test_set_enc(create_redis, loop, server):
682 redis = await create_redis(
683 server.tcp_address, loop=loop, encoding='utf-8')
663 async def test_set_enc(create_redis, server):
664 redis = await create_redis(server.tcp_address, encoding='utf-8')
684665 TEST_KEY = 'my-key'
685666 ok = await redis.set(TEST_KEY, 'value')
686667 assert ok is True
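``test_mset_with_dict`` above shows that ``mset`` now also accepts a single mapping instead of flat key/value arguments. A minimal sketch, assuming a local server:

import asyncio
import aioredis

async def main():
    redis = await aioredis.create_redis('redis://localhost')
    # Both calling styles store the same kind of data.
    await redis.mset('k1', 'v1', 'k2', 'v2')
    await redis.mset({'k3': 'v3', 'k4': 'v4'})
    print(await redis.mget('k1', 'k2', 'k3', 'k4'))
    redis.close()
    await redis.wait_closed()

asyncio.run(main())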
22 import asyncio
33
44
5 @pytest.mark.run_loop
65 async def test_future_cancellation(create_connection, loop, server):
7 conn = await create_connection(
8 server.tcp_address, loop=loop)
6 conn = await create_connection(server.tcp_address)
97
108 ts = loop.time()
119 fut = conn.execute('BLPOP', 'some-list', 5)
1210 with pytest.raises(asyncio.TimeoutError):
13 await asyncio.wait_for(fut, 1, loop=loop)
11 await asyncio.wait_for(fut, 1)
1412 assert fut.cancelled()
1513
1614 # NOTE: Connection becomes available only after timeout expires
44 from aioredis import ConnectionClosedError
55
66
7 @pytest.mark.run_loop
8 async def test_multi_exec(redis, loop):
7 async def test_multi_exec(redis):
98 await redis.delete('foo', 'bar')
109
1110 tr = redis.multi_exec()
1312 f2 = tr.incr('bar')
1413 res = await tr.execute()
1514 assert res == [1, 1]
16 res2 = await asyncio.gather(f1, f2, loop=loop)
15 res2 = await asyncio.gather(f1, f2)
1716 assert res == res2
1817
1918 tr = redis.multi_exec()
2827 f2 = tr.incrbyfloat('foo', 1.2)
2928 res = await tr.execute()
3029 assert res == [True, 2.2]
31 res2 = await asyncio.gather(f1, f2, loop=loop)
30 res2 = await asyncio.gather(f1, f2)
3231 assert res == res2
3332
3433 tr = redis.multi_exec()
3938 await f1
4039
4140
42 @pytest.mark.run_loop
4341 async def test_empty(redis):
4442 tr = redis.multi_exec()
4543 res = await tr.execute()
4644 assert res == []
4745
4846
49 @pytest.mark.run_loop
5047 async def test_double_execute(redis):
5148 tr = redis.multi_exec()
5249 await tr.execute()
5653 await tr.incr('foo')
5754
5855
59 @pytest.mark.run_loop
6056 async def test_connection_closed(redis):
6157 tr = redis.multi_exec()
6258 fut1 = tr.quit()
8884 (ConnectionClosedError, ConnectionError))
8985
9086
91 @pytest.mark.run_loop
9287 async def test_discard(redis):
9388 await redis.delete('foo')
9489 tr = redis.multi_exec()
107102 assert res == 1
108103
109104
110 @pytest.mark.run_loop
111105 async def test_exec_error(redis):
112106 tr = redis.multi_exec()
113107 fut = tr.connection.execute('INCRBY', 'key', '1.0')
125119 await fut
126120
127121
128 @pytest.mark.run_loop
129122 async def test_command_errors(redis):
130123 tr = redis.multi_exec()
131124 fut = tr.incrby('key', 1.0)
135128 await fut
136129
137130
138 @pytest.mark.run_loop
139131 async def test_several_command_errors(redis):
140132 tr = redis.multi_exec()
141133 fut1 = tr.incrby('key', 1.0)
148140 await fut2
149141
150142
151 @pytest.mark.run_loop
152143 async def test_error_in_connection(redis):
153144 await redis.set('foo', 1)
154145 tr = redis.multi_exec()
161152 await fut2
162153
163154
164 @pytest.mark.run_loop
165155 async def test_watch_unwatch(redis):
166156 res = await redis.watch('key')
167157 assert res is True
179169 assert res is True
180170
181171
182 @pytest.mark.run_loop
183172 async def test_encoding(redis):
184173 res = await redis.set('key', 'value')
185174 assert res is True
200189 assert res == {'foo': 'val1', 'bar': 'val2'}
201190
202191
203 @pytest.mark.run_loop
204 async def test_global_encoding(redis, create_redis, server, loop):
205 redis = await create_redis(
206 server.tcp_address,
207 loop=loop, encoding='utf-8')
192 async def test_global_encoding(redis, create_redis, server):
193 redis = await create_redis(server.tcp_address, encoding='utf-8')
208194 res = await redis.set('key', 'value')
209195 assert res is True
210196 res = await redis.hmset(
214200 tr = redis.multi_exec()
215201 fut1 = tr.get('key')
216202 fut2 = tr.get('key', encoding='utf-8')
217 fut3 = tr.hgetall('hash-key', encoding='utf-8')
203 fut3 = tr.get('key', encoding=None)
204 fut4 = tr.hgetall('hash-key', encoding='utf-8')
218205 await tr.execute()
219206 res = await fut1
220207 assert res == 'value'
221208 res = await fut2
222209 assert res == 'value'
223210 res = await fut3
211 assert res == b'value'
212 res = await fut4
224213 assert res == {'foo': 'val1', 'bar': 'val2'}
225214
226215
227 @pytest.mark.run_loop
228 async def test_transaction__watch_error(redis, create_redis, server, loop):
229 other = await create_redis(
230 server.tcp_address, loop=loop)
216 async def test_transaction__watch_error(redis, create_redis, server):
217 other = await create_redis(server.tcp_address)
231218
232219 ok = await redis.set('foo', 'bar')
233220 assert ok is True
249236 await fut2
250237
251238
252 @pytest.mark.run_loop
253239 async def test_multi_exec_and_pool_release(redis):
254240 # Test the case when pool connection is released before
255241 # `exec` result is received.
270256 assert (await fut1) is None
271257
272258
273 @pytest.mark.run_loop
274259 async def test_multi_exec_db_select(redis):
275260 await redis.set('foo', 'bar')
276261