Import upstream version 1.3.2b1+git20210727.1.2bddbf6
Debian Janitor
2 years ago
+1.3.2b1 (2021-07-11)
+^^^^^^^^^^^^^^^^^^^^
+
+* Fix compatibility with SQLAlchemy >= 1.4 `#870 <https://github.com/aio-libs/aiopg/pull/870>`_
+
+
+1.3.1 (2021-07-08)
+^^^^^^^^^^^^^^^^^^
+
+
+1.3.1b2 (2021-07-06)
+^^^^^^^^^^^^^^^^^^^^
+
+* Suppress "Future exception was never retrieved" `#862 <https://github.com/aio-libs/aiopg/pull/862>`_
+
+
+1.3.1b1 (2021-07-05)
+^^^^^^^^^^^^^^^^^^^^
+
+* Fix ClosableQueue.get on cancellation, close it on Connection.close `#859 <https://github.com/aio-libs/aiopg/pull/859>`_
+
+
+1.3.0 (2021-06-30)
+^^^^^^^^^^^^^^^^^^
+
+
+1.3.0b4 (2021-06-28)
+^^^^^^^^^^^^^^^^^^^^
+
+* Fix "Unable to detect disconnect when using NOTIFY/LISTEN" `#559 <https://github.com/aio-libs/aiopg/pull/559>`_
+
+
+1.3.0b3 (2021-04-03)
+^^^^^^^^^^^^^^^^^^^^
+
+* Reformat using black `#814 <https://github.com/aio-libs/aiopg/pull/814>`_
+
+
+1.3.0b2 (2021-04-02)
+^^^^^^^^^^^^^^^^^^^^
+
+* Type annotations `#813 <https://github.com/aio-libs/aiopg/pull/813>`_
+
+
+1.3.0b1 (2021-03-30)
+^^^^^^^^^^^^^^^^^^^^
+
+* Raise ResourceClosedError if we try to open a cursor on a closed SAConnection `#811 <https://github.com/aio-libs/aiopg/pull/811>`_
+
+
+1.3.0b0 (2021-03-25)
+^^^^^^^^^^^^^^^^^^^^
+
+* Fix compatibility with SA 1.4 for IN statement `#806 <https://github.com/aio-libs/aiopg/pull/806>`_
+
+
+1.2.1 (2021-03-23)
+^^^^^^^^^^^^^^^^^^
+
+* Pop loop in connection init due to backward compatibility `#808 <https://github.com/aio-libs/aiopg/pull/808>`_
+
+
+1.2.0b4 (2021-03-23)
+^^^^^^^^^^^^^^^^^^^^
+
+* Set max supported sqlalchemy version `#805 <https://github.com/aio-libs/aiopg/pull/805>`_
+
+
+1.2.0b3 (2021-03-22)
+^^^^^^^^^^^^^^^^^^^^
+
+* Don't run ROLLBACK when the connection is closed `#778 <https://github.com/aio-libs/aiopg/pull/778>`_
+
+* Multiple cursors support `#801 <https://github.com/aio-libs/aiopg/pull/801>`_
+
+
 1.2.0b2 (2020-12-21)
 ^^^^^^^^^^^^^^^^^^^^
 
 Metadata-Version: 2.1
 Name: aiopg
-Version: 1.2.0b2
+Version: 1.3.2b1
 Summary: Postgres integration with asyncio.
 Home-page: https://aiopg.readthedocs.io
 Author: Andrew Svetlov
 loop.run_until_complete(go())
 
 .. _PostgreSQL: http://www.postgresql.org/
-.. _asyncio: http://docs.python.org/3.4/library/asyncio.html
+.. _asyncio: https://docs.python.org/3/library/asyncio.html
 
 Please use::
 
 
 Changelog
 ---------
+
+1.3.2b1 (2021-07-11)
+^^^^^^^^^^^^^^^^^^^^
+
+* Fix compatibility with SQLAlchemy >= 1.4 `#870 <https://github.com/aio-libs/aiopg/pull/870>`_
+
+
+1.3.1 (2021-07-08)
+^^^^^^^^^^^^^^^^^^
+
+
+1.3.1b2 (2021-07-06)
+^^^^^^^^^^^^^^^^^^^^
+
+* Suppress "Future exception was never retrieved" `#862 <https://github.com/aio-libs/aiopg/pull/862>`_
+
+
+1.3.1b1 (2021-07-05)
+^^^^^^^^^^^^^^^^^^^^
+
+* Fix ClosableQueue.get on cancellation, close it on Connection.close `#859 <https://github.com/aio-libs/aiopg/pull/859>`_
+
+
+1.3.0 (2021-06-30)
+^^^^^^^^^^^^^^^^^^
+
+
+1.3.0b4 (2021-06-28)
+^^^^^^^^^^^^^^^^^^^^
+
+* Fix "Unable to detect disconnect when using NOTIFY/LISTEN" `#559 <https://github.com/aio-libs/aiopg/pull/559>`_
+
+
+1.3.0b3 (2021-04-03)
+^^^^^^^^^^^^^^^^^^^^
+
+* Reformat using black `#814 <https://github.com/aio-libs/aiopg/pull/814>`_
+
+
+1.3.0b2 (2021-04-02)
+^^^^^^^^^^^^^^^^^^^^
+
+* Type annotations `#813 <https://github.com/aio-libs/aiopg/pull/813>`_
+
+
+1.3.0b1 (2021-03-30)
+^^^^^^^^^^^^^^^^^^^^
+
+* Raise ResourceClosedError if we try to open a cursor on a closed SAConnection `#811 <https://github.com/aio-libs/aiopg/pull/811>`_
+
+
+1.3.0b0 (2021-03-25)
+^^^^^^^^^^^^^^^^^^^^
+
+* Fix compatibility with SA 1.4 for IN statement `#806 <https://github.com/aio-libs/aiopg/pull/806>`_
+
+
+1.2.1 (2021-03-23)
+^^^^^^^^^^^^^^^^^^
+
+* Pop loop in connection init due to backward compatibility `#808 <https://github.com/aio-libs/aiopg/pull/808>`_
+
+
+1.2.0b4 (2021-03-23)
+^^^^^^^^^^^^^^^^^^^^
+
+* Set max supported sqlalchemy version `#805 <https://github.com/aio-libs/aiopg/pull/805>`_
+
+
+1.2.0b3 (2021-03-22)
+^^^^^^^^^^^^^^^^^^^^
+
+* Don't run ROLLBACK when the connection is closed `#778 <https://github.com/aio-libs/aiopg/pull/778>`_
+
+* Multiple cursors support `#801 <https://github.com/aio-libs/aiopg/pull/801>`_
+
 
 1.2.0b2 (2020-12-21)
 ^^^^^^^^^^^^^^^^^^^^
 loop.run_until_complete(go())
 
 .. _PostgreSQL: http://www.postgresql.org/
-.. _asyncio: http://docs.python.org/3.4/library/asyncio.html
+.. _asyncio: https://docs.python.org/3/library/asyncio.html
 
 Please use::
 
 import warnings
 from collections import namedtuple
 
-from .connection import TIMEOUT as DEFAULT_TIMEOUT
-from .connection import Connection, connect
-from .cursor import Cursor
+from .connection import (
+    TIMEOUT as DEFAULT_TIMEOUT,
+    Connection,
+    Cursor,
+    DefaultCompiler,
+    IsolationCompiler,
+    IsolationLevel,
+    ReadCommittedCompiler,
+    RepeatableReadCompiler,
+    SerializableCompiler,
+    Transaction,
+    connect,
+)
 from .pool import Pool, create_pool
-from .transaction import IsolationLevel, Transaction
 from .utils import get_running_loop
 
 warnings.filterwarnings(
-    'always', '.*',
+    "always",
+    ".*",
     category=ResourceWarning,
-    module=r'aiopg(\.\w+)+',
-    append=False
+    module=r"aiopg(\.\w+)+",
+    append=False,
 )
 
-__all__ = ('connect', 'create_pool', 'get_running_loop',
-           'Connection', 'Cursor', 'Pool', 'version', 'version_info',
-           'DEFAULT_TIMEOUT', 'IsolationLevel', 'Transaction')
+__all__ = (
+    "connect",
+    "create_pool",
+    "get_running_loop",
+    "Connection",
+    "Cursor",
+    "Pool",
+    "version",
+    "version_info",
+    "DEFAULT_TIMEOUT",
+    "IsolationLevel",
+    "Transaction",
+)
 
-__version__ = '1.2.0b2'
+__version__ = "1.3.2b1"
 
-version = __version__ + ' , Python ' + sys.version
+version = f"{__version__}, Python {sys.version}"
 
-VersionInfo = namedtuple('VersionInfo',
-                         'major minor micro releaselevel serial')
+VersionInfo = namedtuple(
+    "VersionInfo", "major minor micro releaselevel serial"
+)
 
 
-def _parse_version(ver):
+def _parse_version(ver: str) -> VersionInfo:
     RE = (
-        r'^'
-        r'(?P<major>\d+)\.(?P<minor>\d+)\.(?P<micro>\d+)'
-        r'((?P<releaselevel>[a-z]+)(?P<serial>\d+)?)?'
-        r'$'
+        r"^"
+        r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<micro>\d+)"
+        r"((?P<releaselevel>[a-z]+)(?P<serial>\d+)?)?"
+        r"$"
     )
     match = re.match(RE, ver)
+    if not match:
+        raise ImportError(f"Invalid package version {ver}")
     try:
-        major = int(match.group('major'))
-        minor = int(match.group('minor'))
-        micro = int(match.group('micro'))
-        levels = {'rc': 'candidate',
-                  'a': 'alpha',
-                  'b': 'beta',
-                  None: 'final'}
-        releaselevel = levels[match.group('releaselevel')]
-        serial = int(match.group('serial')) if match.group('serial') else 0
+        major = int(match.group("major"))
+        minor = int(match.group("minor"))
+        micro = int(match.group("micro"))
+        levels = {"rc": "candidate", "a": "alpha", "b": "beta", None: "final"}
+        releaselevel = levels[match.group("releaselevel")]
+        serial = int(match.group("serial")) if match.group("serial") else 0
         return VersionInfo(major, minor, micro, releaselevel, serial)
     except Exception as e:
-        raise ImportError("Invalid package version {}".format(ver)) from e
+        raise ImportError(f"Invalid package version {ver}") from e
 
 
 version_info = _parse_version(__version__)
 
 # make pyflakes happy
-(connect, create_pool, Connection, Cursor, Pool, DEFAULT_TIMEOUT,
- IsolationLevel, Transaction, get_running_loop)
+(
+    connect,
+    create_pool,
+    Connection,
+    Cursor,
+    Pool,
+    DEFAULT_TIMEOUT,
+    IsolationLevel,
+    Transaction,
+    get_running_loop,
+    IsolationCompiler,
+    DefaultCompiler,
+    ReadCommittedCompiler,
+    RepeatableReadCompiler,
+    SerializableCompiler,
+)
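The rewritten `_parse_version` above is easy to exercise in isolation. Below is a stdlib-only sketch of the same regex logic, reimplemented (with a hypothetical `parse_version` name) so it can run without aiopg installed:

```python
import re
from collections import namedtuple

VersionInfo = namedtuple("VersionInfo", "major minor micro releaselevel serial")

# Same pattern the diff introduces, reproduced here for illustration.
_RE = (
    r"^"
    r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<micro>\d+)"
    r"((?P<releaselevel>[a-z]+)(?P<serial>\d+)?)?"
    r"$"
)


def parse_version(ver: str) -> VersionInfo:
    match = re.match(_RE, ver)
    if not match:
        raise ImportError(f"Invalid package version {ver}")
    # A missing releaselevel group means a final release.
    levels = {"rc": "candidate", "a": "alpha", "b": "beta", None: "final"}
    return VersionInfo(
        int(match.group("major")),
        int(match.group("minor")),
        int(match.group("micro")),
        levels[match.group("releaselevel")],
        int(match.group("serial")) if match.group("serial") else 0,
    )


print(parse_version("1.3.2b1"))  # beta pre-release, serial 1
print(parse_version("1.3.1"))    # final release, serial 0
```

Note the new explicit `if not match` guard: previously a non-matching string would raise an `AttributeError` inside the `try` block, which was then rewrapped; now it fails fast with a clear `ImportError`.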
+import abc
 import asyncio
 import contextlib
+import datetime
+import enum
 import errno
 import platform
 import select
 import sys
 import traceback
+import uuid
 import warnings
 import weakref
 from collections.abc import Mapping
+from types import TracebackType
+from typing import (
+    Any,
+    Callable,
+    Generator,
+    List,
+    Optional,
+    Sequence,
+    Tuple,
+    Type,
+    cast,
+)
 
 import psycopg2
-from psycopg2 import extras
-from psycopg2.extensions import POLL_ERROR, POLL_OK, POLL_READ, POLL_WRITE
-
-from .cursor import Cursor
-from .utils import _ContextManager, get_running_loop
-
-__all__ = ('connect',)
+import psycopg2.extensions
+import psycopg2.extras
+
+from .log import logger
+from .utils import (
+    ClosableQueue,
+    _ContextManager,
+    create_completed_future,
+    get_running_loop,
+)
 
 TIMEOUT = 60.0
 
 WSAENOTSOCK = 10038
 
 
-def connect(dsn=None, *, timeout=TIMEOUT, enable_json=True,
-            enable_hstore=True, enable_uuid=True, echo=False, **kwargs):
+def connect(
+    dsn: Optional[str] = None,
+    *,
+    timeout: float = TIMEOUT,
+    enable_json: bool = True,
+    enable_hstore: bool = True,
+    enable_uuid: bool = True,
+    echo: bool = False,
+    **kwargs: Any,
+) -> _ContextManager["Connection"]:
     """A factory for connecting to PostgreSQL.
 
     The coroutine accepts all parameters that psycopg2.connect() does
     Returns instantiated Connection object.
 
     """
-    coro = Connection(
-        dsn, timeout, bool(echo),
+    connection = Connection(
+        dsn,
+        timeout,
+        bool(echo),
         enable_hstore=enable_hstore,
         enable_uuid=enable_uuid,
         enable_json=enable_json,
-        **kwargs
+        **kwargs,
     )
-
-    return _ContextManager(coro)
-
-
-def _is_bad_descriptor_error(os_error):
-    if platform.system() == 'Windows':  # pragma: no cover
-        return os_error.winerror == WSAENOTSOCK
-    else:
-        return os_error.errno == errno.EBADF
+    return _ContextManager[Connection](connection, disconnect)  # type: ignore
+
+
+async def disconnect(c: "Connection") -> None:
+    await c.close()
+
80 | def _is_bad_descriptor_error(os_error: OSError) -> bool: | |
81 | if platform.system() == "Windows": # pragma: no cover | |
82 | winerror = int(getattr(os_error, "winerror", 0)) | |
83 | return winerror == WSAENOTSOCK | |
84 | return os_error.errno == errno.EBADF | |
+
+
+class IsolationCompiler(abc.ABC):
+    __slots__ = ("_isolation_level", "_readonly", "_deferrable")
+
+    def __init__(
+        self, isolation_level: Optional[str], readonly: bool, deferrable: bool
+    ):
+        self._isolation_level = isolation_level
+        self._readonly = readonly
+        self._deferrable = deferrable
+
+    @property
+    def name(self) -> str:
+        return self._isolation_level or "Unknown"
+
+    def savepoint(self, unique_id: str) -> str:
+        return f"SAVEPOINT {unique_id}"
+
+    def release_savepoint(self, unique_id: str) -> str:
+        return f"RELEASE SAVEPOINT {unique_id}"
+
+    def rollback_savepoint(self, unique_id: str) -> str:
+        return f"ROLLBACK TO SAVEPOINT {unique_id}"
+
+    def commit(self) -> str:
+        return "COMMIT"
+
+    def rollback(self) -> str:
+        return "ROLLBACK"
+
+    def begin(self) -> str:
+        query = "BEGIN"
+        if self._isolation_level is not None:
+            query += f" ISOLATION LEVEL {self._isolation_level.upper()}"
+
+        if self._readonly:
+            query += " READ ONLY"
+
+        if self._deferrable:
+            query += " DEFERRABLE"
+
+        return query
+
+    def __repr__(self) -> str:
+        return self.name
+
+
+class ReadCommittedCompiler(IsolationCompiler):
+    __slots__ = ()
+
+    def __init__(self, readonly: bool, deferrable: bool):
+        super().__init__("Read committed", readonly, deferrable)
+
+
+class RepeatableReadCompiler(IsolationCompiler):
+    __slots__ = ()
+
+    def __init__(self, readonly: bool, deferrable: bool):
+        super().__init__("Repeatable read", readonly, deferrable)
+
+
+class SerializableCompiler(IsolationCompiler):
+    __slots__ = ()
+
+    def __init__(self, readonly: bool, deferrable: bool):
+        super().__init__("Serializable", readonly, deferrable)
+
+
+class DefaultCompiler(IsolationCompiler):
+    __slots__ = ()
+
+    def __init__(self, readonly: bool, deferrable: bool):
+        super().__init__(None, readonly, deferrable)
+
+    @property
+    def name(self) -> str:
+        return "Default"
+
+
+class IsolationLevel(enum.Enum):
+    serializable = SerializableCompiler
+    repeatable_read = RepeatableReadCompiler
+    read_committed = ReadCommittedCompiler
+    default = DefaultCompiler
+
+    def __call__(self, readonly: bool, deferrable: bool) -> IsolationCompiler:
+        return self.value(readonly, deferrable)  # type: ignore
+
+
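Each `IsolationLevel` member instantiates a compiler whose `begin()` assembles the transaction-opening statement. The string-building logic can be sketched standalone (the `begin_query` helper is mine, mirroring `IsolationCompiler.begin()` above):

```python
from typing import Optional


def begin_query(
    isolation_level: Optional[str], readonly: bool, deferrable: bool
) -> str:
    """Build the BEGIN statement the same way IsolationCompiler.begin() does."""
    query = "BEGIN"
    if isolation_level is not None:
        # e.g. "Repeatable read" -> "ISOLATION LEVEL REPEATABLE READ"
        query += f" ISOLATION LEVEL {isolation_level.upper()}"
    if readonly:
        query += " READ ONLY"
    if deferrable:
        query += " DEFERRABLE"
    return query


print(begin_query(None, False, False))
# BEGIN
print(begin_query("Repeatable read", True, False))
# BEGIN ISOLATION LEVEL REPEATABLE READ READ ONLY
```

`DefaultCompiler` corresponds to the `None` case: a bare `BEGIN`, leaving the server's default isolation level in effect.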
+async def _release_savepoint(t: "Transaction") -> None:
+    await t.release_savepoint()
+
+
+async def _rollback_savepoint(t: "Transaction") -> None:
+    await t.rollback_savepoint()
+
+
+class Transaction:
+    __slots__ = ("_cursor", "_is_begin", "_isolation", "_unique_id")
+
+    def __init__(
+        self,
+        cursor: "Cursor",
+        isolation_level: Callable[[bool, bool], IsolationCompiler],
+        readonly: bool = False,
+        deferrable: bool = False,
+    ):
+        self._cursor = cursor
+        self._is_begin = False
+        self._unique_id: Optional[str] = None
+        self._isolation = isolation_level(readonly, deferrable)
+
+    @property
+    def is_begin(self) -> bool:
+        return self._is_begin
+
+    async def begin(self) -> "Transaction":
+        if self._is_begin:
+            raise psycopg2.ProgrammingError(
+                "You are trying to open a new transaction; use a savepoint"
+            )
+        self._is_begin = True
+        await self._cursor.execute(self._isolation.begin())
+        return self
+
+    async def commit(self) -> None:
+        self._check_commit_rollback()
+        await self._cursor.execute(self._isolation.commit())
+        self._is_begin = False
+
+    async def rollback(self) -> None:
+        self._check_commit_rollback()
+        if not self._cursor.closed:
+            await self._cursor.execute(self._isolation.rollback())
+        self._is_begin = False
+
+    async def rollback_savepoint(self) -> None:
+        self._check_release_rollback()
+        if not self._cursor.closed:
+            await self._cursor.execute(
+                self._isolation.rollback_savepoint(
+                    self._unique_id  # type: ignore
+                )
+            )
+        self._unique_id = None
+
+    async def release_savepoint(self) -> None:
+        self._check_release_rollback()
+        await self._cursor.execute(
+            self._isolation.release_savepoint(self._unique_id)  # type: ignore
+        )
+        self._unique_id = None
+
+    async def savepoint(self) -> "Transaction":
+        self._check_commit_rollback()
+        if self._unique_id is not None:
+            raise psycopg2.ProgrammingError(
+                "You have not released the previous savepoint"
+            )
+
+        self._unique_id = f"s{uuid.uuid1().hex}"
+        await self._cursor.execute(self._isolation.savepoint(self._unique_id))
+
+        return self
+
+    def point(self) -> _ContextManager["Transaction"]:
+        return _ContextManager[Transaction](
+            self.savepoint(),
+            _release_savepoint,
+            _rollback_savepoint,
+        )
+
+    def _check_commit_rollback(self) -> None:
+        if not self._is_begin:
+            raise psycopg2.ProgrammingError(
+                "You are trying to commit a transaction that is not open"
+            )
+
+    def _check_release_rollback(self) -> None:
+        self._check_commit_rollback()
+        if self._unique_id is None:
+            raise psycopg2.ProgrammingError("You have not started a savepoint")
+
+    def __repr__(self) -> str:
+        return (
+            f"<{self.__class__.__name__} "
+            f"transaction={self._isolation} id={id(self):#x}>"
+        )
+
+    def __del__(self) -> None:
+        if self._is_begin:
+            warnings.warn(
+                f"You have not closed transaction {self!r}", ResourceWarning
+            )
+
+        if self._unique_id is not None:
+            warnings.warn(
+                f"You have not closed savepoint {self!r}", ResourceWarning
+            )
+
+    async def __aenter__(self) -> "Transaction":
+        return await self.begin()
+
+    async def __aexit__(
+        self,
+        exc_type: Optional[Type[BaseException]],
+        exc: Optional[BaseException],
+        tb: Optional[TracebackType],
+    ) -> None:
+        if exc_type is not None:
+            await self.rollback()
+        else:
+            await self.commit()
+
+
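`Transaction.__aexit__` above commits on success and rolls back when an exception escapes the block. A toy, aiopg-free reenactment of that flow with a recording cursor (all class names here are illustrative):

```python
import asyncio
from typing import Any, List


class FakeCursor:
    """Stands in for aiopg's Cursor; records SQL instead of executing it."""

    def __init__(self) -> None:
        self.executed: List[str] = []

    async def execute(self, query: str) -> None:
        self.executed.append(query)


class ToyTransaction:
    """Pared-down Transaction: BEGIN on enter, COMMIT or ROLLBACK on exit."""

    def __init__(self, cursor: FakeCursor) -> None:
        self._cursor = cursor

    async def __aenter__(self) -> "ToyTransaction":
        await self._cursor.execute("BEGIN")
        return self

    async def __aexit__(self, exc_type: Any, exc: Any, tb: Any) -> None:
        await self._cursor.execute("ROLLBACK" if exc_type else "COMMIT")


async def main() -> FakeCursor:
    cur = FakeCursor()
    async with ToyTransaction(cur):      # happy path -> COMMIT
        await cur.execute("SELECT 1")
    try:
        async with ToyTransaction(cur):  # error path -> ROLLBACK
            raise RuntimeError("boom")
    except RuntimeError:
        pass
    return cur


cur = asyncio.run(main())
print(cur.executed)
# ['BEGIN', 'SELECT 1', 'COMMIT', 'BEGIN', 'ROLLBACK']
```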
+async def _commit_transaction(t: Transaction) -> None:
+    await t.commit()
+
+
+async def _rollback_transaction(t: Transaction) -> None:
+    await t.rollback()
+
+
+class Cursor:
+    def __init__(
+        self,
+        conn: "Connection",
+        impl: Any,
+        timeout: float,
+        echo: bool,
+        isolation_level: Optional[IsolationLevel] = None,
+    ):
+        self._conn = conn
+        self._impl = impl
+        self._timeout = timeout
+        self._echo = echo
+        self._transaction = Transaction(
+            self, isolation_level or IsolationLevel.default
+        )
+
+    @property
+    def echo(self) -> bool:
+        """Return echo mode status."""
+        return self._echo
+
+    @property
+    def description(self) -> Optional[Sequence[Any]]:
+        """This read-only attribute is a sequence of 7-item sequences.
+
+        Each of these sequences is a collections.namedtuple containing
+        information describing one result column:
+
+        0. name: the name of the column returned.
+        1. type_code: the PostgreSQL OID of the column.
+        2. display_size: the actual length of the column in bytes.
+        3. internal_size: the size in bytes of the column associated to
+           this column on the server.
+        4. precision: total number of significant digits in columns of
+           type NUMERIC. None for other types.
+        5. scale: count of decimal digits in the fractional part in
+           columns of type NUMERIC. None for other types.
+        6. null_ok: always None as not easy to retrieve from the libpq.
+
+        This attribute will be None for operations that do not
+        return rows or if the cursor has not had an operation invoked
+        via the execute() method yet.
+
+        """
+        return self._impl.description  # type: ignore
+
+    def close(self) -> None:
+        """Close the cursor now."""
+        if not self.closed:
+            self._impl.close()
+
+    @property
+    def closed(self) -> bool:
+        """Read-only boolean attribute: specifies if the cursor is closed."""
+        return self._impl.closed  # type: ignore
+
+    @property
+    def connection(self) -> "Connection":
+        """Read-only attribute returning a reference to the `Connection`."""
+        return self._conn
+
+    @property
+    def raw(self) -> Any:
+        """Underlying psycopg cursor object, readonly."""
+        return self._impl
+
+    @property
+    def name(self) -> str:
+        # Not supported
+        return self._impl.name  # type: ignore
+
+    @property
+    def scrollable(self) -> Optional[bool]:
+        # Not supported
+        return self._impl.scrollable  # type: ignore
+
+    @scrollable.setter
+    def scrollable(self, val: bool) -> None:
+        # Not supported
+        self._impl.scrollable = val
+
+    @property
+    def withhold(self) -> bool:
+        # Not supported
+        return self._impl.withhold  # type: ignore
+
+    @withhold.setter
+    def withhold(self, val: bool) -> None:
+        # Not supported
+        self._impl.withhold = val
+
+    async def execute(
+        self,
+        operation: str,
+        parameters: Any = None,
+        *,
+        timeout: Optional[float] = None,
+    ) -> None:
+        """Prepare and execute a database operation (query or command).
+
+        Parameters may be provided as sequence or mapping and will be
+        bound to variables in the operation. Variables are specified
+        either with positional %s or named %({name})s placeholders.
+
+        """
+        if timeout is None:
+            timeout = self._timeout
+        waiter = self._conn._create_waiter("cursor.execute")
+        if self._echo:
+            logger.info(operation)
+            logger.info("%r", parameters)
+        try:
+            self._impl.execute(operation, parameters)
+        except BaseException:
+            self._conn._waiter = None
+            raise
+        try:
+            await self._conn._poll(waiter, timeout)
+        except asyncio.TimeoutError:
+            self._impl.close()
+            raise
+
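`execute()` above hands the query to psycopg2 in asynchronous mode, then awaits a waiter future that the connection's poller resolves once the fd reports the query finished, with `timeout` bounding the wait. A simplified, self-contained illustration of that waiter pattern (the `call_later` callback here stands in for aiopg's fd polling, and the names are mine):

```python
import asyncio


async def main() -> str:
    loop = asyncio.get_running_loop()
    # The "waiter": resolved externally when the backend signals completion.
    waiter: "asyncio.Future[str]" = loop.create_future()

    def on_ready() -> None:
        # In aiopg this happens when poll() reports the socket is ready.
        if not waiter.done():
            waiter.set_result("query done")

    loop.call_later(0.01, on_ready)
    # Bound the wait, mirroring the `timeout` argument of execute();
    # a TimeoutError here is what triggers the cursor close above.
    return await asyncio.wait_for(waiter, timeout=1.0)


print(asyncio.run(main()))  # query done
```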
430 | async def executemany(self, *args: Any, **kwargs: Any) -> None: | |
431 | # Not supported | |
432 | raise psycopg2.ProgrammingError( | |
433 | "executemany cannot be used in asynchronous mode" | |
434 | ) | |
435 | ||
436 | async def callproc( | |
437 | self, | |
438 | procname: str, | |
439 | parameters: Any = None, | |
440 | *, | |
441 | timeout: Optional[float] = None, | |
442 | ) -> None: | |
443 | """Call a stored database procedure with the given name. | |
444 | ||
445 | The sequence of parameters must contain one entry for each | |
446 | argument that the procedure expects. The result of the call is | |
447 | returned as modified copy of the input sequence. Input | |
448 | parameters are left untouched, output and input/output | |
449 | parameters replaced with possibly new values. | |
450 | ||
451 | """ | |
452 | if timeout is None: | |
453 | timeout = self._timeout | |
454 | waiter = self._conn._create_waiter("cursor.callproc") | |
455 | if self._echo: | |
456 | logger.info("CALL %s", procname) | |
457 | logger.info("%r", parameters) | |
458 | try: | |
459 | self._impl.callproc(procname, parameters) | |
460 | except BaseException: | |
461 | self._conn._waiter = None | |
462 | raise | |
463 | else: | |
464 | await self._conn._poll(waiter, timeout) | |
465 | ||
466 | def begin(self) -> _ContextManager[Transaction]: | |
467 | return _ContextManager[Transaction]( | |
468 | self._transaction.begin(), | |
469 | _commit_transaction, | |
470 | _rollback_transaction, | |
471 | ) | |
472 | ||
473 | def begin_nested(self) -> _ContextManager[Transaction]: | |
474 | if self._transaction.is_begin: | |
475 | return self._transaction.point() | |
476 | ||
477 | return _ContextManager[Transaction]( | |
478 | self._transaction.begin(), | |
479 | _commit_transaction, | |
480 | _rollback_transaction, | |
481 | ) | |
482 | ||
483 | def mogrify(self, operation: str, parameters: Any = None) -> str: | |
484 | """Return a query string after arguments binding. | |
485 | ||
486 | The string returned is exactly the one that would be sent to | |
487 | the database running the .execute() method or similar. | |
488 | ||
489 | """ | |
490 | ret = self._impl.mogrify(operation, parameters) | |
491 | assert ( | |
492 | not self._conn.isexecuting() | |
493 | ), "Don't support server side mogrify" | |
494 | return ret # type: ignore | |
495 | ||
496 | async def setinputsizes(self, sizes: int) -> None: | |
497 | """This method is exposed in compliance with the DBAPI. | |
498 | ||
499 | It currently does nothing but it is safe to call it. | |
500 | ||
501 | """ | |
502 | self._impl.setinputsizes(sizes) | |
503 | ||
504 | async def fetchone(self) -> Any: | |
505 | """Fetch the next row of a query result set. | |
506 | ||
507 | Returns a single tuple, or None when no more data is | |
508 | available. | |
509 | ||
510 | """ | |
511 | ret = self._impl.fetchone() | |
512 | assert ( | |
513 | not self._conn.isexecuting() | |
514 | ), "Don't support server side cursors yet" | |
515 | return ret | |
516 | ||
517 | async def fetchmany(self, size: Optional[int] = None) -> List[Any]: | |
518 | """Fetch the next set of rows of a query result. | |
519 | ||
520 | Returns a list of tuples. An empty list is returned when no | |
521 | more rows are available. | |
522 | ||
523 | The number of rows to fetch per call is specified by the | |
524 | parameter. If it is not given, the cursor's .arraysize | |
525 | determines the number of rows to be fetched. The method should | |
526 | try to fetch as many rows as indicated by the size | |
527 | parameter. If this is not possible due to the specified number | |
528 | of rows not being available, fewer rows may be returned. | |
529 | ||
530 | """ | |
531 | if size is None: | |
532 | size = self._impl.arraysize | |
533 | ret = self._impl.fetchmany(size) | |
534 | assert ( | |
535 | not self._conn.isexecuting() | |
536 | ), "Don't support server side cursors yet" | |
537 | return ret # type: ignore | |
538 | ||
539 | async def fetchall(self) -> List[Any]: | |
540 | """Fetch all (remaining) rows of a query result. | |
541 | ||
542 | Returns them as a list of tuples. An empty list is returned | |
543 | if there is no more record to fetch. | |
544 | ||
545 | """ | |
546 | ret = self._impl.fetchall() | |
547 | assert ( | |
548 | not self._conn.isexecuting() | |
549 | ), "Don't support server side cursors yet" | |
550 | return ret # type: ignore | |
551 | ||
552 | async def scroll(self, value: int, mode: str = "relative") -> None: | |
553 | """Scroll to a new position according to mode. | |
554 | ||
555 | If mode is relative (default), value is taken as offset | |
556 | to the current position in the result set, if set to | |
557 | absolute, value states an absolute target position. | |
558 | ||
559 | """ | |
560 | self._impl.scroll(value, mode) | |
561 | assert ( | |
562 | not self._conn.isexecuting() | |
563 | ), "Don't support server side cursors yet" | |
564 | ||
565 | @property | |
566 | def arraysize(self) -> int: | |
567 | """How many rows will be returned by fetchmany() call. | |
568 | ||
569 | This read/write attribute specifies the number of rows to | |
570 | fetch at a time with fetchmany(). It defaults to | |
571 | 1 meaning to fetch a single row at a time. | |
572 | ||
573 | """ | |
574 | return self._impl.arraysize # type: ignore | |
575 | ||
576 | @arraysize.setter | |
577 | def arraysize(self, val: int) -> None: | |
578 | """How many rows will be returned by fetchmany() call. | |
579 | ||
580 | This read/write attribute specifies the number of rows to | |
581 | fetch at a time with fetchmany(). It defaults to | |
582 | 1 meaning to fetch a single row at a time. | |
583 | ||
584 | """ | |
585 | self._impl.arraysize = val | |
586 | ||
587 | @property | |
588 | def itersize(self) -> int: | |
589 | # Not supported | |
590 | return self._impl.itersize # type: ignore | |
591 | ||
592 | @itersize.setter | |
593 | def itersize(self, val: int) -> None: | |
594 | # Not supported | |
595 | self._impl.itersize = val | |
596 | ||
597 | @property | |
598 | def rowcount(self) -> int: | |
599 | """Returns the number of rows that has been produced of affected. | |
600 | ||
601 | This read-only attribute specifies the number of rows that the | |
602 | last :meth:`execute` produced (for Data Query Language | |
603 | statements like SELECT) or affected (for Data Manipulation | |
604 | Language statements like UPDATE or INSERT). | |
605 | ||
606 | The attribute is -1 in case no .execute() has been performed | |
607 | on the cursor or the row count of the last operation if it | |
608 | can't be determined by the interface. | |
609 | ||
610 | """ | |
611 | return self._impl.rowcount # type: ignore | |
612 | ||
613 | @property | |
614 | def rownumber(self) -> int: | |
615 | """Row index. | |
616 | ||
617 | This read-only attribute provides the current 0-based index of the | |
618 | cursor in the result set or ``None`` if the index cannot be | |
619 | determined.""" | |
620 | ||
621 | return self._impl.rownumber # type: ignore | |
622 | ||
623 | @property | |
624 | def lastrowid(self) -> int: | |
625 | """OID of the last inserted row. | |
626 | ||
627 | This read-only attribute provides the OID of the last row | |
628 | inserted by the cursor. If the table wasn't created with OID | |
629 | support or the last operation is not a single record insert, | |
630 | the attribute is set to None. | |
631 | ||
632 | """ | |
633 | return self._impl.lastrowid # type: ignore | |
634 | ||
635 | @property | |
636 | def query(self) -> Optional[str]: | |
637 | """The last executed query string. | |
638 | ||
639 | Read-only attribute containing the body of the last query sent | |
640 | to the backend (including bound arguments) as a bytes | |
641 | string, or None if no query has been executed yet. | |
642 | ||
643 | """ | |
644 | return self._impl.query # type: ignore | |
645 | ||
646 | @property | |
647 | def statusmessage(self) -> str: | |
648 | """The message returned by the last command.""" | |
649 | return self._impl.statusmessage # type: ignore | |
650 | ||
651 | @property | |
652 | def tzinfo_factory(self) -> datetime.tzinfo: | |
653 | """The time zone factory used to handle data types such as | |
654 | `TIMESTAMP WITH TIME ZONE`. | |
655 | """ | |
656 | return self._impl.tzinfo_factory # type: ignore | |
657 | ||
658 | @tzinfo_factory.setter | |
659 | def tzinfo_factory(self, val: datetime.tzinfo) -> None: | |
660 | """The time zone factory used to handle data types such as | |
661 | `TIMESTAMP WITH TIME ZONE`. | |
662 | """ | |
663 | self._impl.tzinfo_factory = val | |
664 | ||
665 | async def nextset(self) -> None: | |
666 | # Not supported | |
667 | self._impl.nextset() # raises psycopg2.NotSupportedError | |
668 | ||
669 | async def setoutputsize( | |
670 | self, size: int, column: Optional[int] = None | |
671 | ) -> None: | |
672 | # Does nothing | |
673 | self._impl.setoutputsize(size, column) | |
674 | ||
675 | async def copy_from(self, *args: Any, **kwargs: Any) -> None: | |
676 | raise psycopg2.ProgrammingError( | |
677 | "copy_from cannot be used in asynchronous mode" | |
678 | ) | |
679 | ||
680 | async def copy_to(self, *args: Any, **kwargs: Any) -> None: | |
681 | raise psycopg2.ProgrammingError( | |
682 | "copy_to cannot be used in asynchronous mode" | |
683 | ) | |
684 | ||
685 | async def copy_expert(self, *args: Any, **kwargs: Any) -> None: | |
686 | raise psycopg2.ProgrammingError( | |
687 | "copy_expert cannot be used in asynchronous mode" | |
688 | ) | |
689 | ||
690 | @property | |
691 | def timeout(self) -> float: | |
692 | """Return default timeout for cursor operations.""" | |
693 | return self._timeout | |
694 | ||
695 | def __aiter__(self) -> "Cursor": | |
696 | return self | |
697 | ||
698 | async def __anext__(self) -> Any: | |
699 | ret = await self.fetchone() | |
700 | if ret is not None: | |
701 | return ret | |
702 | raise StopAsyncIteration | |
703 | ||
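The `__aiter__`/`__anext__` pair above is what drives ``async for`` over a cursor: a ``fetchone()`` result of ``None`` is translated into ``StopAsyncIteration``. A minimal self-contained sketch of the same protocol, using a stub cursor (hypothetical, since the real `Cursor` needs a live PostgreSQL connection):

```python
import asyncio


class _StubCursor:
    """Toy cursor mimicking aiopg's async-iteration protocol."""

    def __init__(self, rows):
        self._rows = list(rows)

    async def fetchone(self):
        # Return the next buffered row, or None when exhausted.
        return self._rows.pop(0) if self._rows else None

    def __aiter__(self):
        return self

    async def __anext__(self):
        ret = await self.fetchone()
        if ret is not None:
            return ret
        # None from fetchone() means "no more rows" -> stop the async for.
        raise StopAsyncIteration


async def _collect():
    return [row async for row in _StubCursor([(1,), (2,)])]


rows = asyncio.run(_collect())
```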
704 | async def __aenter__(self) -> "Cursor": | |
705 | return self | |
706 | ||
707 | async def __aexit__( | |
708 | self, | |
709 | exc_type: Optional[Type[BaseException]], | |
710 | exc: Optional[BaseException], | |
711 | tb: Optional[TracebackType], | |
712 | ) -> None: | |
713 | self.close() | |
714 | ||
715 | def __repr__(self) -> str: | |
716 | return ( | |
717 | f"<" | |
718 | f"{type(self).__module__}::{type(self).__name__} " | |
719 | f"name={self.name}, " | |
720 | f"closed={self.closed}" | |
721 | f">" | |
722 | ) | |
723 | ||
724 | ||
725 | async def _close_cursor(c: Cursor) -> None: | |
726 | c.close() | |
53 | 727 | |
54 | 728 | |
55 | 729 | class Connection: |
63 | 737 | _source_traceback = None |
64 | 738 | |
65 | 739 | def __init__( |
66 | self, dsn, timeout, echo, | |
67 | *, enable_json=True, enable_hstore=True, | |
68 | enable_uuid=True, **kwargs | |
740 | self, | |
741 | dsn: Optional[str], | |
742 | timeout: float, | |
743 | echo: bool = False, | |
744 | enable_json: bool = True, | |
745 | enable_hstore: bool = True, | |
746 | enable_uuid: bool = True, | |
747 | **kwargs: Any, | |
69 | 748 | ): |
70 | 749 | self._enable_json = enable_json |
71 | 750 | self._enable_hstore = enable_hstore |
72 | 751 | self._enable_uuid = enable_uuid |
73 | self._loop = get_running_loop(kwargs.pop('loop', None) is not None) | |
74 | self._waiter = self._loop.create_future() | |
75 | ||
76 | kwargs['async_'] = kwargs.pop('async', True) | |
752 | self._loop = get_running_loop() | |
753 | self._waiter: Optional[ | |
754 | "asyncio.Future[None]" | |
755 | ] = self._loop.create_future() | |
756 | ||
757 | kwargs["async_"] = kwargs.pop("async", True) | |
758 | kwargs.pop("loop", None) # backward compatibility | |
77 | 759 | self._conn = psycopg2.connect(dsn, **kwargs) |
78 | 760 | |
79 | 761 | self._dsn = self._conn.dsn |
80 | 762 | assert self._conn.isexecuting(), "Connection must be in asynchronous mode" |
81 | self._fileno = self._conn.fileno() | |
763 | self._fileno: Optional[int] = self._conn.fileno() | |
82 | 764 | self._timeout = timeout |
83 | 765 | self._last_usage = self._loop.time() |
84 | 766 | self._writing = False |
85 | 767 | self._echo = echo |
86 | self._cursor_instance = None | |
87 | self._notifies = asyncio.Queue() | |
768 | self._notifies = asyncio.Queue() # type: ignore | |
769 | self._notifies_proxy = ClosableQueue(self._notifies, self._loop) | |
88 | 770 | self._weakref = weakref.ref(self) |
89 | self._loop.add_reader(self._fileno, self._ready, self._weakref) | |
771 | self._loop.add_reader( | |
772 | self._fileno, self._ready, self._weakref # type: ignore | |
773 | ) | |
90 | 774 | |
91 | 775 | if self._loop.get_debug(): |
92 | 776 | self._source_traceback = traceback.extract_stack(sys._getframe(1)) |
93 | 777 | |
94 | 778 | @staticmethod |
95 | def _ready(weak_self): | |
96 | self = weak_self() | |
779 | def _ready(weak_self: "weakref.ref[Any]") -> None: | |
780 | self = cast(Connection, weak_self()) | |
97 | 781 | if self is None: |
98 | 782 | return |
99 | 783 | |
127 | 811 | # chain exception otherwise |
128 | 812 | exc2.__cause__ = exc |
129 | 813 | exc = exc2 |
814 | self._notifies_proxy.close(exc) | |
130 | 815 | if waiter is not None and not waiter.done(): |
131 | 816 | waiter.set_exception(exc) |
132 | 817 | else: |
134 | 819 | # connection closed |
135 | 820 | if waiter is not None and not waiter.done(): |
136 | 821 | waiter.set_exception( |
137 | psycopg2.OperationalError("Connection closed")) | |
138 | if state == POLL_OK: | |
822 | psycopg2.OperationalError("Connection closed") | |
823 | ) | |
824 | if state == psycopg2.extensions.POLL_OK: | |
139 | 825 | if self._writing: |
140 | self._loop.remove_writer(self._fileno) | |
826 | self._loop.remove_writer(self._fileno) # type: ignore | |
141 | 827 | self._writing = False |
142 | 828 | if waiter is not None and not waiter.done(): |
143 | 829 | waiter.set_result(None) |
144 | elif state == POLL_READ: | |
830 | elif state == psycopg2.extensions.POLL_READ: | |
145 | 831 | if self._writing: |
146 | self._loop.remove_writer(self._fileno) | |
832 | self._loop.remove_writer(self._fileno) # type: ignore | |
147 | 833 | self._writing = False |
148 | elif state == POLL_WRITE: | |
834 | elif state == psycopg2.extensions.POLL_WRITE: | |
149 | 835 | if not self._writing: |
150 | self._loop.add_writer(self._fileno, self._ready, weak_self) | |
836 | self._loop.add_writer( | |
837 | self._fileno, self._ready, weak_self # type: ignore | |
838 | ) | |
151 | 839 | self._writing = True |
152 | elif state == POLL_ERROR: | |
153 | self._fatal_error("Fatal error on aiopg connection: " | |
154 | "POLL_ERROR from underlying .poll() call") | |
840 | elif state == psycopg2.extensions.POLL_ERROR: | |
841 | self._fatal_error( | |
842 | "Fatal error on aiopg connection: " | |
843 | "POLL_ERROR from underlying .poll() call" | |
844 | ) | |
155 | 845 | else: |
156 | self._fatal_error("Fatal error on aiopg connection: " | |
157 | "unknown answer {} from underlying " | |
158 | ".poll() call" | |
159 | .format(state)) | |
160 | ||
161 | def _fatal_error(self, message): | |
846 | self._fatal_error( | |
847 | f"Fatal error on aiopg connection: " | |
848 | f"unknown answer {state} from underlying " | |
849 | f".poll() call" | |
850 | ) | |
851 | ||
852 | def _fatal_error(self, message: str) -> None: | |
162 | 853 | # Should be called from exception handler only. |
163 | self._loop.call_exception_handler({ | |
164 | 'message': message, | |
165 | 'connection': self, | |
166 | }) | |
854 | self._loop.call_exception_handler( | |
855 | { | |
856 | "message": message, | |
857 | "connection": self, | |
858 | } | |
859 | ) | |
167 | 860 | self.close() |
168 | 861 | if self._waiter and not self._waiter.done(): |
169 | 862 | self._waiter.set_exception(psycopg2.OperationalError(message)) |
170 | 863 | |
171 | def _create_waiter(self, func_name): | |
864 | def _create_waiter(self, func_name: str) -> "asyncio.Future[None]": | |
172 | 865 | if self._waiter is not None: |
173 | raise RuntimeError('%s() called while another coroutine is ' | |
174 | 'already waiting for incoming data' % func_name) | |
866 | raise RuntimeError( | |
867 | f"{func_name}() called while another coroutine " | |
868 | f"is already waiting for incoming data" | |
869 | ) | |
175 | 870 | self._waiter = self._loop.create_future() |
176 | 871 | return self._waiter |
177 | 872 | |
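`_create_waiter` enforces a single-waiter invariant: only one coroutine may wait on the connection at a time, and a second concurrent call fails fast with `RuntimeError`. A hedged, stand-alone sketch of that guard (a stub class, not the real `Connection`):

```python
import asyncio


class _Guard:
    """Toy version of the one-waiter-at-a-time guard."""

    def __init__(self, loop):
        self._loop = loop
        self._waiter = None

    def create_waiter(self, func_name):
        if self._waiter is not None:
            # A waiter is already pending: refuse instead of corrupting state.
            raise RuntimeError(
                f"{func_name}() called while another coroutine "
                f"is already waiting for incoming data"
            )
        self._waiter = self._loop.create_future()
        return self._waiter


async def _demo():
    guard = _Guard(asyncio.get_running_loop())
    guard.create_waiter("execute")
    try:
        guard.create_waiter("execute")  # second concurrent call must fail
    except RuntimeError:
        return True
    return False


raised = asyncio.run(_demo())
```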
178 | async def _poll(self, waiter, timeout): | |
873 | async def _poll( | |
874 | self, waiter: "asyncio.Future[None]", timeout: float | |
875 | ) -> None: | |
179 | 876 | assert waiter is self._waiter, (waiter, self._waiter) |
180 | 877 | self._ready(self._weakref) |
181 | 878 | |
185 | 882 | await asyncio.shield(self.close()) |
186 | 883 | raise exc |
187 | 884 | except psycopg2.extensions.QueryCanceledError as exc: |
188 | self._loop.call_exception_handler({ | |
189 | 'message': exc.pgerror, | |
190 | 'exception': exc, | |
191 | 'future': self._waiter, | |
192 | }) | |
885 | self._loop.call_exception_handler( | |
886 | { | |
887 | "message": exc.pgerror, | |
888 | "exception": exc, | |
889 | "future": self._waiter, | |
890 | } | |
891 | ) | |
193 | 892 | raise asyncio.CancelledError |
194 | 893 | finally: |
195 | 894 | self._waiter = None |
196 | 895 | |
197 | def _isexecuting(self): | |
198 | return self._conn.isexecuting() | |
199 | ||
200 | def cursor(self, name=None, cursor_factory=None, | |
201 | scrollable=None, withhold=False, timeout=None, | |
202 | isolation_level=None): | |
896 | def isexecuting(self) -> bool: | |
897 | return self._conn.isexecuting() # type: ignore | |
898 | ||
899 | def cursor( | |
900 | self, | |
901 | name: Optional[str] = None, | |
902 | cursor_factory: Any = None, | |
903 | scrollable: Optional[bool] = None, | |
904 | withhold: bool = False, | |
905 | timeout: Optional[float] = None, | |
906 | isolation_level: Optional[IsolationLevel] = None, | |
907 | ) -> _ContextManager[Cursor]: | |
203 | 908 | """A coroutine that returns a new cursor object using the connection. |
204 | 909 | |
205 | 910 | *cursor_factory* argument can be used to create non-standard |
209 | 914 | *name*, *scrollable* and *withhold* parameters are not supported by |
210 | 915 | psycopg in asynchronous mode. |
211 | 916 | |
212 | NOTE: as of [TODO] any previously created cursor from this | |
213 | connection will be closed | |
214 | """ | |
917 | """ | |
918 | ||
215 | 919 | self._last_usage = self._loop.time() |
216 | coro = self._cursor(name=name, cursor_factory=cursor_factory, | |
217 | scrollable=scrollable, withhold=withhold, | |
218 | timeout=timeout, isolation_level=isolation_level) | |
219 | return _ContextManager(coro) | |
220 | ||
221 | async def _cursor(self, name=None, cursor_factory=None, | |
222 | scrollable=None, withhold=False, timeout=None, | |
223 | isolation_level=None): | |
224 | ||
225 | if not self.closed_cursor: | |
226 | warnings.warn(('You can only have one cursor per connection. ' | |
227 | 'The cursor for connection will be closed forcibly' | |
228 | ' {!r}.').format(self), ResourceWarning) | |
229 | ||
230 | self.free_cursor() | |
231 | ||
920 | coro = self._cursor( | |
921 | name=name, | |
922 | cursor_factory=cursor_factory, | |
923 | scrollable=scrollable, | |
924 | withhold=withhold, | |
925 | timeout=timeout, | |
926 | isolation_level=isolation_level, | |
927 | ) | |
928 | return _ContextManager[Cursor](coro, _close_cursor) | |
929 | ||
930 | async def _cursor( | |
931 | self, | |
932 | name: Optional[str] = None, | |
933 | cursor_factory: Any = None, | |
934 | scrollable: Optional[bool] = None, | |
935 | withhold: bool = False, | |
936 | timeout: Optional[float] = None, | |
937 | isolation_level: Optional[IsolationLevel] = None, | |
938 | ) -> Cursor: | |
232 | 939 | if timeout is None: |
233 | 940 | timeout = self._timeout |
234 | 941 | |
235 | impl = await self._cursor_impl(name=name, | |
236 | cursor_factory=cursor_factory, | |
237 | scrollable=scrollable, | |
238 | withhold=withhold) | |
239 | self._cursor_instance = Cursor( | |
240 | self, impl, timeout, self._echo, isolation_level | |
241 | ) | |
242 | return self._cursor_instance | |
243 | ||
244 | async def _cursor_impl(self, name=None, cursor_factory=None, | |
245 | scrollable=None, withhold=False): | |
942 | impl = await self._cursor_impl( | |
943 | name=name, | |
944 | cursor_factory=cursor_factory, | |
945 | scrollable=scrollable, | |
946 | withhold=withhold, | |
947 | ) | |
948 | cursor = Cursor(self, impl, timeout, self._echo, isolation_level) | |
949 | return cursor | |
950 | ||
951 | async def _cursor_impl( | |
952 | self, | |
953 | name: Optional[str] = None, | |
954 | cursor_factory: Any = None, | |
955 | scrollable: Optional[bool] = None, | |
956 | withhold: bool = False, | |
957 | ) -> Any: | |
246 | 958 | if cursor_factory is None: |
247 | impl = self._conn.cursor(name=name, | |
248 | scrollable=scrollable, withhold=withhold) | |
959 | impl = self._conn.cursor( | |
960 | name=name, scrollable=scrollable, withhold=withhold | |
961 | ) | |
249 | 962 | else: |
250 | impl = self._conn.cursor(name=name, cursor_factory=cursor_factory, | |
251 | scrollable=scrollable, withhold=withhold) | |
963 | impl = self._conn.cursor( | |
964 | name=name, | |
965 | cursor_factory=cursor_factory, | |
966 | scrollable=scrollable, | |
967 | withhold=withhold, | |
968 | ) | |
252 | 969 | return impl |
253 | 970 | |
254 | def _close(self): | |
971 | def _close(self) -> None: | |
255 | 972 | """Remove the connection from the event_loop and close it.""" |
256 | 973 | # N.B. If connection contains uncommitted transaction the |
257 | 974 | # transaction will be discarded |
262 | 979 | self._loop.remove_writer(self._fileno) |
263 | 980 | |
264 | 981 | self._conn.close() |
265 | self.free_cursor() | |
266 | ||
267 | if self._waiter is not None and not self._waiter.done(): | |
268 | self._waiter.set_exception( | |
269 | psycopg2.OperationalError("Connection closed")) | |
270 | ||
271 | @property | |
272 | def closed_cursor(self): | |
273 | if not self._cursor_instance: | |
274 | return True | |
275 | ||
276 | return bool(self._cursor_instance.closed) | |
277 | ||
278 | def free_cursor(self): | |
279 | if not self.closed_cursor: | |
280 | self._cursor_instance.close() | |
281 | self._cursor_instance = None | |
282 | ||
283 | def close(self): | |
982 | ||
983 | if not self._loop.is_closed(): | |
984 | if self._waiter is not None and not self._waiter.done(): | |
985 | self._waiter.set_exception( | |
986 | psycopg2.OperationalError("Connection closed") | |
987 | ) | |
988 | ||
989 | self._notifies_proxy.close( | |
990 | psycopg2.OperationalError("Connection closed") | |
991 | ) | |
992 | ||
993 | def close(self) -> "asyncio.Future[None]": | |
284 | 994 | self._close() |
285 | ret = self._loop.create_future() | |
286 | ret.set_result(None) | |
287 | return ret | |
288 | ||
289 | @property | |
290 | def closed(self): | |
995 | return create_completed_future(self._loop) | |
996 | ||
997 | @property | |
998 | def closed(self) -> bool: | |
291 | 999 | """Connection status. |
292 | 1000 | |
293 | 1001 | Read-only attribute reporting whether the database connection is |
294 | 1002 | open (False) or closed (True). |
295 | 1003 | |
296 | 1004 | """ |
297 | return self._conn.closed | |
298 | ||
299 | @property | |
300 | def raw(self): | |
1005 | return self._conn.closed # type: ignore | |
1006 | ||
1007 | @property | |
1008 | def raw(self) -> Any: | |
301 | 1009 | """Underlying psycopg connection object, readonly""" |
302 | 1010 | return self._conn |
303 | 1011 | |
304 | async def commit(self): | |
305 | raise psycopg2.ProgrammingError( | |
306 | "commit cannot be used in asynchronous mode") | |
307 | ||
308 | async def rollback(self): | |
309 | raise psycopg2.ProgrammingError( | |
310 | "rollback cannot be used in asynchronous mode") | |
1012 | async def commit(self) -> None: | |
1013 | raise psycopg2.ProgrammingError( | |
1014 | "commit cannot be used in asynchronous mode" | |
1015 | ) | |
1016 | ||
1017 | async def rollback(self) -> None: | |
1018 | raise psycopg2.ProgrammingError( | |
1019 | "rollback cannot be used in asynchronous mode" | |
1020 | ) | |
311 | 1021 | |
312 | 1022 | # TPC |
313 | 1023 | |
314 | async def xid(self, format_id, gtrid, bqual): | |
315 | return self._conn.xid(format_id, gtrid, bqual) | |
316 | ||
317 | async def tpc_begin(self, xid=None): | |
318 | raise psycopg2.ProgrammingError( | |
319 | "tpc_begin cannot be used in asynchronous mode") | |
320 | ||
321 | async def tpc_prepare(self): | |
322 | raise psycopg2.ProgrammingError( | |
323 | "tpc_prepare cannot be used in asynchronous mode") | |
324 | ||
325 | async def tpc_commit(self, xid=None): | |
326 | raise psycopg2.ProgrammingError( | |
327 | "tpc_commit cannot be used in asynchronous mode") | |
328 | ||
329 | async def tpc_rollback(self, xid=None): | |
330 | raise psycopg2.ProgrammingError( | |
331 | "tpc_rollback cannot be used in asynchronous mode") | |
332 | ||
333 | async def tpc_recover(self): | |
334 | raise psycopg2.ProgrammingError( | |
335 | "tpc_recover cannot be used in asynchronous mode") | |
336 | ||
337 | async def cancel(self): | |
338 | raise psycopg2.ProgrammingError( | |
339 | "cancel cannot be used in asynchronous mode") | |
340 | ||
341 | async def reset(self): | |
342 | raise psycopg2.ProgrammingError( | |
343 | "reset cannot be used in asynchronous mode") | |
344 | ||
345 | @property | |
346 | def dsn(self): | |
1024 | async def xid( | |
1025 | self, format_id: int, gtrid: str, bqual: str | |
1026 | ) -> Tuple[int, str, str]: | |
1027 | return self._conn.xid(format_id, gtrid, bqual) # type: ignore | |
1028 | ||
1029 | async def tpc_begin(self, *args: Any, **kwargs: Any) -> None: | |
1030 | raise psycopg2.ProgrammingError( | |
1031 | "tpc_begin cannot be used in asynchronous mode" | |
1032 | ) | |
1033 | ||
1034 | async def tpc_prepare(self) -> None: | |
1035 | raise psycopg2.ProgrammingError( | |
1036 | "tpc_prepare cannot be used in asynchronous mode" | |
1037 | ) | |
1038 | ||
1039 | async def tpc_commit(self, *args: Any, **kwargs: Any) -> None: | |
1040 | raise psycopg2.ProgrammingError( | |
1041 | "tpc_commit cannot be used in asynchronous mode" | |
1042 | ) | |
1043 | ||
1044 | async def tpc_rollback(self, *args: Any, **kwargs: Any) -> None: | |
1045 | raise psycopg2.ProgrammingError( | |
1046 | "tpc_rollback cannot be used in asynchronous mode" | |
1047 | ) | |
1048 | ||
1049 | async def tpc_recover(self) -> None: | |
1050 | raise psycopg2.ProgrammingError( | |
1051 | "tpc_recover cannot be used in asynchronous mode" | |
1052 | ) | |
1053 | ||
1054 | async def cancel(self) -> None: | |
1055 | raise psycopg2.ProgrammingError( | |
1056 | "cancel cannot be used in asynchronous mode" | |
1057 | ) | |
1058 | ||
1059 | async def reset(self) -> None: | |
1060 | raise psycopg2.ProgrammingError( | |
1061 | "reset cannot be used in asynchronous mode" | |
1062 | ) | |
1063 | ||
1064 | @property | |
1065 | def dsn(self) -> Optional[str]: | |
347 | 1066 | """DSN connection string. |
348 | 1067 | |
349 | 1068 | Read-only attribute representing dsn connection string used |
350 | 1069 | for connecting to the PostgreSQL server. |
351 | 1070 | |
352 | 1071 | """ |
353 | return self._dsn | |
354 | ||
355 | async def set_session(self, *, isolation_level=None, readonly=None, | |
356 | deferrable=None, autocommit=None): | |
357 | raise psycopg2.ProgrammingError( | |
358 | "set_session cannot be used in asynchronous mode") | |
359 | ||
360 | @property | |
361 | def autocommit(self): | |
1072 | return self._dsn # type: ignore | |
1073 | ||
1074 | async def set_session(self, *args: Any, **kwargs: Any) -> None: | |
1075 | raise psycopg2.ProgrammingError( | |
1076 | "set_session cannot be used in asynchronous mode" | |
1077 | ) | |
1078 | ||
1079 | @property | |
1080 | def autocommit(self) -> bool: | |
362 | 1081 | """Autocommit status""" |
363 | return self._conn.autocommit | |
1082 | return self._conn.autocommit # type: ignore | |
364 | 1083 | |
365 | 1084 | @autocommit.setter |
366 | def autocommit(self, val): | |
1085 | def autocommit(self, val: bool) -> None: | |
367 | 1086 | """Autocommit status""" |
368 | 1087 | self._conn.autocommit = val |
369 | 1088 | |
370 | 1089 | @property |
371 | def isolation_level(self): | |
1090 | def isolation_level(self) -> int: | |
372 | 1091 | """Transaction isolation level. |
373 | 1092 | |
374 | 1093 | The only allowed value is ISOLATION_LEVEL_READ_COMMITTED. |
375 | 1094 | |
376 | 1095 | """ |
377 | return self._conn.isolation_level | |
378 | ||
379 | async def set_isolation_level(self, val): | |
1096 | return self._conn.isolation_level # type: ignore | |
1097 | ||
1098 | async def set_isolation_level(self, val: int) -> None: | |
380 | 1099 | """Transaction isolation level. |
381 | 1100 | |
382 | 1101 | The only allowed value is ISOLATION_LEVEL_READ_COMMITTED. |
385 | 1104 | self._conn.set_isolation_level(val) |
386 | 1105 | |
387 | 1106 | @property |
388 | def encoding(self): | |
1107 | def encoding(self) -> str: | |
389 | 1108 | """Client encoding for SQL operations.""" |
390 | return self._conn.encoding | |
391 | ||
392 | async def set_client_encoding(self, val): | |
1109 | return self._conn.encoding # type: ignore | |
1110 | ||
1111 | async def set_client_encoding(self, val: str) -> None: | |
393 | 1112 | self._conn.set_client_encoding(val) |
394 | 1113 | |
395 | 1114 | @property |
396 | def notices(self): | |
1115 | def notices(self) -> List[str]: | |
397 | 1116 | """A list of all db messages sent to the client during the session.""" |
398 | return self._conn.notices | |
399 | ||
400 | @property | |
401 | def cursor_factory(self): | |
1117 | return self._conn.notices # type: ignore | |
1118 | ||
1119 | @property | |
1120 | def cursor_factory(self) -> Any: | |
402 | 1121 | """The default cursor factory used by .cursor().""" |
403 | 1122 | return self._conn.cursor_factory |
404 | 1123 | |
405 | async def get_backend_pid(self): | |
1124 | async def get_backend_pid(self) -> int: | |
406 | 1125 | """Returns the PID of the backend server process.""" |
407 | return self._conn.get_backend_pid() | |
408 | ||
409 | async def get_parameter_status(self, parameter): | |
1126 | return self._conn.get_backend_pid() # type: ignore | |
1127 | ||
1128 | async def get_parameter_status(self, parameter: str) -> Optional[str]: | |
410 | 1129 | """Look up a current parameter setting of the server.""" |
411 | return self._conn.get_parameter_status(parameter) | |
412 | ||
413 | async def get_transaction_status(self): | |
1130 | return self._conn.get_parameter_status(parameter) # type: ignore | |
1131 | ||
1132 | async def get_transaction_status(self) -> int: | |
414 | 1133 | """Return the current session transaction status as an integer.""" |
415 | return self._conn.get_transaction_status() | |
416 | ||
417 | @property | |
418 | def protocol_version(self): | |
1134 | return self._conn.get_transaction_status() # type: ignore | |
1135 | ||
1136 | @property | |
1137 | def protocol_version(self) -> int: | |
419 | 1138 | """A read-only integer representing protocol being used.""" |
420 | return self._conn.protocol_version | |
421 | ||
422 | @property | |
423 | def server_version(self): | |
1139 | return self._conn.protocol_version # type: ignore | |
1140 | ||
1141 | @property | |
1142 | def server_version(self) -> int: | |
424 | 1143 | """A read-only integer representing the backend version.""" |
425 | return self._conn.server_version | |
426 | ||
427 | @property | |
428 | def status(self): | |
1144 | return self._conn.server_version # type: ignore | |
1145 | ||
1146 | @property | |
1147 | def status(self) -> int: | |
429 | 1148 | """A read-only integer representing the status of the connection.""" |
430 | return self._conn.status | |
431 | ||
432 | async def lobject(self, *args, **kwargs): | |
433 | raise psycopg2.ProgrammingError( | |
434 | "lobject cannot be used in asynchronous mode") | |
435 | ||
436 | @property | |
437 | def timeout(self): | |
1149 | return self._conn.status # type: ignore | |
1150 | ||
1151 | async def lobject(self, *args: Any, **kwargs: Any) -> None: | |
1152 | raise psycopg2.ProgrammingError( | |
1153 | "lobject cannot be used in asynchronous mode" | |
1154 | ) | |
1155 | ||
1156 | @property | |
1157 | def timeout(self) -> float: | |
438 | 1158 | """Return default timeout for connection operations.""" |
439 | 1159 | return self._timeout |
440 | 1160 | |
441 | 1161 | @property |
442 | def last_usage(self): | |
1162 | def last_usage(self) -> float: | |
443 | 1163 | """Return the loop time when the connection was last used.""" |
444 | 1164 | return self._last_usage |
445 | 1165 | |
446 | 1166 | @property |
447 | def echo(self): | |
1167 | def echo(self) -> bool: | |
448 | 1168 | """Return echo mode status.""" |
449 | 1169 | return self._echo |
450 | 1170 | |
451 | def __repr__(self): | |
452 | msg = ( | |
453 | '<' | |
454 | '{module_name}::{class_name} ' | |
455 | 'isexecuting={isexecuting}, ' | |
456 | 'closed={closed}, ' | |
457 | 'echo={echo}, ' | |
458 | 'cursor={cursor}' | |
459 | '>' | |
460 | ) | |
461 | return msg.format( | |
462 | module_name=type(self).__module__, | |
463 | class_name=type(self).__name__, | |
464 | echo=self.echo, | |
465 | isexecuting=self._isexecuting(), | |
466 | closed=bool(self.closed), | |
467 | cursor=repr(self._cursor_instance) | |
468 | ) | |
469 | ||
470 | def __del__(self): | |
1171 | def __repr__(self) -> str: | |
1172 | return ( | |
1173 | f"<" | |
1174 | f"{type(self).__module__}::{type(self).__name__} " | |
1175 | f"isexecuting={self.isexecuting()}, " | |
1176 | f"closed={self.closed}, " | |
1177 | f"echo={self.echo}" | |
1178 | f">" | |
1179 | ) | |
1180 | ||
1181 | def __del__(self) -> None: | |
471 | 1182 | try: |
472 | 1183 | _conn = self._conn |
473 | 1184 | except AttributeError: |
474 | 1185 | return |
475 | 1186 | if _conn is not None and not _conn.closed: |
476 | 1187 | self.close() |
477 | warnings.warn("Unclosed connection {!r}".format(self), | |
478 | ResourceWarning) | |
479 | ||
480 | context = {'connection': self, | |
481 | 'message': 'Unclosed connection'} | |
1188 | warnings.warn(f"Unclosed connection {self!r}", ResourceWarning) | |
1189 | ||
1190 | context = {"connection": self, "message": "Unclosed connection"} | |
482 | 1191 | if self._source_traceback is not None: |
483 | context['source_traceback'] = self._source_traceback | |
1192 | context["source_traceback"] = self._source_traceback | |
484 | 1193 | self._loop.call_exception_handler(context) |
485 | 1194 | |
486 | 1195 | @property |
487 | def notifies(self): | |
488 | """Return notification queue.""" | |
489 | return self._notifies | |
490 | ||
491 | async def _get_oids(self): | |
492 | cur = await self.cursor() | |
1196 | def notifies(self) -> ClosableQueue: | |
1197 | """Return notification queue (an asyncio.Queue-like object).""" | |
1198 | return self._notifies_proxy | |
1199 | ||
1200 | async def _get_oids(self) -> Tuple[Any, Any]: | |
1201 | cursor = await self.cursor() | |
493 | 1202 | rv0, rv1 = [], [] |
494 | 1203 | try: |
495 | await cur.execute( | |
1204 | await cursor.execute( | |
496 | 1205 | "SELECT t.oid, typarray " |
497 | 1206 | "FROM pg_type t JOIN pg_namespace ns ON typnamespace = ns.oid " |
498 | 1207 | "WHERE typname = 'hstore';" |
499 | 1208 | ) |
500 | 1209 | |
501 | async for oids in cur: | |
1210 | async for oids in cursor: | |
502 | 1211 | if isinstance(oids, Mapping): |
503 | rv0.append(oids['oid']) | |
504 | rv1.append(oids['typarray']) | |
1212 | rv0.append(oids["oid"]) | |
1213 | rv1.append(oids["typarray"]) | |
505 | 1214 | else: |
506 | 1215 | rv0.append(oids[0]) |
507 | 1216 | rv1.append(oids[1]) |
508 | 1217 | finally: |
509 | cur.close() | |
1218 | cursor.close() | |
510 | 1219 | |
511 | 1220 | return tuple(rv0), tuple(rv1) |
512 | 1221 | |
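`_get_oids` accepts both mapping-style rows (e.g. from a dict-like cursor factory) and plain tuples, splitting them into parallel `(oid, typarray)` tuples. The row-splitting logic in isolation, with made-up OID values standing in for a live `pg_type` query:

```python
from collections.abc import Mapping


def split_oids(rows):
    """Split rows of (oid, typarray) pairs into two parallel tuples."""
    rv0, rv1 = [], []
    for oids in rows:
        if isinstance(oids, Mapping):
            # Dict-like row, e.g. produced by a DictCursor-style factory.
            rv0.append(oids["oid"])
            rv1.append(oids["typarray"])
        else:
            # Plain tuple row from the default cursor factory.
            rv0.append(oids[0])
            rv1.append(oids[1])
    return tuple(rv0), tuple(rv1)


# Hypothetical OID values for illustration only.
oids, array_oids = split_oids([{"oid": 16384, "typarray": 16389}, (16400, 16405)])
```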
513 | async def _connect(self): | |
1222 | async def _connect(self) -> "Connection": | |
514 | 1223 | try: |
515 | await self._poll(self._waiter, self._timeout) | |
516 | except Exception: | |
517 | self.close() | |
1224 | await self._poll(self._waiter, self._timeout) # type: ignore | |
1225 | except BaseException: | |
1226 | await asyncio.shield(self.close()) | |
518 | 1227 | raise |
519 | 1228 | if self._enable_json: |
520 | extras.register_default_json(self._conn) | |
1229 | psycopg2.extras.register_default_json(self._conn) | |
521 | 1230 | if self._enable_uuid: |
522 | extras.register_uuid(conn_or_curs=self._conn) | |
1231 | psycopg2.extras.register_uuid(conn_or_curs=self._conn) | |
523 | 1232 | if self._enable_hstore: |
524 | oids = await self._get_oids() | |
525 | if oids is not None: | |
526 | oid, array_oid = oids | |
527 | extras.register_hstore( | |
528 | self._conn, | |
529 | oid=oid, | |
530 | array_oid=array_oid | |
531 | ) | |
1233 | oid, array_oid = await self._get_oids() | |
1234 | psycopg2.extras.register_hstore( | |
1235 | self._conn, oid=oid, array_oid=array_oid | |
1236 | ) | |
532 | 1237 | |
533 | 1238 | return self |
534 | 1239 | |
535 | def __await__(self): | |
1240 | def __await__(self) -> Generator[Any, None, "Connection"]: | |
536 | 1241 | return self._connect().__await__() |
537 | 1242 | |
538 | async def __aenter__(self): | |
1243 | async def __aenter__(self) -> "Connection": | |
539 | 1244 | return self |
540 | 1245 | |
541 | async def __aexit__(self, exc_type, exc_val, exc_tb): | |
542 | self.close() | |
1246 | async def __aexit__( | |
1247 | self, | |
1248 | exc_type: Optional[Type[BaseException]], | |
1249 | exc: Optional[BaseException], | |
1250 | tb: Optional[TracebackType], | |
1251 | ) -> None: | |
1252 | await self.close() |
0 | import asyncio | |
1 | ||
2 | import psycopg2 | |
3 | ||
4 | from .log import logger | |
5 | from .transaction import IsolationLevel, Transaction | |
6 | from .utils import _TransactionBeginContextManager | |
7 | ||
8 | ||
9 | class Cursor: | |
10 | def __init__(self, conn, impl, timeout, echo, isolation_level): | |
11 | self._conn = conn | |
12 | self._impl = impl | |
13 | self._timeout = timeout | |
14 | self._echo = echo | |
15 | self._transaction = Transaction( | |
16 | self, isolation_level or IsolationLevel.default | |
17 | ) | |
18 | ||
19 | @property | |
20 | def echo(self): | |
21 | """Return echo mode status.""" | |
22 | return self._echo | |
23 | ||
24 | @property | |
25 | def description(self): | |
26 | """This read-only attribute is a sequence of 7-item sequences. | |
27 | ||
28 | Each of these sequences is a collections.namedtuple containing | |
29 | information describing one result column: | |
30 | ||
31 | 0. name: the name of the column returned. | |
32 | 1. type_code: the PostgreSQL OID of the column. | |
33 | 2. display_size: the actual length of the column in bytes. | |
34 | 3. internal_size: the size in bytes of the column associated to | |
35 | this column on the server. | |
36 | 4. precision: total number of significant digits in columns of | |
37 | type NUMERIC. None for other types. | |
38 | 5. scale: count of decimal digits in the fractional part in | |
39 | columns of type NUMERIC. None for other types. | |
40 | 6. null_ok: always None, as it is not easy to retrieve from libpq. | |
41 | ||
42 | This attribute will be None for operations that do not | |
43 | return rows or if the cursor has not had an operation invoked | |
44 | via the execute() method yet. | |
45 | ||
46 | """ | |
47 | return self._impl.description | |
48 | ||
49 | def close(self): | |
50 | """Close the cursor now.""" | |
51 | if not self.closed: | |
52 | self._impl.close() | |
53 | ||
54 | @property | |
55 | def closed(self): | |
56 | """Read-only boolean attribute: specifies if the cursor is closed.""" | |
57 | return self._impl.closed | |
58 | ||
59 | @property | |
60 | def connection(self): | |
61 | """Read-only attribute returning a reference to the `Connection`.""" | |
62 | return self._conn | |
63 | ||
64 | @property | |
65 | def raw(self): | |
66 | """Underlying psycopg cursor object, readonly""" | |
67 | return self._impl | |
68 | ||
69 | @property | |
70 | def name(self): | |
71 | # Not supported | |
72 | return self._impl.name | |
73 | ||
74 | @property | |
75 | def scrollable(self): | |
76 | # Not supported | |
77 | return self._impl.scrollable | |
78 | ||
79 | @scrollable.setter | |
80 | def scrollable(self, val): | |
81 | # Not supported | |
82 | self._impl.scrollable = val | |
83 | ||
84 | @property | |
85 | def withhold(self): | |
86 | # Not supported | |
87 | return self._impl.withhold | |
88 | ||
89 | @withhold.setter | |
90 | def withhold(self, val): | |
91 | # Not supported | |
92 | self._impl.withhold = val | |
93 | ||
94 | async def execute(self, operation, parameters=None, *, timeout=None): | |
95 | """Prepare and execute a database operation (query or command). | |
96 | ||
97 | Parameters may be provided as sequence or mapping and will be | |
98 | bound to variables in the operation. Variables are specified | |
99 | either with positional %s or named %({name})s placeholders. | |
100 | ||
101 | """ | |
102 | if timeout is None: | |
103 | timeout = self._timeout | |
104 | waiter = self._conn._create_waiter('cursor.execute') | |
105 | if self._echo: | |
106 | logger.info(operation) | |
107 | logger.info("%r", parameters) | |
108 | try: | |
109 | self._impl.execute(operation, parameters) | |
110 | except BaseException: | |
111 | self._conn._waiter = None | |
112 | raise | |
113 | try: | |
114 | await self._conn._poll(waiter, timeout) | |
115 | except asyncio.TimeoutError: | |
116 | self._impl.close() | |
117 | raise | |
118 | ||
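The two placeholder styles named in the docstring above can be illustrated without a database. This is a sketch only: the `users` table and `id` column are hypothetical, and real queries must let the driver do the binding; Python's `%` operator is used here purely to preview the named form, never to build SQL.

```python
# Positional style: one %s per parameter, bound from a sequence.
query_positional = "SELECT name FROM users WHERE id = %s"

# Named style: %(name)s placeholders, bound from a mapping.
query_named = "SELECT name FROM users WHERE id = %(id)s"

# Python's own % operator renders the named form the same way for
# illustration (minus the driver's escaping) -- never do this for real SQL.
preview = query_named % {"id": 7}
print(preview)  # SELECT name FROM users WHERE id = 7
```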
119 | async def executemany(self, operation, seq_of_parameters): | |
120 | # Not supported | |
121 | raise psycopg2.ProgrammingError( | |
122 | "executemany cannot be used in asynchronous mode") | |
123 | ||
124 | async def callproc(self, procname, parameters=None, *, timeout=None): | |
125 | """Call a stored database procedure with the given name. | |
126 | ||
127 | The sequence of parameters must contain one entry for each | |
128 | argument that the procedure expects. The result of the call is | |
129 | returned as a modified copy of the input sequence. Input | |
130 | parameters are left untouched; output and input/output | |
131 | parameters are replaced with possibly new values. | |
132 | ||
133 | """ | |
134 | if timeout is None: | |
135 | timeout = self._timeout | |
136 | waiter = self._conn._create_waiter('cursor.callproc') | |
137 | if self._echo: | |
138 | logger.info("CALL %s", procname) | |
139 | logger.info("%r", parameters) | |
140 | try: | |
141 | self._impl.callproc(procname, parameters) | |
142 | except BaseException: | |
143 | self._conn._waiter = None | |
144 | raise | |
145 | else: | |
146 | await self._conn._poll(waiter, timeout) | |
147 | ||
148 | def begin(self): | |
149 | return _TransactionBeginContextManager(self._transaction.begin()) | |
150 | ||
151 | def begin_nested(self): | |
152 | if not self._transaction.is_begin: | |
153 | return _TransactionBeginContextManager( | |
154 | self._transaction.begin()) | |
155 | else: | |
156 | return self._transaction.point() | |
157 | ||
158 | def mogrify(self, operation, parameters=None): | |
159 | """Return a query string after arguments binding. | |
160 | ||
161 | The string returned is exactly the one that would be sent to | |
162 | the database by running the .execute() method or similar. | |
163 | ||
164 | """ | |
165 | ret = self._impl.mogrify(operation, parameters) | |
166 | assert not self._conn._isexecuting(), ("Don't support server side " | |
167 | "mogrify") | |
168 | return ret | |
169 | ||
170 | async def setinputsizes(self, sizes): | |
171 | """This method is exposed in compliance with the DBAPI. | |
172 | ||
173 | It currently does nothing, but it is safe to call it. | |
174 | ||
175 | """ | |
176 | self._impl.setinputsizes(sizes) | |
177 | ||
178 | async def fetchone(self): | |
179 | """Fetch the next row of a query result set. | |
180 | ||
181 | Returns a single tuple, or None when no more data is | |
182 | available. | |
183 | ||
184 | """ | |
185 | ret = self._impl.fetchone() | |
186 | assert not self._conn._isexecuting(), ("Don't support server side " | |
187 | "cursors yet") | |
188 | return ret | |
189 | ||
190 | async def fetchmany(self, size=None): | |
191 | """Fetch the next set of rows of a query result. | |
192 | ||
193 | Returns a list of tuples. An empty list is returned when no | |
194 | more rows are available. | |
195 | ||
196 | The number of rows to fetch per call is specified by the | |
197 | parameter. If it is not given, the cursor's .arraysize | |
198 | determines the number of rows to be fetched. The method should | |
199 | try to fetch as many rows as indicated by the size | |
200 | parameter. If this is not possible due to the specified number | |
201 | of rows not being available, fewer rows may be returned. | |
202 | ||
203 | """ | |
204 | if size is None: | |
205 | size = self._impl.arraysize | |
206 | ret = self._impl.fetchmany(size) | |
207 | assert not self._conn._isexecuting(), ("Don't support server side " | |
208 | "cursors yet") | |
209 | return ret | |
210 | ||
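The batching contract described in the docstring above (honor `size`, fall back to `arraysize`, return fewer rows near the end of the set, then an empty list) can be sketched with a toy in-memory cursor; `FakeCursor` is an illustration of the DBAPI semantics, not part of aiopg.

```python
from collections import deque

class FakeCursor:
    """Toy stand-in mimicking fetchmany() batching; not aiopg's Cursor."""

    def __init__(self, rows, arraysize=2):
        self._rows = deque(rows)
        self.arraysize = arraysize  # default batch size, as in the DBAPI

    def fetchmany(self, size=None):
        size = self.arraysize if size is None else size
        batch = []
        # Take up to `size` rows; stop early if the result set runs out.
        while self._rows and len(batch) < size:
            batch.append(self._rows.popleft())
        return batch

cur = FakeCursor([(1,), (2,), (3,)])
print(cur.fetchmany())  # [(1,), (2,)] -- a full batch of arraysize rows
print(cur.fetchmany())  # [(3,)]      -- fewer rows than requested remain
print(cur.fetchmany())  # []          -- exhausted: empty list, not None
```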
211 | async def fetchall(self): | |
212 | """Fetch all (remaining) rows of a query result. | |
213 | ||
214 | Returns them as a list of tuples. An empty list is returned | |
215 | if there are no more records to fetch. | |
216 | ||
217 | """ | |
218 | ret = self._impl.fetchall() | |
219 | assert not self._conn._isexecuting(), ("Don't support server side " | |
220 | "cursors yet") | |
221 | return ret | |
222 | ||
223 | async def scroll(self, value, mode="relative"): | |
224 | """Scroll to a new position according to mode. | |
225 | ||
226 | If mode is relative (default), value is taken as offset | |
227 | to the current position in the result set, if set to | |
228 | absolute, value states an absolute target position. | |
229 | ||
230 | """ | |
231 | ret = self._impl.scroll(value, mode) | |
232 | assert not self._conn._isexecuting(), ("Don't support server side " | |
233 | "cursors yet") | |
234 | return ret | |
235 | ||
236 | @property | |
237 | def arraysize(self): | |
238 | """How many rows will be returned by a fetchmany() call. | |
239 | ||
240 | This read/write attribute specifies the number of rows to | |
241 | fetch at a time with fetchmany(). It defaults to 1, | |
242 | meaning a single row is fetched at a time. | |
243 | ||
244 | """ | |
245 | return self._impl.arraysize | |
246 | ||
247 | @arraysize.setter | |
248 | def arraysize(self, val): | |
249 | """How many rows will be returned by a fetchmany() call. | |
250 | ||
251 | This read/write attribute specifies the number of rows to | |
252 | fetch at a time with fetchmany(). It defaults to 1, | |
253 | meaning a single row is fetched at a time. | |
254 | ||
255 | """ | |
256 | self._impl.arraysize = val | |
257 | ||
258 | @property | |
259 | def itersize(self): | |
260 | # Not supported | |
261 | return self._impl.itersize | |
262 | ||
263 | @itersize.setter | |
264 | def itersize(self, val): | |
265 | # Not supported | |
266 | self._impl.itersize = val | |
267 | ||
268 | @property | |
269 | def rowcount(self): | |
270 | """Returns the number of rows that have been produced or affected. | |
271 | ||
272 | This read-only attribute specifies the number of rows that the | |
273 | last :meth:`execute` produced (for Data Query Language | |
274 | statements like SELECT) or affected (for Data Manipulation | |
275 | Language statements like UPDATE or INSERT). | |
276 | ||
277 | The attribute is -1 if no .execute() has been performed | |
278 | on the cursor, or if the row count of the last operation | |
279 | cannot be determined by the interface. | |
280 | ||
281 | """ | |
282 | return self._impl.rowcount | |
283 | ||
284 | @property | |
285 | def rownumber(self): | |
286 | """Row index. | |
287 | ||
288 | This read-only attribute provides the current 0-based index of the | |
289 | cursor in the result set or ``None`` if the index cannot be | |
290 | determined.""" | |
291 | ||
292 | return self._impl.rownumber | |
293 | ||
294 | @property | |
295 | def lastrowid(self): | |
296 | """OID of the last inserted row. | |
297 | ||
298 | This read-only attribute provides the OID of the last row | |
299 | inserted by the cursor. If the table wasn't created with OID | |
300 | support or the last operation is not a single record insert, | |
301 | the attribute is set to None. | |
302 | ||
303 | """ | |
304 | return self._impl.lastrowid | |
305 | ||
306 | @property | |
307 | def query(self): | |
308 | """The last executed query string. | |
309 | ||
310 | Read-only attribute containing the body of the last query sent | |
311 | to the backend (including bound arguments) as a bytes | |
312 | string. None if no query has been executed yet. | |
313 | ||
314 | """ | |
315 | return self._impl.query | |
316 | ||
317 | @property | |
318 | def statusmessage(self): | |
319 | """The message returned by the last command.""" | |
320 | ||
321 | return self._impl.statusmessage | |
322 | ||
323 | # async def cast(self, old, s): | |
324 | # ... | |
325 | ||
326 | @property | |
327 | def tzinfo_factory(self): | |
328 | """The time zone factory used to handle data types such as | |
329 | `TIMESTAMP WITH TIME ZONE`. | |
330 | """ | |
331 | return self._impl.tzinfo_factory | |
332 | ||
333 | @tzinfo_factory.setter | |
334 | def tzinfo_factory(self, val): | |
335 | """The time zone factory used to handle data types such as | |
336 | `TIMESTAMP WITH TIME ZONE`. | |
337 | """ | |
338 | self._impl.tzinfo_factory = val | |
339 | ||
340 | async def nextset(self): | |
341 | # Not supported | |
342 | self._impl.nextset() # raises psycopg2.NotSupportedError | |
343 | ||
344 | async def setoutputsize(self, size, column=None): | |
345 | # Does nothing | |
346 | self._impl.setoutputsize(size, column) | |
347 | ||
348 | async def copy_from(self, file, table, sep='\t', null='\\N', size=8192, | |
349 | columns=None): | |
350 | raise psycopg2.ProgrammingError( | |
351 | "copy_from cannot be used in asynchronous mode") | |
352 | ||
353 | async def copy_to(self, file, table, sep='\t', null='\\N', columns=None): | |
354 | raise psycopg2.ProgrammingError( | |
355 | "copy_to cannot be used in asynchronous mode") | |
356 | ||
357 | async def copy_expert(self, sql, file, size=8192): | |
358 | raise psycopg2.ProgrammingError( | |
359 | "copy_expert cannot be used in asynchronous mode") | |
360 | ||
361 | @property | |
362 | def timeout(self): | |
363 | """Return default timeout for cursor operations.""" | |
364 | return self._timeout | |
365 | ||
366 | def __aiter__(self): | |
367 | return self | |
368 | ||
369 | async def __anext__(self): | |
370 | ret = await self.fetchone() | |
371 | if ret is not None: | |
372 | return ret | |
373 | else: | |
374 | raise StopAsyncIteration | |
375 | ||
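The `__anext__` above implements the standard fetchone-until-None loop of the async-iteration protocol. A self-contained sketch of the same pattern (the `RowIterator` class and its data are illustrative, not aiopg API):

```python
import asyncio

class RowIterator:
    """Toy async iterator following the same fetchone()-until-None
    pattern as the cursor above; illustrative, not aiopg API."""

    def __init__(self, rows):
        self._rows = list(rows)

    async def fetchone(self):
        # Pop the next row, or return None when exhausted.
        return self._rows.pop(0) if self._rows else None

    def __aiter__(self):
        return self

    async def __anext__(self):
        row = await self.fetchone()
        if row is None:
            raise StopAsyncIteration
        return row

async def collect():
    return [row async for row in RowIterator([(1,), (2,)])]

print(asyncio.run(collect()))  # [(1,), (2,)]
```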
376 | async def __aenter__(self): | |
377 | return self | |
378 | ||
379 | async def __aexit__(self, exc_type, exc_val, exc_tb): | |
380 | self.close() | |
381 | return | |
382 | ||
383 | def __repr__(self): | |
384 | msg = ( | |
385 | '<' | |
386 | '{module_name}::{class_name} ' | |
387 | 'name={name}, ' | |
388 | 'closed={closed}' | |
389 | '>' | |
390 | ) | |
391 | return msg.format( | |
392 | module_name=type(self).__module__, | |
393 | class_name=type(self).__name__, | |
394 | name=self.name, | |
395 | closed=self.closed | |
396 | ) |
0 | 0 | import asyncio |
1 | 1 | import collections |
2 | 2 | import warnings |
3 | from types import TracebackType | |
4 | from typing import ( | |
5 | Any, | |
6 | Awaitable, | |
7 | Callable, | |
8 | Deque, | |
9 | Generator, | |
10 | Optional, | |
11 | Set, | |
12 | Type, | |
13 | ) | |
3 | 14 | |
4 | 15 | import async_timeout |
5 | from psycopg2.extensions import TRANSACTION_STATUS_IDLE | |
6 | ||
7 | from .connection import TIMEOUT, connect | |
8 | from .utils import ( | |
9 | _PoolAcquireContextManager, | |
10 | _PoolConnectionContextManager, | |
11 | _PoolContextManager, | |
12 | _PoolCursorContextManager, | |
13 | ensure_future, | |
14 | get_running_loop, | |
15 | ) | |
16 | ||
17 | ||
18 | def create_pool(dsn=None, *, minsize=1, maxsize=10, | |
19 | timeout=TIMEOUT, pool_recycle=-1, | |
20 | enable_json=True, enable_hstore=True, enable_uuid=True, | |
21 | echo=False, on_connect=None, | |
22 | **kwargs): | |
16 | import psycopg2.extensions | |
17 | ||
18 | from .connection import TIMEOUT, Connection, Cursor, connect | |
19 | from .utils import _ContextManager, create_completed_future, get_running_loop | |
20 | ||
21 | ||
22 | def create_pool( | |
23 | dsn: Optional[str] = None, | |
24 | *, | |
25 | minsize: int = 1, | |
26 | maxsize: int = 10, | |
27 | timeout: float = TIMEOUT, | |
28 | pool_recycle: float = -1.0, | |
29 | enable_json: bool = True, | |
30 | enable_hstore: bool = True, | |
31 | enable_uuid: bool = True, | |
32 | echo: bool = False, | |
33 | on_connect: Optional[Callable[[Connection], Awaitable[None]]] = None, | |
34 | **kwargs: Any, | |
35 | ) -> _ContextManager["Pool"]: | |
23 | 36 | coro = Pool.from_pool_fill( |
24 | dsn, minsize, maxsize, timeout, | |
25 | enable_json=enable_json, enable_hstore=enable_hstore, | |
26 | enable_uuid=enable_uuid, echo=echo, on_connect=on_connect, | |
27 | pool_recycle=pool_recycle, **kwargs | |
37 | dsn, | |
38 | minsize, | |
39 | maxsize, | |
40 | timeout, | |
41 | enable_json=enable_json, | |
42 | enable_hstore=enable_hstore, | |
43 | enable_uuid=enable_uuid, | |
44 | echo=echo, | |
45 | on_connect=on_connect, | |
46 | pool_recycle=pool_recycle, | |
47 | **kwargs, | |
28 | 48 | ) |
29 | ||
30 | return _PoolContextManager(coro) | |
31 | ||
32 | ||
33 | class Pool(asyncio.AbstractServer): | |
49 | return _ContextManager[Pool](coro, _destroy_pool) | |
50 | ||
51 | ||
52 | async def _destroy_pool(pool: "Pool") -> None: | |
53 | pool.close() | |
54 | await pool.wait_closed() | |
55 | ||
56 | ||
57 | class _PoolConnectionContextManager: | |
58 | """Context manager. | |
59 | ||
60 | This enables the following idiom for acquiring and releasing a | |
61 | connection around a block: | |
62 | ||
63 | async with pool as conn: | |
64 | cur = await conn.cursor() | |
65 | ||
66 | while failing loudly when accidentally using: | |
67 | ||
68 | with pool: | |
69 | <block> | |
70 | """ | |
71 | ||
72 | __slots__ = ("_pool", "_conn") | |
73 | ||
74 | def __init__(self, pool: "Pool", conn: Connection): | |
75 | self._pool: Optional[Pool] = pool | |
76 | self._conn: Optional[Connection] = conn | |
77 | ||
78 | def __enter__(self) -> Connection: | |
79 | assert self._conn | |
80 | return self._conn | |
81 | ||
82 | def __exit__( | |
83 | self, | |
84 | exc_type: Optional[Type[BaseException]], | |
85 | exc: Optional[BaseException], | |
86 | tb: Optional[TracebackType], | |
87 | ) -> None: | |
88 | if self._pool is None or self._conn is None: | |
89 | return | |
90 | try: | |
91 | self._pool.release(self._conn) | |
92 | finally: | |
93 | self._pool = None | |
94 | self._conn = None | |
95 | ||
96 | async def __aenter__(self) -> Connection: | |
97 | assert self._conn | |
98 | return self._conn | |
99 | ||
100 | async def __aexit__( | |
101 | self, | |
102 | exc_type: Optional[Type[BaseException]], | |
103 | exc: Optional[BaseException], | |
104 | tb: Optional[TracebackType], | |
105 | ) -> None: | |
106 | if self._pool is None or self._conn is None: | |
107 | return | |
108 | try: | |
109 | await self._pool.release(self._conn) | |
110 | finally: | |
111 | self._pool = None | |
112 | self._conn = None | |
113 | ||
114 | ||
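The idiom documented in the class above — hand out the connection on `async with`, release it on exit even if the body raises — can be sketched with a toy pool. `ToyPool` and `Releaser` are illustrations, not aiopg classes.

```python
import asyncio

class ToyPool:
    """Records released connections; stands in for the real Pool."""

    def __init__(self):
        self.released = []

    async def release(self, conn):
        self.released.append(conn)

class Releaser:
    """Minimal analogue of the pool connection context manager above."""

    def __init__(self, pool, conn):
        self._pool, self._conn = pool, conn

    async def __aenter__(self):
        return self._conn

    async def __aexit__(self, exc_type, exc, tb):
        # Release exactly once, then drop the references.
        pool, conn = self._pool, self._conn
        self._pool = self._conn = None
        await pool.release(conn)

async def demo():
    pool = ToyPool()
    async with Releaser(pool, "conn-1") as conn:
        assert conn == "conn-1"
    return pool.released

print(asyncio.run(demo()))  # ['conn-1']
```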
115 | class _PoolCursorContextManager: | |
116 | """Context manager. | |
117 | ||
118 | This enables the following idiom for acquiring and releasing a | |
119 | cursor around a block: | |
120 | ||
121 | async with pool.cursor() as cur: | |
122 | await cur.execute("SELECT 1") | |
123 | ||
124 | while failing loudly when accidentally using: | |
125 | ||
126 | with pool: | |
127 | <block> | |
128 | """ | |
129 | ||
130 | __slots__ = ("_pool", "_conn", "_cursor") | |
131 | ||
132 | def __init__(self, pool: "Pool", conn: Connection, cursor: Cursor): | |
133 | self._pool = pool | |
134 | self._conn = conn | |
135 | self._cursor = cursor | |
136 | ||
137 | def __enter__(self) -> Cursor: | |
138 | return self._cursor | |
139 | ||
140 | def __exit__( | |
141 | self, | |
142 | exc_type: Optional[Type[BaseException]], | |
143 | exc: Optional[BaseException], | |
144 | tb: Optional[TracebackType], | |
145 | ) -> None: | |
146 | try: | |
147 | self._cursor.close() | |
148 | except psycopg2.ProgrammingError: | |
149 | # seen instances where the cursor fails to close: | |
150 | # https://github.com/aio-libs/aiopg/issues/364 | |
151 | # We close it here so we don't return a bad connection to the pool | |
152 | self._conn.close() | |
153 | raise | |
154 | finally: | |
155 | try: | |
156 | self._pool.release(self._conn) | |
157 | finally: | |
158 | self._pool = None # type: ignore | |
159 | self._conn = None # type: ignore | |
160 | self._cursor = None # type: ignore | |
161 | ||
162 | ||
163 | class Pool: | |
34 | 164 | """Connection pool""" |
35 | 165 | |
36 | def __init__(self, dsn, minsize, maxsize, timeout, *, | |
37 | enable_json, enable_hstore, enable_uuid, echo, | |
38 | on_connect, pool_recycle, **kwargs): | |
166 | def __init__( | |
167 | self, | |
168 | dsn: str, | |
169 | minsize: int, | |
170 | maxsize: int, | |
171 | timeout: float, | |
172 | *, | |
173 | enable_json: bool, | |
174 | enable_hstore: bool, | |
175 | enable_uuid: bool, | |
176 | echo: bool, | |
177 | on_connect: Optional[Callable[[Connection], Awaitable[None]]], | |
178 | pool_recycle: float, | |
179 | **kwargs: Any, | |
180 | ): | |
39 | 181 | if minsize < 0: |
40 | 182 | raise ValueError("minsize should be zero or greater") |
41 | 183 | if maxsize < minsize and maxsize != 0: |
42 | 184 | raise ValueError("maxsize should not be less than minsize") |
43 | 185 | self._dsn = dsn |
44 | 186 | self._minsize = minsize |
45 | self._loop = get_running_loop(kwargs.pop('loop', None) is not None) | |
187 | self._loop = get_running_loop() | |
46 | 188 | self._timeout = timeout |
47 | 189 | self._recycle = pool_recycle |
48 | 190 | self._enable_json = enable_json |
52 | 194 | self._on_connect = on_connect |
53 | 195 | self._conn_kwargs = kwargs |
54 | 196 | self._acquiring = 0 |
55 | self._free = collections.deque(maxlen=maxsize or None) | |
197 | self._free: Deque[Connection] = collections.deque( | |
198 | maxlen=maxsize or None | |
199 | ) | |
56 | 200 | self._cond = asyncio.Condition() |
57 | self._used = set() | |
58 | self._terminated = set() | |
201 | self._used: Set[Connection] = set() | |
202 | self._terminated: Set[Connection] = set() | |
59 | 203 | self._closing = False |
60 | 204 | self._closed = False |
61 | 205 | |
62 | 206 | @property |
63 | def echo(self): | |
207 | def echo(self) -> bool: | |
64 | 208 | return self._echo |
65 | 209 | |
66 | 210 | @property |
67 | def minsize(self): | |
211 | def minsize(self) -> int: | |
68 | 212 | return self._minsize |
69 | 213 | |
70 | 214 | @property |
71 | def maxsize(self): | |
215 | def maxsize(self) -> Optional[int]: | |
72 | 216 | return self._free.maxlen |
73 | 217 | |
74 | 218 | @property |
75 | def size(self): | |
219 | def size(self) -> int: | |
76 | 220 | return self.freesize + len(self._used) + self._acquiring |
77 | 221 | |
78 | 222 | @property |
79 | def freesize(self): | |
223 | def freesize(self) -> int: | |
80 | 224 | return len(self._free) |
81 | 225 | |
82 | 226 | @property |
83 | def timeout(self): | |
227 | def timeout(self) -> float: | |
84 | 228 | return self._timeout |
85 | 229 | |
86 | async def clear(self): | |
230 | async def clear(self) -> None: | |
87 | 231 | """Close all free connections in pool.""" |
88 | 232 | async with self._cond: |
89 | 233 | while self._free: |
92 | 236 | self._cond.notify() |
93 | 237 | |
94 | 238 | @property |
95 | def closed(self): | |
239 | def closed(self) -> bool: | |
96 | 240 | return self._closed |
97 | 241 | |
98 | def close(self): | |
242 | def close(self) -> None: | |
99 | 243 | """Close pool. |
100 | 244 | |
101 | 245 | Mark all pool connections to be closed on getting back to pool. |
105 | 249 | return |
106 | 250 | self._closing = True |
107 | 251 | |
108 | def terminate(self): | |
252 | def terminate(self) -> None: | |
109 | 253 | """Terminate pool. |
110 | 254 | |
111 | 255 | Close pool with instantly closing all acquired connections also. |
119 | 263 | |
120 | 264 | self._used.clear() |
121 | 265 | |
122 | async def wait_closed(self): | |
266 | async def wait_closed(self) -> None: | |
123 | 267 | """Wait for closing all pool's connections.""" |
124 | 268 | |
125 | 269 | if self._closed: |
126 | 270 | return |
127 | 271 | if not self._closing: |
128 | raise RuntimeError(".wait_closed() should be called " | |
129 | "after .close()") | |
272 | raise RuntimeError( | |
273 | ".wait_closed() should be called " "after .close()" | |
274 | ) | |
130 | 275 | |
131 | 276 | while self._free: |
132 | 277 | conn = self._free.popleft() |
133 | conn.close() | |
278 | await conn.close() | |
134 | 279 | |
135 | 280 | async with self._cond: |
136 | 281 | while self.size > self.freesize: |
138 | 283 | |
139 | 284 | self._closed = True |
140 | 285 | |
141 | def acquire(self): | |
286 | def acquire(self) -> _ContextManager[Connection]: | |
142 | 287 | """Acquire free connection from the pool.""" |
143 | 288 | coro = self._acquire() |
144 | return _PoolAcquireContextManager(coro, self) | |
289 | return _ContextManager[Connection](coro, self.release) | |
145 | 290 | |
146 | 291 | @classmethod |
147 | async def from_pool_fill(cls, *args, **kwargs): | |
292 | async def from_pool_fill(cls, *args: Any, **kwargs: Any) -> "Pool": | |
148 | 293 | """Constructor that fills the free pool with connections; |
149 | 294 | the number is controlled by the minsize parameter. |
150 | 295 | """ |
155 | 300 | |
156 | 301 | return self |
157 | 302 | |
158 | async def _acquire(self): | |
303 | async def _acquire(self) -> Connection: | |
159 | 304 | if self._closing: |
160 | 305 | raise RuntimeError("Cannot acquire connection after closing pool") |
161 | 306 | async with async_timeout.timeout(self._timeout), self._cond: |
170 | 315 | else: |
171 | 316 | await self._cond.wait() |
172 | 317 | |
173 | async def _fill_free_pool(self, override_min): | |
318 | async def _fill_free_pool(self, override_min: bool) -> None: | |
174 | 319 | # iterate over free connections and remove timed-out ones |
175 | 320 | n, free = 0, len(self._free) |
176 | 321 | while n < free: |
178 | 323 | if conn.closed: |
179 | 324 | self._free.pop() |
180 | 325 | elif -1 < self._recycle < self._loop.time() - conn.last_usage: |
181 | conn.close() | |
326 | await conn.close() | |
182 | 327 | self._free.pop() |
183 | 328 | else: |
184 | 329 | self._free.rotate() |
188 | 333 | self._acquiring += 1 |
189 | 334 | try: |
190 | 335 | conn = await connect( |
191 | self._dsn, timeout=self._timeout, | |
336 | self._dsn, | |
337 | timeout=self._timeout, | |
192 | 338 | enable_json=self._enable_json, |
193 | 339 | enable_hstore=self._enable_hstore, |
194 | 340 | enable_uuid=self._enable_uuid, |
195 | 341 | echo=self._echo, |
196 | **self._conn_kwargs) | |
342 | **self._conn_kwargs, | |
343 | ) | |
197 | 344 | if self._on_connect is not None: |
198 | 345 | await self._on_connect(conn) |
199 | 346 | # raise exception if pool is closing |
204 | 351 | if self._free: |
205 | 352 | return |
206 | 353 | |
207 | if override_min and self.size < self.maxsize: | |
354 | if override_min and self.size < (self.maxsize or 0): | |
208 | 355 | self._acquiring += 1 |
209 | 356 | try: |
210 | 357 | conn = await connect( |
211 | self._dsn, timeout=self._timeout, | |
358 | self._dsn, | |
359 | timeout=self._timeout, | |
212 | 360 | enable_json=self._enable_json, |
213 | 361 | enable_hstore=self._enable_hstore, |
214 | 362 | enable_uuid=self._enable_uuid, |
215 | 363 | echo=self._echo, |
216 | **self._conn_kwargs) | |
364 | **self._conn_kwargs, | |
365 | ) | |
217 | 366 | if self._on_connect is not None: |
218 | 367 | await self._on_connect(conn) |
219 | 368 | # raise exception if pool is closing |
222 | 371 | finally: |
223 | 372 | self._acquiring -= 1 |
224 | 373 | |
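The recycle check in `_fill_free_pool` above (`-1 < self._recycle < self._loop.time() - conn.last_usage`) is terse; restated as a standalone predicate it is easier to reason about. A sketch with hypothetical clock values:

```python
def should_recycle(last_usage: float, now: float, pool_recycle: float) -> bool:
    """True when recycling is enabled (pool_recycle > -1) and the
    connection has been idle for longer than pool_recycle seconds.
    Standalone restatement for illustration; not aiopg API."""
    return -1 < pool_recycle < now - last_usage

# Hypothetical timestamps, in seconds:
print(should_recycle(last_usage=100.0, now=200.0, pool_recycle=50.0))   # True
print(should_recycle(last_usage=100.0, now=120.0, pool_recycle=50.0))   # False
print(should_recycle(last_usage=100.0, now=200.0, pool_recycle=-1.0))   # False: disabled
```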
225 | async def _wakeup(self): | |
374 | async def _wakeup(self) -> None: | |
226 | 375 | async with self._cond: |
227 | 376 | self._cond.notify() |
228 | 377 | |
229 | def release(self, conn): | |
230 | """Release free connection back to the connection pool. | |
231 | """ | |
232 | fut = self._loop.create_future() | |
233 | fut.set_result(None) | |
378 | def release(self, conn: Connection) -> "asyncio.Future[None]": | |
379 | """Release free connection back to the connection pool.""" | |
380 | future = create_completed_future(self._loop) | |
234 | 381 | if conn in self._terminated: |
235 | 382 | assert conn.closed, conn |
236 | 383 | self._terminated.remove(conn) |
237 | return fut | |
384 | return future | |
238 | 385 | assert conn in self._used, (conn, self._used) |
239 | 386 | self._used.remove(conn) |
240 | if not conn.closed: | |
241 | tran_status = conn._conn.get_transaction_status() | |
242 | if tran_status != TRANSACTION_STATUS_IDLE: | |
243 | warnings.warn( | |
244 | ("Invalid transaction status on " | |
245 | "released connection: {}").format(tran_status), | |
246 | ResourceWarning | |
247 | ) | |
248 | conn.close() | |
249 | return fut | |
250 | if self._closing: | |
251 | conn.close() | |
252 | else: | |
253 | conn.free_cursor() | |
254 | self._free.append(conn) | |
255 | fut = ensure_future(self._wakeup(), loop=self._loop) | |
256 | return fut | |
257 | ||
258 | async def cursor(self, name=None, cursor_factory=None, | |
259 | scrollable=None, withhold=False, *, timeout=None): | |
387 | if conn.closed: | |
388 | return future | |
389 | transaction_status = conn.raw.get_transaction_status() | |
390 | if transaction_status != psycopg2.extensions.TRANSACTION_STATUS_IDLE: | |
391 | warnings.warn( | |
392 | f"Invalid transaction status on " | |
393 | f"released connection: {transaction_status}", | |
394 | ResourceWarning, | |
395 | ) | |
396 | conn.close() | |
397 | return future | |
398 | if self._closing: | |
399 | conn.close() | |
400 | else: | |
401 | self._free.append(conn) | |
402 | return asyncio.ensure_future(self._wakeup(), loop=self._loop) | |
403 | ||
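The branches of `release()` above form a small decision table. The sketch below restates it as a pure function; the string labels and the function itself are illustrative (the real method closes connections and returns a future), and the value of `TRANSACTION_STATUS_IDLE` is assumed to match psycopg2's constant.

```python
TRANSACTION_STATUS_IDLE = 0  # assumed value of the psycopg2.extensions constant

def release_decision(conn_closed, transaction_status, pool_closing):
    """Restates the Pool.release() branches; labels are illustrative."""
    if conn_closed:
        return "already-closed"        # nothing to return to the pool
    if transaction_status != TRANSACTION_STATUS_IDLE:
        return "warn-and-close"        # connection left mid-transaction
    if pool_closing:
        return "close"                 # pool is shutting down
    return "return-to-free-list"       # healthy: keep for reuse

print(release_decision(False, TRANSACTION_STATUS_IDLE, False))  # return-to-free-list
```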
404 | async def cursor( | |
405 | self, | |
406 | name: Optional[str] = None, | |
407 | cursor_factory: Any = None, | |
408 | scrollable: Optional[bool] = None, | |
409 | withhold: bool = False, | |
410 | *, | |
411 | timeout: Optional[float] = None, | |
412 | ) -> _PoolCursorContextManager: | |
260 | 413 | conn = await self.acquire() |
261 | cur = await conn.cursor(name=name, cursor_factory=cursor_factory, | |
262 | scrollable=scrollable, withhold=withhold, | |
263 | timeout=timeout) | |
264 | return _PoolCursorContextManager(self, conn, cur) | |
265 | ||
266 | def __await__(self): | |
414 | cursor = await conn.cursor( | |
415 | name=name, | |
416 | cursor_factory=cursor_factory, | |
417 | scrollable=scrollable, | |
418 | withhold=withhold, | |
419 | timeout=timeout, | |
420 | ) | |
421 | return _PoolCursorContextManager(self, conn, cursor) | |
422 | ||
423 | def __await__(self) -> Generator[Any, Any, _PoolConnectionContextManager]: | |
267 | 424 | # This is not a coroutine. It is meant to enable the idiom: |
268 | 425 | # |
269 | 426 | # with (await pool) as conn: |
279 | 436 | conn = yield from self._acquire().__await__() |
280 | 437 | return _PoolConnectionContextManager(self, conn) |
281 | 438 | |
282 | def __enter__(self): | |
439 | def __enter__(self) -> "Pool": | |
283 | 440 | raise RuntimeError( |
284 | '"await" should be used as context manager expression') | |
285 | ||
286 | def __exit__(self, *args): | |
441 | '"await" should be used as context manager expression' | |
442 | ) | |
443 | ||
444 | def __exit__( | |
445 | self, | |
446 | exc_type: Optional[Type[BaseException]], | |
447 | exc: Optional[BaseException], | |
448 | tb: Optional[TracebackType], | |
449 | ) -> None: | |
287 | 450 | # This must exist because __enter__ exists, even though that |
288 | 451 | # always raises; that's how the with-statement works. |
289 | 452 | pass # pragma: nocover |
290 | 453 | |
291 | async def __aenter__(self): | |
454 | async def __aenter__(self) -> "Pool": | |
292 | 455 | return self |
293 | 456 | |
294 | async def __aexit__(self, exc_type, exc_val, exc_tb): | |
457 | async def __aexit__( | |
458 | self, | |
459 | exc_type: Optional[Type[BaseException]], | |
460 | exc: Optional[BaseException], | |
461 | tb: Optional[TracebackType], | |
462 | ) -> None: | |
295 | 463 | self.close() |
296 | 464 | await self.wait_closed() |
297 | 465 | |
298 | def __del__(self): | |
466 | def __del__(self) -> None: | |
299 | 467 | try: |
300 | 468 | self._free |
301 | 469 | except AttributeError: |
307 | 475 | conn.close() |
308 | 476 | left += 1 |
309 | 477 | warnings.warn( |
310 | "Unclosed {} connections in {!r}".format(left, self), | |
311 | ResourceWarning) | |
478 | f"Unclosed {left} connections in {self!r}", ResourceWarning | |
479 | ) |
8 | 8 | ResourceClosedError, |
9 | 9 | ) |
10 | 10 | |
11 | __all__ = ('create_engine', 'SAConnection', 'Error', | |
12 | 'ArgumentError', 'InvalidRequestError', 'NoSuchColumnError', | |
13 | 'ResourceClosedError', 'Engine') | |
11 | __all__ = ( | |
12 | "create_engine", | |
13 | "SAConnection", | |
14 | "Error", | |
15 | "ArgumentError", | |
16 | "InvalidRequestError", | |
17 | "NoSuchColumnError", | |
18 | "ResourceClosedError", | |
19 | "Engine", | |
20 | ) | |
14 | 21 | |
15 | 22 | |
16 | (SAConnection, Error, ArgumentError, InvalidRequestError, | |
17 | NoSuchColumnError, ResourceClosedError, create_engine, Engine) | |
23 | ( | |
24 | SAConnection, | |
25 | Error, | |
26 | ArgumentError, | |
27 | InvalidRequestError, | |
28 | NoSuchColumnError, | |
29 | ResourceClosedError, | |
30 | create_engine, | |
31 | Engine, | |
32 | ) |
0 | import asyncio | |
1 | import contextlib | |
2 | import weakref | |
3 | ||
0 | 4 | from sqlalchemy.sql import ClauseElement |
1 | 5 | from sqlalchemy.sql.ddl import DDLElement |
2 | 6 | from sqlalchemy.sql.dml import UpdateBase |
3 | 7 | |
4 | from ..utils import _SAConnectionContextManager, _TransactionContextManager | |
8 | from ..utils import _ContextManager, _IterableContextManager | |
5 | 9 | from . import exc |
6 | 10 | from .result import ResultProxy |
7 | 11 | from .transaction import ( |
12 | 16 | ) |
13 | 17 | |
14 | 18 | |
19 | async def _commit_transaction_if_active(t: Transaction) -> None: | |
20 | if t.is_active: | |
21 | await t.commit() | |
22 | ||
23 | ||
24 | async def _rollback_transaction(t: Transaction) -> None: | |
25 | await t.rollback() | |
26 | ||
27 | ||
28 | async def _close_result_proxy(c: "ResultProxy") -> None: | |
29 | c.close() | |
30 | ||
31 | ||
15 | 32 | class SAConnection: |
33 | _QUERY_COMPILE_KWARGS = (("render_postcompile", True),) | |
34 | ||
35 | __slots__ = ( | |
36 | "_connection", | |
37 | "_transaction", | |
38 | "_savepoint_seq", | |
39 | "_engine", | |
40 | "_dialect", | |
41 | "_cursors", | |
42 | "_query_compile_kwargs", | |
43 | ) | |
44 | ||
16 | 45 | def __init__(self, connection, engine): |
17 | 46 | self._connection = connection |
18 | 47 | self._transaction = None |
19 | 48 | self._savepoint_seq = 0 |
20 | 49 | self._engine = engine |
21 | 50 | self._dialect = engine.dialect |
22 | self._cursor = None | |
51 | self._cursors = weakref.WeakSet() | |
52 | self._query_compile_kwargs = dict(self._QUERY_COMPILE_KWARGS) | |
23 | 53 | |
24 | 54 | def execute(self, query, *multiparams, **params): |
25 | 55 | """Executes a SQL query with optional parameters. |
59 | 89 | |
60 | 90 | """ |
61 | 91 | coro = self._execute(query, *multiparams, **params) |
62 | return _SAConnectionContextManager(coro) | |
63 | ||
64 | async def _get_cursor(self): | |
65 | if self._cursor and not self._cursor.closed: | |
66 | return self._cursor | |
67 | ||
68 | self._cursor = await self._connection.cursor() | |
69 | return self._cursor | |
92 | return _IterableContextManager[ResultProxy](coro, _close_result_proxy) | |
93 | ||
94 | async def _open_cursor(self): | |
95 | if self._connection is None: | |
96 | raise exc.ResourceClosedError("This connection is closed.") | |
97 | cursor = await self._connection.cursor() | |
98 | self._cursors.add(cursor) | |
99 | return cursor | |
100 | ||
101 | def _close_cursor(self, cursor): | |
102 | self._cursors.remove(cursor) | |
103 | cursor.close() | |
70 | 104 | |
71 | 105 | async def _execute(self, query, *multiparams, **params): |
72 | cursor = await self._get_cursor() | |
106 | cursor = await self._open_cursor() | |
73 | 107 | dp = _distill_params(multiparams, params) |
74 | 108 | if len(dp) > 1: |
75 | 109 | raise exc.ArgumentError("aiopg doesn't support executemany") |
81 | 115 | if isinstance(query, str): |
82 | 116 | await cursor.execute(query, dp) |
83 | 117 | elif isinstance(query, ClauseElement): |
84 | compiled = query.compile(dialect=self._dialect) | |
85 | 118 | # parameters = compiled.params |
86 | 119 | if not isinstance(query, DDLElement): |
120 | compiled = query.compile( | |
121 | dialect=self._dialect, | |
122 | compile_kwargs=self._query_compile_kwargs, | |
123 | ) | |
87 | 124 | if dp and isinstance(dp, (list, tuple)): |
88 | 125 | if isinstance(query, UpdateBase): |
89 | dp = {c.key: pval | |
90 | for c, pval in zip(query.table.c, dp)} | |
126 | dp = { | |
127 | c.key: pval for c, pval in zip(query.table.c, dp) | |
128 | } | |
91 | 129 | else: |
92 | raise exc.ArgumentError("Don't mix sqlalchemy SELECT " | |
93 | "clause with positional " | |
94 | "parameters") | |
130 | raise exc.ArgumentError( | |
131 | "Don't mix sqlalchemy SELECT " | |
132 | "clause with positional " | |
133 | "parameters" | |
134 | ) | |
95 | 135 | |
96 | 136 | compiled_parameters = [compiled.construct_params(dp)] |
97 | 137 | processed_parameters = [] |
98 | 138 | processors = compiled._bind_processors |
99 | 139 | for compiled_params in compiled_parameters: |
100 | params = {key: (processors[key](compiled_params[key]) | |
101 | if key in processors | |
102 | else compiled_params[key]) | |
103 | for key in compiled_params} | |
140 | params = { | |
141 | key: ( | |
142 | processors[key](compiled_params[key]) | |
143 | if key in processors | |
144 | else compiled_params[key] | |
145 | ) | |
146 | for key in compiled_params | |
147 | } | |
104 | 148 | processed_parameters.append(params) |
105 | 149 | post_processed_params = self._dialect.execute_sequence_format( |
106 | processed_parameters) | |
150 | processed_parameters | |
151 | ) | |
107 | 152 | |
108 | 153 | # _result_columns is a private API of Compiled, |
109 | 154 | # but I couldn't find any public API exposing this data. |
110 | 155 | result_map = compiled._result_columns |
111 | 156 | |
112 | 157 | else: |
158 | compiled = query.compile(dialect=self._dialect) | |
113 | 159 | if dp: |
114 | raise exc.ArgumentError("Don't mix sqlalchemy DDL clause " | |
115 | "and execution with parameters") | |
160 | raise exc.ArgumentError( | |
161 | "Don't mix sqlalchemy DDL clause " | |
162 | "and execution with parameters" | |
163 | ) | |
116 | 164 | post_processed_params = [compiled.construct_params()] |
117 | 165 | result_map = None |
118 | 166 | |
119 | 167 | await cursor.execute(str(compiled), post_processed_params[0]) |
120 | 168 | else: |
121 | raise exc.ArgumentError("sql statement should be str or " | |
122 | "SQLAlchemy data " | |
123 | "selection/modification clause") | |
169 | raise exc.ArgumentError( | |
170 | "sql statement should be str or " | |
171 | "SQLAlchemy data " | |
172 | "selection/modification clause" | |
173 | ) | |
124 | 174 | |
125 | 175 | return ResultProxy(self, cursor, self._dialect, result_map) |
126 | 176 | |
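The `UpdateBase` branch above maps positional parameters onto the statement's table columns by zipping them with `query.table.c`. A standalone sketch of that mapping, using stand-in column objects rather than real SQLAlchemy columns:

```python
# Stand-in for a SQLAlchemy column; only the .key attribute matters here.
class Col:
    def __init__(self, key):
        self.key = key

# Positional parameters are paired with table columns in order,
# producing the named-parameter dict the compiler expects.
columns = [Col("id"), Col("name")]
dp = (1, "alice")
mapped = {c.key: pval for c, pval in zip(columns, dp)}
# mapped == {"id": 1, "name": "alice"}
```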
176 | 226 | |
177 | 227 | """ |
178 | 228 | coro = self._begin(isolation_level, readonly, deferrable) |
179 | return _TransactionContextManager(coro) | |
229 | return _ContextManager[Transaction]( | |
230 | coro, _commit_transaction_if_active, _rollback_transaction | |
231 | ) | |
180 | 232 | |
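`begin()` now returns a `_ContextManager` that commits the transaction on a clean exit and rolls it back on error. A minimal standalone sketch of that awaitable-plus-`async with` pattern, with hypothetical names (the real `_ContextManager` lives in `aiopg.utils` and is not reproduced here):

```python
import asyncio

class CoroContextManager:
    """Awaitable that also supports 'async with', invoking a success or
    failure callback on exit (sketch of the _ContextManager idea)."""

    def __init__(self, coro, on_ok, on_err):
        self._coro = coro
        self._obj = None
        self._on_ok = on_ok
        self._on_err = on_err

    def __await__(self):
        return self._coro.__await__()

    async def __aenter__(self):
        self._obj = await self._coro
        return self._obj

    async def __aexit__(self, exc_type, exc, tb):
        if exc_type is None:
            await self._on_ok(self._obj)
        else:
            await self._on_err(self._obj)

events = []

async def open_tx():
    events.append("begin")
    return "tx"

async def commit(tx):
    events.append("commit")

async def rollback(tx):
    events.append("rollback")

async def main():
    async with CoroContextManager(open_tx(), commit, rollback):
        events.append("work")
    return events

# asyncio.run(main()) yields ["begin", "work", "commit"]
```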
181 | 233 | async def _begin(self, isolation_level, readonly, deferrable): |
182 | 234 | if self._transaction is None: |
183 | 235 | self._transaction = RootTransaction(self) |
184 | 236 | await self._begin_impl(isolation_level, readonly, deferrable) |
185 | 237 | return self._transaction |
186 | else: | |
187 | return Transaction(self, self._transaction) | |
238 | return Transaction(self, self._transaction) | |
188 | 239 | |
189 | 240 | async def _begin_impl(self, isolation_level, readonly, deferrable): |
190 | stmt = 'BEGIN' | |
241 | stmt = "BEGIN" | |
191 | 242 | if isolation_level is not None: |
192 | stmt += ' ISOLATION LEVEL ' + isolation_level | |
243 | stmt += f" ISOLATION LEVEL {isolation_level}" | |
193 | 244 | if readonly: |
194 | stmt += ' READ ONLY' | |
245 | stmt += " READ ONLY" | |
195 | 246 | if deferrable: |
196 | stmt += ' DEFERRABLE' | |
197 | ||
198 | cur = await self._get_cursor() | |
199 | try: | |
200 | await cur.execute(stmt) | |
201 | finally: | |
202 | cur.close() | |
247 | stmt += " DEFERRABLE" | |
248 | ||
249 | cursor = await self._open_cursor() | |
250 | try: | |
251 | await cursor.execute(stmt) | |
252 | finally: | |
253 | self._close_cursor(cursor) | |
203 | 254 | |
204 | 255 | async def _commit_impl(self): |
205 | cur = await self._get_cursor() | |
206 | try: | |
207 | await cur.execute('COMMIT') | |
208 | finally: | |
209 | cur.close() | |
256 | cursor = await self._open_cursor() | |
257 | try: | |
258 | await cursor.execute("COMMIT") | |
259 | finally: | |
260 | self._close_cursor(cursor) | |
210 | 261 | self._transaction = None |
211 | 262 | |
212 | 263 | async def _rollback_impl(self): |
213 | cur = await self._get_cursor() | |
214 | try: | |
215 | await cur.execute('ROLLBACK') | |
216 | finally: | |
217 | cur.close() | |
264 | try: | |
265 | if self._connection.closed: | |
266 | return | |
267 | cursor = await self._open_cursor() | |
268 | try: | |
269 | await cursor.execute("ROLLBACK") | |
270 | finally: | |
271 | self._close_cursor(cursor) | |
272 | finally: | |
218 | 273 | self._transaction = None |
219 | 274 | |
220 | 275 | def begin_nested(self): |
229 | 284 | transaction of a whole. |
230 | 285 | """ |
231 | 286 | coro = self._begin_nested() |
232 | return _TransactionContextManager(coro) | |
287 | return _ContextManager( | |
288 | coro, _commit_transaction_if_active, _rollback_transaction | |
289 | ) | |
233 | 290 | |
234 | 291 | async def _begin_nested(self): |
235 | 292 | if self._transaction is None: |
240 | 297 | self._transaction._savepoint = await self._savepoint_impl() |
241 | 298 | return self._transaction |
242 | 299 | |
243 | async def _savepoint_impl(self, name=None): | |
300 | async def _savepoint_impl(self): | |
244 | 301 | self._savepoint_seq += 1 |
245 | name = 'aiopg_sa_savepoint_%s' % self._savepoint_seq | |
246 | ||
247 | cur = await self._get_cursor() | |
248 | try: | |
249 | await cur.execute('SAVEPOINT ' + name) | |
302 | name = f"aiopg_sa_savepoint_{self._savepoint_seq}" | |
303 | ||
304 | cursor = await self._open_cursor() | |
305 | try: | |
306 | await cursor.execute(f"SAVEPOINT {name}") | |
250 | 307 | return name |
251 | 308 | finally: |
252 | cur.close() | |
309 | self._close_cursor(cursor) | |
253 | 310 | |
254 | 311 | async def _rollback_to_savepoint_impl(self, name, parent): |
255 | cur = await self._get_cursor() | |
256 | try: | |
257 | await cur.execute('ROLLBACK TO SAVEPOINT ' + name) | |
258 | finally: | |
259 | cur.close() | |
260 | self._transaction = parent | |
312 | try: | |
313 | if self._connection.closed: | |
314 | return | |
315 | cursor = await self._open_cursor() | |
316 | try: | |
317 | await cursor.execute(f"ROLLBACK TO SAVEPOINT {name}") | |
318 | finally: | |
319 | self._close_cursor(cursor) | |
320 | finally: | |
321 | self._transaction = parent | |
261 | 322 | |
262 | 323 | async def _release_savepoint_impl(self, name, parent): |
263 | cur = await self._get_cursor() | |
264 | try: | |
265 | await cur.execute('RELEASE SAVEPOINT ' + name) | |
266 | finally: | |
267 | cur.close() | |
324 | cursor = await self._open_cursor() | |
325 | try: | |
326 | await cursor.execute(f"RELEASE SAVEPOINT {name}") | |
327 | finally: | |
328 | self._close_cursor(cursor) | |
329 | ||
268 | 330 | self._transaction = parent |
269 | 331 | |
270 | 332 | async def begin_twophase(self, xid=None): |
283 | 345 | if self._transaction is not None: |
284 | 346 | raise exc.InvalidRequestError( |
285 | 347 | "Cannot start a two phase transaction when a transaction " |
286 | "is already in progress.") | |
348 | "is already in progress." | |
349 | ) | |
287 | 350 | if xid is None: |
288 | 351 | xid = self._dialect.create_xid() |
289 | 352 | self._transaction = TwoPhaseTransaction(self, xid) |
290 | await self._begin_impl() | |
353 | await self._begin_impl(None, False, False) | |
291 | 354 | return self._transaction |
292 | 355 | |
293 | 356 | async def _prepare_twophase_impl(self, xid): |
294 | await self.execute("PREPARE TRANSACTION '%s'" % xid) | |
357 | await self.execute(f"PREPARE TRANSACTION {xid!r}") | |
295 | 358 | |
296 | 359 | async def recover_twophase(self): |
297 | 360 | """Return a list of prepared twophase transaction ids.""" |
301 | 364 | async def rollback_prepared(self, xid, *, is_prepared=True): |
302 | 365 | """Rollback prepared twophase transaction.""" |
303 | 366 | if is_prepared: |
304 | await self.execute("ROLLBACK PREPARED '%s'" % xid) | |
367 | await self.execute(f"ROLLBACK PREPARED {xid!r}") | |
305 | 368 | else: |
306 | 369 | await self._rollback_impl() |
307 | 370 | |
308 | 371 | async def commit_prepared(self, xid, *, is_prepared=True): |
309 | 372 | """Commit prepared twophase transaction.""" |
310 | 373 | if is_prepared: |
311 | await self.execute("COMMIT PREPARED '%s'" % xid) | |
374 | await self.execute(f"COMMIT PREPARED {xid!r}") | |
312 | 375 | else: |
313 | 376 | await self._commit_impl() |
314 | 377 | |
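The two-phase statements above rely on `!r` to single-quote the transaction id. A quick illustration of what those f-strings produce, assuming a plain-string xid:

```python
def twophase_stmts(xid):
    # !r applies repr(), so a str xid comes out single-quoted,
    # matching the PREPARE/COMMIT/ROLLBACK PREPARED statements above.
    return (
        f"PREPARE TRANSACTION {xid!r}",
        f"COMMIT PREPARED {xid!r}",
        f"ROLLBACK PREPARED {xid!r}",
    )

# twophase_stmts("tx_1")[0] == "PREPARE TRANSACTION 'tx_1'"
```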
331 | 394 | After .close() is called, the SAConnection is permanently in a |
332 | 395 | closed state, and will allow no further operations. |
333 | 396 | """ |
397 | ||
334 | 398 | if self.connection is None: |
335 | 399 | return |
336 | 400 | |
401 | await asyncio.shield(self._close()) | |
402 | ||
403 | async def _close(self): | |
337 | 404 | if self._transaction is not None: |
338 | await self._transaction.rollback() | |
405 | with contextlib.suppress(Exception): | |
406 | await self._transaction.rollback() | |
339 | 407 | self._transaction = None |
340 | # don't close underlying connection, it can be reused by pool | |
341 | # conn.close() | |
342 | ||
343 | self._engine.release(self) | |
344 | self._connection = None | |
345 | self._engine = None | |
408 | ||
409 | for cursor in self._cursors: | |
410 | cursor.close() | |
411 | self._cursors.clear() | |
412 | ||
413 | if self._engine is not None: | |
414 | with contextlib.suppress(Exception): | |
415 | await self._engine.release(self) | |
416 | self._connection = None | |
417 | self._engine = None | |
346 | 418 | |
347 | 419 | |
348 | 420 | def _distill_params(multiparams, params): |
363 | 435 | elif len(multiparams) == 1: |
364 | 436 | zero = multiparams[0] |
365 | 437 | if isinstance(zero, (list, tuple)): |
366 | if not zero or hasattr(zero[0], '__iter__') and \ | |
367 | not hasattr(zero[0], 'strip'): | |
438 | if ( | |
439 | not zero | |
440 | or hasattr(zero[0], "__iter__") | |
441 | and not hasattr(zero[0], "strip") | |
442 | ): | |
368 | 443 | # execute(stmt, [{}, {}, {}, ...]) |
369 | 444 | # execute(stmt, [(), (), (), ...]) |
370 | 445 | return zero |
371 | 446 | else: |
372 | 447 | # execute(stmt, ("value", "value")) |
373 | 448 | return [zero] |
374 | elif hasattr(zero, 'keys'): | |
449 | elif hasattr(zero, "keys"): | |
375 | 450 | # execute(stmt, {"key":"value"}) |
376 | 451 | return [zero] |
377 | 452 | else: |
378 | 453 | # execute(stmt, "value") |
379 | 454 | return [[zero]] |
380 | 455 | else: |
381 | if (hasattr(multiparams[0], '__iter__') and | |
382 | not hasattr(multiparams[0], 'strip')): | |
456 | if hasattr(multiparams[0], "__iter__") and not hasattr( | |
457 | multiparams[0], "strip" | |
458 | ): | |
383 | 459 | return multiparams |
384 | 460 | else: |
385 | 461 | return [multiparams] |
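The branches of `_distill_params` normalize every `execute()` calling convention to a list of parameter sets. A standalone sketch of the visible branches (the empty-argument case is elided in the diff above; returning an empty list for it is an assumption here):

```python
def distill_params(*multiparams):
    # Normalize execute()-style arguments to a list of parameter sets
    # (sketch of _distill_params; the empty case is an assumption).
    if not multiparams:
        return []
    if len(multiparams) == 1:
        zero = multiparams[0]
        if isinstance(zero, (list, tuple)):
            if not zero or (
                hasattr(zero[0], "__iter__") and not hasattr(zero[0], "strip")
            ):
                # execute(stmt, [{}, {}, ...]) / execute(stmt, [(), (), ...])
                return zero
            # execute(stmt, ("value", "value"))
            return [zero]
        if hasattr(zero, "keys"):
            # execute(stmt, {"key": "value"})
            return [zero]
        # execute(stmt, "value")
        return [[zero]]
    if hasattr(multiparams[0], "__iter__") and not hasattr(
        multiparams[0], "strip"
    ):
        return multiparams
    return [multiparams]
```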
0 | import asyncio | |
0 | 1 | import json |
1 | 2 | |
2 | 3 | import aiopg |
3 | 4 | |
4 | 5 | from ..connection import TIMEOUT |
5 | from ..utils import _PoolAcquireContextManager, _PoolContextManager | |
6 | from ..utils import _ContextManager, get_running_loop | |
6 | 7 | from .connection import SAConnection |
7 | 8 | |
8 | 9 | try: |
11 | 12 | PGDialect_psycopg2, |
12 | 13 | ) |
13 | 14 | except ImportError: # pragma: no cover |
14 | raise ImportError('aiopg.sa requires sqlalchemy') | |
15 | raise ImportError("aiopg.sa requires sqlalchemy") | |
15 | 16 | |
16 | 17 | |
17 | 18 | class APGCompiler_psycopg2(PGCompiler_psycopg2): |
31 | 32 | |
32 | 33 | |
33 | 34 | def get_dialect(json_serializer=json.dumps, json_deserializer=lambda x: x): |
34 | dialect = PGDialect_psycopg2(json_serializer=json_serializer, | |
35 | json_deserializer=json_deserializer) | |
35 | dialect = PGDialect_psycopg2( | |
36 | json_serializer=json_serializer, json_deserializer=json_deserializer | |
37 | ) | |
36 | 38 | |
37 | 39 | dialect.statement_compiler = APGCompiler_psycopg2 |
38 | 40 | dialect.implicit_returning = True |
48 | 50 | _dialect = get_dialect() |
49 | 51 | |
50 | 52 | |
51 | def create_engine(dsn=None, *, minsize=1, maxsize=10, dialect=_dialect, | |
52 | timeout=TIMEOUT, pool_recycle=-1, **kwargs): | |
53 | def create_engine( | |
54 | dsn=None, | |
55 | *, | |
56 | minsize=1, | |
57 | maxsize=10, | |
58 | dialect=_dialect, | |
59 | timeout=TIMEOUT, | |
60 | pool_recycle=-1, | |
61 | **kwargs | |
62 | ): | |
53 | 63 | """A coroutine for Engine creation. |
54 | 64 | |
55 | 65 | Returns Engine instance with embedded connection pool. |
57 | 67 | The pool has *minsize* opened connections to PostgreSQL server. |
58 | 68 | """ |
59 | 69 | |
60 | coro = _create_engine(dsn=dsn, minsize=minsize, maxsize=maxsize, | |
61 | dialect=dialect, timeout=timeout, | |
62 | pool_recycle=pool_recycle, **kwargs) | |
63 | return _EngineContextManager(coro) | |
64 | ||
65 | ||
66 | async def _create_engine(dsn=None, *, minsize=1, maxsize=10, dialect=_dialect, | |
67 | timeout=TIMEOUT, pool_recycle=-1, **kwargs): | |
70 | coro = _create_engine( | |
71 | dsn=dsn, | |
72 | minsize=minsize, | |
73 | maxsize=maxsize, | |
74 | dialect=dialect, | |
75 | timeout=timeout, | |
76 | pool_recycle=pool_recycle, | |
77 | **kwargs | |
78 | ) | |
79 | return _ContextManager(coro, _close_engine) | |
80 | ||
81 | ||
82 | async def _create_engine( | |
83 | dsn=None, | |
84 | *, | |
85 | minsize=1, | |
86 | maxsize=10, | |
87 | dialect=_dialect, | |
88 | timeout=TIMEOUT, | |
89 | pool_recycle=-1, | |
90 | **kwargs | |
91 | ): | |
68 | 92 | |
69 | 93 | pool = await aiopg.create_pool( |
70 | dsn, minsize=minsize, maxsize=maxsize, | |
71 | timeout=timeout, pool_recycle=pool_recycle, **kwargs | |
94 | dsn, | |
95 | minsize=minsize, | |
96 | maxsize=maxsize, | |
97 | timeout=timeout, | |
98 | pool_recycle=pool_recycle, | |
99 | **kwargs | |
72 | 100 | ) |
73 | 101 | conn = await pool.acquire() |
74 | 102 | try: |
78 | 106 | await pool.release(conn) |
79 | 107 | |
80 | 108 | |
109 | async def _close_engine(engine: "Engine") -> None: | |
110 | engine.close() | |
111 | await engine.wait_closed() | |
112 | ||
113 | ||
114 | async def _close_connection(c: SAConnection) -> None: | |
115 | await c.close() | |
116 | ||
117 | ||
81 | 118 | class Engine: |
82 | 119 | """Connects a aiopg.Pool and |
83 | 120 | sqlalchemy.engine.interfaces.Dialect together to provide a |
87 | 124 | create_engine coroutine. |
88 | 125 | """ |
89 | 126 | |
127 | __slots__ = ("_dialect", "_pool", "_dsn", "_loop") | |
128 | ||
90 | 129 | def __init__(self, dialect, pool, dsn): |
91 | 130 | self._dialect = dialect |
92 | 131 | self._pool = pool |
93 | 132 | self._dsn = dsn |
133 | self._loop = get_running_loop() | |
94 | 134 | |
95 | 135 | @property |
96 | 136 | def dialect(self): |
159 | 199 | def acquire(self): |
160 | 200 | """Get a connection from pool.""" |
161 | 201 | coro = self._acquire() |
162 | return _EngineAcquireContextManager(coro, self) | |
202 | return _ContextManager[SAConnection](coro, _close_connection) | |
163 | 203 | |
164 | 204 | async def _acquire(self): |
165 | 205 | raw = await self._pool.acquire() |
166 | conn = SAConnection(raw, self) | |
167 | return conn | |
206 | return SAConnection(raw, self) | |
168 | 207 | |
169 | 208 | def release(self, conn): |
170 | raw = conn.connection | |
171 | fut = self._pool.release(raw) | |
172 | return fut | |
209 | return self._pool.release(conn.connection) | |
173 | 210 | |
174 | 211 | def __enter__(self): |
175 | 212 | raise RuntimeError( |
176 | '"await" should be used as context manager expression') | |
213 | '"await" should be used as context manager expression' | |
214 | ) | |
177 | 215 | |
178 | 216 | def __exit__(self, *args): |
179 | 217 | # This must exist because __enter__ exists, even though that |
194 | 232 | # finally: |
195 | 233 | # engine.release(conn) |
196 | 234 | conn = yield from self._acquire().__await__() |
197 | return _ConnectionContextManager(self, conn) | |
235 | return _ConnectionContextManager(conn, self._loop) | |
198 | 236 | |
199 | 237 | async def __aenter__(self): |
200 | 238 | return self |
204 | 242 | await self.wait_closed() |
205 | 243 | |
206 | 244 | |
207 | _EngineContextManager = _PoolContextManager | |
208 | _EngineAcquireContextManager = _PoolAcquireContextManager | |
209 | ||
210 | ||
211 | 245 | class _ConnectionContextManager: |
212 | 246 | """Context manager. |
213 | 247 | |
223 | 257 | <block> |
224 | 258 | """ |
225 | 259 | |
226 | __slots__ = ('_engine', '_conn') | |
227 | ||
228 | def __init__(self, engine, conn): | |
229 | self._engine = engine | |
260 | __slots__ = ("_conn", "_loop") | |
261 | ||
262 | def __init__(self, conn: SAConnection, loop: asyncio.AbstractEventLoop): | |
230 | 263 | self._conn = conn |
264 | self._loop = loop | |
231 | 265 | |
232 | 266 | def __enter__(self): |
233 | 267 | return self._conn |
234 | 268 | |
235 | 269 | def __exit__(self, *args): |
236 | try: | |
237 | self._engine.release(self._conn) | |
238 | finally: | |
239 | self._engine = None | |
240 | self._conn = None | |
270 | asyncio.ensure_future(self._conn.close(), loop=self._loop) | |
271 | self._conn = None |
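`_ConnectionContextManager.__exit__` is synchronous, so it cannot await `conn.close()`; it schedules the close on the event loop with `asyncio.ensure_future` instead. A self-contained sketch of that pattern with a fake connection object (hypothetical, for illustration only):

```python
import asyncio

class FakeConn:
    # Stand-in for SAConnection (hypothetical).
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

class CloseOnExit:
    # Sync context manager that schedules async cleanup on exit.
    def __init__(self, conn):
        self._conn = conn

    def __enter__(self):
        return self._conn

    def __exit__(self, *args):
        asyncio.ensure_future(self._conn.close())
        self._conn = None

async def main():
    conn = FakeConn()
    with CloseOnExit(conn):
        assert not conn.closed
    await asyncio.sleep(0)  # give the scheduled close() a chance to run
    return conn.closed
```

Scheduling the cleanup means `close()` only runs once control returns to the loop, which is why the real code also clears transactions and cursors eagerly.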
3 | 3 | from sqlalchemy.sql import expression, sqltypes |
4 | 4 | |
5 | 5 | from . import exc |
6 | from .utils import SQLALCHEMY_VERSION | |
7 | ||
8 | if SQLALCHEMY_VERSION >= ["1", "4"]: | |
9 | from sqlalchemy.util import string_or_unprintable | |
10 | else: | |
11 | from sqlalchemy.sql.expression import ( | |
12 | _string_or_unprintable as string_or_unprintable, | |
13 | ) | |
6 | 14 | |
7 | 15 | |
8 | 16 | class RowProxy(Mapping): |
9 | __slots__ = ('_result_proxy', '_row', '_processors', '_keymap') | |
17 | __slots__ = ("_result_proxy", "_row", "_processors", "_keymap") | |
10 | 18 | |
11 | 19 | def __init__(self, result_proxy, row, processors, keymap): |
12 | 20 | """RowProxy objects are constructed by ResultProxy objects.""" |
41 | 49 | # raise |
42 | 50 | if index is None: |
43 | 51 | raise exc.InvalidRequestError( |
44 | "Ambiguous column name '%s' in result set! " | |
45 | "try 'use_labels' option on select statement." % key) | |
52 | f"Ambiguous column name {key!r} in result set! " | |
53 | f"try 'use_labels' option on select statement." | |
54 | ) | |
46 | 55 | if processor is not None: |
47 | 56 | return processor(self._row[index]) |
48 | 57 | else: |
77 | 86 | return repr(self.as_tuple()) |
78 | 87 | |
79 | 88 | |
80 | class ResultMetaData(object): | |
89 | class ResultMetaData: | |
81 | 90 | """Handle cursor.description, applying additional info from an execution |
82 | 91 | context.""" |
83 | 92 | |
96 | 105 | # `dbapi_type_map` property removed in SQLAlchemy 1.2+. |
97 | 106 | # Usage of `getattr` only needed for backward compatibility with |
98 | 107 | # older versions of SQLAlchemy. |
99 | typemap = getattr(dialect, 'dbapi_type_map', {}) | |
100 | ||
101 | assert dialect.case_sensitive, \ | |
102 | "Doesn't support case insensitive database connection" | |
108 | typemap = getattr(dialect, "dbapi_type_map", {}) | |
109 | ||
110 | assert ( | |
111 | dialect.case_sensitive | |
112 | ), "Doesn't support case insensitive database connection" | |
103 | 113 | |
104 | 114 | # high precedence key values. |
105 | 115 | primary_keymap = {} |
106 | 116 | |
107 | assert not dialect.description_encoding, \ | |
108 | "psycopg in py3k should not use this" | |
117 | assert ( | |
118 | not dialect.description_encoding | |
119 | ), "psycopg in py3k should not use this" | |
109 | 120 | |
110 | 121 | for i, rec in enumerate(cursor_description): |
111 | 122 | colname = rec[0] |
118 | 129 | name, obj, type_ = ( |
119 | 130 | map_column_name.get(colname, colname), |
120 | 131 | None, |
121 | map_type.get(colname, typemap.get(coltype, sqltypes.NULLTYPE)) | |
132 | map_type.get(colname, typemap.get(coltype, sqltypes.NULLTYPE)), | |
122 | 133 | ) |
123 | 134 | |
124 | 135 | processor = type_._cached_result_processor(dialect, coltype) |
131 | 142 | primary_keymap[i] = rec |
132 | 143 | |
133 | 144 | # populate primary keymap, looking for conflicts. |
134 | if primary_keymap.setdefault(name, rec) is not rec: | |
145 | if primary_keymap.setdefault(name, rec) != rec: | |
135 | 146 | # place a record that doesn't have the "index" - this |
136 | 147 | # is interpreted later as an AmbiguousColumnError, |
137 | 148 | # but only when actually accessed. Columns |
159 | 170 | map_column_name = {} |
160 | 171 | for elem in data_map: |
161 | 172 | name = elem[0] |
162 | priority_name = getattr(elem[2][0], 'key', name) | |
173 | priority_name = getattr(elem[2][0], "key", name) | |
163 | 174 | map_type[name] = elem[3] # type column |
164 | 175 | map_column_name[name] = priority_name |
165 | 176 | |
175 | 186 | # or colummn('name') constructs to ColumnElements, or after a |
176 | 187 | # pickle/unpickle roundtrip |
177 | 188 | elif isinstance(key, expression.ColumnElement): |
178 | if (key._label and key._label in map): | |
189 | if key._label and key._label in map: | |
179 | 190 | result = map[key._label] |
180 | elif (hasattr(key, 'key') and key.key in map): | |
191 | elif hasattr(key, "key") and key.key in map: | |
181 | 192 | # match is only on name. |
182 | 193 | result = map[key.key] |
183 | 194 | # search extra hard to make sure this |
193 | 204 | if result is None: |
194 | 205 | if raiseerr: |
195 | 206 | raise exc.NoSuchColumnError( |
196 | "Could not locate column in row for column '%s'" % | |
197 | expression._string_or_unprintable(key)) | |
207 | f"Could not locate column in row for column " | |
208 | f"{string_or_unprintable(key)!r}" | |
209 | ) | |
198 | 210 | else: |
199 | 211 | return None |
200 | 212 | else: |
289 | 301 | cursor_description = self.cursor.description |
290 | 302 | if cursor_description is not None: |
291 | 303 | self._metadata = ResultMetaData(self, cursor_description) |
292 | self._weak = weakref.ref(self, lambda wr: self.cursor.close()) | |
304 | self._weak = weakref.ref(self, lambda _: self.close()) | |
293 | 305 | else: |
294 | 306 | self.close() |
295 | self._weak = None | |
296 | 307 | |
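The change above attaches a `weakref.ref` callback so the cursor is closed once the `ResultProxy` is garbage-collected. A sketch of the same idea using `weakref.finalize`, whose callback holds only the cursor's bound method and not the owning object (toy classes, not aiopg's API):

```python
import gc
import weakref

closed = []

class Cursor:
    # Minimal stand-in for a database cursor (hypothetical).
    def close(self):
        closed.append(True)

class Result:
    def __init__(self, cursor):
        self.cursor = cursor
        # Runs cursor.close() once this Result becomes unreachable.
        self._finalizer = weakref.finalize(self, cursor.close)

r = Result(Cursor())
del r
gc.collect()
# closed == [True]
```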
297 | 308 | @property |
298 | 309 | def returns_rows(self): |
328 | 339 | * cursor.description is None. |
329 | 340 | """ |
330 | 341 | |
331 | if not self.closed: | |
332 | self.cursor.close() | |
333 | # allow consistent errors | |
334 | self._cursor = None | |
335 | self._weak = None | |
342 | if self._cursor is None: | |
343 | return | |
344 | ||
345 | if not self._cursor.closed: | |
346 | self._cursor.close() | |
347 | ||
348 | self._cursor = None | |
349 | self._weak = None | |
336 | 350 | |
337 | 351 | def __aiter__(self): |
338 | 352 | return self |
341 | 355 | ret = await self.fetchone() |
342 | 356 | if ret is not None: |
343 | 357 | return ret |
344 | else: | |
345 | raise StopAsyncIteration | |
358 | raise StopAsyncIteration | |
346 | 359 | |
347 | 360 | def _non_result(self): |
348 | 361 | if self._metadata is None: |
349 | 362 | raise exc.ResourceClosedError( |
350 | 363 | "This result object does not return rows. " |
351 | "It has been closed automatically.") | |
364 | "It has been closed automatically." | |
365 | ) | |
352 | 366 | else: |
353 | 367 | raise exc.ResourceClosedError("This result object is closed.") |
354 | 368 | |
357 | 371 | metadata = self._metadata |
358 | 372 | keymap = metadata._keymap |
359 | 373 | processors = metadata._processors |
360 | return [process_row(metadata, row, processors, keymap) | |
361 | for row in rows] | |
374 | return [process_row(metadata, row, processors, keymap) for row in rows] | |
362 | 375 | |
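`_process_rows` maps raw driver tuples through the per-column result processors collected in `ResultMetaData`. The core transformation can be sketched positionally (the real `RowProxy` applies processors lazily through a keymap):

```python
def process_row(row, processors):
    # Apply each column's type processor when one exists; pass the raw
    # value through otherwise.
    return tuple(
        proc(value) if proc is not None else value
        for proc, value in zip(processors, row)
    )

# e.g. a text column decoded with int() and one left untouched:
# process_row(("42", "abc"), (int, None)) == (42, "abc")
```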
363 | 376 | async def fetchall(self): |
364 | 377 | """Fetch all rows, just like DB-API cursor.fetchall().""" |
0 | 0 | from . import exc |
1 | 1 | |
2 | 2 | |
3 | class Transaction(object): | |
3 | class Transaction: | |
4 | 4 | """Represent a database transaction in progress. |
5 | 5 | |
6 | 6 | The Transaction object is procured by |
22 | 22 | See also: SAConnection.begin(), SAConnection.begin_twophase(), |
23 | 23 | SAConnection.begin_nested(). |
24 | 24 | """ |
25 | ||
26 | __slots__ = ("_connection", "_parent", "_is_active") | |
25 | 27 | |
26 | 28 | def __init__(self, connection, parent): |
27 | 29 | self._connection = connection |
82 | 84 | async def __aexit__(self, exc_type, exc_val, exc_tb): |
83 | 85 | if exc_type: |
84 | 86 | await self.rollback() |
85 | else: | |
86 | if self._is_active: | |
87 | await self.commit() | |
87 | elif self._is_active: | |
88 | await self.commit() | |
88 | 89 | |
89 | 90 | |
90 | 91 | class RootTransaction(Transaction): |
92 | __slots__ = () | |
91 | 93 | |
92 | 94 | def __init__(self, connection): |
93 | 95 | super().__init__(connection, None) |
108 | 110 | The interface is the same as that of Transaction class. |
109 | 111 | """ |
110 | 112 | |
111 | _savepoint = None | |
113 | __slots__ = ("_savepoint",) | |
112 | 114 | |
113 | 115 | def __init__(self, connection, parent): |
114 | super(NestedTransaction, self).__init__(connection, parent) | |
116 | super().__init__(connection, parent) | |
117 | self._savepoint = None | |
115 | 118 | |
116 | 119 | async def _do_rollback(self): |
117 | 120 | assert self._savepoint is not None, "Broken transaction logic" |
118 | 121 | if self._is_active: |
119 | 122 | await self._connection._rollback_to_savepoint_impl( |
120 | self._savepoint, self._parent) | |
123 | self._savepoint, self._parent | |
124 | ) | |
121 | 125 | |
122 | 126 | async def _do_commit(self): |
123 | 127 | assert self._savepoint is not None, "Broken transaction logic" |
124 | 128 | if self._is_active: |
125 | 129 | await self._connection._release_savepoint_impl( |
126 | self._savepoint, self._parent) | |
130 | self._savepoint, self._parent | |
131 | ) | |
127 | 132 | |
128 | 133 | |
129 | 134 | class TwoPhaseTransaction(Transaction): |
135 | 140 | The interface is the same as that of Transaction class |
136 | 141 | with the addition of the .prepare() method. |
137 | 142 | """ |
143 | ||
144 | __slots__ = ("_is_prepared", "_xid") | |
138 | 145 | |
139 | 146 | def __init__(self, connection, xid): |
140 | 147 | super().__init__(connection, None) |
159 | 166 | |
160 | 167 | async def _do_rollback(self): |
161 | 168 | await self._connection._rollback_twophase_impl( |
162 | self._xid, is_prepared=self._is_prepared) | |
169 | self._xid, is_prepared=self._is_prepared | |
170 | ) | |
163 | 171 | |
164 | 172 | async def _do_commit(self): |
165 | 173 | await self._connection._commit_twophase_impl( |
166 | self._xid, is_prepared=self._is_prepared) | |
174 | self._xid, is_prepared=self._is_prepared | |
175 | ) |
0 | import enum | |
1 | import uuid | |
2 | import warnings | |
3 | from abc import ABC | |
4 | ||
5 | import psycopg2 | |
6 | ||
7 | from aiopg.utils import _TransactionPointContextManager | |
8 | ||
9 | __all__ = ('IsolationLevel', 'Transaction') | |
10 | ||
11 | ||
12 | class IsolationCompiler(ABC): | |
13 | __slots__ = ('_isolation_level', '_readonly', '_deferrable') | |
14 | ||
15 | def __init__(self, isolation_level, readonly, deferrable): | |
16 | self._isolation_level = isolation_level | |
17 | self._readonly = readonly | |
18 | self._deferrable = deferrable | |
19 | ||
20 | @property | |
21 | def name(self): | |
22 | return self._isolation_level | |
23 | ||
24 | def savepoint(self, unique_id): | |
25 | return 'SAVEPOINT {}'.format(unique_id) | |
26 | ||
27 | def release_savepoint(self, unique_id): | |
28 | return 'RELEASE SAVEPOINT {}'.format(unique_id) | |
29 | ||
30 | def rollback_savepoint(self, unique_id): | |
31 | return 'ROLLBACK TO SAVEPOINT {}'.format(unique_id) | |
32 | ||
33 | def commit(self): | |
34 | return 'COMMIT' | |
35 | ||
36 | def rollback(self): | |
37 | return 'ROLLBACK' | |
38 | ||
39 | def begin(self): | |
40 | query = 'BEGIN' | |
41 | if self._isolation_level is not None: | |
42 | query += ( | |
43 | ' ISOLATION LEVEL {}'.format(self._isolation_level.upper()) | |
44 | ) | |
45 | ||
46 | if self._readonly: | |
47 | query += ' READ ONLY' | |
48 | ||
49 | if self._deferrable: | |
50 | query += ' DEFERRABLE' | |
51 | ||
52 | return query | |
53 | ||
54 | def __repr__(self): | |
55 | return self.name | |
56 | ||
57 | ||
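`IsolationCompiler.begin()` assembles the `BEGIN` statement from the isolation options. The same assembly as a pure function, for reference:

```python
def build_begin(isolation_level=None, readonly=False, deferrable=False):
    # Mirrors IsolationCompiler.begin() above as a plain function (sketch).
    query = "BEGIN"
    if isolation_level is not None:
        query += f" ISOLATION LEVEL {isolation_level.upper()}"
    if readonly:
        query += " READ ONLY"
    if deferrable:
        query += " DEFERRABLE"
    return query
```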
58 | class ReadCommittedCompiler(IsolationCompiler): | |
59 | __slots__ = () | |
60 | ||
61 | def __init__(self, readonly, deferrable): | |
62 | super().__init__('Read committed', readonly, deferrable) | |
63 | ||
64 | ||
65 | class RepeatableReadCompiler(IsolationCompiler): | |
66 | __slots__ = () | |
67 | ||
68 | def __init__(self, readonly, deferrable): | |
69 | super().__init__('Repeatable read', readonly, deferrable) | |
70 | ||
71 | ||
72 | class SerializableCompiler(IsolationCompiler): | |
73 | __slots__ = () | |
74 | ||
75 | def __init__(self, readonly, deferrable): | |
76 | super().__init__('Serializable', readonly, deferrable) | |
77 | ||
78 | ||
79 | class DefaultCompiler(IsolationCompiler): | |
80 | __slots__ = () | |
81 | ||
82 | def __init__(self, readonly, deferrable): | |
83 | super().__init__(None, readonly, deferrable) | |
84 | ||
85 | @property | |
86 | def name(self): | |
87 | return 'Default' | |
88 | ||
89 | ||
90 | class IsolationLevel(enum.Enum): | |
91 | serializable = SerializableCompiler | |
92 | repeatable_read = RepeatableReadCompiler | |
93 | read_committed = ReadCommittedCompiler | |
94 | default = DefaultCompiler | |
95 | ||
96 | def __call__(self, readonly, deferrable): | |
97 | return self.value(readonly, deferrable) | |
98 | ||
99 | ||
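`IsolationLevel` stores the compiler classes as enum values and defines `__call__` on members, so `IsolationLevel.serializable(readonly, deferrable)` constructs the matching compiler. The same trick with a toy class (hypothetical names):

```python
import enum

class Serializable:
    # Toy stand-in for SerializableCompiler.
    def __init__(self, readonly, deferrable):
        self.readonly = readonly
        self.deferrable = deferrable

class Level(enum.Enum):
    serializable = Serializable

    def __call__(self, readonly, deferrable):
        # Calling a member instantiates the class stored as its value.
        return self.value(readonly, deferrable)

compiler = Level.serializable(True, False)
# compiler.readonly is True
```

Classes (unlike plain functions) are kept as enum members rather than becoming methods, which is what makes this pattern work.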
100 | class Transaction: | |
101 | __slots__ = ('_cur', '_is_begin', '_isolation', '_unique_id') | |
102 | ||
103 | def __init__(self, cur, isolation_level, | |
104 | readonly=False, deferrable=False): | |
105 | self._cur = cur | |
106 | self._is_begin = False | |
107 | self._unique_id = None | |
108 | self._isolation = isolation_level(readonly, deferrable) | |
109 | ||
110 | @property | |
111 | def is_begin(self): | |
112 | return self._is_begin | |
113 | ||
114 | async def begin(self): | |
115 | if self._is_begin: | |
116 | raise psycopg2.ProgrammingError( | |
117 | 'You are trying to open a new transaction; use a savepoint instead') | |
118 | self._is_begin = True | |
119 | await self._cur.execute(self._isolation.begin()) | |
120 | return self | |
121 | ||
122 | async def commit(self): | |
123 | self._check_commit_rollback() | |
124 | await self._cur.execute(self._isolation.commit()) | |
125 | self._is_begin = False | |
126 | ||
127 | async def rollback(self): | |
128 | self._check_commit_rollback() | |
129 | await self._cur.execute(self._isolation.rollback()) | |
130 | self._is_begin = False | |
131 | ||
132 | async def rollback_savepoint(self): | |
133 | self._check_release_rollback() | |
134 | await self._cur.execute( | |
135 | self._isolation.rollback_savepoint(self._unique_id)) | |
136 | self._unique_id = None | |
137 | ||
138 | async def release_savepoint(self): | |
139 | self._check_release_rollback() | |
140 | await self._cur.execute( | |
141 | self._isolation.release_savepoint(self._unique_id)) | |
142 | self._unique_id = None | |
143 | ||
144 | async def savepoint(self): | |
145 | self._check_commit_rollback() | |
146 | if self._unique_id is not None: | |
147 | raise psycopg2.ProgrammingError('Previous savepoint is not released') | |
148 | ||
149 | self._unique_id = 's{}'.format(uuid.uuid1().hex) | |
150 | await self._cur.execute( | |
151 | self._isolation.savepoint(self._unique_id)) | |
152 | ||
153 | return self | |
154 | ||
155 | def point(self): | |
156 | return _TransactionPointContextManager(self.savepoint()) | |
157 | ||
158 | def _check_commit_rollback(self): | |
159 | if not self._is_begin: | |
160 | raise psycopg2.ProgrammingError('You are trying to commit ' | |
161 | 'a transaction that is not open') | |
162 | ||
163 | def _check_release_rollback(self): | |
164 | self._check_commit_rollback() | |
165 | if self._unique_id is None: | |
166 | raise psycopg2.ProgrammingError('Savepoint is not started') | |
167 | ||
168 | def __repr__(self): | |
169 | return "<{} transaction={} id={:#x}>".format( | |
170 | self.__class__.__name__, | |
171 | self._isolation, | |
172 | id(self) | |
173 | ) | |
174 | ||
175 | def __del__(self): | |
176 | if self._is_begin: | |
177 | warnings.warn( | |
178 | "You have not closed transaction {!r}".format(self), | |
179 | ResourceWarning) | |
180 | ||
181 | if self._unique_id is not None: | |
182 | warnings.warn( | |
183 | "You have not closed savepoint {!r}".format(self), | |
184 | ResourceWarning) | |
185 | ||
186 | async def __aenter__(self): | |
187 | return await self.begin() | |
188 | ||
189 | async def __aexit__(self, exc_type, exc, tb): | |
190 | if exc_type is not None: | |
191 | await self.rollback() | |
192 | else: | |
193 | await self.commit() |
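The removed `Transaction.__aenter__`/`__aexit__` pair above implements the usual commit-on-success, rollback-on-exception idiom. A standalone sketch of that pattern (`FakeTransaction` and `main` are hypothetical stand-ins for illustration, not part of aiopg):

```python
import asyncio


class FakeTransaction:
    # Mirrors Transaction's context-manager protocol: a clean exit from
    # the async-with block commits, an escaping exception rolls back.
    def __init__(self):
        self.state = "new"

    async def __aenter__(self):
        self.state = "begun"
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.state = "rolled back" if exc_type is not None else "committed"
        # Returning None lets the original exception propagate.


async def main():
    tx_ok = FakeTransaction()
    async with tx_ok:
        pass  # no error -> committed

    tx_err = FakeTransaction()
    try:
        async with tx_err:
            raise ValueError("boom")  # error -> rolled back
    except ValueError:
        pass
    return tx_ok.state, tx_err.state


print(asyncio.run(main()))
```

Note that `__aexit__` deliberately does not swallow the exception, matching the behavior of the removed class.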
0 | 0 | import asyncio |
1 | 1 | import sys |
2 | import warnings | |
3 | from collections.abc import Coroutine | |
4 | ||
5 | import psycopg2 | |
6 | ||
7 | from .log import logger | |
8 | ||
9 | try: | |
10 | ensure_future = asyncio.ensure_future | |
11 | except AttributeError: | |
12 | ensure_future = getattr(asyncio, 'async') | |
2 | from types import TracebackType | |
3 | from typing import ( | |
4 | Any, | |
5 | Awaitable, | |
6 | Callable, | |
7 | Coroutine, | |
8 | Generator, | |
9 | Generic, | |
10 | Optional, | |
11 | Type, | |
12 | TypeVar, | |
13 | Union, | |
14 | ) | |
13 | 15 | |
14 | 16 | if sys.version_info >= (3, 7, 0): |
15 | 17 | __get_running_loop = asyncio.get_running_loop |
16 | 18 | else: |
19 | ||
17 | 20 | def __get_running_loop() -> asyncio.AbstractEventLoop: |
18 | 21 | loop = asyncio.get_event_loop() |
19 | 22 | if not loop.is_running(): |
20 | raise RuntimeError('no running event loop') | |
23 | raise RuntimeError("no running event loop") | |
21 | 24 | return loop |
22 | 25 | |
23 | 26 | |
24 | def get_running_loop(is_warn: bool = False) -> asyncio.AbstractEventLoop: | |
25 | loop = __get_running_loop() | |
27 | def get_running_loop() -> asyncio.AbstractEventLoop: | |
28 | return __get_running_loop() | |
26 | 29 | |
27 | if is_warn: | |
28 | warnings.warn( | |
29 | 'aiopg always uses "aiopg.get_running_loop", ' | |
30 | 'look the documentation.', | |
31 | DeprecationWarning, | |
32 | stacklevel=3 | |
30 | ||
31 | def create_completed_future( | |
32 | loop: asyncio.AbstractEventLoop, | |
33 | ) -> "asyncio.Future[Any]": | |
34 | future = loop.create_future() | |
35 | future.set_result(None) | |
36 | return future | |
37 | ||
38 | ||
39 | _TObj = TypeVar("_TObj") | |
40 | _Release = Callable[[_TObj], Awaitable[None]] | |
41 | ||
42 | ||
43 | class _ContextManager(Coroutine[Any, None, _TObj], Generic[_TObj]): | |
44 | __slots__ = ("_coro", "_obj", "_release", "_release_on_exception") | |
45 | ||
46 | def __init__( | |
47 | self, | |
48 | coro: Coroutine[Any, None, _TObj], | |
49 | release: _Release[_TObj], | |
50 | release_on_exception: Optional[_Release[_TObj]] = None, | |
51 | ): | |
52 | self._coro = coro | |
53 | self._obj: Optional[_TObj] = None | |
54 | self._release = release | |
55 | self._release_on_exception = ( | |
56 | release if release_on_exception is None else release_on_exception | |
33 | 57 | ) |
34 | 58 | |
35 | if loop.get_debug(): | |
36 | logger.warning( | |
37 | 'aiopg always uses "aiopg.get_running_loop", ' | |
38 | 'look the documentation.', | |
39 | exc_info=True | |
40 | ) | |
59 | def send(self, value: Any) -> "Any": | |
60 | return self._coro.send(value) | |
41 | 61 | |
42 | return loop | |
62 | def throw( # type: ignore | |
63 | self, | |
64 | typ: Type[BaseException], | |
65 | val: Optional[Union[BaseException, object]] = None, | |
66 | tb: Optional[TracebackType] = None, | |
67 | ) -> Any: | |
68 | if val is None: | |
69 | return self._coro.throw(typ) | |
70 | if tb is None: | |
71 | return self._coro.throw(typ, val) | |
72 | return self._coro.throw(typ, val, tb) | |
73 | ||
74 | def close(self) -> None: | |
75 | self._coro.close() | |
76 | ||
77 | def __await__(self) -> Generator[Any, None, _TObj]: | |
78 | return self._coro.__await__() | |
79 | ||
80 | async def __aenter__(self) -> _TObj: | |
81 | self._obj = await self._coro | |
82 | assert self._obj | |
83 | return self._obj | |
84 | ||
85 | async def __aexit__( | |
86 | self, | |
87 | exc_type: Optional[Type[BaseException]], | |
88 | exc: Optional[BaseException], | |
89 | tb: Optional[TracebackType], | |
90 | ) -> None: | |
91 | if self._obj is None: | |
92 | return | |
93 | ||
94 | try: | |
95 | if exc_type is not None: | |
96 | await self._release_on_exception(self._obj) | |
97 | else: | |
98 | await self._release(self._obj) | |
99 | finally: | |
100 | self._obj = None | |
43 | 101 | |
44 | 102 | |
45 | class _ContextManager(Coroutine): | |
46 | __slots__ = ('_coro', '_obj') | |
47 | ||
48 | def __init__(self, coro): | |
49 | self._coro = coro | |
50 | self._obj = None | |
51 | ||
52 | def send(self, value): | |
53 | return self._coro.send(value) | |
54 | ||
55 | def throw(self, typ, val=None, tb=None): | |
56 | if val is None: | |
57 | return self._coro.throw(typ) | |
58 | elif tb is None: | |
59 | return self._coro.throw(typ, val) | |
60 | else: | |
61 | return self._coro.throw(typ, val, tb) | |
62 | ||
63 | def close(self): | |
64 | return self._coro.close() | |
65 | ||
66 | @property | |
67 | def gi_frame(self): | |
68 | return self._coro.gi_frame | |
69 | ||
70 | @property | |
71 | def gi_running(self): | |
72 | return self._coro.gi_running | |
73 | ||
74 | @property | |
75 | def gi_code(self): | |
76 | return self._coro.gi_code | |
77 | ||
78 | def __next__(self): | |
79 | return self.send(None) | |
80 | ||
81 | def __await__(self): | |
82 | resp = self._coro.__await__() | |
83 | return resp | |
84 | ||
85 | async def __aenter__(self): | |
86 | self._obj = await self._coro | |
87 | return self._obj | |
88 | ||
89 | async def __aexit__(self, exc_type, exc, tb): | |
90 | self._obj.close() | |
91 | self._obj = None | |
92 | ||
93 | ||
94 | class _SAConnectionContextManager(_ContextManager): | |
103 | class _IterableContextManager(_ContextManager[_TObj]): | |
95 | 104 | __slots__ = () |
96 | 105 | |
97 | def __aiter__(self): | |
106 | def __init__(self, *args: Any, **kwargs: Any): | |
107 | super().__init__(*args, **kwargs) | |
108 | ||
109 | def __aiter__(self) -> "_IterableContextManager[_TObj]": | |
98 | 110 | return self |
99 | 111 | |
100 | async def __anext__(self): | |
112 | async def __anext__(self) -> _TObj: | |
101 | 113 | if self._obj is None: |
102 | 114 | self._obj = await self._coro |
103 | 115 | |
104 | 116 | try: |
105 | return await self._obj.__anext__() | |
117 | return await self._obj.__anext__() # type: ignore | |
106 | 118 | except StopAsyncIteration: |
107 | self._obj.close() | |
108 | self._obj = None | |
119 | try: | |
120 | await self._release(self._obj) | |
121 | finally: | |
122 | self._obj = None | |
109 | 123 | raise |
110 | 124 | |
111 | 125 | |
112 | class _PoolContextManager(_ContextManager): | |
113 | __slots__ = () | |
126 | class ClosableQueue: | |
127 | """ | |
128 | Proxy object for an asyncio.Queue that is "closable" | |
114 | 129 | |
115 | async def __aexit__(self, exc_type, exc, tb): | |
116 | self._obj.close() | |
117 | await self._obj.wait_closed() | |
118 | self._obj = None | |
130 | When the ClosableQueue is closed, with an exception object as parameter, | |
131 | subsequent or ongoing attempts to read from the queue will result in | |
132 | that exception being raised. | |
119 | 133 | |
120 | ||
121 | class _TransactionPointContextManager(_ContextManager): | |
122 | __slots__ = () | |
123 | ||
124 | async def __aexit__(self, exc_type, exc_val, exc_tb): | |
125 | if exc_type is not None: | |
126 | await self._obj.rollback_savepoint() | |
127 | else: | |
128 | await self._obj.release_savepoint() | |
129 | ||
130 | self._obj = None | |
131 | ||
132 | ||
133 | class _TransactionBeginContextManager(_ContextManager): | |
134 | __slots__ = () | |
135 | ||
136 | async def __aexit__(self, exc_type, exc_val, exc_tb): | |
137 | if exc_type is not None: | |
138 | await self._obj.rollback() | |
139 | else: | |
140 | await self._obj.commit() | |
141 | ||
142 | self._obj = None | |
143 | ||
144 | ||
145 | class _TransactionContextManager(_ContextManager): | |
146 | __slots__ = () | |
147 | ||
148 | async def __aexit__(self, exc_type, exc, tb): | |
149 | if exc_type: | |
150 | await self._obj.rollback() | |
151 | else: | |
152 | if self._obj.is_active: | |
153 | await self._obj.commit() | |
154 | self._obj = None | |
155 | ||
156 | ||
157 | class _PoolAcquireContextManager(_ContextManager): | |
158 | __slots__ = ('_coro', '_obj', '_pool') | |
159 | ||
160 | def __init__(self, coro, pool): | |
161 | super().__init__(coro) | |
162 | self._pool = pool | |
163 | ||
164 | async def __aexit__(self, exc_type, exc, tb): | |
165 | await self._pool.release(self._obj) | |
166 | self._pool = None | |
167 | self._obj = None | |
168 | ||
169 | ||
170 | class _PoolConnectionContextManager: | |
171 | """Context manager. | |
172 | ||
173 | This enables the following idiom for acquiring and releasing a | |
174 | connection around a block: | |
175 | ||
176 | async with pool as conn: | |
177 | cur = await conn.cursor() | |
178 | ||
179 | while failing loudly when accidentally using: | |
180 | ||
181 | with pool: | |
182 | <block> | |
134 | Note: closing a queue with an exception still allows any pending items | |
135 | to be read. The close exception is raised only once all items | |
136 | are consumed. | |
183 | 137 | """ |
184 | 138 | |
185 | __slots__ = ('_pool', '_conn') | |
139 | __slots__ = ("_loop", "_queue", "_close_event") | |
186 | 140 | |
187 | def __init__(self, pool, conn): | |
188 | self._pool = pool | |
189 | self._conn = conn | |
141 | def __init__( | |
142 | self, | |
143 | queue: asyncio.Queue, # type: ignore | |
144 | loop: asyncio.AbstractEventLoop, | |
145 | ): | |
146 | self._loop = loop | |
147 | self._queue = queue | |
148 | self._close_event = loop.create_future() | |
149 | # suppress Future exception was never retrieved | |
150 | self._close_event.add_done_callback(lambda f: f.exception()) | |
190 | 151 | |
191 | def __enter__(self): | |
192 | assert self._conn | |
193 | return self._conn | |
152 | def close(self, exception: Exception) -> None: | |
153 | if self._close_event.done(): | |
154 | return | |
155 | self._close_event.set_exception(exception) | |
194 | 156 | |
195 | def __exit__(self, exc_type, exc_val, exc_tb): | |
157 | async def get(self) -> Any: | |
158 | if self._close_event.done(): | |
159 | try: | |
160 | return self._queue.get_nowait() | |
161 | except asyncio.QueueEmpty: | |
162 | return self._close_event.result() | |
163 | ||
164 | get = asyncio.ensure_future(self._queue.get(), loop=self._loop) | |
196 | 165 | try: |
197 | self._pool.release(self._conn) | |
166 | await asyncio.wait( | |
167 | [get, self._close_event], return_when=asyncio.FIRST_COMPLETED | |
168 | ) | |
169 | except asyncio.CancelledError: | |
170 | get.cancel() | |
171 | raise | |
172 | ||
173 | if get.done(): | |
174 | return get.result() | |
175 | ||
176 | try: | |
177 | return self._close_event.result() | |
198 | 178 | finally: |
199 | self._pool = None | |
200 | self._conn = None | |
179 | get.cancel() | |
201 | 180 | |
202 | async def __aenter__(self): | |
203 | assert not self._conn | |
204 | self._conn = await self._pool.acquire() | |
205 | return self._conn | |
181 | def empty(self) -> bool: | |
182 | return self._queue.empty() | |
206 | 183 | |
207 | async def __aexit__(self, exc_type, exc_val, exc_tb): | |
208 | try: | |
209 | await self._pool.release(self._conn) | |
210 | finally: | |
211 | self._pool = None | |
212 | self._conn = None | |
184 | def qsize(self) -> int: | |
185 | return self._queue.qsize() | |
213 | 186 | |
187 | def get_nowait(self) -> Any: | |
188 | if self._close_event.done(): | |
189 | try: | |
190 | return self._queue.get_nowait() | |
191 | except asyncio.QueueEmpty: | |
192 | return self._close_event.result() | |
214 | 193 | |
215 | class _PoolCursorContextManager: | |
216 | """Context manager. | |
217 | ||
218 | This enables the following idiom for acquiring and releasing a | |
219 | cursor around a block: | |
220 | ||
221 | async with pool.cursor() as cur: | |
222 | await cur.execute("SELECT 1") | |
223 | ||
224 | while failing loudly when accidentally using: | |
225 | ||
226 | with pool: | |
227 | <block> | |
228 | """ | |
229 | ||
230 | __slots__ = ('_pool', '_conn', '_cur') | |
231 | ||
232 | def __init__(self, pool, conn, cur): | |
233 | self._pool = pool | |
234 | self._conn = conn | |
235 | self._cur = cur | |
236 | ||
237 | def __enter__(self): | |
238 | return self._cur | |
239 | ||
240 | def __exit__(self, *args): | |
241 | try: | |
242 | self._cur.close() | |
243 | except psycopg2.ProgrammingError: | |
244 | # seen instances where the cursor fails to close: | |
245 | # https://github.com/aio-libs/aiopg/issues/364 | |
246 | # We close it here so we don't return a bad connection to the pool | |
247 | self._conn.close() | |
248 | raise | |
249 | finally: | |
250 | try: | |
251 | self._pool.release(self._conn) | |
252 | finally: | |
253 | self._pool = None | |
254 | self._conn = None | |
255 | self._cur = None | |
194 | return self._queue.get_nowait() |
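The new `ClosableQueue` drains pending items before surfacing the close exception. A simplified, self-contained model of that semantics (`MiniClosableQueue` is a hypothetical stand-in, not aiopg's class, and omits the future-based wakeup of an in-flight `get`):

```python
import asyncio


class MiniClosableQueue:
    # close() records an exception, but items already queued remain
    # readable; the exception is raised only once the queue is empty.
    def __init__(self):
        self._queue = asyncio.Queue()
        self._exc = None

    def put_nowait(self, item):
        self._queue.put_nowait(item)

    def close(self, exception):
        if self._exc is None:
            self._exc = exception

    async def get(self):
        if self._exc is not None:
            try:
                return self._queue.get_nowait()
            except asyncio.QueueEmpty:
                raise self._exc
        return await self._queue.get()


async def main():
    q = MiniClosableQueue()
    q.put_nowait("pending")
    q.close(RuntimeError("connection lost"))
    first = await q.get()  # pending item is still delivered
    try:
        await q.get()      # queue drained -> close exception raised
    except RuntimeError as e:
        return first, str(e)


print(asyncio.run(main()))
```

The real implementation additionally races `queue.get()` against a close future with `asyncio.wait(..., return_when=FIRST_COMPLETED)` so that an already-waiting reader is woken when the connection closes.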
0 | 0 | Metadata-Version: 2.1 |
1 | 1 | Name: aiopg |
2 | Version: 1.2.0b2 | |
2 | Version: 1.3.2b1 | |
3 | 3 | Summary: Postgres integration with asyncio. |
4 | 4 | Home-page: https://aiopg.readthedocs.io |
5 | 5 | Author: Andrew Svetlov |
90 | 90 | loop.run_until_complete(go()) |
91 | 91 | |
92 | 92 | .. _PostgreSQL: http://www.postgresql.org/ |
93 | .. _asyncio: http://docs.python.org/3.4/library/asyncio.html | |
93 | .. _asyncio: https://docs.python.org/3/library/asyncio.html | |
94 | 94 | |
95 | 95 | Please use:: |
96 | 96 | |
102 | 102 | |
103 | 103 | Changelog |
104 | 104 | --------- |
105 | ||
106 | 1.3.2b1 (2021-07-11) | |
107 | ^^^^^^^^^^^^^^^^^^^^ | |
108 | ||
109 | * Fix compatibility with SQLAlchemy >= 1.4 `#870 <https://github.com/aio-libs/aiopg/pull/870>`_ | |
110 | ||
111 | ||
112 | 1.3.1 (2021-07-08) | |
113 | ^^^^^^^^^^^^^^^^^^ | |
114 | ||
115 | ||
116 | 1.3.1b2 (2021-07-06) | |
117 | ^^^^^^^^^^^^^^^^^^^^ | |
118 | ||
119 | * Suppress "Future exception was never retrieved" `#862 <https://github.com/aio-libs/aiopg/pull/862>`_ | |
120 | ||
121 | ||
122 | 1.3.1b1 (2021-07-05) | |
123 | ^^^^^^^^^^^^^^^^^^^^ | |
124 | ||
125 | * Fix ClosableQueue.get on cancellation, close it on Connection.close `#859 <https://github.com/aio-libs/aiopg/pull/859>`_ | |
126 | ||
127 | ||
128 | 1.3.0 (2021-06-30) | |
129 | ^^^^^^^^^^^^^^^^^^ | |
130 | ||
131 | ||
132 | 1.3.0b4 (2021-06-28) | |
133 | ^^^^^^^^^^^^^^^^^^^^ | |
134 | ||
135 | * Fix "Unable to detect disconnect when using NOTIFY/LISTEN" `#559 <https://github.com/aio-libs/aiopg/pull/559>`_ | |
136 | ||
137 | ||
138 | 1.3.0b3 (2021-04-03) | |
139 | ^^^^^^^^^^^^^^^^^^^^ | |
140 | ||
141 | * Reformat using black `#814 <https://github.com/aio-libs/aiopg/pull/814>`_ | |
142 | ||
143 | ||
144 | 1.3.0b2 (2021-04-02) | |
145 | ^^^^^^^^^^^^^^^^^^^^ | |
146 | ||
147 | * Type annotations `#813 <https://github.com/aio-libs/aiopg/pull/813>`_ | |
148 | ||
149 | ||
150 | 1.3.0b1 (2021-03-30) | |
151 | ^^^^^^^^^^^^^^^^^^^^ | |
152 | ||
153 | * Raise ResourceClosedError if we try to open a cursor on a closed SAConnection `#811 <https://github.com/aio-libs/aiopg/pull/811>`_ | |
154 | ||
155 | ||
156 | 1.3.0b0 (2021-03-25) | |
157 | ^^^^^^^^^^^^^^^^^^^^ | |
158 | ||
159 | * Fix compatibility with SA 1.4 for IN statement `#806 <https://github.com/aio-libs/aiopg/pull/806>`_ | |
160 | ||
161 | ||
162 | 1.2.1 (2021-03-23) | |
163 | ^^^^^^^^^^^^^^^^^^ | |
164 | ||
165 | * Pop loop in connection init due to backward compatibility `#808 <https://github.com/aio-libs/aiopg/pull/808>`_ | |
166 | ||
167 | ||
168 | 1.2.0b4 (2021-03-23) | |
169 | ^^^^^^^^^^^^^^^^^^^^ | |
170 | ||
171 | * Set max supported sqlalchemy version `#805 <https://github.com/aio-libs/aiopg/pull/805>`_ | |
172 | ||
173 | ||
174 | 1.2.0b3 (2021-03-22) | |
175 | ^^^^^^^^^^^^^^^^^^^^ | |
176 | ||
177 | * Don't run ROLLBACK when the connection is closed `#778 <https://github.com/aio-libs/aiopg/pull/778>`_ | |
178 | ||
179 | * Multiple cursors support `#801 <https://github.com/aio-libs/aiopg/pull/801>`_ | |
180 | ||
105 | 181 | |
106 | 182 | 1.2.0b2 (2020-12-21) |
107 | 183 | ^^^^^^^^^^^^^^^^^^^^ |
6 | 6 | setup.py |
7 | 7 | aiopg/__init__.py |
8 | 8 | aiopg/connection.py |
9 | aiopg/cursor.py | |
10 | 9 | aiopg/log.py |
11 | 10 | aiopg/pool.py |
12 | aiopg/transaction.py | |
13 | 11 | aiopg/utils.py |
14 | 12 | aiopg.egg-info/PKG-INFO |
15 | 13 | aiopg.egg-info/SOURCES.txt |
21 | 19 | aiopg/sa/engine.py |
22 | 20 | aiopg/sa/exc.py |
23 | 21 | aiopg/sa/result.py |
24 | aiopg/sa/transaction.py⏎ | |
22 | aiopg/sa/transaction.py | |
23 | aiopg/sa/utils.py⏎ |
0 | async_timeout<4.0,>=3.0 | |
0 | 1 | psycopg2-binary>=2.8.4 |
1 | async_timeout<4.0,>=3.0 | |
2 | 2 | |
3 | 3 | [sa] |
4 | sqlalchemy[postgresql_psycopg2binary]>=1.1 | |
4 | sqlalchemy[postgresql_psycopg2binary]<1.5,>=1.3 |
0 | import os | |
1 | 0 | import re |
1 | from pathlib import Path | |
2 | 2 | |
3 | 3 | from setuptools import setup, find_packages |
4 | 4 | |
5 | install_requires = ['psycopg2-binary>=2.8.4', 'async_timeout>=3.0,<4.0'] | |
6 | extras_require = {'sa': ['sqlalchemy[postgresql_psycopg2binary]>=1.1']} | |
5 | install_requires = ["psycopg2-binary>=2.8.4", "async_timeout>=3.0,<4.0"] | |
6 | extras_require = {"sa": ["sqlalchemy[postgresql_psycopg2binary]>=1.3,<1.5"]} | |
7 | 7 | |
8 | 8 | |
9 | def read(f): | |
10 | return open(os.path.join(os.path.dirname(__file__), f)).read().strip() | |
9 | def read(*parts): | |
10 | return Path(__file__).resolve().parent.joinpath(*parts).read_text().strip() | |
11 | 11 | |
12 | 12 | |
13 | def get_maintainers(path='MAINTAINERS.txt'): | |
14 | with open(os.path.join(os.path.dirname(__file__), path)) as f: | |
15 | return ', '.join(x.strip().strip('*').strip() for x in f.readlines()) | |
13 | def get_maintainers(path="MAINTAINERS.txt"): | |
14 | return ", ".join(x.strip().strip("*").strip() for x in read(path).splitlines()) | |
16 | 15 | |
17 | 16 | |
18 | 17 | def read_version(): |
19 | regexp = re.compile(r"^__version__\W*=\W*'([\d.abrc]+)'") | |
20 | init_py = os.path.join(os.path.dirname(__file__), 'aiopg', '__init__.py') | |
21 | with open(init_py) as f: | |
22 | for line in f: | |
23 | match = regexp.match(line) | |
24 | if match is not None: | |
25 | return match.group(1) | |
26 | else: | |
27 | raise RuntimeError('Cannot find version in aiopg/__init__.py') | |
18 | regexp = re.compile(r"^__version__\W*=\W*\"([\d.abrc]+)\"") | |
19 | for line in read("aiopg", "__init__.py").splitlines(): | |
20 | match = regexp.match(line) | |
21 | if match is not None: | |
22 | return match.group(1) | |
23 | ||
24 | raise RuntimeError("Cannot find version in aiopg/__init__.py") | |
28 | 25 | |
29 | 26 | |
30 | def read_changelog(path='CHANGES.txt'): | |
31 | return 'Changelog\n---------\n\n{}'.format(read(path)) | |
27 | def read_changelog(path="CHANGES.txt"): | |
28 | return f"Changelog\n---------\n\n{read(path)}" | |
32 | 29 | |
33 | 30 | |
34 | 31 | classifiers = [ |
35 | 'License :: OSI Approved :: BSD License', | |
36 | 'Intended Audience :: Developers', | |
37 | 'Programming Language :: Python :: 3', | |
38 | 'Programming Language :: Python :: 3 :: Only', | |
39 | 'Programming Language :: Python :: 3.6', | |
40 | 'Programming Language :: Python :: 3.7', | |
41 | 'Programming Language :: Python :: 3.8', | |
42 | 'Programming Language :: Python :: 3.9', | |
43 | 'Operating System :: POSIX', | |
44 | 'Operating System :: MacOS :: MacOS X', | |
45 | 'Operating System :: Microsoft :: Windows', | |
46 | 'Environment :: Web Environment', | |
47 | 'Development Status :: 5 - Production/Stable', | |
48 | 'Topic :: Database', | |
49 | 'Topic :: Database :: Front-Ends', | |
50 | 'Framework :: AsyncIO', | |
32 | "License :: OSI Approved :: BSD License", | |
33 | "Intended Audience :: Developers", | |
34 | "Programming Language :: Python :: 3", | |
35 | "Programming Language :: Python :: 3 :: Only", | |
36 | "Programming Language :: Python :: 3.6", | |
37 | "Programming Language :: Python :: 3.7", | |
38 | "Programming Language :: Python :: 3.8", | |
39 | "Programming Language :: Python :: 3.9", | |
40 | "Operating System :: POSIX", | |
41 | "Operating System :: MacOS :: MacOS X", | |
42 | "Operating System :: Microsoft :: Windows", | |
43 | "Environment :: Web Environment", | |
44 | "Development Status :: 5 - Production/Stable", | |
45 | "Topic :: Database", | |
46 | "Topic :: Database :: Front-Ends", | |
47 | "Framework :: AsyncIO", | |
51 | 48 | ] |
52 | 49 | |
53 | 50 | setup( |
54 | name='aiopg', | |
51 | name="aiopg", | |
55 | 52 | version=read_version(), |
56 | description='Postgres integration with asyncio.', | |
57 | long_description='\n\n'.join((read('README.rst'), read_changelog())), | |
58 | long_description_content_type='text/x-rst', | |
53 | description="Postgres integration with asyncio.", | |
54 | long_description="\n\n".join((read("README.rst"), read_changelog())), | |
55 | long_description_content_type="text/x-rst", | |
59 | 56 | classifiers=classifiers, |
60 | platforms=['macOS', 'POSIX', 'Windows'], | |
61 | author='Andrew Svetlov', | |
62 | python_requires='>=3.6', | |
57 | platforms=["macOS", "POSIX", "Windows"], | |
58 | author="Andrew Svetlov", | |
59 | python_requires=">=3.6", | |
63 | 60 | project_urls={ |
64 | 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby', | |
65 | 'CI: GA': 'https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI', | |
66 | 'Coverage: codecov': 'https://codecov.io/gh/aio-libs/aiopg', | |
67 | 'Docs: RTD': 'https://aiopg.readthedocs.io', | |
68 | 'GitHub: issues': 'https://github.com/aio-libs/aiopg/issues', | |
69 | 'GitHub: repo': 'https://github.com/aio-libs/aiopg', | |
61 | "Chat: Gitter": "https://gitter.im/aio-libs/Lobby", | |
62 | "CI: GA": "https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI", | |
63 | "Coverage: codecov": "https://codecov.io/gh/aio-libs/aiopg", | |
64 | "Docs: RTD": "https://aiopg.readthedocs.io", | |
65 | "GitHub: issues": "https://github.com/aio-libs/aiopg/issues", | |
66 | "GitHub: repo": "https://github.com/aio-libs/aiopg", | |
70 | 67 | }, |
71 | author_email='andrew.svetlov@gmail.com', | |
68 | author_email="andrew.svetlov@gmail.com", | |
72 | 69 | maintainer=get_maintainers(), |
73 | maintainer_email='virmir49@gmail.com', | |
74 | url='https://aiopg.readthedocs.io', | |
75 | download_url='https://pypi.python.org/pypi/aiopg', | |
76 | license='BSD', | |
70 | maintainer_email="virmir49@gmail.com", | |
71 | url="https://aiopg.readthedocs.io", | |
72 | download_url="https://pypi.python.org/pypi/aiopg", | |
73 | license="BSD", | |
77 | 74 | packages=find_packages(), |
78 | 75 | install_requires=install_requires, |
79 | 76 | extras_require=extras_require, |
80 | include_package_data=True | |
77 | include_package_data=True, | |
81 | 78 | ) |