aiopg / 930dc93
New upstream version 1.3.3 (Piotr Ożarowski, 2 years ago)
19 changed files with 2906 additions and 2218 deletions.
0 1.3.3 (2021-11-01)
1 ^^^^^^^^^^^^^^^^^^
2
3 * Support async-timeout 4.0+
4
5
6 1.3.2 (2021-10-07)
7 ^^^^^^^^^^^^^^^^^^
8
9
10 1.3.2b2 (2021-10-07)
11 ^^^^^^^^^^^^^^^^^^^^
12
13 * Respect use_labels for select statement `#882 <https://github.com/aio-libs/aiopg/pull/882>`_
14
15
16 1.3.2b1 (2021-07-11)
17 ^^^^^^^^^^^^^^^^^^^^
18
19 * Fix compatibility with SQLAlchemy >= 1.4 `#870 <https://github.com/aio-libs/aiopg/pull/870>`_
20
21
22 1.3.1 (2021-07-08)
23 ^^^^^^^^^^^^^^^^^^
24
25
26 1.3.1b2 (2021-07-06)
27 ^^^^^^^^^^^^^^^^^^^^
28
29 * Suppress "Future exception was never retrieved" `#862 <https://github.com/aio-libs/aiopg/pull/862>`_
30
31
32 1.3.1b1 (2021-07-05)
33 ^^^^^^^^^^^^^^^^^^^^
34
35 * Fix ClosableQueue.get on cancellation, close it on Connection.close `#859 <https://github.com/aio-libs/aiopg/pull/859>`_
36
37
38 1.3.0 (2021-06-30)
39 ^^^^^^^^^^^^^^^^^^
40
41
42 1.3.0b4 (2021-06-28)
43 ^^^^^^^^^^^^^^^^^^^^
44
45 * Fix "Unable to detect disconnect when using NOTIFY/LISTEN" `#559 <https://github.com/aio-libs/aiopg/pull/559>`_
46
47
48 1.3.0b3 (2021-04-03)
49 ^^^^^^^^^^^^^^^^^^^^
50
51 * Reformat using black `#814 <https://github.com/aio-libs/aiopg/pull/814>`_
52
53
54 1.3.0b2 (2021-04-02)
55 ^^^^^^^^^^^^^^^^^^^^
56
57 * Type annotations `#813 <https://github.com/aio-libs/aiopg/pull/813>`_
58
59
60 1.3.0b1 (2021-03-30)
61 ^^^^^^^^^^^^^^^^^^^^
62
63 * Raise ResourceClosedError if we try to open a cursor on a closed SAConnection `#811 <https://github.com/aio-libs/aiopg/pull/811>`_
64
65
66 1.3.0b0 (2021-03-25)
67 ^^^^^^^^^^^^^^^^^^^^
68
69 * Fix compatibility with SA 1.4 for IN statement `#806 <https://github.com/aio-libs/aiopg/pull/806>`_
70
71
72 1.2.1 (2021-03-23)
73 ^^^^^^^^^^^^^^^^^^
74
75 * Pop loop in connection init due to backward compatibility `#808 <https://github.com/aio-libs/aiopg/pull/808>`_
76
77
78 1.2.0b4 (2021-03-23)
79 ^^^^^^^^^^^^^^^^^^^^
80
81 * Set max supported sqlalchemy version `#805 <https://github.com/aio-libs/aiopg/pull/805>`_
82
83
84 1.2.0b3 (2021-03-22)
85 ^^^^^^^^^^^^^^^^^^^^
86
87 * Don't run ROLLBACK when the connection is closed `#778 <https://github.com/aio-libs/aiopg/pull/778>`_
88
89 * Multiple cursors support `#801 <https://github.com/aio-libs/aiopg/pull/801>`_
90
91
092 1.2.0b2 (2020-12-21)
193 ^^^^^^^^^^^^^^^^^^^^
294
PKG-INFO (+483, -387)
00 Metadata-Version: 2.1
11 Name: aiopg
2 Version: 1.2.0b2
2 Version: 1.3.3
33 Summary: Postgres integration with asyncio.
44 Home-page: https://aiopg.readthedocs.io
55 Author: Andrew Svetlov
1414 Project-URL: Docs: RTD, https://aiopg.readthedocs.io
1515 Project-URL: GitHub: issues, https://github.com/aio-libs/aiopg/issues
1616 Project-URL: GitHub: repo, https://github.com/aio-libs/aiopg
17 Description: aiopg
18 =====
19 .. image:: https://github.com/aio-libs/aiopg/workflows/CI/badge.svg
20 :target: https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI
21 .. image:: https://codecov.io/gh/aio-libs/aiopg/branch/master/graph/badge.svg
22 :target: https://codecov.io/gh/aio-libs/aiopg
23 .. image:: https://badges.gitter.im/Join%20Chat.svg
24 :target: https://gitter.im/aio-libs/Lobby
25 :alt: Chat on Gitter
26
27 **aiopg** is a library for accessing a PostgreSQL_ database
28 from the asyncio_ (PEP-3156/tulip) framework. It wraps
29 asynchronous features of the Psycopg database driver.
30
31 Example
32 -------
33
34 .. code:: python
35
36 import asyncio
37 import aiopg
38
39 dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'
40
41 async def go():
42 pool = await aiopg.create_pool(dsn)
43 async with pool.acquire() as conn:
44 async with conn.cursor() as cur:
45 await cur.execute("SELECT 1")
46 ret = []
47 async for row in cur:
48 ret.append(row)
49 assert ret == [(1,)]
50
51 loop = asyncio.get_event_loop()
52 loop.run_until_complete(go())
53
54
55 Example of SQLAlchemy optional integration
56 ------------------------------------------
57
58 .. code:: python
59
60 import asyncio
61 from aiopg.sa import create_engine
62 import sqlalchemy as sa
63
64 metadata = sa.MetaData()
65
66 tbl = sa.Table('tbl', metadata,
67 sa.Column('id', sa.Integer, primary_key=True),
68 sa.Column('val', sa.String(255)))
69
70 async def create_table(engine):
71 async with engine.acquire() as conn:
72 await conn.execute('DROP TABLE IF EXISTS tbl')
73 await conn.execute('''CREATE TABLE tbl (
74 id serial PRIMARY KEY,
75 val varchar(255))''')
76
77 async def go():
78 async with create_engine(user='aiopg',
79 database='aiopg',
80 host='127.0.0.1',
81 password='passwd') as engine:
82
83 async with engine.acquire() as conn:
84 await conn.execute(tbl.insert().values(val='abc'))
85
86 async for row in conn.execute(tbl.select()):
87 print(row.id, row.val)
88
89 loop = asyncio.get_event_loop()
90 loop.run_until_complete(go())
91
92 .. _PostgreSQL: http://www.postgresql.org/
93 .. _asyncio: http://docs.python.org/3.4/library/asyncio.html
94
95 Please use::
96
97 $ make test
98
99 for executing the project's unittests.
100 See https://aiopg.readthedocs.io/en/stable/contributing.html for details
101 on how to set up your environment to run the tests.
102
103 Changelog
104 ---------
105
106 1.2.0b2 (2020-12-21)
107 ^^^^^^^^^^^^^^^^^^^^
108
109 * Fix IsolationLevel.read_committed and introduce IsolationLevel.default `#770 <https://github.com/aio-libs/aiopg/pull/770>`_
110
111 * Fix python 3.8 warnings in tests `#771 <https://github.com/aio-libs/aiopg/pull/771>`_
112
113
114 1.2.0b1 (2020-12-16)
115 ^^^^^^^^^^^^^^^^^^^^
116
117 * Deprecate blocking connection.cancel() method `#570 <https://github.com/aio-libs/aiopg/pull/570>`_
118
119
120 1.2.0b0 (2020-12-15)
121 ^^^^^^^^^^^^^^^^^^^^
122
123 * Implement timeout on acquiring connection from pool `#766 <https://github.com/aio-libs/aiopg/pull/766>`_
124
125
126 1.1.0 (2020-12-10)
127 ^^^^^^^^^^^^^^^^^^
128
129
130 1.1.0b2 (2020-12-09)
131 ^^^^^^^^^^^^^^^^^^^^
132
133 * Added missing slots to context managers `#763 <https://github.com/aio-libs/aiopg/pull/763>`_
134
135
136 1.1.0b1 (2020-12-07)
137 ^^^^^^^^^^^^^^^^^^^^
138
139 * Fix on_connect multiple call on acquire `#552 <https://github.com/aio-libs/aiopg/pull/552>`_
140
141 * Fix python 3.8 warnings `#622 <https://github.com/aio-libs/aiopg/pull/642>`_
142
143 * Bump minimum psycopg version to 2.8.4 `#754 <https://github.com/aio-libs/aiopg/pull/754>`_
144
145 * Fix Engine.release method to release connection in any way `#756 <https://github.com/aio-libs/aiopg/pull/756>`_
146
147
148 1.0.0 (2019-09-20)
149 ^^^^^^^^^^^^^^^^^^
150
151 * Removal of an asynchronous call in favor of issue #550
152
153 * Major documentation edits and minor bug fixes #534
154
155
156 0.16.0 (2019-01-25)
157 ^^^^^^^^^^^^^^^^^^^
158
159 * Fix select priority name `#525 <https://github.com/aio-libs/aiopg/issues/525>`_
160
161 * Rename `psycopg2` to `psycopg2-binary` to fix deprecation warning `#507 <https://github.com/aio-libs/aiopg/issues/507>`_
162
163 * Fix `#189 <https://github.com/aio-libs/aiopg/issues/189>`_ hstore when using ReadDictCursor `#512 <https://github.com/aio-libs/aiopg/issues/512>`_
164
165 * close cannot be used while an asynchronous query is underway `#452 <https://github.com/aio-libs/aiopg/issues/452>`_
166
167 * Allow transaction_mode in sqlalchemy adapter trx begin `#498 <https://github.com/aio-libs/aiopg/issues/498>`_
168
169
170 0.15.0 (2018-08-14)
171 ^^^^^^^^^^^^^^^^^^^
172
173 * Support Python 3.7 `#437 <https://github.com/aio-libs/aiopg/issues/437>`_
174
175
176 0.14.0 (2018-05-10)
177 ^^^^^^^^^^^^^^^^^^^
178
179 * Add ``get_dialect`` func to have ability to pass ``json_serializer`` `#451 <https://github.com/aio-libs/aiopg/issues/451>`_
180
181
182 0.13.2 (2018-01-03)
183 ^^^^^^^^^^^^^^^^^^^
184
185 * Fixed compatibility with SQLAlchemy 1.2.0 `#412 <https://github.com/aio-libs/aiopg/issues/412>`_
186
187 * Added support for transaction isolation levels `#219 <https://github.com/aio-libs/aiopg/issues/219>`_
188
189
190 0.13.1 (2017-09-10)
191 ^^^^^^^^^^^^^^^^^^^
192
193 * Added connection pool recycling logic `#373 <https://github.com/aio-libs/aiopg/issues/373>`_
194
195
196 0.13.0 (2016-12-02)
197 ^^^^^^^^^^^^^^^^^^^
198
199 * Add `async with` support to `.begin_nested()` `#208 <https://github.com/aio-libs/aiopg/issues/208>`_
200
201 * Fix connection.cancel() `#212 <https://github.com/aio-libs/aiopg/issues/212>`_ `#223 <https://github.com/aio-libs/aiopg/issues/223>`_
202
203 * Raise informative error on unexpected connection closing `#191 <https://github.com/aio-libs/aiopg/issues/191>`_
204
205 * Added support for python types columns issues `#217 <https://github.com/aio-libs/aiopg/issues/217>`_
206
207 * Added support for default values in SA table issues `#206 <https://github.com/aio-libs/aiopg/issues/206>`_
208
209
210 0.12.0 (2016-10-09)
211 ^^^^^^^^^^^^^^^^^^^
212
213 * Add an on_connect callback parameter to pool `#141 <https://github.com/aio-libs/aiopg/issues/141>`_
214
215 * Fixed connection to work under both windows and posix based systems `#142 <https://github.com/aio-libs/aiopg/issues/142>`_
216
217
218 0.11.0 (2016-09-12)
219 ^^^^^^^^^^^^^^^^^^^
220
221 * Immediately remove callbacks from a closed file descriptor `#139 <https://github.com/aio-libs/aiopg/issues/139>`_
222
223 * Drop Python 3.3 support
224
225
226 0.10.0 (2016-07-16)
227 ^^^^^^^^^^^^^^^^^^^
228
229 * Refactor tests to use dockerized Postgres server `#107 <https://github.com/aio-libs/aiopg/issues/107>`_
230
231 * Reduce default pool minsize to 1 `#106 <https://github.com/aio-libs/aiopg/issues/106>`_
232
233 * Explicitly enumerate packages in setup.py `#85 <https://github.com/aio-libs/aiopg/issues/85>`_
234
235 * Remove expired connections from pool on acquire `#116 <https://github.com/aio-libs/aiopg/issues/116>`_
236
237 * Don't crash when Connection is GC'ed `#124 <https://github.com/aio-libs/aiopg/issues/124>`_
238
239 * Use loop.create_future() if available
240
241
242 0.9.2 (2016-01-31)
243 ^^^^^^^^^^^^^^^^^^
244
245 * Make pool.release return asyncio.Future, so we can wait on it in
246 `__aexit__` `#102 <https://github.com/aio-libs/aiopg/issues/102>`_
247
248 * Add support for uuid type `#103 <https://github.com/aio-libs/aiopg/issues/103>`_
249
250
251 0.9.1 (2016-01-17)
252 ^^^^^^^^^^^^^^^^^^
253
254 * Documentation update `#101 <https://github.com/aio-libs/aiopg/issues/101>`_
255
256
257 0.9.0 (2016-01-14)
258 ^^^^^^^^^^^^^^^^^^
259
260 * Add async context managers for transactions `#91 <https://github.com/aio-libs/aiopg/issues/91>`_
261
262 * Support async iterator in ResultProxy `#92 <https://github.com/aio-libs/aiopg/issues/92>`_
263
264 * Add async with for engine `#90 <https://github.com/aio-libs/aiopg/issues/90>`_
265
266
267 0.8.0 (2015-12-31)
268 ^^^^^^^^^^^^^^^^^^
269
270 * Add PostgreSQL notification support `#58 <https://github.com/aio-libs/aiopg/issues/58>`_
271
272 * Support pools with unlimited size `#59 <https://github.com/aio-libs/aiopg/issues/59>`_
273
274 * Cancel current DB operation on asyncio timeout `#66 <https://github.com/aio-libs/aiopg/issues/66>`_
275
276 * Add async with support for Pool, Connection, Cursor `#88 <https://github.com/aio-libs/aiopg/issues/88>`_
277
278
279 0.7.0 (2015-04-22)
280 ^^^^^^^^^^^^^^^^^^
281
282 * Get rid of resource leak on connection failure.
283
284 * Report ResourceWarning on non-closed connections.
285
286 * Deprecate iteration protocol support in cursor and ResultProxy.
287
288 * Release sa connection to pool on `connection.close()`.
289
290
291 0.6.0 (2015-02-03)
292 ^^^^^^^^^^^^^^^^^^
293
294 * Accept dict, list, tuple, named and positional parameters in
295 `SAConnection.execute()`
296
297
298 0.5.2 (2014-12-08)
299 ^^^^^^^^^^^^^^^^^^
300
301 * Minor release, fixes a bug that leaves connection in broken state
302 after `cursor.execute()` failure.
303
304
305 0.5.1 (2014-10-31)
306 ^^^^^^^^^^^^^^^^^^
307
308 * Fix a bug for processing transactions in line.
309
310
311 0.5.0 (2014-10-31)
312 ^^^^^^^^^^^^^^^^^^
313
314 * Add .terminate() to Pool and Engine
315
316 * Reimplement connection pool (now pool size cannot be greater than pool.maxsize)
317
318 * Add .close() and .wait_closed() to Pool and Engine
319
320 * Add minsize, maxsize, size and freesize properties to sa.Engine
321
322 * Support *echo* parameter for logging executed SQL commands
323
324 * Connection.close() is not a coroutine (but we keep backward compatibility).
325
326
327 0.4.1 (2014-10-02)
328 ^^^^^^^^^^^^^^^^^^
329
330 * Make cursor iterable
331
332 * Update docs
333
334
335 0.4.0 (2014-10-02)
336 ^^^^^^^^^^^^^^^^^^
337
338 * Add timeouts for database operations.
339
340 * Autoregister psycopg2 support for json data type.
341
342 * Support JSON in aiopg.sa
343
344 * Support ARRAY in aiopg.sa
345
346 * Autoregister hstore support if present in connected DB
347
348 * Support HSTORE in aiopg.sa
349
350
351 0.3.2 (2014-07-07)
352 ^^^^^^^^^^^^^^^^^^
353
354 * Change signature to cursor.execute(operation, parameters=None) to
355 follow psycopg2 convention.
356
357
358 0.3.1 (2014-07-04)
359 ^^^^^^^^^^^^^^^^^^
360
361 * Forward arguments to cursor constructor for pooled connections.
362
363
364 0.3.0 (2014-06-22)
365 ^^^^^^^^^^^^^^^^^^
366
367 * Allow executing SQLAlchemy DDL statements.
368
369 * Fix bug with race conditions on acquiring/releasing connections from pool.
370
371
372 0.2.3 (2014-06-12)
373 ^^^^^^^^^^^^^^^^^^
374
375 * Fix bug in connection pool.
376
377
378 0.2.2 (2014-06-07)
379 ^^^^^^^^^^^^^^^^^^
380
381 * Fix bug with passing parameters into SAConnection.execute when
382 executing raw SQL expression.
383
384
385 0.2.1 (2014-05-08)
386 ^^^^^^^^^^^^^^^^^^
387
388 * Close connection with invalid transaction status on returning to pool.
389
390
391 0.2.0 (2014-05-04)
392 ^^^^^^^^^^^^^^^^^^
393
394 * Implemented optional support for sqlalchemy functional sql layer.
395
396
397 0.1.0 (2014-04-06)
398 ^^^^^^^^^^^^^^^^^^
399
400 * Implemented plain connections: connect, Connection, Cursor.
401
402 * Implemented database pools: create_pool and Pool.
40317 Platform: macOS
40418 Platform: POSIX
40519 Platform: Windows
41125 Classifier: Programming Language :: Python :: 3.7
41226 Classifier: Programming Language :: Python :: 3.8
41327 Classifier: Programming Language :: Python :: 3.9
28 Classifier: Programming Language :: Python :: 3.10
41429 Classifier: Operating System :: POSIX
41530 Classifier: Operating System :: MacOS :: MacOS X
41631 Classifier: Operating System :: Microsoft :: Windows
42237 Requires-Python: >=3.6
42338 Description-Content-Type: text/x-rst
42439 Provides-Extra: sa
40 License-File: LICENSE
41
42 aiopg
43 =====
44 .. image:: https://github.com/aio-libs/aiopg/workflows/CI/badge.svg
45 :target: https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI
46 .. image:: https://codecov.io/gh/aio-libs/aiopg/branch/master/graph/badge.svg
47 :target: https://codecov.io/gh/aio-libs/aiopg
48 .. image:: https://badges.gitter.im/Join%20Chat.svg
49 :target: https://gitter.im/aio-libs/Lobby
50 :alt: Chat on Gitter
51
52 **aiopg** is a library for accessing a PostgreSQL_ database
53 from the asyncio_ (PEP-3156/tulip) framework. It wraps
54 asynchronous features of the Psycopg database driver.
55
56 Example
57 -------
58
59 .. code:: python
60
61 import asyncio
62 import aiopg
63
64 dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'
65
66 async def go():
67 pool = await aiopg.create_pool(dsn)
68 async with pool.acquire() as conn:
69 async with conn.cursor() as cur:
70 await cur.execute("SELECT 1")
71 ret = []
72 async for row in cur:
73 ret.append(row)
74 assert ret == [(1,)]
75
76 loop = asyncio.get_event_loop()
77 loop.run_until_complete(go())
78
79
80 Example of SQLAlchemy optional integration
81 ------------------------------------------
82
83 .. code:: python
84
85 import asyncio
86 from aiopg.sa import create_engine
87 import sqlalchemy as sa
88
89 metadata = sa.MetaData()
90
91 tbl = sa.Table('tbl', metadata,
92 sa.Column('id', sa.Integer, primary_key=True),
93 sa.Column('val', sa.String(255)))
94
95 async def create_table(engine):
96 async with engine.acquire() as conn:
97 await conn.execute('DROP TABLE IF EXISTS tbl')
98 await conn.execute('''CREATE TABLE tbl (
99 id serial PRIMARY KEY,
100 val varchar(255))''')
101
102 async def go():
103 async with create_engine(user='aiopg',
104 database='aiopg',
105 host='127.0.0.1',
106 password='passwd') as engine:
107
108 async with engine.acquire() as conn:
109 await conn.execute(tbl.insert().values(val='abc'))
110
111 async for row in conn.execute(tbl.select()):
112 print(row.id, row.val)
113
114 loop = asyncio.get_event_loop()
115 loop.run_until_complete(go())
116
117 .. _PostgreSQL: http://www.postgresql.org/
118 .. _asyncio: https://docs.python.org/3/library/asyncio.html
119
120 Please use::
121
122 $ make test
123
124 for executing the project's unittests.
125 See https://aiopg.readthedocs.io/en/stable/contributing.html for details
126 on how to set up your environment to run the tests.
127
128 Changelog
129 ---------
130
131 1.3.3 (2021-11-01)
132 ^^^^^^^^^^^^^^^^^^
133
134 * Support async-timeout 4.0+
135
136
137 1.3.2 (2021-10-07)
138 ^^^^^^^^^^^^^^^^^^
139
140
141 1.3.2b2 (2021-10-07)
142 ^^^^^^^^^^^^^^^^^^^^
143
144 * Respect use_labels for select statement `#882 <https://github.com/aio-libs/aiopg/pull/882>`_
145
146
147 1.3.2b1 (2021-07-11)
148 ^^^^^^^^^^^^^^^^^^^^
149
150 * Fix compatibility with SQLAlchemy >= 1.4 `#870 <https://github.com/aio-libs/aiopg/pull/870>`_
151
152
153 1.3.1 (2021-07-08)
154 ^^^^^^^^^^^^^^^^^^
155
156
157 1.3.1b2 (2021-07-06)
158 ^^^^^^^^^^^^^^^^^^^^
159
160 * Suppress "Future exception was never retrieved" `#862 <https://github.com/aio-libs/aiopg/pull/862>`_
161
162
163 1.3.1b1 (2021-07-05)
164 ^^^^^^^^^^^^^^^^^^^^
165
166 * Fix ClosableQueue.get on cancellation, close it on Connection.close `#859 <https://github.com/aio-libs/aiopg/pull/859>`_
167
168
169 1.3.0 (2021-06-30)
170 ^^^^^^^^^^^^^^^^^^
171
172
173 1.3.0b4 (2021-06-28)
174 ^^^^^^^^^^^^^^^^^^^^
175
176 * Fix "Unable to detect disconnect when using NOTIFY/LISTEN" `#559 <https://github.com/aio-libs/aiopg/pull/559>`_
177
178
179 1.3.0b3 (2021-04-03)
180 ^^^^^^^^^^^^^^^^^^^^
181
182 * Reformat using black `#814 <https://github.com/aio-libs/aiopg/pull/814>`_
183
184
185 1.3.0b2 (2021-04-02)
186 ^^^^^^^^^^^^^^^^^^^^
187
188 * Type annotations `#813 <https://github.com/aio-libs/aiopg/pull/813>`_
189
190
191 1.3.0b1 (2021-03-30)
192 ^^^^^^^^^^^^^^^^^^^^
193
194 * Raise ResourceClosedError if we try to open a cursor on a closed SAConnection `#811 <https://github.com/aio-libs/aiopg/pull/811>`_
195
196
197 1.3.0b0 (2021-03-25)
198 ^^^^^^^^^^^^^^^^^^^^
199
200 * Fix compatibility with SA 1.4 for IN statement `#806 <https://github.com/aio-libs/aiopg/pull/806>`_
201
202
203 1.2.1 (2021-03-23)
204 ^^^^^^^^^^^^^^^^^^
205
206 * Pop loop in connection init due to backward compatibility `#808 <https://github.com/aio-libs/aiopg/pull/808>`_
207
208
209 1.2.0b4 (2021-03-23)
210 ^^^^^^^^^^^^^^^^^^^^
211
212 * Set max supported sqlalchemy version `#805 <https://github.com/aio-libs/aiopg/pull/805>`_
213
214
215 1.2.0b3 (2021-03-22)
216 ^^^^^^^^^^^^^^^^^^^^
217
218 * Don't run ROLLBACK when the connection is closed `#778 <https://github.com/aio-libs/aiopg/pull/778>`_
219
220 * Multiple cursors support `#801 <https://github.com/aio-libs/aiopg/pull/801>`_
221
222
223 1.2.0b2 (2020-12-21)
224 ^^^^^^^^^^^^^^^^^^^^
225
226 * Fix IsolationLevel.read_committed and introduce IsolationLevel.default `#770 <https://github.com/aio-libs/aiopg/pull/770>`_
227
228 * Fix python 3.8 warnings in tests `#771 <https://github.com/aio-libs/aiopg/pull/771>`_
229
230
231 1.2.0b1 (2020-12-16)
232 ^^^^^^^^^^^^^^^^^^^^
233
234 * Deprecate blocking connection.cancel() method `#570 <https://github.com/aio-libs/aiopg/pull/570>`_
235
236
237 1.2.0b0 (2020-12-15)
238 ^^^^^^^^^^^^^^^^^^^^
239
240 * Implement timeout on acquiring connection from pool `#766 <https://github.com/aio-libs/aiopg/pull/766>`_
241
242
243 1.1.0 (2020-12-10)
244 ^^^^^^^^^^^^^^^^^^
245
246
247 1.1.0b2 (2020-12-09)
248 ^^^^^^^^^^^^^^^^^^^^
249
250 * Added missing slots to context managers `#763 <https://github.com/aio-libs/aiopg/pull/763>`_
251
252
253 1.1.0b1 (2020-12-07)
254 ^^^^^^^^^^^^^^^^^^^^
255
256 * Fix on_connect multiple call on acquire `#552 <https://github.com/aio-libs/aiopg/pull/552>`_
257
258 * Fix python 3.8 warnings `#622 <https://github.com/aio-libs/aiopg/pull/642>`_
259
260 * Bump minimum psycopg version to 2.8.4 `#754 <https://github.com/aio-libs/aiopg/pull/754>`_
261
262 * Fix Engine.release method to release connection in any way `#756 <https://github.com/aio-libs/aiopg/pull/756>`_
263
264
265 1.0.0 (2019-09-20)
266 ^^^^^^^^^^^^^^^^^^
267
268 * Removal of an asynchronous call in favor of issue #550
269
270 * Major documentation edits and minor bug fixes #534
271
272
273 0.16.0 (2019-01-25)
274 ^^^^^^^^^^^^^^^^^^^
275
276 * Fix select priority name `#525 <https://github.com/aio-libs/aiopg/issues/525>`_
277
278 * Rename `psycopg2` to `psycopg2-binary` to fix deprecation warning `#507 <https://github.com/aio-libs/aiopg/issues/507>`_
279
280 * Fix `#189 <https://github.com/aio-libs/aiopg/issues/189>`_ hstore when using ReadDictCursor `#512 <https://github.com/aio-libs/aiopg/issues/512>`_
281
282 * close cannot be used while an asynchronous query is underway `#452 <https://github.com/aio-libs/aiopg/issues/452>`_
283
284 * Allow transaction_mode in sqlalchemy adapter trx begin `#498 <https://github.com/aio-libs/aiopg/issues/498>`_
285
286
287 0.15.0 (2018-08-14)
288 ^^^^^^^^^^^^^^^^^^^
289
290 * Support Python 3.7 `#437 <https://github.com/aio-libs/aiopg/issues/437>`_
291
292
293 0.14.0 (2018-05-10)
294 ^^^^^^^^^^^^^^^^^^^
295
296 * Add ``get_dialect`` func to have ability to pass ``json_serializer`` `#451 <https://github.com/aio-libs/aiopg/issues/451>`_
297
298
299 0.13.2 (2018-01-03)
300 ^^^^^^^^^^^^^^^^^^^
301
302 * Fixed compatibility with SQLAlchemy 1.2.0 `#412 <https://github.com/aio-libs/aiopg/issues/412>`_
303
304 * Added support for transaction isolation levels `#219 <https://github.com/aio-libs/aiopg/issues/219>`_
305
306
307 0.13.1 (2017-09-10)
308 ^^^^^^^^^^^^^^^^^^^
309
310 * Added connection pool recycling logic `#373 <https://github.com/aio-libs/aiopg/issues/373>`_
311
312
313 0.13.0 (2016-12-02)
314 ^^^^^^^^^^^^^^^^^^^
315
316 * Add `async with` support to `.begin_nested()` `#208 <https://github.com/aio-libs/aiopg/issues/208>`_
317
318 * Fix connection.cancel() `#212 <https://github.com/aio-libs/aiopg/issues/212>`_ `#223 <https://github.com/aio-libs/aiopg/issues/223>`_
319
320 * Raise informative error on unexpected connection closing `#191 <https://github.com/aio-libs/aiopg/issues/191>`_
321
322 * Added support for python types columns issues `#217 <https://github.com/aio-libs/aiopg/issues/217>`_
323
324 * Added support for default values in SA table issues `#206 <https://github.com/aio-libs/aiopg/issues/206>`_
325
326
327 0.12.0 (2016-10-09)
328 ^^^^^^^^^^^^^^^^^^^
329
330 * Add an on_connect callback parameter to pool `#141 <https://github.com/aio-libs/aiopg/issues/141>`_
331
332 * Fixed connection to work under both windows and posix based systems `#142 <https://github.com/aio-libs/aiopg/issues/142>`_
333
334
335 0.11.0 (2016-09-12)
336 ^^^^^^^^^^^^^^^^^^^
337
338 * Immediately remove callbacks from a closed file descriptor `#139 <https://github.com/aio-libs/aiopg/issues/139>`_
339
340 * Drop Python 3.3 support
341
342
343 0.10.0 (2016-07-16)
344 ^^^^^^^^^^^^^^^^^^^
345
346 * Refactor tests to use dockerized Postgres server `#107 <https://github.com/aio-libs/aiopg/issues/107>`_
347
348 * Reduce default pool minsize to 1 `#106 <https://github.com/aio-libs/aiopg/issues/106>`_
349
350 * Explicitly enumerate packages in setup.py `#85 <https://github.com/aio-libs/aiopg/issues/85>`_
351
352 * Remove expired connections from pool on acquire `#116 <https://github.com/aio-libs/aiopg/issues/116>`_
353
354 * Don't crash when Connection is GC'ed `#124 <https://github.com/aio-libs/aiopg/issues/124>`_
355
356 * Use loop.create_future() if available
357
358
359 0.9.2 (2016-01-31)
360 ^^^^^^^^^^^^^^^^^^
361
362 * Make pool.release return asyncio.Future, so we can wait on it in
363 `__aexit__` `#102 <https://github.com/aio-libs/aiopg/issues/102>`_
364
365 * Add support for uuid type `#103 <https://github.com/aio-libs/aiopg/issues/103>`_
366
367
368 0.9.1 (2016-01-17)
369 ^^^^^^^^^^^^^^^^^^
370
371 * Documentation update `#101 <https://github.com/aio-libs/aiopg/issues/101>`_
372
373
374 0.9.0 (2016-01-14)
375 ^^^^^^^^^^^^^^^^^^
376
377 * Add async context managers for transactions `#91 <https://github.com/aio-libs/aiopg/issues/91>`_
378
379 * Support async iterator in ResultProxy `#92 <https://github.com/aio-libs/aiopg/issues/92>`_
380
381 * Add async with for engine `#90 <https://github.com/aio-libs/aiopg/issues/90>`_
382
383
384 0.8.0 (2015-12-31)
385 ^^^^^^^^^^^^^^^^^^
386
387 * Add PostgreSQL notification support `#58 <https://github.com/aio-libs/aiopg/issues/58>`_
388
389 * Support pools with unlimited size `#59 <https://github.com/aio-libs/aiopg/issues/59>`_
390
391 * Cancel current DB operation on asyncio timeout `#66 <https://github.com/aio-libs/aiopg/issues/66>`_
392
393 * Add async with support for Pool, Connection, Cursor `#88 <https://github.com/aio-libs/aiopg/issues/88>`_
394
395
396 0.7.0 (2015-04-22)
397 ^^^^^^^^^^^^^^^^^^
398
399 * Get rid of resource leak on connection failure.
400
401 * Report ResourceWarning on non-closed connections.
402
403 * Deprecate iteration protocol support in cursor and ResultProxy.
404
405 * Release sa connection to pool on `connection.close()`.
406
407
408 0.6.0 (2015-02-03)
409 ^^^^^^^^^^^^^^^^^^
410
411 * Accept dict, list, tuple, named and positional parameters in
412 `SAConnection.execute()`
413
414
415 0.5.2 (2014-12-08)
416 ^^^^^^^^^^^^^^^^^^
417
418 * Minor release, fixes a bug that leaves connection in broken state
419 after `cursor.execute()` failure.
420
421
422 0.5.1 (2014-10-31)
423 ^^^^^^^^^^^^^^^^^^
424
425 * Fix a bug for processing transactions in line.
426
427
428 0.5.0 (2014-10-31)
429 ^^^^^^^^^^^^^^^^^^
430
431 * Add .terminate() to Pool and Engine
432
433 * Reimplement connection pool (now pool size cannot be greater than pool.maxsize)
434
435 * Add .close() and .wait_closed() to Pool and Engine
436
437 * Add minsize, maxsize, size and freesize properties to sa.Engine
438
439 * Support *echo* parameter for logging executed SQL commands
440
441 * Connection.close() is not a coroutine (but we keep backward compatibility).
442
443
444 0.4.1 (2014-10-02)
445 ^^^^^^^^^^^^^^^^^^
446
447 * Make cursor iterable
448
449 * Update docs
450
451
452 0.4.0 (2014-10-02)
453 ^^^^^^^^^^^^^^^^^^
454
455 * Add timeouts for database operations.
456
457 * Autoregister psycopg2 support for json data type.
458
459 * Support JSON in aiopg.sa
460
461 * Support ARRAY in aiopg.sa
462
463 * Autoregister hstore support if present in connected DB
464
465 * Support HSTORE in aiopg.sa
466
467
468 0.3.2 (2014-07-07)
469 ^^^^^^^^^^^^^^^^^^
470
471 * Change signature to cursor.execute(operation, parameters=None) to
472 follow psycopg2 convention.
473
474
475 0.3.1 (2014-07-04)
476 ^^^^^^^^^^^^^^^^^^
477
478 * Forward arguments to cursor constructor for pooled connections.
479
480
481 0.3.0 (2014-06-22)
482 ^^^^^^^^^^^^^^^^^^
483
484 * Allow executing SQLAlchemy DDL statements.
485
486 * Fix bug with race conditions on acquiring/releasing connections from pool.
487
488
489 0.2.3 (2014-06-12)
490 ^^^^^^^^^^^^^^^^^^
491
492 * Fix bug in connection pool.
493
494
495 0.2.2 (2014-06-07)
496 ^^^^^^^^^^^^^^^^^^
497
498 * Fix bug with passing parameters into SAConnection.execute when
499 executing raw SQL expression.
500
501
502 0.2.1 (2014-05-08)
503 ^^^^^^^^^^^^^^^^^^
504
505 * Close connection with invalid transaction status on returning to pool.
506
507
508 0.2.0 (2014-05-04)
509 ^^^^^^^^^^^^^^^^^^
510
511 * Implemented optional support for sqlalchemy functional sql layer.
512
513
514 0.1.0 (2014-04-06)
515 ^^^^^^^^^^^^^^^^^^
516
517 * Implemented plain connections: connect, Connection, Cursor.
518
519 * Implemented database pools: create_pool and Pool.
520
7373 loop.run_until_complete(go())
7474
7575 .. _PostgreSQL: http://www.postgresql.org/
76 .. _asyncio: http://docs.python.org/3.4/library/asyncio.html
76 .. _asyncio: https://docs.python.org/3/library/asyncio.html
7777
7878 Please use::
7979

aiopg/__init__.py
22 import warnings
33 from collections import namedtuple
44
5 from .connection import TIMEOUT as DEFAULT_TIMEOUT
6 from .connection import Connection, connect
7 from .cursor import Cursor
5 from .connection import (
6 TIMEOUT as DEFAULT_TIMEOUT,
7 Connection,
8 Cursor,
9 DefaultCompiler,
10 IsolationCompiler,
11 IsolationLevel,
12 ReadCommittedCompiler,
13 RepeatableReadCompiler,
14 SerializableCompiler,
15 Transaction,
16 connect,
17 )
818 from .pool import Pool, create_pool
9 from .transaction import IsolationLevel, Transaction
1019 from .utils import get_running_loop
1120
1221 warnings.filterwarnings(
13 'always', '.*',
22 "always",
23 ".*",
1424 category=ResourceWarning,
15 module=r'aiopg(\.\w+)+',
16 append=False
25 module=r"aiopg(\.\w+)+",
26 append=False,
1727 )
1828
19 __all__ = ('connect', 'create_pool', 'get_running_loop',
20 'Connection', 'Cursor', 'Pool', 'version', 'version_info',
21 'DEFAULT_TIMEOUT', 'IsolationLevel', 'Transaction')
29 __all__ = (
30 "connect",
31 "create_pool",
32 "get_running_loop",
33 "Connection",
34 "Cursor",
35 "Pool",
36 "version",
37 "version_info",
38 "DEFAULT_TIMEOUT",
39 "IsolationLevel",
40 "Transaction",
41 )
2242
23 __version__ = '1.2.0b2'
43 __version__ = "1.3.3"
2444
25 version = __version__ + ' , Python ' + sys.version
45 version = f"{__version__}, Python {sys.version}"
2646
27 VersionInfo = namedtuple('VersionInfo',
28 'major minor micro releaselevel serial')
47 VersionInfo = namedtuple(
48 "VersionInfo", "major minor micro releaselevel serial"
49 )
2950
3051
31 def _parse_version(ver):
52 def _parse_version(ver: str) -> VersionInfo:
3253 RE = (
33 r'^'
34 r'(?P<major>\d+)\.(?P<minor>\d+)\.(?P<micro>\d+)'
35 r'((?P<releaselevel>[a-z]+)(?P<serial>\d+)?)?'
36 r'$'
54 r"^"
55 r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<micro>\d+)"
56 r"((?P<releaselevel>[a-z]+)(?P<serial>\d+)?)?"
57 r"$"
3758 )
3859 match = re.match(RE, ver)
60 if not match:
61 raise ImportError(f"Invalid package version {ver}")
3962 try:
40 major = int(match.group('major'))
41 minor = int(match.group('minor'))
42 micro = int(match.group('micro'))
43 levels = {'rc': 'candidate',
44 'a': 'alpha',
45 'b': 'beta',
46 None: 'final'}
47 releaselevel = levels[match.group('releaselevel')]
48 serial = int(match.group('serial')) if match.group('serial') else 0
63 major = int(match.group("major"))
64 minor = int(match.group("minor"))
65 micro = int(match.group("micro"))
66 levels = {"rc": "candidate", "a": "alpha", "b": "beta", None: "final"}
67 releaselevel = levels[match.group("releaselevel")]
68 serial = int(match.group("serial")) if match.group("serial") else 0
4969 return VersionInfo(major, minor, micro, releaselevel, serial)
5070 except Exception as e:
51 raise ImportError("Invalid package version {}".format(ver)) from e
71 raise ImportError(f"Invalid package version {ver}") from e
5272
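# Worked example (editorial note, not part of the upstream diff): with the
# regex and levels mapping above, _parse_version("1.3.3") returns
#     VersionInfo(major=1, minor=3, micro=3, releaselevel="final", serial=0)
# and a pre-release such as "1.2.0b2" maps to releaselevel="beta", serial=2.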
5373
5474 version_info = _parse_version(__version__)
5575
5676 # make pyflakes happy
57 (connect, create_pool, Connection, Cursor, Pool, DEFAULT_TIMEOUT,
58 IsolationLevel, Transaction, get_running_loop)
77 (
78 connect,
79 create_pool,
80 Connection,
81 Cursor,
82 Pool,
83 DEFAULT_TIMEOUT,
84 IsolationLevel,
85 Transaction,
86 get_running_loop,
87 IsolationCompiler,
88 DefaultCompiler,
89 ReadCommittedCompiler,
90 RepeatableReadCompiler,
91 SerializableCompiler,
92 )

aiopg/connection.py
0 import abc
01 import asyncio
12 import contextlib
3 import datetime
4 import enum
25 import errno
36 import platform
47 import select
58 import sys
69 import traceback
10 import uuid
711 import warnings
812 import weakref
913 from collections.abc import Mapping
14 from types import TracebackType
15 from typing import (
16 Any,
17 Callable,
18 Generator,
19 List,
20 Optional,
21 Sequence,
22 Tuple,
23 Type,
24 cast,
25 )
1026
1127 import psycopg2
12 from psycopg2 import extras
13 from psycopg2.extensions import POLL_ERROR, POLL_OK, POLL_READ, POLL_WRITE
14
15 from .cursor import Cursor
16 from .utils import _ContextManager, get_running_loop
17
18 __all__ = ('connect',)
28 import psycopg2.extensions
29 import psycopg2.extras
30
31 from .log import logger
32 from .utils import (
33 ClosableQueue,
34 _ContextManager,
35 create_completed_future,
36 get_running_loop,
37 )
1938
2039 TIMEOUT = 60.0
2140
2443 WSAENOTSOCK = 10038
2544
2645
27 def connect(dsn=None, *, timeout=TIMEOUT, enable_json=True,
28 enable_hstore=True, enable_uuid=True, echo=False, **kwargs):
46 def connect(
47 dsn: Optional[str] = None,
48 *,
49 timeout: float = TIMEOUT,
50 enable_json: bool = True,
51 enable_hstore: bool = True,
52 enable_uuid: bool = True,
53 echo: bool = False,
54 **kwargs: Any,
55 ) -> _ContextManager["Connection"]:
2956 """A factory for connecting to PostgreSQL.
3057
3158 The coroutine accepts all parameters that psycopg2.connect() does
3461 Returns instantiated Connection object.
3562
3663 """
37 coro = Connection(
38 dsn, timeout, bool(echo),
64 connection = Connection(
65 dsn,
66 timeout,
67 bool(echo),
3968 enable_hstore=enable_hstore,
4069 enable_uuid=enable_uuid,
4170 enable_json=enable_json,
42 **kwargs
71 **kwargs,
4372 )
44
45 return _ContextManager(coro)
46
47
48 def _is_bad_descriptor_error(os_error):
49 if platform.system() == 'Windows': # pragma: no cover
50 return os_error.winerror == WSAENOTSOCK
51 else:
52 return os_error.errno == errno.EBADF
73 return _ContextManager[Connection](connection, disconnect) # type: ignore
74
75
76 async def disconnect(c: "Connection") -> None:
77 await c.close()
78
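# A minimal usage sketch (editorial note, not part of the upstream diff;
# assumes a reachable PostgreSQL server and the DSN style from the README):
#
#     conn = await aiopg.connect("dbname=aiopg user=aiopg host=127.0.0.1")
#     try:
#         async with conn.cursor() as cur:
#             await cur.execute("SELECT 1")
#             print(await cur.fetchone())
#     finally:
#         await conn.close()
#
# Because connect() returns a _ContextManager wired to the disconnect()
# helper above, `async with aiopg.connect(dsn) as conn:` also closes the
# connection automatically on exit.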
79
80 def _is_bad_descriptor_error(os_error: OSError) -> bool:
81 if platform.system() == "Windows": # pragma: no cover
82 winerror = int(getattr(os_error, "winerror", 0))
83 return winerror == WSAENOTSOCK
84 return os_error.errno == errno.EBADF
85
86
87 class IsolationCompiler(abc.ABC):
88 __slots__ = ("_isolation_level", "_readonly", "_deferrable")
89
90 def __init__(
91 self, isolation_level: Optional[str], readonly: bool, deferrable: bool
92 ):
93 self._isolation_level = isolation_level
94 self._readonly = readonly
95 self._deferrable = deferrable
96
97 @property
98 def name(self) -> str:
99 return self._isolation_level or "Unknown"
100
101 def savepoint(self, unique_id: str) -> str:
102 return f"SAVEPOINT {unique_id}"
103
104 def release_savepoint(self, unique_id: str) -> str:
105 return f"RELEASE SAVEPOINT {unique_id}"
106
107 def rollback_savepoint(self, unique_id: str) -> str:
108 return f"ROLLBACK TO SAVEPOINT {unique_id}"
109
110 def commit(self) -> str:
111 return "COMMIT"
112
113 def rollback(self) -> str:
114 return "ROLLBACK"
115
116 def begin(self) -> str:
117 query = "BEGIN"
118 if self._isolation_level is not None:
119 query += f" ISOLATION LEVEL {self._isolation_level.upper()}"
120
121 if self._readonly:
122 query += " READ ONLY"
123
124 if self._deferrable:
125 query += " DEFERRABLE"
126
127 return query
128
129 def __repr__(self) -> str:
130 return self.name
131
132
133 class ReadCommittedCompiler(IsolationCompiler):
134 __slots__ = ()
135
136 def __init__(self, readonly: bool, deferrable: bool):
137 super().__init__("Read committed", readonly, deferrable)
138
139
140 class RepeatableReadCompiler(IsolationCompiler):
141 __slots__ = ()
142
143 def __init__(self, readonly: bool, deferrable: bool):
144 super().__init__("Repeatable read", readonly, deferrable)
145
146
147 class SerializableCompiler(IsolationCompiler):
148 __slots__ = ()
149
150 def __init__(self, readonly: bool, deferrable: bool):
151 super().__init__("Serializable", readonly, deferrable)
152
153
154 class DefaultCompiler(IsolationCompiler):
155 __slots__ = ()
156
157 def __init__(self, readonly: bool, deferrable: bool):
158 super().__init__(None, readonly, deferrable)
159
160 @property
161 def name(self) -> str:
162 return "Default"
163
164
165 class IsolationLevel(enum.Enum):
166 serializable = SerializableCompiler
167 repeatable_read = RepeatableReadCompiler
168 read_committed = ReadCommittedCompiler
169 default = DefaultCompiler
170
171 def __call__(self, readonly: bool, deferrable: bool) -> IsolationCompiler:
172 return self.value(readonly, deferrable) # type: ignore
173
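# Illustrative sketch (editorial note, not part of the upstream diff): each
# enum member is a compiler factory, and the compiler builds the BEGIN/COMMIT
# statements used by Transaction below, e.g.
#
#     compiler = IsolationLevel.serializable(readonly=True, deferrable=False)
#     compiler.begin()   # "BEGIN ISOLATION LEVEL SERIALIZABLE READ ONLY"
#     compiler.commit()  # "COMMIT"
#     IsolationLevel.default(False, False).begin()  # plain "BEGIN"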
174
175 async def _release_savepoint(t: "Transaction") -> None:
176 await t.release_savepoint()
177
178
179 async def _rollback_savepoint(t: "Transaction") -> None:
180 await t.rollback_savepoint()
181
182
183 class Transaction:
184 __slots__ = ("_cursor", "_is_begin", "_isolation", "_unique_id")
185
186 def __init__(
187 self,
188 cursor: "Cursor",
189 isolation_level: Callable[[bool, bool], IsolationCompiler],
190 readonly: bool = False,
191 deferrable: bool = False,
192 ):
193 self._cursor = cursor
194 self._is_begin = False
195 self._unique_id: Optional[str] = None
196 self._isolation = isolation_level(readonly, deferrable)
197
198 @property
199 def is_begin(self) -> bool:
200 return self._is_begin
201
202 async def begin(self) -> "Transaction":
203 if self._is_begin:
204 raise psycopg2.ProgrammingError(
205 "You are trying to open a new transaction, use the save point"
206 )
207 self._is_begin = True
208 await self._cursor.execute(self._isolation.begin())
209 return self
210
211 async def commit(self) -> None:
212 self._check_commit_rollback()
213 await self._cursor.execute(self._isolation.commit())
214 self._is_begin = False
215
216 async def rollback(self) -> None:
217 self._check_commit_rollback()
218 if not self._cursor.closed:
219 await self._cursor.execute(self._isolation.rollback())
220 self._is_begin = False
221
222 async def rollback_savepoint(self) -> None:
223 self._check_release_rollback()
224 if not self._cursor.closed:
225 await self._cursor.execute(
226 self._isolation.rollback_savepoint(
227 self._unique_id # type: ignore
228 )
229 )
230 self._unique_id = None
231
232 async def release_savepoint(self) -> None:
233 self._check_release_rollback()
234 await self._cursor.execute(
235 self._isolation.release_savepoint(self._unique_id) # type: ignore
236 )
237 self._unique_id = None
238
239 async def savepoint(self) -> "Transaction":
240 self._check_commit_rollback()
241 if self._unique_id is not None:
242 raise psycopg2.ProgrammingError("You do not shut down savepoint")
243
244 self._unique_id = f"s{uuid.uuid1().hex}"
245 await self._cursor.execute(self._isolation.savepoint(self._unique_id))
246
247 return self
248
249 def point(self) -> _ContextManager["Transaction"]:
250 return _ContextManager[Transaction](
251 self.savepoint(),
252 _release_savepoint,
253 _rollback_savepoint,
254 )
255
256 def _check_commit_rollback(self) -> None:
257 if not self._is_begin:
258 raise psycopg2.ProgrammingError(
259 "You are trying to commit a transaction that is not open"
260 )
261
262 def _check_release_rollback(self) -> None:
263 self._check_commit_rollback()
264 if self._unique_id is None:
265 raise psycopg2.ProgrammingError("You do not start savepoint")
266
267 def __repr__(self) -> str:
268 return (
269 f"<{self.__class__.__name__} "
270 f"transaction={self._isolation} id={id(self):#x}>"
271 )
272
273 def __del__(self) -> None:
274 if self._is_begin:
275 warnings.warn(
276 f"You have not closed transaction {self!r}", ResourceWarning
277 )
278
279 if self._unique_id is not None:
280 warnings.warn(
281 f"You have not closed savepoint {self!r}", ResourceWarning
282 )
283
284 async def __aenter__(self) -> "Transaction":
285 return await self.begin()
286
287 async def __aexit__(
288 self,
289 exc_type: Optional[Type[BaseException]],
290 exc: Optional[BaseException],
291 tb: Optional[TracebackType],
292 ) -> None:
293 if exc_type is not None:
294 await self.rollback()
295 else:
296 await self.commit()
297
298
299 async def _commit_transaction(t: Transaction) -> None:
300 await t.commit()
301
302
303 async def _rollback_transaction(t: Transaction) -> None:
304 await t.rollback()
305
306
307 class Cursor:
308 def __init__(
309 self,
310 conn: "Connection",
311 impl: Any,
312 timeout: float,
313 echo: bool,
314 isolation_level: Optional[IsolationLevel] = None,
315 ):
316 self._conn = conn
317 self._impl = impl
318 self._timeout = timeout
319 self._echo = echo
320 self._transaction = Transaction(
321 self, isolation_level or IsolationLevel.default
322 )
323
324 @property
325 def echo(self) -> bool:
326 """Return echo mode status."""
327 return self._echo
328
329 @property
330 def description(self) -> Optional[Sequence[Any]]:
331 """This read-only attribute is a sequence of 7-item sequences.
332
333 Each of these sequences is a collections.namedtuple containing
334 information describing one result column:
335
336 0. name: the name of the column returned.
337 1. type_code: the PostgreSQL OID of the column.
338 2. display_size: the actual length of the column in bytes.
339 3. internal_size: the size in bytes of the column associated to
340 this column on the server.
341 4. precision: total number of significant digits in columns of
342 type NUMERIC. None for other types.
343 5. scale: count of decimal digits in the fractional part in
344 columns of type NUMERIC. None for other types.
345 6. null_ok: always None as not easy to retrieve from the libpq.
346
347 This attribute will be None for operations that do not
348 return rows or if the cursor has not had an operation invoked
349 via the execute() method yet.
350
351 """
352 return self._impl.description # type: ignore
353
354 def close(self) -> None:
355 """Close the cursor now."""
356 if not self.closed:
357 self._impl.close()
358
359 @property
360 def closed(self) -> bool:
361 """Read-only boolean attribute: specifies if the cursor is closed."""
362 return self._impl.closed # type: ignore
363
364 @property
365 def connection(self) -> "Connection":
366 """Read-only attribute returning a reference to the `Connection`."""
367 return self._conn
368
369 @property
370 def raw(self) -> Any:
371 """Underlying psycopg cursor object, readonly"""
372 return self._impl
373
374 @property
375 def name(self) -> str:
376 # Not supported
377 return self._impl.name # type: ignore
378
379 @property
380 def scrollable(self) -> Optional[bool]:
381 # Not supported
382 return self._impl.scrollable # type: ignore
383
384 @scrollable.setter
385 def scrollable(self, val: bool) -> None:
386 # Not supported
387 self._impl.scrollable = val
388
389 @property
390 def withhold(self) -> bool:
391 # Not supported
392 return self._impl.withhold # type: ignore
393
394 @withhold.setter
395 def withhold(self, val: bool) -> None:
396 # Not supported
397 self._impl.withhold = val
398
399 async def execute(
400 self,
401 operation: str,
402 parameters: Any = None,
403 *,
404 timeout: Optional[float] = None,
405 ) -> None:
406 """Prepare and execute a database operation (query or command).
407
408 Parameters may be provided as sequence or mapping and will be
409 bound to variables in the operation. Variables are specified
410 either with positional %s or named %({name})s placeholders.
411
412 """
413 if timeout is None:
414 timeout = self._timeout
415 waiter = self._conn._create_waiter("cursor.execute")
416 if self._echo:
417 logger.info(operation)
418 logger.info("%r", parameters)
419 try:
420 self._impl.execute(operation, parameters)
421 except BaseException:
422 self._conn._waiter = None
423 raise
424 try:
425 await self._conn._poll(waiter, timeout)
426 except asyncio.TimeoutError:
427 self._impl.close()
428 raise
429
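# Short sketch of the placeholder styles described in the docstring above
# (editorial note, not part of the upstream diff; table and column names are
# hypothetical):
#
#     await cur.execute("SELECT * FROM tbl WHERE id = %s", (1,))
#     await cur.execute(
#         "SELECT * FROM tbl WHERE id = %(id)s AND val = %(val)s",
#         {"id": 1, "val": "abc"},
#     )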
430 async def executemany(self, *args: Any, **kwargs: Any) -> None:
431 # Not supported
432 raise psycopg2.ProgrammingError(
433 "executemany cannot be used in asynchronous mode"
434 )
435
436 async def callproc(
437 self,
438 procname: str,
439 parameters: Any = None,
440 *,
441 timeout: Optional[float] = None,
442 ) -> None:
443 """Call a stored database procedure with the given name.
444
445 The sequence of parameters must contain one entry for each
446 argument that the procedure expects. The result of the call is
447 returned as modified copy of the input sequence. Input
448 parameters are left untouched, output and input/output
449 parameters replaced with possibly new values.
450
451 """
452 if timeout is None:
453 timeout = self._timeout
454 waiter = self._conn._create_waiter("cursor.callproc")
455 if self._echo:
456 logger.info("CALL %s", procname)
457 logger.info("%r", parameters)
458 try:
459 self._impl.callproc(procname, parameters)
460 except BaseException:
461 self._conn._waiter = None
462 raise
463 else:
464 await self._conn._poll(waiter, timeout)
465
466 def begin(self) -> _ContextManager[Transaction]:
467 return _ContextManager[Transaction](
468 self._transaction.begin(),
469 _commit_transaction,
470 _rollback_transaction,
471 )
472
473 def begin_nested(self) -> _ContextManager[Transaction]:
474 if self._transaction.is_begin:
475 return self._transaction.point()
476
477 return _ContextManager[Transaction](
478 self._transaction.begin(),
479 _commit_transaction,
480 _rollback_transaction,
481 )
482
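# Usage sketch (editorial note, not part of the upstream diff): begin() wraps
# the cursor's Transaction in a context manager that commits on success and
# rolls back on error; begin_nested() reuses an already-open transaction as a
# savepoint via Transaction.point().
#
#     async with cur.begin():
#         await cur.execute("INSERT INTO tbl (val) VALUES (%s)", ("abc",))
#         async with cur.begin_nested():  # SAVEPOINT inside the open transaction
#             await cur.execute("UPDATE tbl SET val = %s", ("xyz",))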
483 def mogrify(self, operation: str, parameters: Any = None) -> bytes:
484 """Return a query string after arguments binding.
485
486 The byte string returned is exactly the one that would be sent to
487 the database running the .execute() method or similar.
488
489 """
490 ret = self._impl.mogrify(operation, parameters)
491 assert (
492 not self._conn.isexecuting()
493 ), "Don't support server side mogrify"
494 return ret # type: ignore
495
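# Example of argument binding (editorial note, not part of the upstream diff):
#
#     cur.mogrify("SELECT %s, %s", (42, "abc"))  # -> b"SELECT 42, 'abc'"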
496 async def setinputsizes(self, sizes: int) -> None:
497 """This method is exposed in compliance with the DBAPI.
498
499 It currently does nothing but it is safe to call it.
500
501 """
502 self._impl.setinputsizes(sizes)
503
504 async def fetchone(self) -> Any:
505 """Fetch the next row of a query result set.
506
507 Returns a single tuple, or None when no more data is
508 available.
509
510 """
511 ret = self._impl.fetchone()
512 assert (
513 not self._conn.isexecuting()
514 ), "Don't support server side cursors yet"
515 return ret
516
517 async def fetchmany(self, size: Optional[int] = None) -> List[Any]:
518 """Fetch the next set of rows of a query result.
519
520 Returns a list of tuples. An empty list is returned when no
521 more rows are available.
522
523 The number of rows to fetch per call is specified by the
524 parameter. If it is not given, the cursor's .arraysize
525 determines the number of rows to be fetched. The method should
526 try to fetch as many rows as indicated by the size
527 parameter. If this is not possible due to the specified number
528 of rows not being available, fewer rows may be returned.
529
530 """
531 if size is None:
532 size = self._impl.arraysize
533 ret = self._impl.fetchmany(size)
534 assert (
535 not self._conn.isexecuting()
536 ), "Don't support server side cursors yet"
537 return ret # type: ignore
538
539 async def fetchall(self) -> List[Any]:
540 """Fetch all (remaining) rows of a query result.
541
542 Returns them as a list of tuples. An empty list is returned
543 if there is no more record to fetch.
544
545 """
546 ret = self._impl.fetchall()
547 assert (
548 not self._conn.isexecuting()
549 ), "Don't support server side cursors yet"
550 return ret # type: ignore
551
552 async def scroll(self, value: int, mode: str = "relative") -> None:
553 """Scroll to a new position according to mode.
554
555 If mode is relative (default), value is taken as offset
556 to the current position in the result set, if set to
557 absolute, value states an absolute target position.
558
559 """
560 self._impl.scroll(value, mode)
561 assert (
562 not self._conn.isexecuting()
563 ), "Don't support server side cursors yet"
564
565 @property
566 def arraysize(self) -> int:
567 """How many rows will be returned by fetchmany() call.
568
569 This read/write attribute specifies the number of rows to
570 fetch at a time with fetchmany(). It defaults to
571 1 meaning to fetch a single row at a time.
572
573 """
574 return self._impl.arraysize # type: ignore
575
576 @arraysize.setter
577 def arraysize(self, val: int) -> None:
578 """How many rows will be returned by fetchmany() call.
579
580 This read/write attribute specifies the number of rows to
581 fetch at a time with fetchmany(). It defaults to
582 1 meaning to fetch a single row at a time.
583
584 """
585 self._impl.arraysize = val
586
587 @property
588 def itersize(self) -> int:
589 # Not supported
590 return self._impl.itersize # type: ignore
591
592 @itersize.setter
593 def itersize(self, val: int) -> None:
594 # Not supported
595 self._impl.itersize = val
596
597 @property
598 def rowcount(self) -> int:
599 """Returns the number of rows that have been produced or affected.
600
601 This read-only attribute specifies the number of rows that the
602 last :meth:`execute` produced (for Data Query Language
603 statements like SELECT) or affected (for Data Manipulation
604 Language statements like UPDATE or INSERT).
605
606 The attribute is -1 in case no .execute() has been performed
607 on the cursor or the row count of the last operation if it
608 can't be determined by the interface.
609
610 """
611 return self._impl.rowcount # type: ignore
612
613 @property
614 def rownumber(self) -> int:
615 """Row index.
616
617 This read-only attribute provides the current 0-based index of the
618 cursor in the result set or ``None`` if the index cannot be
619 determined."""
620
621 return self._impl.rownumber # type: ignore
622
623 @property
624 def lastrowid(self) -> int:
625 """OID of the last inserted row.
626
627 This read-only attribute provides the OID of the last row
628 inserted by the cursor. If the table wasn't created with OID
629 support or the last operation is not a single record insert,
630 the attribute is set to None.
631
632 """
633 return self._impl.lastrowid # type: ignore
634
635 @property
636 def query(self) -> Optional[str]:
637 """The last executed query string.
638
639 Read-only attribute containing the body of the last query sent
640 to the backend (including bound arguments) as bytes
641 string. None if no query has been executed yet.
642
643 """
644 return self._impl.query # type: ignore
645
646 @property
647 def statusmessage(self) -> str:
648 """the message returned by the last command."""
649 return self._impl.statusmessage # type: ignore
650
651 @property
652 def tzinfo_factory(self) -> datetime.tzinfo:
653 """The time zone factory used to handle data types such as
654 `TIMESTAMP WITH TIME ZONE`.
655 """
656 return self._impl.tzinfo_factory # type: ignore
657
658 @tzinfo_factory.setter
659 def tzinfo_factory(self, val: datetime.tzinfo) -> None:
660 """The time zone factory used to handle data types such as
661 `TIMESTAMP WITH TIME ZONE`.
662 """
663 self._impl.tzinfo_factory = val
664
665 async def nextset(self) -> None:
666 # Not supported
667 self._impl.nextset() # raises psycopg2.NotSupportedError
668
669 async def setoutputsize(
670 self, size: int, column: Optional[int] = None
671 ) -> None:
672 # Does nothing
673 self._impl.setoutputsize(size, column)
674
675 async def copy_from(self, *args: Any, **kwargs: Any) -> None:
676 raise psycopg2.ProgrammingError(
677 "copy_from cannot be used in asynchronous mode"
678 )
679
680 async def copy_to(self, *args: Any, **kwargs: Any) -> None:
681 raise psycopg2.ProgrammingError(
682 "copy_to cannot be used in asynchronous mode"
683 )
684
685 async def copy_expert(self, *args: Any, **kwargs: Any) -> None:
686 raise psycopg2.ProgrammingError(
687 "copy_expert cannot be used in asynchronous mode"
688 )
689
690 @property
691 def timeout(self) -> float:
692 """Return default timeout for cursor operations."""
693 return self._timeout
694
695 def __aiter__(self) -> "Cursor":
696 return self
697
698 async def __anext__(self) -> Any:
699 ret = await self.fetchone()
700 if ret is not None:
701 return ret
702 raise StopAsyncIteration
703
704 async def __aenter__(self) -> "Cursor":
705 return self
706
707 async def __aexit__(
708 self,
709 exc_type: Optional[Type[BaseException]],
710 exc: Optional[BaseException],
711 tb: Optional[TracebackType],
712 ) -> None:
713 self.close()
714
715 def __repr__(self) -> str:
716 return (
717 f"<"
718 f"{type(self).__module__}::{type(self).__name__} "
719 f"name={self.name}, "
720 f"closed={self.closed}"
721 f">"
722 )
723
724
725 async def _close_cursor(c: Cursor) -> None:
726 c.close()
53727
54728
55729 class Connection:
63737 _source_traceback = None
64738
65739 def __init__(
66 self, dsn, timeout, echo,
67 *, enable_json=True, enable_hstore=True,
68 enable_uuid=True, **kwargs
740 self,
741 dsn: Optional[str],
742 timeout: float,
743 echo: bool = False,
744 enable_json: bool = True,
745 enable_hstore: bool = True,
746 enable_uuid: bool = True,
747 **kwargs: Any,
69748 ):
70749 self._enable_json = enable_json
71750 self._enable_hstore = enable_hstore
72751 self._enable_uuid = enable_uuid
73 self._loop = get_running_loop(kwargs.pop('loop', None) is not None)
74 self._waiter = self._loop.create_future()
75
76 kwargs['async_'] = kwargs.pop('async', True)
752 self._loop = get_running_loop()
753 self._waiter: Optional[
754 "asyncio.Future[None]"
755 ] = self._loop.create_future()
756
757 kwargs["async_"] = kwargs.pop("async", True)
758 kwargs.pop("loop", None) # backward compatibility
77759 self._conn = psycopg2.connect(dsn, **kwargs)
78760
79761 self._dsn = self._conn.dsn
80762 assert self._conn.isexecuting(), "Is conn an async at all???"
81 self._fileno = self._conn.fileno()
763 self._fileno: Optional[int] = self._conn.fileno()
82764 self._timeout = timeout
83765 self._last_usage = self._loop.time()
84766 self._writing = False
85767 self._echo = echo
86 self._cursor_instance = None
87 self._notifies = asyncio.Queue()
768 self._notifies = asyncio.Queue() # type: ignore
769 self._notifies_proxy = ClosableQueue(self._notifies, self._loop)
88770 self._weakref = weakref.ref(self)
89 self._loop.add_reader(self._fileno, self._ready, self._weakref)
771 self._loop.add_reader(
772 self._fileno, self._ready, self._weakref # type: ignore
773 )
90774
91775 if self._loop.get_debug():
92776 self._source_traceback = traceback.extract_stack(sys._getframe(1))
93777
94778 @staticmethod
95 def _ready(weak_self):
96 self = weak_self()
779 def _ready(weak_self: "weakref.ref[Any]") -> None:
780 self = cast(Connection, weak_self())
97781 if self is None:
98782 return
99783
127811 # chain exception otherwise
128812 exc2.__cause__ = exc
129813 exc = exc2
814 self._notifies_proxy.close(exc)
130815 if waiter is not None and not waiter.done():
131816 waiter.set_exception(exc)
132817 else:
134819 # connection closed
135820 if waiter is not None and not waiter.done():
136821 waiter.set_exception(
137 psycopg2.OperationalError("Connection closed"))
138 if state == POLL_OK:
822 psycopg2.OperationalError("Connection closed")
823 )
824 if state == psycopg2.extensions.POLL_OK:
139825 if self._writing:
140 self._loop.remove_writer(self._fileno)
826 self._loop.remove_writer(self._fileno) # type: ignore
141827 self._writing = False
142828 if waiter is not None and not waiter.done():
143829 waiter.set_result(None)
144 elif state == POLL_READ:
830 elif state == psycopg2.extensions.POLL_READ:
145831 if self._writing:
146 self._loop.remove_writer(self._fileno)
832 self._loop.remove_writer(self._fileno) # type: ignore
147833 self._writing = False
148 elif state == POLL_WRITE:
834 elif state == psycopg2.extensions.POLL_WRITE:
149835 if not self._writing:
150 self._loop.add_writer(self._fileno, self._ready, weak_self)
836 self._loop.add_writer(
837 self._fileno, self._ready, weak_self # type: ignore
838 )
151839 self._writing = True
152 elif state == POLL_ERROR:
153 self._fatal_error("Fatal error on aiopg connection: "
154 "POLL_ERROR from underlying .poll() call")
840 elif state == psycopg2.extensions.POLL_ERROR:
841 self._fatal_error(
842 "Fatal error on aiopg connection: "
843 "POLL_ERROR from underlying .poll() call"
844 )
155845 else:
156 self._fatal_error("Fatal error on aiopg connection: "
157 "unknown answer {} from underlying "
158 ".poll() call"
159 .format(state))
160
161 def _fatal_error(self, message):
846 self._fatal_error(
847 f"Fatal error on aiopg connection: "
848 f"unknown answer {state} from underlying "
849 f".poll() call"
850 )
851
852 def _fatal_error(self, message: str) -> None:
162853 # Should be called from exception handler only.
163 self._loop.call_exception_handler({
164 'message': message,
165 'connection': self,
166 })
854 self._loop.call_exception_handler(
855 {
856 "message": message,
857 "connection": self,
858 }
859 )
167860 self.close()
168861 if self._waiter and not self._waiter.done():
169862 self._waiter.set_exception(psycopg2.OperationalError(message))
170863
171 def _create_waiter(self, func_name):
864 def _create_waiter(self, func_name: str) -> "asyncio.Future[None]":
172865 if self._waiter is not None:
173 raise RuntimeError('%s() called while another coroutine is '
174 'already waiting for incoming data' % func_name)
866 raise RuntimeError(
867 f"{func_name}() called while another coroutine "
868 f"is already waiting for incoming data"
869 )
175870 self._waiter = self._loop.create_future()
176871 return self._waiter
177872
178 async def _poll(self, waiter, timeout):
873 async def _poll(
874 self, waiter: "asyncio.Future[None]", timeout: float
875 ) -> None:
179876 assert waiter is self._waiter, (waiter, self._waiter)
180877 self._ready(self._weakref)
181878
185882 await asyncio.shield(self.close())
186883 raise exc
187884 except psycopg2.extensions.QueryCanceledError as exc:
188 self._loop.call_exception_handler({
189 'message': exc.pgerror,
190 'exception': exc,
191 'future': self._waiter,
192 })
885 self._loop.call_exception_handler(
886 {
887 "message": exc.pgerror,
888 "exception": exc,
889 "future": self._waiter,
890 }
891 )
193892 raise asyncio.CancelledError
194893 finally:
195894 self._waiter = None
196895
197 def _isexecuting(self):
198 return self._conn.isexecuting()
199
200 def cursor(self, name=None, cursor_factory=None,
201 scrollable=None, withhold=False, timeout=None,
202 isolation_level=None):
896 def isexecuting(self) -> bool:
897 return self._conn.isexecuting() # type: ignore
898
899 def cursor(
900 self,
901 name: Optional[str] = None,
902 cursor_factory: Any = None,
903 scrollable: Optional[bool] = None,
904 withhold: bool = False,
905 timeout: Optional[float] = None,
906 isolation_level: Optional[IsolationLevel] = None,
907 ) -> _ContextManager[Cursor]:
203908 """A coroutine that returns a new cursor object using the connection.
204909
205910 *cursor_factory* argument can be used to create non-standard
209914 *name*, *scrollable* and *withhold* parameters are not supported by
210915 psycopg in asynchronous mode.
211916
212 NOTE: as of [TODO] any previously created cursor from this
213 connection will be closed
214 """
917 """
918
215919 self._last_usage = self._loop.time()
216 coro = self._cursor(name=name, cursor_factory=cursor_factory,
217 scrollable=scrollable, withhold=withhold,
218 timeout=timeout, isolation_level=isolation_level)
219 return _ContextManager(coro)
220
221 async def _cursor(self, name=None, cursor_factory=None,
222 scrollable=None, withhold=False, timeout=None,
223 isolation_level=None):
224
225 if not self.closed_cursor:
226 warnings.warn(('You can only have one cursor per connection. '
227 'The cursor for connection will be closed forcibly'
228 ' {!r}.').format(self), ResourceWarning)
229
230 self.free_cursor()
231
920 coro = self._cursor(
921 name=name,
922 cursor_factory=cursor_factory,
923 scrollable=scrollable,
924 withhold=withhold,
925 timeout=timeout,
926 isolation_level=isolation_level,
927 )
928 return _ContextManager[Cursor](coro, _close_cursor)
929
930 async def _cursor(
931 self,
932 name: Optional[str] = None,
933 cursor_factory: Any = None,
934 scrollable: Optional[bool] = None,
935 withhold: bool = False,
936 timeout: Optional[float] = None,
937 isolation_level: Optional[IsolationLevel] = None,
938 ) -> Cursor:
232939 if timeout is None:
233940 timeout = self._timeout
234941
235 impl = await self._cursor_impl(name=name,
236 cursor_factory=cursor_factory,
237 scrollable=scrollable,
238 withhold=withhold)
239 self._cursor_instance = Cursor(
240 self, impl, timeout, self._echo, isolation_level
241 )
242 return self._cursor_instance
243
244 async def _cursor_impl(self, name=None, cursor_factory=None,
245 scrollable=None, withhold=False):
942 impl = await self._cursor_impl(
943 name=name,
944 cursor_factory=cursor_factory,
945 scrollable=scrollable,
946 withhold=withhold,
947 )
948 cursor = Cursor(self, impl, timeout, self._echo, isolation_level)
949 return cursor
950
951 async def _cursor_impl(
952 self,
953 name: Optional[str] = None,
954 cursor_factory: Any = None,
955 scrollable: Optional[bool] = None,
956 withhold: bool = False,
957 ) -> Any:
246958 if cursor_factory is None:
247 impl = self._conn.cursor(name=name,
248 scrollable=scrollable, withhold=withhold)
959 impl = self._conn.cursor(
960 name=name, scrollable=scrollable, withhold=withhold
961 )
249962 else:
250 impl = self._conn.cursor(name=name, cursor_factory=cursor_factory,
251 scrollable=scrollable, withhold=withhold)
963 impl = self._conn.cursor(
964 name=name,
965 cursor_factory=cursor_factory,
966 scrollable=scrollable,
967 withhold=withhold,
968 )
252969 return impl
253970
254 def _close(self):
971 def _close(self) -> None:
255972 """Remove the connection from the event_loop and close it."""
256973 # N.B. If connection contains uncommitted transaction the
257974 # transaction will be discarded
262979 self._loop.remove_writer(self._fileno)
263980
264981 self._conn.close()
265 self.free_cursor()
266
267 if self._waiter is not None and not self._waiter.done():
268 self._waiter.set_exception(
269 psycopg2.OperationalError("Connection closed"))
270
271 @property
272 def closed_cursor(self):
273 if not self._cursor_instance:
274 return True
275
276 return bool(self._cursor_instance.closed)
277
278 def free_cursor(self):
279 if not self.closed_cursor:
280 self._cursor_instance.close()
281 self._cursor_instance = None
282
283 def close(self):
982
983 if not self._loop.is_closed():
984 if self._waiter is not None and not self._waiter.done():
985 self._waiter.set_exception(
986 psycopg2.OperationalError("Connection closed")
987 )
988
989 self._notifies_proxy.close(
990 psycopg2.OperationalError("Connection closed")
991 )
992
993 def close(self) -> "asyncio.Future[None]":
284994 self._close()
285 ret = self._loop.create_future()
286 ret.set_result(None)
287 return ret
288
289 @property
290 def closed(self):
995 return create_completed_future(self._loop)
996
997 @property
998 def closed(self) -> bool:
291999 """Connection status.
2921000
2931001 Read-only attribute reporting whether the database connection is
2941002 open (False) or closed (True).
2951003
2961004 """
297 return self._conn.closed
298
299 @property
300 def raw(self):
1005 return self._conn.closed # type: ignore
1006
1007 @property
1008 def raw(self) -> Any:
3011009 """Underlying psycopg connection object, readonly"""
3021010 return self._conn
3031011
304 async def commit(self):
305 raise psycopg2.ProgrammingError(
306 "commit cannot be used in asynchronous mode")
307
308 async def rollback(self):
309 raise psycopg2.ProgrammingError(
310 "rollback cannot be used in asynchronous mode")
1012 async def commit(self) -> None:
1013 raise psycopg2.ProgrammingError(
1014 "commit cannot be used in asynchronous mode"
1015 )
1016
1017 async def rollback(self) -> None:
1018 raise psycopg2.ProgrammingError(
1019 "rollback cannot be used in asynchronous mode"
1020 )
3111021
3121022 # TPC
3131023
314 async def xid(self, format_id, gtrid, bqual):
315 return self._conn.xid(format_id, gtrid, bqual)
316
317 async def tpc_begin(self, xid=None):
318 raise psycopg2.ProgrammingError(
319 "tpc_begin cannot be used in asynchronous mode")
320
321 async def tpc_prepare(self):
322 raise psycopg2.ProgrammingError(
323 "tpc_prepare cannot be used in asynchronous mode")
324
325 async def tpc_commit(self, xid=None):
326 raise psycopg2.ProgrammingError(
327 "tpc_commit cannot be used in asynchronous mode")
328
329 async def tpc_rollback(self, xid=None):
330 raise psycopg2.ProgrammingError(
331 "tpc_rollback cannot be used in asynchronous mode")
332
333 async def tpc_recover(self):
334 raise psycopg2.ProgrammingError(
335 "tpc_recover cannot be used in asynchronous mode")
336
337 async def cancel(self):
338 raise psycopg2.ProgrammingError(
339 "cancel cannot be used in asynchronous mode")
340
341 async def reset(self):
342 raise psycopg2.ProgrammingError(
343 "reset cannot be used in asynchronous mode")
344
345 @property
346 def dsn(self):
1024 async def xid(
1025 self, format_id: int, gtrid: str, bqual: str
1026 ) -> Tuple[int, str, str]:
1027 return self._conn.xid(format_id, gtrid, bqual) # type: ignore
1028
1029 async def tpc_begin(self, *args: Any, **kwargs: Any) -> None:
1030 raise psycopg2.ProgrammingError(
1031 "tpc_begin cannot be used in asynchronous mode"
1032 )
1033
1034 async def tpc_prepare(self) -> None:
1035 raise psycopg2.ProgrammingError(
1036 "tpc_prepare cannot be used in asynchronous mode"
1037 )
1038
1039 async def tpc_commit(self, *args: Any, **kwargs: Any) -> None:
1040 raise psycopg2.ProgrammingError(
1041 "tpc_commit cannot be used in asynchronous mode"
1042 )
1043
1044 async def tpc_rollback(self, *args: Any, **kwargs: Any) -> None:
1045 raise psycopg2.ProgrammingError(
1046 "tpc_rollback cannot be used in asynchronous mode"
1047 )
1048
1049 async def tpc_recover(self) -> None:
1050 raise psycopg2.ProgrammingError(
1051 "tpc_recover cannot be used in asynchronous mode"
1052 )
1053
1054 async def cancel(self) -> None:
1055 raise psycopg2.ProgrammingError(
1056 "cancel cannot be used in asynchronous mode"
1057 )
1058
1059 async def reset(self) -> None:
1060 raise psycopg2.ProgrammingError(
1061 "reset cannot be used in asynchronous mode"
1062 )
1063
1064 @property
1065 def dsn(self) -> Optional[str]:
3471066 """DSN connection string.
3481067
3491068 Read-only attribute representing dsn connection string used
3501069 for connecting to PostgreSQL server.
3511070
3521071 """
353 return self._dsn
354
355 async def set_session(self, *, isolation_level=None, readonly=None,
356 deferrable=None, autocommit=None):
357 raise psycopg2.ProgrammingError(
358 "set_session cannot be used in asynchronous mode")
359
360 @property
361 def autocommit(self):
1072 return self._dsn # type: ignore
1073
1074 async def set_session(self, *args: Any, **kwargs: Any) -> None:
1075 raise psycopg2.ProgrammingError(
1076 "set_session cannot be used in asynchronous mode"
1077 )
1078
1079 @property
1080 def autocommit(self) -> bool:
3621081 """Autocommit status"""
363 return self._conn.autocommit
1082 return self._conn.autocommit # type: ignore
3641083
3651084 @autocommit.setter
366 def autocommit(self, val):
1085 def autocommit(self, val: bool) -> None:
3671086 """Autocommit status"""
3681087 self._conn.autocommit = val
3691088
3701089 @property
371 def isolation_level(self):
1090 def isolation_level(self) -> int:
3721091 """Transaction isolation level.
3731092
3741093 The only allowed value is ISOLATION_LEVEL_READ_COMMITTED.
3751094
3761095 """
377 return self._conn.isolation_level
378
379 async def set_isolation_level(self, val):
1096 return self._conn.isolation_level # type: ignore
1097
1098 async def set_isolation_level(self, val: int) -> None:
3801099 """Transaction isolation level.
3811100
3821101 The only allowed value is ISOLATION_LEVEL_READ_COMMITTED.
3851104 self._conn.set_isolation_level(val)
3861105
3871106 @property
388 def encoding(self):
1107 def encoding(self) -> str:
3891108 """Client encoding for SQL operations."""
390 return self._conn.encoding
391
392 async def set_client_encoding(self, val):
1109 return self._conn.encoding # type: ignore
1110
1111 async def set_client_encoding(self, val: str) -> None:
3931112 self._conn.set_client_encoding(val)
3941113
3951114 @property
396 def notices(self):
1115 def notices(self) -> List[str]:
3971116 """A list of all db messages sent to the client during the session."""
398 return self._conn.notices
399
400 @property
401 def cursor_factory(self):
1117 return self._conn.notices # type: ignore
1118
1119 @property
1120 def cursor_factory(self) -> Any:
4021121 """The default cursor factory used by .cursor()."""
4031122 return self._conn.cursor_factory
4041123
405 async def get_backend_pid(self):
1124 async def get_backend_pid(self) -> int:
4061125 """Returns the PID of the backend server process."""
407 return self._conn.get_backend_pid()
408
409 async def get_parameter_status(self, parameter):
1126 return self._conn.get_backend_pid() # type: ignore
1127
1128 async def get_parameter_status(self, parameter: str) -> Optional[str]:
4101129 """Look up a current parameter setting of the server."""
411 return self._conn.get_parameter_status(parameter)
412
413 async def get_transaction_status(self):
1130 return self._conn.get_parameter_status(parameter) # type: ignore
1131
1132 async def get_transaction_status(self) -> int:
4141133 """Return the current session transaction status as an integer."""
415 return self._conn.get_transaction_status()
416
417 @property
418 def protocol_version(self):
1134 return self._conn.get_transaction_status() # type: ignore
1135
1136 @property
1137 def protocol_version(self) -> int:
4191138 """A read-only integer representing protocol being used."""
420 return self._conn.protocol_version
421
422 @property
423 def server_version(self):
1139 return self._conn.protocol_version # type: ignore
1140
1141 @property
1142 def server_version(self) -> int:
4241143 """A read-only integer representing the backend version."""
425 return self._conn.server_version
426
427 @property
428 def status(self):
1144 return self._conn.server_version # type: ignore
1145
1146 @property
1147 def status(self) -> int:
4291148 """A read-only integer representing the status of the connection."""
430 return self._conn.status
431
432 async def lobject(self, *args, **kwargs):
433 raise psycopg2.ProgrammingError(
434 "lobject cannot be used in asynchronous mode")
435
436 @property
437 def timeout(self):
1149 return self._conn.status # type: ignore
1150
1151 async def lobject(self, *args: Any, **kwargs: Any) -> None:
1152 raise psycopg2.ProgrammingError(
1153 "lobject cannot be used in asynchronous mode"
1154 )
1155
1156 @property
1157 def timeout(self) -> float:
4381158 """Return default timeout for connection operations."""
4391159 return self._timeout
4401160
4411161 @property
442 def last_usage(self):
1162 def last_usage(self) -> float:
4431163 """Return time() when connection was used."""
4441164 return self._last_usage
4451165
4461166 @property
447 def echo(self):
1167 def echo(self) -> bool:
4481168 """Return echo mode status."""
4491169 return self._echo
4501170
451 def __repr__(self):
452 msg = (
453 '<'
454 '{module_name}::{class_name} '
455 'isexecuting={isexecuting}, '
456 'closed={closed}, '
457 'echo={echo}, '
458 'cursor={cursor}'
459 '>'
460 )
461 return msg.format(
462 module_name=type(self).__module__,
463 class_name=type(self).__name__,
464 echo=self.echo,
465 isexecuting=self._isexecuting(),
466 closed=bool(self.closed),
467 cursor=repr(self._cursor_instance)
468 )
469
470 def __del__(self):
1171 def __repr__(self) -> str:
1172 return (
1173 f"<"
1174 f"{type(self).__module__}::{type(self).__name__} "
1175 f"isexecuting={self.isexecuting()}, "
1176 f"closed={self.closed}, "
1177 f"echo={self.echo}, "
1178 f">"
1179 )
1180
1181 def __del__(self) -> None:
4711182 try:
4721183 _conn = self._conn
4731184 except AttributeError:
4741185 return
4751186 if _conn is not None and not _conn.closed:
4761187 self.close()
477 warnings.warn("Unclosed connection {!r}".format(self),
478 ResourceWarning)
479
480 context = {'connection': self,
481 'message': 'Unclosed connection'}
1188 warnings.warn(f"Unclosed connection {self!r}", ResourceWarning)
1189
1190 context = {"connection": self, "message": "Unclosed connection"}
4821191 if self._source_traceback is not None:
483 context['source_traceback'] = self._source_traceback
1192 context["source_traceback"] = self._source_traceback
4841193 self._loop.call_exception_handler(context)
4851194
4861195 @property
487 def notifies(self):
488 """Return notification queue."""
489 return self._notifies
490
491 async def _get_oids(self):
492 cur = await self.cursor()
1196 def notifies(self) -> ClosableQueue:
1197 """Return notification queue (an asyncio.Queue -like object)."""
1198 return self._notifies_proxy
1199
1200 async def _get_oids(self) -> Tuple[Any, Any]:
1201 cursor = await self.cursor()
4931202 rv0, rv1 = [], []
4941203 try:
495 await cur.execute(
1204 await cursor.execute(
4961205 "SELECT t.oid, typarray "
4971206 "FROM pg_type t JOIN pg_namespace ns ON typnamespace = ns.oid "
4981207 "WHERE typname = 'hstore';"
4991208 )
5001209
501 async for oids in cur:
1210 async for oids in cursor:
5021211 if isinstance(oids, Mapping):
503 rv0.append(oids['oid'])
504 rv1.append(oids['typarray'])
1212 rv0.append(oids["oid"])
1213 rv1.append(oids["typarray"])
5051214 else:
5061215 rv0.append(oids[0])
5071216 rv1.append(oids[1])
5081217 finally:
509 cur.close()
1218 cursor.close()
5101219
5111220 return tuple(rv0), tuple(rv1)
5121221
513 async def _connect(self):
1222 async def _connect(self) -> "Connection":
5141223 try:
515 await self._poll(self._waiter, self._timeout)
516 except Exception:
517 self.close()
1224 await self._poll(self._waiter, self._timeout) # type: ignore
1225 except BaseException:
1226 await asyncio.shield(self.close())
5181227 raise
5191228 if self._enable_json:
520 extras.register_default_json(self._conn)
1229 psycopg2.extras.register_default_json(self._conn)
5211230 if self._enable_uuid:
522 extras.register_uuid(conn_or_curs=self._conn)
1231 psycopg2.extras.register_uuid(conn_or_curs=self._conn)
5231232 if self._enable_hstore:
524 oids = await self._get_oids()
525 if oids is not None:
526 oid, array_oid = oids
527 extras.register_hstore(
528 self._conn,
529 oid=oid,
530 array_oid=array_oid
531 )
1233 oid, array_oid = await self._get_oids()
1234 psycopg2.extras.register_hstore(
1235 self._conn, oid=oid, array_oid=array_oid
1236 )
5321237
5331238 return self
5341239
535 def __await__(self):
1240 def __await__(self) -> Generator[Any, None, "Connection"]:
5361241 return self._connect().__await__()
5371242
538 async def __aenter__(self):
1243 async def __aenter__(self) -> "Connection":
5391244 return self
5401245
541 async def __aexit__(self, exc_type, exc_val, exc_tb):
542 self.close()
1246 async def __aexit__(
1247 self,
1248 exc_type: Optional[Type[BaseException]],
1249 exc: Optional[BaseException],
1250 tb: Optional[TracebackType],
1251 ) -> None:
1252 await self.close()
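
The connection changes above replace the raw ``asyncio.Queue`` of notifications with a ``ClosableQueue`` proxy that is closed with an ``OperationalError`` when the connection goes away. The following is a minimal sketch of consuming ``LISTEN``/``NOTIFY`` messages through that queue; the DSN, the channel name ``my_channel`` and the ``finish`` payload are placeholders, not values taken from this release.

.. code:: python

    import asyncio

    import aiopg

    DSN = "dbname=aiopg user=aiopg password=passwd host=127.0.0.1"  # placeholder


    async def listen(conn):
        async with conn.cursor() as cur:
            await cur.execute("LISTEN my_channel")
            while True:
                # conn.notifies is an asyncio.Queue-like proxy; per the change
                # above it is closed with OperationalError("Connection closed")
                # instead of blocking forever once the connection is gone.
                msg = await conn.notifies.get()
                if msg.payload == "finish":
                    return
                print("received:", msg.payload)


    async def main():
        async with aiopg.connect(DSN) as conn:
            await listen(conn)


    asyncio.run(main())
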
+0
-397
aiopg/cursor.py less more
0 import asyncio
1
2 import psycopg2
3
4 from .log import logger
5 from .transaction import IsolationLevel, Transaction
6 from .utils import _TransactionBeginContextManager
7
8
9 class Cursor:
10 def __init__(self, conn, impl, timeout, echo, isolation_level):
11 self._conn = conn
12 self._impl = impl
13 self._timeout = timeout
14 self._echo = echo
15 self._transaction = Transaction(
16 self, isolation_level or IsolationLevel.default
17 )
18
19 @property
20 def echo(self):
21 """Return echo mode status."""
22 return self._echo
23
24 @property
25 def description(self):
26 """This read-only attribute is a sequence of 7-item sequences.
27
28 Each of these sequences is a collections.namedtuple containing
29 information describing one result column:
30
31 0. name: the name of the column returned.
32 1. type_code: the PostgreSQL OID of the column.
33 2. display_size: the actual length of the column in bytes.
34 3. internal_size: the size in bytes of the column associated to
35 this column on the server.
36 4. precision: total number of significant digits in columns of
37 type NUMERIC. None for other types.
38 5. scale: count of decimal digits in the fractional part in
39 columns of type NUMERIC. None for other types.
40 6. null_ok: always None as not easy to retrieve from the libpq.
41
42 This attribute will be None for operations that do not
43 return rows or if the cursor has not had an operation invoked
44 via the execute() method yet.
45
46 """
47 return self._impl.description
48
49 def close(self):
50 """Close the cursor now."""
51 if not self.closed:
52 self._impl.close()
53
54 @property
55 def closed(self):
56 """Read-only boolean attribute: specifies if the cursor is closed."""
57 return self._impl.closed
58
59 @property
60 def connection(self):
61 """Read-only attribute returning a reference to the `Connection`."""
62 return self._conn
63
64 @property
65 def raw(self):
66 """Underlying psycopg cursor object, readonly"""
67 return self._impl
68
69 @property
70 def name(self):
71 # Not supported
72 return self._impl.name
73
74 @property
75 def scrollable(self):
76 # Not supported
77 return self._impl.scrollable
78
79 @scrollable.setter
80 def scrollable(self, val):
81 # Not supported
82 self._impl.scrollable = val
83
84 @property
85 def withhold(self):
86 # Not supported
87 return self._impl.withhold
88
89 @withhold.setter
90 def withhold(self, val):
91 # Not supported
92 self._impl.withhold = val
93
94 async def execute(self, operation, parameters=None, *, timeout=None):
95 """Prepare and execute a database operation (query or command).
96
97 Parameters may be provided as sequence or mapping and will be
98 bound to variables in the operation. Variables are specified
99 either with positional %s or named %({name})s placeholders.
100
101 """
102 if timeout is None:
103 timeout = self._timeout
104 waiter = self._conn._create_waiter('cursor.execute')
105 if self._echo:
106 logger.info(operation)
107 logger.info("%r", parameters)
108 try:
109 self._impl.execute(operation, parameters)
110 except BaseException:
111 self._conn._waiter = None
112 raise
113 try:
114 await self._conn._poll(waiter, timeout)
115 except asyncio.TimeoutError:
116 self._impl.close()
117 raise
118
119 async def executemany(self, operation, seq_of_parameters):
120 # Not supported
121 raise psycopg2.ProgrammingError(
122 "executemany cannot be used in asynchronous mode")
123
124 async def callproc(self, procname, parameters=None, *, timeout=None):
125 """Call a stored database procedure with the given name.
126
127 The sequence of parameters must contain one entry for each
128 argument that the procedure expects. The result of the call is
129 returned as modified copy of the input sequence. Input
130 parameters are left untouched, output and input/output
131 parameters replaced with possibly new values.
132
133 """
134 if timeout is None:
135 timeout = self._timeout
136 waiter = self._conn._create_waiter('cursor.callproc')
137 if self._echo:
138 logger.info("CALL %s", procname)
139 logger.info("%r", parameters)
140 try:
141 self._impl.callproc(procname, parameters)
142 except BaseException:
143 self._conn._waiter = None
144 raise
145 else:
146 await self._conn._poll(waiter, timeout)
147
148 def begin(self):
149 return _TransactionBeginContextManager(self._transaction.begin())
150
151 def begin_nested(self):
152 if not self._transaction.is_begin:
153 return _TransactionBeginContextManager(
154 self._transaction.begin())
155 else:
156 return self._transaction.point()
157
158 def mogrify(self, operation, parameters=None):
159 """Return a query string after arguments binding.
160
161 The string returned is exactly the one that would be sent to
162 the database running the .execute() method or similar.
163
164 """
165 ret = self._impl.mogrify(operation, parameters)
166 assert not self._conn._isexecuting(), ("Don't support server side "
167 "mogrify")
168 return ret
169
170 async def setinputsizes(self, sizes):
171 """This method is exposed in compliance with the DBAPI.
172
173 It currently does nothing but it is safe to call it.
174
175 """
176 self._impl.setinputsizes(sizes)
177
178 async def fetchone(self):
179 """Fetch the next row of a query result set.
180
181 Returns a single tuple, or None when no more data is
182 available.
183
184 """
185 ret = self._impl.fetchone()
186 assert not self._conn._isexecuting(), ("Don't support server side "
187 "cursors yet")
188 return ret
189
190 async def fetchmany(self, size=None):
191 """Fetch the next set of rows of a query result.
192
193 Returns a list of tuples. An empty list is returned when no
194 more rows are available.
195
196 The number of rows to fetch per call is specified by the
197 parameter. If it is not given, the cursor's .arraysize
198 determines the number of rows to be fetched. The method should
199 try to fetch as many rows as indicated by the size
200 parameter. If this is not possible due to the specified number
201 of rows not being available, fewer rows may be returned.
202
203 """
204 if size is None:
205 size = self._impl.arraysize
206 ret = self._impl.fetchmany(size)
207 assert not self._conn._isexecuting(), ("Don't support server side "
208 "cursors yet")
209 return ret
210
211 async def fetchall(self):
212 """Fetch all (remaining) rows of a query result.
213
214 Returns them as a list of tuples. An empty list is returned
215 if there is no more record to fetch.
216
217 """
218 ret = self._impl.fetchall()
219 assert not self._conn._isexecuting(), ("Don't support server side "
220 "cursors yet")
221 return ret
222
223 async def scroll(self, value, mode="relative"):
224 """Scroll to a new position according to mode.
225
226 If mode is relative (default), value is taken as offset
227 to the current position in the result set, if set to
228 absolute, value states an absolute target position.
229
230 """
231 ret = self._impl.scroll(value, mode)
232 assert not self._conn._isexecuting(), ("Don't support server side "
233 "cursors yet")
234 return ret
235
236 @property
237 def arraysize(self):
238 """How many rows will be returned by fetchmany() call.
239
240 This read/write attribute specifies the number of rows to
241 fetch at a time with fetchmany(). It defaults to
242 1 meaning to fetch a single row at a time.
243
244 """
245 return self._impl.arraysize
246
247 @arraysize.setter
248 def arraysize(self, val):
249 """How many rows will be returned by fetchmany() call.
250
251 This read/write attribute specifies the number of rows to
252 fetch at a time with fetchmany(). It defaults to
253 1 meaning to fetch a single row at a time.
254
255 """
256 self._impl.arraysize = val
257
258 @property
259 def itersize(self):
260 # Not supported
261 return self._impl.itersize
262
263 @itersize.setter
264 def itersize(self, val):
265 # Not supported
266 self._impl.itersize = val
267
268 @property
269 def rowcount(self):
270 """Returns the number of rows that has been produced of affected.
271
272 This read-only attribute specifies the number of rows that the
273 last :meth:`execute` produced (for Data Query Language
274 statements like SELECT) or affected (for Data Manipulation
275 Language statements like UPDATE or INSERT).
276
277 The attribute is -1 in case no .execute() has been performed
278 on the cursor or the row count of the last operation if it
279 can't be determined by the interface.
280
281 """
282 return self._impl.rowcount
283
284 @property
285 def rownumber(self):
286 """Row index.
287
288 This read-only attribute provides the current 0-based index of the
289 cursor in the result set or ``None`` if the index cannot be
290 determined."""
291
292 return self._impl.rownumber
293
294 @property
295 def lastrowid(self):
296 """OID of the last inserted row.
297
298 This read-only attribute provides the OID of the last row
299 inserted by the cursor. If the table wasn't created with OID
300 support or the last operation is not a single record insert,
301 the attribute is set to None.
302
303 """
304 return self._impl.lastrowid
305
306 @property
307 def query(self):
308 """The last executed query string.
309
310 Read-only attribute containing the body of the last query sent
311 to the backend (including bound arguments) as bytes
312 string. None if no query has been executed yet.
313
314 """
315 return self._impl.query
316
317 @property
318 def statusmessage(self):
319 """the message returned by the last command."""
320
321 return self._impl.statusmessage
322
323 # async def cast(self, old, s):
324 # ...
325
326 @property
327 def tzinfo_factory(self):
328 """The time zone factory used to handle data types such as
329 `TIMESTAMP WITH TIME ZONE`.
330 """
331 return self._impl.tzinfo_factory
332
333 @tzinfo_factory.setter
334 def tzinfo_factory(self, val):
335 """The time zone factory used to handle data types such as
336 `TIMESTAMP WITH TIME ZONE`.
337 """
338 self._impl.tzinfo_factory = val
339
340 async def nextset(self):
341 # Not supported
342 self._impl.nextset() # raises psycopg2.NotSupportedError
343
344 async def setoutputsize(self, size, column=None):
345 # Does nothing
346 self._impl.setoutputsize(size, column)
347
348 async def copy_from(self, file, table, sep='\t', null='\\N', size=8192,
349 columns=None):
350 raise psycopg2.ProgrammingError(
351 "copy_from cannot be used in asynchronous mode")
352
353 async def copy_to(self, file, table, sep='\t', null='\\N', columns=None):
354 raise psycopg2.ProgrammingError(
355 "copy_to cannot be used in asynchronous mode")
356
357 async def copy_expert(self, sql, file, size=8192):
358 raise psycopg2.ProgrammingError(
359 "copy_expert cannot be used in asynchronous mode")
360
361 @property
362 def timeout(self):
363 """Return default timeout for cursor operations."""
364 return self._timeout
365
366 def __aiter__(self):
367 return self
368
369 async def __anext__(self):
370 ret = await self.fetchone()
371 if ret is not None:
372 return ret
373 else:
374 raise StopAsyncIteration
375
376 async def __aenter__(self):
377 return self
378
379 async def __aexit__(self, exc_type, exc_val, exc_tb):
380 self.close()
381 return
382
383 def __repr__(self):
384 msg = (
385 '<'
386 '{module_name}::{class_name} '
387 'name={name}, '
388 'closed={closed}'
389 '>'
390 )
391 return msg.format(
392 module_name=type(self).__module__,
393 class_name=type(self).__name__,
394 name=self.name,
395 closed=self.closed
396 )
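
The ``Cursor`` class removed above now lives alongside ``Connection`` and keeps the DB-API fetch semantics described in these docstrings, while the old one-cursor-per-connection restriction is gone. A minimal sketch of the fetch calls under that assumption; the DSN is a placeholder and ``generate_series`` is used only so the example needs no tables.

.. code:: python

    import asyncio

    import aiopg

    DSN = "dbname=aiopg user=aiopg password=passwd host=127.0.0.1"  # placeholder


    async def main():
        async with aiopg.connect(DSN) as conn:
            # Multiple cursors per connection are allowed now; the forced
            # closing of a previous cursor shown above was removed.
            async with conn.cursor() as cur:
                await cur.execute("SELECT * FROM generate_series(1, 3)")
                print(await cur.fetchone())    # (1,)
                print(await cur.fetchmany(2))  # [(2,), (3,)]
                print(cur.rowcount)            # 3


    asyncio.run(main())
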
00 import asyncio
11 import collections
22 import warnings
3 from types import TracebackType
4 from typing import (
5 Any,
6 Awaitable,
7 Callable,
8 Deque,
9 Generator,
10 Optional,
11 Set,
12 Type,
13 )
314
415 import async_timeout
5 from psycopg2.extensions import TRANSACTION_STATUS_IDLE
6
7 from .connection import TIMEOUT, connect
8 from .utils import (
9 _PoolAcquireContextManager,
10 _PoolConnectionContextManager,
11 _PoolContextManager,
12 _PoolCursorContextManager,
13 ensure_future,
14 get_running_loop,
15 )
16
17
18 def create_pool(dsn=None, *, minsize=1, maxsize=10,
19 timeout=TIMEOUT, pool_recycle=-1,
20 enable_json=True, enable_hstore=True, enable_uuid=True,
21 echo=False, on_connect=None,
22 **kwargs):
16 import psycopg2.extensions
17
18 from .connection import TIMEOUT, Connection, Cursor, connect
19 from .utils import _ContextManager, create_completed_future, get_running_loop
20
21
22 def create_pool(
23 dsn: Optional[str] = None,
24 *,
25 minsize: int = 1,
26 maxsize: int = 10,
27 timeout: float = TIMEOUT,
28 pool_recycle: float = -1.0,
29 enable_json: bool = True,
30 enable_hstore: bool = True,
31 enable_uuid: bool = True,
32 echo: bool = False,
33 on_connect: Optional[Callable[[Connection], Awaitable[None]]] = None,
34 **kwargs: Any,
35 ) -> _ContextManager["Pool"]:
2336 coro = Pool.from_pool_fill(
24 dsn, minsize, maxsize, timeout,
25 enable_json=enable_json, enable_hstore=enable_hstore,
26 enable_uuid=enable_uuid, echo=echo, on_connect=on_connect,
27 pool_recycle=pool_recycle, **kwargs
37 dsn,
38 minsize,
39 maxsize,
40 timeout,
41 enable_json=enable_json,
42 enable_hstore=enable_hstore,
43 enable_uuid=enable_uuid,
44 echo=echo,
45 on_connect=on_connect,
46 pool_recycle=pool_recycle,
47 **kwargs,
2848 )
29
30 return _PoolContextManager(coro)
31
32
33 class Pool(asyncio.AbstractServer):
49 return _ContextManager[Pool](coro, _destroy_pool)
50
51
52 async def _destroy_pool(pool: "Pool") -> None:
53 pool.close()
54 await pool.wait_closed()
55
56
57 class _PoolConnectionContextManager:
58 """Context manager.
59
60 This enables the following idiom for acquiring and releasing a
61 connection around a block:
62
63 async with pool as conn:
64 cur = await conn.cursor()
65
66 while failing loudly when accidentally using:
67
68 with pool:
69 <block>
70 """
71
72 __slots__ = ("_pool", "_conn")
73
74 def __init__(self, pool: "Pool", conn: Connection):
75 self._pool: Optional[Pool] = pool
76 self._conn: Optional[Connection] = conn
77
78 def __enter__(self) -> Connection:
79 assert self._conn
80 return self._conn
81
82 def __exit__(
83 self,
84 exc_type: Optional[Type[BaseException]],
85 exc: Optional[BaseException],
86 tb: Optional[TracebackType],
87 ) -> None:
88 if self._pool is None or self._conn is None:
89 return
90 try:
91 self._pool.release(self._conn)
92 finally:
93 self._pool = None
94 self._conn = None
95
96 async def __aenter__(self) -> Connection:
97 assert self._conn
98 return self._conn
99
100 async def __aexit__(
101 self,
102 exc_type: Optional[Type[BaseException]],
103 exc: Optional[BaseException],
104 tb: Optional[TracebackType],
105 ) -> None:
106 if self._pool is None or self._conn is None:
107 return
108 try:
109 await self._pool.release(self._conn)
110 finally:
111 self._pool = None
112 self._conn = None
113
114
115 class _PoolCursorContextManager:
116 """Context manager.
117
118 This enables the following idiom for acquiring and releasing a
119 cursor around a block:
120
121 async with pool.cursor() as cur:
122 await cur.execute("SELECT 1")
123
124 while failing loudly when accidentally using:
125
126 with pool:
127 <block>
128 """
129
130 __slots__ = ("_pool", "_conn", "_cursor")
131
132 def __init__(self, pool: "Pool", conn: Connection, cursor: Cursor):
133 self._pool = pool
134 self._conn = conn
135 self._cursor = cursor
136
137 def __enter__(self) -> Cursor:
138 return self._cursor
139
140 def __exit__(
141 self,
142 exc_type: Optional[Type[BaseException]],
143 exc: Optional[BaseException],
144 tb: Optional[TracebackType],
145 ) -> None:
146 try:
147 self._cursor.close()
148 except psycopg2.ProgrammingError:
149 # seen instances where the cursor fails to close:
150 # https://github.com/aio-libs/aiopg/issues/364
151 # We close it here so we don't return a bad connection to the pool
152 self._conn.close()
153 raise
154 finally:
155 try:
156 self._pool.release(self._conn)
157 finally:
158 self._pool = None # type: ignore
159 self._conn = None # type: ignore
160 self._cursor = None # type: ignore
161
162
163 class Pool:
34164 """Connection pool"""
35165
36 def __init__(self, dsn, minsize, maxsize, timeout, *,
37 enable_json, enable_hstore, enable_uuid, echo,
38 on_connect, pool_recycle, **kwargs):
166 def __init__(
167 self,
168 dsn: str,
169 minsize: int,
170 maxsize: int,
171 timeout: float,
172 *,
173 enable_json: bool,
174 enable_hstore: bool,
175 enable_uuid: bool,
176 echo: bool,
177 on_connect: Optional[Callable[[Connection], Awaitable[None]]],
178 pool_recycle: float,
179 **kwargs: Any,
180 ):
39181 if minsize < 0:
40182 raise ValueError("minsize should be zero or greater")
41183 if maxsize < minsize and maxsize != 0:
42184 raise ValueError("maxsize should be not less than minsize")
43185 self._dsn = dsn
44186 self._minsize = minsize
45 self._loop = get_running_loop(kwargs.pop('loop', None) is not None)
187 self._loop = get_running_loop()
46188 self._timeout = timeout
47189 self._recycle = pool_recycle
48190 self._enable_json = enable_json
52194 self._on_connect = on_connect
53195 self._conn_kwargs = kwargs
54196 self._acquiring = 0
55 self._free = collections.deque(maxlen=maxsize or None)
197 self._free: Deque[Connection] = collections.deque(
198 maxlen=maxsize or None
199 )
56200 self._cond = asyncio.Condition()
57 self._used = set()
58 self._terminated = set()
201 self._used: Set[Connection] = set()
202 self._terminated: Set[Connection] = set()
59203 self._closing = False
60204 self._closed = False
61205
62206 @property
63 def echo(self):
207 def echo(self) -> bool:
64208 return self._echo
65209
66210 @property
67 def minsize(self):
211 def minsize(self) -> int:
68212 return self._minsize
69213
70214 @property
71 def maxsize(self):
215 def maxsize(self) -> Optional[int]:
72216 return self._free.maxlen
73217
74218 @property
75 def size(self):
219 def size(self) -> int:
76220 return self.freesize + len(self._used) + self._acquiring
77221
78222 @property
79 def freesize(self):
223 def freesize(self) -> int:
80224 return len(self._free)
81225
82226 @property
83 def timeout(self):
227 def timeout(self) -> float:
84228 return self._timeout
85229
86 async def clear(self):
230 async def clear(self) -> None:
87231 """Close all free connections in pool."""
88232 async with self._cond:
89233 while self._free:
92236 self._cond.notify()
93237
94238 @property
95 def closed(self):
239 def closed(self) -> bool:
96240 return self._closed
97241
98 def close(self):
242 def close(self) -> None:
99243 """Close pool.
100244
101245 Mark all pool connections to be closed on getting back to pool.
105249 return
106250 self._closing = True
107251
108 def terminate(self):
252 def terminate(self) -> None:
109253 """Terminate pool.
110254
111255 Close pool with instantly closing all acquired connections also.
119263
120264 self._used.clear()
121265
122 async def wait_closed(self):
266 async def wait_closed(self) -> None:
123267 """Wait for closing all pool's connections."""
124268
125269 if self._closed:
126270 return
127271 if not self._closing:
128 raise RuntimeError(".wait_closed() should be called "
129 "after .close()")
272 raise RuntimeError(
273 ".wait_closed() should be called " "after .close()"
274 )
130275
131276 while self._free:
132277 conn = self._free.popleft()
133 conn.close()
278 await conn.close()
134279
135280 async with self._cond:
136281 while self.size > self.freesize:
138283
139284 self._closed = True
140285
141 def acquire(self):
286 def acquire(self) -> _ContextManager[Connection]:
142287 """Acquire free connection from the pool."""
143288 coro = self._acquire()
144 return _PoolAcquireContextManager(coro, self)
289 return _ContextManager[Connection](coro, self.release)
145290
146291 @classmethod
147 async def from_pool_fill(cls, *args, **kwargs):
292 async def from_pool_fill(cls, *args: Any, **kwargs: Any) -> "Pool":
148293 """constructor for filling the free pool with connections,
149294 the number is controlled by the minsize parameter
150295 """
155300
156301 return self
157302
158 async def _acquire(self):
303 async def _acquire(self) -> Connection:
159304 if self._closing:
160305 raise RuntimeError("Cannot acquire connection after closing pool")
161306 async with async_timeout.timeout(self._timeout), self._cond:
170315 else:
171316 await self._cond.wait()
172317
173 async def _fill_free_pool(self, override_min):
318 async def _fill_free_pool(self, override_min: bool) -> None:
174319 # iterate over free connections and remove timed-out ones
175320 n, free = 0, len(self._free)
176321 while n < free:
178323 if conn.closed:
179324 self._free.pop()
180325 elif -1 < self._recycle < self._loop.time() - conn.last_usage:
181 conn.close()
326 await conn.close()
182327 self._free.pop()
183328 else:
184329 self._free.rotate()
188333 self._acquiring += 1
189334 try:
190335 conn = await connect(
191 self._dsn, timeout=self._timeout,
336 self._dsn,
337 timeout=self._timeout,
192338 enable_json=self._enable_json,
193339 enable_hstore=self._enable_hstore,
194340 enable_uuid=self._enable_uuid,
195341 echo=self._echo,
196 **self._conn_kwargs)
342 **self._conn_kwargs,
343 )
197344 if self._on_connect is not None:
198345 await self._on_connect(conn)
199346 # raise exception if pool is closing
204351 if self._free:
205352 return
206353
207 if override_min and self.size < self.maxsize:
354 if override_min and self.size < (self.maxsize or 0):
208355 self._acquiring += 1
209356 try:
210357 conn = await connect(
211 self._dsn, timeout=self._timeout,
358 self._dsn,
359 timeout=self._timeout,
212360 enable_json=self._enable_json,
213361 enable_hstore=self._enable_hstore,
214362 enable_uuid=self._enable_uuid,
215363 echo=self._echo,
216 **self._conn_kwargs)
364 **self._conn_kwargs,
365 )
217366 if self._on_connect is not None:
218367 await self._on_connect(conn)
219368 # raise exception if pool is closing
222371 finally:
223372 self._acquiring -= 1
224373
225 async def _wakeup(self):
374 async def _wakeup(self) -> None:
226375 async with self._cond:
227376 self._cond.notify()
228377
229 def release(self, conn):
230 """Release free connection back to the connection pool.
231 """
232 fut = self._loop.create_future()
233 fut.set_result(None)
378 def release(self, conn: Connection) -> "asyncio.Future[None]":
379 """Release free connection back to the connection pool."""
380 future = create_completed_future(self._loop)
234381 if conn in self._terminated:
235382 assert conn.closed, conn
236383 self._terminated.remove(conn)
237 return fut
384 return future
238385 assert conn in self._used, (conn, self._used)
239386 self._used.remove(conn)
240 if not conn.closed:
241 tran_status = conn._conn.get_transaction_status()
242 if tran_status != TRANSACTION_STATUS_IDLE:
243 warnings.warn(
244 ("Invalid transaction status on "
245 "released connection: {}").format(tran_status),
246 ResourceWarning
247 )
248 conn.close()
249 return fut
250 if self._closing:
251 conn.close()
252 else:
253 conn.free_cursor()
254 self._free.append(conn)
255 fut = ensure_future(self._wakeup(), loop=self._loop)
256 return fut
257
258 async def cursor(self, name=None, cursor_factory=None,
259 scrollable=None, withhold=False, *, timeout=None):
387 if conn.closed:
388 return future
389 transaction_status = conn.raw.get_transaction_status()
390 if transaction_status != psycopg2.extensions.TRANSACTION_STATUS_IDLE:
391 warnings.warn(
392 f"Invalid transaction status on "
393 f"released connection: {transaction_status}",
394 ResourceWarning,
395 )
396 conn.close()
397 return future
398 if self._closing:
399 conn.close()
400 else:
401 self._free.append(conn)
402 return asyncio.ensure_future(self._wakeup(), loop=self._loop)
403
404 async def cursor(
405 self,
406 name: Optional[str] = None,
407 cursor_factory: Any = None,
408 scrollable: Optional[bool] = None,
409 withhold: bool = False,
410 *,
411 timeout: Optional[float] = None,
412 ) -> _PoolCursorContextManager:
260413 conn = await self.acquire()
261 cur = await conn.cursor(name=name, cursor_factory=cursor_factory,
262 scrollable=scrollable, withhold=withhold,
263 timeout=timeout)
264 return _PoolCursorContextManager(self, conn, cur)
265
266 def __await__(self):
414 cursor = await conn.cursor(
415 name=name,
416 cursor_factory=cursor_factory,
417 scrollable=scrollable,
418 withhold=withhold,
419 timeout=timeout,
420 )
421 return _PoolCursorContextManager(self, conn, cursor)
422
423 def __await__(self) -> Generator[Any, Any, _PoolConnectionContextManager]:
267424 # This is not a coroutine. It is meant to enable the idiom:
268425 #
269426 # with (await pool) as conn:
279436 conn = yield from self._acquire().__await__()
280437 return _PoolConnectionContextManager(self, conn)
281438
282 def __enter__(self):
439 def __enter__(self) -> "Pool":
283440 raise RuntimeError(
284 '"await" should be used as context manager expression')
285
286 def __exit__(self, *args):
441 '"await" should be used as context manager expression'
442 )
443
444 def __exit__(
445 self,
446 exc_type: Optional[Type[BaseException]],
447 exc: Optional[BaseException],
448 tb: Optional[TracebackType],
449 ) -> None:
287450 # This must exist because __enter__ exists, even though that
288451 # always raises; that's how the with-statement works.
289452 pass # pragma: nocover
290453
291 async def __aenter__(self):
454 async def __aenter__(self) -> "Pool":
292455 return self
293456
294 async def __aexit__(self, exc_type, exc_val, exc_tb):
457 async def __aexit__(
458 self,
459 exc_type: Optional[Type[BaseException]],
460 exc: Optional[BaseException],
461 tb: Optional[TracebackType],
462 ) -> None:
295463 self.close()
296464 await self.wait_closed()
297465
298 def __del__(self):
466 def __del__(self) -> None:
299467 try:
300468 self._free
301469 except AttributeError:
307475 conn.close()
308476 left += 1
309477 warnings.warn(
310 "Unclosed {} connections in {!r}".format(left, self),
311 ResourceWarning)
478 f"Unclosed {left} connections in {self!r}", ResourceWarning
479 )
88 ResourceClosedError,
99 )
1010
11 __all__ = ('create_engine', 'SAConnection', 'Error',
12 'ArgumentError', 'InvalidRequestError', 'NoSuchColumnError',
13 'ResourceClosedError', 'Engine')
11 __all__ = (
12 "create_engine",
13 "SAConnection",
14 "Error",
15 "ArgumentError",
16 "InvalidRequestError",
17 "NoSuchColumnError",
18 "ResourceClosedError",
19 "Engine",
20 )
1421
1522
16 (SAConnection, Error, ArgumentError, InvalidRequestError,
17 NoSuchColumnError, ResourceClosedError, create_engine, Engine)
23 (
24 SAConnection,
25 Error,
26 ArgumentError,
27 InvalidRequestError,
28 NoSuchColumnError,
29 ResourceClosedError,
30 create_engine,
31 Engine,
32 )
0 import asyncio
1 import contextlib
2 import weakref
3
04 from sqlalchemy.sql import ClauseElement
15 from sqlalchemy.sql.ddl import DDLElement
26 from sqlalchemy.sql.dml import UpdateBase
37
4 from ..utils import _SAConnectionContextManager, _TransactionContextManager
8 from ..utils import _ContextManager, _IterableContextManager
59 from . import exc
610 from .result import ResultProxy
711 from .transaction import (
1216 )
1317
1418
19 async def _commit_transaction_if_active(t: Transaction) -> None:
20 if t.is_active:
21 await t.commit()
22
23
24 async def _rollback_transaction(t: Transaction) -> None:
25 await t.rollback()
26
27
28 async def _close_result_proxy(c: "ResultProxy") -> None:
29 c.close()
30
31
1532 class SAConnection:
33 _QUERY_COMPILE_KWARGS = (("render_postcompile", True),)
34
35 __slots__ = (
36 "_connection",
37 "_transaction",
38 "_savepoint_seq",
39 "_engine",
40 "_dialect",
41 "_cursors",
42 "_query_compile_kwargs",
43 )
44
1645 def __init__(self, connection, engine):
1746 self._connection = connection
1847 self._transaction = None
1948 self._savepoint_seq = 0
2049 self._engine = engine
2150 self._dialect = engine.dialect
22 self._cursor = None
51 self._cursors = weakref.WeakSet()
52 self._query_compile_kwargs = dict(self._QUERY_COMPILE_KWARGS)
2353
2454 def execute(self, query, *multiparams, **params):
2555 """Executes a SQL query with optional parameters.
5989
6090 """
6191 coro = self._execute(query, *multiparams, **params)
62 return _SAConnectionContextManager(coro)
63
64 async def _get_cursor(self):
65 if self._cursor and not self._cursor.closed:
66 return self._cursor
67
68 self._cursor = await self._connection.cursor()
69 return self._cursor
92 return _IterableContextManager[ResultProxy](coro, _close_result_proxy)
93
94 async def _open_cursor(self):
95 if self._connection is None:
96 raise exc.ResourceClosedError("This connection is closed.")
97 cursor = await self._connection.cursor()
98 self._cursors.add(cursor)
99 return cursor
100
101 def _close_cursor(self, cursor):
102 self._cursors.remove(cursor)
103 cursor.close()
70104
71105 async def _execute(self, query, *multiparams, **params):
72 cursor = await self._get_cursor()
106 cursor = await self._open_cursor()
73107 dp = _distill_params(multiparams, params)
74108 if len(dp) > 1:
75109 raise exc.ArgumentError("aiopg doesn't support executemany")
81115 if isinstance(query, str):
82116 await cursor.execute(query, dp)
83117 elif isinstance(query, ClauseElement):
84 compiled = query.compile(dialect=self._dialect)
85118 # parameters = compiled.params
86119 if not isinstance(query, DDLElement):
120 compiled = query.compile(
121 dialect=self._dialect,
122 compile_kwargs=self._query_compile_kwargs,
123 )
87124 if dp and isinstance(dp, (list, tuple)):
88125 if isinstance(query, UpdateBase):
89 dp = {c.key: pval
90 for c, pval in zip(query.table.c, dp)}
126 dp = {
127 c.key: pval for c, pval in zip(query.table.c, dp)
128 }
91129 else:
92 raise exc.ArgumentError("Don't mix sqlalchemy SELECT "
93 "clause with positional "
94 "parameters")
130 raise exc.ArgumentError(
131 "Don't mix sqlalchemy SELECT "
132 "clause with positional "
133 "parameters"
134 )
95135
96136 compiled_parameters = [compiled.construct_params(dp)]
97137 processed_parameters = []
98138 processors = compiled._bind_processors
99139 for compiled_params in compiled_parameters:
100 params = {key: (processors[key](compiled_params[key])
101 if key in processors
102 else compiled_params[key])
103 for key in compiled_params}
140 params = {
141 key: (
142 processors[key](compiled_params[key])
143 if key in processors
144 else compiled_params[key]
145 )
146 for key in compiled_params
147 }
104148 processed_parameters.append(params)
105149 post_processed_params = self._dialect.execute_sequence_format(
106 processed_parameters)
150 processed_parameters
151 )
107152
108153 # _result_columns is a private API of Compiled,
109154 # but I couldn't find any public API exposing this data.
110155 result_map = compiled._result_columns
111156
112157 else:
158 compiled = query.compile(dialect=self._dialect)
113159 if dp:
114 raise exc.ArgumentError("Don't mix sqlalchemy DDL clause "
115 "and execution with parameters")
160 raise exc.ArgumentError(
161 "Don't mix sqlalchemy DDL clause "
162 "and execution with parameters"
163 )
116164 post_processed_params = [compiled.construct_params()]
117165 result_map = None
118166
119167 await cursor.execute(str(compiled), post_processed_params[0])
120168 else:
121 raise exc.ArgumentError("sql statement should be str or "
122 "SQLAlchemy data "
123 "selection/modification clause")
169 raise exc.ArgumentError(
170 "sql statement should be str or "
171 "SQLAlchemy data "
172 "selection/modification clause"
173 )
124174
125175 return ResultProxy(self, cursor, self._dialect, result_map)
126176
176226
177227 """
178228 coro = self._begin(isolation_level, readonly, deferrable)
179 return _TransactionContextManager(coro)
229 return _ContextManager[Transaction](
230 coro, _commit_transaction_if_active, _rollback_transaction
231 )
180232
181233 async def _begin(self, isolation_level, readonly, deferrable):
182234 if self._transaction is None:
183235 self._transaction = RootTransaction(self)
184236 await self._begin_impl(isolation_level, readonly, deferrable)
185237 return self._transaction
186 else:
187 return Transaction(self, self._transaction)
238 return Transaction(self, self._transaction)
188239
189240 async def _begin_impl(self, isolation_level, readonly, deferrable):
190 stmt = 'BEGIN'
241 stmt = "BEGIN"
191242 if isolation_level is not None:
192 stmt += ' ISOLATION LEVEL ' + isolation_level
243 stmt += f" ISOLATION LEVEL {isolation_level}"
193244 if readonly:
194 stmt += ' READ ONLY'
245 stmt += " READ ONLY"
195246 if deferrable:
196 stmt += ' DEFERRABLE'
197
198 cur = await self._get_cursor()
199 try:
200 await cur.execute(stmt)
201 finally:
202 cur.close()
247 stmt += " DEFERRABLE"
248
249 cursor = await self._open_cursor()
250 try:
251 await cursor.execute(stmt)
252 finally:
253 self._close_cursor(cursor)
203254
204255 async def _commit_impl(self):
205 cur = await self._get_cursor()
206 try:
207 await cur.execute('COMMIT')
208 finally:
209 cur.close()
256 cursor = await self._open_cursor()
257 try:
258 await cursor.execute("COMMIT")
259 finally:
260 self._close_cursor(cursor)
210261 self._transaction = None
211262
212263 async def _rollback_impl(self):
213 cur = await self._get_cursor()
214 try:
215 await cur.execute('ROLLBACK')
216 finally:
217 cur.close()
264 try:
265 if self._connection.closed:
266 return
267 cursor = await self._open_cursor()
268 try:
269 await cursor.execute("ROLLBACK")
270 finally:
271 self._close_cursor(cursor)
272 finally:
218273 self._transaction = None
219274
220275 def begin_nested(self):
229284 transaction of a whole.
230285 """
231286 coro = self._begin_nested()
232 return _TransactionContextManager(coro)
287 return _ContextManager(
288 coro, _commit_transaction_if_active, _rollback_transaction
289 )
233290
234291 async def _begin_nested(self):
235292 if self._transaction is None:
240297 self._transaction._savepoint = await self._savepoint_impl()
241298 return self._transaction
242299
243 async def _savepoint_impl(self, name=None):
300 async def _savepoint_impl(self):
244301 self._savepoint_seq += 1
245 name = 'aiopg_sa_savepoint_%s' % self._savepoint_seq
246
247 cur = await self._get_cursor()
248 try:
249 await cur.execute('SAVEPOINT ' + name)
302 name = f"aiopg_sa_savepoint_{self._savepoint_seq}"
303
304 cursor = await self._open_cursor()
305 try:
306 await cursor.execute(f"SAVEPOINT {name}")
250307 return name
251308 finally:
252 cur.close()
309 self._close_cursor(cursor)
253310
254311 async def _rollback_to_savepoint_impl(self, name, parent):
255 cur = await self._get_cursor()
256 try:
257 await cur.execute('ROLLBACK TO SAVEPOINT ' + name)
258 finally:
259 cur.close()
260 self._transaction = parent
312 try:
313 if self._connection.closed:
314 return
315 cursor = await self._open_cursor()
316 try:
317 await cursor.execute(f"ROLLBACK TO SAVEPOINT {name}")
318 finally:
319 self._close_cursor(cursor)
320 finally:
321 self._transaction = parent
261322
262323 async def _release_savepoint_impl(self, name, parent):
263 cur = await self._get_cursor()
264 try:
265 await cur.execute('RELEASE SAVEPOINT ' + name)
266 finally:
267 cur.close()
324 cursor = await self._open_cursor()
325 try:
326 await cursor.execute(f"RELEASE SAVEPOINT {name}")
327 finally:
328 self._close_cursor(cursor)
329
268330 self._transaction = parent
269331
270332 async def begin_twophase(self, xid=None):
283345 if self._transaction is not None:
284346 raise exc.InvalidRequestError(
285347 "Cannot start a two phase transaction when a transaction "
286 "is already in progress.")
348 "is already in progress."
349 )
287350 if xid is None:
288351 xid = self._dialect.create_xid()
289352 self._transaction = TwoPhaseTransaction(self, xid)
290 await self._begin_impl()
353 await self._begin_impl(None, False, False)
291354 return self._transaction
292355
293356 async def _prepare_twophase_impl(self, xid):
294 await self.execute("PREPARE TRANSACTION '%s'" % xid)
357 await self.execute(f"PREPARE TRANSACTION {xid!r}")
295358
296359 async def recover_twophase(self):
297360 """Return a list of prepared twophase transaction ids."""
301364 async def rollback_prepared(self, xid, *, is_prepared=True):
302365 """Rollback prepared twophase transaction."""
303366 if is_prepared:
304 await self.execute("ROLLBACK PREPARED '%s'" % xid)
367 await self.execute(f"ROLLBACK PREPARED {xid:!r}")
305368 else:
306369 await self._rollback_impl()
307370
308371 async def commit_prepared(self, xid, *, is_prepared=True):
309372 """Commit prepared twophase transaction."""
310373 if is_prepared:
311 await self.execute("COMMIT PREPARED '%s'" % xid)
374 await self.execute(f"COMMIT PREPARED {xid!r}")
312375 else:
313376 await self._commit_impl()
314377
331394 After .close() is called, the SAConnection is permanently in a
332395 closed state, and will allow no further operations.
333396 """
397
334398 if self.connection is None:
335399 return
336400
401 await asyncio.shield(self._close())
402
403 async def _close(self):
337404 if self._transaction is not None:
338 await self._transaction.rollback()
405 with contextlib.suppress(Exception):
406 await self._transaction.rollback()
339407 self._transaction = None
340 # don't close underlying connection, it can be reused by pool
341 # conn.close()
342
343 self._engine.release(self)
344 self._connection = None
345 self._engine = None
408
409 for cursor in self._cursors:
410 cursor.close()
411 self._cursors.clear()
412
413 if self._engine is not None:
414 with contextlib.suppress(Exception):
415 await self._engine.release(self)
416 self._connection = None
417 self._engine = None
346418
347419
348420 def _distill_params(multiparams, params):
363435 elif len(multiparams) == 1:
364436 zero = multiparams[0]
365437 if isinstance(zero, (list, tuple)):
366 if not zero or hasattr(zero[0], '__iter__') and \
367 not hasattr(zero[0], 'strip'):
438 if (
439 not zero
440 or hasattr(zero[0], "__iter__")
441 and not hasattr(zero[0], "strip")
442 ):
368443 # execute(stmt, [{}, {}, {}, ...])
369444 # execute(stmt, [(), (), (), ...])
370445 return zero
371446 else:
372447 # execute(stmt, ("value", "value"))
373448 return [zero]
374 elif hasattr(zero, 'keys'):
449 elif hasattr(zero, "keys"):
375450 # execute(stmt, {"key":"value"})
376451 return [zero]
377452 else:
378453 # execute(stmt, "value")
379454 return [[zero]]
380455 else:
381 if (hasattr(multiparams[0], '__iter__') and
382 not hasattr(multiparams[0], 'strip')):
456 if hasattr(multiparams[0], "__iter__") and not hasattr(
457 multiparams[0], "strip"
458 ):
383459 return multiparams
384460 else:
385461 return [multiparams]
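
``SAConnection.begin()`` now hands the transaction to a ``_ContextManager`` built with ``_commit_transaction_if_active`` and ``_rollback_transaction``, so a still-active transaction is committed on a clean exit and rolled back on error. A minimal sketch of that idiom; the table definition and connection parameters are placeholders, not part of this release.

.. code:: python

    import asyncio

    import sqlalchemy as sa

    from aiopg.sa import create_engine

    metadata = sa.MetaData()
    # Placeholder table, used only for illustration.
    tbl = sa.Table(
        "tbl",
        metadata,
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("val", sa.String(255)),
    )


    async def main():
        async with create_engine(
            user="aiopg", database="aiopg", host="127.0.0.1", password="passwd"
        ) as engine:
            async with engine.acquire() as conn:
                # Committed automatically on a clean exit, rolled back on error.
                async with conn.begin():
                    await conn.execute(tbl.insert().values(val="first"))
                async for row in conn.execute(tbl.select()):
                    print(row.id, row.val)


    asyncio.run(main())
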
0 import asyncio
01 import json
12
23 import aiopg
34
45 from ..connection import TIMEOUT
5 from ..utils import _PoolAcquireContextManager, _PoolContextManager
6 from ..utils import _ContextManager, get_running_loop
67 from .connection import SAConnection
78
89 try:
1112 PGDialect_psycopg2,
1213 )
1314 except ImportError: # pragma: no cover
14 raise ImportError('aiopg.sa requires sqlalchemy')
15 raise ImportError("aiopg.sa requires sqlalchemy")
1516
1617
1718 class APGCompiler_psycopg2(PGCompiler_psycopg2):
3132
3233
3334 def get_dialect(json_serializer=json.dumps, json_deserializer=lambda x: x):
34 dialect = PGDialect_psycopg2(json_serializer=json_serializer,
35 json_deserializer=json_deserializer)
35 dialect = PGDialect_psycopg2(
36 json_serializer=json_serializer, json_deserializer=json_deserializer
37 )
3638
3739 dialect.statement_compiler = APGCompiler_psycopg2
3840 dialect.implicit_returning = True
4850 _dialect = get_dialect()
4951
5052
51 def create_engine(dsn=None, *, minsize=1, maxsize=10, dialect=_dialect,
52 timeout=TIMEOUT, pool_recycle=-1, **kwargs):
53 def create_engine(
54 dsn=None,
55 *,
56 minsize=1,
57 maxsize=10,
58 dialect=_dialect,
59 timeout=TIMEOUT,
60 pool_recycle=-1,
61 **kwargs
62 ):
5363 """A coroutine for Engine creation.
5464
5565 Returns an Engine instance with an embedded connection pool.
5767 The pool has *minsize* opened connections to the PostgreSQL server.
5868 """
5969
60 coro = _create_engine(dsn=dsn, minsize=minsize, maxsize=maxsize,
61 dialect=dialect, timeout=timeout,
62 pool_recycle=pool_recycle, **kwargs)
63 return _EngineContextManager(coro)
64
65
66 async def _create_engine(dsn=None, *, minsize=1, maxsize=10, dialect=_dialect,
67 timeout=TIMEOUT, pool_recycle=-1, **kwargs):
70 coro = _create_engine(
71 dsn=dsn,
72 minsize=minsize,
73 maxsize=maxsize,
74 dialect=dialect,
75 timeout=timeout,
76 pool_recycle=pool_recycle,
77 **kwargs
78 )
79 return _ContextManager(coro, _close_engine)
80
81
82 async def _create_engine(
83 dsn=None,
84 *,
85 minsize=1,
86 maxsize=10,
87 dialect=_dialect,
88 timeout=TIMEOUT,
89 pool_recycle=-1,
90 **kwargs
91 ):
6892
6993 pool = await aiopg.create_pool(
70 dsn, minsize=minsize, maxsize=maxsize,
71 timeout=timeout, pool_recycle=pool_recycle, **kwargs
94 dsn,
95 minsize=minsize,
96 maxsize=maxsize,
97 timeout=timeout,
98 pool_recycle=pool_recycle,
99 **kwargs
72100 )
73101 conn = await pool.acquire()
74102 try:
78106 await pool.release(conn)
79107
80108
109 async def _close_engine(engine: "Engine") -> None:
110 engine.close()
111 await engine.wait_closed()
112
113
114 async def _close_connection(c: SAConnection) -> None:
115 await c.close()
116
117
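``_close_engine`` above is what the async-with form of ``create_engine`` calls on exit. The same lifecycle can be driven by hand when the engine is stored long-term; a sketch with an example DSN (adjust credentials to your setup):

.. code:: python

    async def main():
        engine = await create_engine("dbname=aiopg user=aiopg host=127.0.0.1")
        try:
            async with engine.acquire() as conn:
                await conn.execute("SELECT 1")
        finally:
            engine.close()              # same steps _close_engine() performs
            await engine.wait_closed()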
81118 class Engine:
82119 """Connects a aiopg.Pool and
83120 sqlalchemy.engine.interfaces.Dialect together to provide a
87124 create_engine coroutine.
88125 """
89126
127 __slots__ = ("_dialect", "_pool", "_dsn", "_loop")
128
90129 def __init__(self, dialect, pool, dsn):
91130 self._dialect = dialect
92131 self._pool = pool
93132 self._dsn = dsn
133 self._loop = get_running_loop()
94134
95135 @property
96136 def dialect(self):
159199 def acquire(self):
160200 """Get a connection from pool."""
161201 coro = self._acquire()
162 return _EngineAcquireContextManager(coro, self)
202 return _ContextManager[SAConnection](coro, _close_connection)
163203
164204 async def _acquire(self):
165205 raw = await self._pool.acquire()
166 conn = SAConnection(raw, self)
167 return conn
206 return SAConnection(raw, self)
168207
169208 def release(self, conn):
170 raw = conn.connection
171 fut = self._pool.release(raw)
172 return fut
209 return self._pool.release(conn.connection)
173210
174211 def __enter__(self):
175212 raise RuntimeError(
176 '"await" should be used as context manager expression')
213 '"await" should be used as context manager expression'
214 )
177215
178216 def __exit__(self, *args):
179217 # This must exist because __enter__ exists, even though that
194232 # finally:
195233 # engine.release(conn)
196234 conn = yield from self._acquire().__await__()
197 return _ConnectionContextManager(self, conn)
235 return _ConnectionContextManager(conn, self._loop)
198236
199237 async def __aenter__(self):
200238 return self
204242 await self.wait_closed()
205243
206244
207 _EngineContextManager = _PoolContextManager
208 _EngineAcquireContextManager = _PoolAcquireContextManager
209
210
211245 class _ConnectionContextManager:
212246 """Context manager.
213247
223257 <block>
224258 """
225259
226 __slots__ = ('_engine', '_conn')
227
228 def __init__(self, engine, conn):
229 self._engine = engine
260 __slots__ = ("_conn", "_loop")
261
262 def __init__(self, conn: SAConnection, loop: asyncio.AbstractEventLoop):
230263 self._conn = conn
264 self._loop = loop
231265
232266 def __enter__(self):
233267 return self._conn
234268
235269 def __exit__(self, *args):
236 try:
237 self._engine.release(self._conn)
238 finally:
239 self._engine = None
240 self._conn = None
270 asyncio.ensure_future(self._conn.close(), loop=self._loop)
271 self._conn = None
33 from sqlalchemy.sql import expression, sqltypes
44
55 from . import exc
6 from .utils import SQLALCHEMY_VERSION
7
8 if SQLALCHEMY_VERSION >= ["1", "4"]:
9 from sqlalchemy.util import string_or_unprintable
10 else:
11 from sqlalchemy.sql.expression import (
12 _string_or_unprintable as string_or_unprintable,
13 )
614
715
816 class RowProxy(Mapping):
9 __slots__ = ('_result_proxy', '_row', '_processors', '_keymap')
17 __slots__ = ("_result_proxy", "_row", "_processors", "_keymap")
1018
1119 def __init__(self, result_proxy, row, processors, keymap):
1220 """RowProxy objects are constructed by ResultProxy objects."""
4149 # raise
4250 if index is None:
4351 raise exc.InvalidRequestError(
44 "Ambiguous column name '%s' in result set! "
45 "try 'use_labels' option on select statement." % key)
52 f"Ambiguous column name {key!r} in result set! "
53 f"try 'use_labels' option on select statement."
54 )
4655 if processor is not None:
4756 return processor(self._row[index])
4857 else:
7786 return repr(self.as_tuple())
7887
7988
80 class ResultMetaData(object):
89 class ResultMetaData:
8190 """Handle cursor.description, applying additional info from an execution
8291 context."""
8392
96105 # `dbapi_type_map` property removed in SQLAlchemy 1.2+.
97106 # Usage of `getattr` only needed for backward compatibility with
98107 # older versions of SQLAlchemy.
99 typemap = getattr(dialect, 'dbapi_type_map', {})
100
101 assert dialect.case_sensitive, \
102 "Doesn't support case insensitive database connection"
108 typemap = getattr(dialect, "dbapi_type_map", {})
109
110 assert (
111 dialect.case_sensitive
112 ), "Doesn't support case insensitive database connection"
103113
104114 # high precedence key values.
105115 primary_keymap = {}
106116
107 assert not dialect.description_encoding, \
108 "psycopg in py3k should not use this"
117 assert (
118 not dialect.description_encoding
119 ), "psycopg in py3k should not use this"
109120
110121 for i, rec in enumerate(cursor_description):
111122 colname = rec[0]
118129 name, obj, type_ = (
119130 map_column_name.get(colname, colname),
120131 None,
121 map_type.get(colname, typemap.get(coltype, sqltypes.NULLTYPE))
132 map_type.get(colname, typemap.get(coltype, sqltypes.NULLTYPE)),
122133 )
123134
124135 processor = type_._cached_result_processor(dialect, coltype)
131142 primary_keymap[i] = rec
132143
133144 # populate primary keymap, looking for conflicts.
134 if primary_keymap.setdefault(name, rec) is not rec:
145 if primary_keymap.setdefault(name, rec) != rec:
135146 # place a record that doesn't have the "index" - this
136147 # is interpreted later as an AmbiguousColumnError,
137148 # but only when actually accessed. Columns
159170 map_column_name = {}
160171 for elem in data_map:
161172 name = elem[0]
162 priority_name = getattr(elem[2][0], 'key', name)
173 priority_name = getattr(elem[2][0], "key", None) or name
163174 map_type[name] = elem[3] # type column
164175 map_column_name[name] = priority_name
165176
175186 # or colummn('name') constructs to ColumnElements, or after a
176187 # pickle/unpickle roundtrip
177188 elif isinstance(key, expression.ColumnElement):
178 if (key._label and key._label in map):
189 if key._label and key._label in map:
179190 result = map[key._label]
180 elif (hasattr(key, 'key') and key.key in map):
191 elif hasattr(key, "key") and key.key in map:
181192 # match is only on name.
182193 result = map[key.key]
183194 # search extra hard to make sure this
193204 if result is None:
194205 if raiseerr:
195206 raise exc.NoSuchColumnError(
196 "Could not locate column in row for column '%s'" %
197 expression._string_or_unprintable(key))
207 f"Could not locate column in row for column "
208 f"{string_or_unprintable(key)!r}"
209 )
198210 else:
199211 return None
200212 else:
289301 cursor_description = self.cursor.description
290302 if cursor_description is not None:
291303 self._metadata = ResultMetaData(self, cursor_description)
292 self._weak = weakref.ref(self, lambda wr: self.cursor.close())
304 self._weak = weakref.ref(self, lambda _: self.close())
293305 else:
294306 self.close()
295 self._weak = None
296307
297308 @property
298309 def returns_rows(self):
328339 * cursor.description is None.
329340 """
330341
331 if not self.closed:
332 self.cursor.close()
333 # allow consistent errors
334 self._cursor = None
335 self._weak = None
342 if self._cursor is None:
343 return
344
345 if not self._cursor.closed:
346 self._cursor.close()
347
348 self._cursor = None
349 self._weak = None
336350
337351 def __aiter__(self):
338352 return self
341355 ret = await self.fetchone()
342356 if ret is not None:
343357 return ret
344 else:
345 raise StopAsyncIteration
358 raise StopAsyncIteration
346359
347360 def _non_result(self):
348361 if self._metadata is None:
349362 raise exc.ResourceClosedError(
350363 "This result object does not return rows. "
351 "It has been closed automatically.")
364 "It has been closed automatically."
365 )
352366 else:
353367 raise exc.ResourceClosedError("This result object is closed.")
354368
357371 metadata = self._metadata
358372 keymap = metadata._keymap
359373 processors = metadata._processors
360 return [process_row(metadata, row, processors, keymap)
361 for row in rows]
374 return [process_row(metadata, row, processors, keymap) for row in rows]
362375
363376 async def fetchall(self):
364377 """Fetch all rows, just like DB-API cursor.fetchall()."""
00 from . import exc
11
22
3 class Transaction(object):
3 class Transaction:
44 """Represent a database transaction in progress.
55
66 The Transaction object is procured by
2222 See also: SAConnection.begin(), SAConnection.begin_twophase(),
2323 SAConnection.begin_nested().
2424 """
25
26 __slots__ = ("_connection", "_parent", "_is_active")
2527
2628 def __init__(self, connection, parent):
2729 self._connection = connection
8284 async def __aexit__(self, exc_type, exc_val, exc_tb):
8385 if exc_type:
8486 await self.rollback()
85 else:
86 if self._is_active:
87 await self.commit()
87 elif self._is_active:
88 await self.commit()
8889
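The ``__aexit__`` above gives every transaction object async-with semantics: roll back if the block raises, commit if the transaction is still active on a clean exit. A minimal sketch, assuming ``conn`` is an ``SAConnection`` and ``tbl`` is the example table from the README:

.. code:: python

    async def insert_row(conn):
        async with conn.begin():        # commit on success, rollback on error
            await conn.execute(tbl.insert().values(val="abc"))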
8990
9091 class RootTransaction(Transaction):
92 __slots__ = ()
9193
9294 def __init__(self, connection):
9395 super().__init__(connection, None)
108110 The interface is the same as that of Transaction class.
109111 """
110112
111 _savepoint = None
113 __slots__ = ("_savepoint",)
112114
113115 def __init__(self, connection, parent):
114 super(NestedTransaction, self).__init__(connection, parent)
116 super().__init__(connection, parent)
117 self._savepoint = None
115118
116119 async def _do_rollback(self):
117120 assert self._savepoint is not None, "Broken transaction logic"
118121 if self._is_active:
119122 await self._connection._rollback_to_savepoint_impl(
120 self._savepoint, self._parent)
123 self._savepoint, self._parent
124 )
121125
122126 async def _do_commit(self):
123127 assert self._savepoint is not None, "Broken transaction logic"
124128 if self._is_active:
125129 await self._connection._release_savepoint_impl(
126 self._savepoint, self._parent)
130 self._savepoint, self._parent
131 )
127132
128133
129134 class TwoPhaseTransaction(Transaction):
135140 The interface is the same as that of Transaction class
136141 with the addition of the .prepare() method.
137142 """
143
144 __slots__ = ("_is_prepared", "_xid")
138145
139146 def __init__(self, connection, xid):
140147 super().__init__(connection, None)
159166
160167 async def _do_rollback(self):
161168 await self._connection._rollback_twophase_impl(
162 self._xid, is_prepared=self._is_prepared)
169 self._xid, is_prepared=self._is_prepared
170 )
163171
164172 async def _do_commit(self):
165173 await self._connection._commit_twophase_impl(
166 self._xid, is_prepared=self._is_prepared)
174 self._xid, is_prepared=self._is_prepared
175 )
0 import sqlalchemy
1
2 SQLALCHEMY_VERSION = sqlalchemy.__version__.split(".")
+0
-194
aiopg/transaction.py less more
0 import enum
1 import uuid
2 import warnings
3 from abc import ABC
4
5 import psycopg2
6
7 from aiopg.utils import _TransactionPointContextManager
8
9 __all__ = ('IsolationLevel', 'Transaction')
10
11
12 class IsolationCompiler(ABC):
13 __slots__ = ('_isolation_level', '_readonly', '_deferrable')
14
15 def __init__(self, isolation_level, readonly, deferrable):
16 self._isolation_level = isolation_level
17 self._readonly = readonly
18 self._deferrable = deferrable
19
20 @property
21 def name(self):
22 return self._isolation_level
23
24 def savepoint(self, unique_id):
25 return 'SAVEPOINT {}'.format(unique_id)
26
27 def release_savepoint(self, unique_id):
28 return 'RELEASE SAVEPOINT {}'.format(unique_id)
29
30 def rollback_savepoint(self, unique_id):
31 return 'ROLLBACK TO SAVEPOINT {}'.format(unique_id)
32
33 def commit(self):
34 return 'COMMIT'
35
36 def rollback(self):
37 return 'ROLLBACK'
38
39 def begin(self):
40 query = 'BEGIN'
41 if self._isolation_level is not None:
42 query += (
43 ' ISOLATION LEVEL {}'.format(self._isolation_level.upper())
44 )
45
46 if self._readonly:
47 query += ' READ ONLY'
48
49 if self._deferrable:
50 query += ' DEFERRABLE'
51
52 return query
53
54 def __repr__(self):
55 return self.name
56
57
58 class ReadCommittedCompiler(IsolationCompiler):
59 __slots__ = ()
60
61 def __init__(self, readonly, deferrable):
62 super().__init__('Read committed', readonly, deferrable)
63
64
65 class RepeatableReadCompiler(IsolationCompiler):
66 __slots__ = ()
67
68 def __init__(self, readonly, deferrable):
69 super().__init__('Repeatable read', readonly, deferrable)
70
71
72 class SerializableCompiler(IsolationCompiler):
73 __slots__ = ()
74
75 def __init__(self, readonly, deferrable):
76 super().__init__('Serializable', readonly, deferrable)
77
78
79 class DefaultCompiler(IsolationCompiler):
80 __slots__ = ()
81
82 def __init__(self, readonly, deferrable):
83 super().__init__(None, readonly, deferrable)
84
85 @property
86 def name(self):
87 return 'Default'
88
89
90 class IsolationLevel(enum.Enum):
91 serializable = SerializableCompiler
92 repeatable_read = RepeatableReadCompiler
93 read_committed = ReadCommittedCompiler
94 default = DefaultCompiler
95
96 def __call__(self, readonly, deferrable):
97 return self.value(readonly, deferrable)
98
99
100 class Transaction:
101 __slots__ = ('_cur', '_is_begin', '_isolation', '_unique_id')
102
103 def __init__(self, cur, isolation_level,
104 readonly=False, deferrable=False):
105 self._cur = cur
106 self._is_begin = False
107 self._unique_id = None
108 self._isolation = isolation_level(readonly, deferrable)
109
110 @property
111 def is_begin(self):
112 return self._is_begin
113
114 async def begin(self):
115 if self._is_begin:
116 raise psycopg2.ProgrammingError(
117 'You are trying to open a new transaction, use the save point')
118 self._is_begin = True
119 await self._cur.execute(self._isolation.begin())
120 return self
121
122 async def commit(self):
123 self._check_commit_rollback()
124 await self._cur.execute(self._isolation.commit())
125 self._is_begin = False
126
127 async def rollback(self):
128 self._check_commit_rollback()
129 await self._cur.execute(self._isolation.rollback())
130 self._is_begin = False
131
132 async def rollback_savepoint(self):
133 self._check_release_rollback()
134 await self._cur.execute(
135 self._isolation.rollback_savepoint(self._unique_id))
136 self._unique_id = None
137
138 async def release_savepoint(self):
139 self._check_release_rollback()
140 await self._cur.execute(
141 self._isolation.release_savepoint(self._unique_id))
142 self._unique_id = None
143
144 async def savepoint(self):
145 self._check_commit_rollback()
146 if self._unique_id is not None:
147 raise psycopg2.ProgrammingError('You do not shut down savepoint')
148
149 self._unique_id = 's{}'.format(uuid.uuid1().hex)
150 await self._cur.execute(
151 self._isolation.savepoint(self._unique_id))
152
153 return self
154
155 def point(self):
156 return _TransactionPointContextManager(self.savepoint())
157
158 def _check_commit_rollback(self):
159 if not self._is_begin:
160 raise psycopg2.ProgrammingError('You are trying to commit '
161 'the transaction does not open')
162
163 def _check_release_rollback(self):
164 self._check_commit_rollback()
165 if self._unique_id is None:
166 raise psycopg2.ProgrammingError('You do not start savepoint')
167
168 def __repr__(self):
169 return "<{} transaction={} id={:#x}>".format(
170 self.__class__.__name__,
171 self._isolation,
172 id(self)
173 )
174
175 def __del__(self):
176 if self._is_begin:
177 warnings.warn(
178 "You have not closed transaction {!r}".format(self),
179 ResourceWarning)
180
181 if self._unique_id is not None:
182 warnings.warn(
183 "You have not closed savepoint {!r}".format(self),
184 ResourceWarning)
185
186 async def __aenter__(self):
187 return await self.begin()
188
189 async def __aexit__(self, exc_type, exc, tb):
190 if exc_type is not None:
191 await self.rollback()
192 else:
193 await self.commit()
00 import asyncio
11 import sys
2 import warnings
3 from collections.abc import Coroutine
4
5 import psycopg2
6
7 from .log import logger
8
9 try:
10 ensure_future = asyncio.ensure_future
11 except AttributeError:
12 ensure_future = getattr(asyncio, 'async')
2 from types import TracebackType
3 from typing import (
4 Any,
5 Awaitable,
6 Callable,
7 Coroutine,
8 Generator,
9 Generic,
10 Optional,
11 Type,
12 TypeVar,
13 Union,
14 )
1315
1416 if sys.version_info >= (3, 7, 0):
1517 __get_running_loop = asyncio.get_running_loop
1618 else:
19
1720 def __get_running_loop() -> asyncio.AbstractEventLoop:
1821 loop = asyncio.get_event_loop()
1922 if not loop.is_running():
20 raise RuntimeError('no running event loop')
23 raise RuntimeError("no running event loop")
2124 return loop
2225
2326
24 def get_running_loop(is_warn: bool = False) -> asyncio.AbstractEventLoop:
25 loop = __get_running_loop()
27 def get_running_loop() -> asyncio.AbstractEventLoop:
28 return __get_running_loop()
2629
27 if is_warn:
28 warnings.warn(
29 'aiopg always uses "aiopg.get_running_loop", '
30 'look the documentation.',
31 DeprecationWarning,
32 stacklevel=3
30
31 def create_completed_future(
32 loop: asyncio.AbstractEventLoop,
33 ) -> "asyncio.Future[Any]":
34 future = loop.create_future()
35 future.set_result(None)
36 return future
37
38
39 _TObj = TypeVar("_TObj")
40 _Release = Callable[[_TObj], Awaitable[None]]
41
42
43 class _ContextManager(Coroutine[Any, None, _TObj], Generic[_TObj]):
44 __slots__ = ("_coro", "_obj", "_release", "_release_on_exception")
45
46 def __init__(
47 self,
48 coro: Coroutine[Any, None, _TObj],
49 release: _Release[_TObj],
50 release_on_exception: Optional[_Release[_TObj]] = None,
51 ):
52 self._coro = coro
53 self._obj: Optional[_TObj] = None
54 self._release = release
55 self._release_on_exception = (
56 release if release_on_exception is None else release_on_exception
3357 )
3458
35 if loop.get_debug():
36 logger.warning(
37 'aiopg always uses "aiopg.get_running_loop", '
38 'look the documentation.',
39 exc_info=True
40 )
59 def send(self, value: Any) -> "Any":
60 return self._coro.send(value)
4161
42 return loop
62 def throw( # type: ignore
63 self,
64 typ: Type[BaseException],
65 val: Optional[Union[BaseException, object]] = None,
66 tb: Optional[TracebackType] = None,
67 ) -> Any:
68 if val is None:
69 return self._coro.throw(typ)
70 if tb is None:
71 return self._coro.throw(typ, val)
72 return self._coro.throw(typ, val, tb)
73
74 def close(self) -> None:
75 self._coro.close()
76
77 def __await__(self) -> Generator[Any, None, _TObj]:
78 return self._coro.__await__()
79
80 async def __aenter__(self) -> _TObj:
81 self._obj = await self._coro
82 assert self._obj
83 return self._obj
84
85 async def __aexit__(
86 self,
87 exc_type: Optional[Type[BaseException]],
88 exc: Optional[BaseException],
89 tb: Optional[TracebackType],
90 ) -> None:
91 if self._obj is None:
92 return
93
94 try:
95 if exc_type is not None:
96 await self._release_on_exception(self._obj)
97 else:
98 await self._release(self._obj)
99 finally:
100 self._obj = None
43101
44102
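``_ContextManager`` above is the glue that lets aiopg return objects which can either be awaited directly or used with ``async with``; the supplied ``release`` callback runs on exit (``release_on_exception`` covers the error path). A self-contained sketch of the pattern with made-up resource functions (not aiopg API):

.. code:: python

    import asyncio

    async def _open():
        return "resource"               # stand-in for a pool/connection factory

    async def _release(obj) -> None:
        print("released", obj)

    async def main():
        obj = await _ContextManager(_open(), _release)   # awaited form, caller releases

        async with _ContextManager(_open(), _release) as obj:
            print("using", obj)         # _release() runs automatically on exit

    asyncio.run(main())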
45 class _ContextManager(Coroutine):
46 __slots__ = ('_coro', '_obj')
47
48 def __init__(self, coro):
49 self._coro = coro
50 self._obj = None
51
52 def send(self, value):
53 return self._coro.send(value)
54
55 def throw(self, typ, val=None, tb=None):
56 if val is None:
57 return self._coro.throw(typ)
58 elif tb is None:
59 return self._coro.throw(typ, val)
60 else:
61 return self._coro.throw(typ, val, tb)
62
63 def close(self):
64 return self._coro.close()
65
66 @property
67 def gi_frame(self):
68 return self._coro.gi_frame
69
70 @property
71 def gi_running(self):
72 return self._coro.gi_running
73
74 @property
75 def gi_code(self):
76 return self._coro.gi_code
77
78 def __next__(self):
79 return self.send(None)
80
81 def __await__(self):
82 resp = self._coro.__await__()
83 return resp
84
85 async def __aenter__(self):
86 self._obj = await self._coro
87 return self._obj
88
89 async def __aexit__(self, exc_type, exc, tb):
90 self._obj.close()
91 self._obj = None
92
93
94 class _SAConnectionContextManager(_ContextManager):
103 class _IterableContextManager(_ContextManager[_TObj]):
95104 __slots__ = ()
96105
97 def __aiter__(self):
106 def __init__(self, *args: Any, **kwargs: Any):
107 super().__init__(*args, **kwargs)
108
109 def __aiter__(self) -> "_IterableContextManager[_TObj]":
98110 return self
99111
100 async def __anext__(self):
112 async def __anext__(self) -> _TObj:
101113 if self._obj is None:
102114 self._obj = await self._coro
103115
104116 try:
105 return await self._obj.__anext__()
117 return await self._obj.__anext__() # type: ignore
106118 except StopAsyncIteration:
107 self._obj.close()
108 self._obj = None
119 try:
120 await self._release(self._obj)
121 finally:
122 self._obj = None
109123 raise
110124
111125
112 class _PoolContextManager(_ContextManager):
113 __slots__ = ()
126 class ClosableQueue:
127 """
128 Proxy object for an asyncio.Queue that is "closable"
114129
115 async def __aexit__(self, exc_type, exc, tb):
116 self._obj.close()
117 await self._obj.wait_closed()
118 self._obj = None
130 When the ClosableQueue is closed with an exception object as parameter,
131 subsequent or ongoing attempts to read from the queue will result in
132 that exception being raised.
119133
120
121 class _TransactionPointContextManager(_ContextManager):
122 __slots__ = ()
123
124 async def __aexit__(self, exc_type, exc_val, exc_tb):
125 if exc_type is not None:
126 await self._obj.rollback_savepoint()
127 else:
128 await self._obj.release_savepoint()
129
130 self._obj = None
131
132
133 class _TransactionBeginContextManager(_ContextManager):
134 __slots__ = ()
135
136 async def __aexit__(self, exc_type, exc_val, exc_tb):
137 if exc_type is not None:
138 await self._obj.rollback()
139 else:
140 await self._obj.commit()
141
142 self._obj = None
143
144
145 class _TransactionContextManager(_ContextManager):
146 __slots__ = ()
147
148 async def __aexit__(self, exc_type, exc, tb):
149 if exc_type:
150 await self._obj.rollback()
151 else:
152 if self._obj.is_active:
153 await self._obj.commit()
154 self._obj = None
155
156
157 class _PoolAcquireContextManager(_ContextManager):
158 __slots__ = ('_coro', '_obj', '_pool')
159
160 def __init__(self, coro, pool):
161 super().__init__(coro)
162 self._pool = pool
163
164 async def __aexit__(self, exc_type, exc, tb):
165 await self._pool.release(self._obj)
166 self._pool = None
167 self._obj = None
168
169
170 class _PoolConnectionContextManager:
171 """Context manager.
172
173 This enables the following idiom for acquiring and releasing a
174 connection around a block:
175
176 async with pool as conn:
177 cur = await conn.cursor()
178
179 while failing loudly when accidentally using:
180
181 with pool:
182 <block>
134 Note: closing a queue with an exception will still allow any items already
135 pending in the queue to be read. The close exception is raised only once
136 all items are consumed.
183137 """
184138
185 __slots__ = ('_pool', '_conn')
139 __slots__ = ("_loop", "_queue", "_close_event")
186140
187 def __init__(self, pool, conn):
188 self._pool = pool
189 self._conn = conn
141 def __init__(
142 self,
143 queue: asyncio.Queue, # type: ignore
144 loop: asyncio.AbstractEventLoop,
145 ):
146 self._loop = loop
147 self._queue = queue
148 self._close_event = loop.create_future()
149 # suppress Future exception was never retrieved
150 self._close_event.add_done_callback(lambda f: f.exception())
190151
191 def __enter__(self):
192 assert self._conn
193 return self._conn
152 def close(self, exception: Exception) -> None:
153 if self._close_event.done():
154 return
155 self._close_event.set_exception(exception)
194156
195 def __exit__(self, exc_type, exc_val, exc_tb):
157 async def get(self) -> Any:
158 if self._close_event.done():
159 try:
160 return self._queue.get_nowait()
161 except asyncio.QueueEmpty:
162 return self._close_event.result()
163
164 get = asyncio.ensure_future(self._queue.get(), loop=self._loop)
196165 try:
197 self._pool.release(self._conn)
166 await asyncio.wait(
167 [get, self._close_event], return_when=asyncio.FIRST_COMPLETED
168 )
169 except asyncio.CancelledError:
170 get.cancel()
171 raise
172
173 if get.done():
174 return get.result()
175
176 try:
177 return self._close_event.result()
198178 finally:
199 self._pool = None
200 self._conn = None
179 get.cancel()
201180
202 async def __aenter__(self):
203 assert not self._conn
204 self._conn = await self._pool.acquire()
205 return self._conn
181 def empty(self) -> bool:
182 return self._queue.empty()
206183
207 async def __aexit__(self, exc_type, exc_val, exc_tb):
208 try:
209 await self._pool.release(self._conn)
210 finally:
211 self._pool = None
212 self._conn = None
184 def qsize(self) -> int:
185 return self._queue.qsize()
213186
187 def get_nowait(self) -> Any:
188 if self._close_event.done():
189 try:
190 return self._queue.get_nowait()
191 except asyncio.QueueEmpty:
192 return self._close_event.result()
214193
215 class _PoolCursorContextManager:
216 """Context manager.
217
218 This enables the following idiom for acquiring and releasing a
219 cursor around a block:
220
221 async with pool.cursor() as cur:
222 await cur.execute("SELECT 1")
223
224 while failing loudly when accidentally using:
225
226 with pool:
227 <block>
228 """
229
230 __slots__ = ('_pool', '_conn', '_cur')
231
232 def __init__(self, pool, conn, cur):
233 self._pool = pool
234 self._conn = conn
235 self._cur = cur
236
237 def __enter__(self):
238 return self._cur
239
240 def __exit__(self, *args):
241 try:
242 self._cur.close()
243 except psycopg2.ProgrammingError:
244 # seen instances where the cursor fails to close:
245 # https://github.com/aio-libs/aiopg/issues/364
246 # We close it here so we don't return a bad connection to the pool
247 self._conn.close()
248 raise
249 finally:
250 try:
251 self._pool.release(self._conn)
252 finally:
253 self._pool = None
254 self._conn = None
255 self._cur = None
194 return self._queue.get_nowait()
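``ClosableQueue`` drains items that were already queued before surfacing the close exception, exactly as its docstring promises. A short sketch (internal helper, shown only to illustrate the behaviour):

.. code:: python

    import asyncio

    async def demo():
        loop = asyncio.get_running_loop()
        queue: asyncio.Queue = asyncio.Queue()
        closable = ClosableQueue(queue, loop)

        queue.put_nowait("pending item")
        closable.close(ConnectionError("connection lost"))

        print(await closable.get())     # "pending item" is still delivered
        try:
            await closable.get()        # queue empty -> close exception is raised
        except ConnectionError as exc:
            print("closed:", exc)

    asyncio.run(demo())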
00 Metadata-Version: 2.1
11 Name: aiopg
2 Version: 1.2.0b2
2 Version: 1.3.3
33 Summary: Postgres integration with asyncio.
44 Home-page: https://aiopg.readthedocs.io
55 Author: Andrew Svetlov
1414 Project-URL: Docs: RTD, https://aiopg.readthedocs.io
1515 Project-URL: GitHub: issues, https://github.com/aio-libs/aiopg/issues
1616 Project-URL: GitHub: repo, https://github.com/aio-libs/aiopg
17 Description: aiopg
18 =====
19 .. image:: https://github.com/aio-libs/aiopg/workflows/CI/badge.svg
20 :target: https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI
21 .. image:: https://codecov.io/gh/aio-libs/aiopg/branch/master/graph/badge.svg
22 :target: https://codecov.io/gh/aio-libs/aiopg
23 .. image:: https://badges.gitter.im/Join%20Chat.svg
24 :target: https://gitter.im/aio-libs/Lobby
25 :alt: Chat on Gitter
26
27 **aiopg** is a library for accessing a PostgreSQL_ database
28 from the asyncio_ (PEP-3156/tulip) framework. It wraps
29 asynchronous features of the Psycopg database driver.
30
31 Example
32 -------
33
34 .. code:: python
35
36 import asyncio
37 import aiopg
38
39 dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'
40
41 async def go():
42 pool = await aiopg.create_pool(dsn)
43 async with pool.acquire() as conn:
44 async with conn.cursor() as cur:
45 await cur.execute("SELECT 1")
46 ret = []
47 async for row in cur:
48 ret.append(row)
49 assert ret == [(1,)]
50
51 loop = asyncio.get_event_loop()
52 loop.run_until_complete(go())
53
54
55 Example of SQLAlchemy optional integration
56 ------------------------------------------
57
58 .. code:: python
59
60 import asyncio
61 from aiopg.sa import create_engine
62 import sqlalchemy as sa
63
64 metadata = sa.MetaData()
65
66 tbl = sa.Table('tbl', metadata,
67 sa.Column('id', sa.Integer, primary_key=True),
68 sa.Column('val', sa.String(255)))
69
70 async def create_table(engine):
71 async with engine.acquire() as conn:
72 await conn.execute('DROP TABLE IF EXISTS tbl')
73 await conn.execute('''CREATE TABLE tbl (
74 id serial PRIMARY KEY,
75 val varchar(255))''')
76
77 async def go():
78 async with create_engine(user='aiopg',
79 database='aiopg',
80 host='127.0.0.1',
81 password='passwd') as engine:
82
83 async with engine.acquire() as conn:
84 await conn.execute(tbl.insert().values(val='abc'))
85
86 async for row in conn.execute(tbl.select()):
87 print(row.id, row.val)
88
89 loop = asyncio.get_event_loop()
90 loop.run_until_complete(go())
91
92 .. _PostgreSQL: http://www.postgresql.org/
93 .. _asyncio: http://docs.python.org/3.4/library/asyncio.html
94
95 Please use::
96
97 $ make test
98
99 for executing the project's unittests.
100 See https://aiopg.readthedocs.io/en/stable/contributing.html for details
101 on how to set up your environment to run the tests.
102
103 Changelog
104 ---------
105
106 1.2.0b2 (2020-12-21)
107 ^^^^^^^^^^^^^^^^^^^^
108
109 * Fix IsolationLevel.read_committed and introduce IsolationLevel.default `#770 <https://github.com/aio-libs/aiopg/pull/770>`_
110
111 * Fix python 3.8 warnings in tests `#771 <https://github.com/aio-libs/aiopg/pull/771>`_
112
113
114 1.2.0b1 (2020-12-16)
115 ^^^^^^^^^^^^^^^^^^^^
116
117 * Deprecate blocking connection.cancel() method `#570 <https://github.com/aio-libs/aiopg/pull/570>`_
118
119
120 1.2.0b0 (2020-12-15)
121 ^^^^^^^^^^^^^^^^^^^^
122
123 * Implement timeout on acquiring connection from pool `#766 <https://github.com/aio-libs/aiopg/pull/766>`_
124
125
126 1.1.0 (2020-12-10)
127 ^^^^^^^^^^^^^^^^^^
128
129
130 1.1.0b2 (2020-12-09)
131 ^^^^^^^^^^^^^^^^^^^^
132
133 * Added missing slots to context managers `#763 <https://github.com/aio-libs/aiopg/pull/763>`_
134
135
136 1.1.0b1 (2020-12-07)
137 ^^^^^^^^^^^^^^^^^^^^
138
139 * Fix on_connect multiple call on acquire `#552 <https://github.com/aio-libs/aiopg/pull/552>`_
140
141 * Fix python 3.8 warnings `#622 <https://github.com/aio-libs/aiopg/pull/642>`_
142
143 * Bump minimum psycopg version to 2.8.4 `#754 <https://github.com/aio-libs/aiopg/pull/754>`_
144
145 * Fix Engine.release method to release connection in any way `#756 <https://github.com/aio-libs/aiopg/pull/756>`_
146
147
148 1.0.0 (2019-09-20)
149 ^^^^^^^^^^^^^^^^^^
150
151 * Removal of an asynchronous call in favor of issues # 550
152
153 * Big editing of documentation and minor bugs #534
154
155
156 0.16.0 (2019-01-25)
157 ^^^^^^^^^^^^^^^^^^^
158
159 * Fix select priority name `#525 <https://github.com/aio-libs/aiopg/issues/525>`_
160
161 * Rename `psycopg2` to `psycopg2-binary` to fix deprecation warning `#507 <https://github.com/aio-libs/aiopg/issues/507>`_
162
163 * Fix `#189 <https://github.com/aio-libs/aiopg/issues/189>`_ hstore when using ReadDictCursor `#512 <https://github.com/aio-libs/aiopg/issues/512>`_
164
165 * close cannot be used while an asynchronous query is underway `#452 <https://github.com/aio-libs/aiopg/issues/452>`_
166
167 * sqlalchemy adapter trx begin allow transaction_mode `#498 <https://github.com/aio-libs/aiopg/issues/498>`_
168
169
170 0.15.0 (2018-08-14)
171 ^^^^^^^^^^^^^^^^^^^
172
173 * Support Python 3.7 `#437 <https://github.com/aio-libs/aiopg/issues/437>`_
174
175
176 0.14.0 (2018-05-10)
177 ^^^^^^^^^^^^^^^^^^^
178
179 * Add ``get_dialect`` func to have ability to pass ``json_serializer`` `#451 <https://github.com/aio-libs/aiopg/issues/451>`_
180
181
182 0.13.2 (2018-01-03)
183 ^^^^^^^^^^^^^^^^^^^
184
185 * Fixed compatibility with SQLAlchemy 1.2.0 `#412 <https://github.com/aio-libs/aiopg/issues/412>`_
186
187 * Added support for transaction isolation levels `#219 <https://github.com/aio-libs/aiopg/issues/219>`_
188
189
190 0.13.1 (2017-09-10)
191 ^^^^^^^^^^^^^^^^^^^
192
193 * Added connection poll recycling logic `#373 <https://github.com/aio-libs/aiopg/issues/373>`_
194
195
196 0.13.0 (2016-12-02)
197 ^^^^^^^^^^^^^^^^^^^
198
199 * Add `async with` support to `.begin_nested()` `#208 <https://github.com/aio-libs/aiopg/issues/208>`_
200
201 * Fix connection.cancel() `#212 <https://github.com/aio-libs/aiopg/issues/212>`_ `#223 <https://github.com/aio-libs/aiopg/issues/223>`_
202
203 * Raise informative error on unexpected connection closing `#191 <https://github.com/aio-libs/aiopg/issues/191>`_
204
205 * Added support for python types columns issues `#217 <https://github.com/aio-libs/aiopg/issues/217>`_
206
207 * Added support for default values in SA table issues `#206 <https://github.com/aio-libs/aiopg/issues/206>`_
208
209
210 0.12.0 (2016-10-09)
211 ^^^^^^^^^^^^^^^^^^^
212
213 * Add an on_connect callback parameter to pool `#141 <https://github.com/aio-libs/aiopg/issues/141>`_
214
215 * Fixed connection to work under both windows and posix based systems `#142 <https://github.com/aio-libs/aiopg/issues/142>`_
216
217
218 0.11.0 (2016-09-12)
219 ^^^^^^^^^^^^^^^^^^^
220
221 * Immediately remove callbacks from a closed file descriptor `#139 <https://github.com/aio-libs/aiopg/issues/139>`_
222
223 * Drop Python 3.3 support
224
225
226 0.10.0 (2016-07-16)
227 ^^^^^^^^^^^^^^^^^^^
228
229 * Refactor tests to use dockerized Postgres server `#107 <https://github.com/aio-libs/aiopg/issues/107>`_
230
231 * Reduce default pool minsize to 1 `#106 <https://github.com/aio-libs/aiopg/issues/106>`_
232
233 * Explicitly enumerate packages in setup.py `#85 <https://github.com/aio-libs/aiopg/issues/85>`_
234
235 * Remove expired connections from pool on acquire `#116 <https://github.com/aio-libs/aiopg/issues/116>`_
236
237 * Don't crash when Connection is GC'ed `#124 <https://github.com/aio-libs/aiopg/issues/124>`_
238
239 * Use loop.create_future() if available
240
241
242 0.9.2 (2016-01-31)
243 ^^^^^^^^^^^^^^^^^^
244
245 * Make pool.release return asyncio.Future, so we can wait on it in
246 `__aexit__` `#102 <https://github.com/aio-libs/aiopg/issues/102>`_
247
248 * Add support for uuid type `#103 <https://github.com/aio-libs/aiopg/issues/103>`_
249
250
251 0.9.1 (2016-01-17)
252 ^^^^^^^^^^^^^^^^^^
253
254 * Documentation update `#101 <https://github.com/aio-libs/aiopg/issues/101>`_
255
256
257 0.9.0 (2016-01-14)
258 ^^^^^^^^^^^^^^^^^^
259
260 * Add async context managers for transactions `#91 <https://github.com/aio-libs/aiopg/issues/91>`_
261
262 * Support async iterator in ResultProxy `#92 <https://github.com/aio-libs/aiopg/issues/92>`_
263
264 * Add async with for engine `#90 <https://github.com/aio-libs/aiopg/issues/90>`_
265
266
267 0.8.0 (2015-12-31)
268 ^^^^^^^^^^^^^^^^^^
269
270 * Add PostgreSQL notification support `#58 <https://github.com/aio-libs/aiopg/issues/58>`_
271
272 * Support pools with unlimited size `#59 <https://github.com/aio-libs/aiopg/issues/59>`_
273
274 * Cancel current DB operation on asyncio timeout `#66 <https://github.com/aio-libs/aiopg/issues/66>`_
275
276 * Add async with support for Pool, Connection, Cursor `#88 <https://github.com/aio-libs/aiopg/issues/88>`_
277
278
279 0.7.0 (2015-04-22)
280 ^^^^^^^^^^^^^^^^^^
281
282 * Get rid of resource leak on connection failure.
283
284 * Report ResourceWarning on non-closed connections.
285
286 * Deprecate iteration protocol support in cursor and ResultProxy.
287
288 * Release sa connection to pool on `connection.close()`.
289
290
291 0.6.0 (2015-02-03)
292 ^^^^^^^^^^^^^^^^^^
293
294 * Accept dict, list, tuple, named and positional parameters in
295 `SAConnection.execute()`
296
297
298 0.5.2 (2014-12-08)
299 ^^^^^^^^^^^^^^^^^^
300
301 * Minor release, fixes a bug that leaves connection in broken state
302 after `cursor.execute()` failure.
303
304
305 0.5.1 (2014-10-31)
306 ^^^^^^^^^^^^^^^^^^
307
308 * Fix a bug for processing transactions in line.
309
310
311 0.5.0 (2014-10-31)
312 ^^^^^^^^^^^^^^^^^^
313
314 * Add .terminate() to Pool and Engine
315
316 * Reimplement connection pool (now pool size cannot be greater than pool.maxsize)
317
318 * Add .close() and .wait_closed() to Pool and Engine
319
320 * Add minsize, maxsize, size and freesize properties to sa.Engine
321
322 * Support *echo* parameter for logging executed SQL commands
323
324 * Connection.close() is not a coroutine (but we keep backward compatibility).
325
326
327 0.4.1 (2014-10-02)
328 ^^^^^^^^^^^^^^^^^^
329
330 * make cursor iterable
331
332 * update docs
333
334
335 0.4.0 (2014-10-02)
336 ^^^^^^^^^^^^^^^^^^
337
338 * add timeouts for database operations.
339
340 * Autoregister psycopg2 support for json data type.
341
342 * Support JSON in aiopg.sa
343
344 * Support ARRAY in aiopg.sa
345
346 * Autoregister hstore support if present in connected DB
347
348 * Support HSTORE in aiopg.sa
349
350
351 0.3.2 (2014-07-07)
352 ^^^^^^^^^^^^^^^^^^
353
354 * change signature to cursor.execute(operation, parameters=None) to
355 follow psycopg2 convention.
356
357
358 0.3.1 (2014-07-04)
359 ^^^^^^^^^^^^^^^^^^
360
361 * Forward arguments to cursor constructor for pooled connections.
362
363
364 0.3.0 (2014-06-22)
365 ^^^^^^^^^^^^^^^^^^
366
367 * Allow executing SQLAlchemy DDL statements.
368
369 * Fix bug with race conditions on acquiring/releasing connections from pool.
370
371
372 0.2.3 (2014-06-12)
373 ^^^^^^^^^^^^^^^^^^
374
375 * Fix bug in connection pool.
376
377
378 0.2.2 (2014-06-07)
379 ^^^^^^^^^^^^^^^^^^
380
381 * Fix bug with passing parameters into SAConnection.execute when
382 executing raw SQL expression.
383
384
385 0.2.1 (2014-05-08)
386 ^^^^^^^^^^^^^^^^^^
387
388 * Close connection with invalid transaction status on returning to pool.
389
390
391 0.2.0 (2014-05-04)
392 ^^^^^^^^^^^^^^^^^^
393
394 * Implemented optional support for sqlalchemy functional sql layer.
395
396
397 0.1.0 (2014-04-06)
398 ^^^^^^^^^^^^^^^^^^
399
400 * Implemented plain connections: connect, Connection, Cursor.
401
402 * Implemented database pools: create_pool and Pool.
40317 Platform: macOS
40418 Platform: POSIX
40519 Platform: Windows
41125 Classifier: Programming Language :: Python :: 3.7
41226 Classifier: Programming Language :: Python :: 3.8
41327 Classifier: Programming Language :: Python :: 3.9
28 Classifier: Programming Language :: Python :: 3.10
41429 Classifier: Operating System :: POSIX
41530 Classifier: Operating System :: MacOS :: MacOS X
41631 Classifier: Operating System :: Microsoft :: Windows
42237 Requires-Python: >=3.6
42338 Description-Content-Type: text/x-rst
42439 Provides-Extra: sa
40 License-File: LICENSE
41
42 aiopg
43 =====
44 .. image:: https://github.com/aio-libs/aiopg/workflows/CI/badge.svg
45 :target: https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI
46 .. image:: https://codecov.io/gh/aio-libs/aiopg/branch/master/graph/badge.svg
47 :target: https://codecov.io/gh/aio-libs/aiopg
48 .. image:: https://badges.gitter.im/Join%20Chat.svg
49 :target: https://gitter.im/aio-libs/Lobby
50 :alt: Chat on Gitter
51
52 **aiopg** is a library for accessing a PostgreSQL_ database
53 from the asyncio_ (PEP-3156/tulip) framework. It wraps
54 asynchronous features of the Psycopg database driver.
55
56 Example
57 -------
58
59 .. code:: python
60
61 import asyncio
62 import aiopg
63
64 dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'
65
66 async def go():
67 pool = await aiopg.create_pool(dsn)
68 async with pool.acquire() as conn:
69 async with conn.cursor() as cur:
70 await cur.execute("SELECT 1")
71 ret = []
72 async for row in cur:
73 ret.append(row)
74 assert ret == [(1,)]
75
76 loop = asyncio.get_event_loop()
77 loop.run_until_complete(go())
78
79
80 Example of SQLAlchemy optional integration
81 ------------------------------------------
82
83 .. code:: python
84
85 import asyncio
86 from aiopg.sa import create_engine
87 import sqlalchemy as sa
88
89 metadata = sa.MetaData()
90
91 tbl = sa.Table('tbl', metadata,
92 sa.Column('id', sa.Integer, primary_key=True),
93 sa.Column('val', sa.String(255)))
94
95 async def create_table(engine):
96 async with engine.acquire() as conn:
97 await conn.execute('DROP TABLE IF EXISTS tbl')
98 await conn.execute('''CREATE TABLE tbl (
99 id serial PRIMARY KEY,
100 val varchar(255))''')
101
102 async def go():
103 async with create_engine(user='aiopg',
104 database='aiopg',
105 host='127.0.0.1',
106 password='passwd') as engine:
107
108 async with engine.acquire() as conn:
109 await conn.execute(tbl.insert().values(val='abc'))
110
111 async for row in conn.execute(tbl.select()):
112 print(row.id, row.val)
113
114 loop = asyncio.get_event_loop()
115 loop.run_until_complete(go())
116
117 .. _PostgreSQL: http://www.postgresql.org/
118 .. _asyncio: https://docs.python.org/3/library/asyncio.html
119
120 Please use::
121
122 $ make test
123
124 for executing the project's unittests.
125 See https://aiopg.readthedocs.io/en/stable/contributing.html for details
126 on how to set up your environment to run the tests.
127
128 Changelog
129 ---------
130
131 1.3.3 (2021-11-01)
132 ^^^^^^^^^^^^^^^^^^
133
134 * Support async-timeout 4.0+
135
136
137 1.3.2 (2021-10-07)
138 ^^^^^^^^^^^^^^^^^^
139
140
141 1.3.2b2 (2021-10-07)
142 ^^^^^^^^^^^^^^^^^^^^
143
144 * Respect use_labels for select statement `#882 <https://github.com/aio-libs/aiopg/pull/882>`_
145
146
147 1.3.2b1 (2021-07-11)
148 ^^^^^^^^^^^^^^^^^^^^
149
150 * Fix compatibility with SQLAlchemy >= 1.4 `#870 <https://github.com/aio-libs/aiopg/pull/870>`_
151
152
153 1.3.1 (2021-07-08)
154 ^^^^^^^^^^^^^^^^^^
155
156
157 1.3.1b2 (2021-07-06)
158 ^^^^^^^^^^^^^^^^^^^^
159
160 * Suppress "Future exception was never retrieved" `#862 <https://github.com/aio-libs/aiopg/pull/862>`_
161
162
163 1.3.1b1 (2021-07-05)
164 ^^^^^^^^^^^^^^^^^^^^
165
166 * Fix ClosableQueue.get on cancellation, close it on Connection.close `#859 <https://github.com/aio-libs/aiopg/pull/859>`_
167
168
169 1.3.0 (2021-06-30)
170 ^^^^^^^^^^^^^^^^^^
171
172
173 1.3.0b4 (2021-06-28)
174 ^^^^^^^^^^^^^^^^^^^^
175
176 * Fix "Unable to detect disconnect when using NOTIFY/LISTEN" `#559 <https://github.com/aio-libs/aiopg/pull/559>`_
177
178
179 1.3.0b3 (2021-04-03)
180 ^^^^^^^^^^^^^^^^^^^^
181
182 * Reformat using black `#814 <https://github.com/aio-libs/aiopg/pull/814>`_
183
184
185 1.3.0b2 (2021-04-02)
186 ^^^^^^^^^^^^^^^^^^^^
187
188 * Type annotations `#813 <https://github.com/aio-libs/aiopg/pull/813>`_
189
190
191 1.3.0b1 (2021-03-30)
192 ^^^^^^^^^^^^^^^^^^^^
193
194 * Raise ResourceClosedError if we try to open a cursor on a closed SAConnection `#811 <https://github.com/aio-libs/aiopg/pull/811>`_
195
196
197 1.3.0b0 (2021-03-25)
198 ^^^^^^^^^^^^^^^^^^^^
199
200 * Fix compatibility with SA 1.4 for IN statement `#806 <https://github.com/aio-libs/aiopg/pull/806>`_
201
202
203 1.2.1 (2021-03-23)
204 ^^^^^^^^^^^^^^^^^^
205
206 * Pop loop in connection init due to backward compatibility `#808 <https://github.com/aio-libs/aiopg/pull/808>`_
207
208
209 1.2.0b4 (2021-03-23)
210 ^^^^^^^^^^^^^^^^^^^^
211
212 * Set max supported sqlalchemy version `#805 <https://github.com/aio-libs/aiopg/pull/805>`_
213
214
215 1.2.0b3 (2021-03-22)
216 ^^^^^^^^^^^^^^^^^^^^
217
218 * Don't run ROLLBACK when the connection is closed `#778 <https://github.com/aio-libs/aiopg/pull/778>`_
219
220 * Multiple cursors support `#801 <https://github.com/aio-libs/aiopg/pull/801>`_
221
222
223 1.2.0b2 (2020-12-21)
224 ^^^^^^^^^^^^^^^^^^^^
225
226 * Fix IsolationLevel.read_committed and introduce IsolationLevel.default `#770 <https://github.com/aio-libs/aiopg/pull/770>`_
227
228 * Fix python 3.8 warnings in tests `#771 <https://github.com/aio-libs/aiopg/pull/771>`_
229
230
231 1.2.0b1 (2020-12-16)
232 ^^^^^^^^^^^^^^^^^^^^
233
234 * Deprecate blocking connection.cancel() method `#570 <https://github.com/aio-libs/aiopg/pull/570>`_
235
236
237 1.2.0b0 (2020-12-15)
238 ^^^^^^^^^^^^^^^^^^^^
239
240 * Implement timeout on acquiring connection from pool `#766 <https://github.com/aio-libs/aiopg/pull/766>`_
241
242
243 1.1.0 (2020-12-10)
244 ^^^^^^^^^^^^^^^^^^
245
246
247 1.1.0b2 (2020-12-09)
248 ^^^^^^^^^^^^^^^^^^^^
249
250 * Added missing slots to context managers `#763 <https://github.com/aio-libs/aiopg/pull/763>`_
251
252
253 1.1.0b1 (2020-12-07)
254 ^^^^^^^^^^^^^^^^^^^^
255
256 * Fix on_connect multiple call on acquire `#552 <https://github.com/aio-libs/aiopg/pull/552>`_
257
258 * Fix python 3.8 warnings `#622 <https://github.com/aio-libs/aiopg/pull/642>`_
259
260 * Bump minimum psycopg version to 2.8.4 `#754 <https://github.com/aio-libs/aiopg/pull/754>`_
261
262 * Fix Engine.release method to release connection in any way `#756 <https://github.com/aio-libs/aiopg/pull/756>`_
263
264
265 1.0.0 (2019-09-20)
266 ^^^^^^^^^^^^^^^^^^
267
268 * Removal of an asynchronous call in favor of issues # 550
269
270 * Big editing of documentation and minor bugs #534
271
272
273 0.16.0 (2019-01-25)
274 ^^^^^^^^^^^^^^^^^^^
275
276 * Fix select priority name `#525 <https://github.com/aio-libs/aiopg/issues/525>`_
277
278 * Rename `psycopg2` to `psycopg2-binary` to fix deprecation warning `#507 <https://github.com/aio-libs/aiopg/issues/507>`_
279
280 * Fix `#189 <https://github.com/aio-libs/aiopg/issues/189>`_ hstore when using ReadDictCursor `#512 <https://github.com/aio-libs/aiopg/issues/512>`_
281
282 * close cannot be used while an asynchronous query is underway `#452 <https://github.com/aio-libs/aiopg/issues/452>`_
283
284 * sqlalchemy adapter trx begin allow transaction_mode `#498 <https://github.com/aio-libs/aiopg/issues/498>`_
285
286
287 0.15.0 (2018-08-14)
288 ^^^^^^^^^^^^^^^^^^^
289
290 * Support Python 3.7 `#437 <https://github.com/aio-libs/aiopg/issues/437>`_
291
292
293 0.14.0 (2018-05-10)
294 ^^^^^^^^^^^^^^^^^^^
295
296 * Add ``get_dialect`` func to have ability to pass ``json_serializer`` `#451 <https://github.com/aio-libs/aiopg/issues/451>`_
297
298
299 0.13.2 (2018-01-03)
300 ^^^^^^^^^^^^^^^^^^^
301
302 * Fixed compatibility with SQLAlchemy 1.2.0 `#412 <https://github.com/aio-libs/aiopg/issues/412>`_
303
304 * Added support for transaction isolation levels `#219 <https://github.com/aio-libs/aiopg/issues/219>`_
305
306
307 0.13.1 (2017-09-10)
308 ^^^^^^^^^^^^^^^^^^^
309
310 * Added connection poll recycling logic `#373 <https://github.com/aio-libs/aiopg/issues/373>`_
311
312
313 0.13.0 (2016-12-02)
314 ^^^^^^^^^^^^^^^^^^^
315
316 * Add `async with` support to `.begin_nested()` `#208 <https://github.com/aio-libs/aiopg/issues/208>`_
317
318 * Fix connection.cancel() `#212 <https://github.com/aio-libs/aiopg/issues/212>`_ `#223 <https://github.com/aio-libs/aiopg/issues/223>`_
319
320 * Raise informative error on unexpected connection closing `#191 <https://github.com/aio-libs/aiopg/issues/191>`_
321
322 * Added support for python types columns issues `#217 <https://github.com/aio-libs/aiopg/issues/217>`_
323
324 * Added support for default values in SA table issues `#206 <https://github.com/aio-libs/aiopg/issues/206>`_
325
326
327 0.12.0 (2016-10-09)
328 ^^^^^^^^^^^^^^^^^^^
329
330 * Add an on_connect callback parameter to pool `#141 <https://github.com/aio-libs/aiopg/issues/141>`_
331
332 * Fixed connection to work under both windows and posix based systems `#142 <https://github.com/aio-libs/aiopg/issues/142>`_
333
334
335 0.11.0 (2016-09-12)
336 ^^^^^^^^^^^^^^^^^^^
337
338 * Immediately remove callbacks from a closed file descriptor `#139 <https://github.com/aio-libs/aiopg/issues/139>`_
339
340 * Drop Python 3.3 support
341
342
343 0.10.0 (2016-07-16)
344 ^^^^^^^^^^^^^^^^^^^
345
346 * Refactor tests to use dockerized Postgres server `#107 <https://github.com/aio-libs/aiopg/issues/107>`_
347
348 * Reduce default pool minsize to 1 `#106 <https://github.com/aio-libs/aiopg/issues/106>`_
349
350 * Explicitly enumerate packages in setup.py `#85 <https://github.com/aio-libs/aiopg/issues/85>`_
351
352 * Remove expired connections from pool on acquire `#116 <https://github.com/aio-libs/aiopg/issues/116>`_
353
354 * Don't crash when Connection is GC'ed `#124 <https://github.com/aio-libs/aiopg/issues/124>`_
355
356 * Use loop.create_future() if available
357
358
359 0.9.2 (2016-01-31)
360 ^^^^^^^^^^^^^^^^^^
361
362 * Make pool.release return asyncio.Future, so we can wait on it in
363 `__aexit__` `#102 <https://github.com/aio-libs/aiopg/issues/102>`_
364
365 * Add support for uuid type `#103 <https://github.com/aio-libs/aiopg/issues/103>`_
366
367
368 0.9.1 (2016-01-17)
369 ^^^^^^^^^^^^^^^^^^
370
371 * Documentation update `#101 <https://github.com/aio-libs/aiopg/issues/101>`_
372
373
374 0.9.0 (2016-01-14)
375 ^^^^^^^^^^^^^^^^^^
376
377 * Add async context managers for transactions `#91 <https://github.com/aio-libs/aiopg/issues/91>`_
378
379 * Support async iterator in ResultProxy `#92 <https://github.com/aio-libs/aiopg/issues/92>`_
380
381 * Add async with for engine `#90 <https://github.com/aio-libs/aiopg/issues/90>`_
382
383
384 0.8.0 (2015-12-31)
385 ^^^^^^^^^^^^^^^^^^
386
387 * Add PostgreSQL notification support `#58 <https://github.com/aio-libs/aiopg/issues/58>`_
388
389 * Support pools with unlimited size `#59 <https://github.com/aio-libs/aiopg/issues/59>`_
390
391 * Cancel current DB operation on asyncio timeout `#66 <https://github.com/aio-libs/aiopg/issues/66>`_
392
393 * Add async with support for Pool, Connection, Cursor `#88 <https://github.com/aio-libs/aiopg/issues/88>`_
394
395
396 0.7.0 (2015-04-22)
397 ^^^^^^^^^^^^^^^^^^
398
399 * Get rid of resource leak on connection failure.
400
401 * Report ResourceWarning on non-closed connections.
402
403 * Deprecate iteration protocol support in cursor and ResultProxy.
404
405 * Release sa connection to pool on `connection.close()`.
406
407
408 0.6.0 (2015-02-03)
409 ^^^^^^^^^^^^^^^^^^
410
411 * Accept dict, list, tuple, named and positional parameters in
412 `SAConnection.execute()`
413
414
415 0.5.2 (2014-12-08)
416 ^^^^^^^^^^^^^^^^^^
417
418 * Minor release, fixes a bug that leaves connection in broken state
419 after `cursor.execute()` failure.
420
421
422 0.5.1 (2014-10-31)
423 ^^^^^^^^^^^^^^^^^^
424
425 * Fix a bug for processing transactions in line.
426
427
428 0.5.0 (2014-10-31)
429 ^^^^^^^^^^^^^^^^^^
430
431 * Add .terminate() to Pool and Engine
432
433 * Reimplement connection pool (now pool size cannot be greater than pool.maxsize)
434
435 * Add .close() and .wait_closed() to Pool and Engine
436
437 * Add minsize, maxsize, size and freesize properties to sa.Engine
438
439 * Support *echo* parameter for logging executed SQL commands
440
441 * Connection.close() is not a coroutine (but we keep backward compatibility).
442
443
444 0.4.1 (2014-10-02)
445 ^^^^^^^^^^^^^^^^^^
446
447 * make cursor iterable
448
449 * update docs
450
451
452 0.4.0 (2014-10-02)
453 ^^^^^^^^^^^^^^^^^^
454
455 * add timeouts for database operations.
456
457 * Autoregister psycopg2 support for json data type.
458
459 * Support JSON in aiopg.sa
460
461 * Support ARRAY in aiopg.sa
462
463 * Autoregister hstore support if present in connected DB
464
465 * Support HSTORE in aiopg.sa
466
467
468 0.3.2 (2014-07-07)
469 ^^^^^^^^^^^^^^^^^^
470
471 * change signature to cursor.execute(operation, parameters=None) to
472 follow psycopg2 convention.
473
474
475 0.3.1 (2014-07-04)
476 ^^^^^^^^^^^^^^^^^^
477
478 * Forward arguments to cursor constructor for pooled connections.
479
480
481 0.3.0 (2014-06-22)
482 ^^^^^^^^^^^^^^^^^^
483
484 * Allow executing SQLAlchemy DDL statements.
485
486 * Fix bug with race conditions on acquiring/releasing connections from pool.
487
488
489 0.2.3 (2014-06-12)
490 ^^^^^^^^^^^^^^^^^^
491
492 * Fix bug in connection pool.
493
494
495 0.2.2 (2014-06-07)
496 ^^^^^^^^^^^^^^^^^^
497
498 * Fix bug with passing parameters into SAConnection.execute when
499 executing raw SQL expression.
500
501
502 0.2.1 (2014-05-08)
503 ^^^^^^^^^^^^^^^^^^
504
505 * Close connection with invalid transaction status on returning to pool.
506
507
508 0.2.0 (2014-05-04)
509 ^^^^^^^^^^^^^^^^^^
510
511 * Implemented optional support for sqlalchemy functional sql layer.
512
513
514 0.1.0 (2014-04-06)
515 ^^^^^^^^^^^^^^^^^^
516
517 * Implemented plain connections: connect, Connection, Cursor.
518
519 * Implemented database pools: create_pool and Pool.
520
66 setup.py
77 aiopg/__init__.py
88 aiopg/connection.py
9 aiopg/cursor.py
109 aiopg/log.py
1110 aiopg/pool.py
12 aiopg/transaction.py
1311 aiopg/utils.py
1412 aiopg.egg-info/PKG-INFO
1513 aiopg.egg-info/SOURCES.txt
2119 aiopg/sa/engine.py
2220 aiopg/sa/exc.py
2321 aiopg/sa/result.py
24 aiopg/sa/transaction.py
22 aiopg/sa/transaction.py
23 aiopg/sa/utils.py
00 psycopg2-binary>=2.8.4
1 async_timeout<4.0,>=3.0
1 async_timeout<5.0,>=3.0
22
33 [sa]
4 sqlalchemy[postgresql_psycopg2binary]>=1.1
4 sqlalchemy[postgresql_psycopg2binary]<1.5,>=1.3
0 import os
10 import re
1 from pathlib import Path
22
33 from setuptools import setup, find_packages
44
5 install_requires = ['psycopg2-binary>=2.8.4', 'async_timeout>=3.0,<4.0']
6 extras_require = {'sa': ['sqlalchemy[postgresql_psycopg2binary]>=1.1']}
5 install_requires = ["psycopg2-binary>=2.8.4", "async_timeout>=3.0,<5.0"]
6 extras_require = {"sa": ["sqlalchemy[postgresql_psycopg2binary]>=1.3,<1.5"]}
77
88
9 def read(f):
10 return open(os.path.join(os.path.dirname(__file__), f)).read().strip()
9 def read(*parts):
10 return Path(__file__).resolve().parent.joinpath(*parts).read_text().strip()
1111
1212
13 def get_maintainers(path='MAINTAINERS.txt'):
14 with open(os.path.join(os.path.dirname(__file__), path)) as f:
15 return ', '.join(x.strip().strip('*').strip() for x in f.readlines())
13 def get_maintainers(path="MAINTAINERS.txt"):
14 return ", ".join(x.strip().strip("*").strip() for x in read(path).splitlines())
1615
1716
1817 def read_version():
19 regexp = re.compile(r"^__version__\W*=\W*'([\d.abrc]+)'")
20 init_py = os.path.join(os.path.dirname(__file__), 'aiopg', '__init__.py')
21 with open(init_py) as f:
22 for line in f:
23 match = regexp.match(line)
24 if match is not None:
25 return match.group(1)
26 else:
27 raise RuntimeError('Cannot find version in aiopg/__init__.py')
18 regexp = re.compile(r"^__version__\W*=\W*\"([\d.abrc]+)\"")
19 for line in read("aiopg", "__init__.py").splitlines():
20 match = regexp.match(line)
21 if match is not None:
22 return match.group(1)
23
24 raise RuntimeError("Cannot find version in aiopg/__init__.py")
2825
2926
30 def read_changelog(path='CHANGES.txt'):
31 return 'Changelog\n---------\n\n{}'.format(read(path))
27 def read_changelog(path="CHANGES.txt"):
28 return f"Changelog\n---------\n\n{read(path)}"
3229
3330
3431 classifiers = [
35 'License :: OSI Approved :: BSD License',
36 'Intended Audience :: Developers',
37 'Programming Language :: Python :: 3',
38 'Programming Language :: Python :: 3 :: Only',
39 'Programming Language :: Python :: 3.6',
40 'Programming Language :: Python :: 3.7',
41 'Programming Language :: Python :: 3.8',
42 'Programming Language :: Python :: 3.9',
43 'Operating System :: POSIX',
44 'Operating System :: MacOS :: MacOS X',
45 'Operating System :: Microsoft :: Windows',
46 'Environment :: Web Environment',
47 'Development Status :: 5 - Production/Stable',
48 'Topic :: Database',
49 'Topic :: Database :: Front-Ends',
50 'Framework :: AsyncIO',
32 "License :: OSI Approved :: BSD License",
33 "Intended Audience :: Developers",
34 "Programming Language :: Python :: 3",
35 "Programming Language :: Python :: 3 :: Only",
36 "Programming Language :: Python :: 3.6",
37 "Programming Language :: Python :: 3.7",
38 "Programming Language :: Python :: 3.8",
39 "Programming Language :: Python :: 3.9",
40 "Programming Language :: Python :: 3.10",
41 "Operating System :: POSIX",
42 "Operating System :: MacOS :: MacOS X",
43 "Operating System :: Microsoft :: Windows",
44 "Environment :: Web Environment",
45 "Development Status :: 5 - Production/Stable",
46 "Topic :: Database",
47 "Topic :: Database :: Front-Ends",
48 "Framework :: AsyncIO",
5149 ]
5250
5351 setup(
54 name='aiopg',
52 name="aiopg",
5553 version=read_version(),
56 description='Postgres integration with asyncio.',
57 long_description='\n\n'.join((read('README.rst'), read_changelog())),
58 long_description_content_type='text/x-rst',
54 description="Postgres integration with asyncio.",
55 long_description="\n\n".join((read("README.rst"), read_changelog())),
56 long_description_content_type="text/x-rst",
5957 classifiers=classifiers,
60 platforms=['macOS', 'POSIX', 'Windows'],
61 author='Andrew Svetlov',
62 python_requires='>=3.6',
58 platforms=["macOS", "POSIX", "Windows"],
59 author="Andrew Svetlov",
60 python_requires=">=3.6",
6361 project_urls={
64 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',
65 'CI: GA': 'https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI',
66 'Coverage: codecov': 'https://codecov.io/gh/aio-libs/aiopg',
67 'Docs: RTD': 'https://aiopg.readthedocs.io',
68 'GitHub: issues': 'https://github.com/aio-libs/aiopg/issues',
69 'GitHub: repo': 'https://github.com/aio-libs/aiopg',
62 "Chat: Gitter": "https://gitter.im/aio-libs/Lobby",
63 "CI: GA": "https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI",
64 "Coverage: codecov": "https://codecov.io/gh/aio-libs/aiopg",
65 "Docs: RTD": "https://aiopg.readthedocs.io",
66 "GitHub: issues": "https://github.com/aio-libs/aiopg/issues",
67 "GitHub: repo": "https://github.com/aio-libs/aiopg",
7068 },
71 author_email='andrew.svetlov@gmail.com',
69 author_email="andrew.svetlov@gmail.com",
7270 maintainer=get_maintainers(),
73 maintainer_email='virmir49@gmail.com',
74 url='https://aiopg.readthedocs.io',
75 download_url='https://pypi.python.org/pypi/aiopg',
76 license='BSD',
71 maintainer_email="virmir49@gmail.com",
72 url="https://aiopg.readthedocs.io",
73 download_url="https://pypi.python.org/pypi/aiopg",
74 license="BSD",
7775 packages=find_packages(),
7876 install_requires=install_requires,
7977 extras_require=extras_require,
80 include_package_data=True
78 include_package_data=True,
8179 )