Codebase list httpcore / upstream/0.14.3
New upstream version 0.14.3 Sandro Tosi 2 years ago
108 changed file(s) with 8112 addition(s) and 8718 deletion(s).
1313
1414 strategy:
1515 matrix:
16 python-version: ["3.6", "3.7", "3.8", "3.9", "3.10.0-beta.4"]
16 python-version: ["3.6", "3.7", "3.8", "3.9", "3.10"]
1717
1818 steps:
1919 - uses: "actions/checkout@v2"
22 All notable changes to this project will be documented in this file.
33
44 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
5
6 ## 0.14.3 (November 17th, 2021)
7
8 - Fix race condition when removing closed connections from the pool (#437)
9
10 ## 0.14.2 (November 16th, 2021)
11
12 - Failed connections no longer remain in the pool. (Pull #433)
13
14 ## 0.14.1 (November 12th, 2021)
15
16 - `max_connections` becomes optional. (Pull #429)
17 - `certifi` is now included in the install dependencies. (Pull #428)
18 - `h2` is now strictly optional. (Pull #428)
19
20 ## 0.14.0 (November 11th, 2021)
21
22 The 0.14 release is a complete reworking of `httpcore`, comprehensively addressing some underlying issues in the connection pooling, as well as substantially redesigning the API to be more user friendly.
23
24 Some of the lower-level API design also makes the components more easily testable in isolation, and the package now has 100% test coverage.
25
26 See [discussion #419](https://github.com/encode/httpcore/discussions/419) for a little more background.
27
28 There are some other neat bits in there too, such as the "trace" extension, which gives a hook into inspecting the internal events that occur during the request/response cycle. This extension is needed for the HTTPX CLI, in order to...
29
30 * Log the point at which the connection is established, and the IP/port on which it is made.
31 * Determine if the outgoing request should log as HTTP/1.1 or HTTP/2, rather than having to assume it's HTTP/2 if the --http2 flag was passed. (Which may not actually be true.)
32 * Log SSL version info / certificate info.
33
34 Note that `curio` support is not currently available in 0.14.0. If you're using `httpcore` with `curio` please get in touch, so we can assess if we ought to prioritize it as a feature or not.
535
636 ## 0.13.7 (September 13th, 2021)
737
89119
90120 ### Fixed
91121
92 - Task cancelation no longer leaks connections from the connection pool. (Pull #305)
122 - Task cancellation no longer leaks connections from the connection pool. (Pull #305)
93123
94124 ## 0.12.3 (December 7th, 2020)
95125
1616 Some things HTTP Core does do:
1717
1818 * Sending HTTP requests.
19 * Thread-safe / task-safe connection pooling.
20 * HTTP(S) proxy support.
21 * Supports HTTP/1.1 and HTTP/2.
1922 * Provides both sync and async interfaces.
20 * Supports HTTP/1.1 and HTTP/2.
21 * Async backend support for `asyncio`, `trio` and `curio`.
22 * Automatic connection pooling.
23 * HTTP(S) proxy support.
23 * Async backend support for `asyncio` and `trio`.
2424
2525 ## Installation
2626
3636 $ pip install httpcore[http2]
3737 ```
3838
39 ## Quickstart
39 # Sending requests
4040
41 Here's an example of making an HTTP GET request using `httpcore`...
41 Send an HTTP request:
4242
4343 ```python
44 with httpcore.SyncConnectionPool() as http:
45 status_code, headers, stream, extensions = http.handle_request(
46 method=b'GET',
47 url=(b'https', b'example.org', 443, b'/'),
48 headers=[(b'host', b'example.org'), (b'user-agent', b'httpcore')],
49 stream=httpcore.ByteStream(b''),
50 extensions={}
51 )
52 body = stream.read()
53 print(status_code, body)
44 import httpcore
45
46 response = httpcore.request("GET", "https://www.example.com/")
47
48 print(response)
49 # <Response [200]>
50 print(response.status)
51 # 200
52 print(response.headers)
53 # [(b'Accept-Ranges', b'bytes'), (b'Age', b'557328'), (b'Cache-Control', b'max-age=604800'), ...]
54 print(response.content)
55 # b'<!doctype html>\n<html>\n<head>\n<title>Example Domain</title>\n\n<meta charset="utf-8"/>\n ...'
5456 ```
5557
56 Or, using async...
58 The top-level `httpcore.request()` function is provided for convenience. In practice whenever you're working with `httpcore` you'll want to use the connection pooling functionality that it provides.
5759
5860 ```python
59 async with httpcore.AsyncConnectionPool() as http:
60 status_code, headers, stream, extensions = await http.handle_async_request(
61 method=b'GET',
62 url=(b'https', b'example.org', 443, b'/'),
63 headers=[(b'host', b'example.org'), (b'user-agent', b'httpcore')],
64 stream=httpcore.ByteStream(b''),
65 extensions={}
66 )
67 body = await stream.aread()
68 print(status_code, body)
61 import httpcore
62
63 http = httpcore.ConnectionPool()
64 response = http.request("GET", "https://www.example.com/")
6965 ```
66
67 Once you're ready to get going, [head over to the documentation](https://www.encode.io/httpcore/).
7068
7169 ## Motivation
7270
73 You probably don't want to be using HTTP Core directly. It might make sense if
71 You *probably* don't want to be using HTTP Core directly. It might make sense if
7472 you're writing something like a proxy service in Python, and you just want
7573 something at the lowest possible level, but more typically you'll want to use
7674 a higher level client library, such as `httpx`.
docs/api.md (0 additions, 82 deletions)
0 # Developer Interface
1
2 ## Async API Overview
3
4 ### Base async interfaces
5
6 These classes provide the base interface which transport classes need to implement.
7
8 :::{eval-rst}
9 .. autoclass:: httpcore.AsyncHTTPTransport
10 :members: handle_async_request, aclose
11
12 .. autoclass:: httpcore.AsyncByteStream
13 :members: __aiter__, aclose
14 :::
15
16 ### Async connection pool
17
18 :::{eval-rst}
19 .. autoclass:: httpcore.AsyncConnectionPool
20 :show-inheritance:
21 :::
22
23 ### Async proxy
24
25 :::{eval-rst}
26 .. autoclass:: httpcore.AsyncHTTPProxy
27 :show-inheritance:
28 :::
29
30 ### Async byte streams
31
32 These classes are concrete implementations of [`AsyncByteStream`](httpcore.AsyncByteStream).
33
34 :::{eval-rst}
35 .. autoclass:: httpcore.ByteStream
36 :show-inheritance:
37
38 .. autoclass:: httpcore.AsyncIteratorByteStream
39 :show-inheritance:
40 :::
41
42 ## Sync API Overview
43
44 ### Base sync interfaces
45
46 These classes provide the base interface which transport classes need to implement.
47
48 :::{eval-rst}
49 .. autoclass:: httpcore.SyncHTTPTransport
50 :members: request, close
51
52 .. autoclass:: httpcore.SyncByteStream
53 :members: __iter__, close
54 :::
55
56 ### Sync connection pool
57
58 :::{eval-rst}
59 .. autoclass:: httpcore.SyncConnectionPool
60 :show-inheritance:
61 :::
62
63 ### Sync proxy
64
65 :::{eval-rst}
66 .. autoclass:: httpcore.SyncHTTPProxy
67 :show-inheritance:
68 :::
69
70 ### Sync byte streams
71
72 These classes are concrete implementations of [`SyncByteStream`](httpcore.SyncByteStream).
73
74 :::{eval-rst}
75 .. autoclass:: httpcore.ByteStream
76 :show-inheritance:
77 :noindex:
78
79 .. autoclass:: httpcore.IteratorByteStream
80 :show-inheritance:
81 :::
0 # Async Support
1
2 The `httpcore` package offers a standard synchronous API by default, but also gives you the option of an async client if you need it.
3
4 Async is a concurrency model that is far more efficient than multi-threading, and can provide significant performance benefits and enable the use of long-lived network connections such as WebSockets.
5
6 If you're working with an async web framework then you'll also want to use an async client for sending outgoing HTTP requests.
7
8 Launching concurrent async tasks is far more resource efficient than spawning multiple threads. The Python interpreter should be able to comfortably handle switching between over 1000 concurrent tasks, while a sensible thread pool size might be around 10 or 20 concurrent threads.
9
10 ## API differences
11
12 When using async support, you need to make sure to use an async connection pool class:
13
14 ```python
15 # The async variation of `httpcore.ConnectionPool`
16 async with httpcore.AsyncConnectionPool() as http:
17 ...
18 ```
19
20 Or if connecting via a proxy:
21
22 ```python
23 # The async variation of `httpcore.HTTPProxy`
24 async with httpcore.AsyncHTTPProxy() as proxy:
25 ...
26 ```
27
28 ### Sending requests
29
30 Sending requests with the async version of `httpcore` requires the `await` keyword:
31
32 ```python
33 import asyncio
34 import httpcore
35
36 async def main():
37 async with httpcore.AsyncConnectionPool() as http:
38 response = await http.request("GET", "https://www.example.com/")
39
40
41 asyncio.run(main())
42 ```
43
44 When including content in the request, the content must either be bytes or an *async iterable* yielding bytes.
45
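For example, here's a minimal sketch of sending a request body as an async iterable. The async generator and the `https://httpbin.org/post` endpoint are illustrative choices:

```python
import asyncio
import httpcore


async def produce_body():
    # Any async iterable yielding bytes can be passed as request content.
    yield b"Hello, "
    yield b"world!"


async def main():
    async with httpcore.AsyncConnectionPool() as http:
        response = await http.request(
            "POST", "https://httpbin.org/post", content=produce_body()
        )
        print(response.status)


asyncio.run(main())
```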
46 ### Streaming responses
47
48 Streaming responses also require a slightly different interface to the sync version:
49
50 * `with <pool>.stream(...) as response` → `async with <pool>.stream(...) as response`.
51 * `for chunk in response.iter_stream()` → `async for chunk in response.aiter_stream()`.
52 * `response.read()` → `await response.aread()`.
53 * `response.close()` → `await response.aclose()`.
54
55 For example:
56
57 ```python
58 import asyncio
59 import httpcore
60
61
62 async def main():
63 async with httpcore.AsyncConnectionPool() as http:
64 async with http.stream("GET", "https://www.example.com/") as response:
65 async for chunk in response.aiter_stream():
66 print(f"Downloaded: {chunk}")
67
68
69 asyncio.run(main())
70 ```
71
72 ### Pool lifespans
73
74 When using `httpcore` in an async environment it is strongly recommended that you instantiate and use connection pools using the context managed style:
75
76 ```python
77 async with httpcore.AsyncConnectionPool() as http:
78 ...
79 ```
80
81 To benefit from connection pooling it is recommended that you instantiate a single connection pool in this style, and pass it around throughout your application.
82
83 If you do want to use a connection pool without this style then you'll need to ensure that you explicitly close the pool once it is no longer required:
84
85 ```python
86 try:
87 http = httpcore.AsyncConnectionPool()
88 ...
89 finally:
90 await http.aclose()
91 ```
92
93 This is a little different to the threaded context, where it's okay to simply instantiate a globally available connection pool, and then allow Python's garbage collection to deal with closing any connections in the pool, once the `__del__` method is called.
94
95 The reason for this difference is that asynchronous code is not able to run within the context of the synchronous `__del__` method, so there is no way for connections to be automatically closed at the point of garbage collection. This can lead to unterminated TCP connections still remaining after the Python interpreter quits.
96
97 ## Supported environments
98
99 `httpcore` supports either `asyncio` or `trio` as an async environment.
100
101 It will auto-detect which of those two to use as the backend for socket operations and concurrency primitives.
102
103 ### AsyncIO
104
105 AsyncIO is Python's [built-in library](https://docs.python.org/3/library/asyncio.html) for writing concurrent code with the async/await syntax.
106
107 Let's take a look at sending several outgoing HTTP requests concurrently, using `asyncio`:
108
109 ```python
110 import asyncio
111 import httpcore
112 import time
113
114
115 async def download(http, year):
116 await http.request("GET", f"https://en.wikipedia.org/wiki/{year}")
117
118
119 async def main():
120 async with httpcore.AsyncConnectionPool() as http:
121 started = time.time()
122 # Here we use `asyncio.gather()` in order to run several tasks concurrently...
123 tasks = [download(http, year) for year in range(2000, 2020)]
124 await asyncio.gather(*tasks)
125 complete = time.time()
126
127 for connection in http.connections:
128 print(connection)
129 print("Complete in %.3f seconds" % (complete - started))
130
131
132 asyncio.run(main())
133 ```
134
135 ### Trio
136
137 Trio is [an alternative async library](https://trio.readthedocs.io/en/stable/), designed around [the principles of structured concurrency](https://en.wikipedia.org/wiki/Structured_concurrency).
138
139 ```python
140 import httpcore
141 import trio
142 import time
143
144
145 async def download(http, year):
146 await http.request("GET", f"https://en.wikipedia.org/wiki/{year}")
147
148
149 async def main():
150 async with httpcore.AsyncConnectionPool() as http:
151 started = time.time()
152 async with trio.open_nursery() as nursery:
153 for year in range(2000, 2020):
154 nursery.start_soon(download, http, year)
155 complete = time.time()
156
157 for connection in http.connections:
158 print(connection)
159 print("Complete in %.3f seconds" % (complete - started))
160
161
162 trio.run(main)
163 ```
164
165 ### AnyIO
166
167 AnyIO is an [asynchronous networking and concurrency library](https://anyio.readthedocs.io/) that works on top of either asyncio or trio. It blends in with native libraries of your chosen backend (defaults to asyncio).
168
169 The `anyio` library is designed around [the principles of structured concurrency](https://en.wikipedia.org/wiki/Structured_concurrency), and brings many of the same correctness and usability benefits that Trio provides, while interoperating with existing `asyncio` libraries.
170
171 ```python
172 import httpcore
173 import anyio
174 import time
175
176
177 async def download(http, year):
178 await http.request("GET", f"https://en.wikipedia.org/wiki/{year}")
179
180
181 async def main():
182 async with httpcore.AsyncConnectionPool() as http:
183 started = time.time()
184 async with anyio.create_task_group() as task_group:
185 for year in range(2000, 2020):
186 task_group.start_soon(download, http, year)
187 complete = time.time()
188
189 for connection in http.connections:
190 print(connection)
191 print("Complete in %.3f seconds" % (complete - started))
192
193
194 anyio.run(main)
195 ```
196
197 ---
198
199 # Reference
200
201 ## `httpcore.AsyncConnectionPool`
202
203 ::: httpcore.AsyncConnectionPool
204 handler: python
205 rendering:
206 show_source: False
207
208 ## `httpcore.AsyncHTTPProxy`
209
210 ::: httpcore.AsyncHTTPProxy
211 handler: python
212 rendering:
213 show_source: False
docs/conf.py (0 additions, 60 deletions)
0 # See: https://www.sphinx-doc.org/en/master/usage/configuration.html
1
2 # -- Path setup --
3
4 import os
5 import sys
6
7 # Allow sphinx-autodoc to access `httpcore` contents.
8 sys.path.insert(0, os.path.abspath("."))
9
10 # -- Project information --
11
12 project = "HTTPCore"
13 copyright = "2021, Encode"
14 author = "Encode"
15
16 # -- General configuration --
17
18 extensions = [
19 "myst_parser",
20 "sphinx.ext.autodoc",
21 "sphinx.ext.viewcode",
22 "sphinx.ext.napoleon",
23 ]
24
25 myst_enable_extensions = [
26 "colon_fence",
27 ]
28
29 # Preserve :members: order.
30 autodoc_member_order = "bysource"
31
32 # Show type hints in descriptions, rather than signatures.
33 autodoc_typehints = "description"
34
35 # -- HTML configuration --
36
37 html_theme = "furo"
38
39 # -- App setup --
40
41
42 def _viewcode_follow_imported(app, modname, attribute):
43 # We set `__module__ = "httpcore"` on all public attributes for prettier
44 # repr(), so viewcode needs a little help to find the original source modules.
45
46 if modname != "httpcore":
47 return None
48
49 import httpcore
50
51 try:
52 # Set in httpcore/__init__.py
53 return getattr(httpcore, attribute).__source_module__
54 except AttributeError:
55 return None
56
57
58 def setup(app):
59 app.connect("viewcode-follow-imported", _viewcode_follow_imported)
0 # Connection Pools
1
2 While the top-level API provides convenience functions for working with `httpcore`,
3 in practice you'll almost always want to take advantage of the connection pooling
4 functionality that it provides.
5
6 To do so, instantiate a pool instance, and use it to send requests:
7
8 ```python
9 import httpcore
10
11 http = httpcore.ConnectionPool()
12 r = http.request("GET", "https://www.example.com/")
13
14 print(r)
15 # <Response [200]>
16 ```
17
18 Connection pools support the same `.request()` and `.stream()` APIs [as described in the Quickstart](../quickstart).
19
20 We can observe the benefits of connection pooling with a simple script like so:
21
22 ```python
23 import httpcore
24 import time
25
26
27 http = httpcore.ConnectionPool()
28 for counter in range(5):
29 started = time.time()
30 response = http.request("GET", "https://www.example.com/")
31 complete = time.time()
32 print(response, "in %.3f seconds" % (complete - started))
33 ```
34
35 The output *should* demonstrate the initial request as being substantially slower than the subsequent requests:
36
37 ```
38 <Response [200]> in 0.529 seconds
39 <Response [200]> in 0.096 seconds
40 <Response [200]> in 0.097 seconds
41 <Response [200]> in 0.095 seconds
42 <Response [200]> in 0.098 seconds
43 ```
44
45 This is to be expected. Once we've established a connection to `"www.example.com"` we're able to reuse it for following requests.
46
47 ## Configuration
48
49 The connection pool instance is also the main point of configuration. Let's take a look at the various options that it provides:
50
51 ### SSL configuration
52
53 * `ssl_context`: An SSL context to use for verifying connections.
54 If not specified, the default `httpcore.default_ssl_context()`
55 will be used.
56
57 ### Pooling configuration
58
59 * `max_connections`: The maximum number of concurrent HTTP connections that the pool
60 should allow. Any attempt to send a request on a pool that would
61 exceed this amount will block until a connection is available.
62 * `max_keepalive_connections`: The maximum number of idle HTTP connections that will
63 be maintained in the pool.
64 * `keepalive_expiry`: The duration in seconds that an idle HTTP connection may be
65 maintained for before being expired from the pool.
66
67 ### HTTP version support
68
69 * `http1`: A boolean indicating if HTTP/1.1 requests should be supported by the connection
70 pool. Defaults to `True`.
71 * `http2`: A boolean indicating if HTTP/2 requests should be supported by the connection
72 pool. Defaults to `False`.
73
74 ### Other options
75
76 * `retries`: The maximum number of retries when trying to establish a connection.
77 * `local_address`: Local address to connect from. Can also be used to connect using
78 a particular address family. Using `local_address="0.0.0.0"` will
79 connect using an `AF_INET` address (IPv4), while using `local_address="::"`
80 will connect using an `AF_INET6` address (IPv6).
81 * `uds`: Path to a Unix Domain Socket to use instead of TCP sockets.
82 * `network_backend`: A backend instance to use for handling network I/O.
83
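As a rough sketch, several of these options can be combined when instantiating the pool. The values below are purely illustrative, and `http2=True` assumes the optional `httpcore[http2]` dependencies are installed:

```python
import httpcore

http = httpcore.ConnectionPool(
    max_connections=100,           # Allow up to 100 concurrent connections.
    max_keepalive_connections=20,  # Keep at most 20 idle connections around.
    keepalive_expiry=5.0,          # Expire idle connections after 5 seconds.
    http2=True,                    # Enable HTTP/2 support.
    retries=3,                     # Retry establishing a connection up to 3 times.
)
```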
84 ## Pool lifespans
85
86 Because connection pools hold onto network resources, careful developers may want to ensure that instances are properly closed once they are no longer required.
87
88 Working with a single global instance isn't a bad idea for many use cases, since the connection pool will automatically be closed when the `__del__` method is called on it:
89
90 ```python
91 # This is perfectly fine for most purposes.
92 # The connection pool will automatically be closed when it is garbage collected,
93 # or when the Python interpreter exits.
94 http = httpcore.ConnectionPool()
95 ```
96
97 However, to be more explicit around the resource usage, we can use the connection pool within a context manager:
98
99 ```python
100 with httpcore.ConnectionPool() as http:
101 ...
102 ```
103
104 Or else close the pool explicitly:
105
106 ```python
107 http = httpcore.ConnectionPool()
108 try:
109 ...
110 finally:
111 http.close()
112 ```
113
114 ## Thread and task safety
115
116 Connection pools are designed to be thread-safe. Similarly, when using `httpcore` in an async context, connection pools are task-safe.
117
118 This means that you can have a single connection pool instance shared by multiple threads.
119
120 ---
121
122 # Reference
123
124 ## `httpcore.ConnectionPool`
125
126 ::: httpcore.ConnectionPool
127 handler: python
128 rendering:
129 show_source: False
0 # Connections
1
2 TODO
3
4 ---
5
6 # Reference
7
8 ## `httpcore.HTTPConnection`
9
10 ::: httpcore.HTTPConnection
11 handler: python
12 rendering:
13 show_source: False
14
15 ## `httpcore.HTTP11Connection`
16
17 ::: httpcore.HTTP11Connection
18 handler: python
19 rendering:
20 show_source: False
21
22 ## `httpcore.HTTP2Connection`
23
24 ::: httpcore.HTTP2Connection
25 handler: python
26 rendering:
27 show_source: False
docs/contributing.md (0 additions, 208 deletions)
0 # Contributing
1
2 Thanks for considering contributing to HTTP Core!
3
4 We welcome contributors to:
5
6 - Try [HTTPX](https://www.python-httpx.org), as it is HTTP Core's main entry point,
7 and [report bugs/issues you find](https://github.com/encode/httpx/issues/new)
8 - Help triage [issues](https://github.com/encode/httpcore/issues) and investigate
9 root causes of bugs
10 - [Review Pull Requests of others](https://github.com/encode/httpcore/pulls)
11 - Review, clarify and write documentation
12 - Participate in discussions
13
14 ## Reporting Bugs or Other Issues
15
16 HTTP Core is a fairly specialized library and its main purpose is to provide a
17 solid base for [HTTPX](https://www.python-httpx.org). HTTPX should be considered
18 the main entry point to HTTP Core and as such we encourage users to test and raise
19 issues in [HTTPX's issue tracker](https://github.com/encode/httpx/issues/new)
20 where maintainers and contributors can triage and move to HTTP Core if appropriate.
21
22 If you are convinced that the cause of the issue is on HTTP Core you're more than
23 welcome to [open an issue](https://github.com/encode/httpcore/issues/new).
24
25 Please attach as much detail as possible and, in case of a
26 bug report, provide information like:
27
28 - OS platform or Docker image
29 - Python version
30 - Installed dependencies and versions (`python -m pip freeze`)
31 - Code snippet to reproduce the issue
32 - Error traceback and output
33
34 It is quite helpful to increase the logging level of HTTP Core and include the
35 output of your program. To do so set the `HTTPCORE_LOG_LEVEL` or `HTTPX_LOG_LEVEL`
36 environment variables to `TRACE`, for example:
37
38 ```console
39 $ HTTPCORE_LOG_LEVEL=TRACE python test_script.py
40 TRACE [2020-06-06 09:55:10] httpcore._async.connection_pool - get_connection_from_pool=(b'https', b'localhost', 5000)
41 TRACE [2020-06-06 09:55:10] httpcore._async.connection_pool - created connection=<httpcore._async.connection.AsyncHTTPConnection object at 0x1110fe9d0>
42 ...
43 ```
44
45 The output will be quite long but it will help dramatically in diagnosing the problem.
46
47 For more examples please refer to the
48 [environment variables documentation in HTTPX](https://www.python-httpx.org/environment_variables/#httpx_log_level).
49
50 ## Development
51
52 To start developing HTTP Core create a **fork** of the
53 [repository](https://github.com/encode/httpcore) on GitHub.
54
55 Then clone your fork with the following command replacing `YOUR-USERNAME` with
56 your GitHub username:
57
58 ```shell
59 $ git clone https://github.com/YOUR-USERNAME/httpcore
60 ```
61
62 You can now install the project and its dependencies using:
63
64 ```shell
65 $ cd httpcore
66 $ scripts/install
67 ```
68
69 ## Unasync
70
71 HTTP Core provides synchronous and asynchronous interfaces. As you can imagine,
72 keeping two almost identical versions of code in sync can be quite time-consuming.
73 To work around this problem HTTP Core uses a technique called _unasync_, where
74 development is focused on the asynchronous version of the code and a script
75 generates the synchronous version from it (see the sketch after the list below).
76
77 As such developers should:
78
79 - Only make modifications in the asynchronous and shared portions of the code.
80 In practice this roughly means avoiding the `httpcore/_sync` directory.
81 - Write tests _only under `async_tests`_, synchronous tests are also generated
82 as part of the unasync process.
83 - Run `scripts/unasync` to generate the synchronous versions. Note the script
84 is run as part of other scripts as well, so you don't usually need to run this
85 yourself.
86 - Run the entire test suite as described below.
87
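To make the idea concrete, here's a purely illustrative sketch of the kind of mechanical rewrite that unasync performs. The function is hypothetical, not taken from the codebase:

```python
# Hypothetical async source, as it might appear in the asynchronous portion of the code:
async def read_body(stream):
    return b"".join([chunk async for chunk in stream])


# ...and the synchronous equivalent that the unasync script would generate from it:
def read_body(stream):
    return b"".join([chunk for chunk in stream])
```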
88 ## Testing and Linting
89
90 We use custom shell scripts to automate testing, linting,
91 and documentation building workflow.
92
93 To run the tests, use:
94
95 ```shell
96 $ scripts/test
97 ```
98
99 :::{warning}
100 The test suite spawns testing servers on ports **8000** and **8001**.
101 Make sure these are not in use, so the tests can run properly.
102 :::
103
104 You can run a single test script like this:
105
106 ```shell
107 $ scripts/test -- tests/async_tests/test_interfaces.py
108 ```
109
110 To run the code auto-formatting:
111
112 ```shell
113 $ scripts/lint
114 ```
115
116 Lastly, to run code checks separately (they are also run as part of `scripts/test`), run:
117
118 ```shell
119 $ scripts/check
120 ```
121
122 ## Documenting
123
124 Documentation pages are located under the `docs/` folder.
125
126 To run the documentation site locally (useful for previewing changes), use:
127
128 ```shell
129 $ scripts/docs
130 ```
131
132 ## Resolving Build / CI Failures
133
134 Once you've submitted your pull request, the test suite will automatically run, and the results will show up in GitHub.
135 If the test suite fails, you'll want to click through to the "Details" link, and try to identify why the test suite failed.
136
137 <p align="center" style="margin: 0 0 10px">
138 <img src="https://raw.githubusercontent.com/encode/httpx/master/docs/img/gh-actions-fail.png" alt='Failing PR commit status'>
139 </p>
140
141 Here are some common ways the test suite can fail:
142
143 ### Check Job Failed
144
145 <p align="center" style="margin: 0 0 10px">
146 <img src="https://raw.githubusercontent.com/encode/httpx/master/docs/img/gh-actions-fail-check.png" alt='Failing GitHub action lint job'>
147 </p>
148
149 This job failing means there is either a code formatting issue or type-annotation issue.
150 You can look at the job output to figure out why it's failed or within a shell run:
151
152 ```shell
153 $ scripts/check
154 ```
155
156 It may be worth running `$ scripts/lint` to attempt auto-formatting the code,
157 and if that job succeeds, committing the changes.
158
159 ### Docs Job Failed
160
161 This job failing means the documentation failed to build. This can happen for
162 a variety of reasons like invalid markdown or missing configuration within `mkdocs.yml`.
163
164 ### Python 3.X Job Failed
165
166 <p align="center" style="margin: 0 0 10px">
167 <img src="https://raw.githubusercontent.com/encode/httpx/master/docs/img/gh-actions-fail-test.png" alt='Failing GitHub action test job'>
168 </p>
169
170 This job failing means the unit tests failed or not all code paths are covered by unit tests.
171
172 If tests are failing you will see this message under the coverage report:
173
174 `=== 1 failed, 435 passed, 1 skipped, 1 xfailed in 11.09s ===`
175
176 If tests succeed but coverage is lower than our current threshold, you will see this message under the coverage report:
177
178 `FAIL Required test coverage of 100% not reached. Total coverage: 99.00%`
179
180 ## Releasing
181
182 *This section is targeted at HTTP Core maintainers.*
183
184 Before releasing a new version, create a pull request that includes:
185
186 - **An update to the changelog**:
187 - We follow the format from [keepachangelog](https://keepachangelog.com/en/1.0.0/).
188 - [Compare](https://github.com/encode/httpcore/compare/) `master` with the tag of the latest release, and list all entries that are of interest to our users:
189 - Things that **must** go in the changelog: added, changed, deprecated or removed features, and bug fixes.
190 - Things that **should not** go in the changelog: changes to documentation, tests or tooling.
191 - Try sorting entries in descending order of impact / importance.
192 - Keep it concise and to-the-point. 🎯
193 - **A version bump**: see `__version__.py`.
194
195 For an example, see [#99](https://github.com/encode/httpcore/pull/99).
196
197 Once the release PR is merged, create a
198 [new release](https://github.com/encode/httpcore/releases/new) including:
199
200 - Tag version like `0.9.3`.
201 - Release title `Version 0.9.3`
202 - Description copied from the changelog.
203
204 Once created this release will be automatically uploaded to PyPI.
205
206 If something goes wrong with the PyPI job the release can be published using the
207 `scripts/publish` script.
0 # Exceptions
1
2 The following exceptions may be raised when sending a request; a short catch example follows the list:
3
4 * `httpcore.TimeoutException`
5 * `httpcore.PoolTimeout`
6 * `httpcore.ConnectTimeout`
7 * `httpcore.ReadTimeout`
8 * `httpcore.WriteTimeout`
9 * `httpcore.NetworkError`
10 * `httpcore.ConnectError`
11 * `httpcore.ReadError`
12 * `httpcore.WriteError`
13 * `httpcore.ProtocolError`
14 * `httpcore.RemoteProtocolError`
15 * `httpcore.LocalProtocolError`
16 * `httpcore.ProxyError`
17 * `httpcore.UnsupportedProtocol`
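As a minimal sketch, these can be caught in the usual way; which exception is raised depends on where the failure occurs:

```python
import httpcore

try:
    response = httpcore.request("GET", "https://www.example.com/")
except httpcore.ConnectTimeout:
    print("Timed out while establishing a connection.")
except httpcore.NetworkError:
    print("A network failure occurred while sending the request.")
```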
0 # Extensions
1
2 The request/response API used by `httpcore` is kept deliberately simple and explicit.
3
4 The `Request` and `Response` models are pretty slim wrappers around this core API:
5
6 ```
7 # Pseudo-code expressing the essentials of the request/response model.
8 (
9 status_code: int,
10 headers: List[Tuple(bytes, bytes)],
11 stream: Iterable[bytes]
12 ) = handle_request(
13 method: bytes,
14 url: URL,
15 headers: List[Tuple(bytes, bytes)],
16 stream: Iterable[bytes]
17 )
18 ```
19
20 This is everything that's needed in order to represent an HTTP exchange.
21
22 Well... almost.
23
24 There is a maxim in Computer Science that *"All non-trivial abstractions, to some degree, are leaky"*. When an abstraction is leaky, it's important that it at least leaks only in well-defined places.
25
26 In order to handle cases that don't otherwise fit inside this core abstraction, `httpcore` requests and responses have 'extensions'. These are a dictionary of optional additional information.
27
28 Let's expand on our request/response abstraction...
29
30 ```
31 # Pseudo-code expressing the essentials of the request/response model,
32 # plus extensions allowing for additional API that does not fit into
33 # this abstraction.
34 (
35 status_code: int,
36 headers: List[Tuple(bytes, bytes)],
37 stream: Iterable[bytes],
38 extensions: dict
39 ) = handle_request(
40 method: bytes,
41 url: URL,
42 headers: List[Tuple(bytes, bytes)],
43 stream: Iterable[bytes],
44 extensions: dict
45 )
46 ```
47
48 Several extensions are supported both on the request:
49
50 ```python
51 r = httpcore.request(
52 "GET",
53 "https://www.example.com",
54 extensions={"timeout": {"connect": 5.0}}
55 )
56 ```
57
58 And on the response:
59
60 ```python
61 r = httpcore.request("GET", "https://www.example.com")
62
63 print(r.extensions["http_version"])
64 # When using HTTP/1.1 on the client side, the server HTTP response
65 # could feasibly be one of b"HTTP/0.9", b"HTTP/1.0", or b"HTTP/1.1".
66 ```
67
68 ## Request Extensions
69
70 ### `"timeout"`
71
72 A dictionary of `str: Optional[float]` timeout values.
73
74 May include values for `'connect'`, `'read'`, `'write'`, or `'pool'`.
75
76 For example:
77
78 ```python
79 # Timeout if a connection takes more than 5 seconds to be established, or if
80 # we are blocked waiting on the connection pool for more than 10 seconds.
81 r = httpcore.request(
82 "GET",
83 "https://www.example.com",
84 extensions={"timeout": {"connect": 5.0, "pool": 10.0}}
85 )
86 ```
87
88 ### `"trace"`
89
90 The trace extension allows a callback handler to be installed to monitor the internal
91 flow of events within `httpcore`. The simplest way to explain this is with an example:
92
93 ```python
94 import httpcore
95
96 def log(event_name, info):
97 print(event_name, info)
98
99 r = httpcore.request("GET", "https://www.example.com/", extensions={"trace": log})
100 # connection.connect_tcp.started {'host': 'www.example.com', 'port': 443, 'local_address': None, 'timeout': None}
101 # connection.connect_tcp.complete {'return_value': <httpcore.backends.sync.SyncStream object at 0x1093f94d0>}
102 # connection.start_tls.started {'ssl_context': <ssl.SSLContext object at 0x1093ee750>, 'server_hostname': b'www.example.com', 'timeout': None}
103 # connection.start_tls.complete {'return_value': <httpcore.backends.sync.SyncStream object at 0x1093f9450>}
104 # http11.send_request_headers.started {'request': <Request [b'GET']>}
105 # http11.send_request_headers.complete {'return_value': None}
106 # http11.send_request_body.started {'request': <Request [b'GET']>}
107 # http11.send_request_body.complete {'return_value': None}
108 # http11.receive_response_headers.started {'request': <Request [b'GET']>}
109 # http11.receive_response_headers.complete {'return_value': (b'HTTP/1.1', 200, b'OK', [(b'Age', b'553715'), (b'Cache-Control', b'max-age=604800'), (b'Content-Type', b'text/html; charset=UTF-8'), (b'Date', b'Thu, 21 Oct 2021 17:08:42 GMT'), (b'Etag', b'"3147526947+ident"'), (b'Expires', b'Thu, 28 Oct 2021 17:08:42 GMT'), (b'Last-Modified', b'Thu, 17 Oct 2019 07:18:26 GMT'), (b'Server', b'ECS (nyb/1DCD)'), (b'Vary', b'Accept-Encoding'), (b'X-Cache', b'HIT'), (b'Content-Length', b'1256')])}
110 # http11.receive_response_body.started {'request': <Request [b'GET']>}
111 # http11.receive_response_body.complete {'return_value': None}
112 # http11.response_closed.started {}
113 # http11.response_closed.complete {'return_value': None}
114 ```
115
116 The `event_name` and `info` arguments here will be one of the following:
117
118 * `{event_type}.{event_name}.started`, `<dictionary of keyword arguments>`
119 * `{event_type}.{event_name}.complete`, `{"return_value": <...>}`
120 * `{event_type}.{event_name}.failed`, `{"exception": <...>}`
121
122 Note that when using the async variant of `httpcore` the handler function passed to `"trace"` must be an `async def ...` function.
123
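For instance, a minimal async sketch of the same logging handler might look like this:

```python
import asyncio
import httpcore


async def log(event_name, info):
    # The handler receives the same (event_name, info) arguments as in the
    # sync case, but must be a coroutine function under the async API.
    print(event_name, info)


async def main():
    async with httpcore.AsyncConnectionPool() as http:
        await http.request(
            "GET", "https://www.example.com/", extensions={"trace": log}
        )


asyncio.run(main())
```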
124 The following event types are currently exposed...
125
126 **Establishing the connection**
127
128 * `"connection.connect_tcp"`
129 * `"connection.connect_unix_socket"`
130 * `"connection.start_tls"`
131
132 **HTTP/1.1 events**
133
134 * `"http11.send_request_headers"`
135 * `"http11.send_request_body"`
136 * `"http11.receive_response"`
137 * `"http11.receive_response_body"`
138 * `"http11.response_closed"`
139
140 **HTTP/2 events**
141
142 * `"http2.send_connection_init"`
143 * `"http2.send_request_headers"`
144 * `"http2.send_request_body"`
145 * `"http2.receive_response_headers"`
146 * `"http2.receive_response_body"`
147 * `"http2.response_closed"`
148
149 ## Response Extensions
150
151 ### `"http_version"`
152
153 The HTTP version, as bytes. For example, `b"HTTP/1.1"`.
154
155 When using HTTP/1.1 the response line includes an explicit version, and the value of this key could feasibly be one of `b"HTTP/0.9"`, `b"HTTP/1.0"`, or `b"HTTP/1.1"`.
156
157 When using HTTP/2 there is no further response versioning included in the protocol, and the value of this key will always be `b"HTTP/2"`.
158
159 ### `"reason_phrase"`
160
161 The reason-phrase of the HTTP response, as bytes. For example `b"OK"`. Some servers may include a custom reason phrase, although this is not recommended.
162
163 HTTP/2 onwards does not include a reason phrase on the wire.
164
165 When no key is included, a default based on the status code may be used.
166
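For example, assuming a server that sends a conventional reason phrase:

```python
import httpcore

response = httpcore.request("GET", "https://www.example.com/")
print(response.extensions["reason_phrase"])
# b'OK'
```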
167 ### `"network_stream"`
168
169 The `"network_stream"` extension allows developers to handle HTTP `CONNECT` and `Upgrade` requests, by providing an API that steps outside the standard request/response model, and can directly read or write to the network.
170
171 The interface provided by the network stream:
172
173 * `read(max_bytes, timeout = None) -> bytes`
174 * `write(buffer, timeout = None)`
175 * `close()`
176 * `start_tls(ssl_context, server_hostname = None, timeout = None) -> NetworkStream`
177 * `get_extra_info(info) -> Any`
178
179 This API can be used as the foundation for working with HTTP proxies, WebSocket upgrades, and other advanced use-cases.
180
181 An example to demonstrate:
182
183 ```python
184 # Formulate a CONNECT request...
185 #
186 # This will establish a connection to 127.0.0.1:8080, and then send the following...
187 #
188 # CONNECT http://www.example.com HTTP/1.1
189 # Host: 127.0.0.1:8080
190 url = httpcore.URL(b"http", b"127.0.0.1", 8080, b"http://www.example.com")
191 with httpcore.stream("CONNECT", url) as response:
192 network_stream = response.extensions["network_stream"]
193
194 # Upgrade to an SSL stream...
195 network_stream = network_stream.start_tls(
196 ssl_context=httpcore.default_ssl_context(),
197 server_hostname=b"www.example.com",
198 )
199
200 # Manually send an HTTP request over the network stream, and read the response...
201 #
202 # For a more complete example see the httpcore `TunnelHTTPConnection` implementation.
203 network_stream.write(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
204 data = network_stream.read()
205 print(data)
206 ```
207
208 The network stream abstraction also allows access to various low-level information that may be exposed by the underlying socket:
209
210 ```python
211 response = httpcore.request("GET", "https://www.example.com")
212 network_stream = response.extensions["network_stream"]
213
214 client_addr = network_stream.get_extra_info("client_addr")
215 server_addr = network_stream.get_extra_info("server_addr")
216 print("Client address", client_addr)
217 print("Server address", server_addr)
218 ```
219
220 The socket SSL information is also available through this interface, although you need to ensure that the underlying connection is still open, in order to access it...
221
222 ```python
223 with httpcore.stream("GET", "https://www.example.com") as response:
224 network_stream = response.extensions["network_stream"]
225
226 ssl_object = network_stream.get_extra_info("ssl_object")
227 print("TLS version", ssl_object.version())
228 ```
0 # HTTP/2
1
2 HTTP/2 is a major new iteration of the HTTP protocol, that provides a more efficient transport, with potential performance benefits. HTTP/2 does not change the core semantics of the request or response, but alters the way that data is sent to and from the server.
3
4 Rather than the text format that HTTP/1.1 uses, HTTP/2 is a binary format. The binary format provides full request and response multiplexing, and efficient compression of HTTP headers. The stream multiplexing means that where HTTP/1.1 requires one TCP stream for each concurrent request, HTTP/2 allows a single TCP stream to handle multiple concurrent requests.
5
6 HTTP/2 also provides support for functionality such as response prioritization, and server push.
7
8 For a comprehensive guide to HTTP/2 you may want to check out "[HTTP2 Explained](https://http2-explained.haxx.se)".
9
10 ## Enabling HTTP/2
11
12 When using the `httpcore` client, HTTP/2 support is not enabled by default, because HTTP/1.1 is a mature, battle-hardened transport layer, and our HTTP/1.1 implementation may be considered the more robust option at this point in time. It is possible that a future version of `httpcore` may enable HTTP/2 support by default.
13
14 If you're issuing highly concurrent requests you might want to consider trying out our HTTP/2 support. You can do so by first making sure to install the optional HTTP/2 dependencies...
15
16 ```shell
17 $ pip install httpcore[http2]
18 ```
19
20 And then instantiating a connection pool with HTTP/2 support enabled:
21
22 ```python
23 import httpcore
24
25 pool = httpcore.ConnectionPool(http2=True)
26 ```
27
28 We can take a look at the difference in behaviour by issuing several outgoing requests in parallel.
29
30 Start out by using a standard HTTP/1.1 connection pool:
31
32 ```python
33 import httpcore
34 import concurrent.futures
35 import time
36
37
38 def download(http, year):
39 http.request("GET", f"https://en.wikipedia.org/wiki/{year}")
40
41
42 def main():
43 with httpcore.ConnectionPool() as http:
44 started = time.time()
45 with concurrent.futures.ThreadPoolExecutor(max_workers=10) as threads:
46 for year in range(2000, 2020):
47 threads.submit(download, http, year)
48 complete = time.time()
49
50 for connection in http.connections:
51 print(connection)
52 print("Complete in %.3f seconds" % (complete - started))
53
54
55 main()
56 ```
57
58 If you run this with an HTTP/1.1 connection pool, you ought to see output similar to the following:
59
60 ```python
61 <HTTPConnection ['https://en.wikipedia.org:443', HTTP/1.1, IDLE, Request Count: 2]>,
62 <HTTPConnection ['https://en.wikipedia.org:443', HTTP/1.1, IDLE, Request Count: 3]>,
63 <HTTPConnection ['https://en.wikipedia.org:443', HTTP/1.1, IDLE, Request Count: 6]>,
64 <HTTPConnection ['https://en.wikipedia.org:443', HTTP/1.1, IDLE, Request Count: 5]>,
65 <HTTPConnection ['https://en.wikipedia.org:443', HTTP/1.1, IDLE, Request Count: 1]>,
66 <HTTPConnection ['https://en.wikipedia.org:443', HTTP/1.1, IDLE, Request Count: 1]>,
67 <HTTPConnection ['https://en.wikipedia.org:443', HTTP/1.1, IDLE, Request Count: 1]>,
68 <HTTPConnection ['https://en.wikipedia.org:443', HTTP/1.1, IDLE, Request Count: 1]>
69 Complete in 0.586 seconds
70 ```
71
72 We can see that the connection pool required a number of connections in order to handle the parallel requests.
73
74 If we now upgrade our connection pool to support HTTP/2:
75
76 ```python
77 with httpcore.ConnectionPool(http2=True) as http:
78 ...
79 ```
80
81 And run the same script again, we should end up with something like this:
82
83 ```python
84 <HTTPConnection ['https://en.wikipedia.org:443', HTTP/2, IDLE, Request Count: 20]>
85 Complete in 0.573 seconds
86 ```
87
88 All of our requests have been handled over a single connection.
89
90 Switching to HTTP/2 should not *necessarily* be considered an "upgrade". It is more complex, and requires more computational power, and so particularly in an interpreted language like Python it *could* be slower in some instances. Moreover, utilising multiple connections may end up connecting to multiple hosts, and could sometimes appear faster to the client, at the cost of requiring more server resources. Enabling HTTP/2 is most likely to be beneficial if you are sending requests in high concurrency, and may often be better suited to an async context rather than multi-threading.
91
92 ## Inspecting the HTTP version
93
94 Enabling HTTP/2 support on the client does not *necessarily* mean that your requests and responses will be transported over HTTP/2, since both the client *and* the server need to support HTTP/2. If you connect to a server that only supports HTTP/1.1 the client will use a standard HTTP/1.1 connection instead.
95
96 You can determine which version of the HTTP protocol was used by examining the `"http_version"` response extension.
97
98 ```python
99 import httpcore
100
101 pool = httpcore.ConnectionPool(http2=True)
102 response = pool.request("GET", "https://www.example.com/")
103
104 # Should be one of b"HTTP/2", b"HTTP/1.1", b"HTTP/1.0", or b"HTTP/0.9".
105 print(response.extensions["http_version"])
106 ```
107
108 See [the extensions documentation](extensions.md) for more details.
109
110 ## HTTP/2 negotiation
111
112 Robust servers need to support both HTTP/2 and HTTP/1.1 capable clients, and so need some way to "negotiate" with the client which protocol version will be used.
113
114 ### HTTP/2 over HTTPS
115
116 Generally the method used is for the server to advertise if it has HTTP/2 support as part of the SSL connection handshake. This is known as ALPN - "Application-Layer Protocol Negotiation".
117
118 Most browsers only provide HTTP/2 support over HTTPS connections, and this is also the default behaviour that `httpcore` provides. If you enable HTTP/2 support you should still expect to see HTTP/1.1 connections for any `http://` URLs.
119
120 ### HTTP/2 over HTTP
121
122 Servers can optionally also support HTTP/2 over HTTP by supporting the `Upgrade: h2c` header.
123
124 This mechanism is not supported by `httpcore`. It requires an additional round-trip between the client and server, and also requires any request body to be sent twice.
125
126 ### Prior Knowledge
127
128 If you know in advance that the server you are communicating with will support HTTP/2, then you can enforce that the client uses HTTP/2, without requiring either ALPN support or an HTTP `Upgrade: h2c` header.
129
130 This is managed by disabling HTTP/1.1 support on the connection pool:
131
132 ```python
133 pool = httpcore.ConnectionPool(http1=False, http2=True)
134 ```
135
136 ## Request & response headers
137
138 Because HTTP/2 frames the requests and responses somewhat differently to HTTP/1.1, there is a difference in some of the headers that are used.
139
140 In order for the `httpcore` library to support both HTTP/1.1 and HTTP/2 transparently, the HTTP/1.1 style is always used throughout the API. Any differences in header styles are only mapped onto HTTP/2 at the internal network layer.
141
142 ## Request headers
143
144 The following pseudo-headers are used by HTTP/2 in the request:
145
146 * `:method` - The request method.
147 * `:path` - Taken from the URL of the request.
148 * `:authority` - Equivalent to the `Host` header in HTTP/1.1. In `httpcore` this is represented using the request `Host` header, which is automatically populated from the request URL if no `Host` header is explicitly included.
149 * `:scheme` - Taken from the URL of the request.
150
151 These pseudo-headers are included in `httpcore` as part of the `request.method` and `request.url` attributes, and through the `request.headers["Host"]` header. *They are not exposed directly by their pseudo-header names.*
152
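As a quick sketch of this mapping, the `Host` header (and hence the `:authority` pseudo-header) is populated automatically from the request URL:

```python
import httpcore

request = httpcore.Request("GET", "https://www.example.com/")
print(request.headers)
# [(b'Host', b'www.example.com')]
```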
153 The one other difference to be aware of is the `Transfer-Encoding: chunked` header.
154
155 In HTTP/2 this header is never used, since streaming data is framed using a different mechanism.
156
157 In `httpcore` the `Transfer-Encoding: chunked` header is always used to represent the presence of a streaming body on the request, and is automatically populated if required. However the header is only sent if the underlying connection ends up being HTTP/1.1, and is omitted if the underlying connection ends up being HTTP/2.
158
159 ## Response headers
160
161 The following pseudo-header is used by HTTP/2 in the response:
162
163 * `:status` - The response status code.
164
165 In `httpcore` this *is represented by the `response.status` attribute, rather than being exposed as a pseudo-header*.
0 :::{include} ../README.md
1 :::
0 # HTTPCore
21
3 <!-- Table of Content entries, shown in left sidebar. -->
2 [![Test Suite](https://github.com/encode/httpcore/workflows/Test%20Suite/badge.svg)](https://github.com/encode/httpcore/actions)
3 [![Package version](https://badge.fury.io/py/httpcore.svg)](https://pypi.org/project/httpcore/)
44
5 :::{toctree}
6 :hidden:
7 :caption: Usage
5 > *Do one thing, and do it well.*
86
9 api
10 :::
7 The HTTP Core package provides a minimal low-level HTTP client, which does
8 one thing only. Sending HTTP requests.
119
12 :::{toctree}
13 :hidden:
14 :caption: Development
10 It does not provide any high level model abstractions over the API,
11 does not handle redirects, multipart uploads, building authentication headers,
12 transparent HTTP caching, URL parsing, session cookie handling,
13 content or charset decoding, handling JSON, environment based configuration
14 defaults, or any of that Jazz.
1515
16 contributing
17 Changelog <https://github.com/encode/httpcore/blob/master/CHANGELOG.md>
18 License <https://github.com/encode/httpcore/blob/master/LICENSE.md>
19 Source Code <https://github.com/encode/httpcore>
20 :::
16 Some things HTTP Core does do:
17
18 * Sending HTTP requests.
19 * Thread-safe / task-safe connection pooling.
20 * HTTP(S) proxy support.
21 * Supports HTTP/1.1 and HTTP/2.
22 * Provides both sync and async interfaces.
23 * Async backend support for `asyncio` and `trio`.
24
25 ## Installation
26
27 For HTTP/1.1 only support, install with...
28
29 ```shell
30 $ pip install httpcore
31 ```
32
33 For HTTP/1.1 and HTTP/2 support, install with...
34
35 ```shell
36 $ pip install httpcore[http2]
37 ```
38
39 ## Example
40
41 Let's check we're able to send HTTP requests:
42
43 ```python
44 import httpcore
45
46 response = httpcore.request("GET", "https://www.example.com/")
47
48 print(response)
49 # <Response [200]>
50 print(response.status)
51 # 200
52 print(response.headers)
53 # [(b'Accept-Ranges', b'bytes'), (b'Age', b'557328'), (b'Cache-Control', b'max-age=604800'), ...]
54 print(response.content)
55 # b'<!doctype html>\n<html>\n<head>\n<title>Example Domain</title>\n\n<meta charset="utf-8"/>\n ...'
56 ```
57
58 Ready to get going?
59
60 Head over to [the quickstart documentation](quickstart.md).
0 # Network Backends
1
2 TODO
0 # Proxies
1
2 The `httpcore` package currently provides support for HTTP proxies, using either "HTTP Forwarding" or "HTTP Tunnelling". Forwarding is a proxy mechanism for sending requests to `http` URLs via an intermediate proxy. Tunnelling is a proxy mechanism for sending requests to `https` URLs via an intermediate proxy.
3
4 Sending requests via a proxy is very similar to sending requests using a standard connection pool:
5
6 ```python
7 import httpcore
8
9 proxy = httpcore.HTTPProxy(proxy_url="http://127.0.0.1:8080/")
10 r = proxy.request("GET", "https://www.example.com/")
11
12 print(r)
13 # <Response [200]>
14 ```
15
16 You can test the `httpcore` proxy support, using the Python [`proxy.py`](https://pypi.org/project/proxy.py/) tool:
17
18 ```shell
19 $ pip install proxy.py
20 $ proxy --hostname 127.0.0.1 --port 8080
21 ```
22
23 Requests will automatically use either forwarding or tunnelling, depending on if the scheme is `http` or `https`.
24
25 ## Authentication
26
27 Proxy headers can be included in the initial configuration:
28
29 ```python
30 import httpcore
31 import base64
32
33 auth = b"Basic " + base64.b64encode(b"<username>:<password>")
34 proxy = httpcore.HTTPProxy(
35 proxy_url="http://127.0.0.1:8080/",
36 proxy_headers={"Proxy-Authorization": auth}
37 )
38 ```
39
40 ## HTTP Versions
41
42 Proxy support currently only allows for HTTP/1.1 connections to the proxy.
43
44 ---
45
46 # Reference
47
48 ## `httpcore.HTTPProxy`
49
50 ::: httpcore.HTTPProxy
51 handler: python
52 rendering:
53 show_source: False
0 # Quickstart
1
2 For convenience, the `httpcore` package provides a couple of top-level functions that you can use for sending HTTP requests. You probably don't want to integrate against these functions if you're writing a library that uses `httpcore`, but you might find them useful for testing `httpcore` from the command-line, or if you're writing a simple script that doesn't require any of the connection pooling or advanced configuration that `httpcore` offers.
3
4 ## Sending a request
5
6 We'll start off by sending a request...
7
8 ```python
9 import httpcore
10
11 response = httpcore.request("GET", "https://www.example.com/")
12
13 print(response)
14 # <Response [200]>
15 print(response.status)
16 # 200
17 print(response.headers)
18 # [(b'Accept-Ranges', b'bytes'), (b'Age', b'557328'), (b'Cache-Control', b'max-age=604800'), ...]
19 print(response.content)
20 # b'<!doctype html>\n<html>\n<head>\n<title>Example Domain</title>\n\n<meta charset="utf-8"/>\n ...'
21 ```
22
23 ## Request headers
24
25 Request headers may be included either in a dictionary style, or as a list of two-tuples.
26
27 ```python
28 import httpcore
29 import json
30
31 headers = {'User-Agent': 'httpcore'}
32 r = httpcore.request('GET', 'https://httpbin.org/headers', headers=headers)
33
34 print(json.loads(r.content))
35 # {
36 # 'headers': {
37 # 'Host': 'httpbin.org',
38 # 'User-Agent': 'httpcore',
39 # 'X-Amzn-Trace-Id': 'Root=1-616ff5de-5ea1b7e12766f1cf3b8e3a33'
40 # }
41 # }
42 ```
43
44 The keys and values may either be provided as strings or as bytes. Where strings are provided they may only contain characters within the ASCII range `chr(0)` - `chr(127)`. To include characters outside this range you must deal with any character encoding explicitly, and pass bytes as the header keys/values.
45
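For instance, here's a minimal sketch of passing an explicitly encoded non-ASCII header value; the header name is made up for illustration:

```python
import httpcore

# Encode the non-ASCII value explicitly, and pass bytes for the header pair.
headers = [(b"X-Greeting", "καλημέρα".encode("utf-8"))]
r = httpcore.request("GET", "https://httpbin.org/headers", headers=headers)
```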
46 The `Host` header will always be automatically included in any outgoing request, as it is strictly required to be present by the HTTP protocol.
47
48 *Note that the `X-Amzn-Trace-Id` header shown in the example above is not an outgoing request header, but has been added by a gateway server.*
49
50 ## Request body
51
52 A request body can be included either as bytes...
53
54 ```python
55 import httpcore
56 import json
57
58 r = httpcore.request('POST', 'https://httpbin.org/post', content=b'Hello, world')
59
60 print(json.loads(r.content))
61 # {
62 # 'args': {},
63 # 'data': 'Hello, world',
64 # 'files': {},
65 # 'form': {},
66 # 'headers': {
67 # 'Host': 'httpbin.org',
68 # 'Content-Length': '12',
69 # 'X-Amzn-Trace-Id': 'Root=1-61700258-00e338a124ca55854bf8435f'
70 # },
71 # 'json': None,
72 # 'origin': '68.41.35.196',
73 # 'url': 'https://httpbin.org/post'
74 # }
75 ```
76
77 Or as an iterable that returns bytes...
78
79 ```python
80 import httpcore
81 import json
82
83 with open("hello-world.txt", "rb") as input_file:
84 r = httpcore.request('POST', 'https://httpbin.org/post', content=input_file)
85
86 print(json.loads(r.content))
87 # {
88 # 'args': {},
89 # 'data': 'Hello, world',
90 # 'files': {},
91 # 'form': {},
92 # 'headers': {
93 # 'Host': 'httpbin.org',
94 # 'Transfer-Encoding': 'chunked',
95 # 'X-Amzn-Trace-Id': 'Root=1-61700258-00e338a124ca55854bf8435f'
96 # },
97 # 'json': None,
98 # 'origin': '68.41.35.196',
99 # 'url': 'https://httpbin.org/post'
100 # }
101 ```
102
103 When a request body is included, either a `Content-Length` header or a `Transfer-Encoding: chunked` header will be automatically included.
104
105 The `Content-Length` header is used when passing bytes, and indicates an HTTP request with a body of a pre-determined length.
106
107 The `Transfer-Encoding: chunked` header is the mechanism that HTTP/1.1 uses for sending HTTP request bodies without a pre-determined length.
108
109 ## Streaming responses
110
111 When using the `httpcore.request()` function, the response body will automatically be read to completion, and made available in the `response.content` attribute.
112
113 Sometimes you may be dealing with large responses and not want to read the entire response into memory. The `httpcore.stream()` function provides a mechanism for sending a request and dealing with a streaming response:
114
115 ```python
116 import httpcore
117
118 with httpcore.stream('GET', 'https://example.com') as response:
119 for chunk in response.iter_stream():
120 print(f"Downloaded: {chunk}")
121 ```
122
123 Here's a more complete example that demonstrates downloading a response:
124
125 ```python
126 import httpcore
127
128 with httpcore.stream('GET', 'https://speed.hetzner.de/100MB.bin') as response:
129 with open("download.bin", "wb") as output_file:
130 for chunk in response.iter_stream():
131 output_file.write(chunk)
132 ```
133
134 The `httpcore.stream()` API also allows you to *conditionally* read the response...
135
136 ```python
137 import httpcore
138
139 with httpcore.stream('GET', 'https://example.com') as response:
140 content_length = [int(v) for k, v in response.headers if k.lower() == b'content-length'][0]
141 if content_length > 100_000_000:
142 raise Exception("Response too large.")
143 response.read() # `response.content` is now available.
144 ```
145
146 ---
147
148 # Reference
149
150 ## `httpcore.request()`
151
152 ::: httpcore.request
153 handler: python
154 rendering:
155 show_source: False
156
157 ## `httpcore.stream()`
158
159 ::: httpcore.stream
160 handler: python
161 rendering:
162 show_source: False
0 # Requests, Responses, and URLs
1
2 TODO
3
4 ## Requests
5
6 Request instances in `httpcore` are deliberately simple, and only include the essential information required to represent an HTTP request.
7
8 Properties on the request are plain byte-wise representations.
9
10 ```python
11 >>> request = httpcore.Request("GET", "https://www.example.com/")
12 >>> request.method
13 b"GET"
14 >>> request.url
15 httpcore.URL(scheme=b"https", host=b"www.example.com", port=None, target=b"/")
16 >>> request.headers
17 [(b'Host', b'www.example.com')]
18 >>> request.stream
19 <httpcore.ByteStream [0 bytes]>
20 ```
21
22 The interface is liberal in the types that it accepts, but specific in the properties that it uses to represent them. For example, headers may be specified as a dictionary of strings, but internally are represented as a list of `(bytes, bytes)` tuples.
23
24 ```python
25 >>> headers = {"User-Agent": "custom"}
26 >>> request = httpcore.Request("GET", "https://www.example.com/", headers=headers)
27 >>> request.headers
28 [(b'Host', b'www.example.com'), (b"User-Agent", b"custom")]
```

30 ## Responses
31
32 ...
33
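As a sketch of how this looks, following the same byte-wise conventions as `httpcore.Request`, and assuming the `Response(status, headers=..., content=...)` signature documented in the reference below:

```python
>>> response = httpcore.Response(200, headers=[(b'Content-Type', b'text/plain')], content=b'Hello, world!')
>>> response.status
200
>>> response.headers
[(b'Content-Type', b'text/plain')]
>>> response.read()
b'Hello, world!'
```
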
34 ## URLs
35
36 ...
37
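As a sketch, matching the `httpcore.URL` representation shown in the request example above, and assuming the constructor accepts a URL string just as `httpcore.Request` does:

```python
>>> url = httpcore.URL("https://www.example.com/")
>>> url.scheme
b'https'
>>> url.host
b'www.example.com'
>>> url.port is None  # No explicit port; the scheme default is implied.
True
>>> url.target
b'/'
```
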
38 ---
39
40 # Reference
41
42 ## `httpcore.Request`
43
44 ::: httpcore.Request
45 handler: python
46 rendering:
47 show_source: False
48
49 ## `httpcore.Response`
50
51 ::: httpcore.Response
52 handler: python
53 rendering:
54 show_source: False
55
56 ## `httpcore.URL`
57
58 ::: httpcore.URL
59 handler: python
60 rendering:
61 show_source: False
0 # API Reference
1
2 * Quickstart
3 * `httpcore.request()`
4 * `httpcore.stream()`
5 * Requests, Responses, and URLs
6 * `httpcore.Request`
7 * `httpcore.Response`
8 * `httpcore.URL`
9 * Connection Pools
10 * `httpcore.ConnectionPool`
11 * Proxies
12 * `httpcore.HTTPProxy`
13 * Connections
14 * `httpcore.HTTPConnection`
15 * `httpcore.HTTP11Connection`
16 * `httpcore.HTTP2Connection`
17 * Async Support
18 * `httpcore.AsyncConnectionPool`
19 * `httpcore.AsyncHTTPProxy`
20 * `httpcore.AsyncHTTPConnection`
21 * `httpcore.AsyncHTTP11Connection`
22 * `httpcore.AsyncHTTP2Connection`
23 * Network Backends
24 * Sync
25 * `httpcore.backends.sync.SyncBackend`
26 * `httpcore.backends.mock.MockBackend`
27 * Async
28 * `httpcore.backends.auto.AutoBackend`
29 * `httpcore.backends.asyncio.AsyncioBackend`
30 * `httpcore.backends.trio.TrioBackend`
31 * `httpcore.backends.mock.AsyncMockBackend`
32 * Base interfaces
33 * `httpcore.backends.base.NetworkBackend`
34 * `httpcore.backends.base.AsyncNetworkBackend`
35 * Exceptions
36 * `httpcore.TimeoutException`
37 * `httpcore.PoolTimeout`
38 * `httpcore.ConnectTimeout`
39 * `httpcore.ReadTimeout`
40 * `httpcore.WriteTimeout`
41 * `httpcore.NetworkError`
42 * `httpcore.ConnectError`
43 * `httpcore.ReadError`
44 * `httpcore.WriteError`
45 * `httpcore.ProtocolError`
46 * `httpcore.RemoteProtocolError`
47 * `httpcore.LocalProtocolError`
48 * `httpcore.ProxyError`
49 * `httpcore.UnsupportedProtocol`
httpcore/__init__.py
-from ._async.base import AsyncByteStream, AsyncHTTPTransport
-from ._async.connection_pool import AsyncConnectionPool
-from ._async.http_proxy import AsyncHTTPProxy
-from ._bytestreams import AsyncIteratorByteStream, ByteStream, IteratorByteStream
+from ._api import request, stream
+from ._async import (
+    AsyncConnectionInterface,
+    AsyncConnectionPool,
+    AsyncHTTP2Connection,
+    AsyncHTTP11Connection,
+    AsyncHTTPConnection,
+    AsyncHTTPProxy,
+)
 from ._exceptions import (
-    CloseError,
     ConnectError,
+    ConnectionNotAvailable,
     ConnectTimeout,
     LocalProtocolError,
     NetworkError,
     PoolTimeout,
     ProtocolError,
     ProxyError,
     ReadError,
     ReadTimeout,
     RemoteProtocolError,
     TimeoutException,
     UnsupportedProtocol,
     WriteError,
     WriteTimeout,
 )
-from ._sync.base import SyncByteStream, SyncHTTPTransport
-from ._sync.connection_pool import SyncConnectionPool
-from ._sync.http_proxy import SyncHTTPProxy
+from ._models import URL, Origin, Request, Response
+from ._ssl import default_ssl_context
+from ._sync import (
+    ConnectionInterface,
+    ConnectionPool,
+    HTTP2Connection,
+    HTTP11Connection,
+    HTTPConnection,
+    HTTPProxy,
+)

 __all__ = [
-    "AsyncByteStream",
+    # top-level requests
+    "request",
+    "stream",
+    # models
+    "Origin",
+    "URL",
+    "Request",
+    "Response",
+    # async
+    "AsyncHTTPConnection",
     "AsyncConnectionPool",
     "AsyncHTTPProxy",
-    "AsyncHTTPTransport",
-    "AsyncIteratorByteStream",
-    "ByteStream",
-    "CloseError",
+    "AsyncHTTP11Connection",
+    "AsyncHTTP2Connection",
+    "AsyncConnectionInterface",
+    # sync
+    "HTTPConnection",
+    "ConnectionPool",
+    "HTTPProxy",
+    "HTTP11Connection",
+    "HTTP2Connection",
+    "ConnectionInterface",
+    # util
+    "default_ssl_context",
+    # exceptions
+    "ConnectionNotAvailable",
+    "ProxyError",
+    "ProtocolError",
+    "LocalProtocolError",
+    "RemoteProtocolError",
+    "UnsupportedProtocol",
+    "TimeoutException",
+    "PoolTimeout",
+    "ConnectTimeout",
+    "ReadTimeout",
+    "WriteTimeout",
+    "NetworkError",
     "ConnectError",
-    "ConnectTimeout",
-    "IteratorByteStream",
-    "LocalProtocolError",
-    "NetworkError",
-    "PoolTimeout",
-    "ProtocolError",
-    "ProxyError",
     "ReadError",
-    "ReadTimeout",
-    "RemoteProtocolError",
-    "SyncByteStream",
-    "SyncConnectionPool",
-    "SyncHTTPProxy",
-    "SyncHTTPTransport",
-    "TimeoutException",
-    "UnsupportedProtocol",
     "WriteError",
-    "WriteTimeout",
 ]
-__version__ = "0.13.7"
+
+__version__ = "0.14.3"
+

 __locals = locals()
-
-for _name in __all__:
-    if not _name.startswith("__"):
-        # Save original source module, used by Sphinx.
-        __locals[_name].__source_module__ = __locals[_name].__module__
-        # Override module for prettier repr().
-        setattr(__locals[_name], "__module__", "httpcore")  # noqa
+for __name in __all__:
+    if not __name.startswith("__"):
+        setattr(__locals[__name], "__module__", "httpcore")  # noqa
0 from contextlib import contextmanager
1 from typing import Iterator, Union
2
3 from ._models import URL, Response
4 from ._sync.connection_pool import ConnectionPool
5
6
7 def request(
8 method: Union[bytes, str],
9 url: Union[URL, bytes, str],
10 *,
11 headers: Union[dict, list] = None,
12 content: Union[bytes, Iterator[bytes]] = None,
13 extensions: dict = None,
14 ) -> Response:
15 """
16 Sends an HTTP request, returning the response.
17
18 ```
19 response = httpcore.request("GET", "https://www.example.com/")
20 ```
21
22 Arguments:
23 method: The HTTP method for the request. Typically one of `"GET"`,
24 `"OPTIONS"`, `"HEAD"`, `"POST"`, `"PUT"`, `"PATCH"`, or `"DELETE"`.
25 url: The URL of the HTTP request. Either as an instance of `httpcore.URL`,
26 or as str/bytes.
27 headers: The HTTP request headers. Either as a dictionary of str/bytes,
28 or as a list of two-tuples of str/bytes.
29 content: The content of the request body. Either as bytes,
30 or as a bytes iterator.
31 extensions: A dictionary of optional extra information included on the request.
32 Possible keys include `"timeout"`.
33
34 Returns:
35 An instance of `httpcore.Response`.
36 """
37 with ConnectionPool() as pool:
38 return pool.request(
39 method=method,
40 url=url,
41 headers=headers,
42 content=content,
43 extensions=extensions,
44 )
45
46
47 @contextmanager
48 def stream(
49 method: Union[bytes, str],
50 url: Union[URL, bytes, str],
51 *,
52 headers: Union[dict, list] = None,
53 content: Union[bytes, Iterator[bytes]] = None,
54 extensions: dict = None,
55 ) -> Iterator[Response]:
56 """
57 Sends an HTTP request, returning the response within a context manager.
58
59 ```
60 with httpcore.stream("GET", "https://www.example.com/") as response:
61 ...
62 ```
63
64 When using the `stream()` function, the body of the response will not be
65 automatically read. If you want to access the response body you should
66 either use `content = response.read()`, or `for chunk in response.iter_stream()`.
67
68 Arguments:
69 method: The HTTP method for the request. Typically one of `"GET"`,
70 `"OPTIONS"`, `"HEAD"`, `"POST"`, `"PUT"`, `"PATCH"`, or `"DELETE"`.
71 url: The URL of the HTTP request. Either as an instance of `httpcore.URL`,
72 or as str/bytes.
73 headers: The HTTP request headers. Either as a dictionary of str/bytes,
74 or as a list of two-tuples of str/bytes.
75 content: The content of the request body. Either as bytes,
76 or as a bytes iterator.
77 extensions: A dictionary of optional extra information included on the request.
78 Possible keys include `"timeout"`.
79
80 Returns:
81 An instance of `httpcore.Response`.
82 """
83 with ConnectionPool() as pool:
84 with pool.stream(
85 method=method,
86 url=url,
87 headers=headers,
88 content=content,
89 extensions=extensions,
90 ) as response:
91 yield response
0 from .connection import AsyncHTTPConnection
1 from .connection_pool import AsyncConnectionPool
2 from .http11 import AsyncHTTP11Connection
3 from .http_proxy import AsyncHTTPProxy
4 from .interfaces import AsyncConnectionInterface
5
6 try:
7 from .http2 import AsyncHTTP2Connection
8 except ImportError: # pragma: nocover
9
10 class AsyncHTTP2Connection: # type: ignore
11 def __init__(self, *args, **kwargs) -> None: # type: ignore
12 raise RuntimeError(
13 "Attempted to use http2 support, but the `h2` package is not "
14 "installed. Use 'pip install httpcore[http2]'."
15 )
16
17
18 __all__ = [
19 "AsyncHTTPConnection",
20 "AsyncConnectionPool",
21 "AsyncHTTPProxy",
22 "AsyncHTTP11Connection",
23 "AsyncHTTP2Connection",
24 "AsyncConnectionInterface",
25 ]
httpcore/_async/base.py (+0, −122: file deleted)
0 import enum
1 from types import TracebackType
2 from typing import AsyncIterator, Tuple, Type
3
4 from .._types import URL, Headers, T
5
6
7 class NewConnectionRequired(Exception):
8 pass
9
10
11 class ConnectionState(enum.IntEnum):
12 """
13 PENDING READY
14 | | ^
15 v V |
16 ACTIVE |
17 | | |
18 | V |
19 V IDLE-+
20 FULL |
21 | |
22 V V
23 CLOSED
24 """
25
26 PENDING = 0 # Connection not yet acquired.
27 READY = 1 # Re-acquired from pool, about to send a request.
28 ACTIVE = 2 # Active requests.
29 FULL = 3 # Active requests, no more stream IDs available.
30 IDLE = 4 # No active requests.
31 CLOSED = 5 # Connection closed.
32
33
34 class AsyncByteStream:
35 """
36 The base interface for request and response bodies.
37
38 Concrete implementations should subclass this class, and implement
39 the :meth:`__aiter__` method, and optionally the :meth:`aclose` method.
40 """
41
42 async def __aiter__(self) -> AsyncIterator[bytes]:
43 """
44 Yield bytes representing the request or response body.
45 """
46 yield b"" # pragma: nocover
47
48 async def aclose(self) -> None:
49 """
50 Must be called by the client to indicate that the stream has been closed.
51 """
52 pass # pragma: nocover
53
54 async def aread(self) -> bytes:
55 try:
56 return b"".join([part async for part in self])
57 finally:
58 await self.aclose()
59
60
61 class AsyncHTTPTransport:
62 """
63 The base interface for sending HTTP requests.
64
65 Concrete implementations should subclass this class, and implement
66 the :meth:`handle_async_request` method, and optionally the :meth:`aclose` method.
67 """
68
69 async def handle_async_request(
70 self,
71 method: bytes,
72 url: URL,
73 headers: Headers,
74 stream: AsyncByteStream,
75 extensions: dict,
76 ) -> Tuple[int, Headers, AsyncByteStream, dict]:
77 """
78 The interface for sending a single HTTP request, and returning a response.
79
80 Parameters
81 ----------
82 method:
83 The HTTP method, such as ``b'GET'``.
84 url:
85 The URL as a 4-tuple of (scheme, host, port, path).
86 headers:
87 Any HTTP headers to send with the request.
88 stream:
89 The body of the HTTP request.
90 extensions:
91 A dictionary of optional extensions.
92
93 Returns
94 -------
95 status_code:
96 The HTTP status code, such as ``200``.
97 headers:
98 Any HTTP headers included on the response.
99 stream:
100 The body of the HTTP response.
101 extensions:
102 A dictionary of optional extensions.
103 """
104 raise NotImplementedError() # pragma: nocover
105
106 async def aclose(self) -> None:
107 """
108 Close the implementation, which should close any outstanding response streams,
109 and any keep alive connections.
110 """
111
112 async def __aenter__(self: T) -> T:
113 return self
114
115 async def __aexit__(
116 self,
117 exc_type: Type[BaseException] = None,
118 exc_value: BaseException = None,
119 traceback: TracebackType = None,
120 ) -> None:
121 await self.aclose()
0 from ssl import SSLContext
1 from typing import List, Optional, Tuple, cast
2
3 from .._backends.auto import AsyncBackend, AsyncLock, AsyncSocketStream, AutoBackend
4 from .._exceptions import ConnectError, ConnectTimeout
5 from .._types import URL, Headers, Origin, TimeoutDict
6 from .._utils import exponential_backoff, get_logger, url_to_origin
7 from .base import AsyncByteStream, AsyncHTTPTransport, NewConnectionRequired
8 from .http import AsyncBaseHTTPConnection
0 import itertools
1 import ssl
2 from types import TracebackType
3 from typing import Iterator, Optional, Type
4
5 from .._exceptions import ConnectError, ConnectionNotAvailable, ConnectTimeout
6 from .._models import Origin, Request, Response
7 from .._ssl import default_ssl_context
8 from .._synchronization import AsyncLock
9 from .._trace import Trace
10 from ..backends.auto import AutoBackend
11 from ..backends.base import AsyncNetworkBackend, AsyncNetworkStream
912 from .http11 import AsyncHTTP11Connection
10
11 logger = get_logger(__name__)
13 from .interfaces import AsyncConnectionInterface
1214
1315 RETRIES_BACKOFF_FACTOR = 0.5 # 0s, 0.5s, 1s, 2s, 4s, etc.
1416
1517
16 class AsyncHTTPConnection(AsyncHTTPTransport):
18 def exponential_backoff(factor: float) -> Iterator[float]:
19 yield 0
20 for n in itertools.count(2):
21 yield factor * (2 ** (n - 2))
22
23
24 class AsyncHTTPConnection(AsyncConnectionInterface):
1725 def __init__(
1826 self,
1927 origin: Origin,
28 ssl_context: ssl.SSLContext = None,
29 keepalive_expiry: float = None,
2030 http1: bool = True,
2131 http2: bool = False,
22 keepalive_expiry: float = None,
32 retries: int = 0,
33 local_address: str = None,
2334 uds: str = None,
24 ssl_context: SSLContext = None,
25 socket: AsyncSocketStream = None,
26 local_address: str = None,
27 retries: int = 0,
28 backend: AsyncBackend = None,
29 ):
30 self.origin = origin
31 self._http1_enabled = http1
32 self._http2_enabled = http2
35 network_backend: AsyncNetworkBackend = None,
36 ) -> None:
37 ssl_context = default_ssl_context() if ssl_context is None else ssl_context
38 alpn_protocols = ["http/1.1", "h2"] if http2 else ["http/1.1"]
39 ssl_context.set_alpn_protocols(alpn_protocols)
40
41 self._origin = origin
42 self._ssl_context = ssl_context
3343 self._keepalive_expiry = keepalive_expiry
44 self._http1 = http1
45 self._http2 = http2
46 self._retries = retries
47 self._local_address = local_address
3448 self._uds = uds
35 self._ssl_context = SSLContext() if ssl_context is None else ssl_context
36 self.socket = socket
37 self._local_address = local_address
38 self._retries = retries
39
40 alpn_protocols: List[str] = []
41 if http1:
42 alpn_protocols.append("http/1.1")
43 if http2:
44 alpn_protocols.append("h2")
45
46 self._ssl_context.set_alpn_protocols(alpn_protocols)
47
48 self.connection: Optional[AsyncBaseHTTPConnection] = None
49 self._is_http11 = False
50 self._is_http2 = False
51 self._connect_failed = False
52 self._expires_at: Optional[float] = None
53 self._backend = AutoBackend() if backend is None else backend
54
55 def __repr__(self) -> str:
56 return f"<AsyncHTTPConnection [{self.info()}]>"
57
58 def info(self) -> str:
59 if self.connection is None:
60 return "Connection failed" if self._connect_failed else "Connecting"
61 return self.connection.info()
62
63 def should_close(self) -> bool:
64 """
65 Return `True` if the connection is in a state where it should be closed.
66 This occurs when any of the following occur:
67
68 * There are no active requests on an HTTP/1.1 connection, and the underlying
69 socket is readable. The only valid state the socket can be readable in
70 if this occurs is when the b"" EOF marker is about to be returned,
71 indicating a server disconnect.
72 * There are no active requests being made and the keepalive timeout has passed.
73 """
74 if self.connection is None:
75 return False
76 return self.connection.should_close()
77
78 def is_idle(self) -> bool:
79 """
80 Return `True` if the connection is currently idle.
81 """
82 if self.connection is None:
83 return False
84 return self.connection.is_idle()
85
86 def is_closed(self) -> bool:
87 if self.connection is None:
88 return self._connect_failed
89 return self.connection.is_closed()
90
91 def is_available(self) -> bool:
92 """
93 Return `True` if the connection is currently able to accept an outgoing request.
94 This occurs when any of the following occur:
95
96 * The connection has not yet been opened, and HTTP/2 support is enabled.
97 We don't *know* at this point if we'll end up on an HTTP/2 connection or
98 not, but we *might* do, so we indicate availability.
99 * The connection has been opened, and is currently idle.
100 * The connection is open, and is an HTTP/2 connection. The connection must
101 also not currently be exceeding the maximum number of allowable concurrent
102 streams and must not have exhausted the maximum total number of stream IDs.
103 """
104 if self.connection is None:
105 return self._http2_enabled and not self.is_closed
106 return self.connection.is_available()
107
108 @property
109 def request_lock(self) -> AsyncLock:
110 # We do this lazily, to make sure backend autodetection always
111 # runs within an async context.
112 if not hasattr(self, "_request_lock"):
113 self._request_lock = self._backend.create_lock()
114 return self._request_lock
115
116 async def handle_async_request(
117 self,
118 method: bytes,
119 url: URL,
120 headers: Headers,
121 stream: AsyncByteStream,
122 extensions: dict,
123 ) -> Tuple[int, Headers, AsyncByteStream, dict]:
124 assert url_to_origin(url) == self.origin
125 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
126
127 async with self.request_lock:
128 if self.connection is None:
129 if self._connect_failed:
130 raise NewConnectionRequired()
131 if not self.socket:
132 logger.trace(
133 "open_socket origin=%r timeout=%r", self.origin, timeout
49
50 self._network_backend: AsyncNetworkBackend = (
51 AutoBackend() if network_backend is None else network_backend
52 )
53 self._connection: Optional[AsyncConnectionInterface] = None
54 self._connect_failed: bool = False
55 self._request_lock = AsyncLock()
56
57 async def handle_async_request(self, request: Request) -> Response:
58 if not self.can_handle_request(request.url.origin):
59 raise RuntimeError(
60 f"Attempted to send request to {request.url.origin} on connection to {self._origin}"
61 )
62
63 async with self._request_lock:
64 if self._connection is None:
65 try:
66 stream = await self._connect(request)
67
68 ssl_object = stream.get_extra_info("ssl_object")
69 http2_negotiated = (
70 ssl_object is not None
71 and ssl_object.selected_alpn_protocol() == "h2"
13472 )
135 self.socket = await self._open_socket(timeout)
136 self._create_connection(self.socket)
137 elif not self.connection.is_available():
138 raise NewConnectionRequired()
139
140 assert self.connection is not None
141 logger.trace(
142 "connection.handle_async_request method=%r url=%r headers=%r",
143 method,
144 url,
145 headers,
146 )
147 return await self.connection.handle_async_request(
148 method, url, headers, stream, extensions
149 )
150
151 async def _open_socket(self, timeout: TimeoutDict = None) -> AsyncSocketStream:
152 scheme, hostname, port = self.origin
153 timeout = {} if timeout is None else timeout
154 ssl_context = self._ssl_context if scheme == b"https" else None
73 if http2_negotiated or (self._http2 and not self._http1):
74 from .http2 import AsyncHTTP2Connection
75
76 self._connection = AsyncHTTP2Connection(
77 origin=self._origin,
78 stream=stream,
79 keepalive_expiry=self._keepalive_expiry,
80 )
81 else:
82 self._connection = AsyncHTTP11Connection(
83 origin=self._origin,
84 stream=stream,
85 keepalive_expiry=self._keepalive_expiry,
86 )
87 except Exception as exc:
88 self._connect_failed = True
89 raise exc
90 elif not self._connection.is_available():
91 raise ConnectionNotAvailable()
92
93 return await self._connection.handle_async_request(request)
94
95 async def _connect(self, request: Request) -> AsyncNetworkStream:
96 timeouts = request.extensions.get("timeout", {})
97 timeout = timeouts.get("connect", None)
15598
15699 retries_left = self._retries
157100 delays = exponential_backoff(factor=RETRIES_BACKOFF_FACTOR)
159102 while True:
160103 try:
161104 if self._uds is None:
162 return await self._backend.open_tcp_stream(
163 hostname,
164 port,
165 ssl_context,
166 timeout,
167 local_address=self._local_address,
168 )
105 kwargs = {
106 "host": self._origin.host.decode("ascii"),
107 "port": self._origin.port,
108 "local_address": self._local_address,
109 "timeout": timeout,
110 }
111 async with Trace(
112 "connection.connect_tcp", request, kwargs
113 ) as trace:
114 stream = await self._network_backend.connect_tcp(**kwargs)
115 trace.return_value = stream
169116 else:
170 return await self._backend.open_uds_stream(
171 self._uds, hostname, ssl_context, timeout
172 )
117 kwargs = {
118 "path": self._uds,
119 "timeout": timeout,
120 }
121 async with Trace(
122 "connection.connect_unix_socket", request, kwargs
123 ) as trace:
124 stream = await self._network_backend.connect_unix_socket(
125 **kwargs
126 )
127 trace.return_value = stream
173128 except (ConnectError, ConnectTimeout):
174129 if retries_left <= 0:
175 self._connect_failed = True
176130 raise
177131 retries_left -= 1
178132 delay = next(delays)
179 await self._backend.sleep(delay)
180 except Exception: # noqa: PIE786
181 self._connect_failed = True
182 raise
183
184 def _create_connection(self, socket: AsyncSocketStream) -> None:
185 http_version = socket.get_http_version()
186 logger.trace(
187 "create_connection socket=%r http_version=%r", socket, http_version
188 )
189 if http_version == "HTTP/2" or (
190 self._http2_enabled and not self._http1_enabled
191 ):
192 from .http2 import AsyncHTTP2Connection
193
194 self._is_http2 = True
195 self.connection = AsyncHTTP2Connection(
196 socket=socket,
197 keepalive_expiry=self._keepalive_expiry,
198 backend=self._backend,
133 # TRACE 'retry'
134 await self._network_backend.sleep(delay)
135 else:
136 break
137
138 if self._origin.scheme == b"https":
139 kwargs = {
140 "ssl_context": self._ssl_context,
141 "server_hostname": self._origin.host.decode("ascii"),
142 "timeout": timeout,
143 }
144 async with Trace("connection.start_tls", request, kwargs) as trace:
145 stream = await stream.start_tls(**kwargs)
146 trace.return_value = stream
147 return stream
148
149 def can_handle_request(self, origin: Origin) -> bool:
150 return origin == self._origin
151
152 async def aclose(self) -> None:
153 if self._connection is not None:
154 await self._connection.aclose()
155
156 def is_available(self) -> bool:
157 if self._connection is None:
158 # If HTTP/2 support is enabled, and the resulting connection could
159 # end up as HTTP/2 then we should indicate the connection as being
160 # available to service multiple requests.
161 return (
162 self._http2
163 and (self._origin.scheme == b"https" or not self._http1)
164 and not self._connect_failed
199165 )
200 else:
201 self._is_http11 = True
202 self.connection = AsyncHTTP11Connection(
203 socket=socket, keepalive_expiry=self._keepalive_expiry
204 )
205
206 async def start_tls(
207 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict = None
166 return self._connection.is_available()
167
168 def has_expired(self) -> bool:
169 if self._connection is None:
170 return self._connect_failed
171 return self._connection.has_expired()
172
173 def is_idle(self) -> bool:
174 if self._connection is None:
175 return self._connect_failed
176 return self._connection.is_idle()
177
178 def is_closed(self) -> bool:
179 if self._connection is None:
180 return self._connect_failed
181 return self._connection.is_closed()
182
183 def info(self) -> str:
184 if self._connection is None:
185 return "CONNECTION FAILED" if self._connect_failed else "CONNECTING"
186 return self._connection.info()
187
188 def __repr__(self) -> str:
189 return f"<{self.__class__.__name__} [{self.info()}]>"
190
191 # These context managers are not used in the standard flow, but are
192 # useful for testing or working with connection instances directly.
193
194 async def __aenter__(self) -> "AsyncHTTPConnection":
195 return self
196
197 async def __aexit__(
198 self,
199 exc_type: Type[BaseException] = None,
200 exc_value: BaseException = None,
201 traceback: TracebackType = None,
208202 ) -> None:
209 if self.connection is not None:
210 logger.trace("start_tls hostname=%r timeout=%r", hostname, timeout)
211 self.socket = await self.connection.start_tls(
212 hostname, ssl_context, timeout
213 )
214 logger.trace("start_tls complete hostname=%r timeout=%r", hostname, timeout)
215
216 async def aclose(self) -> None:
217 async with self.request_lock:
218 if self.connection is not None:
219 await self.connection.aclose()
203 await self.aclose()
0 import warnings
1 from ssl import SSLContext
2 from typing import (
3 AsyncIterator,
4 Callable,
5 Dict,
6 List,
7 Optional,
8 Set,
9 Tuple,
10 Union,
11 cast,
12 )
13
14 from .._backends.auto import AsyncBackend, AsyncLock, AsyncSemaphore
15 from .._backends.base import lookup_async_backend
16 from .._exceptions import LocalProtocolError, PoolTimeout, UnsupportedProtocol
17 from .._threadlock import ThreadLock
18 from .._types import URL, Headers, Origin, TimeoutDict
19 from .._utils import get_logger, origin_to_url_string, url_to_origin
20 from .base import AsyncByteStream, AsyncHTTPTransport, NewConnectionRequired
0 import ssl
1 import sys
2 from types import TracebackType
3 from typing import AsyncIterable, AsyncIterator, List, Optional, Type
4
5 from .._exceptions import ConnectionNotAvailable, UnsupportedProtocol
6 from .._models import Origin, Request, Response
7 from .._ssl import default_ssl_context
8 from .._synchronization import AsyncEvent, AsyncLock
9 from ..backends.auto import AutoBackend
10 from ..backends.base import AsyncNetworkBackend
2111 from .connection import AsyncHTTPConnection
22
23 logger = get_logger(__name__)
24
25
26 class NullSemaphore(AsyncSemaphore):
27 def __init__(self) -> None:
28 pass
29
30 async def acquire(self, timeout: float = None) -> None:
31 return
32
33 async def release(self) -> None:
34 return
35
36
37 class ResponseByteStream(AsyncByteStream):
12 from .interfaces import AsyncConnectionInterface, AsyncRequestInterface
13
14
15 class RequestStatus:
16 def __init__(self, request: Request):
17 self.request = request
18 self.connection: Optional[AsyncConnectionInterface] = None
19 self._connection_acquired = AsyncEvent()
20
21 def set_connection(self, connection: AsyncConnectionInterface) -> None:
22 assert self.connection is None
23 self.connection = connection
24 self._connection_acquired.set()
25
26 def unset_connection(self) -> None:
27 assert self.connection is not None
28 self.connection = None
29 self._connection_acquired = AsyncEvent()
30
31 async def wait_for_connection(
32 self, timeout: float = None
33 ) -> AsyncConnectionInterface:
34 await self._connection_acquired.wait(timeout=timeout)
35 assert self.connection is not None
36 return self.connection
37
38
39 class AsyncConnectionPool(AsyncRequestInterface):
40 """
41 A connection pool for making HTTP requests.
42 """
43
3844 def __init__(
3945 self,
40 stream: AsyncByteStream,
41 connection: AsyncHTTPConnection,
42 callback: Callable,
43 ) -> None:
44 """
45 A wrapper around the response stream that we return from
46 `.handle_async_request()`.
47
48 Ensures that when `stream.aclose()` is called, the connection pool
49 is notified via a callback.
50 """
51 self.stream = stream
52 self.connection = connection
53 self.callback = callback
54
55 async def __aiter__(self) -> AsyncIterator[bytes]:
56 async for chunk in self.stream:
57 yield chunk
58
59 async def aclose(self) -> None:
60 try:
61 # Call the underlying stream close callback.
62 # This will be a call to `AsyncHTTP11Connection._response_closed()`
63 # or `AsyncHTTP2Stream._response_closed()`.
64 await self.stream.aclose()
65 finally:
66 # Call the connection pool close callback.
67 # This will be a call to `AsyncConnectionPool._response_closed()`.
68 await self.callback(self.connection)
69
70
71 class AsyncConnectionPool(AsyncHTTPTransport):
72 """
73 A connection pool for making HTTP requests.
74
75 Parameters
76 ----------
77 ssl_context:
78 An SSL context to use for verifying connections.
79 max_connections:
80 The maximum number of concurrent connections to allow.
81 max_keepalive_connections:
82 The maximum number of connections to allow before closing keep-alive
83 connections.
84 keepalive_expiry:
85 The maximum time to allow before closing a keep-alive connection.
86 http1:
87 Enable/Disable HTTP/1.1 support. Defaults to True.
88 http2:
89 Enable/Disable HTTP/2 support. Defaults to False.
90 uds:
91 Path to a Unix Domain Socket to use instead of TCP sockets.
92 local_address:
93 Local address to connect from. Can also be used to connect using a particular
94 address family. Using ``local_address="0.0.0.0"`` will connect using an
95 ``AF_INET`` address (IPv4), while using ``local_address="::"`` will connect
96 using an ``AF_INET6`` address (IPv6).
97 retries:
98 The maximum number of retries when trying to establish a connection.
99 backend:
100 A name indicating which concurrency backend to use.
101 """
102
103 def __init__(
104 self,
105 ssl_context: SSLContext = None,
106 max_connections: int = None,
46 ssl_context: ssl.SSLContext = None,
47 max_connections: Optional[int] = 10,
10748 max_keepalive_connections: int = None,
10849 keepalive_expiry: float = None,
10950 http1: bool = True,
11051 http2: bool = False,
52 retries: int = 0,
53 local_address: str = None,
11154 uds: str = None,
112 local_address: str = None,
113 retries: int = 0,
114 max_keepalive: int = None,
115 backend: Union[AsyncBackend, str] = "auto",
116 ):
117 if max_keepalive is not None:
118 warnings.warn(
119 "'max_keepalive' is deprecated. Use 'max_keepalive_connections'.",
120 DeprecationWarning,
121 )
122 max_keepalive_connections = max_keepalive
123
124 if isinstance(backend, str):
125 backend = lookup_async_backend(backend)
126
127 self._ssl_context = SSLContext() if ssl_context is None else ssl_context
128 self._max_connections = max_connections
129 self._max_keepalive_connections = max_keepalive_connections
55 network_backend: AsyncNetworkBackend = None,
56 ) -> None:
57 """
58 A connection pool for making HTTP requests.
59
60 Parameters:
61 ssl_context: An SSL context to use for verifying connections.
62 If not specified, the default `httpcore.default_ssl_context()`
63 will be used.
64 max_connections: The maximum number of concurrent HTTP connections that
65 the pool should allow. Any attempt to send a request on a pool that
66 would exceed this amount will block until a connection is available.
67 max_keepalive_connections: The maximum number of idle HTTP connections
68 that will be maintained in the pool.
69 keepalive_expiry: The duration in seconds that an idle HTTP connection
70 may be maintained for before being expired from the pool.
71 http1: A boolean indicating if HTTP/1.1 requests should be supported
72 by the connection pool. Defaults to True.
73 http2: A boolean indicating if HTTP/2 requests should be supported by
74 the connection pool. Defaults to False.
75 retries: The maximum number of retries when trying to establish a
76 connection.
77 local_address: Local address to connect from. Can also be used to connect
78 using a particular address family. Using `local_address="0.0.0.0"`
79 will connect using an `AF_INET` address (IPv4), while using
80 `local_address="::"` will connect using an `AF_INET6` address (IPv6).
81 uds: Path to a Unix Domain Socket to use instead of TCP sockets.
82 network_backend: A backend instance to use for handling network I/O.
83 """
84 if ssl_context is None:
85 ssl_context = default_ssl_context()
86
87 self._ssl_context = ssl_context
88
89 self._max_connections = (
90 sys.maxsize if max_connections is None else max_connections
91 )
92 self._max_keepalive_connections = (
93 sys.maxsize
94 if max_keepalive_connections is None
95 else max_keepalive_connections
96 )
97 self._max_keepalive_connections = min(
98 self._max_connections, self._max_keepalive_connections
99 )
100
130101 self._keepalive_expiry = keepalive_expiry
131102 self._http1 = http1
132103 self._http2 = http2
104 self._retries = retries
105 self._local_address = local_address
133106 self._uds = uds
134 self._local_address = local_address
135 self._retries = retries
136 self._connections: Dict[Origin, Set[AsyncHTTPConnection]] = {}
137 self._thread_lock = ThreadLock()
138 self._backend = backend
139 self._next_keepalive_check = 0.0
140
141 if not (http1 or http2):
142 raise ValueError("Either http1 or http2 must be True.")
143
144 if http2:
145 try:
146 import h2 # noqa: F401
147 except ImportError:
148 raise ImportError(
149 "Attempted to use http2=True, but the 'h2' "
150 "package is not installed. Use 'pip install httpcore[http2]'."
151 )
152
153 @property
154 def _connection_semaphore(self) -> AsyncSemaphore:
155 # We do this lazily, to make sure backend autodetection always
156 # runs within an async context.
157 if not hasattr(self, "_internal_semaphore"):
158 if self._max_connections is not None:
159 self._internal_semaphore = self._backend.create_semaphore(
160 self._max_connections, exc_class=PoolTimeout
161 )
162 else:
163 self._internal_semaphore = NullSemaphore()
164
165 return self._internal_semaphore
166
167 @property
168 def _connection_acquiry_lock(self) -> AsyncLock:
169 if not hasattr(self, "_internal_connection_acquiry_lock"):
170 self._internal_connection_acquiry_lock = self._backend.create_lock()
171 return self._internal_connection_acquiry_lock
172
173 def _create_connection(
174 self,
175 origin: Tuple[bytes, bytes, int],
176 ) -> AsyncHTTPConnection:
107
108 self._pool: List[AsyncConnectionInterface] = []
109 self._requests: List[RequestStatus] = []
110 self._pool_lock = AsyncLock()
111 self._network_backend = (
112 AutoBackend() if network_backend is None else network_backend
113 )
114
115 def create_connection(self, origin: Origin) -> AsyncConnectionInterface:
177116 return AsyncHTTPConnection(
178117 origin=origin,
118 ssl_context=self._ssl_context,
119 keepalive_expiry=self._keepalive_expiry,
179120 http1=self._http1,
180121 http2=self._http2,
181 keepalive_expiry=self._keepalive_expiry,
122 retries=self._retries,
123 local_address=self._local_address,
182124 uds=self._uds,
183 ssl_context=self._ssl_context,
184 local_address=self._local_address,
185 retries=self._retries,
186 backend=self._backend,
187 )
188
189 async def handle_async_request(
125 network_backend=self._network_backend,
126 )
127
128 @property
129 def connections(self) -> List[AsyncConnectionInterface]:
130 """
131 Return a list of the connections currently in the pool.
132
133 For example:
134
135 ```python
136 >>> pool.connections
137 [
138 <AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 6]>,
139 <AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 9]> ,
140 <AsyncHTTPConnection ['http://example.com:80', HTTP/1.1, IDLE, Request Count: 1]>,
141 ]
142 ```
143 """
144 return list(self._pool)
145
146 async def _attempt_to_acquire_connection(self, status: RequestStatus) -> bool:
147 """
148 Attempt to provide a connection that can handle the given origin.
149 """
150 origin = status.request.url.origin
151
152 # If there are queued requests in front of us, then don't acquire a
153 # connection. We handle requests strictly in order.
154 waiting = [s for s in self._requests if s.connection is None]
155 if waiting and waiting[0] is not status:
156 return False
157
158 # Reuse an existing connection if one is currently available.
159 for idx, connection in enumerate(self._pool):
160 if connection.can_handle_request(origin) and connection.is_available():
161 self._pool.pop(idx)
162 self._pool.insert(0, connection)
163 status.set_connection(connection)
164 return True
165
166 # If the pool is currently full, attempt to close one idle connection.
167 if len(self._pool) >= self._max_connections:
168 for idx, connection in reversed(list(enumerate(self._pool))):
169 if connection.is_idle():
170 await connection.aclose()
171 self._pool.pop(idx)
172 break
173
174 # If the pool is still full, then we cannot acquire a connection.
175 if len(self._pool) >= self._max_connections:
176 return False
177
178 # Otherwise create a new connection.
179 connection = self.create_connection(origin)
180 self._pool.insert(0, connection)
181 status.set_connection(connection)
182 return True
183
184 async def _close_expired_connections(self) -> None:
185 """
186 Clean up the connection pool by closing off any connections that have expired.
187 """
188 # Close any connections that have expired their keep-alive time.
189 for idx, connection in reversed(list(enumerate(self._pool))):
190 if connection.has_expired():
191 await connection.aclose()
192 self._pool.pop(idx)
193
194 # If the pool size exceeds the maximum number of allowed keep-alive connections,
195 # then close off idle connections as required.
196 pool_size = len(self._pool)
197 for idx, connection in reversed(list(enumerate(self._pool))):
198 if connection.is_idle() and pool_size > self._max_keepalive_connections:
199 await connection.aclose()
200 self._pool.pop(idx)
201 pool_size -= 1
202
203 async def handle_async_request(self, request: Request) -> Response:
204 """
205 Send an HTTP request, and return an HTTP response.
206
207 This is the core implementation that is called into by `.request()` or `.stream()`.
208 """
209 scheme = request.url.scheme.decode()
210 if scheme == "":
211 raise UnsupportedProtocol(
212 "Request URL is missing an 'http://' or 'https://' protocol."
213 )
214 if scheme not in ("http", "https"):
215 raise UnsupportedProtocol(
216 f"Request URL has an unsupported protocol '{scheme}://'."
217 )
218
219 status = RequestStatus(request)
220
221 async with self._pool_lock:
222 self._requests.append(status)
223 await self._close_expired_connections()
224 await self._attempt_to_acquire_connection(status)
225
226 while True:
227 timeouts = request.extensions.get("timeout", {})
228 timeout = timeouts.get("pool", None)
229 connection = await status.wait_for_connection(timeout=timeout)
230 try:
231 response = await connection.handle_async_request(request)
232 except ConnectionNotAvailable:
233 # The ConnectionNotAvailable exception is a special case, that
234 # indicates we need to retry the request on a new connection.
235 #
236 # The most common case where this can occur is when multiple
237 # requests are queued waiting for a single connection, which
238 # might end up as an HTTP/2 connection, but which actually ends
239 # up as HTTP/1.1.
240 async with self._pool_lock:
241 # Maintain our position in the request queue, but reset the
242 # status so that the request becomes queued again.
243 status.unset_connection()
244 await self._attempt_to_acquire_connection(status)
245 except Exception as exc:
246 await self.response_closed(status)
247 raise exc
248 else:
249 break
250
251 # When we return the response, we wrap the stream in a special class
252 # that handles notifying the connection pool once the response
253 # has been released.
254 assert isinstance(response.stream, AsyncIterable)
255 return Response(
256 status=response.status,
257 headers=response.headers,
258 content=ConnectionPoolByteStream(response.stream, self, status),
259 extensions=response.extensions,
260 )
261
262 async def response_closed(self, status: RequestStatus) -> None:
263 """
264 This method acts as a callback once the request/response cycle is complete.
265
266 It is called into from the `ConnectionPoolByteStream.aclose()` method.
267 """
268 assert status.connection is not None
269 connection = status.connection
270
271 async with self._pool_lock:
272 # Update the state of the connection pool.
273 self._requests.remove(status)
274
275 if connection.is_closed() and connection in self._pool:
276 self._pool.remove(connection)
277
278 # Since we've had a response closed, it's possible we'll now be able
279 # to service one or more requests that are currently pending.
280 for status in self._requests:
281 if status.connection is None:
282 acquired = await self._attempt_to_acquire_connection(status)
283 # If we could not acquire a connection for a queued request
284 # then we don't need to check anymore requests that are
285 # queued later behind it.
286 if not acquired:
287 break
288
289 # Housekeeping.
290 await self._close_expired_connections()
291
292 async def aclose(self) -> None:
293 """
294 Close any connections in the pool.
295 """
296 async with self._pool_lock:
297 for connection in self._pool:
298 await connection.aclose()
299 self._pool = []
300 self._requests = []
301
302 async def __aenter__(self) -> "AsyncConnectionPool":
303 return self
304
305 async def __aexit__(
190306 self,
191 method: bytes,
192 url: URL,
193 headers: Headers,
194 stream: AsyncByteStream,
195 extensions: dict,
196 ) -> Tuple[int, Headers, AsyncByteStream, dict]:
197 if not url[0]:
198 raise UnsupportedProtocol(
199 "Request URL missing either an 'http://' or 'https://' protocol."
200 )
201
202 if url[0] not in (b"http", b"https"):
203 protocol = url[0].decode("ascii")
204 raise UnsupportedProtocol(
205 f"Request URL has an unsupported protocol '{protocol}://'."
206 )
207
208 if not url[1]:
209 raise LocalProtocolError("Missing hostname in URL.")
210
211 origin = url_to_origin(url)
212 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
213
214 await self._keepalive_sweep()
215
216 connection: Optional[AsyncHTTPConnection] = None
217 while connection is None:
218 async with self._connection_acquiry_lock:
219 # We get-or-create a connection as an atomic operation, to ensure
220 # that HTTP/2 requests issued in close concurrency will end up
221 # on the same connection.
222 logger.trace("get_connection_from_pool=%r", origin)
223 connection = await self._get_connection_from_pool(origin)
224
225 if connection is None:
226 connection = self._create_connection(origin=origin)
227 logger.trace("created connection=%r", connection)
228 await self._add_to_pool(connection, timeout=timeout)
229 else:
230 logger.trace("reuse connection=%r", connection)
231
232 try:
233 response = await connection.handle_async_request(
234 method, url, headers=headers, stream=stream, extensions=extensions
235 )
236 except NewConnectionRequired:
237 connection = None
238 except BaseException: # noqa: PIE786
239 # See https://github.com/encode/httpcore/pull/305 for motivation
240 # behind catching 'BaseException' rather than 'Exception' here.
241 logger.trace("remove from pool connection=%r", connection)
242 await self._remove_from_pool(connection)
243 raise
244
245 status_code, headers, stream, extensions = response
246 wrapped_stream = ResponseByteStream(
247 stream, connection=connection, callback=self._response_closed
248 )
249 return status_code, headers, wrapped_stream, extensions
250
251 async def _get_connection_from_pool(
252 self, origin: Origin
253 ) -> Optional[AsyncHTTPConnection]:
254 # Determine expired keep alive connections on this origin.
255 reuse_connection = None
256 connections_to_close = set()
257
258 for connection in self._connections_for_origin(origin):
259 if connection.should_close():
260 connections_to_close.add(connection)
261 await self._remove_from_pool(connection)
262 elif connection.is_available():
263 reuse_connection = connection
264
265 # Close any dropped connections.
266 for connection in connections_to_close:
267 await connection.aclose()
268
269 return reuse_connection
270
271 async def _response_closed(self, connection: AsyncHTTPConnection) -> None:
272 remove_from_pool = False
273 close_connection = False
274
275 if connection.is_closed():
276 remove_from_pool = True
277 elif connection.is_idle():
278 num_connections = len(self._get_all_connections())
279 if (
280 self._max_keepalive_connections is not None
281 and num_connections > self._max_keepalive_connections
282 ):
283 remove_from_pool = True
284 close_connection = True
285
286 if remove_from_pool:
287 await self._remove_from_pool(connection)
288
289 if close_connection:
290 await connection.aclose()
291
292 async def _keepalive_sweep(self) -> None:
293 """
294 Remove any IDLE connections that have expired past their keep-alive time.
295 """
296 if self._keepalive_expiry is None:
297 return
298
299 now = await self._backend.time()
300 if now < self._next_keepalive_check:
301 return
302
303 self._next_keepalive_check = now + min(1.0, self._keepalive_expiry)
304 connections_to_close = set()
305
306 for connection in self._get_all_connections():
307 if connection.should_close():
308 connections_to_close.add(connection)
309 await self._remove_from_pool(connection)
310
311 for connection in connections_to_close:
312 await connection.aclose()
313
314 async def _add_to_pool(
315 self, connection: AsyncHTTPConnection, timeout: TimeoutDict
307 exc_type: Type[BaseException] = None,
308 exc_value: BaseException = None,
309 traceback: TracebackType = None,
316310 ) -> None:
317 logger.trace("adding connection to pool=%r", connection)
318 await self._connection_semaphore.acquire(timeout=timeout.get("pool", None))
319 async with self._thread_lock:
320 self._connections.setdefault(connection.origin, set())
321 self._connections[connection.origin].add(connection)
322
323 async def _remove_from_pool(self, connection: AsyncHTTPConnection) -> None:
324 logger.trace("removing connection from pool=%r", connection)
325 async with self._thread_lock:
326 if connection in self._connections.get(connection.origin, set()):
327 await self._connection_semaphore.release()
328 self._connections[connection.origin].remove(connection)
329 if not self._connections[connection.origin]:
330 del self._connections[connection.origin]
331
332 def _connections_for_origin(self, origin: Origin) -> Set[AsyncHTTPConnection]:
333 return set(self._connections.get(origin, set()))
334
335 def _get_all_connections(self) -> Set[AsyncHTTPConnection]:
336 connections: Set[AsyncHTTPConnection] = set()
337 for connection_set in self._connections.values():
338 connections |= connection_set
339 return connections
311 await self.aclose()
312
313
314 class ConnectionPoolByteStream:
315 """
316 A wrapper around the response byte stream, that additionally handles
317 notifying the connection pool when the response has been closed.
318 """
319
320 def __init__(
321 self,
322 stream: AsyncIterable[bytes],
323 pool: AsyncConnectionPool,
324 status: RequestStatus,
325 ) -> None:
326 self._stream = stream
327 self._pool = pool
328 self._status = status
329
330 async def __aiter__(self) -> AsyncIterator[bytes]:
331 async for part in self._stream:
332 yield part
340333
341334 async def aclose(self) -> None:
342 connections = self._get_all_connections()
343 for connection in connections:
344 await self._remove_from_pool(connection)
345
346 # Close all connections
347 for connection in connections:
348 await connection.aclose()
349
350 async def get_connection_info(self) -> Dict[str, List[str]]:
351 """
352 Returns a dict of origin URLs to a list of summary strings for each connection.
353 """
354 await self._keepalive_sweep()
355
356 stats = {}
357 for origin, connections in self._connections.items():
358 stats[origin_to_url_string(origin)] = sorted(
359 [connection.info() for connection in connections]
360 )
361 return stats
335 try:
336 if hasattr(self._stream, "aclose"):
337 await self._stream.aclose() # type: ignore
338 finally:
339 await self._pool.response_closed(self._status)
httpcore/_async/http.py (+0, −42: file deleted)
0 from ssl import SSLContext
1
2 from .._backends.auto import AsyncSocketStream
3 from .._types import TimeoutDict
4 from .base import AsyncHTTPTransport
5
6
7 class AsyncBaseHTTPConnection(AsyncHTTPTransport):
8 def info(self) -> str:
9 raise NotImplementedError() # pragma: nocover
10
11 def should_close(self) -> bool:
12 """
13 Return `True` if the connection is in a state where it should be closed.
14 """
15 raise NotImplementedError() # pragma: nocover
16
17 def is_idle(self) -> bool:
18 """
19 Return `True` if the connection is currently idle.
20 """
21 raise NotImplementedError() # pragma: nocover
22
23 def is_closed(self) -> bool:
24 """
25 Return `True` if the connection has been closed.
26 """
27 raise NotImplementedError() # pragma: nocover
28
29 def is_available(self) -> bool:
30 """
31 Return `True` if the connection is currently able to accept an outgoing request.
32 """
33 raise NotImplementedError() # pragma: nocover
34
35 async def start_tls(
36 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict = None
37 ) -> AsyncSocketStream:
38 """
39 Upgrade the underlying socket to TLS.
40 """
41 raise NotImplementedError() # pragma: nocover
00 import enum
11 import time
2 from ssl import SSLContext
3 from typing import AsyncIterator, List, Optional, Tuple, Union, cast
2 from types import TracebackType
3 from typing import AsyncIterable, AsyncIterator, List, Optional, Tuple, Type, Union
44
55 import h11
66
7 from .._backends.auto import AsyncSocketStream
8 from .._bytestreams import AsyncIteratorByteStream
9 from .._exceptions import LocalProtocolError, RemoteProtocolError, map_exceptions
10 from .._types import URL, Headers, TimeoutDict
11 from .._utils import get_logger
12 from .base import AsyncByteStream, NewConnectionRequired
13 from .http import AsyncBaseHTTPConnection
7 from .._exceptions import (
8 ConnectionNotAvailable,
9 LocalProtocolError,
10 RemoteProtocolError,
11 map_exceptions,
12 )
13 from .._models import Origin, Request, Response
14 from .._synchronization import AsyncLock
15 from .._trace import Trace
16 from ..backends.base import AsyncNetworkStream
17 from .interfaces import AsyncConnectionInterface
1418
1519 H11Event = Union[
1620 h11.Request,
2226 ]
2327
2428
25 class ConnectionState(enum.IntEnum):
29 class HTTPConnectionState(enum.IntEnum):
2630 NEW = 0
2731 ACTIVE = 1
2832 IDLE = 2
2933 CLOSED = 3
3034
3135
32 logger = get_logger(__name__)
33
34
35 class AsyncHTTP11Connection(AsyncBaseHTTPConnection):
36 class AsyncHTTP11Connection(AsyncConnectionInterface):
3637 READ_NUM_BYTES = 64 * 1024
3738
38 def __init__(self, socket: AsyncSocketStream, keepalive_expiry: float = None):
39 self.socket = socket
40
39 def __init__(
40 self, origin: Origin, stream: AsyncNetworkStream, keepalive_expiry: float = None
41 ) -> None:
42 self._origin = origin
43 self._network_stream = stream
4144 self._keepalive_expiry: Optional[float] = keepalive_expiry
42 self._should_expire_at: Optional[float] = None
45 self._expire_at: Optional[float] = None
46 self._state = HTTPConnectionState.NEW
47 self._state_lock = AsyncLock()
48 self._request_count = 0
4349 self._h11_state = h11.Connection(our_role=h11.CLIENT)
44 self._state = ConnectionState.NEW
45
46 def __repr__(self) -> str:
47 return f"<AsyncHTTP11Connection [{self._state.name}]>"
48
49 def _now(self) -> float:
50 return time.monotonic()
51
52 def _server_disconnected(self) -> bool:
53 """
54 Return True if the connection is idle, and the underlying socket is readable.
55 The only valid state the socket can be readable here is when the b""
56 EOF marker is about to be returned, indicating a server disconnect.
57 """
58 return self._state == ConnectionState.IDLE and self.socket.is_readable()
59
60 def _keepalive_expired(self) -> bool:
61 """
62 Return True if the connection is idle, and has passed it's keepalive
63 expiry time.
64 """
65 return (
66 self._state == ConnectionState.IDLE
67 and self._should_expire_at is not None
68 and self._now() >= self._should_expire_at
69 )
70
71 def info(self) -> str:
72 return f"HTTP/1.1, {self._state.name}"
73
74 def should_close(self) -> bool:
75 """
76 Return `True` if the connection is in a state where it should be closed.
77 """
78 return self._server_disconnected() or self._keepalive_expired()
79
80 def is_idle(self) -> bool:
81 """
82 Return `True` if the connection is currently idle.
83 """
84 return self._state == ConnectionState.IDLE
85
86 def is_closed(self) -> bool:
87 """
88 Return `True` if the connection has been closed.
89 """
90 return self._state == ConnectionState.CLOSED
91
92 def is_available(self) -> bool:
93 """
94 Return `True` if the connection is currently able to accept an outgoing request.
95 """
96 return self._state == ConnectionState.IDLE
97
98 async def handle_async_request(
99 self,
100 method: bytes,
101 url: URL,
102 headers: Headers,
103 stream: AsyncByteStream,
104 extensions: dict,
105 ) -> Tuple[int, Headers, AsyncByteStream, dict]:
106 """
107 Send a single HTTP/1.1 request.
108
109 Note that there is no kind of task/thread locking at this layer of interface.
110 Dealing with locking for concurrency is handled by the `AsyncHTTPConnection`.
111 """
112 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
113
114 if self._state in (ConnectionState.NEW, ConnectionState.IDLE):
115 self._state = ConnectionState.ACTIVE
116 self._should_expire_at = None
117 else:
118 raise NewConnectionRequired()
119
120 await self._send_request(method, url, headers, timeout)
121 await self._send_request_body(stream, timeout)
122 (
123 http_version,
124 status_code,
125 reason_phrase,
126 headers,
127 ) = await self._receive_response(timeout)
128 response_stream = AsyncIteratorByteStream(
129 aiterator=self._receive_response_data(timeout),
130 aclose_func=self._response_closed,
131 )
132 extensions = {
133 "http_version": http_version,
134 "reason_phrase": reason_phrase,
135 }
136 return (status_code, headers, response_stream, extensions)
137
138 async def start_tls(
139 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict = None
140 ) -> AsyncSocketStream:
141 timeout = {} if timeout is None else timeout
142 self.socket = await self.socket.start_tls(hostname, ssl_context, timeout)
143 return self.socket
144
145 async def _send_request(
146 self, method: bytes, url: URL, headers: Headers, timeout: TimeoutDict
147 ) -> None:
148 """
149 Send the request line and headers.
150 """
151 logger.trace("send_request method=%r url=%r headers=%s", method, url, headers)
152 _scheme, _host, _port, target = url
50
51 async def handle_async_request(self, request: Request) -> Response:
52 if not self.can_handle_request(request.url.origin):
53 raise RuntimeError(
54 f"Attempted to send request to {request.url.origin} on connection "
55 f"to {self._origin}"
56 )
57
58 async with self._state_lock:
59 if self._state in (HTTPConnectionState.NEW, HTTPConnectionState.IDLE):
60 self._request_count += 1
61 self._state = HTTPConnectionState.ACTIVE
62 self._expire_at = None
63 else:
64 raise ConnectionNotAvailable()
65
66 try:
67 kwargs = {"request": request}
68 async with Trace("http11.send_request_headers", request, kwargs) as trace:
69 await self._send_request_headers(**kwargs)
70 async with Trace("http11.send_request_body", request, kwargs) as trace:
71 await self._send_request_body(**kwargs)
72 async with Trace(
73 "http11.receive_response_headers", request, kwargs
74 ) as trace:
75 (
76 http_version,
77 status,
78 reason_phrase,
79 headers,
80 ) = await self._receive_response_headers(**kwargs)
81 trace.return_value = (
82 http_version,
83 status,
84 reason_phrase,
85 headers,
86 )
87
88 return Response(
89 status=status,
90 headers=headers,
91 content=HTTP11ConnectionByteStream(self, request),
92 extensions={
93 "http_version": http_version,
94 "reason_phrase": reason_phrase,
95 "network_stream": self._network_stream,
96 },
97 )
98 except BaseException as exc:
99 async with Trace("http11.response_closed", request) as trace:
100 await self._response_closed()
101 raise exc
102
103 # Sending the request...
104
105 async def _send_request_headers(self, request: Request) -> None:
106 timeouts = request.extensions.get("timeout", {})
107 timeout = timeouts.get("write", None)
108
153109 with map_exceptions({h11.LocalProtocolError: LocalProtocolError}):
154 event = h11.Request(method=method, target=target, headers=headers)
155 await self._send_event(event, timeout)
156
157 async def _send_request_body(
158 self, stream: AsyncByteStream, timeout: TimeoutDict
159 ) -> None:
160 """
161 Send the request body.
162 """
163 # Send the request body.
164 async for chunk in stream:
165 logger.trace("send_data=Data(<%d bytes>)", len(chunk))
110 event = h11.Request(
111 method=request.method,
112 target=request.url.target,
113 headers=request.headers,
114 )
115 await self._send_event(event, timeout=timeout)
116
117 async def _send_request_body(self, request: Request) -> None:
118 timeouts = request.extensions.get("timeout", {})
119 timeout = timeouts.get("write", None)
120
121 assert isinstance(request.stream, AsyncIterable)
122 async for chunk in request.stream:
166123 event = h11.Data(data=chunk)
167 await self._send_event(event, timeout)
168
169 # Finalize sending the request.
124 await self._send_event(event, timeout=timeout)
125
170126 event = h11.EndOfMessage()
171 await self._send_event(event, timeout)
172
173 async def _send_event(self, event: H11Event, timeout: TimeoutDict) -> None:
174 """
175 Send a single `h11` event to the network, waiting for the data to
176 drain before returning.
177 """
127 await self._send_event(event, timeout=timeout)
128
129 async def _send_event(self, event: H11Event, timeout: float = None) -> None:
178130 bytes_to_send = self._h11_state.send(event)
179 await self.socket.write(bytes_to_send, timeout)
180
181 async def _receive_response(
182 self, timeout: TimeoutDict
131 await self._network_stream.write(bytes_to_send, timeout=timeout)
132
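For orientation, the h11 events that `_send_event` serializes for a simple request look like this (a sketch using h11's public event types):

```python
import h11

# Sent in order for a GET request with no body:
events = [
    h11.Request(
        method=b"GET",
        target=b"/",
        headers=[(b"Host", b"example.org")],
    ),
    # h11.Data(data=b"...") would appear here for a request with a body.
    h11.EndOfMessage(),
]
```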
133 # Receiving the response...
134
135 async def _receive_response_headers(
136 self, request: Request
183137 ) -> Tuple[bytes, int, bytes, List[Tuple[bytes, bytes]]]:
184 """
185 Read the response status and headers from the network.
186 """
138 timeouts = request.extensions.get("timeout", {})
139 timeout = timeouts.get("read", None)
140
187141 while True:
188 event = await self._receive_event(timeout)
142 event = await self._receive_event(timeout=timeout)
189143 if isinstance(event, h11.Response):
190144 break
191145
197151
198152 return http_version, event.status_code, event.reason, headers
199153
200 async def _receive_response_data(
201 self, timeout: TimeoutDict
202 ) -> AsyncIterator[bytes]:
203 """
204 Read the response data from the network.
205 """
154 async def _receive_response_body(self, request: Request) -> AsyncIterator[bytes]:
155 timeouts = request.extensions.get("timeout", {})
156 timeout = timeouts.get("read", None)
157
206158 while True:
207 event = await self._receive_event(timeout)
159 event = await self._receive_event(timeout=timeout)
208160 if isinstance(event, h11.Data):
209 logger.trace("receive_event=Data(<%d bytes>)", len(event.data))
210161 yield bytes(event.data)
211162 elif isinstance(event, (h11.EndOfMessage, h11.PAUSED)):
212 logger.trace("receive_event=%r", event)
213163 break
214164
215 async def _receive_event(self, timeout: TimeoutDict) -> H11Event:
216 """
217 Read a single `h11` event, reading more data from the network if needed.
218 """
165 async def _receive_event(self, timeout: float = None) -> H11Event:
219166 while True:
220167 with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
221168 event = self._h11_state.next_event()
222169
223170 if event is h11.NEED_DATA:
224 data = await self.socket.read(self.READ_NUM_BYTES, timeout)
225
226 # If we feed this case through h11 we'll raise an exception like:
227 #
228 # httpcore.RemoteProtocolError: can't handle event type
229 # ConnectionClosed when role=SERVER and state=SEND_RESPONSE
230 #
231 # Which is accurate, but not very informative from an end-user
232 # perspective. Instead we handle messaging for this case distinctly.
233 if data == b"" and self._h11_state.their_state == h11.SEND_RESPONSE:
234 msg = "Server disconnected without sending a response."
235 raise RemoteProtocolError(msg)
236
171 data = await self._network_stream.read(
172 self.READ_NUM_BYTES, timeout=timeout
173 )
237174 self._h11_state.receive_data(data)
238175 else:
239 assert event is not h11.NEED_DATA
240 break
241 return event
176 return event
242177
243178 async def _response_closed(self) -> None:
244 logger.trace(
245 "response_closed our_state=%r their_state=%r",
246 self._h11_state.our_state,
247 self._h11_state.their_state,
179 async with self._state_lock:
180 if (
181 self._h11_state.our_state is h11.DONE
182 and self._h11_state.their_state is h11.DONE
183 ):
184 self._state = HTTPConnectionState.IDLE
185 self._h11_state.start_next_cycle()
186 if self._keepalive_expiry is not None:
187 now = time.monotonic()
188 self._expire_at = now + self._keepalive_expiry
189 else:
190 await self.aclose()
191
192 # Once the connection is no longer required...
193
194 async def aclose(self) -> None:
195 # Note that this method unilaterally closes the connection, and does
196 # not have any kind of locking in place around it.
197 self._state = HTTPConnectionState.CLOSED
198 await self._network_stream.aclose()
199
200 # The AsyncConnectionInterface methods provide information about the state of
201 # the connection, allowing for a connection pooling implementation to
202 # determine when to reuse and when to close the connection...
203
204 def can_handle_request(self, origin: Origin) -> bool:
205 return origin == self._origin
206
207 def is_available(self) -> bool:
208 # Note that HTTP/1.1 connections in the "NEW" state are not treated as
209 # being "available". The control flow which created the connection will
210 # be able to send an outgoing request, but the connection will not be
211 # acquired from the connection pool for any other request.
212 return self._state == HTTPConnectionState.IDLE
213
214 def has_expired(self) -> bool:
215 now = time.monotonic()
216 keepalive_expired = self._expire_at is not None and now > self._expire_at
217
218 # If the HTTP connection is idle but the socket is readable, then the
219 # only valid state is that the socket is about to return b"", indicating
220 # a server-initiated disconnect.
221 server_disconnected = (
222 self._state == HTTPConnectionState.IDLE
223 and self._network_stream.get_extra_info("is_readable")
248224 )
249 if (
250 self._h11_state.our_state is h11.DONE
251 and self._h11_state.their_state is h11.DONE
252 ):
253 self._h11_state.start_next_cycle()
254 self._state = ConnectionState.IDLE
255 if self._keepalive_expiry is not None:
256 self._should_expire_at = self._now() + self._keepalive_expiry
257 else:
258 await self.aclose()
225
226 return keepalive_expired or server_disconnected
227
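A worked example of the keep-alive arithmetic above, with assumed values:

```python
# Assume keepalive_expiry=5.0 and the response closed at
# time.monotonic() == 100.0, so _expire_at == 105.0.
#
#   now == 103.0 -> has_expired() is False; the pool may reuse the connection.
#   now == 106.0 -> has_expired() is True; the pool closes it instead.
#
# Independently, an IDLE connection whose socket is already readable is
# treated as a server-initiated disconnect and also reports has_expired().
```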
228 def is_idle(self) -> bool:
229 return self._state == HTTPConnectionState.IDLE
230
231 def is_closed(self) -> bool:
232 return self._state == HTTPConnectionState.CLOSED
233
234 def info(self) -> str:
235 origin = str(self._origin)
236 return (
237 f"{origin!r}, HTTP/1.1, {self._state.name}, "
238 f"Request Count: {self._request_count}"
239 )
240
241 def __repr__(self) -> str:
242 class_name = self.__class__.__name__
243 origin = str(self._origin)
244 return (
245 f"<{class_name} [{origin!r}, {self._state.name}, "
246 f"Request Count: {self._request_count}]>"
247 )
248
249 # These context managers are not used in the standard flow, but are
250 # useful for testing or working with connection instances directly.
251
252 async def __aenter__(self) -> "AsyncHTTP11Connection":
253 return self
254
255 async def __aexit__(
256 self,
257 exc_type: Type[BaseException] = None,
258 exc_value: BaseException = None,
259 traceback: TracebackType = None,
260 ) -> None:
261 await self.aclose()
262
263
264 class HTTP11ConnectionByteStream:
265 def __init__(self, connection: AsyncHTTP11Connection, request: Request) -> None:
266 self._connection = connection
267 self._request = request
268
269 async def __aiter__(self) -> AsyncIterator[bytes]:
270 kwargs = {"request": self._request}
271 async with Trace("http11.receive_response_body", self._request, kwargs):
272 async for chunk in self._connection._receive_response_body(**kwargs):
273 yield chunk
259274
260275 async def aclose(self) -> None:
261 if self._state != ConnectionState.CLOSED:
262 self._state = ConnectionState.CLOSED
263
264 if self._h11_state.our_state is h11.MUST_CLOSE:
265 event = h11.ConnectionClosed()
266 self._h11_state.send(event)
267
268 await self.socket.aclose()
276 async with Trace("http11.response_closed", self._request):
277 await self._connection._response_closed()
00 import enum
11 import time
2 from ssl import SSLContext
3 from typing import AsyncIterator, Dict, List, Optional, Tuple, cast
4
2 import types
3 import typing
4
5 import h2.config
56 import h2.connection
67 import h2.events
7 from h2.config import H2Configuration
8 from h2.exceptions import NoAvailableStreamIDError
9 from h2.settings import SettingCodes, Settings
10
11 from .._backends.auto import AsyncBackend, AsyncLock, AsyncSemaphore, AsyncSocketStream
12 from .._bytestreams import AsyncIteratorByteStream
13 from .._exceptions import LocalProtocolError, PoolTimeout, RemoteProtocolError
14 from .._types import URL, Headers, TimeoutDict
15 from .._utils import get_logger
16 from .base import AsyncByteStream, NewConnectionRequired
17 from .http import AsyncBaseHTTPConnection
18
19 logger = get_logger(__name__)
20
21
22 class ConnectionState(enum.IntEnum):
23 IDLE = 0
8 import h2.exceptions
9 import h2.settings
10
11 from .._exceptions import ConnectionNotAvailable, RemoteProtocolError
12 from .._models import Origin, Request, Response
13 from .._synchronization import AsyncLock, AsyncSemaphore
14 from .._trace import Trace
15 from ..backends.base import AsyncNetworkStream
16 from .interfaces import AsyncConnectionInterface
17
18
19 def has_body_headers(request: Request) -> bool:
20 return any(
21 [
22 k.lower() == b"content-length" or k.lower() == b"transfer-encoding"
23 for k, v in request.headers
24 ]
25 )
26
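For example, using httpcore's public `Request` model (the values are illustrative):

```python
import httpcore

request = httpcore.Request(
    "POST",
    "https://example.org/upload",
    headers=[(b"Content-Length", b"5")],
    content=b"hello",
)
# has_body_headers(request) is True here, so the HTTP/2 code paths know
# that DATA frames need to be sent for this request.
```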
27
28 class HTTPConnectionState(enum.IntEnum):
2429 ACTIVE = 1
25 CLOSED = 2
26
27
28 class AsyncHTTP2Connection(AsyncBaseHTTPConnection):
30 IDLE = 2
31 CLOSED = 3
32
33
34 class AsyncHTTP2Connection(AsyncConnectionInterface):
2935 READ_NUM_BYTES = 64 * 1024
30 CONFIG = H2Configuration(validate_inbound_headers=False)
36 CONFIG = h2.config.H2Configuration(validate_inbound_headers=False)
3137
3238 def __init__(
33 self,
34 socket: AsyncSocketStream,
35 backend: AsyncBackend,
36 keepalive_expiry: float = None,
39 self, origin: Origin, stream: AsyncNetworkStream, keepalive_expiry: float = None
3740 ):
38 self.socket = socket
39
40 self._backend = backend
41 self._origin = origin
42 self._network_stream = stream
43 self._keepalive_expiry: typing.Optional[float] = keepalive_expiry
4144 self._h2_state = h2.connection.H2Connection(config=self.CONFIG)
42
45 self._state = HTTPConnectionState.IDLE
46 self._expire_at: typing.Optional[float] = None
47 self._request_count = 0
48 self._init_lock = AsyncLock()
49 self._state_lock = AsyncLock()
50 self._read_lock = AsyncLock()
51 self._write_lock = AsyncLock()
4352 self._sent_connection_init = False
44 self._streams: Dict[int, AsyncHTTP2Stream] = {}
45 self._events: Dict[int, List[h2.events.Event]] = {}
46
47 self._keepalive_expiry: Optional[float] = keepalive_expiry
48 self._should_expire_at: Optional[float] = None
49 self._state = ConnectionState.ACTIVE
50 self._exhausted_available_stream_ids = False
51
52 def __repr__(self) -> str:
53 return f"<AsyncHTTP2Connection [{self._state}]>"
54
55 def info(self) -> str:
56 return f"HTTP/2, {self._state.name}, {len(self._streams)} streams"
57
58 def _now(self) -> float:
59 return time.monotonic()
60
61 def should_close(self) -> bool:
62 """
63 Return `True` if the connection is currently idle, and the keepalive
64 timeout has passed.
65 """
66 return (
67 self._state == ConnectionState.IDLE
68 and self._should_expire_at is not None
69 and self._now() >= self._should_expire_at
70 )
71
72 def is_idle(self) -> bool:
73 """
74 Return `True` if the connection is currently idle.
75 """
76 return self._state == ConnectionState.IDLE
77
78 def is_closed(self) -> bool:
79 """
80 Return `True` if the connection has been closed.
81 """
82 return self._state == ConnectionState.CLOSED
83
84 def is_available(self) -> bool:
85 """
86 Return `True` if the connection is currently able to accept an outgoing request.
87 This occurs when any of the following occur:
88
89 * The connection has not yet been opened, and HTTP/2 support is enabled.
90 We don't *know* at this point if we'll end up on an HTTP/2 connection or
91 not, but we *might* do, so we indicate availability.
92 * The connection has been opened, and is currently idle.
93 * The connection is open, and is an HTTP/2 connection. The connection must
94 also not have exhausted the maximum total number of stream IDs.
95 """
96 return (
97 self._state != ConnectionState.CLOSED
98 and not self._exhausted_available_stream_ids
99 )
100
101 @property
102 def init_lock(self) -> AsyncLock:
103 # We do this lazily, to make sure backend autodetection always
104 # runs within an async context.
105 if not hasattr(self, "_initialization_lock"):
106 self._initialization_lock = self._backend.create_lock()
107 return self._initialization_lock
108
109 @property
110 def read_lock(self) -> AsyncLock:
111 # We do this lazily, to make sure backend autodetection always
112 # runs within an async context.
113 if not hasattr(self, "_read_lock"):
114 self._read_lock = self._backend.create_lock()
115 return self._read_lock
116
117 @property
118 def max_streams_semaphore(self) -> AsyncSemaphore:
119 # We do this lazily, to make sure backend autodetection always
120 # runs within an async context.
121 if not hasattr(self, "_max_streams_semaphore"):
122 max_streams = self._h2_state.local_settings.max_concurrent_streams
123 self._max_streams_semaphore = self._backend.create_semaphore(
124 max_streams, exc_class=PoolTimeout
53 self._used_all_stream_ids = False
54 self._events: typing.Dict[int, typing.List[h2.events.Event]] = {}
55
56 async def handle_async_request(self, request: Request) -> Response:
57 if not self.can_handle_request(request.url.origin):
58 # This cannot occur in normal operation, since the connection pool
59 # will only send requests on connections that handle them.
60 # It's in place simply for resilience as a guard against incorrect
61 # usage, for anyone working directly with httpcore connections.
62 raise RuntimeError(
63 f"Attempted to send request to {request.url.origin} on connection "
64 f"to {self._origin}"
12565 )
126 return self._max_streams_semaphore
127
128 async def start_tls(
129 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict = None
130 ) -> AsyncSocketStream:
131 raise NotImplementedError("TLS upgrade not supported on HTTP/2 connections.")
132
133 async def handle_async_request(
134 self,
135 method: bytes,
136 url: URL,
137 headers: Headers,
138 stream: AsyncByteStream,
139 extensions: dict,
140 ) -> Tuple[int, Headers, AsyncByteStream, dict]:
141 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
142
143 async with self.init_lock:
66
67 async with self._state_lock:
68 if self._state in (HTTPConnectionState.ACTIVE, HTTPConnectionState.IDLE):
69 self._request_count += 1
70 self._expire_at = None
71 self._state = HTTPConnectionState.ACTIVE
72 else:
73 raise ConnectionNotAvailable()
74
75 async with self._init_lock:
14476 if not self._sent_connection_init:
145 # The very first stream is responsible for initiating the connection.
146 self._state = ConnectionState.ACTIVE
147 await self.send_connection_init(timeout)
77 kwargs = {"request": request}
78 async with Trace("http2.send_connection_init", request, kwargs):
79 await self._send_connection_init(**kwargs)
14880 self._sent_connection_init = True
149
150 await self.max_streams_semaphore.acquire()
81 max_streams = self._h2_state.local_settings.max_concurrent_streams
82 self._max_streams_semaphore = AsyncSemaphore(max_streams)
83
84 await self._max_streams_semaphore.acquire()
85
15186 try:
152 try:
153 stream_id = self._h2_state.get_next_available_stream_id()
154 except NoAvailableStreamIDError:
155 self._exhausted_available_stream_ids = True
156 raise NewConnectionRequired()
157 else:
158 self._state = ConnectionState.ACTIVE
159 self._should_expire_at = None
160
161 h2_stream = AsyncHTTP2Stream(stream_id=stream_id, connection=self)
162 self._streams[stream_id] = h2_stream
87 stream_id = self._h2_state.get_next_available_stream_id()
16388 self._events[stream_id] = []
164 return await h2_stream.handle_async_request(
165 method, url, headers, stream, extensions
89 except h2.exceptions.NoAvailableStreamIDError: # pragma: nocover
90 self._used_all_stream_ids = True
91 raise ConnectionNotAvailable()
92
93 try:
94 kwargs = {"request": request, "stream_id": stream_id}
95 async with Trace("http2.send_request_headers", request, kwargs):
96 await self._send_request_headers(request=request, stream_id=stream_id)
97 async with Trace("http2.send_request_body", request, kwargs):
98 await self._send_request_body(request=request, stream_id=stream_id)
99 async with Trace(
100 "http2.receive_response_headers", request, kwargs
101 ) as trace:
102 status, headers = await self._receive_response(
103 request=request, stream_id=stream_id
104 )
105 trace.return_value = (status, headers)
106
107 return Response(
108 status=status,
109 headers=headers,
110 content=HTTP2ConnectionByteStream(self, request, stream_id=stream_id),
111 extensions={"stream_id": stream_id, "http_version": b"HTTP/2"},
166112 )
167113 except Exception: # noqa: PIE786
168 await self.max_streams_semaphore.release()
114 kwargs = {"stream_id": stream_id}
115 async with Trace("http2.response_closed", request, kwargs):
116 await self._response_closed(stream_id=stream_id)
169117 raise
170118
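Each request claims a fresh client-initiated stream ID from h2. Client stream IDs are odd and strictly increasing, so for example:

```python
# 1st request on the connection -> stream_id 1
# 2nd request                   -> stream_id 3
# 3rd request                   -> stream_id 5
#
# Once the 31-bit stream ID space is exhausted, h2 raises
# NoAvailableStreamIDError, which surfaces here as ConnectionNotAvailable
# so that the pool retires this connection and opens a new one.
```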
171 async def send_connection_init(self, timeout: TimeoutDict) -> None:
119 async def _send_connection_init(self, request: Request) -> None:
172120 """
173121 The HTTP/2 connection requires some initial setup before we can start
174122 using individual request/response streams on it.
176124 # Need to set these manually here instead of manipulating via
177125 # __setitem__() otherwise the H2Connection will emit SettingsUpdate
178126 # frames in addition to sending the undesired defaults.
179 self._h2_state.local_settings = Settings(
127 self._h2_state.local_settings = h2.settings.Settings(
180128 client=True,
181129 initial_values={
182130 # Disable PUSH_PROMISE frames from the server since we don't do anything
183131 # with them for now. Maybe when we support caching?
184 SettingCodes.ENABLE_PUSH: 0,
132 h2.settings.SettingCodes.ENABLE_PUSH: 0,
185133 # These two are taken from h2 for safe defaults
186 SettingCodes.MAX_CONCURRENT_STREAMS: 100,
187 SettingCodes.MAX_HEADER_LIST_SIZE: 65536,
134 h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS: 100,
135 h2.settings.SettingCodes.MAX_HEADER_LIST_SIZE: 65536,
188136 },
189137 )
190138
195143 h2.settings.SettingCodes.ENABLE_CONNECT_PROTOCOL
196144 ]
197145
198 logger.trace("initiate_connection=%r", self)
199146 self._h2_state.initiate_connection()
200147 self._h2_state.increment_flow_control_window(2 ** 24)
201 data_to_send = self._h2_state.data_to_send()
202 await self.socket.write(data_to_send, timeout)
203
204 def is_socket_readable(self) -> bool:
205 return self.socket.is_readable()
206
207 async def aclose(self) -> None:
208 logger.trace("close_connection=%r", self)
209 if self._state != ConnectionState.CLOSED:
210 self._state = ConnectionState.CLOSED
211
212 await self.socket.aclose()
213
214 async def wait_for_outgoing_flow(self, stream_id: int, timeout: TimeoutDict) -> int:
215 """
216 Returns the maximum allowable outgoing flow for a given stream.
217 If the allowable flow is zero, then waits on the network until
218 WindowUpdated frames have increased the flow rate.
219 https://tools.ietf.org/html/rfc7540#section-6.9
220 """
221 local_flow = self._h2_state.local_flow_control_window(stream_id)
222 connection_flow = self._h2_state.max_outbound_frame_size
223 flow = min(local_flow, connection_flow)
224 while flow == 0:
225 await self.receive_events(timeout)
226 local_flow = self._h2_state.local_flow_control_window(stream_id)
227 connection_flow = self._h2_state.max_outbound_frame_size
228 flow = min(local_flow, connection_flow)
229 return flow
230
231 async def wait_for_event(
232 self, stream_id: int, timeout: TimeoutDict
233 ) -> h2.events.Event:
234 """
235 Returns the next event for a given stream.
236 If no events are available yet, then waits on the network until
237 an event is available.
238 """
239 async with self.read_lock:
240 while not self._events[stream_id]:
241 await self.receive_events(timeout)
242 return self._events[stream_id].pop(0)
243
244 async def receive_events(self, timeout: TimeoutDict) -> None:
245 """
246 Read some data from the network, and update the H2 state.
247 """
248 data = await self.socket.read(self.READ_NUM_BYTES, timeout)
249 if data == b"":
250 raise RemoteProtocolError("Server disconnected")
251
252 events = self._h2_state.receive_data(data)
253 for event in events:
254 event_stream_id = getattr(event, "stream_id", 0)
255 logger.trace("receive_event stream_id=%r event=%s", event_stream_id, event)
256
257 if hasattr(event, "error_code"):
258 raise RemoteProtocolError(event)
259
260 if event_stream_id in self._events:
261 self._events[event_stream_id].append(event)
262
263 data_to_send = self._h2_state.data_to_send()
264 await self.socket.write(data_to_send, timeout)
265
266 async def send_headers(
267 self, stream_id: int, headers: Headers, end_stream: bool, timeout: TimeoutDict
268 ) -> None:
269 logger.trace("send_headers stream_id=%r headers=%r", stream_id, headers)
270 self._h2_state.send_headers(stream_id, headers, end_stream=end_stream)
271 self._h2_state.increment_flow_control_window(2 ** 24, stream_id=stream_id)
272 data_to_send = self._h2_state.data_to_send()
273 await self.socket.write(data_to_send, timeout)
274
275 async def send_data(
276 self, stream_id: int, chunk: bytes, timeout: TimeoutDict
277 ) -> None:
278 logger.trace("send_data stream_id=%r chunk=%r", stream_id, chunk)
279 self._h2_state.send_data(stream_id, chunk)
280 data_to_send = self._h2_state.data_to_send()
281 await self.socket.write(data_to_send, timeout)
282
283 async def end_stream(self, stream_id: int, timeout: TimeoutDict) -> None:
284 logger.trace("end_stream stream_id=%r", stream_id)
285 self._h2_state.end_stream(stream_id)
286 data_to_send = self._h2_state.data_to_send()
287 await self.socket.write(data_to_send, timeout)
288
289 async def acknowledge_received_data(
290 self, stream_id: int, amount: int, timeout: TimeoutDict
291 ) -> None:
292 self._h2_state.acknowledge_received_data(amount, stream_id)
293 data_to_send = self._h2_state.data_to_send()
294 await self.socket.write(data_to_send, timeout)
295
296 async def close_stream(self, stream_id: int) -> None:
297 try:
298 logger.trace("close_stream stream_id=%r", stream_id)
299 del self._streams[stream_id]
300 del self._events[stream_id]
301
302 if not self._streams:
303 if self._state == ConnectionState.ACTIVE:
304 if self._exhausted_available_stream_ids:
305 await self.aclose()
306 else:
307 self._state = ConnectionState.IDLE
308 if self._keepalive_expiry is not None:
309 self._should_expire_at = (
310 self._now() + self._keepalive_expiry
311 )
312 finally:
313 await self.max_streams_semaphore.release()
314
315
316 class AsyncHTTP2Stream:
317 def __init__(self, stream_id: int, connection: AsyncHTTP2Connection) -> None:
318 self.stream_id = stream_id
319 self.connection = connection
320
321 async def handle_async_request(
322 self,
323 method: bytes,
324 url: URL,
325 headers: Headers,
326 stream: AsyncByteStream,
327 extensions: dict,
328 ) -> Tuple[int, Headers, AsyncByteStream, dict]:
329 headers = [(k.lower(), v) for (k, v) in headers]
330 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
331
332 # Send the request.
333 seen_headers = set(key for key, value in headers)
334 has_body = (
335 b"content-length" in seen_headers or b"transfer-encoding" in seen_headers
336 )
337
338 await self.send_headers(method, url, headers, has_body, timeout)
339 if has_body:
340 await self.send_body(stream, timeout)
341
342 # Receive the response.
343 status_code, headers = await self.receive_response(timeout)
344 response_stream = AsyncIteratorByteStream(
345 aiterator=self.body_iter(timeout), aclose_func=self._response_closed
346 )
347
348 extensions = {
349 "http_version": b"HTTP/2",
350 }
351 return (status_code, headers, response_stream, extensions)
352
353 async def send_headers(
354 self,
355 method: bytes,
356 url: URL,
357 headers: Headers,
358 has_body: bool,
359 timeout: TimeoutDict,
360 ) -> None:
361 scheme, hostname, port, path = url
148 await self._write_outgoing_data(request)
149
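The connection preface sent by `_send_connection_init` therefore advertises the settings configured above (a summary of the code, not additional behaviour):

```python
# SETTINGS sent with the HTTP/2 preface:
#   ENABLE_PUSH            = 0       # no server push
#   MAX_CONCURRENT_STREAMS = 100
#   MAX_HEADER_LIST_SIZE   = 65536
#
# followed by a connection-level WINDOW_UPDATE that raises the receive
# flow-control window by 2 ** 24 bytes.
```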
150 # Sending the request...
151
152 async def _send_request_headers(self, request: Request, stream_id: int) -> None:
153 end_stream = not has_body_headers(request)
362154
363155 # In HTTP/2 the ':authority' pseudo-header is used instead of 'Host'.
364156 # In order to gracefully handle HTTP/1.1 and HTTP/2 we always require
365157 # HTTP/1.1 style headers, and map them appropriately if we end up on
366158 # an HTTP/2 connection.
367 authority = None
368
369 for k, v in headers:
370 if k == b"host":
371 authority = v
372 break
373
374 if authority is None:
375 # Mirror the same error we'd see with `h11`, so that the behaviour
376 # is consistent. Although we're dealing with an `:authority`
377 # pseudo-header by this point, from an end-user perspective the issue
378 # is that the outgoing request needed to include a `host` header.
379 raise LocalProtocolError("Missing mandatory Host: header")
159 authority = [v for k, v in request.headers if k.lower() == b"host"][0]
380160
381161 headers = [
382 (b":method", method),
162 (b":method", request.method),
383163 (b":authority", authority),
384 (b":scheme", scheme),
385 (b":path", path),
164 (b":scheme", request.url.scheme),
165 (b":path", request.url.target),
386166 ] + [
387 (k, v)
388 for k, v in headers
389 if k
167 (k.lower(), v)
168 for k, v in request.headers
169 if k.lower()
390170 not in (
391171 b"host",
392172 b"transfer-encoding",
393173 )
394174 ]
395 end_stream = not has_body
396
397 await self.connection.send_headers(self.stream_id, headers, end_stream, timeout)
398
399 async def send_body(self, stream: AsyncByteStream, timeout: TimeoutDict) -> None:
400 async for data in stream:
175
176 self._h2_state.send_headers(stream_id, headers, end_stream=end_stream)
177 self._h2_state.increment_flow_control_window(2 ** 24, stream_id=stream_id)
178 await self._write_outgoing_data(request)
179
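Concretely, the header mapping above turns a request for `GET https://example.org/path` carrying a `Host: example.org` header into (an illustrative example):

```python
[
    (b":method", b"GET"),
    (b":authority", b"example.org"),
    (b":scheme", b"https"),
    (b":path", b"/path"),
    # ...remaining headers follow, lower-cased, with "host" and
    # "transfer-encoding" excluded.
]
```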
180 async def _send_request_body(self, request: Request, stream_id: int) -> None:
181 if not has_body_headers(request):
182 return
183
184 assert isinstance(request.stream, typing.AsyncIterable)
185 async for data in request.stream:
401186 while data:
402 max_flow = await self.connection.wait_for_outgoing_flow(
403 self.stream_id, timeout
404 )
187 max_flow = await self._wait_for_outgoing_flow(request, stream_id)
405188 chunk_size = min(len(data), max_flow)
406189 chunk, data = data[:chunk_size], data[chunk_size:]
407 await self.connection.send_data(self.stream_id, chunk, timeout)
408
409 await self.connection.end_stream(self.stream_id, timeout)
410
411 async def receive_response(
412 self, timeout: TimeoutDict
413 ) -> Tuple[int, List[Tuple[bytes, bytes]]]:
414 """
415 Read the response status and headers from the network.
416 """
190 self._h2_state.send_data(stream_id, chunk)
191 await self._write_outgoing_data(request)
192
193 self._h2_state.end_stream(stream_id)
194 await self._write_outgoing_data(request)
195
196 # Receiving the response...
197
198 async def _receive_response(
199 self, request: Request, stream_id: int
200 ) -> typing.Tuple[int, typing.List[typing.Tuple[bytes, bytes]]]:
417201 while True:
418 event = await self.connection.wait_for_event(self.stream_id, timeout)
202 event = await self._receive_stream_event(request, stream_id)
419203 if isinstance(event, h2.events.ResponseReceived):
420204 break
421205
429213
430214 return (status_code, headers)
431215
432 async def body_iter(self, timeout: TimeoutDict) -> AsyncIterator[bytes]:
216 async def _receive_response_body(
217 self, request: Request, stream_id: int
218 ) -> typing.AsyncIterator[bytes]:
433219 while True:
434 event = await self.connection.wait_for_event(self.stream_id, timeout)
220 event = await self._receive_stream_event(request, stream_id)
435221 if isinstance(event, h2.events.DataReceived):
436222 amount = event.flow_controlled_length
437 await self.connection.acknowledge_received_data(
438 self.stream_id, amount, timeout
439 )
223 self._h2_state.acknowledge_received_data(amount, stream_id)
224 await self._write_outgoing_data(request)
440225 yield event.data
441226 elif isinstance(event, (h2.events.StreamEnded, h2.events.StreamReset)):
442227 break
443228
444 async def _response_closed(self) -> None:
445 await self.connection.close_stream(self.stream_id)
229 async def _receive_stream_event(
230 self, request: Request, stream_id: int
231 ) -> h2.events.Event:
232 while not self._events.get(stream_id):
233 await self._receive_events(request)
234 return self._events[stream_id].pop(0)
235
236 async def _receive_events(self, request: Request) -> None:
237 events = await self._read_incoming_data(request)
238 for event in events:
239 event_stream_id = getattr(event, "stream_id", 0)
240
241 if hasattr(event, "error_code"):
242 raise RemoteProtocolError(event)
243
244 if event_stream_id in self._events:
245 self._events[event_stream_id].append(event)
246
247 await self._write_outgoing_data(request)
248
249 async def _response_closed(self, stream_id: int) -> None:
250 await self._max_streams_semaphore.release()
251 del self._events[stream_id]
252 async with self._state_lock:
253 if self._state == HTTPConnectionState.ACTIVE and not self._events:
254 self._state = HTTPConnectionState.IDLE
255 if self._keepalive_expiry is not None:
256 now = time.monotonic()
257 self._expire_at = now + self._keepalive_expiry
258 if self._used_all_stream_ids: # pragma: nocover
259 await self.aclose()
260
261 async def aclose(self) -> None:
262 # Note that this method unilaterally closes the connection, and does
263 # not have any kind of locking in place around it.
264 # For task-safe/thread-safe operations call into 'attempt_close' instead.
265 self._state = HTTPConnectionState.CLOSED
266 await self._network_stream.aclose()
267
268 # Wrappers around network read/write operations...
269
270 async def _read_incoming_data(
271 self, request: Request
272 ) -> typing.List[h2.events.Event]:
273 timeouts = request.extensions.get("timeout", {})
274 timeout = timeouts.get("read", None)
275
276 async with self._read_lock:
277 data = await self._network_stream.read(self.READ_NUM_BYTES, timeout)
278 if data == b"":
279 raise RemoteProtocolError("Server disconnected")
280 return self._h2_state.receive_data(data)
281
282 async def _write_outgoing_data(self, request: Request) -> None:
283 timeouts = request.extensions.get("timeout", {})
284 timeout = timeouts.get("write", None)
285
286 async with self._write_lock:
287 data_to_send = self._h2_state.data_to_send()
288 await self._network_stream.write(data_to_send, timeout)
289
290 # Flow control...
291
292 async def _wait_for_outgoing_flow(self, request: Request, stream_id: int) -> int:
293 """
294 Returns the maximum allowable outgoing flow for a given stream.
295
296 If the allowable flow is zero, then waits on the network until
297 WindowUpdated frames have increased the flow rate.
298 https://tools.ietf.org/html/rfc7540#section-6.9
299 """
300 local_flow = self._h2_state.local_flow_control_window(stream_id)
301 max_frame_size = self._h2_state.max_outbound_frame_size
302 flow = min(local_flow, max_frame_size)
303 while flow == 0:
304 await self._receive_events(request)
305 local_flow = self._h2_state.local_flow_control_window(stream_id)
306 max_frame_size = self._h2_state.max_outbound_frame_size
307 flow = min(local_flow, max_frame_size)
308 return flow
309
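A worked example with assumed values:

```python
# If the stream's flow-control window is 65535 bytes and the connection's
# max outbound frame size is 16384 bytes, then:
#
#   flow = min(65535, 16384) == 16384
#
# so _send_request_body sends at most 16 KiB per chunk before re-checking.
# When flow == 0 the sender blocks on _receive_events() until the peer's
# WINDOW_UPDATE frames raise the window above zero.
```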
310 # Interface for connection pooling...
311
312 def can_handle_request(self, origin: Origin) -> bool:
313 return origin == self._origin
314
315 def is_available(self) -> bool:
316 return (
317 self._state != HTTPConnectionState.CLOSED and not self._used_all_stream_ids
318 )
319
320 def has_expired(self) -> bool:
321 now = time.monotonic()
322 return self._expire_at is not None and now > self._expire_at
323
324 def is_idle(self) -> bool:
325 return self._state == HTTPConnectionState.IDLE
326
327 def is_closed(self) -> bool:
328 return self._state == HTTPConnectionState.CLOSED
329
330 def info(self) -> str:
331 origin = str(self._origin)
332 return (
333 f"{origin!r}, HTTP/2, {self._state.name}, "
334 f"Request Count: {self._request_count}"
335 )
336
337 def __repr__(self) -> str:
338 class_name = self.__class__.__name__
339 origin = str(self._origin)
340 return (
341 f"<{class_name} [{origin!r}, {self._state.name}, "
342 f"Request Count: {self._request_count}]>"
343 )
344
345 # These context managers are not used in the standard flow, but are
346 # useful for testing or working with connection instances directly.
347
348 async def __aenter__(self) -> "AsyncHTTP2Connection":
349 return self
350
351 async def __aexit__(
352 self,
353 exc_type: typing.Type[BaseException] = None,
354 exc_value: BaseException = None,
355 traceback: types.TracebackType = None,
356 ) -> None:
357 await self.aclose()
358
359
360 class HTTP2ConnectionByteStream:
361 def __init__(
362 self, connection: AsyncHTTP2Connection, request: Request, stream_id: int
363 ) -> None:
364 self._connection = connection
365 self._request = request
366 self._stream_id = stream_id
367
368 async def __aiter__(self) -> typing.AsyncIterator[bytes]:
369 kwargs = {"request": self._request, "stream_id": self._stream_id}
370 async with Trace("http2.receive_response_body", self._request, kwargs):
371 async for chunk in self._connection._receive_response_body(
372 request=self._request, stream_id=self._stream_id
373 ):
374 yield chunk
375
376 async def aclose(self) -> None:
377 kwargs = {"stream_id": self._stream_id}
378 async with Trace("http2.response_closed", self._request, kwargs):
379 await self._connection._response_closed(stream_id=self._stream_id)
0 from http import HTTPStatus
1 from ssl import SSLContext
2 from typing import Tuple, cast
3
4 from .._bytestreams import ByteStream
0 import ssl
1 from typing import List, Mapping, Optional, Sequence, Tuple, Union
2
53 from .._exceptions import ProxyError
6 from .._types import URL, Headers, TimeoutDict
7 from .._utils import get_logger, url_to_origin
8 from .base import AsyncByteStream
4 from .._models import URL, Origin, Request, Response, enforce_headers, enforce_url
5 from .._ssl import default_ssl_context
6 from .._synchronization import AsyncLock
7 from ..backends.base import AsyncNetworkBackend
98 from .connection import AsyncHTTPConnection
10 from .connection_pool import AsyncConnectionPool, ResponseByteStream
11
12 logger = get_logger(__name__)
13
14
15 def get_reason_phrase(status_code: int) -> str:
16 try:
17 return HTTPStatus(status_code).phrase
18 except ValueError:
19 return ""
9 from .connection_pool import AsyncConnectionPool
10 from .http11 import AsyncHTTP11Connection
11 from .interfaces import AsyncConnectionInterface
12
13 HeadersAsSequence = Sequence[Tuple[Union[bytes, str], Union[bytes, str]]]
14 HeadersAsMapping = Mapping[Union[bytes, str], Union[bytes, str]]
2015
2116
2217 def merge_headers(
23 default_headers: Headers = None, override_headers: Headers = None
24 ) -> Headers:
25 """
26 Append default_headers and override_headers, de-duplicating if a key existing in
27 both cases.
28 """
29 default_headers = [] if default_headers is None else default_headers
30 override_headers = [] if override_headers is None else override_headers
18 default_headers: Sequence[Tuple[bytes, bytes]] = None,
19 override_headers: Sequence[Tuple[bytes, bytes]] = None,
20 ) -> List[Tuple[bytes, bytes]]:
21 """
22 Combine default_headers and override_headers, de-duplicating when a key
23 exists in both.
24 """
25 default_headers = [] if default_headers is None else list(default_headers)
26 override_headers = [] if override_headers is None else list(override_headers)
3127 has_override = set([key.lower() for key, value in override_headers])
3228 default_headers = [
3329 (key, value)
3935
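For example, override headers win on key collisions (a minimal sketch):

```python
merge_headers(
    default_headers=[(b"Accept", b"*/*"), (b"Proxy-Authorization", b"Basic old")],
    override_headers=[(b"Proxy-Authorization", b"Basic new")],
)
# -> [(b"Accept", b"*/*"), (b"Proxy-Authorization", b"Basic new")]
```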
4036 class AsyncHTTPProxy(AsyncConnectionPool):
4137 """
42 A connection pool for making HTTP requests via an HTTP proxy.
43
44 Parameters
45 ----------
46 proxy_url:
47 The URL of the proxy service as a 4-tuple of (scheme, host, port, path).
48 proxy_headers:
49 A list of proxy headers to include.
50 proxy_mode:
51 A proxy mode to operate in. May be "DEFAULT", "FORWARD_ONLY", or "TUNNEL_ONLY".
52 ssl_context:
53 An SSL context to use for verifying connections.
54 max_connections:
55 The maximum number of concurrent connections to allow.
56 max_keepalive_connections:
57 The maximum number of connections to allow before closing keep-alive
58 connections.
59 http2:
60 Enable HTTP/2 support.
38 A connection pool that sends requests via an HTTP proxy.
6139 """
6240
6341 def __init__(
6442 self,
65 proxy_url: URL,
66 proxy_headers: Headers = None,
67 proxy_mode: str = "DEFAULT",
68 ssl_context: SSLContext = None,
69 max_connections: int = None,
43 proxy_url: Union[URL, bytes, str],
44 proxy_headers: Union[HeadersAsMapping, HeadersAsSequence] = None,
45 ssl_context: ssl.SSLContext = None,
46 max_connections: Optional[int] = 10,
7047 max_keepalive_connections: int = None,
7148 keepalive_expiry: float = None,
72 http2: bool = False,
73 backend: str = "auto",
74 # Deprecated argument style:
75 max_keepalive: int = None,
76 ):
77 assert proxy_mode in ("DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY")
78
79 self.proxy_origin = url_to_origin(proxy_url)
80 self.proxy_headers = [] if proxy_headers is None else proxy_headers
81 self.proxy_mode = proxy_mode
49 retries: int = 0,
50 local_address: str = None,
51 uds: str = None,
52 network_backend: AsyncNetworkBackend = None,
53 ) -> None:
54 """
55 A connection pool for making HTTP requests via an HTTP proxy.
56
57 Parameters:
58 proxy_url: The URL to use when connecting to the proxy server.
59 For example `"http://127.0.0.1:8080/"`.
60 proxy_headers: Any HTTP headers to use for the proxy requests.
61 For example `{"Proxy-Authorization": "Basic <username>:<password>"}`.
62 ssl_context: An SSL context to use for verifying connections.
63 If not specified, the default `httpcore.default_ssl_context()`
64 will be used.
65 max_connections: The maximum number of concurrent HTTP connections that
66 the pool should allow. Any attempt to send a request on a pool that
67 would exceed this amount will block until a connection is available.
68 max_keepalive_connections: The maximum number of idle HTTP connections
69 that will be maintained in the pool.
70 keepalive_expiry: The duration in seconds that an idle HTTP connection
71 may be maintained for before being expired from the pool.
72 retries: The maximum number of retries when trying to establish
73 a connection.
74 local_address: Local address to connect from. Can also be used to
75 connect using a particular address family. Using
76 `local_address="0.0.0.0"` will connect using an `AF_INET` address
77 (IPv4), while using `local_address="::"` will connect using an
78 `AF_INET6` address (IPv6).
79 uds: Path to a Unix Domain Socket to use instead of TCP sockets.
80 network_backend: A backend instance to use for handling network I/O.
81 """
82 if ssl_context is None:
83 ssl_context = default_ssl_context()
84
8285 super().__init__(
8386 ssl_context=ssl_context,
8487 max_connections=max_connections,
8588 max_keepalive_connections=max_keepalive_connections,
8689 keepalive_expiry=keepalive_expiry,
87 http2=http2,
88 backend=backend,
89 max_keepalive=max_keepalive,
90 )
91
92 async def handle_async_request(
90 network_backend=network_backend,
91 retries=retries,
92 local_address=local_address,
93 uds=uds,
94 )
95 self._ssl_context = ssl_context
96 self._proxy_url = enforce_url(proxy_url, name="proxy_url")
97 self._proxy_headers = enforce_headers(proxy_headers, name="proxy_headers")
98
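Typical usage of the proxy pool (a sketch; the proxy URL and target are placeholders):

```python
import asyncio
import httpcore

async def main() -> None:
    async with httpcore.AsyncHTTPProxy(proxy_url="http://127.0.0.1:8080/") as proxy:
        # "http://..." targets are forwarded; "https://..." targets are
        # tunnelled via CONNECT (see create_connection below).
        response = await proxy.request("GET", "https://example.org/")
        print(response.status)

asyncio.run(main())
```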
99 def create_connection(self, origin: Origin) -> AsyncConnectionInterface:
100 if origin.scheme == b"http":
101 return AsyncForwardHTTPConnection(
102 proxy_origin=self._proxy_url.origin,
103 keepalive_expiry=self._keepalive_expiry,
104 network_backend=self._network_backend,
105 )
106 return AsyncTunnelHTTPConnection(
107 proxy_origin=self._proxy_url.origin,
108 proxy_headers=self._proxy_headers,
109 remote_origin=origin,
110 ssl_context=self._ssl_context,
111 keepalive_expiry=self._keepalive_expiry,
112 network_backend=self._network_backend,
113 )
114
115
116 class AsyncForwardHTTPConnection(AsyncConnectionInterface):
117 def __init__(
93118 self,
94 method: bytes,
95 url: URL,
96 headers: Headers,
97 stream: AsyncByteStream,
98 extensions: dict,
99 ) -> Tuple[int, Headers, AsyncByteStream, dict]:
100 if self._keepalive_expiry is not None:
101 await self._keepalive_sweep()
102
103 if (
104 self.proxy_mode == "DEFAULT" and url[0] == b"http"
105 ) or self.proxy_mode == "FORWARD_ONLY":
106 # By default HTTP requests should be forwarded.
107 logger.trace(
108 "forward_request proxy_origin=%r proxy_headers=%r method=%r url=%r",
109 self.proxy_origin,
110 self.proxy_headers,
111 method,
112 url,
113 )
114 return await self._forward_request(
115 method, url, headers=headers, stream=stream, extensions=extensions
116 )
117 else:
118 # By default HTTPS should be tunnelled.
119 logger.trace(
120 "tunnel_request proxy_origin=%r proxy_headers=%r method=%r url=%r",
121 self.proxy_origin,
122 self.proxy_headers,
123 method,
124 url,
125 )
126 return await self._tunnel_request(
127 method, url, headers=headers, stream=stream, extensions=extensions
128 )
129
130 async def _forward_request(
119 proxy_origin: Origin,
120 proxy_headers: Union[HeadersAsMapping, HeadersAsSequence] = None,
121 keepalive_expiry: float = None,
122 network_backend: AsyncNetworkBackend = None,
123 ) -> None:
124 self._connection = AsyncHTTPConnection(
125 origin=proxy_origin,
126 keepalive_expiry=keepalive_expiry,
127 network_backend=network_backend,
128 )
129 self._proxy_origin = proxy_origin
130 self._proxy_headers = enforce_headers(proxy_headers, name="proxy_headers")
131
132 async def handle_async_request(self, request: Request) -> Response:
133 headers = merge_headers(self._proxy_headers, request.headers)
134 url = URL(
135 scheme=self._proxy_origin.scheme,
136 host=self._proxy_origin.host,
137 port=self._proxy_origin.port,
138 target=bytes(request.url),
139 )
140 proxy_request = Request(
141 method=request.method,
142 url=url,
143 headers=headers,
144 content=request.stream,
145 extensions=request.extensions,
146 )
147 return await self._connection.handle_async_request(proxy_request)
148
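The URL rewrite above produces an absolute-form request target, which is what tells the proxy where to forward the request. Illustratively:

```python
# A request for "http://example.org/path" sent through the proxy goes
# on the wire as:
#
#   GET http://example.org/path HTTP/1.1
#   Host: example.org
#   ...any configured proxy headers, merged with the request headers...
```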
149 def can_handle_request(self, origin: Origin) -> bool:
150 return origin.scheme == b"http"
151
152 async def aclose(self) -> None:
153 await self._connection.aclose()
154
155 def info(self) -> str:
156 return self._connection.info()
157
158 def is_available(self) -> bool:
159 return self._connection.is_available()
160
161 def has_expired(self) -> bool:
162 return self._connection.has_expired()
163
164 def is_idle(self) -> bool:
165 return self._connection.is_idle()
166
167 def is_closed(self) -> bool:
168 return self._connection.is_closed()
169
170 def __repr__(self) -> str:
171 return f"<{self.__class__.__name__} [{self.info()}]>"
172
173
174 class AsyncTunnelHTTPConnection(AsyncConnectionInterface):
175 def __init__(
131176 self,
132 method: bytes,
133 url: URL,
134 headers: Headers,
135 stream: AsyncByteStream,
136 extensions: dict,
137 ) -> Tuple[int, Headers, AsyncByteStream, dict]:
138 """
139 Forwarded proxy requests include the entire URL as the HTTP target,
140 rather than just the path.
141 """
142 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
143 origin = self.proxy_origin
144 connection = await self._get_connection_from_pool(origin)
145
146 if connection is None:
147 connection = AsyncHTTPConnection(
148 origin=origin,
149 http2=self._http2,
150 keepalive_expiry=self._keepalive_expiry,
151 ssl_context=self._ssl_context,
152 )
153 await self._add_to_pool(connection, timeout)
154
155 # Issue a forwarded proxy request...
156
157 # GET https://www.example.org/path HTTP/1.1
158 # [proxy headers]
159 # [headers]
160 scheme, host, port, path = url
161 if port is None:
162 target = b"%b://%b%b" % (scheme, host, path)
163 else:
164 target = b"%b://%b:%d%b" % (scheme, host, port, path)
165
166 url = self.proxy_origin + (target,)
167 headers = merge_headers(self.proxy_headers, headers)
168
169 (
170 status_code,
171 headers,
172 stream,
173 extensions,
174 ) = await connection.handle_async_request(
175 method, url, headers=headers, stream=stream, extensions=extensions
176 )
177
178 wrapped_stream = ResponseByteStream(
179 stream, connection=connection, callback=self._response_closed
180 )
181
182 return status_code, headers, wrapped_stream, extensions
183
184 async def _tunnel_request(
185 self,
186 method: bytes,
187 url: URL,
188 headers: Headers,
189 stream: AsyncByteStream,
190 extensions: dict,
191 ) -> Tuple[int, Headers, AsyncByteStream, dict]:
192 """
193 Tunnelled proxy requests require an initial CONNECT request to
194 establish the connection, and then send regular requests.
195 """
196 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
197 origin = url_to_origin(url)
198 connection = await self._get_connection_from_pool(origin)
199
200 if connection is None:
201 scheme, host, port = origin
202
203 # First, create a connection to the proxy server
204 proxy_connection = AsyncHTTPConnection(
205 origin=self.proxy_origin,
206 http2=self._http2,
207 keepalive_expiry=self._keepalive_expiry,
208 ssl_context=self._ssl_context,
209 )
210
211 # Issue a CONNECT request...
212
213 # CONNECT www.example.org:80 HTTP/1.1
214 # [proxy-headers]
215 target = b"%b:%d" % (host, port)
216 connect_url = self.proxy_origin + (target,)
217 connect_headers = [(b"Host", target), (b"Accept", b"*/*")]
218 connect_headers = merge_headers(connect_headers, self.proxy_headers)
219
220 try:
221 (
222 proxy_status_code,
223 _,
224 proxy_stream,
225 _,
226 ) = await proxy_connection.handle_async_request(
227 b"CONNECT",
228 connect_url,
229 headers=connect_headers,
230 stream=ByteStream(b""),
231 extensions=extensions,
232 )
233
234 proxy_reason = get_reason_phrase(proxy_status_code)
235 logger.trace(
236 "tunnel_response proxy_status_code=%r proxy_reason=%r ",
237 proxy_status_code,
238 proxy_reason,
239 )
240 # Read the response data without closing the socket
241 async for _ in proxy_stream:
242 pass
243
244 # See if the tunnel was successfully established.
245 if proxy_status_code < 200 or proxy_status_code > 299:
246 msg = "%d %s" % (proxy_status_code, proxy_reason)
177 proxy_origin: Origin,
178 remote_origin: Origin,
179 ssl_context: ssl.SSLContext,
180 proxy_headers: Sequence[Tuple[bytes, bytes]] = None,
181 keepalive_expiry: float = None,
182 network_backend: AsyncNetworkBackend = None,
183 ) -> None:
184 self._connection: AsyncConnectionInterface = AsyncHTTPConnection(
185 origin=proxy_origin,
186 keepalive_expiry=keepalive_expiry,
187 network_backend=network_backend,
188 )
189 self._proxy_origin = proxy_origin
190 self._remote_origin = remote_origin
191 self._ssl_context = ssl_context
192 self._proxy_headers = enforce_headers(proxy_headers, name="proxy_headers")
193 self._keepalive_expiry = keepalive_expiry
194 self._connect_lock = AsyncLock()
195 self._connected = False
196
197 async def handle_async_request(self, request: Request) -> Response:
198 timeouts = request.extensions.get("timeout", {})
199 timeout = timeouts.get("connect", None)
200
201 async with self._connect_lock:
202 if not self._connected:
203 target = b"%b:%d" % (self._remote_origin.host, self._remote_origin.port)
204
205 connect_url = URL(
206 scheme=self._proxy_origin.scheme,
207 host=self._proxy_origin.host,
208 port=self._proxy_origin.port,
209 target=target,
210 )
211 connect_headers = merge_headers(
212 [(b"Host", target), (b"Accept", b"*/*")], self._proxy_headers
213 )
214 connect_request = Request(
215 method=b"CONNECT", url=connect_url, headers=connect_headers
216 )
217 connect_response = await self._connection.handle_async_request(
218 connect_request
219 )
220
221 if connect_response.status < 200 or connect_response.status > 299:
222 reason_bytes = connect_response.extensions.get("reason_phrase", b"")
223 reason_str = reason_bytes.decode("ascii", errors="ignore")
224 msg = "%d %s" % (connect_response.status, reason_str)
225 await self._connection.aclose()
247226 raise ProxyError(msg)
248227
249 # Upgrade to TLS if required
250 # We assume the target speaks TLS on the specified port
251 if scheme == b"https":
252 await proxy_connection.start_tls(host, self._ssl_context, timeout)
253 except Exception as exc:
254 await proxy_connection.aclose()
255 raise ProxyError(exc)
256
257 # The CONNECT request is successful, so we have now SWITCHED PROTOCOLS.
258 # This means the proxy connection is now unusable, and we must create
259 # a new one for regular requests, making sure to use the same socket to
260 # retain the tunnel.
261 connection = AsyncHTTPConnection(
262 origin=origin,
263 http2=self._http2,
264 keepalive_expiry=self._keepalive_expiry,
265 ssl_context=self._ssl_context,
266 socket=proxy_connection.socket,
267 )
268 await self._add_to_pool(connection, timeout)
269
270 # Once the connection has been established we can send requests on
271 # it as normal.
272 (
273 status_code,
274 headers,
275 stream,
276 extensions,
277 ) = await connection.handle_async_request(
278 method,
279 url,
280 headers=headers,
281 stream=stream,
282 extensions=extensions,
283 )
284
285 wrapped_stream = ResponseByteStream(
286 stream, connection=connection, callback=self._response_closed
287 )
288
289 return status_code, headers, wrapped_stream, extensions
228 stream = connect_response.extensions["network_stream"]
229 stream = await stream.start_tls(
230 ssl_context=self._ssl_context,
231 server_hostname=self._remote_origin.host.decode("ascii"),
232 timeout=timeout,
233 )
234 self._connection = AsyncHTTP11Connection(
235 origin=self._remote_origin,
236 stream=stream,
237 keepalive_expiry=self._keepalive_expiry,
238 )
239 self._connected = True
240 return await self._connection.handle_async_request(request)
241
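The wire exchange that establishes the tunnel looks like this (illustrative):

```python
# CONNECT example.org:443 HTTP/1.1
# Host: example.org:443
# Accept: */*
#
# HTTP/1.1 200 Connection established
#
# After any 2xx response, the same socket is upgraded to TLS against the
# remote origin, and regular requests flow over the tunnelled connection.
```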
242 def can_handle_request(self, origin: Origin) -> bool:
243 return origin == self._remote_origin
244
245 async def aclose(self) -> None:
246 await self._connection.aclose()
247
248 def info(self) -> str:
249 return self._connection.info()
250
251 def is_available(self) -> bool:
252 return self._connection.is_available()
253
254 def has_expired(self) -> bool:
255 return self._connection.has_expired()
256
257 def is_idle(self) -> bool:
258 return self._connection.is_idle()
259
260 def is_closed(self) -> bool:
261 return self._connection.is_closed()
262
263 def __repr__(self) -> str:
264 return f"<{self.__class__.__name__} [{self.info()}]>"
0 from typing import AsyncIterator, Union
1
2 from .._compat import asynccontextmanager
3 from .._models import (
4 URL,
5 Origin,
6 Request,
7 Response,
8 enforce_bytes,
9 enforce_headers,
10 enforce_url,
11 include_request_headers,
12 )
13
14
15 class AsyncRequestInterface:
16 async def request(
17 self,
18 method: Union[bytes, str],
19 url: Union[URL, bytes, str],
20 *,
21 headers: Union[dict, list] = None,
22 content: Union[bytes, AsyncIterator[bytes]] = None,
23 extensions: dict = None,
24 ) -> Response:
25 # Strict type checking on our parameters.
26 method = enforce_bytes(method, name="method")
27 url = enforce_url(url, name="url")
28 headers = enforce_headers(headers, name="headers")
29
30 # Include Host header, and optionally Content-Length or Transfer-Encoding.
31 headers = include_request_headers(headers, url=url, content=content)
32
33 request = Request(
34 method=method,
35 url=url,
36 headers=headers,
37 content=content,
38 extensions=extensions,
39 )
40 response = await self.handle_async_request(request)
41 try:
42 await response.aread()
43 finally:
44 await response.aclose()
45 return response
46
47 @asynccontextmanager
48 async def stream(
49 self,
50 method: Union[bytes, str],
51 url: Union[URL, bytes, str],
52 *,
53 headers: Union[dict, list] = None,
54 content: Union[bytes, AsyncIterator[bytes]] = None,
55 extensions: dict = None,
56 ) -> AsyncIterator[Response]:
57 # Strict type checking on our parameters.
58 method = enforce_bytes(method, name="method")
59 url = enforce_url(url, name="url")
60 headers = enforce_headers(headers, name="headers")
61
62 # Include Host header, and optionally Content-Length or Transfer-Encoding.
63 headers = include_request_headers(headers, url=url, content=content)
64
65 request = Request(
66 method=method,
67 url=url,
68 headers=headers,
69 content=content,
70 extensions=extensions,
71 )
72 response = await self.handle_async_request(request)
73 try:
74 yield response
75 finally:
76 await response.aclose()
77
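`request()` reads and closes the response for you, while `stream()` keeps the body open for incremental reads. A minimal sketch, assuming a pool instance `http` and that the response exposes `aiter_stream()` as in the async response API:

```python
async with http.stream("GET", "https://example.org/") as response:
    async for chunk in response.aiter_stream():
        print(f"received {len(chunk)} bytes")
```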
78 async def handle_async_request(self, request: Request) -> Response:
79 raise NotImplementedError() # pragma: nocover
80
81
82 class AsyncConnectionInterface(AsyncRequestInterface):
83 async def aclose(self) -> None:
84 raise NotImplementedError() # pragma: nocover
85
86 def info(self) -> str:
87 raise NotImplementedError() # pragma: nocover
88
89 def can_handle_request(self, origin: Origin) -> bool:
90 raise NotImplementedError() # pragma: nocover
91
92 def is_available(self) -> bool:
93 """
94 Return `True` if the connection is currently able to accept an
95 outgoing request.
96
97 An HTTP/1.1 connection will only be available if it is currently idle.
98
99 An HTTP/2 connection will be available so long as the stream ID space is
100 not yet exhausted, and the connection is not in an error state.
101
102 While the connection is being established we may not yet know if it is going
103 to result in an HTTP/1.1 or HTTP/2 connection. The connection should be
104 treated as being available, but might ultimately raise `ConnectionNotAvailable`
105 exceptions if multiple requests are attempted over a connection
106 that ends up being established as HTTP/1.1.
107 """
108 raise NotImplementedError() # pragma: nocover
109
110 def has_expired(self) -> bool:
111 """
112 Return `True` if the connection is in a state where it should be closed.
113
114 This either means that the connection is idle and it has passed the
115 expiry time on its keep-alive, or that server has sent an EOF.
116 """
117 raise NotImplementedError() # pragma: nocover
118
119 def is_idle(self) -> bool:
120 """
121 Return `True` if the connection is currently idle.
122 """
123 raise NotImplementedError() # pragma: nocover
124
125 def is_closed(self) -> bool:
126 """
127 Return `True` if the connection has been closed.
128
129 Used when a response is closed to determine if the connection may be
130 returned to the connection pool or not.
131 """
132 raise NotImplementedError() # pragma: nocover
+0 -0 httpcore/_backends/__init__.py
(Empty file)
+0 -201 httpcore/_backends/anyio.py
0 from ssl import SSLContext
1 from typing import Optional
2
3 import anyio.abc
4 from anyio import BrokenResourceError, EndOfStream
5 from anyio.abc import ByteStream, SocketAttribute
6 from anyio.streams.tls import TLSAttribute, TLSStream
7
8 from .._exceptions import (
9 ConnectError,
10 ConnectTimeout,
11 ReadError,
12 ReadTimeout,
13 WriteError,
14 WriteTimeout,
15 map_exceptions,
16 )
17 from .._types import TimeoutDict
18 from .._utils import is_socket_readable
19 from .base import AsyncBackend, AsyncLock, AsyncSemaphore, AsyncSocketStream
20
21
22 class SocketStream(AsyncSocketStream):
23 def __init__(self, stream: ByteStream) -> None:
24 self.stream = stream
25 self.read_lock = anyio.Lock()
26 self.write_lock = anyio.Lock()
27
28 def get_http_version(self) -> str:
29 alpn_protocol = self.stream.extra(TLSAttribute.alpn_protocol, None)
30 return "HTTP/2" if alpn_protocol == "h2" else "HTTP/1.1"
31
32 async def start_tls(
33 self,
34 hostname: bytes,
35 ssl_context: SSLContext,
36 timeout: TimeoutDict,
37 ) -> "SocketStream":
38 connect_timeout = timeout.get("connect")
39 try:
40 with anyio.fail_after(connect_timeout):
41 ssl_stream = await TLSStream.wrap(
42 self.stream,
43 ssl_context=ssl_context,
44 hostname=hostname.decode("ascii"),
45 standard_compatible=False,
46 )
47 except TimeoutError:
48 raise ConnectTimeout from None
49 except BrokenResourceError as exc:
50 raise ConnectError from exc
51
52 return SocketStream(ssl_stream)
53
54 async def read(self, n: int, timeout: TimeoutDict) -> bytes:
55 read_timeout = timeout.get("read")
56 async with self.read_lock:
57 try:
58 with anyio.fail_after(read_timeout):
59 return await self.stream.receive(n)
60 except TimeoutError:
61 await self.stream.aclose()
62 raise ReadTimeout from None
63 except BrokenResourceError as exc:
64 raise ReadError from exc
65 except EndOfStream:
66 return b""
67
68 async def write(self, data: bytes, timeout: TimeoutDict) -> None:
69 if not data:
70 return
71
72 write_timeout = timeout.get("write")
73 async with self.write_lock:
74 try:
75 with anyio.fail_after(write_timeout):
76 return await self.stream.send(data)
77 except TimeoutError:
78 await self.stream.aclose()
79 raise WriteTimeout from None
80 except BrokenResourceError as exc:
81 raise WriteError from exc
82
83 async def aclose(self) -> None:
84 async with self.write_lock:
85 try:
86 await self.stream.aclose()
87 except BrokenResourceError:
88 pass
89
90 def is_readable(self) -> bool:
91 sock = self.stream.extra(SocketAttribute.raw_socket)
92 return is_socket_readable(sock)
93
94
95 class Lock(AsyncLock):
96 def __init__(self) -> None:
97 self._lock = anyio.Lock()
98
99 async def release(self) -> None:
100 self._lock.release()
101
102 async def acquire(self) -> None:
103 await self._lock.acquire()
104
105
106 class Semaphore(AsyncSemaphore):
107 def __init__(self, max_value: int, exc_class: type):
108 self.max_value = max_value
109 self.exc_class = exc_class
110
111 @property
112 def semaphore(self) -> anyio.abc.Semaphore:
113 if not hasattr(self, "_semaphore"):
114 self._semaphore = anyio.Semaphore(self.max_value)
115 return self._semaphore
116
117 async def acquire(self, timeout: float = None) -> None:
118 with anyio.move_on_after(timeout):
119 await self.semaphore.acquire()
120 return
121
122 raise self.exc_class()
123
124 async def release(self) -> None:
125 self.semaphore.release()
126
127
128 class AnyIOBackend(AsyncBackend):
129 async def open_tcp_stream(
130 self,
131 hostname: bytes,
132 port: int,
133 ssl_context: Optional[SSLContext],
134 timeout: TimeoutDict,
135 *,
136 local_address: Optional[str],
137 ) -> AsyncSocketStream:
138 connect_timeout = timeout.get("connect")
139 unicode_host = hostname.decode("utf-8")
140 exc_map = {
141 TimeoutError: ConnectTimeout,
142 OSError: ConnectError,
143 BrokenResourceError: ConnectError,
144 }
145
146 with map_exceptions(exc_map):
147 with anyio.fail_after(connect_timeout):
148 stream: anyio.abc.ByteStream
149 stream = await anyio.connect_tcp(
150 unicode_host, port, local_host=local_address
151 )
152 if ssl_context:
153 stream = await TLSStream.wrap(
154 stream,
155 hostname=unicode_host,
156 ssl_context=ssl_context,
157 standard_compatible=False,
158 )
159
160 return SocketStream(stream=stream)
161
162 async def open_uds_stream(
163 self,
164 path: str,
165 hostname: bytes,
166 ssl_context: Optional[SSLContext],
167 timeout: TimeoutDict,
168 ) -> AsyncSocketStream:
169 connect_timeout = timeout.get("connect")
170 unicode_host = hostname.decode("utf-8")
171 exc_map = {
172 TimeoutError: ConnectTimeout,
173 OSError: ConnectError,
174 BrokenResourceError: ConnectError,
175 }
176
177 with map_exceptions(exc_map):
178 with anyio.fail_after(connect_timeout):
179 stream: anyio.abc.ByteStream = await anyio.connect_unix(path)
180 if ssl_context:
181 stream = await TLSStream.wrap(
182 stream,
183 hostname=unicode_host,
184 ssl_context=ssl_context,
185 standard_compatible=False,
186 )
187
188 return SocketStream(stream=stream)
189
190 def create_lock(self) -> AsyncLock:
191 return Lock()
192
193 def create_semaphore(self, max_value: int, exc_class: type) -> AsyncSemaphore:
194 return Semaphore(max_value, exc_class=exc_class)
195
196 async def time(self) -> float:
197 return float(anyio.current_time())
198
199 async def sleep(self, seconds: float) -> None:
200 await anyio.sleep(seconds)
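The file above is self-contained enough to exercise directly. A minimal sketch (not part of the diff) that drives the AnyIO backend end-to-end; it assumes network access to `example.org` and uses the private `httpcore._backends.anyio` interfaces exactly as defined above:

```python
import anyio

from httpcore._backends.anyio import AnyIOBackend


async def main() -> None:
    backend = AnyIOBackend()
    timeout = {"connect": 5.0, "read": 5.0, "write": 5.0}
    # open_tcp_stream() returns the SocketStream wrapper defined above.
    stream = await backend.open_tcp_stream(
        b"example.org", 80, ssl_context=None, timeout=timeout, local_address=None
    )
    request = b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n"
    await stream.write(request, timeout)
    status_line = (await stream.read(4096, timeout)).split(b"\r\n")[0]
    print(status_line)  # e.g. b'HTTP/1.1 200 OK'
    await stream.aclose()


anyio.run(main)
```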
+0
-303
httpcore/_backends/asyncio.py
0 import asyncio
1 import socket
2 from ssl import SSLContext
3 from typing import Optional
4
5 from .._exceptions import (
6 ConnectError,
7 ConnectTimeout,
8 ReadError,
9 ReadTimeout,
10 WriteError,
11 WriteTimeout,
12 map_exceptions,
13 )
14 from .._types import TimeoutDict
15 from .._utils import is_socket_readable
16 from .base import AsyncBackend, AsyncLock, AsyncSemaphore, AsyncSocketStream
17
18 SSL_MONKEY_PATCH_APPLIED = False
19
20
21 def ssl_monkey_patch() -> None:
22 """
23 Monkey-patch for https://bugs.python.org/issue36709
24
25 This prevents console errors when outstanding HTTPS connections
26 still exist at the point of exiting.
27
28 Clients which have been opened using a `with` block, or which have
29 had `close()` called, will not exhibit this issue in the first place.
30 """
31 MonkeyPatch = asyncio.selector_events._SelectorSocketTransport # type: ignore
32
33 _write = MonkeyPatch.write
34
35 def _fixed_write(self, data: bytes) -> None: # type: ignore
36 if self._loop and not self._loop.is_closed():
37 _write(self, data)
38
39 MonkeyPatch.write = _fixed_write
40
41
42 async def backport_start_tls(
43 transport: asyncio.BaseTransport,
44 protocol: asyncio.BaseProtocol,
45 ssl_context: SSLContext,
46 *,
47 server_side: bool = False,
48 server_hostname: str = None,
49 ssl_handshake_timeout: float = None,
50 ) -> asyncio.Transport: # pragma: nocover (Since it's not used on all Python versions.)
51 """
52 Python 3.6 asyncio doesn't have a start_tls() method on the loop
53 so we use this function in place of the loop's start_tls() method.
54 Adapted from this comment:
55 https://github.com/urllib3/urllib3/issues/1323#issuecomment-362494839
56 """
57 import asyncio.sslproto
58
59 loop = asyncio.get_event_loop()
60 waiter = loop.create_future()
61 ssl_protocol = asyncio.sslproto.SSLProtocol(
62 loop,
63 protocol,
64 ssl_context,
65 waiter,
66 server_side=False,
67 server_hostname=server_hostname,
68 call_connection_made=False,
69 )
70
71 transport.set_protocol(ssl_protocol)
72 loop.call_soon(ssl_protocol.connection_made, transport)
73 loop.call_soon(transport.resume_reading) # type: ignore
74
75 await waiter
76 return ssl_protocol._app_transport
77
78
79 class SocketStream(AsyncSocketStream):
80 def __init__(
81 self, stream_reader: asyncio.StreamReader, stream_writer: asyncio.StreamWriter
82 ):
83 self.stream_reader = stream_reader
84 self.stream_writer = stream_writer
85 self.read_lock = asyncio.Lock()
86 self.write_lock = asyncio.Lock()
87
88 def get_http_version(self) -> str:
89 ssl_object = self.stream_writer.get_extra_info("ssl_object")
90
91 if ssl_object is None:
92 return "HTTP/1.1"
93
94 ident = ssl_object.selected_alpn_protocol()
95 return "HTTP/2" if ident == "h2" else "HTTP/1.1"
96
97 async def start_tls(
98 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict
99 ) -> "SocketStream":
100 loop = asyncio.get_event_loop()
101
102 stream_reader = asyncio.StreamReader()
103 protocol = asyncio.StreamReaderProtocol(stream_reader)
104 transport = self.stream_writer.transport
105
106 loop_start_tls = getattr(loop, "start_tls", backport_start_tls)
107
108 exc_map = {asyncio.TimeoutError: ConnectTimeout, OSError: ConnectError}
109
110 with map_exceptions(exc_map):
111 transport = await asyncio.wait_for(
112 loop_start_tls(
113 transport,
114 protocol,
115 ssl_context,
116 server_hostname=hostname.decode("ascii"),
117 ),
118 timeout=timeout.get("connect"),
119 )
120
121 # Initialize the protocol, so it is made aware of being tied to
122 # a TLS connection.
123 # See: https://github.com/encode/httpx/issues/859
124 protocol.connection_made(transport)
125
126 stream_writer = asyncio.StreamWriter(
127 transport=transport, protocol=protocol, reader=stream_reader, loop=loop
128 )
129
130 ssl_stream = SocketStream(stream_reader, stream_writer)
131 # When we return a new SocketStream with new StreamReader/StreamWriter instances
132 # we need to keep references to the old StreamReader/StreamWriter so that they
133 # are not garbage collected and closed while we're still using them.
134 ssl_stream._inner = self # type: ignore
135 return ssl_stream
136
137 async def read(self, n: int, timeout: TimeoutDict) -> bytes:
138 exc_map = {asyncio.TimeoutError: ReadTimeout, OSError: ReadError}
139 async with self.read_lock:
140 with map_exceptions(exc_map):
141 try:
142 return await asyncio.wait_for(
143 self.stream_reader.read(n), timeout.get("read")
144 )
145 except AttributeError as exc: # pragma: nocover
146 if "resume_reading" in str(exc):
147 # Python's asyncio has a bug that can occur when a
148 # connection has been closed, while it is paused.
149 # See: https://github.com/encode/httpx/issues/1213
150 #
151 # Returning an empty byte-string to indicate connection
152 # close will eventually raise an httpcore.RemoteProtocolError
153 # to the user when this goes through our HTTP parsing layer.
154 return b""
155 raise
156
157 async def write(self, data: bytes, timeout: TimeoutDict) -> None:
158 if not data:
159 return
160
161 exc_map = {asyncio.TimeoutError: WriteTimeout, OSError: WriteError}
162 async with self.write_lock:
163 with map_exceptions(exc_map):
164 self.stream_writer.write(data)
165 return await asyncio.wait_for(
166 self.stream_writer.drain(), timeout.get("write")
167 )
168
169 async def aclose(self) -> None:
170 # SSL connections should issue the close and then abort, rather than
171 # waiting for the remote end of the connection to signal the EOF.
172 #
173 # See:
174 #
175 # * https://bugs.python.org/issue39758
176 # * https://github.com/python-trio/trio/blob/
177 # 31e2ae866ad549f1927d45ce073d4f0ea9f12419/trio/_ssl.py#L779-L829
178 #
179 # And related issues caused if we simply omit the 'wait_closed' call,
180 # without first using `.abort()`
181 #
182 # * https://github.com/encode/httpx/issues/825
183 # * https://github.com/encode/httpx/issues/914
184 is_ssl = self.stream_writer.get_extra_info("ssl_object") is not None
185
186 async with self.write_lock:
187 try:
188 self.stream_writer.close()
189 if is_ssl:
190 # Give the connection a chance to write any data in the buffer,
191 # and then forcibly tear down the SSL connection.
192 await asyncio.sleep(0)
193 self.stream_writer.transport.abort() # type: ignore
194 if hasattr(self.stream_writer, "wait_closed"):
195 # Python 3.7+ only.
196 await self.stream_writer.wait_closed() # type: ignore
197 except OSError:
198 pass
199
200 def is_readable(self) -> bool:
201 transport = self.stream_reader._transport # type: ignore
202 sock: Optional[socket.socket] = transport.get_extra_info("socket")
203 return is_socket_readable(sock)
204
205
206 class Lock(AsyncLock):
207 def __init__(self) -> None:
208 self._lock = asyncio.Lock()
209
210 async def release(self) -> None:
211 self._lock.release()
212
213 async def acquire(self) -> None:
214 await self._lock.acquire()
215
216
217 class Semaphore(AsyncSemaphore):
218 def __init__(self, max_value: int, exc_class: type) -> None:
219 self.max_value = max_value
220 self.exc_class = exc_class
221
222 @property
223 def semaphore(self) -> asyncio.BoundedSemaphore:
224 if not hasattr(self, "_semaphore"):
225 self._semaphore = asyncio.BoundedSemaphore(value=self.max_value)
226 return self._semaphore
227
228 async def acquire(self, timeout: float = None) -> None:
229 try:
230 await asyncio.wait_for(self.semaphore.acquire(), timeout)
231 except asyncio.TimeoutError:
232 raise self.exc_class()
233
234 async def release(self) -> None:
235 self.semaphore.release()
236
237
238 class AsyncioBackend(AsyncBackend):
239 def __init__(self) -> None:
240 global SSL_MONKEY_PATCH_APPLIED
241
242 if not SSL_MONKEY_PATCH_APPLIED:
243 ssl_monkey_patch()
244 SSL_MONKEY_PATCH_APPLIED = True
245
246 async def open_tcp_stream(
247 self,
248 hostname: bytes,
249 port: int,
250 ssl_context: Optional[SSLContext],
251 timeout: TimeoutDict,
252 *,
253 local_address: Optional[str],
254 ) -> SocketStream:
255 host = hostname.decode("ascii")
256 connect_timeout = timeout.get("connect")
257 local_addr = None if local_address is None else (local_address, 0)
258
259 exc_map = {asyncio.TimeoutError: ConnectTimeout, OSError: ConnectError}
260 with map_exceptions(exc_map):
261 stream_reader, stream_writer = await asyncio.wait_for(
262 asyncio.open_connection(
263 host, port, ssl=ssl_context, local_addr=local_addr
264 ),
265 connect_timeout,
266 )
267 return SocketStream(
268 stream_reader=stream_reader, stream_writer=stream_writer
269 )
270
271 async def open_uds_stream(
272 self,
273 path: str,
274 hostname: bytes,
275 ssl_context: Optional[SSLContext],
276 timeout: TimeoutDict,
277 ) -> AsyncSocketStream:
278 host = hostname.decode("ascii")
279 connect_timeout = timeout.get("connect")
280 kwargs: dict = {"server_hostname": host} if ssl_context is not None else {}
281 exc_map = {asyncio.TimeoutError: ConnectTimeout, OSError: ConnectError}
282 with map_exceptions(exc_map):
283 stream_reader, stream_writer = await asyncio.wait_for(
284 asyncio.open_unix_connection(path, ssl=ssl_context, **kwargs),
285 connect_timeout,
286 )
287 return SocketStream(
288 stream_reader=stream_reader, stream_writer=stream_writer
289 )
290
291 def create_lock(self) -> AsyncLock:
292 return Lock()
293
294 def create_semaphore(self, max_value: int, exc_class: type) -> AsyncSemaphore:
295 return Semaphore(max_value, exc_class=exc_class)
296
297 async def time(self) -> float:
298 loop = asyncio.get_event_loop()
299 return loop.time()
300
301 async def sleep(self, seconds: float) -> None:
302 await asyncio.sleep(seconds)
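The `Semaphore.acquire(timeout=...)` contract above is the piece most easily checked in isolation: if no slot frees up within the timeout, the configured exception class is raised. A small sketch, using a hypothetical `PoolTimeout` exception:

```python
import asyncio

from httpcore._backends.asyncio import Semaphore


class PoolTimeout(Exception):
    """Hypothetical stand-in for the pool-timeout exception httpcore passes in."""


async def main() -> None:
    sem = Semaphore(max_value=1, exc_class=PoolTimeout)
    await sem.acquire()  # Takes the only available slot.
    try:
        await sem.acquire(timeout=0.1)  # No slot frees up, so this raises.
    except PoolTimeout:
        print("pool exhausted")
    await sem.release()


asyncio.run(main())
```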
+0
-67
httpcore/_backends/auto.py
0 from ssl import SSLContext
1 from typing import Optional
2
3 import sniffio
4
5 from .._types import TimeoutDict
6 from .base import AsyncBackend, AsyncLock, AsyncSemaphore, AsyncSocketStream
7
8 # The following line is imported from the _sync modules
9 from .sync import SyncBackend, SyncLock, SyncSemaphore, SyncSocketStream # noqa
10
11
12 class AutoBackend(AsyncBackend):
13 @property
14 def backend(self) -> AsyncBackend:
15 if not hasattr(self, "_backend_implementation"):
16 backend = sniffio.current_async_library()
17
18 if backend == "asyncio":
19 from .anyio import AnyIOBackend
20
21 self._backend_implementation: AsyncBackend = AnyIOBackend()
22 elif backend == "trio":
23 from .trio import TrioBackend
24
25 self._backend_implementation = TrioBackend()
26 elif backend == "curio":
27 from .curio import CurioBackend
28
29 self._backend_implementation = CurioBackend()
30 else: # pragma: nocover
31 raise RuntimeError(f"Unsupported concurrency backend {backend!r}")
32 return self._backend_implementation
33
34 async def open_tcp_stream(
35 self,
36 hostname: bytes,
37 port: int,
38 ssl_context: Optional[SSLContext],
39 timeout: TimeoutDict,
40 *,
41 local_address: Optional[str],
42 ) -> AsyncSocketStream:
43 return await self.backend.open_tcp_stream(
44 hostname, port, ssl_context, timeout, local_address=local_address
45 )
46
47 async def open_uds_stream(
48 self,
49 path: str,
50 hostname: bytes,
51 ssl_context: Optional[SSLContext],
52 timeout: TimeoutDict,
53 ) -> AsyncSocketStream:
54 return await self.backend.open_uds_stream(path, hostname, ssl_context, timeout)
55
56 def create_lock(self) -> AsyncLock:
57 return self.backend.create_lock()
58
59 def create_semaphore(self, max_value: int, exc_class: type) -> AsyncSemaphore:
60 return self.backend.create_semaphore(max_value, exc_class=exc_class)
61
62 async def time(self) -> float:
63 return await self.backend.time()
64
65 async def sleep(self, seconds: float) -> None:
66 await self.backend.sleep(seconds)
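The `backend` property resolves lazily: no backend module is imported until first use, at which point `sniffio` detects the running async library. A quick sketch of that behaviour under plain `asyncio`:

```python
import asyncio

from httpcore._backends.auto import AutoBackend


async def main() -> None:
    backend = AutoBackend()
    # sniffio reports "asyncio" here, so the AnyIO implementation is chosen.
    print(type(backend.backend).__name__)  # -> "AnyIOBackend"


asyncio.run(main())
```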
+0
-137
httpcore/_backends/base.py
0 from ssl import SSLContext
1 from types import TracebackType
2 from typing import TYPE_CHECKING, Optional, Type
3
4 from .._types import TimeoutDict
5
6 if TYPE_CHECKING: # pragma: no cover
7 from .sync import SyncBackend
8
9
10 def lookup_async_backend(name: str) -> "AsyncBackend":
11 if name == "auto":
12 from .auto import AutoBackend
13
14 return AutoBackend()
15 elif name == "asyncio":
16 from .asyncio import AsyncioBackend
17
18 return AsyncioBackend()
19 elif name == "trio":
20 from .trio import TrioBackend
21
22 return TrioBackend()
23 elif name == "curio":
24 from .curio import CurioBackend
25
26 return CurioBackend()
27 elif name == "anyio":
28 from .anyio import AnyIOBackend
29
30 return AnyIOBackend()
31
32 raise ValueError(f"Invalid backend name {name!r}")
33
34
35 def lookup_sync_backend(name: str) -> "SyncBackend":
36 from .sync import SyncBackend
37
38 return SyncBackend()
39
40
41 class AsyncSocketStream:
42 """
43 A socket stream with read/write operations. Abstracts away any asyncio-specific
43 interfaces into a more generic base class that we can use with alternate
45 backends, or for stand-alone test cases.
46 """
47
48 def get_http_version(self) -> str:
49 raise NotImplementedError() # pragma: no cover
50
51 async def start_tls(
52 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict
53 ) -> "AsyncSocketStream":
54 raise NotImplementedError() # pragma: no cover
55
56 async def read(self, n: int, timeout: TimeoutDict) -> bytes:
57 raise NotImplementedError() # pragma: no cover
58
59 async def write(self, data: bytes, timeout: TimeoutDict) -> None:
60 raise NotImplementedError() # pragma: no cover
61
62 async def aclose(self) -> None:
63 raise NotImplementedError() # pragma: no cover
64
65 def is_readable(self) -> bool:
66 raise NotImplementedError() # pragma: no cover
67
68
69 class AsyncLock:
70 """
71 An abstract interface for Lock classes.
72 """
73
74 async def __aenter__(self) -> None:
75 await self.acquire()
76
77 async def __aexit__(
78 self,
79 exc_type: Type[BaseException] = None,
80 exc_value: BaseException = None,
81 traceback: TracebackType = None,
82 ) -> None:
83 await self.release()
84
85 async def release(self) -> None:
86 raise NotImplementedError() # pragma: no cover
87
88 async def acquire(self) -> None:
89 raise NotImplementedError() # pragma: no cover
90
91
92 class AsyncSemaphore:
93 """
94 An abstract interface for Semaphore classes.
95 Abstracts away any asyncio-specific interfaces.
96 """
97
98 async def acquire(self, timeout: float = None) -> None:
99 raise NotImplementedError() # pragma: no cover
100
101 async def release(self) -> None:
102 raise NotImplementedError() # pragma: no cover
103
104
105 class AsyncBackend:
106 async def open_tcp_stream(
107 self,
108 hostname: bytes,
109 port: int,
110 ssl_context: Optional[SSLContext],
111 timeout: TimeoutDict,
112 *,
113 local_address: Optional[str],
114 ) -> AsyncSocketStream:
115 raise NotImplementedError() # pragma: no cover
116
117 async def open_uds_stream(
118 self,
119 path: str,
120 hostname: bytes,
121 ssl_context: Optional[SSLContext],
122 timeout: TimeoutDict,
123 ) -> AsyncSocketStream:
124 raise NotImplementedError() # pragma: no cover
125
126 def create_lock(self) -> AsyncLock:
127 raise NotImplementedError() # pragma: no cover
128
129 def create_semaphore(self, max_value: int, exc_class: type) -> AsyncSemaphore:
130 raise NotImplementedError() # pragma: no cover
131
132 async def time(self) -> float:
133 raise NotImplementedError() # pragma: no cover
134
135 async def sleep(self, seconds: float) -> None:
136 raise NotImplementedError() # pragma: no cover
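As the `AsyncSocketStream` docstring notes, these base classes exist partly so the HTTP layers can be tested against stand-alone doubles. An illustrative sketch of such a double (this `MockSocketStream` is hypothetical, not something shipped in the package):

```python
from httpcore._backends.base import AsyncSocketStream
from httpcore._types import TimeoutDict


class MockSocketStream(AsyncSocketStream):
    """Serves a canned byte buffer, and records everything written to it."""

    def __init__(self, response_bytes: bytes) -> None:
        self._buffer = response_bytes
        self.sent = b""

    def get_http_version(self) -> str:
        return "HTTP/1.1"

    async def read(self, n: int, timeout: TimeoutDict) -> bytes:
        chunk, self._buffer = self._buffer[:n], self._buffer[n:]
        return chunk

    async def write(self, data: bytes, timeout: TimeoutDict) -> None:
        self.sent += data

    async def aclose(self) -> None:
        pass

    def is_readable(self) -> bool:
        return bool(self._buffer)

    # start_tls() is deliberately left unimplemented; a test double for
    # plain-text HTTP never needs it.
```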
+0
-206
httpcore/_backends/curio.py
0 from ssl import SSLContext, SSLSocket
1 from typing import Optional
2
3 import curio
4 import curio.io
5
6 from .._exceptions import (
7 ConnectError,
8 ConnectTimeout,
9 ReadError,
10 ReadTimeout,
11 WriteError,
12 WriteTimeout,
13 map_exceptions,
14 )
15 from .._types import TimeoutDict
16 from .._utils import get_logger, is_socket_readable
17 from .base import AsyncBackend, AsyncLock, AsyncSemaphore, AsyncSocketStream
18
19 logger = get_logger(__name__)
20
21 ONE_DAY_IN_SECONDS = float(60 * 60 * 24)
22
23
24 def convert_timeout(value: Optional[float]) -> float:
25 return value if value is not None else ONE_DAY_IN_SECONDS
26
27
28 class Lock(AsyncLock):
29 def __init__(self) -> None:
30 self._lock = curio.Lock()
31
32 async def acquire(self) -> None:
33 await self._lock.acquire()
34
35 async def release(self) -> None:
36 await self._lock.release()
37
38
39 class Semaphore(AsyncSemaphore):
40 def __init__(self, max_value: int, exc_class: type) -> None:
41 self.max_value = max_value
42 self.exc_class = exc_class
43
44 @property
45 def semaphore(self) -> curio.Semaphore:
46 if not hasattr(self, "_semaphore"):
47 self._semaphore = curio.Semaphore(value=self.max_value)
48 return self._semaphore
49
50 async def acquire(self, timeout: float = None) -> None:
51 timeout = convert_timeout(timeout)
52
53 try:
54 return await curio.timeout_after(timeout, self.semaphore.acquire())
55 except curio.TaskTimeout:
56 raise self.exc_class()
57
58 async def release(self) -> None:
59 await self.semaphore.release()
60
61
62 class SocketStream(AsyncSocketStream):
63 def __init__(self, socket: curio.io.Socket) -> None:
64 self.read_lock = curio.Lock()
65 self.write_lock = curio.Lock()
66 self.socket = socket
67 self.stream = socket.as_stream()
68
69 def get_http_version(self) -> str:
70 if hasattr(self.socket, "_socket"):
71 raw_socket = self.socket._socket
72
73 if isinstance(raw_socket, SSLSocket):
74 ident = raw_socket.selected_alpn_protocol()
75 return "HTTP/2" if ident == "h2" else "HTTP/1.1"
76
77 return "HTTP/1.1"
78
79 async def start_tls(
80 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict
81 ) -> "AsyncSocketStream":
82 connect_timeout = convert_timeout(timeout.get("connect"))
83 exc_map = {
84 curio.TaskTimeout: ConnectTimeout,
85 curio.CurioError: ConnectError,
86 OSError: ConnectError,
87 }
88
89 with map_exceptions(exc_map):
90 wrapped_sock = curio.io.Socket(
91 ssl_context.wrap_socket(
92 self.socket._socket,
93 do_handshake_on_connect=False,
94 server_hostname=hostname.decode("ascii"),
95 )
96 )
97
98 await curio.timeout_after(
99 connect_timeout,
100 wrapped_sock.do_handshake(),
101 )
102
103 return SocketStream(wrapped_sock)
104
105 async def read(self, n: int, timeout: TimeoutDict) -> bytes:
106 read_timeout = convert_timeout(timeout.get("read"))
107 exc_map = {
108 curio.TaskTimeout: ReadTimeout,
109 curio.CurioError: ReadError,
110 OSError: ReadError,
111 }
112
113 with map_exceptions(exc_map):
114 async with self.read_lock:
115 return await curio.timeout_after(read_timeout, self.stream.read(n))
116
117 async def write(self, data: bytes, timeout: TimeoutDict) -> None:
118 write_timeout = convert_timeout(timeout.get("write"))
119 exc_map = {
120 curio.TaskTimeout: WriteTimeout,
121 curio.CurioError: WriteError,
122 OSError: WriteError,
123 }
124
125 with map_exceptions(exc_map):
126 async with self.write_lock:
127 await curio.timeout_after(write_timeout, self.stream.write(data))
128
129 async def aclose(self) -> None:
130 await self.stream.close()
131 await self.socket.close()
132
133 def is_readable(self) -> bool:
134 return is_socket_readable(self.socket)
135
136
137 class CurioBackend(AsyncBackend):
138 async def open_tcp_stream(
139 self,
140 hostname: bytes,
141 port: int,
142 ssl_context: Optional[SSLContext],
143 timeout: TimeoutDict,
144 *,
145 local_address: Optional[str],
146 ) -> AsyncSocketStream:
147 connect_timeout = convert_timeout(timeout.get("connect"))
148 exc_map = {
149 curio.TaskTimeout: ConnectTimeout,
150 curio.CurioError: ConnectError,
151 OSError: ConnectError,
152 }
153 host = hostname.decode("ascii")
154
155 kwargs: dict = {}
156 if ssl_context is not None:
157 kwargs["ssl"] = ssl_context
158 kwargs["server_hostname"] = host
159 if local_address is not None:
160 kwargs["source_addr"] = (local_address, 0)
161
162 with map_exceptions(exc_map):
163 sock: curio.io.Socket = await curio.timeout_after(
164 connect_timeout,
165 curio.open_connection(hostname, port, **kwargs),
166 )
167
168 return SocketStream(sock)
169
170 async def open_uds_stream(
171 self,
172 path: str,
173 hostname: bytes,
174 ssl_context: Optional[SSLContext],
175 timeout: TimeoutDict,
176 ) -> AsyncSocketStream:
177 connect_timeout = convert_timeout(timeout.get("connect"))
178 exc_map = {
179 curio.TaskTimeout: ConnectTimeout,
180 curio.CurioError: ConnectError,
181 OSError: ConnectError,
182 }
183 host = hostname.decode("ascii")
184 kwargs = (
185 {} if ssl_context is None else {"ssl": ssl_context, "server_hostname": host}
186 )
187
188 with map_exceptions(exc_map):
189 sock: curio.io.Socket = await curio.timeout_after(
190 connect_timeout, curio.open_unix_connection(path, **kwargs)
191 )
192
193 return SocketStream(sock)
194
195 def create_lock(self) -> AsyncLock:
196 return Lock()
197
198 def create_semaphore(self, max_value: int, exc_class: type) -> AsyncSemaphore:
199 return Semaphore(max_value, exc_class)
200
201 async def time(self) -> float:
202 return await curio.clock()
203
204 async def sleep(self, seconds: float) -> None:
205 await curio.sleep(seconds)
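One detail worth calling out: this module maps a missing timeout to the large finite `ONE_DAY_IN_SECONDS` sentinel, because every `curio.timeout_after()` call here expects a concrete number. A trivial sketch, assuming `curio` is installed so the module imports:

```python
from httpcore._backends.curio import ONE_DAY_IN_SECONDS, convert_timeout

assert convert_timeout(5.0) == 5.0
# "No timeout" becomes a day rather than None or infinity.
assert convert_timeout(None) == ONE_DAY_IN_SECONDS == 86400.0
```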
+0
-178
httpcore/_backends/sync.py
0 import socket
1 import threading
2 import time
3 from ssl import SSLContext
4 from types import TracebackType
5 from typing import Optional, Type
6
7 from .._exceptions import (
8 ConnectError,
9 ConnectTimeout,
10 ReadError,
11 ReadTimeout,
12 WriteError,
13 WriteTimeout,
14 map_exceptions,
15 )
16 from .._types import TimeoutDict
17 from .._utils import is_socket_readable
18
19
20 class SyncSocketStream:
21 """
22 A socket stream with read/write operations. Abstracts away any socket-specific
23 interfaces into a more generic base class that we can use with alternate
24 backends, or for stand-alone test cases.
25 """
26
27 def __init__(self, sock: socket.socket) -> None:
28 self.sock = sock
29 self.read_lock = threading.Lock()
30 self.write_lock = threading.Lock()
31
32 def get_http_version(self) -> str:
33 selected_alpn_protocol = getattr(self.sock, "selected_alpn_protocol", None)
34 if selected_alpn_protocol is not None:
35 ident = selected_alpn_protocol()
36 return "HTTP/2" if ident == "h2" else "HTTP/1.1"
37 return "HTTP/1.1"
38
39 def start_tls(
40 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict
41 ) -> "SyncSocketStream":
42 connect_timeout = timeout.get("connect")
43 exc_map = {socket.timeout: ConnectTimeout, socket.error: ConnectError}
44
45 with map_exceptions(exc_map):
46 self.sock.settimeout(connect_timeout)
47 wrapped = ssl_context.wrap_socket(
48 self.sock, server_hostname=hostname.decode("ascii")
49 )
50
51 return SyncSocketStream(wrapped)
52
53 def read(self, n: int, timeout: TimeoutDict) -> bytes:
54 read_timeout = timeout.get("read")
55 exc_map = {socket.timeout: ReadTimeout, socket.error: ReadError}
56
57 with self.read_lock:
58 with map_exceptions(exc_map):
59 self.sock.settimeout(read_timeout)
60 return self.sock.recv(n)
61
62 def write(self, data: bytes, timeout: TimeoutDict) -> None:
63 write_timeout = timeout.get("write")
64 exc_map = {socket.timeout: WriteTimeout, socket.error: WriteError}
65
66 with self.write_lock:
67 with map_exceptions(exc_map):
68 while data:
69 self.sock.settimeout(write_timeout)
70 n = self.sock.send(data)
71 data = data[n:]
72
73 def close(self) -> None:
74 with self.write_lock:
75 try:
76 self.sock.close()
77 except socket.error:
78 pass
79
80 def is_readable(self) -> bool:
81 return is_socket_readable(self.sock)
82
83
84 class SyncLock:
85 def __init__(self) -> None:
86 self._lock = threading.Lock()
87
88 def __enter__(self) -> None:
89 self.acquire()
90
91 def __exit__(
92 self,
93 exc_type: Type[BaseException] = None,
94 exc_value: BaseException = None,
95 traceback: TracebackType = None,
96 ) -> None:
97 self.release()
98
99 def release(self) -> None:
100 self._lock.release()
101
102 def acquire(self) -> None:
103 self._lock.acquire()
104
105
106 class SyncSemaphore:
107 def __init__(self, max_value: int, exc_class: type) -> None:
108 self.max_value = max_value
109 self.exc_class = exc_class
110 self._semaphore = threading.Semaphore(max_value)
111
112 def acquire(self, timeout: float = None) -> None:
113 if not self._semaphore.acquire(timeout=timeout): # type: ignore
114 raise self.exc_class()
115
116 def release(self) -> None:
117 self._semaphore.release()
118
119
120 class SyncBackend:
121 def open_tcp_stream(
122 self,
123 hostname: bytes,
124 port: int,
125 ssl_context: Optional[SSLContext],
126 timeout: TimeoutDict,
127 *,
128 local_address: Optional[str],
129 ) -> SyncSocketStream:
130 address = (hostname.decode("ascii"), port)
131 connect_timeout = timeout.get("connect")
132 source_address = None if local_address is None else (local_address, 0)
133 exc_map = {socket.timeout: ConnectTimeout, socket.error: ConnectError}
134
135 with map_exceptions(exc_map):
136 sock = socket.create_connection(
137 address, connect_timeout, source_address=source_address # type: ignore
138 )
139 if ssl_context is not None:
140 sock = ssl_context.wrap_socket(
141 sock, server_hostname=hostname.decode("ascii")
142 )
143 return SyncSocketStream(sock=sock)
144
145 def open_uds_stream(
146 self,
147 path: str,
148 hostname: bytes,
149 ssl_context: Optional[SSLContext],
150 timeout: TimeoutDict,
151 ) -> SyncSocketStream:
152 connect_timeout = timeout.get("connect")
153 exc_map = {socket.timeout: ConnectTimeout, socket.error: ConnectError}
154
155 with map_exceptions(exc_map):
156 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
157 sock.settimeout(connect_timeout)
158 sock.connect(path)
159
160 if ssl_context is not None:
161 sock = ssl_context.wrap_socket(
162 sock, server_hostname=hostname.decode("ascii")
163 )
164
165 return SyncSocketStream(sock=sock)
166
167 def create_lock(self) -> SyncLock:
168 return SyncLock()
169
170 def create_semaphore(self, max_value: int, exc_class: type) -> SyncSemaphore:
171 return SyncSemaphore(max_value, exc_class=exc_class)
172
173 def time(self) -> float:
174 return time.monotonic()
175
176 def sleep(self, seconds: float) -> None:
177 time.sleep(seconds)
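The same round-trip as the async sketches, in blocking form. A minimal example against the private interfaces above, assuming network access to `example.org`:

```python
from httpcore._backends.sync import SyncBackend

backend = SyncBackend()
timeout = {"connect": 5.0, "read": 5.0, "write": 5.0}
stream = backend.open_tcp_stream(
    b"example.org", 80, ssl_context=None, timeout=timeout, local_address=None
)
stream.write(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n", timeout)
print(stream.read(4096, timeout).split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'
stream.close()
```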
+0
-212
httpcore/_backends/trio.py
0 from ssl import SSLContext
1 from typing import Optional
2
3 import trio
4
5 from .._exceptions import (
6 ConnectError,
7 ConnectTimeout,
8 ReadError,
9 ReadTimeout,
10 WriteError,
11 WriteTimeout,
12 map_exceptions,
13 )
14 from .._types import TimeoutDict
15 from .base import AsyncBackend, AsyncLock, AsyncSemaphore, AsyncSocketStream
16
17
18 def none_as_inf(value: Optional[float]) -> float:
19 return value if value is not None else float("inf")
20
21
22 class SocketStream(AsyncSocketStream):
23 def __init__(self, stream: trio.abc.Stream) -> None:
24 self.stream = stream
25 self.read_lock = trio.Lock()
26 self.write_lock = trio.Lock()
27
28 def get_http_version(self) -> str:
29 if not isinstance(self.stream, trio.SSLStream):
30 return "HTTP/1.1"
31
32 ident = self.stream.selected_alpn_protocol()
33 return "HTTP/2" if ident == "h2" else "HTTP/1.1"
34
35 async def start_tls(
36 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict
37 ) -> "SocketStream":
38 connect_timeout = none_as_inf(timeout.get("connect"))
39 exc_map = {
40 trio.TooSlowError: ConnectTimeout,
41 trio.BrokenResourceError: ConnectError,
42 }
43 ssl_stream = trio.SSLStream(
44 self.stream,
45 ssl_context=ssl_context,
46 server_hostname=hostname.decode("ascii"),
47 )
48
49 with map_exceptions(exc_map):
50 with trio.fail_after(connect_timeout):
51 await ssl_stream.do_handshake()
52 return SocketStream(ssl_stream)
53
54 async def read(self, n: int, timeout: TimeoutDict) -> bytes:
55 read_timeout = none_as_inf(timeout.get("read"))
56 exc_map = {trio.TooSlowError: ReadTimeout, trio.BrokenResourceError: ReadError}
57
58 async with self.read_lock:
59 with map_exceptions(exc_map):
60 try:
61 with trio.fail_after(read_timeout):
62 return await self.stream.receive_some(max_bytes=n)
63 except trio.TooSlowError as exc:
64 await self.stream.aclose()
65 raise exc
66
67 async def write(self, data: bytes, timeout: TimeoutDict) -> None:
68 if not data:
69 return
70
71 write_timeout = none_as_inf(timeout.get("write"))
72 exc_map = {
73 trio.TooSlowError: WriteTimeout,
74 trio.BrokenResourceError: WriteError,
75 }
76
77 async with self.write_lock:
78 with map_exceptions(exc_map):
79 try:
80 with trio.fail_after(write_timeout):
81 return await self.stream.send_all(data)
82 except trio.TooSlowError as exc:
83 await self.stream.aclose()
84 raise exc
85
86 async def aclose(self) -> None:
87 async with self.write_lock:
88 try:
89 await self.stream.aclose()
90 except trio.BrokenResourceError:
91 pass
92
93 def is_readable(self) -> bool:
94 # Adapted from: https://github.com/encode/httpx/pull/143#issuecomment-515202982
95 stream = self.stream
96
97 # Peek through any SSLStream wrappers to get the underlying SocketStream.
98 while isinstance(stream, trio.SSLStream):
99 stream = stream.transport_stream
100 assert isinstance(stream, trio.SocketStream)
101
102 return stream.socket.is_readable()
103
104
105 class Lock(AsyncLock):
106 def __init__(self) -> None:
107 self._lock = trio.Lock()
108
109 async def release(self) -> None:
110 self._lock.release()
111
112 async def acquire(self) -> None:
113 await self._lock.acquire()
114
115
116 class Semaphore(AsyncSemaphore):
117 def __init__(self, max_value: int, exc_class: type):
118 self.max_value = max_value
119 self.exc_class = exc_class
120
121 @property
122 def semaphore(self) -> trio.Semaphore:
123 if not hasattr(self, "_semaphore"):
124 self._semaphore = trio.Semaphore(self.max_value, max_value=self.max_value)
125 return self._semaphore
126
127 async def acquire(self, timeout: float = None) -> None:
128 timeout = none_as_inf(timeout)
129
130 with trio.move_on_after(timeout):
131 await self.semaphore.acquire()
132 return
133
134 raise self.exc_class()
135
136 async def release(self) -> None:
137 self.semaphore.release()
138
139
140 class TrioBackend(AsyncBackend):
141 async def open_tcp_stream(
142 self,
143 hostname: bytes,
144 port: int,
145 ssl_context: Optional[SSLContext],
146 timeout: TimeoutDict,
147 *,
148 local_address: Optional[str],
149 ) -> AsyncSocketStream:
150 connect_timeout = none_as_inf(timeout.get("connect"))
151 # Trio will support local_address from 0.16.1 onwards.
152 # We only include the keyword argument if a local_address
153 # argument has been passed.
154 kwargs: dict = {} if local_address is None else {"local_address": local_address}
155 exc_map = {
156 OSError: ConnectError,
157 trio.TooSlowError: ConnectTimeout,
158 trio.BrokenResourceError: ConnectError,
159 }
160
161 with map_exceptions(exc_map):
162 with trio.fail_after(connect_timeout):
163 stream: trio.abc.Stream = await trio.open_tcp_stream(
164 hostname, port, **kwargs
165 )
166
167 if ssl_context is not None:
168 stream = trio.SSLStream(
169 stream, ssl_context, server_hostname=hostname.decode("ascii")
170 )
171 await stream.do_handshake()
172
173 return SocketStream(stream=stream)
174
175 async def open_uds_stream(
176 self,
177 path: str,
178 hostname: bytes,
179 ssl_context: Optional[SSLContext],
180 timeout: TimeoutDict,
181 ) -> AsyncSocketStream:
182 connect_timeout = none_as_inf(timeout.get("connect"))
183 exc_map = {
184 OSError: ConnectError,
185 trio.TooSlowError: ConnectTimeout,
186 trio.BrokenResourceError: ConnectError,
187 }
188
189 with map_exceptions(exc_map):
190 with trio.fail_after(connect_timeout):
191 stream: trio.abc.Stream = await trio.open_unix_socket(path)
192
193 if ssl_context is not None:
194 stream = trio.SSLStream(
195 stream, ssl_context, server_hostname=hostname.decode("ascii")
196 )
197 await stream.do_handshake()
198
199 return SocketStream(stream=stream)
200
201 def create_lock(self) -> AsyncLock:
202 return Lock()
203
204 def create_semaphore(self, max_value: int, exc_class: type) -> AsyncSemaphore:
205 return Semaphore(max_value, exc_class=exc_class)
206
207 async def time(self) -> float:
208 return trio.current_time()
209
210 async def sleep(self, seconds: float) -> None:
211 await trio.sleep(seconds)
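Where the curio backend substitutes a one-day sentinel, trio's deadline primitives accept infinity directly, so `none_as_inf` is the only translation needed. A small sketch:

```python
import trio

from httpcore._backends.trio import none_as_inf

assert none_as_inf(2.5) == 2.5
assert none_as_inf(None) == float("inf")


async def main() -> None:
    # A deadline of +inf never fires, which is how "no timeout" is
    # expressed throughout this module.
    with trio.fail_after(none_as_inf(None)):
        await trio.sleep(0.01)


trio.run(main)
```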
+0
-96
httpcore/_bytestreams.py
0 from typing import AsyncIterator, Callable, Iterator
1
2 from ._async.base import AsyncByteStream
3 from ._sync.base import SyncByteStream
4
5
6 class ByteStream(AsyncByteStream, SyncByteStream):
7 """
8 A concrete implementation for either sync or async byte streams.
9
10 Example::
11
12 stream = httpcore.ByteStream(b"123")
13
14 Parameters
15 ----------
16 content:
17 A plain byte string used as the content of the stream.
18 """
19
20 def __init__(self, content: bytes) -> None:
21 self._content = content
22
23 def __iter__(self) -> Iterator[bytes]:
24 yield self._content
25
26 async def __aiter__(self) -> AsyncIterator[bytes]:
27 yield self._content
28
29
30 class IteratorByteStream(SyncByteStream):
31 """
32 A concrete implementation for sync byte streams.
33
34 Example::
35
36 def generate_content():
37 yield b"Hello, world!"
38 ...
39
40 stream = httpcore.IteratorByteStream(generate_content())
41
42 Parameters
43 ----------
44 iterator:
45 A sync byte iterator, used as the content of the stream.
46 close_func:
47 An optional function called when closing the stream.
48 """
49
50 def __init__(self, iterator: Iterator[bytes], close_func: Callable = None) -> None:
51 self._iterator = iterator
52 self._close_func = close_func
53
54 def __iter__(self) -> Iterator[bytes]:
55 for chunk in self._iterator:
56 yield chunk
57
58 def close(self) -> None:
59 if self._close_func is not None:
60 self._close_func()
61
62
63 class AsyncIteratorByteStream(AsyncByteStream):
64 """
65 A concrete implementation for async byte streams.
66
67 Example::
68
69 async def generate_content():
70 yield b"Hello, world!"
71 ...
72
73 stream = httpcore.AsyncIteratorByteStream(generate_content())
74
75 Parameters
76 ----------
77 aiterator:
78 An async byte iterator, used as the content of the stream.
79 aclose_func:
80 An optional async function called when closing the stream.
81 """
82
83 def __init__(
84 self, aiterator: AsyncIterator[bytes], aclose_func: Callable = None
85 ) -> None:
86 self._aiterator = aiterator
87 self._aclose_func = aclose_func
88
89 async def __aiter__(self) -> AsyncIterator[bytes]:
90 async for chunk in self._aiterator:
91 yield chunk
92
93 async def aclose(self) -> None:
94 if self._aclose_func is not None:
95 await self._aclose_func()
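A short usage sketch for the iterator variant, including the optional close hook (imported here from the module path shown above; the docstrings indicate these classes were also exposed at the package level):

```python
from httpcore._bytestreams import IteratorByteStream


def generate_content():
    yield b"Hello, "
    yield b"world!"


stream = IteratorByteStream(generate_content(), close_func=lambda: print("closed"))
print(b"".join(stream))  # b'Hello, world!'
stream.close()           # Runs the close hook, printing "closed".
```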
0 # `contextlib.asynccontextmanager` exists from Python 3.7 onwards.
1 # For 3.6 we require the `async_generator` package for a backported version.
2 try:
3 from contextlib import asynccontextmanager # type: ignore
4 except ImportError:
5 from async_generator import asynccontextmanager # type: ignore # noqa
88 except Exception as exc: # noqa: PIE786
99 for from_exc, to_exc in map.items():
1010 if isinstance(exc, from_exc):
11 raise to_exc(exc) from None
12 raise
11 raise to_exc(exc)
12 raise # pragma: nocover
13
14
15 class ConnectionNotAvailable(Exception):
16 pass
17
18
19 class ProxyError(Exception):
20 pass
1321
1422
1523 class UnsupportedProtocol(Exception):
2533
2634
2735 class LocalProtocolError(ProtocolError):
28 pass
29
30
31 class ProxyError(Exception):
3236 pass
3337
3438
7276
7377 class WriteError(NetworkError):
7478 pass
75
76
77 class CloseError(NetworkError):
78 pass
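The `map_exceptions` helper touched above is a context manager that re-raises matching low-level errors as their `httpcore` equivalents. A sketch of the behaviour:

```python
from httpcore._exceptions import ConnectError, map_exceptions

try:
    with map_exceptions({OSError: ConnectError}):
        raise OSError("connection refused")
except ConnectError as exc:
    print(f"mapped: {exc}")  # The OSError surfaces as a ConnectError.
```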
0 from typing import (
1 Any,
2 AsyncIterable,
3 AsyncIterator,
4 Iterable,
5 Iterator,
6 List,
7 Mapping,
8 Optional,
9 Sequence,
10 Tuple,
11 Union,
12 )
13 from urllib.parse import urlparse
14
15 # Functions for typechecking...
16
17
18 HeadersAsSequence = Sequence[Tuple[Union[bytes, str], Union[bytes, str]]]
19 HeadersAsMapping = Mapping[Union[bytes, str], Union[bytes, str]]
20
21
22 def enforce_bytes(value: Union[bytes, str], *, name: str) -> bytes:
23 """
24 Any arguments that are ultimately represented as bytes can be specified
25 either as bytes or as strings.
26
27 However, we enforce that any string arguments must only contain characters in
28 the plain ASCII range, chr(0)...chr(127). If you need to use characters
29 outside that range then be precise, and use a byte-wise argument.
30 """
31 if isinstance(value, str):
32 try:
33 return value.encode("ascii")
34 except UnicodeEncodeError:
35 raise TypeError(f"{name} strings may not include unicode characters.")
36 elif isinstance(value, bytes):
37 return value
38
39 seen_type = type(value).__name__
40 raise TypeError(f"{name} must be bytes or str, but got {seen_type}.")
41
42
43 def enforce_url(value: Union["URL", bytes, str], *, name: str) -> "URL":
44 """
45 Type check for URL parameters.
46 """
47 if isinstance(value, (bytes, str)):
48 return URL(value)
49 elif isinstance(value, URL):
50 return value
51
52 seen_type = type(value).__name__
53 raise TypeError(f"{name} must be a URL, bytes, or str, but got {seen_type}.")
54
55
56 def enforce_headers(
57 value: Union[HeadersAsMapping, HeadersAsSequence] = None, *, name: str
58 ) -> List[Tuple[bytes, bytes]]:
59 """
60 Convenience function that ensures all items in request or response headers
61 are either bytes or strings in the plain ASCII range.
62 """
63 if value is None:
64 return []
65 elif isinstance(value, Mapping):
66 return [
67 (
68 enforce_bytes(k, name="header name"),
69 enforce_bytes(v, name="header value"),
70 )
71 for k, v in value.items()
72 ]
73 elif isinstance(value, Sequence):
74 return [
75 (
76 enforce_bytes(k, name="header name"),
77 enforce_bytes(v, name="header value"),
78 )
79 for k, v in value
80 ]
81
82 seen_type = type(value).__name__
83 raise TypeError(
84 f"{name} must be a mapping or sequence of two-tuples, but got {seen_type}."
85 )
86
87
88 def enforce_stream(
89 value: Union[bytes, Iterable[bytes], AsyncIterable[bytes], None], *, name: str
90 ) -> Union[Iterable[bytes], AsyncIterable[bytes]]:
91 if value is None:
92 return ByteStream(b"")
93 elif isinstance(value, bytes):
94 return ByteStream(value)
95 return value
96
97
98 # * https://tools.ietf.org/html/rfc3986#section-3.2.3
99 # * https://url.spec.whatwg.org/#url-miscellaneous
100 # * https://url.spec.whatwg.org/#scheme-state
101 DEFAULT_PORTS = {
102 b"ftp": 21,
103 b"http": 80,
104 b"https": 443,
105 b"ws": 80,
106 b"wss": 443,
107 }
108
109
110 def include_request_headers(
111 headers: List[Tuple[bytes, bytes]],
112 *,
113 url: "URL",
114 content: Union[None, bytes, Iterable[bytes], AsyncIterable[bytes]],
115 ) -> List[Tuple[bytes, bytes]]:
116 headers_set = set([k.lower() for k, v in headers])
117
118 if b"host" not in headers_set:
119 default_port = DEFAULT_PORTS.get(url.scheme)
120 if url.port is None or url.port == default_port:
121 header_value = url.host
122 else:
123 header_value = b"%b:%d" % (url.host, url.port)
124 headers = [(b"Host", header_value)] + headers
125
126 if (
127 content is not None
128 and b"content-length" not in headers_set
129 and b"transfer-encoding" not in headers_set
130 ):
131 if isinstance(content, bytes):
132 content_length = str(len(content)).encode("ascii")
133 headers += [(b"Content-Length", content_length)]
134 else:
135 headers += [(b"Transfer-Encoding", b"chunked")] # pragma: nocover
136
137 return headers
138
139
140 # Interfaces for byte streams...
141
142
143 class ByteStream:
144 """
145 A container for non-streaming content, which supports both sync and async
146 stream iteration.
147 """
148
149 def __init__(self, content: bytes) -> None:
150 self._content = content
151
152 def __iter__(self) -> Iterator[bytes]:
153 yield self._content
154
155 async def __aiter__(self) -> AsyncIterator[bytes]:
156 yield self._content
157
158 def __repr__(self) -> str:
159 return f"<{self.__class__.__name__} [{len(self._content)} bytes]>"
160
161
162 class Origin:
163 def __init__(self, scheme: bytes, host: bytes, port: int) -> None:
164 self.scheme = scheme
165 self.host = host
166 self.port = port
167
168 def __eq__(self, other: Any) -> bool:
169 return (
170 isinstance(other, Origin)
171 and self.scheme == other.scheme
172 and self.host == other.host
173 and self.port == other.port
174 )
175
176 def __str__(self) -> str:
177 scheme = self.scheme.decode("ascii")
178 host = self.host.decode("ascii")
179 port = str(self.port)
180 return f"{scheme}://{host}:{port}"
181
182
183 class URL:
184 """
185 Represents the URL against which an HTTP request may be made.
186
187 The URL may either be specified as a plain string, for convenience:
188
189 ```python
190 url = httpcore.URL("https://www.example.com/")
191 ```
192
193 Or be constructed with explicitly pre-parsed components:
194
195 ```python
196 url = httpcore.URL(scheme=b'https', host=b'www.example.com', port=None, target=b'/')
197 ```
198
199 Using this second more explicit style allows integrations that are using
200 `httpcore` to pass through URLs that have already been parsed in order to use
201 libraries such as `rfc-3986` rather than relying on the stdlib. It also ensures
202 that URL parsing is treated identically at both the networking level and at any
203 higher layers of abstraction.
204
205 The four components are important here, as they allow the URL to be precisely
206 specified in a pre-parsed format. They also allow certain types of request to
207 be created that could not otherwise be expressed.
208
209 For example, an HTTP request to `http://www.example.com/` forwarded via a proxy
210 at `http://localhost:8080`...
211
212 ```python
213 # Constructs an HTTP request with a complete URL as the target:
214 # GET http://www.example.com/ HTTP/1.1
215 url = httpcore.URL(
216 scheme=b'http',
217 host=b'localhost',
218 port=8080,
219 target=b'http://www.example.com/'
220 )
221 request = httpcore.Request(
222 method="GET",
223 url=url
224 )
225 ```
226
227 Another example is constructing an `OPTIONS *` request...
228
229 ```python
230 # Constructs an 'OPTIONS *' HTTP request:
231 # OPTIONS * HTTP/1.1
232 url = httpcore.URL(scheme=b'https', host=b'www.example.com', target=b'*')
233 request = httpcore.Request(method="OPTIONS", url=url)
234 ```
235
236 This kind of request is not possible to formulate with a URL string,
237 because the `/` delimiter is always used to demarcate the target from the
238 host/port portion of the URL.
239
240 For convenience, string-like arguments may be specified either as strings or
241 as bytes. However, once a request is being issued over the wire, the URL
242 components are always ultimately required to be a bytewise representation.
243
244 In order to avoid any ambiguity over character encodings, when strings are used
245 as arguments, they must be strictly limited to the ASCII range `chr(0)`-`chr(127)`.
246 If you require a bytewise representation that is outside this range you must
247 handle the character encoding directly, and pass a bytes instance.
248 """
249
250 def __init__(
251 self,
252 url: Union[bytes, str] = "",
253 *,
254 scheme: Union[bytes, str] = b"",
255 host: Union[bytes, str] = b"",
256 port: Optional[int] = None,
257 target: Union[bytes, str] = b"",
258 ) -> None:
259 """
260 Parameters:
261 url: The complete URL as a string or bytes.
262 scheme: The URL scheme as a string or bytes.
263 Typically either `"http"` or `"https"`.
264 host: The URL host as a string or bytes. Such as `"www.example.com"`.
265 port: The port to connect to. Either an integer or `None`.
266 target: The target of the HTTP request. Such as `"/items?search=red"`.
267 """
268 if url:
269 parsed = urlparse(enforce_bytes(url, name="url"))
270 self.scheme = parsed.scheme
271 self.host = parsed.hostname or b""
272 self.port = parsed.port
273 self.target = (parsed.path or b"/") + (
274 b"?" + parsed.query if parsed.query else b""
275 )
276 else:
277 self.scheme = enforce_bytes(scheme, name="scheme")
278 self.host = enforce_bytes(host, name="host")
279 self.port = port
280 self.target = enforce_bytes(target, name="target")
281
282 @property
283 def origin(self) -> Origin:
284 default_port = {b"http": 80, b"https": 443}[self.scheme]
285 return Origin(
286 scheme=self.scheme, host=self.host, port=self.port or default_port
287 )
288
289 def __eq__(self, other: Any) -> bool:
290 return (
291 isinstance(other, URL)
292 and other.scheme == self.scheme
293 and other.host == self.host
294 and other.port == self.port
295 and other.target == self.target
296 )
297
298 def __bytes__(self) -> bytes:
299 if self.port is None:
300 return b"%b://%b%b" % (self.scheme, self.host, self.target)
301 return b"%b://%b:%d%b" % (self.scheme, self.host, self.port, self.target)
302
303 def __repr__(self) -> str:
304 return (
305 f"{self.__class__.__name__}(scheme={self.scheme!r}, "
306 f"host={self.host!r}, port={self.port!r}, target={self.target!r})"
307 )
308
309
310 class Request:
311 """
312 An HTTP request.
313 """
314
315 def __init__(
316 self,
317 method: Union[bytes, str],
318 url: Union[URL, bytes, str],
319 *,
320 headers: Union[dict, list] = None,
321 content: Union[bytes, Iterable[bytes], AsyncIterable[bytes]] = None,
322 extensions: dict = None,
323 ) -> None:
324 """
325 Parameters:
326 method: The HTTP request method, either as a string or bytes.
327 For example: `GET`.
328 url: The request URL, either as a `URL` instance, or as a string or bytes.
329 For example: `"https://www.example.com".`
330 headers: The HTTP request headers.
331 content: The content of the request body.
332 extensions: A dictionary of optional extra information included on
333 the request. Possible keys include `"timeout"`, and `"trace"`.
334 """
335 self.method: bytes = enforce_bytes(method, name="method")
336 self.url: URL = enforce_url(url, name="url")
337 self.headers: List[Tuple[bytes, bytes]] = enforce_headers(
338 headers, name="headers"
339 )
340 self.stream: Union[Iterable[bytes], AsyncIterable[bytes]] = enforce_stream(
341 content, name="content"
342 )
343 self.extensions = {} if extensions is None else extensions
344
345 def __repr__(self) -> str:
346 return f"<{self.__class__.__name__} [{self.method!r}]>"
347
348
349 class Response:
350 """
351 An HTTP response.
352 """
353
354 def __init__(
355 self,
356 status: int,
357 *,
358 headers: Union[dict, list] = None,
359 content: Union[bytes, Iterable[bytes], AsyncIterable[bytes]] = None,
360 extensions: dict = None,
361 ) -> None:
362 """
363 Parameters:
364 status: The HTTP status code of the response. For example `200`.
365 headers: The HTTP response headers.
366 content: The content of the response body.
367 extensions: A dictionary of optional extra information included on
368 the response. Possible keys include `"http_version"`,
369 `"reason_phrase"`, and `"network_stream"`.
370 """
371 self.status: int = status
372 self.headers: List[Tuple[bytes, bytes]] = enforce_headers(
373 headers, name="headers"
374 )
375 self.stream: Union[Iterable[bytes], AsyncIterable[bytes]] = enforce_stream(
376 content, name="content"
377 )
378 self.extensions: dict = {} if extensions is None else extensions
379
380 self._stream_consumed = False
381
382 @property
383 def content(self) -> bytes:
384 if not hasattr(self, "_content"):
385 if isinstance(self.stream, Iterable):
386 raise RuntimeError(
387 "Attempted to access 'response.content' on a streaming response. "
388 "Call 'response.read()' first."
389 )
390 else:
391 raise RuntimeError(
392 "Attempted to access 'response.content' on a streaming response. "
393 "Call 'await response.aread()' first."
394 )
395 return self._content
396
397 def __repr__(self) -> str:
398 return f"<{self.__class__.__name__} [{self.status}]>"
399
400 # Sync interface...
401
402 def read(self) -> bytes:
403 if not isinstance(self.stream, Iterable): # pragma: nocover
404 raise RuntimeError(
405 "Attempted to read an asynchronous response using 'response.read()'. "
406 "You should use 'await response.aread()' instead."
407 )
408 if not hasattr(self, "_content"):
409 self._content = b"".join([part for part in self.iter_stream()])
410 return self._content
411
412 def iter_stream(self) -> Iterator[bytes]:
413 if not isinstance(self.stream, Iterable): # pragma: nocover
414 raise RuntimeError(
415 "Attempted to stream an asynchronous response using 'for ... in "
416 "response.iter_stream()'. "
417 "You should use 'async for ... in response.aiter_stream()' instead."
418 )
419 if self._stream_consumed:
420 raise RuntimeError(
421 "Attempted to call 'for ... in response.iter_stream()' more than once."
422 )
423 self._stream_consumed = True
424 for chunk in self.stream:
425 yield chunk
426
427 def close(self) -> None:
428 if not isinstance(self.stream, Iterable): # pragma: nocover
429 raise RuntimeError(
430 "Attempted to close an asynchronous response using 'response.close()'. "
431 "You should use 'await response.aclose()' instead."
432 )
433 if hasattr(self.stream, "close"):
434 self.stream.close() # type: ignore
435
436 # Async interface...
437
438 async def aread(self) -> bytes:
439 if not isinstance(self.stream, AsyncIterable): # pragma: nocover
440 raise RuntimeError(
441 "Attempted to read an synchronous response using "
442 "'await response.aread()'. "
443 "You should use 'response.read()' instead."
444 )
445 if not hasattr(self, "_content"):
446 self._content = b"".join([part async for part in self.aiter_stream()])
447 return self._content
448
449 async def aiter_stream(self) -> AsyncIterator[bytes]:
450 if not isinstance(self.stream, AsyncIterable): # pragma: nocover
451 raise RuntimeError(
452 "Attempted to stream an synchronous response using 'async for ... in "
453 "response.aiter_stream()'. "
454 "You should use 'for ... in response.iter_stream()' instead."
455 )
456 if self._stream_consumed:
457 raise RuntimeError(
458 "Attempted to call 'async for ... in response.aiter_stream()' "
459 "more than once."
460 )
461 self._stream_consumed = True
462 async for chunk in self.stream:
463 yield chunk
464
465 async def aclose(self) -> None:
466 if not isinstance(self.stream, AsyncIterable): # pragma: nocover
467 raise RuntimeError(
468 "Attempted to close a synchronous response using "
469 "'await response.aclose()'. "
470 "You should use 'response.close()' instead."
471 )
472 if hasattr(self.stream, "aclose"):
473 await self.stream.aclose() # type: ignore
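Putting the models together, a minimal sketch of the sync surface (the async variant is identical, with `aread()` and `aclose()` in place of `read()` and `close()`):

```python
import httpcore

response = httpcore.Response(
    200,
    headers={"Content-Type": "text/plain"},
    content=b"Hello, world!",
)
print(response.status)   # 200
print(response.read())   # b'Hello, world!'
print(response.content)  # Available now that read() has been called.
response.close()
```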
0 import ssl
1
2 import certifi
3
4
5 def default_ssl_context() -> ssl.SSLContext:
6 context = ssl.create_default_context()
7 context.load_verify_locations(certifi.where())
8 return context
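The default context is simply the stdlib defaults plus the `certifi` CA bundle. For illustration:

```python
import ssl

from httpcore._ssl import default_ssl_context

context = default_ssl_context()
assert isinstance(context, ssl.SSLContext)
assert context.verify_mode == ssl.CERT_REQUIRED  # Stdlib default verification.
```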
0 from .connection import HTTPConnection
1 from .connection_pool import ConnectionPool
2 from .http11 import HTTP11Connection
3 from .http_proxy import HTTPProxy
4 from .interfaces import ConnectionInterface
5
6 try:
7 from .http2 import HTTP2Connection
8 except ImportError: # pragma: nocover
9
10 class HTTP2Connection: # type: ignore
11 def __init__(self, *args, **kwargs) -> None: # type: ignore
12 raise RuntimeError(
13 "Attempted to use http2 support, but the `h2` package is not "
14 "installed. Use 'pip install httpcore[http2]'."
15 )
16
17
18 __all__ = [
19 "HTTPConnection",
20 "ConnectionPool",
21 "HTTPProxy",
22 "HTTP11Connection",
23 "HTTP2Connection",
24 "ConnectionInterface",
25 ]
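The fallback class above means `import httpcore` always succeeds, and a missing `h2` dependency only surfaces when HTTP/2 is actually requested. The same pattern reduced to a standalone sketch, with a hypothetical `optional_dep` module:

```python
try:
    from optional_dep import RealFeature  # Hypothetical optional dependency.
except ImportError:

    class RealFeature:  # type: ignore
        def __init__(self, *args, **kwargs) -> None:
            raise RuntimeError(
                "Attempted to use RealFeature, but 'optional_dep' is not installed."
            )
```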
+0
-122
httpcore/_sync/base.py
0 import enum
1 from types import TracebackType
2 from typing import Iterator, Tuple, Type
3
4 from .._types import URL, Headers, T
5
6
7 class NewConnectionRequired(Exception):
8 pass
9
10
11 class ConnectionState(enum.IntEnum):
12 """
13 PENDING  READY
14     |    |   ^
15     v    V   |
16     ACTIVE   |
17     |  |     |
18     |  V     |
19     V  IDLE-+
20    FULL   |
21     |     |
22     V     V
23     CLOSED
24 """
25
26 PENDING = 0 # Connection not yet acquired.
27 READY = 1 # Re-acquired from pool, about to send a request.
28 ACTIVE = 2 # Active requests.
29 FULL = 3 # Active requests, no more stream IDs available.
30 IDLE = 4 # No active requests.
31 CLOSED = 5 # Connection closed.
32
33
34 class SyncByteStream:
35 """
36 The base interface for request and response bodies.
37
38 Concrete implementations should subclass this class, and implement
39 the :meth:`__iter__` method, and optionally the :meth:`close` method.
40 """
41
42 def __iter__(self) -> Iterator[bytes]:
43 """
44 Yield bytes representing the request or response body.
45 """
46 yield b"" # pragma: nocover
47
48 def close(self) -> None:
49 """
50 Must be called by the client to indicate that the stream has been closed.
51 """
52 pass # pragma: nocover
53
54 def read(self) -> bytes:
55 try:
56 return b"".join([part for part in self])
57 finally:
58 self.close()
59
60
61 class SyncHTTPTransport:
62 """
63 The base interface for sending HTTP requests.
64
65 Concrete implementations should subclass this class, and implement
66 the :meth:`handle_request` method, and optionally the :meth:`close` method.
67 """
68
69 def handle_request(
70 self,
71 method: bytes,
72 url: URL,
73 headers: Headers,
74 stream: SyncByteStream,
75 extensions: dict,
76 ) -> Tuple[int, Headers, SyncByteStream, dict]:
77 """
78 The interface for sending a single HTTP request, and returning a response.
79
80 Parameters
81 ----------
82 method:
83 The HTTP method, such as ``b'GET'``.
84 url:
85 The URL as a 4-tuple of (scheme, host, port, path).
86 headers:
87 Any HTTP headers to send with the request.
88 stream:
89 The body of the HTTP request.
90 extensions:
91 A dictionary of optional extensions.
92
93 Returns
94 -------
95 status_code:
96 The HTTP status code, such as ``200``.
97 headers:
98 Any HTTP headers included on the response.
99 stream:
100 The body of the HTTP response.
101 extensions:
102 A dictionary of optional extensions.
103 """
104 raise NotImplementedError() # pragma: nocover
105
106 def close(self) -> None:
107 """
108 Close the implementation, which should close any outstanding response streams,
109 and any keep alive connections.
110 """
111
112 def __enter__(self: T) -> T:
113 return self
114
115 def __exit__(
116 self,
117 exc_type: Type[BaseException] = None,
118 exc_value: BaseException = None,
119 traceback: TracebackType = None,
120 ) -> None:
121 self.close()
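Before its removal, `SyncHTTPTransport` was the extension point for custom transports. An illustrative sketch against the 0.13-era API above, using a hypothetical echo transport:

```python
from httpcore._bytestreams import ByteStream
from httpcore._sync.base import SyncHTTPTransport


class EchoTransport(SyncHTTPTransport):
    """Responds to every request with the request's own body."""

    def handle_request(self, method, url, headers, stream, extensions):
        body = stream.read()
        response_headers = [(b"content-length", str(len(body)).encode("ascii"))]
        return 200, response_headers, ByteStream(body), {}


with EchoTransport() as transport:
    status, headers, stream, extensions = transport.handle_request(
        method=b"POST",
        url=(b"http", b"example.org", 80, b"/"),
        headers=[],
        stream=ByteStream(b"ping"),
        extensions={},
    )
    print(status, stream.read())  # 200 b'ping'
```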
0 from ssl import SSLContext
1 from typing import List, Optional, Tuple, cast
2
3 from .._backends.sync import SyncBackend, SyncLock, SyncSocketStream
4 from .._exceptions import ConnectError, ConnectTimeout
5 from .._types import URL, Headers, Origin, TimeoutDict
6 from .._utils import exponential_backoff, get_logger, url_to_origin
7 from .base import SyncByteStream, SyncHTTPTransport, NewConnectionRequired
8 from .http import SyncBaseHTTPConnection
9 from .http11 import SyncHTTP11Connection
10
11 logger = get_logger(__name__)
0 import itertools
1 import ssl
2 from types import TracebackType
3 from typing import Iterator, Optional, Type
4
5 from .._exceptions import ConnectError, ConnectionNotAvailable, ConnectTimeout
6 from .._models import Origin, Request, Response
7 from .._ssl import default_ssl_context
8 from .._synchronization import Lock
9 from .._trace import Trace
10 from ..backends.sync import SyncBackend
11 from ..backends.base import NetworkBackend, NetworkStream
12 from .http11 import HTTP11Connection
13 from .interfaces import ConnectionInterface
1214
1315 RETRIES_BACKOFF_FACTOR = 0.5 # 0s, 0.5s, 1s, 2s, 4s, etc.
1416
1517
16 class SyncHTTPConnection(SyncHTTPTransport):
18 def exponential_backoff(factor: float) -> Iterator[float]:
19 yield 0
20 for n in itertools.count(2):
21 yield factor * (2 ** (n - 2))
22
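# Worked example (illustrative, not executed by the library): with the module's
# factor of 0.5 the generator yields the series noted above.
assert list(itertools.islice(exponential_backoff(0.5), 5)) == [0, 0.5, 1.0, 2.0, 4.0]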
23
24 class HTTPConnection(ConnectionInterface):
1725 def __init__(
1826 self,
1927 origin: Origin,
28 ssl_context: ssl.SSLContext = None,
29 keepalive_expiry: float = None,
2030 http1: bool = True,
2131 http2: bool = False,
22 keepalive_expiry: float = None,
32 retries: int = 0,
33 local_address: str = None,
2334 uds: str = None,
24 ssl_context: SSLContext = None,
25 socket: SyncSocketStream = None,
26 local_address: str = None,
27 retries: int = 0,
28 backend: SyncBackend = None,
29 ):
30 self.origin = origin
31 self._http1_enabled = http1
32 self._http2_enabled = http2
35 network_backend: NetworkBackend = None,
36 ) -> None:
37 ssl_context = default_ssl_context() if ssl_context is None else ssl_context
38 alpn_protocols = ["http/1.1", "h2"] if http2 else ["http/1.1"]
39 ssl_context.set_alpn_protocols(alpn_protocols)
40
41 self._origin = origin
42 self._ssl_context = ssl_context
3343 self._keepalive_expiry = keepalive_expiry
44 self._http1 = http1
45 self._http2 = http2
46 self._retries = retries
47 self._local_address = local_address
3448 self._uds = uds
35 self._ssl_context = SSLContext() if ssl_context is None else ssl_context
36 self.socket = socket
37 self._local_address = local_address
38 self._retries = retries
39
40 alpn_protocols: List[str] = []
41 if http1:
42 alpn_protocols.append("http/1.1")
43 if http2:
44 alpn_protocols.append("h2")
45
46 self._ssl_context.set_alpn_protocols(alpn_protocols)
47
48 self.connection: Optional[SyncBaseHTTPConnection] = None
49 self._is_http11 = False
50 self._is_http2 = False
51 self._connect_failed = False
52 self._expires_at: Optional[float] = None
53 self._backend = SyncBackend() if backend is None else backend
54
55 def __repr__(self) -> str:
56 return f"<SyncHTTPConnection [{self.info()}]>"
57
58 def info(self) -> str:
59 if self.connection is None:
60 return "Connection failed" if self._connect_failed else "Connecting"
61 return self.connection.info()
62
63 def should_close(self) -> bool:
64 """
65 Return `True` if the connection is in a state where it should be closed.
66 This occurs when any of the following hold:
67
68 * There are no active requests on an HTTP/1.1 connection, and the underlying
69 socket is readable. The only valid state in which the socket can be
70 readable here is when the b"" EOF marker is about to be returned,
71 indicating a server disconnect.
72 * There are no active requests being made and the keepalive timeout has passed.
73 """
74 if self.connection is None:
75 return False
76 return self.connection.should_close()
77
78 def is_idle(self) -> bool:
79 """
80 Return `True` if the connection is currently idle.
81 """
82 if self.connection is None:
83 return False
84 return self.connection.is_idle()
85
86 def is_closed(self) -> bool:
87 if self.connection is None:
88 return self._connect_failed
89 return self.connection.is_closed()
90
91 def is_available(self) -> bool:
92 """
93 Return `True` if the connection is currently able to accept an outgoing request.
94 This occurs when any of the following hold:
95
96 * The connection has not yet been opened, and HTTP/2 support is enabled.
97 We don't *know* at this point if we'll end up on an HTTP/2 connection or
98 not, but we *might* do, so we indicate availability.
99 * The connection has been opened, and is currently idle.
100 * The connection is open, and is an HTTP/2 connection. The connection must
101 also not currently be exceeding the maximum number of allowable concurrent
102 streams and must not have exhausted the maximum total number of stream IDs.
103 """
104 if self.connection is None:
105 return self._http2_enabled and not self.is_closed()
106 return self.connection.is_available()
107
108 @property
109 def request_lock(self) -> SyncLock:
110 # We do this lazily, to make sure backend autodetection always
111 # runs within an async context.
112 if not hasattr(self, "_request_lock"):
113 self._request_lock = self._backend.create_lock()
114 return self._request_lock
115
116 def handle_request(
117 self,
118 method: bytes,
119 url: URL,
120 headers: Headers,
121 stream: SyncByteStream,
122 extensions: dict,
123 ) -> Tuple[int, Headers, SyncByteStream, dict]:
124 assert url_to_origin(url) == self.origin
125 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
126
127 with self.request_lock:
128 if self.connection is None:
129 if self._connect_failed:
130 raise NewConnectionRequired()
131 if not self.socket:
132 logger.trace(
133 "open_socket origin=%r timeout=%r", self.origin, timeout
49
50 self._network_backend: NetworkBackend = (
51 SyncBackend() if network_backend is None else network_backend
52 )
53 self._connection: Optional[ConnectionInterface] = None
54 self._connect_failed: bool = False
55 self._request_lock = Lock()
56
57 def handle_request(self, request: Request) -> Response:
58 if not self.can_handle_request(request.url.origin):
59 raise RuntimeError(
60 f"Attempted to send request to {request.url.origin} on connection to {self._origin}"
61 )
62
63 with self._request_lock:
64 if self._connection is None:
65 try:
66 stream = self._connect(request)
67
68 ssl_object = stream.get_extra_info("ssl_object")
69 http2_negotiated = (
70 ssl_object is not None
71 and ssl_object.selected_alpn_protocol() == "h2"
13472 )
135 self.socket = self._open_socket(timeout)
136 self._create_connection(self.socket)
137 elif not self.connection.is_available():
138 raise NewConnectionRequired()
139
140 assert self.connection is not None
141 logger.trace(
142 "connection.handle_request method=%r url=%r headers=%r",
143 method,
144 url,
145 headers,
146 )
147 return self.connection.handle_request(
148 method, url, headers, stream, extensions
149 )
150
151 def _open_socket(self, timeout: TimeoutDict = None) -> SyncSocketStream:
152 scheme, hostname, port = self.origin
153 timeout = {} if timeout is None else timeout
154 ssl_context = self._ssl_context if scheme == b"https" else None
73 if http2_negotiated or (self._http2 and not self._http1):
74 from .http2 import HTTP2Connection
75
76 self._connection = HTTP2Connection(
77 origin=self._origin,
78 stream=stream,
79 keepalive_expiry=self._keepalive_expiry,
80 )
81 else:
82 self._connection = HTTP11Connection(
83 origin=self._origin,
84 stream=stream,
85 keepalive_expiry=self._keepalive_expiry,
86 )
87 except Exception as exc:
88 self._connect_failed = True
89 raise exc
90 elif not self._connection.is_available():
91 raise ConnectionNotAvailable()
92
93 return self._connection.handle_request(request)
94
95 def _connect(self, request: Request) -> NetworkStream:
96 timeouts = request.extensions.get("timeout", {})
97 timeout = timeouts.get("connect", None)
15598
15699 retries_left = self._retries
157100 delays = exponential_backoff(factor=RETRIES_BACKOFF_FACTOR)
159102 while True:
160103 try:
161104 if self._uds is None:
162 return self._backend.open_tcp_stream(
163 hostname,
164 port,
165 ssl_context,
166 timeout,
167 local_address=self._local_address,
168 )
105 kwargs = {
106 "host": self._origin.host.decode("ascii"),
107 "port": self._origin.port,
108 "local_address": self._local_address,
109 "timeout": timeout,
110 }
111 with Trace(
112 "connection.connect_tcp", request, kwargs
113 ) as trace:
114 stream = self._network_backend.connect_tcp(**kwargs)
115 trace.return_value = stream
169116 else:
170 return self._backend.open_uds_stream(
171 self._uds, hostname, ssl_context, timeout
172 )
117 kwargs = {
118 "path": self._uds,
119 "timeout": timeout,
120 }
121 with Trace(
122 "connection.connect_unix_socket", request, kwargs
123 ) as trace:
124 stream = self._network_backend.connect_unix_socket(
125 **kwargs
126 )
127 trace.return_value = stream
173128 except (ConnectError, ConnectTimeout):
174129 if retries_left <= 0:
175 self._connect_failed = True
176130 raise
177131 retries_left -= 1
178132 delay = next(delays)
179 self._backend.sleep(delay)
180 except Exception: # noqa: PIE786
181 self._connect_failed = True
182 raise
183
184 def _create_connection(self, socket: SyncSocketStream) -> None:
185 http_version = socket.get_http_version()
186 logger.trace(
187 "create_connection socket=%r http_version=%r", socket, http_version
188 )
189 if http_version == "HTTP/2" or (
190 self._http2_enabled and not self._http1_enabled
191 ):
192 from .http2 import SyncHTTP2Connection
193
194 self._is_http2 = True
195 self.connection = SyncHTTP2Connection(
196 socket=socket,
197 keepalive_expiry=self._keepalive_expiry,
198 backend=self._backend,
133 # TRACE 'retry'
134 self._network_backend.sleep(delay)
135 else:
136 break
137
138 if self._origin.scheme == b"https":
139 kwargs = {
140 "ssl_context": self._ssl_context,
141 "server_hostname": self._origin.host.decode("ascii"),
142 "timeout": timeout,
143 }
144 with Trace("connection.start_tls", request, kwargs) as trace:
145 stream = stream.start_tls(**kwargs)
146 trace.return_value = stream
147 return stream
148
149 def can_handle_request(self, origin: Origin) -> bool:
150 return origin == self._origin
151
152 def close(self) -> None:
153 if self._connection is not None:
154 self._connection.close()
155
156 def is_available(self) -> bool:
157 if self._connection is None:
158 # If HTTP/2 support is enabled, and the resulting connection could
159 # end up as HTTP/2 then we should indicate the connection as being
160 # available to service multiple requests.
161 return (
162 self._http2
163 and (self._origin.scheme == b"https" or not self._http1)
164 and not self._connect_failed
199165 )
200 else:
201 self._is_http11 = True
202 self.connection = SyncHTTP11Connection(
203 socket=socket, keepalive_expiry=self._keepalive_expiry
204 )
205
206 def start_tls(
207 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict = None
166 return self._connection.is_available()
167
168 def has_expired(self) -> bool:
169 if self._connection is None:
170 return self._connect_failed
171 return self._connection.has_expired()
172
173 def is_idle(self) -> bool:
174 if self._connection is None:
175 return self._connect_failed
176 return self._connection.is_idle()
177
178 def is_closed(self) -> bool:
179 if self._connection is None:
180 return self._connect_failed
181 return self._connection.is_closed()
182
183 def info(self) -> str:
184 if self._connection is None:
185 return "CONNECTION FAILED" if self._connect_failed else "CONNECTING"
186 return self._connection.info()
187
188 def __repr__(self) -> str:
189 return f"<{self.__class__.__name__} [{self.info()}]>"
190
191 # These context managers are not used in the standard flow, but are
192 # useful for testing or working with connection instances directly.
193
194 def __enter__(self) -> "HTTPConnection":
195 return self
196
197 def __exit__(
198 self,
199 exc_type: Type[BaseException] = None,
200 exc_value: BaseException = None,
201 traceback: TracebackType = None,
208202 ) -> None:
209 if self.connection is not None:
210 logger.trace("start_tls hostname=%r timeout=%r", hostname, timeout)
211 self.socket = self.connection.start_tls(
212 hostname, ssl_context, timeout
213 )
214 logger.trace("start_tls complete hostname=%r timeout=%r", hostname, timeout)
215
216 def close(self) -> None:
217 with self.request_lock:
218 if self.connection is not None:
219 self.connection.close()
203 self.close()
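
# Illustrative usage (not part of this module): connections are origin-bound,
# and reject requests for any other origin up front. The Origin keyword
# arguments here follow the `_models` conventions assumed above.
def example_origin_check() -> None:
    origin = Origin(scheme=b"https", host=b"example.org", port=443)
    connection = HTTPConnection(origin=origin)
    assert connection.can_handle_request(origin)
    assert not connection.can_handle_request(
        Origin(scheme=b"http", host=b"other.example", port=80)
    )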
0 import warnings
1 from ssl import SSLContext
2 from typing import (
3 Iterator,
4 Callable,
5 Dict,
6 List,
7 Optional,
8 Set,
9 Tuple,
10 Union,
11 cast,
12 )
13
14 from .._backends.sync import SyncBackend, SyncLock, SyncSemaphore
15 from .._backends.base import lookup_sync_backend
16 from .._exceptions import LocalProtocolError, PoolTimeout, UnsupportedProtocol
17 from .._threadlock import ThreadLock
18 from .._types import URL, Headers, Origin, TimeoutDict
19 from .._utils import get_logger, origin_to_url_string, url_to_origin
20 from .base import SyncByteStream, SyncHTTPTransport, NewConnectionRequired
21 from .connection import SyncHTTPConnection
22
23 logger = get_logger(__name__)
24
25
26 class NullSemaphore(SyncSemaphore):
27 def __init__(self) -> None:
28 pass
29
30 def acquire(self, timeout: float = None) -> None:
31 return
32
33 def release(self) -> None:
34 return
35
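# Illustrative sketch of the design choice: when `max_connections` is None the
# pool swaps in NullSemaphore, so acquire/release call sites need no
# special-casing. The factory function here is hypothetical.
def example_make_semaphore(
    backend: SyncBackend, max_connections: int = None
) -> SyncSemaphore:
    if max_connections is None:
        return NullSemaphore()
    return backend.create_semaphore(max_connections, exc_class=PoolTimeout)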
36
37 class ResponseByteStream(SyncByteStream):
0 import ssl
1 import sys
2 from types import TracebackType
3 from typing import Iterable, Iterator, List, Optional, Type
4
5 from .._exceptions import ConnectionNotAvailable, UnsupportedProtocol
6 from .._models import Origin, Request, Response
7 from .._ssl import default_ssl_context
8 from .._synchronization import Event, Lock
9 from ..backends.sync import SyncBackend
10 from ..backends.base import NetworkBackend
11 from .connection import HTTPConnection
12 from .interfaces import ConnectionInterface, RequestInterface
13
14
15 class RequestStatus:
16 def __init__(self, request: Request):
17 self.request = request
18 self.connection: Optional[ConnectionInterface] = None
19 self._connection_acquired = Event()
20
21 def set_connection(self, connection: ConnectionInterface) -> None:
22 assert self.connection is None
23 self.connection = connection
24 self._connection_acquired.set()
25
26 def unset_connection(self) -> None:
27 assert self.connection is not None
28 self.connection = None
29 self._connection_acquired = Event()
30
31 def wait_for_connection(
32 self, timeout: float = None
33 ) -> ConnectionInterface:
34 self._connection_acquired.wait(timeout=timeout)
35 assert self.connection is not None
36 return self.connection
37
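# Illustrative sketch (not part of this module) of the handshake between a
# queued request and the pool: the pool assigns a connection via
# `set_connection()`, unblocking the thread waiting in `wait_for_connection()`.
def example_request_status() -> None:
    status = RequestStatus(Request("GET", "https://example.org/"))
    status.set_connection(HTTPConnection(origin=status.request.url.origin))
    assert status.wait_for_connection(timeout=1.0) is status.connection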
38
39 class ConnectionPool(RequestInterface):
40 """
41 A connection pool for making HTTP requests.
42 """
43
3844 def __init__(
3945 self,
40 stream: SyncByteStream,
41 connection: SyncHTTPConnection,
42 callback: Callable,
43 ) -> None:
44 """
45 A wrapper around the response stream that we return from
46 `.handle_request()`.
47
48 Ensures that when `stream.close()` is called, the connection pool
49 is notified via a callback.
50 """
51 self.stream = stream
52 self.connection = connection
53 self.callback = callback
54
55 def __iter__(self) -> Iterator[bytes]:
56 for chunk in self.stream:
57 yield chunk
58
59 def close(self) -> None:
60 try:
61 # Call the underlying stream close callback.
62 # This will be a call to `SyncHTTP11Connection._response_closed()`
63 # or `SyncHTTP2Stream._response_closed()`.
64 self.stream.close()
65 finally:
66 # Call the connection pool close callback.
67 # This will be a call to `SyncConnectionPool._response_closed()`.
68 self.callback(self.connection)
69
70
71 class SyncConnectionPool(SyncHTTPTransport):
72 """
73 A connection pool for making HTTP requests.
74
75 Parameters
76 ----------
77 ssl_context:
78 An SSL context to use for verifying connections.
79 max_connections:
80 The maximum number of concurrent connections to allow.
81 max_keepalive_connections:
82 The maximum number of connections to allow before closing keep-alive
83 connections.
84 keepalive_expiry:
85 The maximum time to allow before closing a keep-alive connection.
86 http1:
87 Enable/Disable HTTP/1.1 support. Defaults to True.
88 http2:
89 Enable/Disable HTTP/2 support. Defaults to False.
90 uds:
91 Path to a Unix Domain Socket to use instead of TCP sockets.
92 local_address:
93 Local address to connect from. Can also be used to connect using a particular
94 address family. Using ``local_address="0.0.0.0"`` will connect using an
95 ``AF_INET`` address (IPv4), while using ``local_address="::"`` will connect
96 using an ``AF_INET6`` address (IPv6).
97 retries:
98 The maximum number of retries when trying to establish a connection.
99 backend:
100 A name indicating which concurrency backend to use.
101 """
102
103 def __init__(
104 self,
105 ssl_context: SSLContext = None,
106 max_connections: int = None,
46 ssl_context: ssl.SSLContext = None,
47 max_connections: Optional[int] = 10,
10748 max_keepalive_connections: int = None,
10849 keepalive_expiry: float = None,
10950 http1: bool = True,
11051 http2: bool = False,
52 retries: int = 0,
53 local_address: str = None,
11154 uds: str = None,
112 local_address: str = None,
113 retries: int = 0,
114 max_keepalive: int = None,
115 backend: Union[SyncBackend, str] = "sync",
116 ):
117 if max_keepalive is not None:
118 warnings.warn(
119 "'max_keepalive' is deprecated. Use 'max_keepalive_connections'.",
120 DeprecationWarning,
121 )
122 max_keepalive_connections = max_keepalive
123
124 if isinstance(backend, str):
125 backend = lookup_sync_backend(backend)
126
127 self._ssl_context = SSLContext() if ssl_context is None else ssl_context
128 self._max_connections = max_connections
129 self._max_keepalive_connections = max_keepalive_connections
55 network_backend: NetworkBackend = None,
56 ) -> None:
57 """
58 A connection pool for making HTTP requests.
59
60 Parameters:
61 ssl_context: An SSL context to use for verifying connections.
62 If not specified, the default `httpcore.default_ssl_context()`
63 will be used.
64 max_connections: The maximum number of concurrent HTTP connections that
65 the pool should allow. Any attempt to send a request on a pool that
66 would exceed this amount will block until a connection is available.
67 max_keepalive_connections: The maximum number of idle HTTP connections
68 that will be maintained in the pool.
69 keepalive_expiry: The duration in seconds that an idle HTTP connection
70 may be maintained for before being expired from the pool.
71 http1: A boolean indicating if HTTP/1.1 requests should be supported
72 by the connection pool. Defaults to True.
73 http2: A boolean indicating if HTTP/2 requests should be supported by
74 the connection pool. Defaults to False.
75 retries: The maximum number of retries when trying to establish a
76 connection.
77 local_address: Local address to connect from. Can also be used to connect
78 using a particular address family. Using `local_address="0.0.0.0"`
79 will connect using an `AF_INET` address (IPv4), while using
80 `local_address="::"` will connect using an `AF_INET6` address (IPv6).
81 uds: Path to a Unix Domain Socket to use instead of TCP sockets.
82 network_backend: A backend instance to use for handling network I/O.
83 """
84 if ssl_context is None:
85 ssl_context = default_ssl_context()
86
87 self._ssl_context = ssl_context
88
89 self._max_connections = (
90 sys.maxsize if max_connections is None else max_connections
91 )
92 self._max_keepalive_connections = (
93 sys.maxsize
94 if max_keepalive_connections is None
95 else max_keepalive_connections
96 )
97 self._max_keepalive_connections = min(
98 self._max_connections, self._max_keepalive_connections
99 )
100
130101 self._keepalive_expiry = keepalive_expiry
131102 self._http1 = http1
132103 self._http2 = http2
104 self._retries = retries
105 self._local_address = local_address
133106 self._uds = uds
134 self._local_address = local_address
135 self._retries = retries
136 self._connections: Dict[Origin, Set[SyncHTTPConnection]] = {}
137 self._thread_lock = ThreadLock()
138 self._backend = backend
139 self._next_keepalive_check = 0.0
140
141 if not (http1 or http2):
142 raise ValueError("Either http1 or http2 must be True.")
143
144 if http2:
145 try:
146 import h2 # noqa: F401
147 except ImportError:
148 raise ImportError(
149 "Attempted to use http2=True, but the 'h2' "
150 "package is not installed. Use 'pip install httpcore[http2]'."
151 )
152
153 @property
154 def _connection_semaphore(self) -> SyncSemaphore:
155 # We do this lazily, to make sure backend autodetection always
156 # runs within an async context.
157 if not hasattr(self, "_internal_semaphore"):
158 if self._max_connections is not None:
159 self._internal_semaphore = self._backend.create_semaphore(
160 self._max_connections, exc_class=PoolTimeout
161 )
162 else:
163 self._internal_semaphore = NullSemaphore()
164
165 return self._internal_semaphore
166
167 @property
168 def _connection_acquiry_lock(self) -> SyncLock:
169 if not hasattr(self, "_internal_connection_acquiry_lock"):
170 self._internal_connection_acquiry_lock = self._backend.create_lock()
171 return self._internal_connection_acquiry_lock
172
173 def _create_connection(
174 self,
175 origin: Tuple[bytes, bytes, int],
176 ) -> SyncHTTPConnection:
177 return SyncHTTPConnection(
107
108 self._pool: List[ConnectionInterface] = []
109 self._requests: List[RequestStatus] = []
110 self._pool_lock = Lock()
111 self._network_backend = (
112 SyncBackend() if network_backend is None else network_backend
113 )
114
115 def create_connection(self, origin: Origin) -> ConnectionInterface:
116 return HTTPConnection(
178117 origin=origin,
118 ssl_context=self._ssl_context,
119 keepalive_expiry=self._keepalive_expiry,
179120 http1=self._http1,
180121 http2=self._http2,
181 keepalive_expiry=self._keepalive_expiry,
122 retries=self._retries,
123 local_address=self._local_address,
182124 uds=self._uds,
183 ssl_context=self._ssl_context,
184 local_address=self._local_address,
185 retries=self._retries,
186 backend=self._backend,
187 )
188
189 def handle_request(
125 network_backend=self._network_backend,
126 )
127
128 @property
129 def connections(self) -> List[ConnectionInterface]:
130 """
131 Return a list of the connections currently in the pool.
132
133 For example:
134
135 ```python
136 >>> pool.connections
137 [
138 <HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 6]>,
139 <HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 9]>,
140 <HTTPConnection ['http://example.com:80', HTTP/1.1, IDLE, Request Count: 1]>,
141 ]
142 ```
143 """
144 return list(self._pool)
145
146 def _attempt_to_acquire_connection(self, status: RequestStatus) -> bool:
147 """
148 Attempt to provide a connection that can handle the given origin.
149 """
150 origin = status.request.url.origin
151
152 # If there are queued requests in front of us, then don't acquire a
153 # connection. We handle requests strictly in order.
154 waiting = [s for s in self._requests if s.connection is None]
155 if waiting and waiting[0] is not status:
156 return False
157
158 # Reuse an existing connection if one is currently available.
159 for idx, connection in enumerate(self._pool):
160 if connection.can_handle_request(origin) and connection.is_available():
161 self._pool.pop(idx)
162 self._pool.insert(0, connection)
163 status.set_connection(connection)
164 return True
165
166 # If the pool is currently full, attempt to close one idle connection.
167 if len(self._pool) >= self._max_connections:
168 for idx, connection in reversed(list(enumerate(self._pool))):
169 if connection.is_idle():
170 connection.close()
171 self._pool.pop(idx)
172 break
173
174 # If the pool is still full, then we cannot acquire a connection.
175 if len(self._pool) >= self._max_connections:
176 return False
177
178 # Otherwise create a new connection.
179 connection = self.create_connection(origin)
180 self._pool.insert(0, connection)
181 status.set_connection(connection)
182 return True
183
184 def _close_expired_connections(self) -> None:
185 """
186 Clean up the connection pool by closing off any connections that have expired.
187 """
188 # Close any connections that have expired their keep-alive time.
189 for idx, connection in reversed(list(enumerate(self._pool))):
190 if connection.has_expired():
191 connection.close()
192 self._pool.pop(idx)
193
194 # If the pool size exceeds the maximum number of allowed keep-alive connections,
195 # then close off idle connections as required.
196 pool_size = len(self._pool)
197 for idx, connection in reversed(list(enumerate(self._pool))):
198 if connection.is_idle() and pool_size > self._max_keepalive_connections:
199 connection.close()
200 self._pool.pop(idx)
201 pool_size -= 1
202
203 def handle_request(self, request: Request) -> Response:
204 """
205 Send an HTTP request, and return an HTTP response.
206
207 This is the core implementation that is called into by `.request()` or `.stream()`.
208 """
209 scheme = request.url.scheme.decode()
210 if scheme == "":
211 raise UnsupportedProtocol(
212 "Request URL is missing an 'http://' or 'https://' protocol."
213 )
214 if scheme not in ("http", "https"):
215 raise UnsupportedProtocol(
216 f"Request URL has an unsupported protocol '{scheme}://'."
217 )
218
219 status = RequestStatus(request)
220
221 with self._pool_lock:
222 self._requests.append(status)
223 self._close_expired_connections()
224 self._attempt_to_acquire_connection(status)
225
226 while True:
227 timeouts = request.extensions.get("timeout", {})
228 timeout = timeouts.get("pool", None)
229 connection = status.wait_for_connection(timeout=timeout)
230 try:
231 response = connection.handle_request(request)
232 except ConnectionNotAvailable:
233 # The ConnectionNotAvailable exception is a special case, that
234 # indicates we need to retry the request on a new connection.
235 #
236 # The most common case where this can occur is when multiple
237 # requests are queued waiting for a single connection, which
238 # might end up as an HTTP/2 connection, but which actually ends
239 # up as HTTP/1.1.
240 with self._pool_lock:
241 # Maintain our position in the request queue, but reset the
242 # status so that the request becomes queued again.
243 status.unset_connection()
244 self._attempt_to_acquire_connection(status)
245 except Exception as exc:
246 self.response_closed(status)
247 raise exc
248 else:
249 break
250
251 # When we return the response, we wrap the stream in a special class
252 # that handles notifying the connection pool once the response
253 # has been released.
254 assert isinstance(response.stream, Iterable)
255 return Response(
256 status=response.status,
257 headers=response.headers,
258 content=ConnectionPoolByteStream(response.stream, self, status),
259 extensions=response.extensions,
260 )
261
262 def response_closed(self, status: RequestStatus) -> None:
263 """
264 This method acts as a callback once the request/response cycle is complete.
265
266 It is called into from the `ConnectionPoolByteStream.close()` method.
267 """
268 assert status.connection is not None
269 connection = status.connection
270
271 with self._pool_lock:
272 # Update the state of the connection pool.
273 self._requests.remove(status)
274
275 if connection.is_closed() and connection in self._pool:
276 self._pool.remove(connection)
277
278 # Since we've had a response closed, it's possible we'll now be able
279 # to service one or more requests that are currently pending.
280 for status in self._requests:
281 if status.connection is None:
282 acquired = self._attempt_to_acquire_connection(status)
283 # If we could not acquire a connection for a queued request
284 # then we don't need to check any more requests that are
285 # queued later behind it.
286 if not acquired:
287 break
288
289 # Housekeeping.
290 self._close_expired_connections()
291
292 def close(self) -> None:
293 """
294 Close any connections in the pool.
295 """
296 with self._pool_lock:
297 for connection in self._pool:
298 connection.close()
299 self._pool = []
300 self._requests = []
301
302 def __enter__(self) -> "ConnectionPool":
303 return self
304
305 def __exit__(
190306 self,
191 method: bytes,
192 url: URL,
193 headers: Headers,
194 stream: SyncByteStream,
195 extensions: dict,
196 ) -> Tuple[int, Headers, SyncByteStream, dict]:
197 if not url[0]:
198 raise UnsupportedProtocol(
199 "Request URL missing either an 'http://' or 'https://' protocol."
200 )
201
202 if url[0] not in (b"http", b"https"):
203 protocol = url[0].decode("ascii")
204 raise UnsupportedProtocol(
205 f"Request URL has an unsupported protocol '{protocol}://'."
206 )
207
208 if not url[1]:
209 raise LocalProtocolError("Missing hostname in URL.")
210
211 origin = url_to_origin(url)
212 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
213
214 self._keepalive_sweep()
215
216 connection: Optional[SyncHTTPConnection] = None
217 while connection is None:
218 with self._connection_acquiry_lock:
219 # We get-or-create a connection as an atomic operation, to ensure
220 # that HTTP/2 requests issued in close concurrency will end up
221 # on the same connection.
222 logger.trace("get_connection_from_pool=%r", origin)
223 connection = self._get_connection_from_pool(origin)
224
225 if connection is None:
226 connection = self._create_connection(origin=origin)
227 logger.trace("created connection=%r", connection)
228 self._add_to_pool(connection, timeout=timeout)
229 else:
230 logger.trace("reuse connection=%r", connection)
231
232 try:
233 response = connection.handle_request(
234 method, url, headers=headers, stream=stream, extensions=extensions
235 )
236 except NewConnectionRequired:
237 connection = None
238 except BaseException: # noqa: PIE786
239 # See https://github.com/encode/httpcore/pull/305 for motivation
240 # behind catching 'BaseException' rather than 'Exception' here.
241 logger.trace("remove from pool connection=%r", connection)
242 self._remove_from_pool(connection)
243 raise
244
245 status_code, headers, stream, extensions = response
246 wrapped_stream = ResponseByteStream(
247 stream, connection=connection, callback=self._response_closed
248 )
249 return status_code, headers, wrapped_stream, extensions
250
251 def _get_connection_from_pool(
252 self, origin: Origin
253 ) -> Optional[SyncHTTPConnection]:
254 # Determine expired keep-alive connections on this origin.
255 reuse_connection = None
256 connections_to_close = set()
257
258 for connection in self._connections_for_origin(origin):
259 if connection.should_close():
260 connections_to_close.add(connection)
261 self._remove_from_pool(connection)
262 elif connection.is_available():
263 reuse_connection = connection
264
265 # Close any dropped connections.
266 for connection in connections_to_close:
267 connection.close()
268
269 return reuse_connection
270
271 def _response_closed(self, connection: SyncHTTPConnection) -> None:
272 remove_from_pool = False
273 close_connection = False
274
275 if connection.is_closed():
276 remove_from_pool = True
277 elif connection.is_idle():
278 num_connections = len(self._get_all_connections())
279 if (
280 self._max_keepalive_connections is not None
281 and num_connections > self._max_keepalive_connections
282 ):
283 remove_from_pool = True
284 close_connection = True
285
286 if remove_from_pool:
287 self._remove_from_pool(connection)
288
289 if close_connection:
290 connection.close()
291
292 def _keepalive_sweep(self) -> None:
293 """
294 Remove any IDLE connections that have expired past their keep-alive time.
295 """
296 if self._keepalive_expiry is None:
297 return
298
299 now = self._backend.time()
300 if now < self._next_keepalive_check:
301 return
302
303 self._next_keepalive_check = now + min(1.0, self._keepalive_expiry)
304 connections_to_close = set()
305
306 for connection in self._get_all_connections():
307 if connection.should_close():
308 connections_to_close.add(connection)
309 self._remove_from_pool(connection)
310
311 for connection in connections_to_close:
312 connection.close()
313
314 def _add_to_pool(
315 self, connection: SyncHTTPConnection, timeout: TimeoutDict
307 exc_type: Type[BaseException] = None,
308 exc_value: BaseException = None,
309 traceback: TracebackType = None,
316310 ) -> None:
317 logger.trace("adding connection to pool=%r", connection)
318 self._connection_semaphore.acquire(timeout=timeout.get("pool", None))
319 with self._thread_lock:
320 self._connections.setdefault(connection.origin, set())
321 self._connections[connection.origin].add(connection)
322
323 def _remove_from_pool(self, connection: SyncHTTPConnection) -> None:
324 logger.trace("removing connection from pool=%r", connection)
325 with self._thread_lock:
326 if connection in self._connections.get(connection.origin, set()):
327 self._connection_semaphore.release()
328 self._connections[connection.origin].remove(connection)
329 if not self._connections[connection.origin]:
330 del self._connections[connection.origin]
331
332 def _connections_for_origin(self, origin: Origin) -> Set[SyncHTTPConnection]:
333 return set(self._connections.get(origin, set()))
334
335 def _get_all_connections(self) -> Set[SyncHTTPConnection]:
336 connections: Set[SyncHTTPConnection] = set()
337 for connection_set in self._connections.values():
338 connections |= connection_set
339 return connections
311 self.close()
312
313
314 class ConnectionPoolByteStream:
315 """
316 A wrapper around the response byte stream, that additionally handles
317 notifying the connection pool when the response has been closed.
318 """
319
320 def __init__(
321 self,
322 stream: Iterable[bytes],
323 pool: ConnectionPool,
324 status: RequestStatus,
325 ) -> None:
326 self._stream = stream
327 self._pool = pool
328 self._status = status
329
330 def __iter__(self) -> Iterator[bytes]:
331 for part in self._stream:
332 yield part
340333
341334 def close(self) -> None:
342 connections = self._get_all_connections()
343 for connection in connections:
344 self._remove_from_pool(connection)
345
346 # Close all connections
347 for connection in connections:
348 connection.close()
349
350 def get_connection_info(self) -> Dict[str, List[str]]:
351 """
352 Returns a dict of origin URLs to a list of summary strings for each connection.
353 """
354 self._keepalive_sweep()
355
356 stats = {}
357 for origin, connections in self._connections.items():
358 stats[origin_to_url_string(origin)] = sorted(
359 [connection.info() for connection in connections]
360 )
361 return stats
335 try:
336 if hasattr(self._stream, "close"):
337 self._stream.close() # type: ignore
338 finally:
339 self._pool.response_closed(self._status)
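
# Illustrative usage (not part of this module): the pool is typically driven
# through the `request()` helper inherited from RequestInterface, which builds
# the `Request` and calls into `handle_request()` above.
def example_pool_usage() -> None:
    with ConnectionPool(max_connections=10) as pool:
        response = pool.request("GET", "https://example.org/")
        print(response.status)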
+0 -42 httpcore/_sync/http.py
0 from ssl import SSLContext
1
2 from .._backends.sync import SyncSocketStream
3 from .._types import TimeoutDict
4 from .base import SyncHTTPTransport
5
6
7 class SyncBaseHTTPConnection(SyncHTTPTransport):
8 def info(self) -> str:
9 raise NotImplementedError() # pragma: nocover
10
11 def should_close(self) -> bool:
12 """
13 Return `True` if the connection is in a state where it should be closed.
14 """
15 raise NotImplementedError() # pragma: nocover
16
17 def is_idle(self) -> bool:
18 """
19 Return `True` if the connection is currently idle.
20 """
21 raise NotImplementedError() # pragma: nocover
22
23 def is_closed(self) -> bool:
24 """
25 Return `True` if the connection has been closed.
26 """
27 raise NotImplementedError() # pragma: nocover
28
29 def is_available(self) -> bool:
30 """
31 Return `True` if the connection is currently able to accept an outgoing request.
32 """
33 raise NotImplementedError() # pragma: nocover
34
35 def start_tls(
36 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict = None
37 ) -> SyncSocketStream:
38 """
39 Upgrade the underlying socket to TLS.
40 """
41 raise NotImplementedError() # pragma: nocover
00 import enum
11 import time
2 from ssl import SSLContext
3 from typing import Iterator, List, Optional, Tuple, Union, cast
2 from types import TracebackType
3 from typing import Iterable, Iterator, List, Optional, Tuple, Type, Union
44
55 import h11
66
7 from .._backends.sync import SyncSocketStream
8 from .._bytestreams import IteratorByteStream
9 from .._exceptions import LocalProtocolError, RemoteProtocolError, map_exceptions
10 from .._types import URL, Headers, TimeoutDict
11 from .._utils import get_logger
12 from .base import SyncByteStream, NewConnectionRequired
13 from .http import SyncBaseHTTPConnection
7 from .._exceptions import (
8 ConnectionNotAvailable,
9 LocalProtocolError,
10 RemoteProtocolError,
11 map_exceptions,
12 )
13 from .._models import Origin, Request, Response
14 from .._synchronization import Lock
15 from .._trace import Trace
16 from ..backends.base import NetworkStream
17 from .interfaces import ConnectionInterface
1418
1519 H11Event = Union[
1620 h11.Request,
2226 ]
2327
2428
25 class ConnectionState(enum.IntEnum):
29 class HTTPConnectionState(enum.IntEnum):
2630 NEW = 0
2731 ACTIVE = 1
2832 IDLE = 2
2933 CLOSED = 3
3034
3135
32 logger = get_logger(__name__)
33
34
35 class SyncHTTP11Connection(SyncBaseHTTPConnection):
36 class HTTP11Connection(ConnectionInterface):
3637 READ_NUM_BYTES = 64 * 1024
3738
38 def __init__(self, socket: SyncSocketStream, keepalive_expiry: float = None):
39 self.socket = socket
40
39 def __init__(
40 self, origin: Origin, stream: NetworkStream, keepalive_expiry: float = None
41 ) -> None:
42 self._origin = origin
43 self._network_stream = stream
4144 self._keepalive_expiry: Optional[float] = keepalive_expiry
42 self._should_expire_at: Optional[float] = None
45 self._expire_at: Optional[float] = None
46 self._state = HTTPConnectionState.NEW
47 self._state_lock = Lock()
48 self._request_count = 0
4349 self._h11_state = h11.Connection(our_role=h11.CLIENT)
44 self._state = ConnectionState.NEW
45
46 def __repr__(self) -> str:
47 return f"<SyncHTTP11Connection [{self._state.name}]>"
48
49 def _now(self) -> float:
50 return time.monotonic()
51
52 def _server_disconnected(self) -> bool:
53 """
54 Return True if the connection is idle, and the underlying socket is readable.
55 The only valid state the socket can be readable here is when the b""
56 EOF marker is about to be returned, indicating a server disconnect.
57 """
58 return self._state == ConnectionState.IDLE and self.socket.is_readable()
59
60 def _keepalive_expired(self) -> bool:
61 """
62 Return True if the connection is idle, and has passed its keepalive
63 expiry time.
64 """
65 return (
66 self._state == ConnectionState.IDLE
67 and self._should_expire_at is not None
68 and self._now() >= self._should_expire_at
69 )
70
71 def info(self) -> str:
72 return f"HTTP/1.1, {self._state.name}"
73
74 def should_close(self) -> bool:
75 """
76 Return `True` if the connection is in a state where it should be closed.
77 """
78 return self._server_disconnected() or self._keepalive_expired()
79
80 def is_idle(self) -> bool:
81 """
82 Return `True` if the connection is currently idle.
83 """
84 return self._state == ConnectionState.IDLE
85
86 def is_closed(self) -> bool:
87 """
88 Return `True` if the connection has been closed.
89 """
90 return self._state == ConnectionState.CLOSED
91
92 def is_available(self) -> bool:
93 """
94 Return `True` if the connection is currently able to accept an outgoing request.
95 """
96 return self._state == ConnectionState.IDLE
97
98 def handle_request(
99 self,
100 method: bytes,
101 url: URL,
102 headers: Headers,
103 stream: SyncByteStream,
104 extensions: dict,
105 ) -> Tuple[int, Headers, SyncByteStream, dict]:
106 """
107 Send a single HTTP/1.1 request.
108
109 Note that there is no kind of task/thread locking at this layer of interface.
110 Dealing with locking for concurrency is handled by the `SyncHTTPConnection`.
111 """
112 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
113
114 if self._state in (ConnectionState.NEW, ConnectionState.IDLE):
115 self._state = ConnectionState.ACTIVE
116 self._should_expire_at = None
117 else:
118 raise NewConnectionRequired()
119
120 self._send_request(method, url, headers, timeout)
121 self._send_request_body(stream, timeout)
122 (
123 http_version,
124 status_code,
125 reason_phrase,
126 headers,
127 ) = self._receive_response(timeout)
128 response_stream = IteratorByteStream(
129 iterator=self._receive_response_data(timeout),
130 close_func=self._response_closed,
131 )
132 extensions = {
133 "http_version": http_version,
134 "reason_phrase": reason_phrase,
135 }
136 return (status_code, headers, response_stream, extensions)
137
138 def start_tls(
139 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict = None
140 ) -> SyncSocketStream:
141 timeout = {} if timeout is None else timeout
142 self.socket = self.socket.start_tls(hostname, ssl_context, timeout)
143 return self.socket
144
145 def _send_request(
146 self, method: bytes, url: URL, headers: Headers, timeout: TimeoutDict
147 ) -> None:
148 """
149 Send the request line and headers.
150 """
151 logger.trace("send_request method=%r url=%r headers=%s", method, url, headers)
152 _scheme, _host, _port, target = url
50
51 def handle_request(self, request: Request) -> Response:
52 if not self.can_handle_request(request.url.origin):
53 raise RuntimeError(
54 f"Attempted to send request to {request.url.origin} on connection "
55 f"to {self._origin}"
56 )
57
58 with self._state_lock:
59 if self._state in (HTTPConnectionState.NEW, HTTPConnectionState.IDLE):
60 self._request_count += 1
61 self._state = HTTPConnectionState.ACTIVE
62 self._expire_at = None
63 else:
64 raise ConnectionNotAvailable()
65
66 try:
67 kwargs = {"request": request}
68 with Trace("http11.send_request_headers", request, kwargs) as trace:
69 self._send_request_headers(**kwargs)
70 with Trace("http11.send_request_body", request, kwargs) as trace:
71 self._send_request_body(**kwargs)
72 with Trace(
73 "http11.receive_response_headers", request, kwargs
74 ) as trace:
75 (
76 http_version,
77 status,
78 reason_phrase,
79 headers,
80 ) = self._receive_response_headers(**kwargs)
81 trace.return_value = (
82 http_version,
83 status,
84 reason_phrase,
85 headers,
86 )
87
88 return Response(
89 status=status,
90 headers=headers,
91 content=HTTP11ConnectionByteStream(self, request),
92 extensions={
93 "http_version": http_version,
94 "reason_phrase": reason_phrase,
95 "network_stream": self._network_stream,
96 },
97 )
98 except BaseException as exc:
99 with Trace("http11.response_closed", request) as trace:
100 self._response_closed()
101 raise exc
102
103 # Sending the request...
104
105 def _send_request_headers(self, request: Request) -> None:
106 timeouts = request.extensions.get("timeout", {})
107 timeout = timeouts.get("write", None)
108
153109 with map_exceptions({h11.LocalProtocolError: LocalProtocolError}):
154 event = h11.Request(method=method, target=target, headers=headers)
155 self._send_event(event, timeout)
156
157 def _send_request_body(
158 self, stream: SyncByteStream, timeout: TimeoutDict
159 ) -> None:
160 """
161 Send the request body.
162 """
163 # Send the request body.
164 for chunk in stream:
165 logger.trace("send_data=Data(<%d bytes>)", len(chunk))
110 event = h11.Request(
111 method=request.method,
112 target=request.url.target,
113 headers=request.headers,
114 )
115 self._send_event(event, timeout=timeout)
116
117 def _send_request_body(self, request: Request) -> None:
118 timeouts = request.extensions.get("timeout", {})
119 timeout = timeouts.get("write", None)
120
121 assert isinstance(request.stream, Iterable)
122 for chunk in request.stream:
166123 event = h11.Data(data=chunk)
167 self._send_event(event, timeout)
168
169 # Finalize sending the request.
124 self._send_event(event, timeout=timeout)
125
170126 event = h11.EndOfMessage()
171 self._send_event(event, timeout)
172
173 def _send_event(self, event: H11Event, timeout: TimeoutDict) -> None:
174 """
175 Send a single `h11` event to the network, waiting for the data to
176 drain before returning.
177 """
127 self._send_event(event, timeout=timeout)
128
129 def _send_event(self, event: H11Event, timeout: float = None) -> None:
178130 bytes_to_send = self._h11_state.send(event)
179 self.socket.write(bytes_to_send, timeout)
180
181 def _receive_response(
182 self, timeout: TimeoutDict
131 self._network_stream.write(bytes_to_send, timeout=timeout)
132
133 # Receiving the response...
134
135 def _receive_response_headers(
136 self, request: Request
183137 ) -> Tuple[bytes, int, bytes, List[Tuple[bytes, bytes]]]:
184 """
185 Read the response status and headers from the network.
186 """
138 timeouts = request.extensions.get("timeout", {})
139 timeout = timeouts.get("read", None)
140
187141 while True:
188 event = self._receive_event(timeout)
142 event = self._receive_event(timeout=timeout)
189143 if isinstance(event, h11.Response):
190144 break
191145
197151
198152 return http_version, event.status_code, event.reason, headers
199153
200 def _receive_response_data(
201 self, timeout: TimeoutDict
202 ) -> Iterator[bytes]:
203 """
204 Read the response data from the network.
205 """
154 def _receive_response_body(self, request: Request) -> Iterator[bytes]:
155 timeouts = request.extensions.get("timeout", {})
156 timeout = timeouts.get("read", None)
157
206158 while True:
207 event = self._receive_event(timeout)
159 event = self._receive_event(timeout=timeout)
208160 if isinstance(event, h11.Data):
209 logger.trace("receive_event=Data(<%d bytes>)", len(event.data))
210161 yield bytes(event.data)
211162 elif isinstance(event, (h11.EndOfMessage, h11.PAUSED)):
212 logger.trace("receive_event=%r", event)
213163 break
214164
215 def _receive_event(self, timeout: TimeoutDict) -> H11Event:
216 """
217 Read a single `h11` event, reading more data from the network if needed.
218 """
165 def _receive_event(self, timeout: float = None) -> H11Event:
219166 while True:
220167 with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
221168 event = self._h11_state.next_event()
222169
223170 if event is h11.NEED_DATA:
224 data = self.socket.read(self.READ_NUM_BYTES, timeout)
225
226 # If we feed this case through h11 we'll raise an exception like:
227 #
228 # httpcore.RemoteProtocolError: can't handle event type
229 # ConnectionClosed when role=SERVER and state=SEND_RESPONSE
230 #
231 # Which is accurate, but not very informative from an end-user
232 # perspective. Instead we handle messaging for this case distinctly.
233 if data == b"" and self._h11_state.their_state == h11.SEND_RESPONSE:
234 msg = "Server disconnected without sending a response."
235 raise RemoteProtocolError(msg)
236
171 data = self._network_stream.read(
172 self.READ_NUM_BYTES, timeout=timeout
173 )
237174 self._h11_state.receive_data(data)
238175 else:
239 assert event is not h11.NEED_DATA
240 break
241 return event
176 return event
242177
243178 def _response_closed(self) -> None:
244 logger.trace(
245 "response_closed our_state=%r their_state=%r",
246 self._h11_state.our_state,
247 self._h11_state.their_state,
179 with self._state_lock:
180 if (
181 self._h11_state.our_state is h11.DONE
182 and self._h11_state.their_state is h11.DONE
183 ):
184 self._state = HTTPConnectionState.IDLE
185 self._h11_state.start_next_cycle()
186 if self._keepalive_expiry is not None:
187 now = time.monotonic()
188 self._expire_at = now + self._keepalive_expiry
189 else:
190 self.close()
191
192 # Once the connection is no longer required...
193
194 def close(self) -> None:
195 # Note that this method unilaterally closes the connection, and does
196 # not have any kind of locking in place around it.
197 self._state = HTTPConnectionState.CLOSED
198 self._network_stream.close()
199
200 # The ConnectionInterface methods provide information about the state of
201 # the connection, allowing for a connection pooling implementation to
202 # determine when to reuse and when to close the connection...
203
204 def can_handle_request(self, origin: Origin) -> bool:
205 return origin == self._origin
206
207 def is_available(self) -> bool:
208 # Note that HTTP/1.1 connections in the "NEW" state are not treated as
209 # being "available". The control flow which created the connection will
210 # be able to send an outgoing request, but the connection will not be
211 # acquired from the connection pool for any other request.
212 return self._state == HTTPConnectionState.IDLE
213
214 def has_expired(self) -> bool:
215 now = time.monotonic()
216 keepalive_expired = self._expire_at is not None and now > self._expire_at
217
218 # If the HTTP connection is idle but the socket is readable, then the
219 # only valid state is that the socket is about to return b"", indicating
220 # a server-initiated disconnect.
221 server_disconnected = (
222 self._state == HTTPConnectionState.IDLE
223 and self._network_stream.get_extra_info("is_readable")
248224 )
249 if (
250 self._h11_state.our_state is h11.DONE
251 and self._h11_state.their_state is h11.DONE
252 ):
253 self._h11_state.start_next_cycle()
254 self._state = ConnectionState.IDLE
255 if self._keepalive_expiry is not None:
256 self._should_expire_at = self._now() + self._keepalive_expiry
257 else:
258 self.close()
225
226 return keepalive_expired or server_disconnected
227
228 def is_idle(self) -> bool:
229 return self._state == HTTPConnectionState.IDLE
230
231 def is_closed(self) -> bool:
232 return self._state == HTTPConnectionState.CLOSED
233
234 def info(self) -> str:
235 origin = str(self._origin)
236 return (
237 f"{origin!r}, HTTP/1.1, {self._state.name}, "
238 f"Request Count: {self._request_count}"
239 )
240
241 def __repr__(self) -> str:
242 class_name = self.__class__.__name__
243 origin = str(self._origin)
244 return (
245 f"<{class_name} [{origin!r}, {self._state.name}, "
246 f"Request Count: {self._request_count}]>"
247 )
248
249 # These context managers are not used in the standard flow, but are
250 # useful for testing or working with connection instances directly.
251
252 def __enter__(self) -> "HTTP11Connection":
253 return self
254
255 def __exit__(
256 self,
257 exc_type: Type[BaseException] = None,
258 exc_value: BaseException = None,
259 traceback: TracebackType = None,
260 ) -> None:
261 self.close()
262
263
264 class HTTP11ConnectionByteStream:
265 def __init__(self, connection: HTTP11Connection, request: Request) -> None:
266 self._connection = connection
267 self._request = request
268
269 def __iter__(self) -> Iterator[bytes]:
270 kwargs = {"request": self._request}
271 with Trace("http11.receive_response_body", self._request, kwargs):
272 for chunk in self._connection._receive_response_body(**kwargs):
273 yield chunk
259274
260275 def close(self) -> None:
261 if self._state != ConnectionState.CLOSED:
262 self._state = ConnectionState.CLOSED
263
264 if self._h11_state.our_state is h11.MUST_CLOSE:
265 event = h11.ConnectionClosed()
266 self._h11_state.send(event)
267
268 self.socket.close()
276 with Trace("http11.response_closed", self._request):
277 self._connection._response_closed()
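
# Illustrative sketch (not part of this module): driving an HTTP11Connection
# directly, as the connection pool does. The network stream would be obtained
# from a backend, e.g. `SyncBackend().connect_tcp("example.org", 80)`.
def example_http11(network_stream: NetworkStream) -> None:
    origin = Origin(scheme=b"http", host=b"example.org", port=80)
    with HTTP11Connection(origin=origin, stream=network_stream) as connection:
        response = connection.handle_request(Request("GET", "http://example.org/"))
        response.read()  # Drain the body so the connection can be reused.
        print(response.status, connection.info())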
00 import enum
11 import time
2 from ssl import SSLContext
3 from typing import Iterator, Dict, List, Optional, Tuple, cast
4
2 import types
3 import typing
4
5 import h2.config
56 import h2.connection
67 import h2.events
7 from h2.config import H2Configuration
8 from h2.exceptions import NoAvailableStreamIDError
9 from h2.settings import SettingCodes, Settings
10
11 from .._backends.sync import SyncBackend, SyncLock, SyncSemaphore, SyncSocketStream
12 from .._bytestreams import IteratorByteStream
13 from .._exceptions import LocalProtocolError, PoolTimeout, RemoteProtocolError
14 from .._types import URL, Headers, TimeoutDict
15 from .._utils import get_logger
16 from .base import SyncByteStream, NewConnectionRequired
17 from .http import SyncBaseHTTPConnection
18
19 logger = get_logger(__name__)
20
21
22 class ConnectionState(enum.IntEnum):
23 IDLE = 0
8 import h2.exceptions
9 import h2.settings
10
11 from .._exceptions import ConnectionNotAvailable, RemoteProtocolError
12 from .._models import Origin, Request, Response
13 from .._synchronization import Lock, Semaphore
14 from .._trace import Trace
15 from ..backends.base import NetworkStream
16 from .interfaces import ConnectionInterface
17
18
19 def has_body_headers(request: Request) -> bool:
20 return any(
21 [
22 k.lower() == b"content-length" or k.lower() == b"transfer-encoding"
23 for k, v in request.headers
24 ]
25 )
26
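# Worked example (illustrative): only requests that declare a body result in
# HTTP/2 request-body frames being sent.
def example_has_body_headers() -> None:
    get_request = Request("GET", "https://example.org/")
    post_request = Request(
        "POST",
        "https://example.org/",
        headers=[(b"content-length", b"3")],
        content=b"abc",
    )
    assert not has_body_headers(get_request)
    assert has_body_headers(post_request)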
27
28 class HTTPConnectionState(enum.IntEnum):
2429 ACTIVE = 1
25 CLOSED = 2
26
27
28 class SyncHTTP2Connection(SyncBaseHTTPConnection):
30 IDLE = 2
31 CLOSED = 3
32
33
34 class HTTP2Connection(ConnectionInterface):
2935 READ_NUM_BYTES = 64 * 1024
30 CONFIG = H2Configuration(validate_inbound_headers=False)
36 CONFIG = h2.config.H2Configuration(validate_inbound_headers=False)
3137
3238 def __init__(
33 self,
34 socket: SyncSocketStream,
35 backend: SyncBackend,
36 keepalive_expiry: float = None,
39 self, origin: Origin, stream: NetworkStream, keepalive_expiry: float = None
3740 ):
38 self.socket = socket
39
40 self._backend = backend
41 self._origin = origin
42 self._network_stream = stream
43 self._keepalive_expiry: typing.Optional[float] = keepalive_expiry
4144 self._h2_state = h2.connection.H2Connection(config=self.CONFIG)
42
45 self._state = HTTPConnectionState.IDLE
46 self._expire_at: typing.Optional[float] = None
47 self._request_count = 0
48 self._init_lock = Lock()
49 self._state_lock = Lock()
50 self._read_lock = Lock()
51 self._write_lock = Lock()
4352 self._sent_connection_init = False
44 self._streams: Dict[int, SyncHTTP2Stream] = {}
45 self._events: Dict[int, List[h2.events.Event]] = {}
46
47 self._keepalive_expiry: Optional[float] = keepalive_expiry
48 self._should_expire_at: Optional[float] = None
49 self._state = ConnectionState.ACTIVE
50 self._exhausted_available_stream_ids = False
51
52 def __repr__(self) -> str:
53 return f"<SyncHTTP2Connection [{self._state}]>"
54
55 def info(self) -> str:
56 return f"HTTP/2, {self._state.name}, {len(self._streams)} streams"
57
58 def _now(self) -> float:
59 return time.monotonic()
60
61 def should_close(self) -> bool:
62 """
63 Return `True` if the connection is currently idle, and the keepalive
64 timeout has passed.
65 """
66 return (
67 self._state == ConnectionState.IDLE
68 and self._should_expire_at is not None
69 and self._now() >= self._should_expire_at
70 )
71
72 def is_idle(self) -> bool:
73 """
74 Return `True` if the connection is currently idle.
75 """
76 return self._state == ConnectionState.IDLE
77
78 def is_closed(self) -> bool:
79 """
80 Return `True` if the connection has been closed.
81 """
82 return self._state == ConnectionState.CLOSED
83
84 def is_available(self) -> bool:
85 """
86 Return `True` if the connection is currently able to accept an outgoing request.
87 This occurs when any of the following hold:
88
89 * The connection has not yet been opened, and HTTP/2 support is enabled.
90 We don't *know* at this point if we'll end up on an HTTP/2 connection or
91 not, but we *might* do, so we indicate availability.
92 * The connection has been opened, and is currently idle.
93 * The connection is open, and is an HTTP/2 connection. The connection must
94 also not have exhausted the maximum total number of stream IDs.
95 """
96 return (
97 self._state != ConnectionState.CLOSED
98 and not self._exhausted_available_stream_ids
99 )
100
101 @property
102 def init_lock(self) -> SyncLock:
103 # We do this lazily, to make sure backend autodetection always
104 # runs within an async context.
105 if not hasattr(self, "_initialization_lock"):
106 self._initialization_lock = self._backend.create_lock()
107 return self._initialization_lock
108
109 @property
110 def read_lock(self) -> SyncLock:
111 # We do this lazily, to make sure backend autodetection always
112 # runs within an async context.
113 if not hasattr(self, "_read_lock"):
114 self._read_lock = self._backend.create_lock()
115 return self._read_lock
116
117 @property
118 def max_streams_semaphore(self) -> SyncSemaphore:
119 # We do this lazily, to make sure backend autodetection always
120 # runs within an async context.
121 if not hasattr(self, "_max_streams_semaphore"):
122 max_streams = self._h2_state.local_settings.max_concurrent_streams
123 self._max_streams_semaphore = self._backend.create_semaphore(
124 max_streams, exc_class=PoolTimeout
53 self._used_all_stream_ids = False
54 self._events: typing.Dict[int, typing.List[h2.events.Event]] = {}
55
56 def handle_request(self, request: Request) -> Response:
57 if not self.can_handle_request(request.url.origin):
58 # This cannot occur in normal operation, since the connection pool
59 # will only send requests on connections that handle them.
60 # It's in place simply for resilience as a guard against incorrect
61 # usage, for anyone working directly with httpcore connections.
62 raise RuntimeError(
63 f"Attempted to send request to {request.url.origin} on connection "
64 f"to {self._origin}"
12565 )
126 return self._max_streams_semaphore
127
128 def start_tls(
129 self, hostname: bytes, ssl_context: SSLContext, timeout: TimeoutDict = None
130 ) -> SyncSocketStream:
131 raise NotImplementedError("TLS upgrade not supported on HTTP/2 connections.")
132
133 def handle_request(
134 self,
135 method: bytes,
136 url: URL,
137 headers: Headers,
138 stream: SyncByteStream,
139 extensions: dict,
140 ) -> Tuple[int, Headers, SyncByteStream, dict]:
141 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
142
143 with self.init_lock:
66
67 with self._state_lock:
68 if self._state in (HTTPConnectionState.ACTIVE, HTTPConnectionState.IDLE):
69 self._request_count += 1
70 self._expire_at = None
71 self._state = HTTPConnectionState.ACTIVE
72 else:
73 raise ConnectionNotAvailable()
74
75 with self._init_lock:
144 76 if not self._sent_connection_init:
145 # The very first stream is responsible for initiating the connection.
146 self._state = ConnectionState.ACTIVE
147 self.send_connection_init(timeout)
77 kwargs = {"request": request}
78 with Trace("http2.send_connection_init", request, kwargs):
79 self._send_connection_init(**kwargs)
148 80 self._sent_connection_init = True
149
150 self.max_streams_semaphore.acquire()
81 max_streams = self._h2_state.local_settings.max_concurrent_streams
82 self._max_streams_semaphore = Semaphore(max_streams)
83
84 self._max_streams_semaphore.acquire()
85
151 86 try:
152 try:
153 stream_id = self._h2_state.get_next_available_stream_id()
154 except NoAvailableStreamIDError:
155 self._exhausted_available_stream_ids = True
156 raise NewConnectionRequired()
157 else:
158 self._state = ConnectionState.ACTIVE
159 self._should_expire_at = None
160
161 h2_stream = SyncHTTP2Stream(stream_id=stream_id, connection=self)
162 self._streams[stream_id] = h2_stream
87 stream_id = self._h2_state.get_next_available_stream_id()
163 88 self._events[stream_id] = []
164 return h2_stream.handle_request(
165 method, url, headers, stream, extensions
89 except h2.exceptions.NoAvailableStreamIDError: # pragma: nocover
90 self._used_all_stream_ids = True
91 raise ConnectionNotAvailable()
92
93 try:
94 kwargs = {"request": request, "stream_id": stream_id}
95 with Trace("http2.send_request_headers", request, kwargs):
96 self._send_request_headers(request=request, stream_id=stream_id)
97 with Trace("http2.send_request_body", request, kwargs):
98 self._send_request_body(request=request, stream_id=stream_id)
99 with Trace(
100 "http2.receive_response_headers", request, kwargs
101 ) as trace:
102 status, headers = self._receive_response(
103 request=request, stream_id=stream_id
104 )
105 trace.return_value = (status, headers)
106
107 return Response(
108 status=status,
109 headers=headers,
110 content=HTTP2ConnectionByteStream(self, request, stream_id=stream_id),
111 extensions={"stream_id": stream_id, "http_version": b"HTTP/2"},
166 112 )
167 113 except Exception: # noqa: PIE786
168 self.max_streams_semaphore.release()
114 kwargs = {"stream_id": stream_id}
115 with Trace("http2.response_closed", request, kwargs):
116 self._response_closed(stream_id=stream_id)
169 117 raise
170 118
171 def send_connection_init(self, timeout: TimeoutDict) -> None:
119 def _send_connection_init(self, request: Request) -> None:
172 120 """
173 121 The HTTP/2 connection requires some initial setup before we can start
174 122 using individual request/response streams on it.
176 124 # Need to set these manually here instead of manipulating via
177 125 # __setitem__() otherwise the H2Connection will emit SettingsUpdate
178 126 # frames in addition to sending the undesired defaults.
179 self._h2_state.local_settings = Settings(
127 self._h2_state.local_settings = h2.settings.Settings(
180 128 client=True,
181 129 initial_values={
182 130 # Disable PUSH_PROMISE frames from the server since we don't do anything
183 131 # with them for now. Maybe when we support caching?
184 SettingCodes.ENABLE_PUSH: 0,
132 h2.settings.SettingCodes.ENABLE_PUSH: 0,
185 133 # These two are taken from h2 for safe defaults
186 SettingCodes.MAX_CONCURRENT_STREAMS: 100,
187 SettingCodes.MAX_HEADER_LIST_SIZE: 65536,
134 h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS: 100,
135 h2.settings.SettingCodes.MAX_HEADER_LIST_SIZE: 65536,
188 136 },
189 137 )
190 138
195 143 h2.settings.SettingCodes.ENABLE_CONNECT_PROTOCOL
196 144 ]
197 145
198 logger.trace("initiate_connection=%r", self)
199 146 self._h2_state.initiate_connection()
200 147 self._h2_state.increment_flow_control_window(2 ** 24)
201 data_to_send = self._h2_state.data_to_send()
202 self.socket.write(data_to_send, timeout)
203
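The connection-init sequence above is plain `h2` usage; here is a minimal standalone sketch of the same client preamble (only the `h2` API calls are taken from the code above, the rest is illustrative):

```python
import h2.config
import h2.connection
import h2.settings

config = h2.config.H2Configuration(client_side=True)
conn = h2.connection.H2Connection(config=config)

# Replace the default settings wholesale, as above, so h2 doesn't emit an
# extra SETTINGS update on top of the undesired defaults.
conn.local_settings = h2.settings.Settings(
    client=True,
    initial_values={h2.settings.SettingCodes.ENABLE_PUSH: 0},
)

conn.initiate_connection()
conn.increment_flow_control_window(2 ** 24)
preamble = conn.data_to_send()  # client magic + SETTINGS + WINDOW_UPDATE, ready to write
```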
204 def is_socket_readable(self) -> bool:
205 return self.socket.is_readable()
206
207 def close(self) -> None:
208 logger.trace("close_connection=%r", self)
209 if self._state != ConnectionState.CLOSED:
210 self._state = ConnectionState.CLOSED
211
212 self.socket.close()
213
214 def wait_for_outgoing_flow(self, stream_id: int, timeout: TimeoutDict) -> int:
215 """
216 Returns the maximum allowable outgoing flow for a given stream.
217 If the allowable flow is zero, then waits on the network until
218 WindowUpdated frames have increased the flow rate.
219 https://tools.ietf.org/html/rfc7540#section-6.9
220 """
221 local_flow = self._h2_state.local_flow_control_window(stream_id)
222 connection_flow = self._h2_state.max_outbound_frame_size
223 flow = min(local_flow, connection_flow)
224 while flow == 0:
225 self.receive_events(timeout)
226 local_flow = self._h2_state.local_flow_control_window(stream_id)
227 connection_flow = self._h2_state.max_outbound_frame_size
228 flow = min(local_flow, connection_flow)
229 return flow
230
231 def wait_for_event(
232 self, stream_id: int, timeout: TimeoutDict
233 ) -> h2.events.Event:
234 """
235 Returns the next event for a given stream.
236 If no events are available yet, then waits on the network until
237 an event is available.
238 """
239 with self.read_lock:
240 while not self._events[stream_id]:
241 self.receive_events(timeout)
242 return self._events[stream_id].pop(0)
243
244 def receive_events(self, timeout: TimeoutDict) -> None:
245 """
246 Read some data from the network, and update the H2 state.
247 """
248 data = self.socket.read(self.READ_NUM_BYTES, timeout)
249 if data == b"":
250 raise RemoteProtocolError("Server disconnected")
251
252 events = self._h2_state.receive_data(data)
253 for event in events:
254 event_stream_id = getattr(event, "stream_id", 0)
255 logger.trace("receive_event stream_id=%r event=%s", event_stream_id, event)
256
257 if hasattr(event, "error_code"):
258 raise RemoteProtocolError(event)
259
260 if event_stream_id in self._events:
261 self._events[event_stream_id].append(event)
262
263 data_to_send = self._h2_state.data_to_send()
264 self.socket.write(data_to_send, timeout)
265
266 def send_headers(
267 self, stream_id: int, headers: Headers, end_stream: bool, timeout: TimeoutDict
268 ) -> None:
269 logger.trace("send_headers stream_id=%r headers=%r", stream_id, headers)
270 self._h2_state.send_headers(stream_id, headers, end_stream=end_stream)
271 self._h2_state.increment_flow_control_window(2 ** 24, stream_id=stream_id)
272 data_to_send = self._h2_state.data_to_send()
273 self.socket.write(data_to_send, timeout)
274
275 def send_data(
276 self, stream_id: int, chunk: bytes, timeout: TimeoutDict
277 ) -> None:
278 logger.trace("send_data stream_id=%r chunk=%r", stream_id, chunk)
279 self._h2_state.send_data(stream_id, chunk)
280 data_to_send = self._h2_state.data_to_send()
281 self.socket.write(data_to_send, timeout)
282
283 def end_stream(self, stream_id: int, timeout: TimeoutDict) -> None:
284 logger.trace("end_stream stream_id=%r", stream_id)
285 self._h2_state.end_stream(stream_id)
286 data_to_send = self._h2_state.data_to_send()
287 self.socket.write(data_to_send, timeout)
288
289 def acknowledge_received_data(
290 self, stream_id: int, amount: int, timeout: TimeoutDict
291 ) -> None:
292 self._h2_state.acknowledge_received_data(amount, stream_id)
293 data_to_send = self._h2_state.data_to_send()
294 self.socket.write(data_to_send, timeout)
295
296 def close_stream(self, stream_id: int) -> None:
297 try:
298 logger.trace("close_stream stream_id=%r", stream_id)
299 del self._streams[stream_id]
300 del self._events[stream_id]
301
302 if not self._streams:
303 if self._state == ConnectionState.ACTIVE:
304 if self._exhausted_available_stream_ids:
305 self.close()
306 else:
307 self._state = ConnectionState.IDLE
308 if self._keepalive_expiry is not None:
309 self._should_expire_at = (
310 self._now() + self._keepalive_expiry
311 )
312 finally:
313 self.max_streams_semaphore.release()
314
315
316 class SyncHTTP2Stream:
317 def __init__(self, stream_id: int, connection: SyncHTTP2Connection) -> None:
318 self.stream_id = stream_id
319 self.connection = connection
320
321 def handle_request(
322 self,
323 method: bytes,
324 url: URL,
325 headers: Headers,
326 stream: SyncByteStream,
327 extensions: dict,
328 ) -> Tuple[int, Headers, SyncByteStream, dict]:
329 headers = [(k.lower(), v) for (k, v) in headers]
330 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
331
332 # Send the request.
333 seen_headers = set(key for key, value in headers)
334 has_body = (
335 b"content-length" in seen_headers or b"transfer-encoding" in seen_headers
336 )
337
338 self.send_headers(method, url, headers, has_body, timeout)
339 if has_body:
340 self.send_body(stream, timeout)
341
342 # Receive the response.
343 status_code, headers = self.receive_response(timeout)
344 response_stream = IteratorByteStream(
345 iterator=self.body_iter(timeout), close_func=self._response_closed
346 )
347
348 extensions = {
349 "http_version": b"HTTP/2",
350 }
351 return (status_code, headers, response_stream, extensions)
352
353 def send_headers(
354 self,
355 method: bytes,
356 url: URL,
357 headers: Headers,
358 has_body: bool,
359 timeout: TimeoutDict,
360 ) -> None:
361 scheme, hostname, port, path = url
148 self._write_outgoing_data(request)
149
150 # Sending the request...
151
152 def _send_request_headers(self, request: Request, stream_id: int) -> None:
153 end_stream = not has_body_headers(request)
362 154
363 155 # In HTTP/2 the ':authority' pseudo-header is used instead of 'Host'.
364 156 # In order to gracefully handle HTTP/1.1 and HTTP/2 we always require
365 157 # HTTP/1.1 style headers, and map them appropriately if we end up on
366 158 # an HTTP/2 connection.
367 authority = None
368
369 for k, v in headers:
370 if k == b"host":
371 authority = v
372 break
373
374 if authority is None:
375 # Mirror the same error we'd see with `h11`, so that the behaviour
376 # is consistent. Although we're dealing with an `:authority`
377 # pseudo-header by this point, from an end-user perspective the issue
378 # is that the outgoing request needed to include a `host` header.
379 raise LocalProtocolError("Missing mandatory Host: header")
159 authority = [v for k, v in request.headers if k.lower() == b"host"][0]
380 160
381 161 headers = [
382 (b":method", method),
162 (b":method", request.method),
383 163 (b":authority", authority),
384 (b":scheme", scheme),
385 (b":path", path),
164 (b":scheme", request.url.scheme),
165 (b":path", request.url.target),
386 166 ] + [
387 (k, v)
388 for k, v in headers
389 if k
167 (k.lower(), v)
168 for k, v in request.headers
169 if k.lower()
390 170 not in (
391 171 b"host",
392 172 b"transfer-encoding",
393 173 )
394 174 ]
395 end_stream = not has_body
396
397 self.connection.send_headers(self.stream_id, headers, end_stream, timeout)
398
399 def send_body(self, stream: SyncByteStream, timeout: TimeoutDict) -> None:
400 for data in stream:
175
176 self._h2_state.send_headers(stream_id, headers, end_stream=end_stream)
177 self._h2_state.increment_flow_control_window(2 ** 24, stream_id=stream_id)
178 self._write_outgoing_data(request)
179
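To make the pseudo-header mapping concrete: for a `GET https://example.org/path` request carrying HTTP/1.1-style headers `[(b"host", b"example.org"), (b"accept", b"*/*")]`, the construction above yields roughly the following (values illustrative):

```python
headers = [
    (b":method", b"GET"),
    (b":authority", b"example.org"),  # lifted from the HTTP/1.1 'Host' header
    (b":scheme", b"https"),
    (b":path", b"/path"),
    (b"accept", b"*/*"),  # remaining headers follow, lower-cased
]
# 'host' and 'transfer-encoding' are excluded from the trailing section.
```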
180 def _send_request_body(self, request: Request, stream_id: int) -> None:
181 if not has_body_headers(request):
182 return
183
184 assert isinstance(request.stream, typing.Iterable)
185 for data in request.stream:
401186 while data:
402 max_flow = self.connection.wait_for_outgoing_flow(
403 self.stream_id, timeout
404 )
187 max_flow = self._wait_for_outgoing_flow(request, stream_id)
405 188 chunk_size = min(len(data), max_flow)
406 189 chunk, data = data[:chunk_size], data[chunk_size:]
407 self.connection.send_data(self.stream_id, chunk, timeout)
408
409 self.connection.end_stream(self.stream_id, timeout)
410
411 def receive_response(
412 self, timeout: TimeoutDict
413 ) -> Tuple[int, List[Tuple[bytes, bytes]]]:
414 """
415 Read the response status and headers from the network.
416 """
190 self._h2_state.send_data(stream_id, chunk)
191 self._write_outgoing_data(request)
192
193 self._h2_state.end_stream(stream_id)
194 self._write_outgoing_data(request)
195
196 # Receiving the response...
197
198 def _receive_response(
199 self, request: Request, stream_id: int
200 ) -> typing.Tuple[int, typing.List[typing.Tuple[bytes, bytes]]]:
417 201 while True:
418 event = self.connection.wait_for_event(self.stream_id, timeout)
202 event = self._receive_stream_event(request, stream_id)
419 203 if isinstance(event, h2.events.ResponseReceived):
420 204 break
421 205
429 213
430 214 return (status_code, headers)
431 215
432 def body_iter(self, timeout: TimeoutDict) -> Iterator[bytes]:
216 def _receive_response_body(
217 self, request: Request, stream_id: int
218 ) -> typing.Iterator[bytes]:
433 219 while True:
434 event = self.connection.wait_for_event(self.stream_id, timeout)
220 event = self._receive_stream_event(request, stream_id)
435 221 if isinstance(event, h2.events.DataReceived):
436 222 amount = event.flow_controlled_length
437 self.connection.acknowledge_received_data(
438 self.stream_id, amount, timeout
439 )
223 self._h2_state.acknowledge_received_data(amount, stream_id)
224 self._write_outgoing_data(request)
440 225 yield event.data
441 226 elif isinstance(event, (h2.events.StreamEnded, h2.events.StreamReset)):
442 227 break
443 228
444 def _response_closed(self) -> None:
445 self.connection.close_stream(self.stream_id)
229 def _receive_stream_event(
230 self, request: Request, stream_id: int
231 ) -> h2.events.Event:
232 while not self._events.get(stream_id):
233 self._receive_events(request)
234 return self._events[stream_id].pop(0)
235
236 def _receive_events(self, request: Request) -> None:
237 events = self._read_incoming_data(request)
238 for event in events:
239 event_stream_id = getattr(event, "stream_id", 0)
240
241 if hasattr(event, "error_code"):
242 raise RemoteProtocolError(event)
243
244 if event_stream_id in self._events:
245 self._events[event_stream_id].append(event)
246
247 self._write_outgoing_data(request)
248
249 def _response_closed(self, stream_id: int) -> None:
250 self._max_streams_semaphore.release()
251 del self._events[stream_id]
252 with self._state_lock:
253 if self._state == HTTPConnectionState.ACTIVE and not self._events:
254 self._state = HTTPConnectionState.IDLE
255 if self._keepalive_expiry is not None:
256 now = time.monotonic()
257 self._expire_at = now + self._keepalive_expiry
258 if self._used_all_stream_ids: # pragma: nocover
259 self.close()
260
261 def close(self) -> None:
262 # Note that this method unilaterally closes the connection, and does
263 # not have any kind of locking in place around it.
264 # For task-safe/thread-safe operations call into 'attempt_close' instead.
265 self._state = HTTPConnectionState.CLOSED
266 self._network_stream.close()
267
268 # Wrappers around network read/write operations...
269
270 def _read_incoming_data(
271 self, request: Request
272 ) -> typing.List[h2.events.Event]:
273 timeouts = request.extensions.get("timeout", {})
274 timeout = timeouts.get("read", None)
275
276 with self._read_lock:
277 data = self._network_stream.read(self.READ_NUM_BYTES, timeout)
278 if data == b"":
279 raise RemoteProtocolError("Server disconnected")
280 return self._h2_state.receive_data(data)
281
282 def _write_outgoing_data(self, request: Request) -> None:
283 timeouts = request.extensions.get("timeout", {})
284 timeout = timeouts.get("write", None)
285
286 with self._write_lock:
287 data_to_send = self._h2_state.data_to_send()
288 self._network_stream.write(data_to_send, timeout)
289
290 # Flow control...
291
292 def _wait_for_outgoing_flow(self, request: Request, stream_id: int) -> int:
293 """
294 Returns the maximum allowable outgoing flow for a given stream.
295
296 If the allowable flow is zero, then waits on the network until
297 WindowUpdated frames have increased the flow rate.
298 https://tools.ietf.org/html/rfc7540#section-6.9
299 """
300 local_flow = self._h2_state.local_flow_control_window(stream_id)
301 max_frame_size = self._h2_state.max_outbound_frame_size
302 flow = min(local_flow, max_frame_size)
303 while flow == 0:
304 self._receive_events(request)
305 local_flow = self._h2_state.local_flow_control_window(stream_id)
306 max_frame_size = self._h2_state.max_outbound_frame_size
307 flow = min(local_flow, max_frame_size)
308 return flow
309
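A small worked example of how this bounds each outgoing DATA frame (the window and frame-size values are illustrative):

```python
local_flow = 65_535      # stream flow-control window
max_frame_size = 16_384  # peer's advertised maximum frame size
max_flow = min(local_flow, max_frame_size)  # -> 16384

data = b"x" * 100_000
chunk, data = data[:max_flow], data[max_flow:]  # send 16384 bytes, 83616 remain queued
```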
310 # Interface for connection pooling...
311
312 def can_handle_request(self, origin: Origin) -> bool:
313 return origin == self._origin
314
315 def is_available(self) -> bool:
316 return (
317 self._state != HTTPConnectionState.CLOSED and not self._used_all_stream_ids
318 )
319
320 def has_expired(self) -> bool:
321 now = time.monotonic()
322 return self._expire_at is not None and now > self._expire_at
323
324 def is_idle(self) -> bool:
325 return self._state == HTTPConnectionState.IDLE
326
327 def is_closed(self) -> bool:
328 return self._state == HTTPConnectionState.CLOSED
329
330 def info(self) -> str:
331 origin = str(self._origin)
332 return (
333 f"{origin!r}, HTTP/2, {self._state.name}, "
334 f"Request Count: {self._request_count}"
335 )
336
337 def __repr__(self) -> str:
338 class_name = self.__class__.__name__
339 origin = str(self._origin)
340 return (
341 f"<{class_name} [{origin!r}, {self._state.name}, "
342 f"Request Count: {self._request_count}]>"
343 )
344
345 # These context managers are not used in the standard flow, but are
346 # useful for testing or working with connection instances directly.
347
348 def __enter__(self) -> "HTTP2Connection":
349 return self
350
351 def __exit__(
352 self,
353 exc_type: typing.Type[BaseException] = None,
354 exc_value: BaseException = None,
355 traceback: types.TracebackType = None,
356 ) -> None:
357 self.close()
358
359
360 class HTTP2ConnectionByteStream:
361 def __init__(
362 self, connection: HTTP2Connection, request: Request, stream_id: int
363 ) -> None:
364 self._connection = connection
365 self._request = request
366 self._stream_id = stream_id
367
368 def __iter__(self) -> typing.Iterator[bytes]:
369 kwargs = {"request": self._request, "stream_id": self._stream_id}
370 with Trace("http2.receive_response_body", self._request, kwargs):
371 for chunk in self._connection._receive_response_body(
372 request=self._request, stream_id=self._stream_id
373 ):
374 yield chunk
375
376 def close(self) -> None:
377 kwargs = {"stream_id": self._stream_id}
378 with Trace("http2.response_closed", self._request, kwargs):
379 self._connection._response_closed(stream_id=self._stream_id)
0 from http import HTTPStatus
1 from ssl import SSLContext
2 from typing import Tuple, cast
3
4 from .._bytestreams import ByteStream
0 import ssl
1 from typing import List, Mapping, Optional, Sequence, Tuple, Union
2
5 3 from .._exceptions import ProxyError
6 from .._types import URL, Headers, TimeoutDict
7 from .._utils import get_logger, url_to_origin
8 from .base import SyncByteStream
9 from .connection import SyncHTTPConnection
10 from .connection_pool import SyncConnectionPool, ResponseByteStream
11
12 logger = get_logger(__name__)
13
14
15 def get_reason_phrase(status_code: int) -> str:
16 try:
17 return HTTPStatus(status_code).phrase
18 except ValueError:
19 return ""
4 from .._models import URL, Origin, Request, Response, enforce_headers, enforce_url
5 from .._ssl import default_ssl_context
6 from .._synchronization import Lock
7 from ..backends.base import NetworkBackend
8 from .connection import HTTPConnection
9 from .connection_pool import ConnectionPool
10 from .http11 import HTTP11Connection
11 from .interfaces import ConnectionInterface
12
13 HeadersAsSequence = Sequence[Tuple[Union[bytes, str], Union[bytes, str]]]
14 HeadersAsMapping = Mapping[Union[bytes, str], Union[bytes, str]]
20 15
21 16
22 17 def merge_headers(
23 default_headers: Headers = None, override_headers: Headers = None
24 ) -> Headers:
25 """
26 Append default_headers and override_headers, de-duplicating if a key exists in
27 both cases.
28 """
29 default_headers = [] if default_headers is None else default_headers
30 override_headers = [] if override_headers is None else override_headers
18 default_headers: Sequence[Tuple[bytes, bytes]] = None,
19 override_headers: Sequence[Tuple[bytes, bytes]] = None,
20 ) -> List[Tuple[bytes, bytes]]:
21 """
22 Append default_headers and override_headers, de-duplicating if a key exists
23 in both cases.
24 """
25 default_headers = [] if default_headers is None else list(default_headers)
26 override_headers = [] if override_headers is None else list(override_headers)
31 27 has_override = set([key.lower() for key, value in override_headers])
32 28 default_headers = [
33 29 (key, value)
37 33 return default_headers + override_headers
38 34
39 35
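A quick example of the de-duplication semantics (header values are placeholders):

```python
merged = merge_headers(
    default_headers=[(b"Proxy-Authorization", b"Basic xxxx"), (b"Accept", b"*/*")],
    override_headers=[(b"accept", b"application/json")],
)
# The default 'Accept' is dropped because the override also sets it:
# [(b"Proxy-Authorization", b"Basic xxxx"), (b"accept", b"application/json")]
```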
40 class SyncHTTPProxy(SyncConnectionPool):
41 """
42 A connection pool for making HTTP requests via an HTTP proxy.
43
44 Parameters
45 ----------
46 proxy_url:
47 The URL of the proxy service as a 4-tuple of (scheme, host, port, path).
48 proxy_headers:
49 A list of proxy headers to include.
50 proxy_mode:
51 A proxy mode to operate in. May be "DEFAULT", "FORWARD_ONLY", or "TUNNEL_ONLY".
52 ssl_context:
53 An SSL context to use for verifying connections.
54 max_connections:
55 The maximum number of concurrent connections to allow.
56 max_keepalive_connections:
57 The maximum number of connections to allow before closing keep-alive
58 connections.
59 http2:
60 Enable HTTP/2 support.
36 class HTTPProxy(ConnectionPool):
37 """
38 A connection pool that sends requests via an HTTP proxy.
61 39 """
62 40
63 41 def __init__(
64 42 self,
65 proxy_url: URL,
66 proxy_headers: Headers = None,
67 proxy_mode: str = "DEFAULT",
68 ssl_context: SSLContext = None,
69 max_connections: int = None,
43 proxy_url: Union[URL, bytes, str],
44 proxy_headers: Union[HeadersAsMapping, HeadersAsSequence] = None,
45 ssl_context: ssl.SSLContext = None,
46 max_connections: Optional[int] = 10,
70 47 max_keepalive_connections: int = None,
71 48 keepalive_expiry: float = None,
72 http2: bool = False,
73 backend: str = "sync",
74 # Deprecated argument style:
75 max_keepalive: int = None,
76 ):
77 assert proxy_mode in ("DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY")
78
79 self.proxy_origin = url_to_origin(proxy_url)
80 self.proxy_headers = [] if proxy_headers is None else proxy_headers
81 self.proxy_mode = proxy_mode
49 retries: int = 0,
50 local_address: str = None,
51 uds: str = None,
52 network_backend: NetworkBackend = None,
53 ) -> None:
54 """
55 A connection pool for making HTTP requests via an HTTP proxy.
56
57 Parameters:
58 proxy_url: The URL to use when connecting to the proxy server.
59 For example `"http://127.0.0.1:8080/"`.
60 proxy_headers: Any HTTP headers to use for the proxy requests.
61 For example `{"Proxy-Authorization": "Basic <username>:<password>"}`.
62 ssl_context: An SSL context to use for verifying connections.
63 If not specified, the default `httpcore.default_ssl_context()`
64 will be used.
65 max_connections: The maximum number of concurrent HTTP connections that
66 the pool should allow. Any attempt to send a request on a pool that
67 would exceed this amount will block until a connection is available.
68 max_keepalive_connections: The maximum number of idle HTTP connections
69 that will be maintained in the pool.
70 keepalive_expiry: The duration in seconds that an idle HTTP connection
71 may be maintained for before being expired from the pool.
72 retries: The maximum number of retries when trying to establish
73 a connection.
74 local_address: Local address to connect from. Can also be used to
75 connect using a particular address family. Using
76 `local_address="0.0.0.0"` will connect using an `AF_INET` address
77 (IPv4), while using `local_address="::"` will connect using an
78 `AF_INET6` address (IPv6).
79 uds: Path to a Unix Domain Socket to use instead of TCP sockets.
80 network_backend: A backend instance to use for handling network I/O.
81 """
82 if ssl_context is None:
83 ssl_context = default_ssl_context()
84
82 85 super().__init__(
83 86 ssl_context=ssl_context,
84 87 max_connections=max_connections,
85 88 max_keepalive_connections=max_keepalive_connections,
86 89 keepalive_expiry=keepalive_expiry,
87 http2=http2,
88 backend=backend,
89 max_keepalive=max_keepalive,
90 )
91
92 def handle_request(
90 network_backend=network_backend,
91 retries=retries,
92 local_address=local_address,
93 uds=uds,
94 )
95 self._ssl_context = ssl_context
96 self._proxy_url = enforce_url(proxy_url, name="proxy_url")
97 self._proxy_headers = enforce_headers(proxy_headers, name="proxy_headers")
98
99 def create_connection(self, origin: Origin) -> ConnectionInterface:
100 if origin.scheme == b"http":
101 return ForwardHTTPConnection(
102 proxy_origin=self._proxy_url.origin,
103 keepalive_expiry=self._keepalive_expiry,
104 network_backend=self._network_backend,
105 )
106 return TunnelHTTPConnection(
107 proxy_origin=self._proxy_url.origin,
108 proxy_headers=self._proxy_headers,
109 remote_origin=origin,
110 ssl_context=self._ssl_context,
111 keepalive_expiry=self._keepalive_expiry,
112 network_backend=self._network_backend,
113 )
114
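Usage, as a hedged sketch (the proxy URL and credentials are placeholders):

```python
import httpcore

proxy = httpcore.HTTPProxy(
    proxy_url="http://127.0.0.1:8080/",
    proxy_headers={"Proxy-Authorization": "Basic dXNlcjpwYXNz"},  # placeholder credentials
)
with proxy:
    # http:// origins are forwarded; https:// origins are tunnelled via CONNECT.
    response = proxy.request("GET", "https://example.org/")
    print(response.status)
```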
115
116 class ForwardHTTPConnection(ConnectionInterface):
117 def __init__(
93 118 self,
94 method: bytes,
95 url: URL,
96 headers: Headers,
97 stream: SyncByteStream,
98 extensions: dict,
99 ) -> Tuple[int, Headers, SyncByteStream, dict]:
100 if self._keepalive_expiry is not None:
101 self._keepalive_sweep()
102
103 if (
104 self.proxy_mode == "DEFAULT" and url[0] == b"http"
105 ) or self.proxy_mode == "FORWARD_ONLY":
106 # By default HTTP requests should be forwarded.
107 logger.trace(
108 "forward_request proxy_origin=%r proxy_headers=%r method=%r url=%r",
109 self.proxy_origin,
110 self.proxy_headers,
111 method,
112 url,
113 )
114 return self._forward_request(
115 method, url, headers=headers, stream=stream, extensions=extensions
116 )
117 else:
118 # By default HTTPS should be tunnelled.
119 logger.trace(
120 "tunnel_request proxy_origin=%r proxy_headers=%r method=%r url=%r",
121 self.proxy_origin,
122 self.proxy_headers,
123 method,
124 url,
125 )
126 return self._tunnel_request(
127 method, url, headers=headers, stream=stream, extensions=extensions
128 )
129
130 def _forward_request(
119 proxy_origin: Origin,
120 proxy_headers: Union[HeadersAsMapping, HeadersAsSequence] = None,
121 keepalive_expiry: float = None,
122 network_backend: NetworkBackend = None,
123 ) -> None:
124 self._connection = HTTPConnection(
125 origin=proxy_origin,
126 keepalive_expiry=keepalive_expiry,
127 network_backend=network_backend,
128 )
129 self._proxy_origin = proxy_origin
130 self._proxy_headers = enforce_headers(proxy_headers, name="proxy_headers")
131
132 def handle_request(self, request: Request) -> Response:
133 headers = merge_headers(self._proxy_headers, request.headers)
134 url = URL(
135 scheme=self._proxy_origin.scheme,
136 host=self._proxy_origin.host,
137 port=self._proxy_origin.port,
138 target=bytes(request.url),
139 )
140 proxy_request = Request(
141 method=request.method,
142 url=url,
143 headers=headers,
144 content=request.stream,
145 extensions=request.extensions,
146 )
147 return self._connection.handle_request(proxy_request)
148
149 def can_handle_request(self, origin: Origin) -> bool:
150 return origin.scheme == b"http"
151
152 def close(self) -> None:
153 self._connection.close()
154
155 def info(self) -> str:
156 return self._connection.info()
157
158 def is_available(self) -> bool:
159 return self._connection.is_available()
160
161 def has_expired(self) -> bool:
162 return self._connection.has_expired()
163
164 def is_idle(self) -> bool:
165 return self._connection.is_idle()
166
167 def is_closed(self) -> bool:
168 return self._connection.is_closed()
169
170 def __repr__(self) -> str:
171 return f"<{self.__class__.__name__} [{self.info()}]>"
172
173
174 class TunnelHTTPConnection(ConnectionInterface):
175 def __init__(
131 176 self,
132 method: bytes,
133 url: URL,
134 headers: Headers,
135 stream: SyncByteStream,
136 extensions: dict,
137 ) -> Tuple[int, Headers, SyncByteStream, dict]:
138 """
139 Forwarded proxy requests include the entire URL as the HTTP target,
140 rather than just the path.
141 """
142 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
143 origin = self.proxy_origin
144 connection = self._get_connection_from_pool(origin)
145
146 if connection is None:
147 connection = SyncHTTPConnection(
148 origin=origin,
149 http2=self._http2,
150 keepalive_expiry=self._keepalive_expiry,
151 ssl_context=self._ssl_context,
152 )
153 self._add_to_pool(connection, timeout)
154
155 # Issue a forwarded proxy request...
156
157 # GET https://www.example.org/path HTTP/1.1
158 # [proxy headers]
159 # [headers]
160 scheme, host, port, path = url
161 if port is None:
162 target = b"%b://%b%b" % (scheme, host, path)
163 else:
164 target = b"%b://%b:%d%b" % (scheme, host, port, path)
165
166 url = self.proxy_origin + (target,)
167 headers = merge_headers(self.proxy_headers, headers)
168
169 (
170 status_code,
171 headers,
172 stream,
173 extensions,
174 ) = connection.handle_request(
175 method, url, headers=headers, stream=stream, extensions=extensions
176 )
177
178 wrapped_stream = ResponseByteStream(
179 stream, connection=connection, callback=self._response_closed
180 )
181
182 return status_code, headers, wrapped_stream, extensions
183
184 def _tunnel_request(
185 self,
186 method: bytes,
187 url: URL,
188 headers: Headers,
189 stream: SyncByteStream,
190 extensions: dict,
191 ) -> Tuple[int, Headers, SyncByteStream, dict]:
192 """
193 Tunnelled proxy requests require an initial CONNECT request to
194 establish the connection, and then send regular requests.
195 """
196 timeout = cast(TimeoutDict, extensions.get("timeout", {}))
197 origin = url_to_origin(url)
198 connection = self._get_connection_from_pool(origin)
199
200 if connection is None:
201 scheme, host, port = origin
202
203 # First, create a connection to the proxy server
204 proxy_connection = SyncHTTPConnection(
205 origin=self.proxy_origin,
206 http2=self._http2,
207 keepalive_expiry=self._keepalive_expiry,
208 ssl_context=self._ssl_context,
209 )
210
211 # Issue a CONNECT request...
212
213 # CONNECT www.example.org:80 HTTP/1.1
214 # [proxy-headers]
215 target = b"%b:%d" % (host, port)
216 connect_url = self.proxy_origin + (target,)
217 connect_headers = [(b"Host", target), (b"Accept", b"*/*")]
218 connect_headers = merge_headers(connect_headers, self.proxy_headers)
219
220 try:
221 (
222 proxy_status_code,
223 _,
224 proxy_stream,
225 _,
226 ) = proxy_connection.handle_request(
227 b"CONNECT",
228 connect_url,
229 headers=connect_headers,
230 stream=ByteStream(b""),
231 extensions=extensions,
232 )
233
234 proxy_reason = get_reason_phrase(proxy_status_code)
235 logger.trace(
236 "tunnel_response proxy_status_code=%r proxy_reason=%r ",
237 proxy_status_code,
238 proxy_reason,
239 )
240 # Read the response data without closing the socket
241 for _ in proxy_stream:
242 pass
243
244 # See if the tunnel was successfully established.
245 if proxy_status_code < 200 or proxy_status_code > 299:
246 msg = "%d %s" % (proxy_status_code, proxy_reason)
177 proxy_origin: Origin,
178 remote_origin: Origin,
179 ssl_context: ssl.SSLContext,
180 proxy_headers: Sequence[Tuple[bytes, bytes]] = None,
181 keepalive_expiry: float = None,
182 network_backend: NetworkBackend = None,
183 ) -> None:
184 self._connection: ConnectionInterface = HTTPConnection(
185 origin=proxy_origin,
186 keepalive_expiry=keepalive_expiry,
187 network_backend=network_backend,
188 )
189 self._proxy_origin = proxy_origin
190 self._remote_origin = remote_origin
191 self._ssl_context = ssl_context
192 self._proxy_headers = enforce_headers(proxy_headers, name="proxy_headers")
193 self._keepalive_expiry = keepalive_expiry
194 self._connect_lock = Lock()
195 self._connected = False
196
197 def handle_request(self, request: Request) -> Response:
198 timeouts = request.extensions.get("timeout", {})
199 timeout = timeouts.get("connect", None)
200
201 with self._connect_lock:
202 if not self._connected:
203 target = b"%b:%d" % (self._remote_origin.host, self._remote_origin.port)
204
205 connect_url = URL(
206 scheme=self._proxy_origin.scheme,
207 host=self._proxy_origin.host,
208 port=self._proxy_origin.port,
209 target=target,
210 )
211 connect_headers = merge_headers(
212 [(b"Host", target), (b"Accept", b"*/*")], self._proxy_headers
213 )
214 connect_request = Request(
215 method=b"CONNECT", url=connect_url, headers=connect_headers
216 )
217 connect_response = self._connection.handle_request(
218 connect_request
219 )
220
221 if connect_response.status < 200 or connect_response.status > 299:
222 reason_bytes = connect_response.extensions.get("reason_phrase", b"")
223 reason_str = reason_bytes.decode("ascii", errors="ignore")
224 msg = "%d %s" % (connect_response.status, reason_str)
225 self._connection.close()
247 226 raise ProxyError(msg)
248 227
249 # Upgrade to TLS if required
250 # We assume the target speaks TLS on the specified port
251 if scheme == b"https":
252 proxy_connection.start_tls(host, self._ssl_context, timeout)
253 except Exception as exc:
254 proxy_connection.close()
255 raise ProxyError(exc)
256
257 # The CONNECT request is successful, so we have now SWITCHED PROTOCOLS.
258 # This means the proxy connection is now unusable, and we must create
259 # a new one for regular requests, making sure to use the same socket to
260 # retain the tunnel.
261 connection = SyncHTTPConnection(
262 origin=origin,
263 http2=self._http2,
264 keepalive_expiry=self._keepalive_expiry,
265 ssl_context=self._ssl_context,
266 socket=proxy_connection.socket,
267 )
268 self._add_to_pool(connection, timeout)
269
270 # Once the connection has been established we can send requests on
271 # it as normal.
272 (
273 status_code,
274 headers,
275 stream,
276 extensions,
277 ) = connection.handle_request(
278 method,
279 url,
280 headers=headers,
281 stream=stream,
282 extensions=extensions,
283 )
284
285 wrapped_stream = ResponseByteStream(
286 stream, connection=connection, callback=self._response_closed
287 )
288
289 return status_code, headers, wrapped_stream, extensions
228 stream = connect_response.extensions["network_stream"]
229 stream = stream.start_tls(
230 ssl_context=self._ssl_context,
231 server_hostname=self._remote_origin.host.decode("ascii"),
232 timeout=timeout,
233 )
234 self._connection = HTTP11Connection(
235 origin=self._remote_origin,
236 stream=stream,
237 keepalive_expiry=self._keepalive_expiry,
238 )
239 self._connected = True
240 return self._connection.handle_request(request)
241
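On the wire, the tunnel establishment amounts to the exchange below (hostname illustrative, and the reason phrase is whatever the proxy sends); any 2xx response means the same socket is upgraded via `start_tls()` and handed to an `HTTP11Connection`:

```
CONNECT example.org:443 HTTP/1.1
Host: example.org:443
Accept: */*

HTTP/1.1 200 Connection established
```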
242 def can_handle_request(self, origin: Origin) -> bool:
243 return origin == self._remote_origin
244
245 def close(self) -> None:
246 self._connection.close()
247
248 def info(self) -> str:
249 return self._connection.info()
250
251 def is_available(self) -> bool:
252 return self._connection.is_available()
253
254 def has_expired(self) -> bool:
255 return self._connection.has_expired()
256
257 def is_idle(self) -> bool:
258 return self._connection.is_idle()
259
260 def is_closed(self) -> bool:
261 return self._connection.is_closed()
262
263 def __repr__(self) -> str:
264 return f"<{self.__class__.__name__} [{self.info()}]>"
0 from typing import Iterator, Union
1
2 from contextlib import contextmanager
3 from .._models import (
4 URL,
5 Origin,
6 Request,
7 Response,
8 enforce_bytes,
9 enforce_headers,
10 enforce_url,
11 include_request_headers,
12 )
13
14
15 class RequestInterface:
16 def request(
17 self,
18 method: Union[bytes, str],
19 url: Union[URL, bytes, str],
20 *,
21 headers: Union[dict, list] = None,
22 content: Union[bytes, Iterator[bytes]] = None,
23 extensions: dict = None,
24 ) -> Response:
25 # Strict type checking on our parameters.
26 method = enforce_bytes(method, name="method")
27 url = enforce_url(url, name="url")
28 headers = enforce_headers(headers, name="headers")
29
30 # Include Host header, and optionally Content-Length or Transfer-Encoding.
31 headers = include_request_headers(headers, url=url, content=content)
32
33 request = Request(
34 method=method,
35 url=url,
36 headers=headers,
37 content=content,
38 extensions=extensions,
39 )
40 response = self.handle_request(request)
41 try:
42 response.read()
43 finally:
44 response.close()
45 return response
46
47 @contextmanager
48 def stream(
49 self,
50 method: Union[bytes, str],
51 url: Union[URL, bytes, str],
52 *,
53 headers: Union[dict, list] = None,
54 content: Union[bytes, Iterator[bytes]] = None,
55 extensions: dict = None,
56 ) -> Iterator[Response]:
57 # Strict type checking on our parameters.
58 method = enforce_bytes(method, name="method")
59 url = enforce_url(url, name="url")
60 headers = enforce_headers(headers, name="headers")
61
62 # Include Host header, and optionally Content-Length or Transfer-Encoding.
63 headers = include_request_headers(headers, url=url, content=content)
64
65 request = Request(
66 method=method,
67 url=url,
68 headers=headers,
69 content=content,
70 extensions=extensions,
71 )
72 response = self.handle_request(request)
73 try:
74 yield response
75 finally:
76 response.close()
77
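Both entry points in use, as a brief sketch (this assumes the `Response.iter_stream()` streaming API from this release):

```python
import httpcore

with httpcore.ConnectionPool() as pool:
    # .request() reads the whole body, then closes the response.
    response = pool.request("GET", "https://www.example.org/")
    print(response.status, len(response.content))

    # .stream() hands the response back before the body has been read.
    with pool.stream("GET", "https://www.example.org/") as response:
        for chunk in response.iter_stream():
            print(len(chunk))
```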
78 def handle_request(self, request: Request) -> Response:
79 raise NotImplementedError() # pragma: nocover
80
81
82 class ConnectionInterface(RequestInterface):
83 def close(self) -> None:
84 raise NotImplementedError() # pragma: nocover
85
86 def info(self) -> str:
87 raise NotImplementedError() # pragma: nocover
88
89 def can_handle_request(self, origin: Origin) -> bool:
90 raise NotImplementedError() # pragma: nocover
91
92 def is_available(self) -> bool:
93 """
94 Return `True` if the connection is currently able to accept an
95 outgoing request.
96
97 An HTTP/1.1 connection will only be available if it is currently idle.
98
99 An HTTP/2 connection will be available so long as the stream ID space is
100 not yet exhausted, and the connection is not in an error state.
101
102 While the connection is being established we may not yet know if it is going
103 to result in an HTTP/1.1 or HTTP/2 connection. The connection should be
104 treated as being available, but might ultimately raise `NewConnectionRequired`
105 exceptions if multiple requests are attempted over a connection
106 that ends up being established as HTTP/1.1.
107 """
108 raise NotImplementedError() # pragma: nocover
109
110 def has_expired(self) -> bool:
111 """
112 Return `True` if the connection is in a state where it should be closed.
113
114 This either means that the connection is idle and it has passed the
115 expiry time on its keep-alive, or that the server has sent an EOF.
116 """
117 raise NotImplementedError() # pragma: nocover
118
119 def is_idle(self) -> bool:
120 """
121 Return `True` if the connection is currently idle.
122 """
123 raise NotImplementedError() # pragma: nocover
124
125 def is_closed(self) -> bool:
126 """
127 Return `True` if the connection has been closed.
128
129 Used when a response is closed to determine if the connection may be
130 returned to the connection pool or not.
131 """
132 raise NotImplementedError() # pragma: nocover
0 import threading
1 from types import TracebackType
2 from typing import Type
3
4 import anyio
5
6 from ._exceptions import PoolTimeout, map_exceptions
7
8
9 class AsyncLock:
10 def __init__(self) -> None:
11 self._lock = anyio.Lock()
12
13 async def __aenter__(self) -> "AsyncLock":
14 await self._lock.acquire()
15 return self
16
17 async def __aexit__(
18 self,
19 exc_type: Type[BaseException] = None,
20 exc_value: BaseException = None,
21 traceback: TracebackType = None,
22 ) -> None:
23 self._lock.release()
24
25
26 class AsyncEvent:
27 def __init__(self) -> None:
28 self._event = anyio.Event()
29
30 def set(self) -> None:
31 self._event.set()
32
33 async def wait(self, timeout: float = None) -> None:
34 exc_map: dict = {TimeoutError: PoolTimeout}
35 with map_exceptions(exc_map):
36 with anyio.fail_after(timeout):
37 await self._event.wait()
38
39
40 class AsyncSemaphore:
41 def __init__(self, bound: int) -> None:
42 self._semaphore = anyio.Semaphore(initial_value=bound, max_value=bound)
43
44 async def acquire(self) -> None:
45 await self._semaphore.acquire()
46
47 async def release(self) -> None:
48 self._semaphore.release()
49
50
51 class Lock:
52 def __init__(self) -> None:
53 self._lock = threading.Lock()
54
55 def __enter__(self) -> "Lock":
56 self._lock.acquire()
57 return self
58
59 def __exit__(
60 self,
61 exc_type: Type[BaseException] = None,
62 exc_value: BaseException = None,
63 traceback: TracebackType = None,
64 ) -> None:
65 self._lock.release()
66
67
68 class Event:
69 def __init__(self) -> None:
70 self._event = threading.Event()
71
72 def set(self) -> None:
73 self._event.set()
74
75 def wait(self, timeout: float = None) -> None:
76 if not self._event.wait(timeout=timeout):
77 raise PoolTimeout() # pragma: nocover
78
79
80 class Semaphore:
81 def __init__(self, bound: int) -> None:
82 self._semaphore = threading.Semaphore(value=bound)
83
84 def acquire(self) -> None:
85 self._semaphore.acquire()
86
87 def release(self) -> None:
88 self._semaphore.release()
httpcore/_threadlock.py (+0, -35, file deleted)
0 import threading
1 from types import TracebackType
2 from typing import Type
3
4
5 class ThreadLock:
6 """
7 Provides thread safety when used as a sync context manager, or a
8 no-op when used as an async context manager.
9 """
10
11 def __init__(self) -> None:
12 self.lock = threading.Lock()
13
14 def __enter__(self) -> None:
15 self.lock.acquire()
16
17 def __exit__(
18 self,
19 exc_type: Type[BaseException] = None,
20 exc_value: BaseException = None,
21 traceback: TracebackType = None,
22 ) -> None:
23 self.lock.release()
24
25 async def __aenter__(self) -> None:
26 pass
27
28 async def __aexit__(
29 self,
30 exc_type: Type[BaseException] = None,
31 exc_value: BaseException = None,
32 traceback: TracebackType = None,
33 ) -> None:
34 pass
0 from types import TracebackType
1 from typing import Any, Type
2
3 from ._models import Request
4
5
6 class Trace:
7 def __init__(self, name: str, request: Request, kwargs: dict = None) -> None:
8 self.name = name
9 self.trace = request.extensions.get("trace")
10 self.kwargs = kwargs or {}
11 self.return_value: Any = None
12
13 def __enter__(self) -> "Trace":
14 if self.trace is not None:
15 info = self.kwargs
16 self.trace(f"{self.name}.started", info)
17 return self
18
19 def __exit__(
20 self,
21 exc_type: Type[BaseException] = None,
22 exc_value: BaseException = None,
23 traceback: TracebackType = None,
24 ) -> None:
25 if self.trace is not None:
26 if exc_value is None:
27 info: dict = {"return_value": self.return_value}
28 self.trace(f"{self.name}.complete", info)
29 else:
30 info = {"exception": exc_value}
31 self.trace(f"{self.name}.failed", info)
32
33 async def __aenter__(self) -> "Trace":
34 if self.trace is not None:
35 info = self.kwargs
36 await self.trace(f"{self.name}.started", info)
37 return self
38
39 async def __aexit__(
40 self,
41 exc_type: Type[BaseException] = None,
42 exc_value: BaseException = None,
43 traceback: TracebackType = None,
44 ) -> None:
45 if self.trace is not None:
46 if exc_value is None:
47 info: dict = {"return_value": self.return_value}
48 await self.trace(f"{self.name}.complete", info)
49 else:
50 info = {"exception": exc_value}
51 await self.trace(f"{self.name}.failed", info)
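Hooking into this from the outside only requires a `"trace"` extension on the request; a minimal sync sketch (pass a coroutine instead when using the async classes):

```python
import httpcore

def log(event_name: str, info: dict) -> None:
    print(event_name, info)

with httpcore.ConnectionPool() as pool:
    pool.request("GET", "https://www.example.org/", extensions={"trace": log})
# Emits '<name>.started' / '.complete' / '.failed' events, using the
# Trace(...) labels seen above, e.g. 'http2.send_request_headers'.
```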
httpcore/_types.py (+0, -12, file deleted)
0 """
1 Type definitions for type checking purposes.
2 """
3
4 from typing import List, Mapping, Optional, Tuple, TypeVar, Union
5
6 T = TypeVar("T")
7 StrOrBytes = Union[str, bytes]
8 Origin = Tuple[bytes, bytes, int]
9 URL = Tuple[bytes, bytes, Optional[int], bytes]
10 Headers = List[Tuple[bytes, bytes]]
11 TimeoutDict = Mapping[str, Optional[float]]
0 import itertools
1 import logging
2 import os
3 0 import select
4 1 import socket
5 2 import sys
6 3 import typing
7 4
8 from ._types import URL, Origin
9
10 _LOGGER_INITIALIZED = False
11 TRACE_LOG_LEVEL = 5
12 DEFAULT_PORTS = {b"http": 80, b"https": 443}
13
14
15 class Logger(logging.Logger):
16 # Stub for type checkers.
17 def trace(self, message: str, *args: typing.Any, **kwargs: typing.Any) -> None:
18 ... # pragma: nocover
19
20
21 def get_logger(name: str) -> Logger:
22 """
23 Get a `logging.Logger` instance, and optionally
24 set up debug logging based on the HTTPCORE_LOG_LEVEL or HTTPX_LOG_LEVEL
25 environment variables.
26 """
27 global _LOGGER_INITIALIZED
28 if not _LOGGER_INITIALIZED:
29 _LOGGER_INITIALIZED = True
30 logging.addLevelName(TRACE_LOG_LEVEL, "TRACE")
31
32 log_level = os.environ.get(
33 "HTTPCORE_LOG_LEVEL", os.environ.get("HTTPX_LOG_LEVEL", "")
34 ).upper()
35 if log_level in ("DEBUG", "TRACE"):
36 logger = logging.getLogger("httpcore")
37 logger.setLevel(logging.DEBUG if log_level == "DEBUG" else TRACE_LOG_LEVEL)
38 handler = logging.StreamHandler(sys.stderr)
39 handler.setFormatter(
40 logging.Formatter(
41 fmt="%(levelname)s [%(asctime)s] %(name)s - %(message)s",
42 datefmt="%Y-%m-%d %H:%M:%S",
43 )
44 )
45 logger.addHandler(handler)
46
47 logger = logging.getLogger(name)
48
49 def trace(message: str, *args: typing.Any, **kwargs: typing.Any) -> None:
50 logger.log(TRACE_LOG_LEVEL, message, *args, **kwargs)
51
52 logger.trace = trace # type: ignore
53
54 return typing.cast(Logger, logger)
55
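So enabling the debug output is a one-liner at the shell (script name illustrative):

```
$ HTTPCORE_LOG_LEVEL=trace python fetch_example.py
```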
56
57 def url_to_origin(url: URL) -> Origin:
58 scheme, host, explicit_port = url[:3]
59 default_port = DEFAULT_PORTS[scheme]
60 port = default_port if explicit_port is None else explicit_port
61 return scheme, host, port
62
63
64 def origin_to_url_string(origin: Origin) -> str:
65 scheme, host, explicit_port = origin
66 port = f":{explicit_port}" if explicit_port != DEFAULT_PORTS[scheme] else ""
67 return f"{scheme.decode('ascii')}://{host.decode('ascii')}{port}"
68
69
70 def exponential_backoff(factor: float) -> typing.Iterator[float]:
71 yield 0
72 for n in itertools.count(2):
73 yield factor * (2 ** (n - 2))
74
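For instance, the first few delays for `factor=0.5` are an immediate first retry followed by doubling intervals:

```python
import itertools

delays = list(itertools.islice(exponential_backoff(0.5), 5))
# -> [0, 0.5, 1.0, 2.0, 4.0]
```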
75 5
76 6 def is_socket_readable(sock: typing.Optional[socket.socket]) -> bool:
77 7 """
78 8 Return whether a socket, as identified by its file descriptor, is readable.
79
80 9 "A socket is readable" means that the read buffer isn't empty, i.e. that calling
81 10 .recv() on it would immediately return some data.
82 11 """
87 16 # descriptor, we treat it as being readable, as if the next read operation
88 17 # on it is ready to return the terminating `b""`.
89 18 sock_fd = None if sock is None else sock.fileno()
90 if sock_fd is None or sock_fd < 0:
19 if sock_fd is None or sock_fd < 0: # pragma: nocover
91 20 return True
92 21
93 22 # The implementation below was stolen from:
96 25
97 26 # Use select.select on Windows or when poll is unavailable, and select.poll
98 27 # everywhere else. (E.g. when eventlet is in use. See #327)
99 if sys.platform == "win32" or getattr(select, "poll", None) is None:
28 if (
29 sys.platform == "win32" or getattr(select, "poll", None) is None
30 ): # pragma: nocover
100 31 rready, _, _ = select.select([sock_fd], [], [], 0)
101 32 return bool(rready)
102 33 p = select.poll()
0 import ssl
1 import typing
2
3 import anyio
4
5 from .._exceptions import (
6 ConnectError,
7 ConnectTimeout,
8 ReadError,
9 ReadTimeout,
10 WriteError,
11 WriteTimeout,
12 map_exceptions,
13 )
14 from .._utils import is_socket_readable
15 from .base import AsyncNetworkBackend, AsyncNetworkStream
16
17
18 class AsyncIOStream(AsyncNetworkStream):
19 def __init__(self, stream: anyio.abc.ByteStream) -> None:
20 self._stream = stream
21
22 async def read(self, max_bytes: int, timeout: float = None) -> bytes:
23 exc_map = {
24 TimeoutError: ReadTimeout,
25 anyio.BrokenResourceError: ReadError,
26 }
27 with map_exceptions(exc_map):
28 with anyio.fail_after(timeout):
29 try:
30 return await self._stream.receive(max_bytes=max_bytes)
31 except anyio.EndOfStream: # pragma: nocover
32 return b""
33
34 async def write(self, buffer: bytes, timeout: float = None) -> None:
35 if not buffer:
36 return
37
38 exc_map = {
39 TimeoutError: WriteTimeout,
40 anyio.BrokenResourceError: WriteError,
41 }
42 with map_exceptions(exc_map):
43 with anyio.fail_after(timeout):
44 await self._stream.send(item=buffer)
45
46 async def aclose(self) -> None:
47 await self._stream.aclose()
48
49 async def start_tls(
50 self,
51 ssl_context: ssl.SSLContext,
52 server_hostname: str = None,
53 timeout: float = None,
54 ) -> AsyncNetworkStream:
55 exc_map = {
56 TimeoutError: ConnectTimeout,
57 anyio.BrokenResourceError: ConnectError,
58 }
59 with map_exceptions(exc_map):
60 with anyio.fail_after(timeout):
61 ssl_stream = await anyio.streams.tls.TLSStream.wrap(
62 self._stream,
63 ssl_context=ssl_context,
64 hostname=server_hostname,
65 standard_compatible=False,
66 server_side=False,
67 )
68 return AsyncIOStream(ssl_stream)
69
70 def get_extra_info(self, info: str) -> typing.Any:
71 if info == "ssl_object":
72 return self._stream.extra(anyio.streams.tls.TLSAttribute.ssl_object, None)
73 if info == "client_addr":
74 return self._stream.extra(anyio.abc.SocketAttribute.local_address, None)
75 if info == "server_addr":
76 return self._stream.extra(anyio.abc.SocketAttribute.remote_address, None)
77 if info == "socket":
78 return self._stream.extra(anyio.abc.SocketAttribute.raw_socket, None)
79 if info == "is_readable":
80 sock = self._stream.extra(anyio.abc.SocketAttribute.raw_socket, None)
81 return is_socket_readable(sock)
82 return None
83
84
85 class AsyncIOBackend(AsyncNetworkBackend):
86 async def connect_tcp(
87 self, host: str, port: int, timeout: float = None, local_address: str = None
88 ) -> AsyncNetworkStream:
89 exc_map = {
90 TimeoutError: ConnectTimeout,
91 OSError: ConnectError,
92 anyio.BrokenResourceError: ConnectError,
93 }
94 with map_exceptions(exc_map):
95 with anyio.fail_after(timeout):
96 stream: anyio.abc.ByteStream = await anyio.connect_tcp(
97 remote_host=host,
98 remote_port=port,
99 local_host=local_address,
100 )
101 return AsyncIOStream(stream)
102
103 async def connect_unix_socket(
104 self, path: str, timeout: float = None
105 ) -> AsyncNetworkStream: # pragma: nocover
106 exc_map = {
107 TimeoutError: ConnectTimeout,
108 OSError: ConnectError,
109 anyio.BrokenResourceError: ConnectError,
110 }
111 with map_exceptions(exc_map):
112 with anyio.fail_after(timeout):
113 stream: anyio.abc.ByteStream = await anyio.connect_unix(path)
114 return AsyncIOStream(stream)
115
116 async def sleep(self, seconds: float) -> None:
117 await anyio.sleep(seconds) # pragma: nocover
0 import sniffio
1
2 from .base import AsyncNetworkBackend, AsyncNetworkStream
3
4
5 class AutoBackend(AsyncNetworkBackend):
6 async def _init_backend(self) -> None:
7 if not (hasattr(self, "_backend")):
8 backend = sniffio.current_async_library()
9 if backend == "trio":
10 from .trio import TrioBackend
11
12 self._backend: AsyncNetworkBackend = TrioBackend()
13 else:
14 from .asyncio import AsyncIOBackend
15
16 self._backend = AsyncIOBackend()
17
18 async def connect_tcp(
19 self, host: str, port: int, timeout: float = None, local_address: str = None
20 ) -> AsyncNetworkStream:
21 await self._init_backend()
22 return await self._backend.connect_tcp(
23 host, port, timeout=timeout, local_address=local_address
24 )
25
26 async def connect_unix_socket(
27 self, path: str, timeout: float = None
28 ) -> AsyncNetworkStream: # pragma: nocover
29 await self._init_backend()
30 return await self._backend.connect_unix_socket(path, timeout=timeout)
31
32 async def sleep(self, seconds: float) -> None: # pragma: nocover
33 await self._init_backend()
34 return await self._backend.sleep(seconds)
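A hedged sketch of the autodetection in practice, assuming the module layout in this diff (`httpcore.backends.auto`); under `anyio.run()` sniffio reports `asyncio`, so the anyio-based backend is selected on first use:

```python
import anyio
from httpcore.backends.auto import AutoBackend

async def main() -> None:
    backend = AutoBackend()  # backend choice is deferred until the first call
    stream = await backend.connect_tcp("example.org", 80, timeout=5.0)
    await stream.write(b"HEAD / HTTP/1.1\r\nHost: example.org\r\n\r\n")
    print(await stream.read(4096))
    await stream.aclose()

anyio.run(main)
```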
0 import ssl
1 import time
2 import typing
3
4
5 class NetworkStream:
6 def read(self, max_bytes: int, timeout: float = None) -> bytes:
7 raise NotImplementedError() # pragma: nocover
8
9 def write(self, buffer: bytes, timeout: float = None) -> None:
10 raise NotImplementedError() # pragma: nocover
11
12 def close(self) -> None:
13 raise NotImplementedError() # pragma: nocover
14
15 def start_tls(
16 self,
17 ssl_context: ssl.SSLContext,
18 server_hostname: str = None,
19 timeout: float = None,
20 ) -> "NetworkStream":
21 raise NotImplementedError() # pragma: nocover
22
23 def get_extra_info(self, info: str) -> typing.Any:
24 return None # pragma: nocover
25
26
27 class NetworkBackend:
28 def connect_tcp(
29 self, host: str, port: int, timeout: float = None, local_address: str = None
30 ) -> NetworkStream:
31 raise NotImplementedError() # pragma: nocover
32
33 def connect_unix_socket(self, path: str, timeout: float = None) -> NetworkStream:
34 raise NotImplementedError() # pragma: nocover
35
36 def sleep(self, seconds: float) -> None:
37 time.sleep(seconds) # pragma: nocover
38
39
40 class AsyncNetworkStream:
41 async def read(self, max_bytes: int, timeout: float = None) -> bytes:
42 raise NotImplementedError() # pragma: nocover
43
44 async def write(self, buffer: bytes, timeout: float = None) -> None:
45 raise NotImplementedError() # pragma: nocover
46
47 async def aclose(self) -> None:
48 raise NotImplementedError() # pragma: nocover
49
50 async def start_tls(
51 self,
52 ssl_context: ssl.SSLContext,
53 server_hostname: str = None,
54 timeout: float = None,
55 ) -> "AsyncNetworkStream":
56 raise NotImplementedError() # pragma: nocover
57
58 def get_extra_info(self, info: str) -> typing.Any:
59 return None # pragma: nocover
60
61
62 class AsyncNetworkBackend:
63 async def connect_tcp(
64 self, host: str, port: int, timeout: float = None, local_address: str = None
65 ) -> AsyncNetworkStream:
66 raise NotImplementedError() # pragma: nocover
67
68 async def connect_unix_socket(
69 self, path: str, timeout: float = None
70 ) -> AsyncNetworkStream:
71 raise NotImplementedError() # pragma: nocover
72
73 async def sleep(self, seconds: float) -> None:
74 raise NotImplementedError() # pragma: nocover
0 import ssl
1 import typing
2
3 from .base import AsyncNetworkBackend, AsyncNetworkStream, NetworkBackend, NetworkStream
4
5
6 class MockSSLObject:
7 def __init__(self, http2: bool):
8 self._http2 = http2
9
10 def selected_alpn_protocol(self) -> str:
11 return "h2" if self._http2 else "http/1.1"
12
13
14 class MockStream(NetworkStream):
15 def __init__(self, buffer: typing.List[bytes], http2: bool = False) -> None:
16 self._buffer = buffer
17 self._http2 = http2
18
19 def read(self, max_bytes: int, timeout: float = None) -> bytes:
20 if not self._buffer:
21 return b""
22 return self._buffer.pop(0)
23
24 def write(self, buffer: bytes, timeout: float = None) -> None:
25 pass
26
27 def close(self) -> None:
28 pass
29
30 def start_tls(
31 self,
32 ssl_context: ssl.SSLContext,
33 server_hostname: str = None,
34 timeout: float = None,
35 ) -> NetworkStream:
36 return self
37
38 def get_extra_info(self, info: str) -> typing.Any:
39 return MockSSLObject(http2=self._http2) if info == "ssl_object" else None
40
41
42 class MockBackend(NetworkBackend):
43 def __init__(self, buffer: typing.List[bytes], http2: bool = False) -> None:
44 self._buffer = buffer
45 self._http2 = http2
46
47 def connect_tcp(
48 self, host: str, port: int, timeout: float = None, local_address: str = None
49 ) -> NetworkStream:
50 return MockStream(list(self._buffer), http2=self._http2)
51
52 def connect_unix_socket(self, path: str, timeout: float = None) -> NetworkStream:
53 return MockStream(list(self._buffer), http2=self._http2)
54
55 def sleep(self, seconds: float) -> None:
56 pass
57
58
59 class AsyncMockStream(AsyncNetworkStream):
60 def __init__(self, buffer: typing.List[bytes], http2: bool = False) -> None:
61 self._original_buffer = buffer
62 self._current_buffer = list(self._original_buffer)
63 self._http2 = http2
64
65 async def read(self, max_bytes: int, timeout: float = None) -> bytes:
66 if not self._current_buffer:
67 self._current_buffer = list(self._original_buffer)
68 return self._current_buffer.pop(0)
69
70 async def write(self, buffer: bytes, timeout: float = None) -> None:
71 pass
72
73 async def aclose(self) -> None:
74 pass
75
76 async def start_tls(
77 self,
78 ssl_context: ssl.SSLContext,
79 server_hostname: str = None,
80 timeout: float = None,
81 ) -> AsyncNetworkStream:
82 return self
83
84 def get_extra_info(self, info: str) -> typing.Any:
85 return MockSSLObject(http2=self._http2) if info == "ssl_object" else None
86
87
88 class AsyncMockBackend(AsyncNetworkBackend):
89 def __init__(self, buffer: typing.List[bytes], http2: bool = False) -> None:
90 self._buffer = buffer
91 self._http2 = http2
92
93 async def connect_tcp(
94 self, host: str, port: int, timeout: float = None, local_address: str = None
95 ) -> AsyncNetworkStream:
96 return AsyncMockStream(list(self._buffer), http2=self._http2)
97
98 async def connect_unix_socket(
99 self, path: str, timeout: float = None
100 ) -> AsyncNetworkStream:
101 return AsyncMockStream(list(self._buffer), http2=self._http2)
102
103 async def sleep(self, seconds: float) -> None:
104 pass
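These mocks slot straight into the pool for network-free testing; a hedged sketch, with the import path as laid out in this diff:

```python
import httpcore
from httpcore.backends.mock import MockBackend

network_backend = MockBackend([
    b"HTTP/1.1 200 OK\r\n",
    b"Content-Length: 13\r\n",
    b"\r\n",
    b"Hello, world!",
])
with httpcore.ConnectionPool(network_backend=network_backend) as pool:
    response = pool.request("GET", "http://example.org/")
    print(response.status, response.content)  # 200 b'Hello, world!'
```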
0 import socket
1 import ssl
2 import typing
3
4 from .._exceptions import (
5 ConnectError,
6 ConnectTimeout,
7 ReadError,
8 ReadTimeout,
9 WriteError,
10 WriteTimeout,
11 map_exceptions,
12 )
13 from .._utils import is_socket_readable
14 from .base import NetworkBackend, NetworkStream
15
16
17 class SyncStream(NetworkStream):
18 def __init__(self, sock: socket.socket) -> None:
19 self._sock = sock
20
21 def read(self, max_bytes: int, timeout: float = None) -> bytes:
22 exc_map = {socket.timeout: ReadTimeout, socket.error: ReadError}
23 with map_exceptions(exc_map):
24 self._sock.settimeout(timeout)
25 return self._sock.recv(max_bytes)
26
27 def write(self, buffer: bytes, timeout: float = None) -> None:
28 if not buffer:
29 return
30
31 exc_map = {socket.timeout: WriteTimeout, socket.error: WriteError}
32 with map_exceptions(exc_map):
33 while buffer:
34 self._sock.settimeout(timeout)
35 n = self._sock.send(buffer)
36 buffer = buffer[n:]
37
38 def close(self) -> None:
39 self._sock.close()
40
41 def start_tls(
42 self,
43 ssl_context: ssl.SSLContext,
44 server_hostname: str = None,
45 timeout: float = None,
46 ) -> NetworkStream:
47 exc_map = {socket.timeout: ConnectTimeout, socket.error: ConnectError}
48 with map_exceptions(exc_map):
49 self._sock.settimeout(timeout)
50 sock = ssl_context.wrap_socket(self._sock, server_hostname=server_hostname)
51 return SyncStream(sock)
52
53 def get_extra_info(self, info: str) -> typing.Any:
54 if info == "ssl_object" and isinstance(self._sock, ssl.SSLSocket):
55 return self._sock._sslobj # type: ignore
56 if info == "client_addr":
57 return self._sock.getsockname()
58 if info == "server_addr":
59 return self._sock.getpeername()
60 if info == "socket":
61 return self._sock
62 if info == "is_readable":
63 return is_socket_readable(self._sock)
64 return None
65
66
67 class SyncBackend(NetworkBackend):
68 def connect_tcp(
69 self, host: str, port: int, timeout: float = None, local_address: str = None
70 ) -> NetworkStream:
71 address = (host, port)
72 source_address = None if local_address is None else (local_address, 0)
73 exc_map = {socket.timeout: ConnectTimeout, socket.error: ConnectError}
74 with map_exceptions(exc_map):
75 sock = socket.create_connection(
76 address, timeout, source_address=source_address
77 )
78 return SyncStream(sock)
79
80 def connect_unix_socket(
81 self, path: str, timeout: float = None
82 ) -> NetworkStream: # pragma: nocover
83 exc_map = {socket.timeout: ConnectTimeout, socket.error: ConnectError}
84 with map_exceptions(exc_map):
85 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
86 sock.settimeout(timeout)
87 sock.connect(path)
88 return SyncStream(sock)
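
# Debugging sketch: the backend can also be driven directly, bypassing the
# connection classes, to perform a raw HTTP/1.1 exchange. Note that a single
# read may return only part of the response.
#
#     backend = SyncBackend()
#     stream = backend.connect_tcp("example.com", 80, timeout=10.0)
#     stream.write(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n", timeout=10.0)
#     print(stream.read(max_bytes=4096, timeout=10.0))
#     stream.close()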
0 import ssl
1 import typing
2
3 import trio
4
5 from .._exceptions import (
6 ConnectError,
7 ConnectTimeout,
8 ReadError,
9 ReadTimeout,
10 WriteError,
11 WriteTimeout,
12 map_exceptions,
13 )
14 from .base import AsyncNetworkBackend, AsyncNetworkStream
15
16
17 class TrioStream(AsyncNetworkStream):
18 def __init__(self, stream: trio.abc.Stream) -> None:
19 self._stream = stream
20
21 async def read(self, max_bytes: int, timeout: float = None) -> bytes:
22 timeout_or_inf = float("inf") if timeout is None else timeout
23 exc_map = {trio.TooSlowError: ReadTimeout, trio.BrokenResourceError: ReadError}
24 with map_exceptions(exc_map):
25 with trio.fail_after(timeout_or_inf):
26 return await self._stream.receive_some(max_bytes=max_bytes)
27
28 async def write(self, buffer: bytes, timeout: float = None) -> None:
29 if not buffer:
30 return
31
32 timeout_or_inf = float("inf") if timeout is None else timeout
33 exc_map = {
34 trio.TooSlowError: WriteTimeout,
35 trio.BrokenResourceError: WriteError,
36 }
37 with map_exceptions(exc_map):
38 with trio.fail_after(timeout_or_inf):
39 await self._stream.send_all(data=buffer)
40
41 async def aclose(self) -> None:
42 await self._stream.aclose()
43
44 async def start_tls(
45 self,
46 ssl_context: ssl.SSLContext,
47 server_hostname: str = None,
48 timeout: float = None,
49 ) -> AsyncNetworkStream:
50 timeout_or_inf = float("inf") if timeout is None else timeout
51 exc_map = {
52 trio.TooSlowError: ConnectTimeout,
53 trio.BrokenResourceError: ConnectError,
54 }
55 ssl_stream = trio.SSLStream(
56 self._stream,
57 ssl_context=ssl_context,
58 server_hostname=server_hostname,
59 https_compatible=True,
60 server_side=False,
61 )
62 with map_exceptions(exc_map):
63 with trio.fail_after(timeout_or_inf):
64 await ssl_stream.do_handshake()
65 return TrioStream(ssl_stream)
66
67 def get_extra_info(self, info: str) -> typing.Any:
68 if info == "ssl_object" and isinstance(self._stream, trio.SSLStream):
69 return self._stream._ssl_object # type: ignore
70 if info == "client_addr":
71 return self._get_socket_stream().socket.getsockname()
72 if info == "server_addr":
73 return self._get_socket_stream().socket.getpeername()
74 if info == "socket":
75 stream = self._stream
76 while isinstance(stream, trio.SSLStream):
77 stream = stream.transport_stream
78 assert isinstance(stream, trio.SocketStream)
79 return stream.socket
80 if info == "is_readable":
81 socket = self.get_extra_info("socket")
82 return socket.is_readable()
83 return None
84
85 def _get_socket_stream(self) -> trio.SocketStream:
86 stream = self._stream
87 while isinstance(stream, trio.SSLStream):
88 stream = stream.transport_stream
89 assert isinstance(stream, trio.SocketStream)
90 return stream
91
92
93 class TrioBackend(AsyncNetworkBackend):
94 async def connect_tcp(
95 self, host: str, port: int, timeout: float = None, local_address: str = None
96 ) -> AsyncNetworkStream:
97 timeout_or_inf = float("inf") if timeout is None else timeout
98 exc_map = {
99 trio.TooSlowError: ConnectTimeout,
100 trio.BrokenResourceError: ConnectError,
101 }
102 # Trio supports 'local_address' from 0.16.1 onwards.
103 # We only include the keyword argument if a local_address
104 # argument has been passed.
105 kwargs: dict = {} if local_address is None else {"local_address": local_address}
106 with map_exceptions(exc_map):
107 with trio.fail_after(timeout_or_inf):
108 stream: trio.abc.Stream = await trio.open_tcp_stream(
109 host=host, port=port, **kwargs
110 )
111 return TrioStream(stream)
112
113 async def connect_unix_socket(
114 self, path: str, timeout: float = None
115 ) -> AsyncNetworkStream: # pragma: nocover
116 timeout_or_inf = float("inf") if timeout is None else timeout
117 exc_map = {
118 trio.TooSlowError: ConnectTimeout,
119 trio.BrokenResourceError: ConnectError,
120 }
121 with map_exceptions(exc_map):
122 with trio.fail_after(timeout_or_inf):
123 stream: trio.abc.Stream = await trio.open_unix_socket(path)
124 return TrioStream(stream)
125
126 async def sleep(self, seconds: float) -> None:
127 await trio.sleep(seconds) # pragma: nocover
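
# The trio backend is driven in the same way, but from within a trio event
# loop. A minimal sketch, equivalent to the sync example above:
#
#     import trio
#
#     from httpcore.backends.trio import TrioBackend
#
#     async def main() -> None:
#         backend = TrioBackend()
#         stream = await backend.connect_tcp("example.com", 80, timeout=10.0)
#         await stream.write(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
#         print(await stream.read(max_bytes=4096))
#         await stream.aclose()
#
#     trio.run(main)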
0 site_name: HTTPCore
1 site_description: A minimal HTTP client for Python.
2 site_url: https://www.encode.io/httpcore/
3
4 repo_name: encode/httpcore
5 repo_url: https://github.com/encode/httpcore/
6
7 nav:
8 - Introduction: 'index.md'
9 - Quickstart: 'quickstart.md'
10 - Connection Pools: 'connection-pools.md'
11 - Proxies: 'proxies.md'
12 - HTTP/2: 'http2.md'
13 - Async Support: 'async.md'
14 - Extensions: 'extensions.md'
15 - Exceptions: 'exceptions.md'
16
17 theme:
18 name: "material"
19
20 plugins:
21 - search
22 - mkdocstrings:
23 default_handler: python
24 watch:
25 - httpcore
26 handlers:
27 python:
28 members_order:
29 - "source"
30
31 markdown_extensions:
32 - codehilite:
33 css_class: highlight
66 curio==1.5; python_version >= '3.7'
77
88 # Docs
9 # https://github.com/sphinx-doc/sphinx/issues/9505
10 sphinx @ https://github.com/sphinx-doc/sphinx/archive/03bf83365eb7a5180e93a3ccd5c050f5da36c489.tar.gz; python_version >= '3.10'
11 sphinx==4.1.2; python_version < '3.10'
12 sphinx-autobuild==2021.3.14
13 myst-parser==0.15.2
14 furo==2021.8.31
15 ghp-import==2.0.1
16 # myst-parser + docutils==0.17 has a bug: https://github.com/executablebooks/MyST-Parser/issues/343
17 docutils==0.17.1
9 mkdocs==1.2.2
10 mkdocs-autorefs==0.3.0
11 mkdocs-material==7.3.0
12 mkdocs-material-extensions==1.0.3
13 mkdocstrings==0.16.1
1814
1915 # Packaging
2016 twine==3.4.2
2117 wheel==0.37.0
2218
2319 # Tests & Linting
24 anyio==3.3.0
20 anyio==3.3.4
2521 autoflake==1.4
26 black==21.8b0
22 black==21.10b0
2723 coverage==5.5
28 flake8==3.9.2
29 flake8-bugbear==21.4.3
24 flake8==4.0.1
25 flake8-bugbear==21.9.2
3026 flake8-pie==0.6.1
3127 hypercorn==0.11.2; python_version >= '3.7'
3228 isort==5.9.3
3329 mypy==0.910
3430 pproxy==2.7.8
3531 pytest==6.2.5
32 pytest-httpbin==1.0.0
3633 pytest-trio==0.7.0
3734 pytest-asyncio==0.15.1
3835 trustme==0.9.0
36 types-certifi==2021.10.8.0
3937 uvicorn==0.12.1; python_version < '3.7'
1111
1212 ${PREFIX}python setup.py sdist bdist_wheel
1313 ${PREFIX}twine check dist/*
14 scripts/docs build
14 ${PREFIX}mkdocs build
+0
-23
scripts/docs
0 #!/bin/bash -e
1
2 export PREFIX=""
3 if [ -d 'venv' ] ; then
4 export PREFIX="venv/bin/"
5 fi
6
7 SOURCE_DIR="docs"
8 OUT_DIR="build/html"
9
10 COMMAND="$1"
11 ARGS="${@:2}"
12
13 set -x
14
15 if [ "$COMMAND" = "build" ]; then
16 ${PREFIX}sphinx-build $SOURCE_DIR $OUT_DIR
17 elif [ "$COMMAND" = "gh-deploy" ]; then
18 scripts/docs build
19 ${PREFIX}ghp-import $OUT_DIR -np -m "Deployed $(git rev-parse --short HEAD)" $ARGS
20 else
21 ${PREFIX}sphinx-autobuild $SOURCE_DIR $OUT_DIR --watch httpcore/ $ARGS
22 fi
2323 set -x
2424
2525 ${PREFIX}twine upload dist/*
26 scripts/docs gh-deploy --push --force
26 ${PREFIX}mkdocs gh-deploy --force
00 [flake8]
11 ignore = W503, E203, B305
2 max-line-length = 88
3 exclude = httpcore/_sync,tests/sync_tests
2 max-line-length = 120
3 exclude = httpcore/_sync,tests/_sync
44
55 [mypy]
66 disallow_untyped_defs = True
1616 combine_as_imports = True
1717 known_first_party = httpcore,tests
1818 known_third_party = brotli,certifi,chardet,cryptography,h11,h2,hstspreload,pytest,rfc3986,setuptools,sniffio,trio,trustme,urllib3,uvicorn
19 skip = httpcore/_sync/,tests/sync_tests/
19 skip = httpcore/_sync/,tests/_sync
2020
2121 [tool:pytest]
2222 addopts = -rxXs
2424 copied_from(source, changes=None): mark test as copied from somewhere else, along with a description of changes made to accommodate e.g. our test setup
2525
2626 [coverage:run]
27 omit = venv/*
27 omit = venv/*, httpcore/_sync/*, httpcore/_compat.py
2828 include = httpcore/*, tests/*
5252 packages=get_packages("httpcore"),
5353 include_package_data=True,
5454 zip_safe=False,
55 install_requires=["h11>=0.11,<0.13", "sniffio==1.*", "anyio==3.*"],
55 install_requires=[
56 "h11>=0.11,<0.13",
57 "sniffio==1.*",
58 "anyio==3.*",
59 "certifi",
60 ],
5661 extras_require={
5762 "http2": ["h2>=3,<5"],
5863 },
(New empty file)
0 import hpack
1 import hyperframe.frame
2 import pytest
3
4 from httpcore import AsyncHTTPConnection, ConnectError, ConnectionNotAvailable, Origin
5 from httpcore.backends.base import AsyncNetworkStream
6 from httpcore.backends.mock import AsyncMockBackend
7
8
9 @pytest.mark.anyio
10 async def test_http_connection():
11 origin = Origin(b"https", b"example.com", 443)
12 network_backend = AsyncMockBackend(
13 [
14 b"HTTP/1.1 200 OK\r\n",
15 b"Content-Type: plain/text\r\n",
16 b"Content-Length: 13\r\n",
17 b"\r\n",
18 b"Hello, world!",
19 ]
20 )
21
22 async with AsyncHTTPConnection(
23 origin=origin, network_backend=network_backend, keepalive_expiry=5.0
24 ) as conn:
25 assert not conn.is_idle()
26 assert not conn.is_closed()
27 assert not conn.is_available()
28 assert not conn.has_expired()
29 assert repr(conn) == "<AsyncHTTPConnection [CONNECTING]>"
30
31 async with conn.stream("GET", "https://example.com/") as response:
32 assert (
33 repr(conn)
34 == "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
35 )
36 await response.aread()
37
38 assert response.status == 200
39 assert response.content == b"Hello, world!"
40
41 assert conn.is_idle()
42 assert not conn.is_closed()
43 assert conn.is_available()
44 assert not conn.has_expired()
45 assert (
46 repr(conn)
47 == "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 1]>"
48 )
49
50
51 @pytest.mark.anyio
52 async def test_concurrent_requests_not_available_on_http11_connections():
53 """
54 Attempting to issue a request against an already active HTTP/1.1 connection
55 will raise a `ConnectionNotAvailable` exception.
56 """
57 origin = Origin(b"https", b"example.com", 443)
58 network_backend = AsyncMockBackend(
59 [
60 b"HTTP/1.1 200 OK\r\n",
61 b"Content-Type: plain/text\r\n",
62 b"Content-Length: 13\r\n",
63 b"\r\n",
64 b"Hello, world!",
65 ]
66 )
67
68 async with AsyncHTTPConnection(
69 origin=origin, network_backend=network_backend, keepalive_expiry=5.0
70 ) as conn:
71 async with conn.stream("GET", "https://example.com/"):
72 with pytest.raises(ConnectionNotAvailable):
73 await conn.request("GET", "https://example.com/")
74
75
76 @pytest.mark.anyio
77 async def test_http2_connection():
78 origin = Origin(b"https", b"example.com", 443)
79 network_backend = AsyncMockBackend(
80 [
81 hyperframe.frame.SettingsFrame().serialize(),
82 hyperframe.frame.HeadersFrame(
83 stream_id=1,
84 data=hpack.Encoder().encode(
85 [
86 (b":status", b"200"),
87 (b"content-type", b"plain/text"),
88 ]
89 ),
90 flags=["END_HEADERS"],
91 ).serialize(),
92 hyperframe.frame.DataFrame(
93 stream_id=1, data=b"Hello, world!", flags=["END_STREAM"]
94 ).serialize(),
95 ],
96 http2=True,
97 )
98
99 async with AsyncHTTPConnection(
100 origin=origin, network_backend=network_backend, http2=True
101 ) as conn:
102 response = await conn.request("GET", "https://example.com/")
103
104 assert response.status == 200
105 assert response.content == b"Hello, world!"
106 assert response.extensions["http_version"] == b"HTTP/2"
107
108
109 @pytest.mark.anyio
110 async def test_request_to_incorrect_origin():
111 """
112 A connection can only send requests to whichever origin it is connected to.
113 """
114 origin = Origin(b"https", b"example.com", 443)
115 network_backend = AsyncMockBackend([])
116 async with AsyncHTTPConnection(
117 origin=origin, network_backend=network_backend
118 ) as conn:
119 with pytest.raises(RuntimeError):
120 await conn.request("GET", "https://other.com/")
121
122
123 class NeedsRetryBackend(AsyncMockBackend):
124 def __init__(self, *args, **kwargs) -> None:
125 self._retry = 2
126 super().__init__(*args, **kwargs)
127
128 async def connect_tcp(
129 self, host: str, port: int, timeout: float = None, local_address: str = None
130 ) -> AsyncNetworkStream:
131 if self._retry > 0:
132 self._retry -= 1
133 raise ConnectError()
134
135 return await super().connect_tcp(
136 host, port, timeout=timeout, local_address=local_address
137 )
138
139
140 @pytest.mark.anyio
141 async def test_connection_retries():
142 origin = Origin(b"https", b"example.com", 443)
143 content = [
144 b"HTTP/1.1 200 OK\r\n",
145 b"Content-Type: plain/text\r\n",
146 b"Content-Length: 13\r\n",
147 b"\r\n",
148 b"Hello, world!",
149 ]
150
151 network_backend = NeedsRetryBackend(content)
152 async with AsyncHTTPConnection(
153 origin=origin, network_backend=network_backend, retries=3
154 ) as conn:
155 response = await conn.request("GET", "https://example.com/")
156 assert response.status == 200
157
158 network_backend = NeedsRetryBackend(content)
159 async with AsyncHTTPConnection(
160 origin=origin,
161 network_backend=network_backend,
162 ) as conn:
163 with pytest.raises(ConnectError):
164 await conn.request("GET", "https://example.com/")
165
166
167 @pytest.mark.anyio
168 async def test_uds_connections():
169 # We're not actually testing Unix Domain Sockets here, since we're using
170 # a mock backend, but this at least exercises the UDS code path
171 # in `connection.py`.
172 origin = Origin(b"https", b"example.com", 443)
173 network_backend = AsyncMockBackend(
174 [
175 b"HTTP/1.1 200 OK\r\n",
176 b"Content-Type: plain/text\r\n",
177 b"Content-Length: 13\r\n",
178 b"\r\n",
179 b"Hello, world!",
180 ]
181 )
182 async with AsyncHTTPConnection(
183 origin=origin, network_backend=network_backend, uds="/mock/example"
184 ) as conn:
185 response = await conn.request("GET", "https://example.com/")
186 assert response.status == 200
0 from typing import List
1
2 import pytest
3 import trio as concurrency
4
5 from httpcore import AsyncConnectionPool, ConnectError, UnsupportedProtocol
6 from httpcore.backends.mock import AsyncMockBackend
7
8
9 @pytest.mark.anyio
10 async def test_connection_pool_with_keepalive():
11 """
12 By default, connections used for HTTP/1.1 requests should be returned to the pool once the response is complete.
13 """
14 network_backend = AsyncMockBackend(
15 [
16 b"HTTP/1.1 200 OK\r\n",
17 b"Content-Type: plain/text\r\n",
18 b"Content-Length: 13\r\n",
19 b"\r\n",
20 b"Hello, world!",
21 b"HTTP/1.1 200 OK\r\n",
22 b"Content-Type: plain/text\r\n",
23 b"Content-Length: 13\r\n",
24 b"\r\n",
25 b"Hello, world!",
26 ]
27 )
28
29 async with AsyncConnectionPool(
30 network_backend=network_backend,
31 ) as pool:
32 # Sending an initial request, which once complete will return to the pool, IDLE.
33 async with pool.stream("GET", "https://example.com/") as response:
34 info = [repr(c) for c in pool.connections]
35 assert info == [
36 "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
37 ]
38 await response.aread()
39
40 assert response.status == 200
41 assert response.content == b"Hello, world!"
42 info = [repr(c) for c in pool.connections]
43 assert info == [
44 "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 1]>"
45 ]
46
47 # Sending a second request to the same origin will reuse the existing IDLE connection.
48 async with pool.stream("GET", "https://example.com/") as response:
49 info = [repr(c) for c in pool.connections]
50 assert info == [
51 "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 2]>"
52 ]
53 await response.aread()
54
55 assert response.status == 200
56 assert response.content == b"Hello, world!"
57 info = [repr(c) for c in pool.connections]
58 assert info == [
59 "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 2]>"
60 ]
61
62 # Sending a request to a different origin will not reuse the existing IDLE connection.
63 async with pool.stream("GET", "http://example.com/") as response:
64 info = [repr(c) for c in pool.connections]
65 assert info == [
66 "<AsyncHTTPConnection ['http://example.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
67 "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 2]>",
68 ]
69 await response.aread()
70
71 assert response.status == 200
72 assert response.content == b"Hello, world!"
73 info = [repr(c) for c in pool.connections]
74 assert info == [
75 "<AsyncHTTPConnection ['http://example.com:80', HTTP/1.1, IDLE, Request Count: 1]>",
76 "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 2]>",
77 ]
78
79
80 @pytest.mark.anyio
81 async def test_connection_pool_with_close():
82 """
83 Connections that have handled an HTTP/1.1 request with a 'Connection: close'
84 header should not be returned to the connection pool.
85 """
86 network_backend = AsyncMockBackend(
87 [
88 b"HTTP/1.1 200 OK\r\n",
89 b"Content-Type: plain/text\r\n",
90 b"Content-Length: 13\r\n",
91 b"\r\n",
92 b"Hello, world!",
93 ]
94 )
95
96 async with AsyncConnectionPool(network_backend=network_backend) as pool:
97 # Sending an initial request, which once complete will not return to the pool.
98 async with pool.stream(
99 "GET", "https://example.com/", headers={"Connection": "close"}
100 ) as response:
101 info = [repr(c) for c in pool.connections]
102 assert info == [
103 "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
104 ]
105 await response.aread()
106
107 assert response.status == 200
108 assert response.content == b"Hello, world!"
109 info = [repr(c) for c in pool.connections]
110 assert info == []
111
112
113 @pytest.mark.anyio
114 async def test_trace_request():
115 """
116 The 'trace' request extension allows a callback function to inspect the
117 internal events that occur while sending a request.
118 """
119 network_backend = AsyncMockBackend(
120 [
121 b"HTTP/1.1 200 OK\r\n",
122 b"Content-Type: plain/text\r\n",
123 b"Content-Length: 13\r\n",
124 b"\r\n",
125 b"Hello, world!",
126 ]
127 )
128
129 called = []
130
131 async def trace(name, kwargs):
132 called.append(name)
133
134 async with AsyncConnectionPool(network_backend=network_backend) as pool:
135 await pool.request("GET", "https://example.com/", extensions={"trace": trace})
136
137 assert called == [
138 "connection.connect_tcp.started",
139 "connection.connect_tcp.complete",
140 "connection.start_tls.started",
141 "connection.start_tls.complete",
142 "http11.send_request_headers.started",
143 "http11.send_request_headers.complete",
144 "http11.send_request_body.started",
145 "http11.send_request_body.complete",
146 "http11.receive_response_headers.started",
147 "http11.receive_response_headers.complete",
148 "http11.receive_response_body.started",
149 "http11.receive_response_body.complete",
150 "http11.response_closed.started",
151 "http11.response_closed.complete",
152 ]
153
154
155 @pytest.mark.anyio
156 async def test_connection_pool_with_http_exception():
157 """
158 Connections that raise an exception while handling an HTTP/1.1 request
159 should not be returned to the connection pool.
160 """
161 network_backend = AsyncMockBackend([b"Wait, this isn't valid HTTP!"])
162
163 called = []
164
165 async def trace(name, kwargs):
166 called.append(name)
167
168 async with AsyncConnectionPool(network_backend=network_backend) as pool:
169 # Sending an initial request, which once complete will not return to the pool.
170 with pytest.raises(Exception):
171 await pool.request(
172 "GET", "https://example.com/", extensions={"trace": trace}
173 )
174
175 info = [repr(c) for c in pool.connections]
176 assert info == []
177
178 assert called == [
179 "connection.connect_tcp.started",
180 "connection.connect_tcp.complete",
181 "connection.start_tls.started",
182 "connection.start_tls.complete",
183 "http11.send_request_headers.started",
184 "http11.send_request_headers.complete",
185 "http11.send_request_body.started",
186 "http11.send_request_body.complete",
187 "http11.receive_response_headers.started",
188 "http11.receive_response_headers.failed",
189 "http11.response_closed.started",
190 "http11.response_closed.complete",
191 ]
192
193
194 @pytest.mark.anyio
195 async def test_connection_pool_with_connect_exception():
196 """
197 Connections that raise an exception while being established should never
198 be added to the connection pool.
199 """
200
201 class FailedConnectBackend(AsyncMockBackend):
202 async def connect_tcp(
203 self, host: str, port: int, timeout: float = None, local_address: str = None
204 ):
205 raise ConnectError("Could not connect")
206
207 network_backend = FailedConnectBackend([])
208
209 called = []
210
211 async def trace(name, kwargs):
212 called.append(name)
213
214 async with AsyncConnectionPool(network_backend=network_backend) as pool:
215 # Sending an initial request, which once complete will not return to the pool.
216 with pytest.raises(Exception):
217 await pool.request(
218 "GET", "https://example.com/", extensions={"trace": trace}
219 )
220
221 info = [repr(c) for c in pool.connections]
222 assert info == []
223
224 assert called == [
225 "connection.connect_tcp.started",
226 "connection.connect_tcp.failed",
227 ]
228
229
230 @pytest.mark.anyio
231 async def test_connection_pool_with_immediate_expiry():
232 """
233 Connection pools with keepalive_expiry=0.0 should immediately expire
234 keep-alive connections.
235 """
236 network_backend = AsyncMockBackend(
237 [
238 b"HTTP/1.1 200 OK\r\n",
239 b"Content-Type: plain/text\r\n",
240 b"Content-Length: 13\r\n",
241 b"\r\n",
242 b"Hello, world!",
243 ]
244 )
245
246 async with AsyncConnectionPool(
247 keepalive_expiry=0.0,
248 network_backend=network_backend,
249 ) as pool:
250 # Sending an initial request, which once complete will not return to the pool.
251 async with pool.stream("GET", "https://example.com/") as response:
252 info = [repr(c) for c in pool.connections]
253 assert info == [
254 "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
255 ]
256 await response.aread()
257
258 assert response.status == 200
259 assert response.content == b"Hello, world!"
260 info = [repr(c) for c in pool.connections]
261 assert info == []
262
263
264 @pytest.mark.anyio
265 async def test_connection_pool_with_no_keepalive_connections_allowed():
266 """
267 When 'max_keepalive_connections=0' is used, IDLE connections should not
268 be returned to the pool.
269 """
270 network_backend = AsyncMockBackend(
271 [
272 b"HTTP/1.1 200 OK\r\n",
273 b"Content-Type: plain/text\r\n",
274 b"Content-Length: 13\r\n",
275 b"\r\n",
276 b"Hello, world!",
277 ]
278 )
279
280 async with AsyncConnectionPool(
281 max_keepalive_connections=0, network_backend=network_backend
282 ) as pool:
283 # Sending an initial request, which once complete will not return to the pool.
284 async with pool.stream("GET", "https://example.com/") as response:
285 info = [repr(c) for c in pool.connections]
286 assert info == [
287 "<AsyncHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
288 ]
289 await response.aread()
290
291 assert response.status == 200
292 assert response.content == b"Hello, world!"
293 info = [repr(c) for c in pool.connections]
294 assert info == []
295
296
297 @pytest.mark.trio
298 async def test_connection_pool_concurrency():
299 """
300 Concurrent HTTP/1.1 requests must never exceed the maximum number of
301 allowed connections in the pool.
302 """
303 network_backend = AsyncMockBackend(
304 [
305 b"HTTP/1.1 200 OK\r\n",
306 b"Content-Type: plain/text\r\n",
307 b"Content-Length: 13\r\n",
308 b"\r\n",
309 b"Hello, world!",
310 ]
311 )
312
313 async def fetch(pool, domain, info_list):
314 async with pool.stream("GET", f"http://{domain}/") as response:
315 info = [repr(c) for c in pool.connections]
316 info_list.append(info)
317 await response.aread()
318
319 async with AsyncConnectionPool(
320 max_connections=1, network_backend=network_backend
321 ) as pool:
322 info_list: List[str] = []
323 async with concurrency.open_nursery() as nursery:
324 for domain in ["a.com", "b.com", "c.com", "d.com", "e.com"]:
325 nursery.start_soon(fetch, pool, domain, info_list)
326
327 # Check that each time we inspect the connection pool, only a
328 # single connection was established.
329 for item in info_list:
330 assert len(item) == 1
331 assert item[0] in [
332 "<AsyncHTTPConnection ['http://a.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
333 "<AsyncHTTPConnection ['http://b.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
334 "<AsyncHTTPConnection ['http://c.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
335 "<AsyncHTTPConnection ['http://d.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
336 "<AsyncHTTPConnection ['http://e.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
337 ]
338
339
340 @pytest.mark.trio
341 async def test_connection_pool_concurrency_same_domain_closing():
342 """
343 Concurrent HTTP/1.1 requests must never exceed the maximum number of
344 allowed connections in the pool.
345 """
346 network_backend = AsyncMockBackend(
347 [
348 b"HTTP/1.1 200 OK\r\n",
349 b"Content-Type: plain/text\r\n",
350 b"Content-Length: 13\r\n",
351 b"\r\n",
352 b"Hello, world!",
353 ]
354 )
355
356 async def fetch(pool, domain, info_list):
357 async with pool.stream("GET", f"https://{domain}/") as response:
358 info = [repr(c) for c in pool.connections]
359 info_list.append(info)
360 await response.aread()
361
362 async with AsyncConnectionPool(
363 max_connections=1, network_backend=network_backend, http2=True
364 ) as pool:
365 info_list: List[str] = []
366 async with concurrency.open_nursery() as nursery:
367 for domain in ["a.com", "a.com", "a.com", "a.com", "a.com"]:
368 nursery.start_soon(fetch, pool, domain, info_list)
369
370 # Check that each time we inspect the connection pool, only a
371 # single connection was established.
372 for item in info_list:
373 assert len(item) == 1
374 assert item[0] in [
375 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>",
376 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 2]>",
377 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 3]>",
378 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 4]>",
379 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 5]>",
380 ]
381
382
383 @pytest.mark.trio
384 async def test_connection_pool_concurrency_same_domain_keepalive():
385 """
386 Concurrent HTTP/1.1 requests must never exceed the maximum number of
387 allowed connections in the pool.
388 """
389 network_backend = AsyncMockBackend(
390 [
391 b"HTTP/1.1 200 OK\r\n",
392 b"Content-Type: plain/text\r\n",
393 b"Content-Length: 13\r\n",
394 b"\r\n",
395 b"Hello, world!",
396 ]
397 * 5
398 )
399
400 async def fetch(pool, domain, info_list):
401 async with pool.stream("GET", f"https://{domain}/") as response:
402 info = [repr(c) for c in pool.connections]
403 info_list.append(info)
404 await response.aread()
405
406 async with AsyncConnectionPool(
407 max_connections=1, network_backend=network_backend, http2=True
408 ) as pool:
409 info_list: List[str] = []
410 async with concurrency.open_nursery() as nursery:
411 for domain in ["a.com", "a.com", "a.com", "a.com", "a.com"]:
412 nursery.start_soon(fetch, pool, domain, info_list)
413
414 # Check that each time we inspect the connection pool, only a
415 # single connection was established.
416 for item in info_list:
417 assert len(item) == 1
418 assert item[0] in [
419 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>",
420 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 2]>",
421 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 3]>",
422 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 4]>",
423 "<AsyncHTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 5]>",
424 ]
425
426
427 @pytest.mark.anyio
428 async def test_unsupported_protocol():
429 async with AsyncConnectionPool() as pool:
430 with pytest.raises(UnsupportedProtocol):
431 await pool.request("GET", "ftp://www.example.com/")
432
433 with pytest.raises(UnsupportedProtocol):
434 await pool.request("GET", "://www.example.com/")
0 import pytest
1
2 from httpcore import (
3 AsyncHTTP11Connection,
4 ConnectionNotAvailable,
5 LocalProtocolError,
6 Origin,
7 RemoteProtocolError,
8 )
9 from httpcore.backends.mock import AsyncMockStream
10
11
12 @pytest.mark.anyio
13 async def test_http11_connection():
14 origin = Origin(b"https", b"example.com", 443)
15 stream = AsyncMockStream(
16 [
17 b"HTTP/1.1 200 OK\r\n",
18 b"Content-Type: plain/text\r\n",
19 b"Content-Length: 13\r\n",
20 b"\r\n",
21 b"Hello, world!",
22 ]
23 )
24 async with AsyncHTTP11Connection(
25 origin=origin, stream=stream, keepalive_expiry=5.0
26 ) as conn:
27 response = await conn.request("GET", "https://example.com/")
28 assert response.status == 200
29 assert response.content == b"Hello, world!"
30
31 assert conn.is_idle()
32 assert not conn.is_closed()
33 assert conn.is_available()
34 assert not conn.has_expired()
35 assert (
36 repr(conn)
37 == "<AsyncHTTP11Connection ['https://example.com:443', IDLE, Request Count: 1]>"
38 )
39
40
41 @pytest.mark.anyio
42 async def test_http11_connection_unread_response():
43 """
44 If the client releases the response without reading it to completion,
45 then the connection will not be reusable.
46 """
47 origin = Origin(b"https", b"example.com", 443)
48 stream = AsyncMockStream(
49 [
50 b"HTTP/1.1 200 OK\r\n",
51 b"Content-Type: plain/text\r\n",
52 b"Content-Length: 13\r\n",
53 b"\r\n",
54 b"Hello, world!",
55 ]
56 )
57 async with AsyncHTTP11Connection(origin=origin, stream=stream) as conn:
58 async with conn.stream("GET", "https://example.com/") as response:
59 assert response.status == 200
60
61 assert not conn.is_idle()
62 assert conn.is_closed()
63 assert not conn.is_available()
64 assert not conn.has_expired()
65 assert (
66 repr(conn)
67 == "<AsyncHTTP11Connection ['https://example.com:443', CLOSED, Request Count: 1]>"
68 )
69
70
71 @pytest.mark.anyio
72 async def test_http11_connection_with_remote_protocol_error():
73 """
74 If a remote protocol error occurs, then no response will be returned,
75 and the connection will not be reusable.
76 """
77 origin = Origin(b"https", b"example.com", 443)
78 stream = AsyncMockStream([b"Wait, this isn't valid HTTP!", b""])
79 async with AsyncHTTP11Connection(origin=origin, stream=stream) as conn:
80 with pytest.raises(RemoteProtocolError):
81 await conn.request("GET", "https://example.com/")
82
83 assert not conn.is_idle()
84 assert conn.is_closed()
85 assert not conn.is_available()
86 assert not conn.has_expired()
87 assert (
88 repr(conn)
89 == "<AsyncHTTP11Connection ['https://example.com:443', CLOSED, Request Count: 1]>"
90 )
91
92
93 @pytest.mark.anyio
94 async def test_http11_connection_with_local_protocol_error():
95 """
96 If a local protocol error occurs, then no response will be returned,
97 and the connection will not be reusable.
98 """
99 origin = Origin(b"https", b"example.com", 443)
100 stream = AsyncMockStream(
101 [
102 b"HTTP/1.1 200 OK\r\n",
103 b"Content-Type: plain/text\r\n",
104 b"Content-Length: 13\r\n",
105 b"\r\n",
106 b"Hello, world!",
107 ]
108 )
109 async with AsyncHTTP11Connection(origin=origin, stream=stream) as conn:
110 with pytest.raises(LocalProtocolError) as exc_info:
111 await conn.request("GET", "https://example.com/", headers={"Host": "\0"})
112
113 assert str(exc_info.value) == "Illegal header value b'\\x00'"
114
115 assert not conn.is_idle()
116 assert conn.is_closed()
117 assert not conn.is_available()
118 assert not conn.has_expired()
119 assert (
120 repr(conn)
121 == "<AsyncHTTP11Connection ['https://example.com:443', CLOSED, Request Count: 1]>"
122 )
123
124
125 @pytest.mark.anyio
126 async def test_http11_connection_handles_one_active_request():
127 """
128 Attempting to send a request while one is already in-flight will raise
129 a ConnectionNotAvailable exception.
130 """
131 origin = Origin(b"https", b"example.com", 443)
132 stream = AsyncMockStream(
133 [
134 b"HTTP/1.1 200 OK\r\n",
135 b"Content-Type: plain/text\r\n",
136 b"Content-Length: 13\r\n",
137 b"\r\n",
138 b"Hello, world!",
139 ]
140 )
141 async with AsyncHTTP11Connection(origin=origin, stream=stream) as conn:
142 async with conn.stream("GET", "https://example.com/"):
143 with pytest.raises(ConnectionNotAvailable):
144 await conn.request("GET", "https://example.com/")
145
146
147 @pytest.mark.anyio
148 async def test_http11_connection_attempt_close():
149 """
150 A connection can only be closed when it is idle.
151 """
152 origin = Origin(b"https", b"example.com", 443)
153 stream = AsyncMockStream(
154 [
155 b"HTTP/1.1 200 OK\r\n",
156 b"Content-Type: plain/text\r\n",
157 b"Content-Length: 13\r\n",
158 b"\r\n",
159 b"Hello, world!",
160 ]
161 )
162 async with AsyncHTTP11Connection(origin=origin, stream=stream) as conn:
163 async with conn.stream("GET", "https://example.com/") as response:
164 await response.aread()
165 assert response.status == 200
166 assert response.content == b"Hello, world!"
167
168
169 @pytest.mark.anyio
170 async def test_http11_request_to_incorrect_origin():
171 """
172 A connection can only send requests to whichever origin it is connected to.
173 """
174 origin = Origin(b"https", b"example.com", 443)
175 stream = AsyncMockStream([])
176 async with AsyncHTTP11Connection(origin=origin, stream=stream) as conn:
177 with pytest.raises(RuntimeError):
178 await conn.request("GET", "https://other.com/")
0 import hpack
1 import hyperframe.frame
2 import pytest
3
4 from httpcore import (
5 AsyncHTTP2Connection,
6 ConnectionNotAvailable,
7 Origin,
8 RemoteProtocolError,
9 )
10 from httpcore.backends.mock import AsyncMockStream
11
12
13 @pytest.mark.anyio
14 async def test_http2_connection():
15 origin = Origin(b"https", b"example.com", 443)
16 stream = AsyncMockStream(
17 [
18 hyperframe.frame.SettingsFrame().serialize(),
19 hyperframe.frame.HeadersFrame(
20 stream_id=1,
21 data=hpack.Encoder().encode(
22 [
23 (b":status", b"200"),
24 (b"content-type", b"plain/text"),
25 ]
26 ),
27 flags=["END_HEADERS"],
28 ).serialize(),
29 hyperframe.frame.DataFrame(
30 stream_id=1, data=b"Hello, world!", flags=["END_STREAM"]
31 ).serialize(),
32 ]
33 )
34 async with AsyncHTTP2Connection(
35 origin=origin, stream=stream, keepalive_expiry=5.0
36 ) as conn:
37 response = await conn.request("GET", "https://example.com/")
38 assert response.status == 200
39 assert response.content == b"Hello, world!"
40
41 assert conn.is_idle()
42 assert conn.is_available()
43 assert not conn.is_closed()
44 assert not conn.has_expired()
45 assert (
46 conn.info() == "'https://example.com:443', HTTP/2, IDLE, Request Count: 1"
47 )
48 assert (
49 repr(conn)
50 == "<AsyncHTTP2Connection ['https://example.com:443', IDLE, Request Count: 1]>"
51 )
52
53
54 @pytest.mark.anyio
55 async def test_http2_connection_post_request():
56 origin = Origin(b"https", b"example.com", 443)
57 stream = AsyncMockStream(
58 [
59 hyperframe.frame.SettingsFrame().serialize(),
60 hyperframe.frame.HeadersFrame(
61 stream_id=1,
62 data=hpack.Encoder().encode(
63 [
64 (b":status", b"200"),
65 (b"content-type", b"plain/text"),
66 ]
67 ),
68 flags=["END_HEADERS"],
69 ).serialize(),
70 hyperframe.frame.DataFrame(
71 stream_id=1, data=b"Hello, world!", flags=["END_STREAM"]
72 ).serialize(),
73 ]
74 )
75 async with AsyncHTTP2Connection(origin=origin, stream=stream) as conn:
76 response = await conn.request(
77 "POST",
78 "https://example.com/",
79 headers={b"content-length": b"17"},
80 content=b'{"data": "upload"}',
81 )
82 assert response.status == 200
83 assert response.content == b"Hello, world!"
84
85
86 @pytest.mark.anyio
87 async def test_http2_connection_with_remote_protocol_error():
88 """
89 If a remote protocol error occurs, then no response will be returned,
90 and the connection will not be reusable.
91 """
92 origin = Origin(b"https", b"example.com", 443)
93 stream = AsyncMockStream([b"Wait, this isn't valid HTTP!", b""])
94 async with AsyncHTTP2Connection(origin=origin, stream=stream) as conn:
95 with pytest.raises(RemoteProtocolError):
96 await conn.request("GET", "https://example.com/")
97
98
99 @pytest.mark.anyio
100 async def test_http2_connection_with_stream_cancelled():
101 """
102 If the remote server resets the stream while the request is in flight,
103 then no response will be returned, and the connection will not be reusable.
104 """
105 origin = Origin(b"https", b"example.com", 443)
106 stream = AsyncMockStream(
107 [
108 hyperframe.frame.SettingsFrame().serialize(),
109 hyperframe.frame.HeadersFrame(
110 stream_id=1,
111 data=hpack.Encoder().encode(
112 [
113 (b":status", b"200"),
114 (b"content-type", b"plain/text"),
115 ]
116 ),
117 flags=["END_HEADERS"],
118 ).serialize(),
119 hyperframe.frame.RstStreamFrame(stream_id=1, error_code=8).serialize(),
120 b"",
121 ]
122 )
123 async with AsyncHTTP2Connection(origin=origin, stream=stream) as conn:
124 with pytest.raises(RemoteProtocolError):
125 await conn.request("GET", "https://example.com/")
126
127
128 @pytest.mark.anyio
129 async def test_http2_connection_with_flow_control():
130 origin = Origin(b"https", b"example.com", 443)
131 stream = AsyncMockStream(
132 [
133 hyperframe.frame.SettingsFrame().serialize(),
134 # Available flow: 65,535
135 hyperframe.frame.WindowUpdateFrame(
136 stream_id=0, window_increment=10_000
137 ).serialize(),
138 hyperframe.frame.WindowUpdateFrame(
139 stream_id=1, window_increment=10_000
140 ).serialize(),
141 # Available flow: 75,535
142 hyperframe.frame.WindowUpdateFrame(
143 stream_id=0, window_increment=10_000
144 ).serialize(),
145 hyperframe.frame.WindowUpdateFrame(
146 stream_id=1, window_increment=10_000
147 ).serialize(),
148 # Available flow: 85,535
149 hyperframe.frame.WindowUpdateFrame(
150 stream_id=0, window_increment=10_000
151 ).serialize(),
152 hyperframe.frame.WindowUpdateFrame(
153 stream_id=1, window_increment=10_000
154 ).serialize(),
155 # Available flow: 95,535
156 hyperframe.frame.WindowUpdateFrame(
157 stream_id=0, window_increment=10_000
158 ).serialize(),
159 hyperframe.frame.WindowUpdateFrame(
160 stream_id=1, window_increment=10_000
161 ).serialize(),
162 # Available flow: 105,535
163 hyperframe.frame.HeadersFrame(
164 stream_id=1,
165 data=hpack.Encoder().encode(
166 [
167 (b":status", b"200"),
168 (b"content-type", b"plain/text"),
169 ]
170 ),
171 flags=["END_HEADERS"],
172 ).serialize(),
173 hyperframe.frame.DataFrame(
174 stream_id=1, data=b"100,000 bytes received", flags=["END_STREAM"]
175 ).serialize(),
176 ]
177 )
178 async with AsyncHTTP2Connection(origin=origin, stream=stream) as conn:
179 response = await conn.request(
180 "POST",
181 "https://example.com/",
182 content=b"x" * 100_000,
183 )
184 assert response.status == 200
185 assert response.content == b"100,000 bytes received"
186
187
188 @pytest.mark.anyio
189 async def test_http2_connection_attempt_close():
190 """
191 A connection can only be closed when it is idle.
192 """
193 origin = Origin(b"https", b"example.com", 443)
194 stream = AsyncMockStream(
195 [
196 hyperframe.frame.SettingsFrame().serialize(),
197 hyperframe.frame.HeadersFrame(
198 stream_id=1,
199 data=hpack.Encoder().encode(
200 [
201 (b":status", b"200"),
202 (b"content-type", b"plain/text"),
203 ]
204 ),
205 flags=["END_HEADERS"],
206 ).serialize(),
207 hyperframe.frame.DataFrame(
208 stream_id=1, data=b"Hello, world!", flags=["END_STREAM"]
209 ).serialize(),
210 ]
211 )
212 async with AsyncHTTP2Connection(origin=origin, stream=stream) as conn:
213 async with conn.stream("GET", "https://example.com/") as response:
214 await response.aread()
215 assert response.status == 200
216 assert response.content == b"Hello, world!"
217
218 await conn.aclose()
219 with pytest.raises(ConnectionNotAvailable):
220 await conn.request("GET", "https://example.com/")
221
222
223 @pytest.mark.anyio
224 async def test_http2_request_to_incorrect_origin():
225 """
226 A connection can only send requests to whichever origin it is connected to.
227 """
228 origin = Origin(b"https", b"example.com", 443)
229 stream = AsyncMockStream([])
230 async with AsyncHTTP2Connection(origin=origin, stream=stream) as conn:
231 with pytest.raises(RuntimeError):
232 await conn.request("GET", "https://other.com/")
0 import pytest
1
2 from httpcore import AsyncHTTPProxy, Origin, ProxyError
3 from httpcore.backends.mock import AsyncMockBackend
4
5
6 @pytest.mark.anyio
7 async def test_proxy_forwarding():
8 """
9 Send an HTTP request via a proxy.
10 """
11 network_backend = AsyncMockBackend(
12 [
13 b"HTTP/1.1 200 OK\r\n",
14 b"Content-Type: plain/text\r\n",
15 b"Content-Length: 13\r\n",
16 b"\r\n",
17 b"Hello, world!",
18 ]
19 )
20
21 async with AsyncHTTPProxy(
22 proxy_url="http://localhost:8080/",
23 max_connections=10,
24 network_backend=network_backend,
25 ) as proxy:
26 # Sending an initial request, which once complete will return to the pool, IDLE.
27 async with proxy.stream("GET", "http://example.com/") as response:
28 info = [repr(c) for c in proxy.connections]
29 assert info == [
30 "<AsyncForwardHTTPConnection ['http://localhost:8080', HTTP/1.1, ACTIVE, Request Count: 1]>"
31 ]
32 await response.aread()
33
34 assert response.status == 200
35 assert response.content == b"Hello, world!"
36 info = [repr(c) for c in proxy.connections]
37 assert info == [
38 "<AsyncForwardHTTPConnection ['http://localhost:8080', HTTP/1.1, IDLE, Request Count: 1]>"
39 ]
40 assert proxy.connections[0].is_idle()
41 assert proxy.connections[0].is_available()
42 assert not proxy.connections[0].is_closed()
43
44 # A connection on a forwarding proxy can handle HTTP requests to any host.
45 assert proxy.connections[0].can_handle_request(
46 Origin(b"http", b"example.com", 80)
47 )
48 assert proxy.connections[0].can_handle_request(
49 Origin(b"http", b"other.com", 80)
50 )
51 assert not proxy.connections[0].can_handle_request(
52 Origin(b"https", b"example.com", 443)
53 )
54 assert not proxy.connections[0].can_handle_request(
55 Origin(b"https", b"other.com", 443)
56 )
57
58
59 @pytest.mark.anyio
60 async def test_proxy_tunneling():
61 """
62 Send an HTTPS request via a proxy.
63 """
64 network_backend = AsyncMockBackend(
65 [
66 b"HTTP/1.1 200 OK\r\n" b"\r\n",
67 b"HTTP/1.1 200 OK\r\n",
68 b"Content-Type: plain/text\r\n",
69 b"Content-Length: 13\r\n",
70 b"\r\n",
71 b"Hello, world!",
72 ]
73 )
74
75 async with AsyncHTTPProxy(
76 proxy_url="http://localhost:8080/",
77 max_connections=10,
78 network_backend=network_backend,
79 ) as proxy:
80 # Sending an initial request, which once complete will return to the pool, IDLE.
81 async with proxy.stream("GET", "https://example.com/") as response:
82 info = [repr(c) for c in proxy.connections]
83 assert info == [
84 "<AsyncTunnelHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
85 ]
86 await response.aread()
87
88 assert response.status == 200
89 assert response.content == b"Hello, world!"
90 info = [repr(c) for c in proxy.connections]
91 assert info == [
92 "<AsyncTunnelHTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 1]>"
93 ]
94 assert proxy.connections[0].is_idle()
95 assert proxy.connections[0].is_available()
96 assert not proxy.connections[0].is_closed()
97
98 # A connection on a tunneled proxy can only handle HTTPS requests to the same origin.
99 assert not proxy.connections[0].can_handle_request(
100 Origin(b"http", b"example.com", 80)
101 )
102 assert not proxy.connections[0].can_handle_request(
103 Origin(b"http", b"other.com", 80)
104 )
105 assert proxy.connections[0].can_handle_request(
106 Origin(b"https", b"example.com", 443)
107 )
108 assert not proxy.connections[0].can_handle_request(
109 Origin(b"https", b"other.com", 443)
110 )
111
112
113 @pytest.mark.anyio
114 async def test_proxy_tunneling_with_403():
115 """
116 Sending an HTTPS request via a proxy that denies the CONNECT raises a ProxyError.
117 """
118 network_backend = AsyncMockBackend(
119 [
120 b"HTTP/1.1 403 Permission Denied\r\n" b"\r\n",
121 ]
122 )
123
124 async with AsyncHTTPProxy(
125 proxy_url="http://localhost:8080/",
126 max_connections=10,
127 network_backend=network_backend,
128 ) as proxy:
129 with pytest.raises(ProxyError) as exc_info:
130 await proxy.request("GET", "https://example.com/")
131 assert str(exc_info.value) == "403 Permission Denied"
132 assert not proxy.connections
0 import ssl
1
2 import pytest
3
4 from httpcore import AsyncConnectionPool
5
6
7 @pytest.mark.anyio
8 async def test_request(httpbin):
9 async with AsyncConnectionPool() as pool:
10 response = await pool.request("GET", httpbin.url)
11 assert response.status == 200
12
13
14 @pytest.mark.anyio
15 async def test_ssl_request(httpbin_secure):
16 ssl_context = ssl.create_default_context()
17 ssl_context.check_hostname = False
18 ssl_context.verify_mode = ssl.CERT_NONE
19 async with AsyncConnectionPool(ssl_context=ssl_context) as pool:
20 response = await pool.request("GET", httpbin_secure.url)
21 assert response.status == 200
22
23
24 @pytest.mark.anyio
25 async def test_extra_info(httpbin_secure):
26 ssl_context = ssl.create_default_context()
27 ssl_context.check_hostname = False
28 ssl_context.verify_mode = ssl.CERT_NONE
29 async with AsyncConnectionPool(ssl_context=ssl_context) as pool:
30 async with pool.stream("GET", httpbin_secure.url) as response:
31 assert response.status == 200
32 stream = response.extensions["network_stream"]
33
34 ssl_object = stream.get_extra_info("ssl_object")
35 assert ssl_object.version() == "TLSv1.3"
36
37 local_addr = stream.get_extra_info("client_addr")
38 assert local_addr[0] == "127.0.0.1"
39
40 remote_addr = stream.get_extra_info("server_addr")
41 assert "https://%s:%d" % remote_addr == httpbin_secure.url
42
43 sock = stream.get_extra_info("socket")
44 assert hasattr(sock, "family")
45 assert hasattr(sock, "type")
46
47 invalid = stream.get_extra_info("invalid")
48 assert invalid is None
49
50 stream.get_extra_info("is_readable")
(New empty file)
0 import hpack
1 import hyperframe.frame
2 import pytest
3
4 from httpcore import HTTPConnection, ConnectError, ConnectionNotAvailable, Origin
5 from httpcore.backends.base import NetworkStream
6 from httpcore.backends.mock import MockBackend
7
8
9
10 def test_http_connection():
11 origin = Origin(b"https", b"example.com", 443)
12 network_backend = MockBackend(
13 [
14 b"HTTP/1.1 200 OK\r\n",
15 b"Content-Type: plain/text\r\n",
16 b"Content-Length: 13\r\n",
17 b"\r\n",
18 b"Hello, world!",
19 ]
20 )
21
22 with HTTPConnection(
23 origin=origin, network_backend=network_backend, keepalive_expiry=5.0
24 ) as conn:
25 assert not conn.is_idle()
26 assert not conn.is_closed()
27 assert not conn.is_available()
28 assert not conn.has_expired()
29 assert repr(conn) == "<HTTPConnection [CONNECTING]>"
30
31 with conn.stream("GET", "https://example.com/") as response:
32 assert (
33 repr(conn)
34 == "<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
35 )
36 response.read()
37
38 assert response.status == 200
39 assert response.content == b"Hello, world!"
40
41 assert conn.is_idle()
42 assert not conn.is_closed()
43 assert conn.is_available()
44 assert not conn.has_expired()
45 assert (
46 repr(conn)
47 == "<HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 1]>"
48 )
49
50
51
52 def test_concurrent_requests_not_available_on_http11_connections():
53 """
54 Attempting to issue a request against an already active HTTP/1.1 connection
55 will raise a `ConnectionNotAvailable` exception.
56 """
57 origin = Origin(b"https", b"example.com", 443)
58 network_backend = MockBackend(
59 [
60 b"HTTP/1.1 200 OK\r\n",
61 b"Content-Type: plain/text\r\n",
62 b"Content-Length: 13\r\n",
63 b"\r\n",
64 b"Hello, world!",
65 ]
66 )
67
68 with HTTPConnection(
69 origin=origin, network_backend=network_backend, keepalive_expiry=5.0
70 ) as conn:
71 with conn.stream("GET", "https://example.com/"):
72 with pytest.raises(ConnectionNotAvailable):
73 conn.request("GET", "https://example.com/")
74
75
76
77 def test_http2_connection():
78 origin = Origin(b"https", b"example.com", 443)
79 network_backend = MockBackend(
80 [
81 hyperframe.frame.SettingsFrame().serialize(),
82 hyperframe.frame.HeadersFrame(
83 stream_id=1,
84 data=hpack.Encoder().encode(
85 [
86 (b":status", b"200"),
87 (b"content-type", b"plain/text"),
88 ]
89 ),
90 flags=["END_HEADERS"],
91 ).serialize(),
92 hyperframe.frame.DataFrame(
93 stream_id=1, data=b"Hello, world!", flags=["END_STREAM"]
94 ).serialize(),
95 ],
96 http2=True,
97 )
98
99 with HTTPConnection(
100 origin=origin, network_backend=network_backend, http2=True
101 ) as conn:
102 response = conn.request("GET", "https://example.com/")
103
104 assert response.status == 200
105 assert response.content == b"Hello, world!"
106 assert response.extensions["http_version"] == b"HTTP/2"
107
108
109
110 def test_request_to_incorrect_origin():
111 """
112 A connection can only send requests to whichever origin it is connected to.
113 """
114 origin = Origin(b"https", b"example.com", 443)
115 network_backend = MockBackend([])
116 with HTTPConnection(
117 origin=origin, network_backend=network_backend
118 ) as conn:
119 with pytest.raises(RuntimeError):
120 conn.request("GET", "https://other.com/")
121
122
123 class NeedsRetryBackend(MockBackend):
124 def __init__(self, *args, **kwargs) -> None:
125 self._retry = 2
126 super().__init__(*args, **kwargs)
127
128 def connect_tcp(
129 self, host: str, port: int, timeout: float = None, local_address: str = None
130 ) -> NetworkStream:
131 if self._retry > 0:
132 self._retry -= 1
133 raise ConnectError()
134
135 return super().connect_tcp(
136 host, port, timeout=timeout, local_address=local_address
137 )
138
139
140
141 def test_connection_retries():
142 origin = Origin(b"https", b"example.com", 443)
143 content = [
144 b"HTTP/1.1 200 OK\r\n",
145 b"Content-Type: plain/text\r\n",
146 b"Content-Length: 13\r\n",
147 b"\r\n",
148 b"Hello, world!",
149 ]
150
151 network_backend = NeedsRetryBackend(content)
152 with HTTPConnection(
153 origin=origin, network_backend=network_backend, retries=3
154 ) as conn:
155 response = conn.request("GET", "https://example.com/")
156 assert response.status == 200
157
158 network_backend = NeedsRetryBackend(content)
159 with HTTPConnection(
160 origin=origin,
161 network_backend=network_backend,
162 ) as conn:
163 with pytest.raises(ConnectError):
164 conn.request("GET", "https://example.com/")
165
166
167
168 def test_uds_connections():
169 # We're not actually testing Unix Domain Sockets here, since we're using
170 # a mock backend, but this at least exercises the UDS code path
171 # in `connection.py`.
172 origin = Origin(b"https", b"example.com", 443)
173 network_backend = MockBackend(
174 [
175 b"HTTP/1.1 200 OK\r\n",
176 b"Content-Type: plain/text\r\n",
177 b"Content-Length: 13\r\n",
178 b"\r\n",
179 b"Hello, world!",
180 ]
181 )
182 with HTTPConnection(
183 origin=origin, network_backend=network_backend, uds="/mock/example"
184 ) as conn:
185 response = conn.request("GET", "https://example.com/")
186 assert response.status == 200
0 from typing import List
1
2 import pytest
3 from tests import concurrency
4
5 from httpcore import ConnectionPool, ConnectError, UnsupportedProtocol
6 from httpcore.backends.mock import MockBackend
7
8
9
10 def test_connection_pool_with_keepalive():
11 """
12 By default, connections used for HTTP/1.1 requests should be returned to the pool once the response is complete.
13 """
14 network_backend = MockBackend(
15 [
16 b"HTTP/1.1 200 OK\r\n",
17 b"Content-Type: plain/text\r\n",
18 b"Content-Length: 13\r\n",
19 b"\r\n",
20 b"Hello, world!",
21 b"HTTP/1.1 200 OK\r\n",
22 b"Content-Type: plain/text\r\n",
23 b"Content-Length: 13\r\n",
24 b"\r\n",
25 b"Hello, world!",
26 ]
27 )
28
29 with ConnectionPool(
30 network_backend=network_backend,
31 ) as pool:
32 # Sending an initial request, which once complete will return to the pool, IDLE.
33 with pool.stream("GET", "https://example.com/") as response:
34 info = [repr(c) for c in pool.connections]
35 assert info == [
36 "<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
37 ]
38 response.read()
39
40 assert response.status == 200
41 assert response.content == b"Hello, world!"
42 info = [repr(c) for c in pool.connections]
43 assert info == [
44 "<HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 1]>"
45 ]
46
47 # Sending a second request to the same origin will reuse the existing IDLE connection.
48 with pool.stream("GET", "https://example.com/") as response:
49 info = [repr(c) for c in pool.connections]
50 assert info == [
51 "<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 2]>"
52 ]
53 response.read()
54
55 assert response.status == 200
56 assert response.content == b"Hello, world!"
57 info = [repr(c) for c in pool.connections]
58 assert info == [
59 "<HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 2]>"
60 ]
61
62 # Sending a request to a different origin will not reuse the existing IDLE connection.
63 with pool.stream("GET", "http://example.com/") as response:
64 info = [repr(c) for c in pool.connections]
65 assert info == [
66 "<HTTPConnection ['http://example.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
67 "<HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 2]>",
68 ]
69 response.read()
70
71 assert response.status == 200
72 assert response.content == b"Hello, world!"
73 info = [repr(c) for c in pool.connections]
74 assert info == [
75 "<HTTPConnection ['http://example.com:80', HTTP/1.1, IDLE, Request Count: 1]>",
76 "<HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 2]>",
77 ]
78
79
80
81 def test_connection_pool_with_close():
82 """
83 Connections that have handled an HTTP/1.1 request with a 'Connection: close'
84 header should not be returned to the connection pool.
85 """
86 network_backend = MockBackend(
87 [
88 b"HTTP/1.1 200 OK\r\n",
89 b"Content-Type: plain/text\r\n",
90 b"Content-Length: 13\r\n",
91 b"\r\n",
92 b"Hello, world!",
93 ]
94 )
95
96 with ConnectionPool(network_backend=network_backend) as pool:
97 # Sending an initial request, which once complete will not return to the pool.
98 with pool.stream(
99 "GET", "https://example.com/", headers={"Connection": "close"}
100 ) as response:
101 info = [repr(c) for c in pool.connections]
102 assert info == [
103 "<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
104 ]
105 response.read()
106
107 assert response.status == 200
108 assert response.content == b"Hello, world!"
109 info = [repr(c) for c in pool.connections]
110 assert info == []
111
112
113
114 def test_trace_request():
115 """
116 The 'trace' request extension allows a callback function to inspect the
117 internal events that occur while sending a request.
118 """
119 network_backend = MockBackend(
120 [
121 b"HTTP/1.1 200 OK\r\n",
122 b"Content-Type: plain/text\r\n",
123 b"Content-Length: 13\r\n",
124 b"\r\n",
125 b"Hello, world!",
126 ]
127 )
128
129 called = []
130
131 def trace(name, kwargs):
132 called.append(name)
133
134 with ConnectionPool(network_backend=network_backend) as pool:
135 pool.request("GET", "https://example.com/", extensions={"trace": trace})
136
137 assert called == [
138 "connection.connect_tcp.started",
139 "connection.connect_tcp.complete",
140 "connection.start_tls.started",
141 "connection.start_tls.complete",
142 "http11.send_request_headers.started",
143 "http11.send_request_headers.complete",
144 "http11.send_request_body.started",
145 "http11.send_request_body.complete",
146 "http11.receive_response_headers.started",
147 "http11.receive_response_headers.complete",
148 "http11.receive_response_body.started",
149 "http11.receive_response_body.complete",
150 "http11.response_closed.started",
151 "http11.response_closed.complete",
152 ]
153
154
155
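
Since the trace callback receives paired `.started`/`.complete` events, it can do more than record names. A sketch of a callback that measures per-phase wall-clock timings, assuming network access (the `timing_trace` helper is illustrative, not part of the library):

```python
import time

import httpcore

timings = {}

def timing_trace(name, kwargs):
    # Pair '.started' and '.complete' events to time each internal phase.
    phase, _, event = name.rpartition(".")
    if event == "started":
        timings[phase] = time.monotonic()
    elif event == "complete":
        timings[phase] = time.monotonic() - timings[phase]

with httpcore.ConnectionPool() as pool:
    pool.request("GET", "https://example.com/", extensions={"trace": timing_trace})

print(timings)  # e.g. {'connection.connect_tcp': 0.021, ...}
```
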
156 def test_connection_pool_with_http_exception():
157 """
158 Connections for HTTP/1.1 requests that raise an exception while handling
159 the response should not be returned to the pool.
160 """
161 network_backend = MockBackend([b"Wait, this isn't valid HTTP!"])
162
163 called = []
164
165 def trace(name, kwargs):
166 called.append(name)
167
168 with ConnectionPool(network_backend=network_backend) as pool:
169 # Sending an initial request, which once complete will not return to the pool.
170 with pytest.raises(Exception):
171 pool.request(
172 "GET", "https://example.com/", extensions={"trace": trace}
173 )
174
175 info = [repr(c) for c in pool.connections]
176 assert info == []
177
178 assert called == [
179 "connection.connect_tcp.started",
180 "connection.connect_tcp.complete",
181 "connection.start_tls.started",
182 "connection.start_tls.complete",
183 "http11.send_request_headers.started",
184 "http11.send_request_headers.complete",
185 "http11.send_request_body.started",
186 "http11.send_request_body.complete",
187 "http11.receive_response_headers.started",
188 "http11.receive_response_headers.failed",
189 "http11.response_closed.started",
190 "http11.response_closed.complete",
191 ]
192
193
194
195 def test_connection_pool_with_connect_exception():
196 """
197 Connections that raise an exception while being established should never
198 be added to the connection pool.
199 """
200
201 class FailedConnectBackend(MockBackend):
202 def connect_tcp(
203 self, host: str, port: int, timeout: Optional[float] = None, local_address: Optional[str] = None
204 ):
205 raise ConnectError("Could not connect")
206
207 network_backend = FailedConnectBackend([])
208
209 called = []
210
211 def trace(name, kwargs):
212 called.append(name)
213
214 with ConnectionPool(network_backend=network_backend) as pool:
215 # Sending an initial request, which once complete will not return to the pool.
216 with pytest.raises(Exception):
217 pool.request(
218 "GET", "https://example.com/", extensions={"trace": trace}
219 )
220
221 info = [repr(c) for c in pool.connections]
222 assert info == []
223
224 assert called == [
225 "connection.connect_tcp.started",
226 "connection.connect_tcp.failed",
227 ]
228
229
230
231 def test_connection_pool_with_immediate_expiry():
232 """
233 Connection pools with keepalive_expiry=0.0 should immediately expire
234 keep-alive connections.
235 """
236 network_backend = MockBackend(
237 [
238 b"HTTP/1.1 200 OK\r\n",
239 b"Content-Type: plain/text\r\n",
240 b"Content-Length: 13\r\n",
241 b"\r\n",
242 b"Hello, world!",
243 ]
244 )
245
246 with ConnectionPool(
247 keepalive_expiry=0.0,
248 network_backend=network_backend,
249 ) as pool:
250 # Sending an initial request, which once complete will not return to the pool.
251 with pool.stream("GET", "https://example.com/") as response:
252 info = [repr(c) for c in pool.connections]
253 assert info == [
254 "<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
255 ]
256 response.read()
257
258 assert response.status == 200
259 assert response.content == b"Hello, world!"
260 info = [repr(c) for c in pool.connections]
261 assert info == []
262
263
264
265 def test_connection_pool_with_no_keepalive_connections_allowed():
266 """
267 When 'max_keepalive_connections=0' is used, IDLE connections should not
268 be returned to the pool.
269 """
270 network_backend = MockBackend(
271 [
272 b"HTTP/1.1 200 OK\r\n",
273 b"Content-Type: plain/text\r\n",
274 b"Content-Length: 13\r\n",
275 b"\r\n",
276 b"Hello, world!",
277 ]
278 )
279
280 with ConnectionPool(
281 max_keepalive_connections=0, network_backend=network_backend
282 ) as pool:
283 # Sending an initial request, which once complete will not return to the pool.
284 with pool.stream("GET", "https://example.com/") as response:
285 info = [repr(c) for c in pool.connections]
286 assert info == [
287 "<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
288 ]
289 response.read()
290
291 assert response.status == 200
292 assert response.content == b"Hello, world!"
293 info = [repr(c) for c in pool.connections]
294 assert info == []
295
296
297
298 def test_connection_pool_concurrency():
299 """
300 Concurrent HTTP/1.1 requests must never exceed the maximum number of
301 allowable connections in the pool.
302 """
303 network_backend = MockBackend(
304 [
305 b"HTTP/1.1 200 OK\r\n",
306 b"Content-Type: plain/text\r\n",
307 b"Content-Length: 13\r\n",
308 b"\r\n",
309 b"Hello, world!",
310 ]
311 )
312
313 def fetch(pool, domain, info_list):
314 with pool.stream("GET", f"http://{domain}/") as response:
315 info = [repr(c) for c in pool.connections]
316 info_list.append(info)
317 response.read()
318
319 with ConnectionPool(
320 max_connections=1, network_backend=network_backend
321 ) as pool:
322 info_list: List[List[str]] = []
323 with concurrency.open_nursery() as nursery:
324 for domain in ["a.com", "b.com", "c.com", "d.com", "e.com"]:
325 nursery.start_soon(fetch, pool, domain, info_list)
326
327 # Check that each time we inspect the connection pool, only a
328 # single connection was established.
329 for item in info_list:
330 assert len(item) == 1
331 assert item[0] in [
332 "<HTTPConnection ['http://a.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
333 "<HTTPConnection ['http://b.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
334 "<HTTPConnection ['http://c.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
335 "<HTTPConnection ['http://d.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
336 "<HTTPConnection ['http://e.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
337 ]
338
339
340
341 def test_connection_pool_concurrency_same_domain_closing():
342 """
343 Concurrent HTTP/1.1 requests must never exceed the maximum number of
344 allowable connections in the pool.
345 """
346 network_backend = MockBackend(
347 [
348 b"HTTP/1.1 200 OK\r\n",
349 b"Content-Type: plain/text\r\n",
350 b"Content-Length: 13\r\n",
351 b"\r\n",
352 b"Hello, world!",
353 ]
354 )
355
356 def fetch(pool, domain, info_list):
357 with pool.stream("GET", f"https://{domain}/") as response:
358 info = [repr(c) for c in pool.connections]
359 info_list.append(info)
360 response.read()
361
362 with ConnectionPool(
363 max_connections=1, network_backend=network_backend, http2=True
364 ) as pool:
365 info_list: List[List[str]] = []
366 with concurrency.open_nursery() as nursery:
367 for domain in ["a.com", "a.com", "a.com", "a.com", "a.com"]:
368 nursery.start_soon(fetch, pool, domain, info_list)
369
370 # Check that each time we inspect the connection pool, only a
371 # single connection was established.
372 for item in info_list:
373 assert len(item) == 1
374 assert item[0] in [
375 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>",
376 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 2]>",
377 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 3]>",
378 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 4]>",
379 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 5]>",
380 ]
381
382
383
384 def test_connection_pool_concurrency_same_domain_keepalive():
385 """
386 Concurrent HTTP/1.1 requests must never exceed the maximum number of
387 allowable connections in the pool.
388 """
389 network_backend = MockBackend(
390 [
391 b"HTTP/1.1 200 OK\r\n",
392 b"Content-Type: plain/text\r\n",
393 b"Content-Length: 13\r\n",
394 b"\r\n",
395 b"Hello, world!",
396 ]
397 * 5
398 )
399
400 def fetch(pool, domain, info_list):
401 with pool.stream("GET", f"https://{domain}/") as response:
402 info = [repr(c) for c in pool.connections]
403 info_list.append(info)
404 response.read()
405
406 with ConnectionPool(
407 max_connections=1, network_backend=network_backend, http2=True
408 ) as pool:
409 info_list: List[List[str]] = []
410 with concurrency.open_nursery() as nursery:
411 for domain in ["a.com", "a.com", "a.com", "a.com", "a.com"]:
412 nursery.start_soon(fetch, pool, domain, info_list)
413
414 # Check that each time we inspect the connection pool, only a
415 # single connection was established.
416 for item in info_list:
417 assert len(item) == 1
418 assert item[0] in [
419 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>",
420 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 2]>",
421 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 3]>",
422 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 4]>",
423 "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 5]>",
424 ]
425
426
427
428 def test_unsupported_protocol():
429 with ConnectionPool() as pool:
430 with pytest.raises(UnsupportedProtocol):
431 pool.request("GET", "ftp://www.example.com/")
432
433 with pytest.raises(UnsupportedProtocol):
434 pool.request("GET", "://www.example.com/")
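
`UnsupportedProtocol` is raised eagerly, before any connection is attempted, so callers can catch it like any other request error. A trivial sketch:

```python
import httpcore

with httpcore.ConnectionPool() as pool:
    try:
        pool.request("GET", "ftp://www.example.com/")
    except httpcore.UnsupportedProtocol as exc:
        print(f"Cannot fetch this URL: {exc}")
```
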
0 import pytest
1
2 from httpcore import (
3 HTTP11Connection,
4 ConnectionNotAvailable,
5 LocalProtocolError,
6 Origin,
7 RemoteProtocolError,
8 )
9 from httpcore.backends.mock import MockStream
10
11
12
13 def test_http11_connection():
14 origin = Origin(b"https", b"example.com", 443)
15 stream = MockStream(
16 [
17 b"HTTP/1.1 200 OK\r\n",
18 b"Content-Type: plain/text\r\n",
19 b"Content-Length: 13\r\n",
20 b"\r\n",
21 b"Hello, world!",
22 ]
23 )
24 with HTTP11Connection(
25 origin=origin, stream=stream, keepalive_expiry=5.0
26 ) as conn:
27 response = conn.request("GET", "https://example.com/")
28 assert response.status == 200
29 assert response.content == b"Hello, world!"
30
31 assert conn.is_idle()
32 assert not conn.is_closed()
33 assert conn.is_available()
34 assert not conn.has_expired()
35 assert (
36 repr(conn)
37 == "<HTTP11Connection ['https://example.com:443', IDLE, Request Count: 1]>"
38 )
39
40
41
42 def test_http11_connection_unread_response():
43 """
44 If the client releases the response without reading it to termination,
45 then the connection will not be reusable.
46 """
47 origin = Origin(b"https", b"example.com", 443)
48 stream = MockStream(
49 [
50 b"HTTP/1.1 200 OK\r\n",
51 b"Content-Type: plain/text\r\n",
52 b"Content-Length: 13\r\n",
53 b"\r\n",
54 b"Hello, world!",
55 ]
56 )
57 with HTTP11Connection(origin=origin, stream=stream) as conn:
58 with conn.stream("GET", "https://example.com/") as response:
59 assert response.status == 200
60
61 assert not conn.is_idle()
62 assert conn.is_closed()
63 assert not conn.is_available()
64 assert not conn.has_expired()
65 assert (
66 repr(conn)
67 == "<HTTP11Connection ['https://example.com:443', CLOSED, Request Count: 1]>"
68 )
69
70
71
72 def test_http11_connection_with_remote_protocol_error():
73 """
74 If a remote protocol error occurs, then no response will be returned,
75 and the connection will not be reusable.
76 """
77 origin = Origin(b"https", b"example.com", 443)
78 stream = MockStream([b"Wait, this isn't valid HTTP!", b""])
79 with HTTP11Connection(origin=origin, stream=stream) as conn:
80 with pytest.raises(RemoteProtocolError):
81 conn.request("GET", "https://example.com/")
82
83 assert not conn.is_idle()
84 assert conn.is_closed()
85 assert not conn.is_available()
86 assert not conn.has_expired()
87 assert (
88 repr(conn)
89 == "<HTTP11Connection ['https://example.com:443', CLOSED, Request Count: 1]>"
90 )
91
92
93
94 def test_http11_connection_with_local_protocol_error():
95 """
96 If a local protocol error occurs, then no response will be returned,
97 and the connection will not be reusable.
98 """
99 origin = Origin(b"https", b"example.com", 443)
100 stream = MockStream(
101 [
102 b"HTTP/1.1 200 OK\r\n",
103 b"Content-Type: plain/text\r\n",
104 b"Content-Length: 13\r\n",
105 b"\r\n",
106 b"Hello, world!",
107 ]
108 )
109 with HTTP11Connection(origin=origin, stream=stream) as conn:
110 with pytest.raises(LocalProtocolError) as exc_info:
111 conn.request("GET", "https://example.com/", headers={"Host": "\0"})
112
113 assert str(exc_info.value) == "Illegal header value b'\\x00'"
114
115 assert not conn.is_idle()
116 assert conn.is_closed()
117 assert not conn.is_available()
118 assert not conn.has_expired()
119 assert (
120 repr(conn)
121 == "<HTTP11Connection ['https://example.com:443', CLOSED, Request Count: 1]>"
122 )
123
124
125
126 def test_http11_connection_handles_one_active_request():
127 """
128 Attempting to send a request while one is already in-flight will raise
129 a ConnectionNotAvailable exception.
130 """
131 origin = Origin(b"https", b"example.com", 443)
132 stream = MockStream(
133 [
134 b"HTTP/1.1 200 OK\r\n",
135 b"Content-Type: plain/text\r\n",
136 b"Content-Length: 13\r\n",
137 b"\r\n",
138 b"Hello, world!",
139 ]
140 )
141 with HTTP11Connection(origin=origin, stream=stream) as conn:
142 with conn.stream("GET", "https://example.com/"):
143 with pytest.raises(ConnectionNotAvailable):
144 conn.request("GET", "https://example.com/")
145
146
147
148 def test_http11_connection_attempt_close():
149 """
150 A connection can only be closed when it is idle.
151 """
152 origin = Origin(b"https", b"example.com", 443)
153 stream = MockStream(
154 [
155 b"HTTP/1.1 200 OK\r\n",
156 b"Content-Type: plain/text\r\n",
157 b"Content-Length: 13\r\n",
158 b"\r\n",
159 b"Hello, world!",
160 ]
161 )
162 with HTTP11Connection(origin=origin, stream=stream) as conn:
163 with conn.stream("GET", "https://example.com/") as response:
164 response.read()
165 assert response.status == 200
166 assert response.content == b"Hello, world!"
167
168
169
170 def test_http11_request_to_incorrect_origin():
171 """
172 A connection can only send requests to whichever origin it is connected to.
173 """
174 origin = Origin(b"https", b"example.com", 443)
175 stream = MockStream([])
176 with HTTP11Connection(origin=origin, stream=stream) as conn:
177 with pytest.raises(RuntimeError):
178 conn.request("GET", "https://other.com/")
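
These tests hand `HTTP11Connection` a pre-built mock stream. Outside of tests, the higher-level `HTTPConnection` (seen earlier in this diff) establishes its own network stream instead. A minimal sketch, assuming network access and that the default arguments suffice:

```python
import httpcore

origin = httpcore.Origin(b"https", b"example.com", 443)
with httpcore.HTTPConnection(origin=origin) as conn:
    # The underlying TCP/TLS stream is established on the first request.
    response = conn.request("GET", "https://example.com/")
    assert response.status == 200
```
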
0 import hpack
1 import hyperframe.frame
2 import pytest
3
4 from httpcore import (
5 HTTP2Connection,
6 ConnectionNotAvailable,
7 Origin,
8 RemoteProtocolError,
9 )
10 from httpcore.backends.mock import MockStream
11
12
13
14 def test_http2_connection():
15 origin = Origin(b"https", b"example.com", 443)
16 stream = MockStream(
17 [
18 hyperframe.frame.SettingsFrame().serialize(),
19 hyperframe.frame.HeadersFrame(
20 stream_id=1,
21 data=hpack.Encoder().encode(
22 [
23 (b":status", b"200"),
24 (b"content-type", b"plain/text"),
25 ]
26 ),
27 flags=["END_HEADERS"],
28 ).serialize(),
29 hyperframe.frame.DataFrame(
30 stream_id=1, data=b"Hello, world!", flags=["END_STREAM"]
31 ).serialize(),
32 ]
33 )
34 with HTTP2Connection(
35 origin=origin, stream=stream, keepalive_expiry=5.0
36 ) as conn:
37 response = conn.request("GET", "https://example.com/")
38 assert response.status == 200
39 assert response.content == b"Hello, world!"
40
41 assert conn.is_idle()
42 assert conn.is_available()
43 assert not conn.is_closed()
44 assert not conn.has_expired()
45 assert (
46 conn.info() == "'https://example.com:443', HTTP/2, IDLE, Request Count: 1"
47 )
48 assert (
49 repr(conn)
50 == "<HTTP2Connection ['https://example.com:443', IDLE, Request Count: 1]>"
51 )
52
53
54
55 def test_http2_connection_post_request():
56 origin = Origin(b"https", b"example.com", 443)
57 stream = MockStream(
58 [
59 hyperframe.frame.SettingsFrame().serialize(),
60 hyperframe.frame.HeadersFrame(
61 stream_id=1,
62 data=hpack.Encoder().encode(
63 [
64 (b":status", b"200"),
65 (b"content-type", b"plain/text"),
66 ]
67 ),
68 flags=["END_HEADERS"],
69 ).serialize(),
70 hyperframe.frame.DataFrame(
71 stream_id=1, data=b"Hello, world!", flags=["END_STREAM"]
72 ).serialize(),
73 ]
74 )
75 with HTTP2Connection(origin=origin, stream=stream) as conn:
76 response = conn.request(
77 "POST",
78 "https://example.com/",
79 headers={b"content-length": b"18"},
80 content=b'{"data": "upload"}',
81 )
82 assert response.status == 200
83 assert response.content == b"Hello, world!"
84
85
86
87 def test_http2_connection_with_remote_protocol_error():
88 """
89 If a remote protocol error occurs, then no response will be returned,
90 and the connection will not be reusable.
91 """
92 origin = Origin(b"https", b"example.com", 443)
93 stream = MockStream([b"Wait, this isn't valid HTTP!", b""])
94 with HTTP2Connection(origin=origin, stream=stream) as conn:
95 with pytest.raises(RemoteProtocolError):
96 conn.request("GET", "https://example.com/")
97
98
99
100 def test_http2_connection_with_stream_cancelled():
101 """
102 If the server resets the stream, then no response will be returned,
103 and the connection will not be reusable.
104 """
105 origin = Origin(b"https", b"example.com", 443)
106 stream = MockStream(
107 [
108 hyperframe.frame.SettingsFrame().serialize(),
109 hyperframe.frame.HeadersFrame(
110 stream_id=1,
111 data=hpack.Encoder().encode(
112 [
113 (b":status", b"200"),
114 (b"content-type", b"plain/text"),
115 ]
116 ),
117 flags=["END_HEADERS"],
118 ).serialize(),
119 hyperframe.frame.RstStreamFrame(stream_id=1, error_code=8).serialize(),
120 b"",
121 ]
122 )
123 with HTTP2Connection(origin=origin, stream=stream) as conn:
124 with pytest.raises(RemoteProtocolError):
125 conn.request("GET", "https://example.com/")
126
127
128
129 def test_http2_connection_with_flow_control():
130 origin = Origin(b"https", b"example.com", 443)
131 stream = MockStream(
132 [
133 hyperframe.frame.SettingsFrame().serialize(),
134 # Available flow: 65,535
135 hyperframe.frame.WindowUpdateFrame(
136 stream_id=0, window_increment=10_000
137 ).serialize(),
138 hyperframe.frame.WindowUpdateFrame(
139 stream_id=1, window_increment=10_000
140 ).serialize(),
141 # Available flow: 75,535
142 hyperframe.frame.WindowUpdateFrame(
143 stream_id=0, window_increment=10_000
144 ).serialize(),
145 hyperframe.frame.WindowUpdateFrame(
146 stream_id=1, window_increment=10_000
147 ).serialize(),
148 # Available flow: 85,535
149 hyperframe.frame.WindowUpdateFrame(
150 stream_id=0, window_increment=10_000
151 ).serialize(),
152 hyperframe.frame.WindowUpdateFrame(
153 stream_id=1, window_increment=10_000
154 ).serialize(),
155 # Available flow: 95,535
156 hyperframe.frame.WindowUpdateFrame(
157 stream_id=0, window_increment=10_000
158 ).serialize(),
159 hyperframe.frame.WindowUpdateFrame(
160 stream_id=1, window_increment=10_000
161 ).serialize(),
162 # Available flow: 105,535
163 hyperframe.frame.HeadersFrame(
164 stream_id=1,
165 data=hpack.Encoder().encode(
166 [
167 (b":status", b"200"),
168 (b"content-type", b"plain/text"),
169 ]
170 ),
171 flags=["END_HEADERS"],
172 ).serialize(),
173 hyperframe.frame.DataFrame(
174 stream_id=1, data=b"100,000 bytes received", flags=["END_STREAM"]
175 ).serialize(),
176 ]
177 )
178 with HTTP2Connection(origin=origin, stream=stream) as conn:
179 response = conn.request(
180 "POST",
181 "https://example.com/",
182 content=b"x" * 100_000,
183 )
184 assert response.status == 200
185 assert response.content == b"100,000 bytes received"
186
187
188
189 def test_http2_connection_attempt_close():
190 """
191 A connection can only be closed when it is idle.
192 """
193 origin = Origin(b"https", b"example.com", 443)
194 stream = MockStream(
195 [
196 hyperframe.frame.SettingsFrame().serialize(),
197 hyperframe.frame.HeadersFrame(
198 stream_id=1,
199 data=hpack.Encoder().encode(
200 [
201 (b":status", b"200"),
202 (b"content-type", b"plain/text"),
203 ]
204 ),
205 flags=["END_HEADERS"],
206 ).serialize(),
207 hyperframe.frame.DataFrame(
208 stream_id=1, data=b"Hello, world!", flags=["END_STREAM"]
209 ).serialize(),
210 ]
211 )
212 with HTTP2Connection(origin=origin, stream=stream) as conn:
213 with conn.stream("GET", "https://example.com/") as response:
214 response.read()
215 assert response.status == 200
216 assert response.content == b"Hello, world!"
217
218 conn.close()
219 with pytest.raises(ConnectionNotAvailable):
220 conn.request("GET", "https://example.com/")
221
222
223
224 def test_http2_request_to_incorrect_origin():
225 """
226 A connection can only send requests to whichever origin it is connected to.
227 """
228 origin = Origin(b"https", b"example.com", 443)
229 stream = MockStream([])
230 with HTTP2Connection(origin=origin, stream=stream) as conn:
231 with pytest.raises(RuntimeError):
232 conn.request("GET", "https://other.com/")
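
In these tests the HTTP/2 exchange is composed frame-by-frame with `hyperframe` and `hpack`; against a real server the protocol is negotiated via ALPN instead. A sketch of enabling HTTP/2 on a pool, assuming the optional `h2` dependency is installed (`pip install httpcore[http2]`), that the server supports it, and that the `http_version` response extension reports what was negotiated:

```python
import httpcore

with httpcore.ConnectionPool(http2=True) as pool:
    response = pool.request("GET", "https://example.com/")
    # Reports the negotiated protocol, which may still be HTTP/1.1.
    print(response.extensions["http_version"])
```
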
0 import pytest
1
2 from httpcore import HTTPProxy, Origin, ProxyError
3 from httpcore.backends.mock import MockBackend
4
5
6
7 def test_proxy_forwarding():
8 """
9 Send an HTTP request via a proxy.
10 """
11 network_backend = MockBackend(
12 [
13 b"HTTP/1.1 200 OK\r\n",
14 b"Content-Type: plain/text\r\n",
15 b"Content-Length: 13\r\n",
16 b"\r\n",
17 b"Hello, world!",
18 ]
19 )
20
21 with HTTPProxy(
22 proxy_url="http://localhost:8080/",
23 max_connections=10,
24 network_backend=network_backend,
25 ) as proxy:
26 # Sending an initial request, which once complete will return to the pool, IDLE.
27 with proxy.stream("GET", "http://example.com/") as response:
28 info = [repr(c) for c in proxy.connections]
29 assert info == [
30 "<ForwardHTTPConnection ['http://localhost:8080', HTTP/1.1, ACTIVE, Request Count: 1]>"
31 ]
32 response.read()
33
34 assert response.status == 200
35 assert response.content == b"Hello, world!"
36 info = [repr(c) for c in proxy.connections]
37 assert info == [
38 "<ForwardHTTPConnection ['http://localhost:8080', HTTP/1.1, IDLE, Request Count: 1]>"
39 ]
40 assert proxy.connections[0].is_idle()
41 assert proxy.connections[0].is_available()
42 assert not proxy.connections[0].is_closed()
43
44 # A connection on a forwarding proxy can handle HTTP requests to any host.
45 assert proxy.connections[0].can_handle_request(
46 Origin(b"http", b"example.com", 80)
47 )
48 assert proxy.connections[0].can_handle_request(
49 Origin(b"http", b"other.com", 80)
50 )
51 assert not proxy.connections[0].can_handle_request(
52 Origin(b"https", b"example.com", 443)
53 )
54 assert not proxy.connections[0].can_handle_request(
55 Origin(b"https", b"other.com", 443)
56 )
57
58
59
60 def test_proxy_tunneling():
61 """
62 Send an HTTPS request via a proxy.
63 """
64 network_backend = MockBackend(
65 [
66 b"HTTP/1.1 200 OK\r\n" b"\r\n",
67 b"HTTP/1.1 200 OK\r\n",
68 b"Content-Type: plain/text\r\n",
69 b"Content-Length: 13\r\n",
70 b"\r\n",
71 b"Hello, world!",
72 ]
73 )
74
75 with HTTPProxy(
76 proxy_url="http://localhost:8080/",
77 max_connections=10,
78 network_backend=network_backend,
79 ) as proxy:
80 # Sending an initial request, which once complete will return to the pool, IDLE.
81 with proxy.stream("GET", "https://example.com/") as response:
82 info = [repr(c) for c in proxy.connections]
83 assert info == [
84 "<TunnelHTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
85 ]
86 response.read()
87
88 assert response.status == 200
89 assert response.content == b"Hello, world!"
90 info = [repr(c) for c in proxy.connections]
91 assert info == [
92 "<TunnelHTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 1]>"
93 ]
94 assert proxy.connections[0].is_idle()
95 assert proxy.connections[0].is_available()
96 assert not proxy.connections[0].is_closed()
97
98 # A connection on a tunneled proxy can only handle HTTPS requests to the same origin.
99 assert not proxy.connections[0].can_handle_request(
100 Origin(b"http", b"example.com", 80)
101 )
102 assert not proxy.connections[0].can_handle_request(
103 Origin(b"http", b"other.com", 80)
104 )
105 assert proxy.connections[0].can_handle_request(
106 Origin(b"https", b"example.com", 443)
107 )
108 assert not proxy.connections[0].can_handle_request(
109 Origin(b"https", b"other.com", 443)
110 )
111
112
113
114 def test_proxy_tunneling_with_403():
115 """
116 Attempt to send an HTTPS request via a proxy that rejects the CONNECT.
117 """
118 network_backend = MockBackend(
119 [
120 b"HTTP/1.1 403 Permission Denied\r\n" b"\r\n",
121 ]
122 )
123
124 with HTTPProxy(
125 proxy_url="http://localhost:8080/",
126 max_connections=10,
127 network_backend=network_backend,
128 ) as proxy:
129 with pytest.raises(ProxyError) as exc_info:
130 proxy.request("GET", "https://example.com/")
131 assert str(exc_info.value) == "403 Permission Denied"
132 assert not proxy.connections
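
The same API works against a real forward proxy; only the proxy URL changes. A sketch, assuming a proxy is actually listening on localhost:8080:

```python
import httpcore

with httpcore.HTTPProxy(proxy_url="http://localhost:8080/") as proxy:
    # Plain 'http' URLs are forwarded; 'https' URLs are tunnelled via CONNECT.
    response = proxy.request("GET", "https://example.com/")
    assert response.status == 200
```
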
0 import ssl
1
2 import pytest
3
4 from httpcore import ConnectionPool
5
6
7
8 def test_request(httpbin):
9 with ConnectionPool() as pool:
10 response = pool.request("GET", httpbin.url)
11 assert response.status == 200
12
13
14
15 def test_ssl_request(httpbin_secure):
16 ssl_context = ssl.create_default_context()
17 ssl_context.check_hostname = False
18 ssl_context.verify_mode = ssl.CERT_NONE
19 with ConnectionPool(ssl_context=ssl_context) as pool:
20 response = pool.request("GET", httpbin_secure.url)
21 assert response.status == 200
22
23
24
25 def test_extra_info(httpbin_secure):
26 ssl_context = ssl.create_default_context()
27 ssl_context.check_hostname = False
28 ssl_context.verify_mode = ssl.CERT_NONE
29 with ConnectionPool(ssl_context=ssl_context) as pool:
30 with pool.stream("GET", httpbin_secure.url) as response:
31 assert response.status == 200
32 stream = response.extensions["network_stream"]
33
34 ssl_object = stream.get_extra_info("ssl_object")
35 assert ssl_object.version() == "TLSv1.3"
36
37 local_addr = stream.get_extra_info("client_addr")
38 assert local_addr[0] == "127.0.0.1"
39
40 remote_addr = stream.get_extra_info("server_addr")
41 assert "https://%s:%d" % remote_addr == httpbin_secure.url
42
43 sock = stream.get_extra_info("socket")
44 assert hasattr(sock, "family")
45 assert hasattr(sock, "type")
46
47 invalid = stream.get_extra_info("invalid")
48 assert invalid is None
49
50 stream.get_extra_info("is_readable")
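
The tests above disable certificate verification in order to talk to the local httpbin fixture. For real traffic you would normally verify; a sketch of an explicitly verifying context built on `certifi` (an install dependency since 0.14.1), assuming network access:

```python
import ssl

import certifi

import httpcore

ssl_context = ssl.create_default_context(cafile=certifi.where())
with httpcore.ConnectionPool(ssl_context=ssl_context) as pool:
    response = pool.request("GET", "https://example.org/")
    assert response.status == 200
```
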
+0
-0
tests/async_tests/__init__.py less more
(Empty file)
+0
-194
tests/async_tests/test_connection_pool.py less more
0 from typing import AsyncIterator, Tuple
1
2 import pytest
3
4 import httpcore
5 from httpcore._async.base import ConnectionState
6 from httpcore._types import URL, Headers
7
8
9 class MockConnection(object):
10 def __init__(self, http_version):
11 self.origin = (b"http", b"example.org", 80)
12 self.state = ConnectionState.PENDING
13 self.is_http11 = http_version == "HTTP/1.1"
14 self.is_http2 = http_version == "HTTP/2"
15 self.stream_count = 0
16
17 async def handle_async_request(
18 self,
19 method: bytes,
20 url: URL,
21 headers: Headers = None,
22 stream: httpcore.AsyncByteStream = None,
23 extensions: dict = None,
24 ) -> Tuple[int, Headers, httpcore.AsyncByteStream, dict]:
25 self.state = ConnectionState.ACTIVE
26 self.stream_count += 1
27
28 async def on_close():
29 self.stream_count -= 1
30 if self.stream_count == 0:
31 self.state = ConnectionState.IDLE
32
33 async def aiterator() -> AsyncIterator[bytes]:
34 yield b""
35
36 stream = httpcore.AsyncIteratorByteStream(
37 aiterator=aiterator(), aclose_func=on_close
38 )
39
40 return 200, [], stream, {}
41
42 async def aclose(self):
43 pass
44
45 def info(self) -> str:
46 return self.state.name
47
48 def is_available(self):
49 if self.is_http11:
50 return self.state == ConnectionState.IDLE
51 else:
52 return self.state != ConnectionState.CLOSED
53
54 def should_close(self):
55 return False
56
57 def is_idle(self):
58 return self.state == ConnectionState.IDLE
59
60 def is_closed(self):
61 return False
62
63
64 class ConnectionPool(httpcore.AsyncConnectionPool):
65 def __init__(self, http_version: str):
66 super().__init__()
67 self.http_version = http_version
68 assert http_version in ("HTTP/1.1", "HTTP/2")
69
70 def _create_connection(self, **kwargs):
71 return MockConnection(self.http_version)
72
73
74 async def read_body(stream: httpcore.AsyncByteStream) -> bytes:
75 try:
76 body = []
77 async for chunk in stream:
78 body.append(chunk)
79 return b"".join(body)
80 finally:
81 await stream.aclose()
82
83
84 @pytest.mark.trio
85 @pytest.mark.parametrize("http_version", ["HTTP/1.1", "HTTP/2"])
86 async def test_sequential_requests(http_version) -> None:
87 async with ConnectionPool(http_version=http_version) as http:
88 info = await http.get_connection_info()
89 assert info == {}
90
91 response = await http.handle_async_request(
92 method=b"GET",
93 url=(b"http", b"example.org", None, b"/"),
94 headers=[],
95 stream=httpcore.ByteStream(b""),
96 extensions={},
97 )
98 status_code, headers, stream, extensions = response
99 info = await http.get_connection_info()
100 assert info == {"http://example.org": ["ACTIVE"]}
101
102 await read_body(stream)
103 info = await http.get_connection_info()
104 assert info == {"http://example.org": ["IDLE"]}
105
106 response = await http.handle_async_request(
107 method=b"GET",
108 url=(b"http", b"example.org", None, b"/"),
109 headers=[],
110 stream=httpcore.ByteStream(b""),
111 extensions={},
112 )
113 status_code, headers, stream, extensions = response
114 info = await http.get_connection_info()
115 assert info == {"http://example.org": ["ACTIVE"]}
116
117 await read_body(stream)
118 info = await http.get_connection_info()
119 assert info == {"http://example.org": ["IDLE"]}
120
121
122 @pytest.mark.trio
123 async def test_concurrent_requests_h11() -> None:
124 async with ConnectionPool(http_version="HTTP/1.1") as http:
125 info = await http.get_connection_info()
126 assert info == {}
127
128 response_1 = await http.handle_async_request(
129 method=b"GET",
130 url=(b"http", b"example.org", None, b"/"),
131 headers=[],
132 stream=httpcore.ByteStream(b""),
133 extensions={},
134 )
135 status_code_1, headers_1, stream_1, ext_1 = response_1
136 info = await http.get_connection_info()
137 assert info == {"http://example.org": ["ACTIVE"]}
138
139 response_2 = await http.handle_async_request(
140 method=b"GET",
141 url=(b"http", b"example.org", None, b"/"),
142 headers=[],
143 stream=httpcore.ByteStream(b""),
144 extensions={},
145 )
146 status_code_2, headers_2, stream_2, ext_2 = response_2
147 info = await http.get_connection_info()
148 assert info == {"http://example.org": ["ACTIVE", "ACTIVE"]}
149
150 await read_body(stream_1)
151 info = await http.get_connection_info()
152 assert info == {"http://example.org": ["ACTIVE", "IDLE"]}
153
154 await read_body(stream_2)
155 info = await http.get_connection_info()
156 assert info == {"http://example.org": ["IDLE", "IDLE"]}
157
158
159 @pytest.mark.trio
160 async def test_concurrent_requests_h2() -> None:
161 async with ConnectionPool(http_version="HTTP/2") as http:
162 info = await http.get_connection_info()
163 assert info == {}
164
165 response_1 = await http.handle_async_request(
166 method=b"GET",
167 url=(b"http", b"example.org", None, b"/"),
168 headers=[],
169 stream=httpcore.ByteStream(b""),
170 extensions={},
171 )
172 status_code_1, headers_1, stream_1, ext_1 = response_1
173 info = await http.get_connection_info()
174 assert info == {"http://example.org": ["ACTIVE"]}
175
176 response_2 = await http.handle_async_request(
177 method=b"GET",
178 url=(b"http", b"example.org", None, b"/"),
179 headers=[],
180 stream=httpcore.ByteStream(b""),
181 extensions={},
182 )
183 status_code_2, headers_2, stream_2, ext_2 = response_2
184 info = await http.get_connection_info()
185 assert info == {"http://example.org": ["ACTIVE"]}
186
187 await read_body(stream_1)
188 info = await http.get_connection_info()
189 assert info == {"http://example.org": ["ACTIVE"]}
190
191 await read_body(stream_2)
192 info = await http.get_connection_info()
193 assert info == {"http://example.org": ["IDLE"]}
+0
-317
tests/async_tests/test_http11.py less more
0 import collections
1
2 import pytest
3
4 import httpcore
5 from httpcore._backends.auto import AsyncBackend, AsyncLock, AsyncSocketStream
6
7
8 class MockStream(AsyncSocketStream):
9 def __init__(self, http_buffer, disconnect):
10 self.read_buffer = collections.deque(http_buffer)
11 self.disconnect = disconnect
12
13 def get_http_version(self) -> str:
14 return "HTTP/1.1"
15
16 async def write(self, data, timeout):
17 pass
18
19 async def read(self, n, timeout):
20 return self.read_buffer.popleft()
21
22 async def aclose(self):
23 pass
24
25 def is_readable(self):
26 return self.disconnect
27
28
29 class MockLock(AsyncLock):
30 async def release(self) -> None:
31 pass
32
33 async def acquire(self) -> None:
34 pass
35
36
37 class MockBackend(AsyncBackend):
38 def __init__(self, http_buffer, disconnect=False):
39 self.http_buffer = http_buffer
40 self.disconnect = disconnect
41
42 async def open_tcp_stream(
43 self, hostname, port, ssl_context, timeout, *, local_address
44 ):
45 return MockStream(self.http_buffer, self.disconnect)
46
47 def create_lock(self):
48 return MockLock()
49
50
51 @pytest.mark.trio
52 async def test_get_request_with_connection_keepalive() -> None:
53 backend = MockBackend(
54 http_buffer=[
55 b"HTTP/1.1 200 OK\r\n",
56 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
57 b"Server: Apache\r\n",
58 b"Content-Length: 13\r\n",
59 b"Content-Type: text/plain\r\n",
60 b"\r\n",
61 b"Hello, world.",
62 b"HTTP/1.1 200 OK\r\n",
63 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
64 b"Server: Apache\r\n",
65 b"Content-Length: 13\r\n",
66 b"Content-Type: text/plain\r\n",
67 b"\r\n",
68 b"Hello, world.",
69 ]
70 )
71
72 async with httpcore.AsyncConnectionPool(backend=backend) as http:
73 # We're sending a request with a standard keep-alive connection, so
74 # it will remain in the pool once we've sent the request.
75 response = await http.handle_async_request(
76 method=b"GET",
77 url=(b"http", b"example.org", None, b"/"),
78 headers=[(b"Host", b"example.org")],
79 stream=httpcore.ByteStream(b""),
80 extensions={},
81 )
82 status_code, headers, stream, extensions = response
83 body = await stream.aread()
84 assert status_code == 200
85 assert body == b"Hello, world."
86 assert await http.get_connection_info() == {
87 "http://example.org": ["HTTP/1.1, IDLE"]
88 }
89
90 # This second request will go out over the same connection.
91 response = await http.handle_async_request(
92 method=b"GET",
93 url=(b"http", b"example.org", None, b"/"),
94 headers=[(b"Host", b"example.org")],
95 stream=httpcore.ByteStream(b""),
96 extensions={},
97 )
98 status_code, headers, stream, extensions = response
99 body = await stream.aread()
100 assert status_code == 200
101 assert body == b"Hello, world."
102 assert await http.get_connection_info() == {
103 "http://example.org": ["HTTP/1.1, IDLE"]
104 }
105
106
107 @pytest.mark.trio
108 async def test_get_request_with_connection_close_header() -> None:
109 backend = MockBackend(
110 http_buffer=[
111 b"HTTP/1.1 200 OK\r\n",
112 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
113 b"Server: Apache\r\n",
114 b"Content-Length: 13\r\n",
115 b"Content-Type: text/plain\r\n",
116 b"\r\n",
117 b"Hello, world.",
118 b"", # Terminate the connection.
119 ]
120 )
121
122 async with httpcore.AsyncConnectionPool(backend=backend) as http:
123 # We're sending a request with 'Connection: close', so the connection
124 # does not remain in the pool once we've sent the request.
125 response = await http.handle_async_request(
126 method=b"GET",
127 url=(b"http", b"example.org", None, b"/"),
128 headers=[(b"Host", b"example.org"), (b"Connection", b"close")],
129 stream=httpcore.ByteStream(b""),
130 extensions={},
131 )
132 status_code, headers, stream, extensions = response
133 body = await stream.aread()
134 assert status_code == 200
135 assert body == b"Hello, world."
136 assert await http.get_connection_info() == {}
137
138 # The second request will go out over a new connection.
139 response = await http.handle_async_request(
140 method=b"GET",
141 url=(b"http", b"example.org", None, b"/"),
142 headers=[(b"Host", b"example.org"), (b"Connection", b"close")],
143 stream=httpcore.ByteStream(b""),
144 extensions={},
145 )
146 status_code, headers, stream, extensions = response
147 body = await stream.aread()
148 assert status_code == 200
149 assert body == b"Hello, world."
150 assert await http.get_connection_info() == {}
151
152
153 @pytest.mark.trio
154 async def test_get_request_with_socket_disconnect_between_requests() -> None:
155 backend = MockBackend(
156 http_buffer=[
157 b"HTTP/1.1 200 OK\r\n",
158 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
159 b"Server: Apache\r\n",
160 b"Content-Length: 13\r\n",
161 b"Content-Type: text/plain\r\n",
162 b"\r\n",
163 b"Hello, world.",
164 ],
165 disconnect=True,
166 )
167
168 async with httpcore.AsyncConnectionPool(backend=backend) as http:
169 # Send an initial request. We're using a standard keep-alive
170 # connection, so the connection remains in the pool after completion.
171 response = await http.handle_async_request(
172 method=b"GET",
173 url=(b"http", b"example.org", None, b"/"),
174 headers=[(b"Host", b"example.org")],
175 stream=httpcore.ByteStream(b""),
176 extensions={},
177 )
178 status_code, headers, stream, extensions = response
179 body = await stream.aread()
180 assert status_code == 200
181 assert body == b"Hello, world."
182 assert await http.get_connection_info() == {
183 "http://example.org": ["HTTP/1.1, IDLE"]
184 }
185
186 # On sending this second request, at the point of re-acquiring the
187 # connection from the pool the socket indicates that it has disconnected,
188 # so we send the request over a new connection.
189 response = await http.handle_async_request(
190 method=b"GET",
191 url=(b"http", b"example.org", None, b"/"),
192 headers=[(b"Host", b"example.org")],
193 stream=httpcore.ByteStream(b""),
194 extensions={},
195 )
196 status_code, headers, stream, extensions = response
197 body = await stream.aread()
198 assert status_code == 200
199 assert body == b"Hello, world."
200 assert await http.get_connection_info() == {
201 "http://example.org": ["HTTP/1.1, IDLE"]
202 }
203
204
205 @pytest.mark.trio
206 async def test_get_request_with_unclean_close_after_first_request() -> None:
207 backend = MockBackend(
208 http_buffer=[
209 b"HTTP/1.1 200 OK\r\n",
210 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
211 b"Server: Apache\r\n",
212 b"Content-Length: 13\r\n",
213 b"Content-Type: text/plain\r\n",
214 b"\r\n",
215 b"Hello, world.",
216 b"", # Terminate the connection.
217 ],
218 )
219
220 async with httpcore.AsyncConnectionPool(backend=backend) as http:
221 # Send an initial request. We're using a standard keep-alive
222 # connection, so the connection remains in the pool after completion.
223 response = await http.handle_async_request(
224 method=b"GET",
225 url=(b"http", b"example.org", None, b"/"),
226 headers=[(b"Host", b"example.org")],
227 stream=httpcore.ByteStream(b""),
228 extensions={},
229 )
230 status_code, headers, stream, extensions = response
231 body = await stream.aread()
232 assert status_code == 200
233 assert body == b"Hello, world."
234 assert await http.get_connection_info() == {
235 "http://example.org": ["HTTP/1.1, IDLE"]
236 }
237
238 # At this point we successfully write another request, but the socket
239 # read returns `b""`, indicating a premature close.
240 with pytest.raises(httpcore.RemoteProtocolError) as excinfo:
241 await http.handle_async_request(
242 method=b"GET",
243 url=(b"http", b"example.org", None, b"/"),
244 headers=[(b"Host", b"example.org")],
245 stream=httpcore.ByteStream(b""),
246 extensions={},
247 )
248 assert str(excinfo.value) == "Server disconnected without sending a response."
249
250
251 @pytest.mark.trio
252 async def test_request_with_missing_host_header() -> None:
253 backend = MockBackend(http_buffer=[])
254
255 async with httpcore.AsyncConnectionPool(backend=backend) as http:
256 with pytest.raises(httpcore.LocalProtocolError) as excinfo:
257 await http.handle_async_request(
258 method=b"GET",
259 url=(b"http", b"example.org", None, b"/"),
260 headers=[],
261 stream=httpcore.ByteStream(b""),
262 extensions={},
263 )
264 assert str(excinfo.value) == "Missing mandatory Host: header"
265
266
267 @pytest.mark.trio
268 async def test_concurrent_get_requests() -> None:
269 backend = MockBackend(
270 http_buffer=[
271 b"HTTP/1.1 200 OK\r\n",
272 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
273 b"Server: Apache\r\n",
274 b"Content-Length: 13\r\n",
275 b"Content-Type: text/plain\r\n",
276 b"\r\n",
277 b"Hello, world.",
278 ]
279 )
280
281 async with httpcore.AsyncConnectionPool(backend=backend) as http:
282 # We're sending a request with a standard keep-alive connection, so
283 # it will remain in the pool once we've sent the request.
284 response_1 = await http.handle_async_request(
285 method=b"GET",
286 url=(b"http", b"example.org", None, b"/"),
287 headers=[(b"Host", b"example.org")],
288 stream=httpcore.ByteStream(b""),
289 extensions={},
290 )
291 status_code, headers, stream_1, extensions = response_1
292 assert await http.get_connection_info() == {
293 "http://example.org": ["HTTP/1.1, ACTIVE"]
294 }
295
296 response_2 = await http.handle_async_request(
297 method=b"GET",
298 url=(b"http", b"example.org", None, b"/"),
299 headers=[(b"Host", b"example.org")],
300 stream=httpcore.ByteStream(b""),
301 extensions={},
302 )
303 status_code, headers, stream_2, extensions = response_2
304 assert await http.get_connection_info() == {
305 "http://example.org": ["HTTP/1.1, ACTIVE", "HTTP/1.1, ACTIVE"]
306 }
307
308 await stream_1.aread()
309 assert await http.get_connection_info() == {
310 "http://example.org": ["HTTP/1.1, ACTIVE", "HTTP/1.1, IDLE"]
311 }
312
313 await stream_2.aread()
314 assert await http.get_connection_info() == {
315 "http://example.org": ["HTTP/1.1, IDLE", "HTTP/1.1, IDLE"]
316 }
+0
-249
tests/async_tests/test_http2.py less more
0 import collections
1
2 import h2.config
3 import h2.connection
4 import pytest
5
6 import httpcore
7 from httpcore._backends.auto import (
8 AsyncBackend,
9 AsyncLock,
10 AsyncSemaphore,
11 AsyncSocketStream,
12 )
13
14
15 class MockStream(AsyncSocketStream):
16 def __init__(self, http_buffer, disconnect):
17 self.read_buffer = collections.deque(http_buffer)
18 self.disconnect = disconnect
19
20 def get_http_version(self) -> str:
21 return "HTTP/2"
22
23 async def write(self, data, timeout):
24 pass
25
26 async def read(self, n, timeout):
27 return self.read_buffer.popleft()
28
29 async def aclose(self):
30 pass
31
32 def is_readable(self):
33 return self.disconnect
34
35
36 class MockLock(AsyncLock):
37 async def release(self):
38 pass
39
40 async def acquire(self):
41 pass
42
43
44 class MockSemaphore(AsyncSemaphore):
45 def __init__(self):
46 pass
47
48 async def acquire(self, timeout=None):
49 pass
50
51 async def release(self):
52 pass
53
54
55 class MockBackend(AsyncBackend):
56 def __init__(self, http_buffer, disconnect=False):
57 self.http_buffer = http_buffer
58 self.disconnect = disconnect
59
60 async def open_tcp_stream(
61 self, hostname, port, ssl_context, timeout, *, local_address
62 ):
63 return MockStream(self.http_buffer, self.disconnect)
64
65 def create_lock(self):
66 return MockLock()
67
68 def create_semaphore(self, max_value, exc_class):
69 return MockSemaphore()
70
71
72 class HTTP2BytesGenerator:
73 def __init__(self):
74 self.client_config = h2.config.H2Configuration(client_side=True)
75 self.client_conn = h2.connection.H2Connection(config=self.client_config)
76 self.server_config = h2.config.H2Configuration(client_side=False)
77 self.server_conn = h2.connection.H2Connection(config=self.server_config)
78 self.initialized = False
79
80 def get_server_bytes(
81 self, request_headers, request_data, response_headers, response_data
82 ):
83 if not self.initialized:
84 self.client_conn.initiate_connection()
85 self.server_conn.initiate_connection()
86 self.initialized = True
87
88 # Feed the request events to the client-side state machine
89 client_stream_id = self.client_conn.get_next_available_stream_id()
90 self.client_conn.send_headers(client_stream_id, headers=request_headers)
91 self.client_conn.send_data(client_stream_id, data=request_data, end_stream=True)
92
93 # Determine the bytes sent out by the client side, and feed them
94 # into the server-side state machine to get it into the correct state.
95 client_bytes = self.client_conn.data_to_send()
96 events = self.server_conn.receive_data(client_bytes)
97 server_stream_id = [
98 event.stream_id
99 for event in events
100 if isinstance(event, h2.events.RequestReceived)
101 ][0]
102
103 # Feed the response events to the server-side state machine
104 self.server_conn.send_headers(server_stream_id, headers=response_headers)
105 self.server_conn.send_data(
106 server_stream_id, data=response_data, end_stream=True
107 )
108
109 return self.server_conn.data_to_send()
110
111
112 @pytest.mark.trio
113 async def test_get_request() -> None:
114 bytes_generator = HTTP2BytesGenerator()
115 http_buffer = [
116 bytes_generator.get_server_bytes(
117 request_headers=[
118 (b":method", b"GET"),
119 (b":authority", b"www.example.com"),
120 (b":scheme", b"https"),
121 (b":path", "/"),
122 ],
123 request_data=b"",
124 response_headers=[
125 (b":status", b"200"),
126 (b"date", b"Sat, 06 Oct 2049 12:34:56 GMT"),
127 (b"server", b"Apache"),
128 (b"content-length", b"13"),
129 (b"content-type", b"text/plain"),
130 ],
131 response_data=b"Hello, world.",
132 ),
133 bytes_generator.get_server_bytes(
134 request_headers=[
135 (b":method", b"GET"),
136 (b":authority", b"www.example.com"),
137 (b":scheme", b"https"),
138 (b":path", "/"),
139 ],
140 request_data=b"",
141 response_headers=[
142 (b":status", b"200"),
143 (b"date", b"Sat, 06 Oct 2049 12:34:56 GMT"),
144 (b"server", b"Apache"),
145 (b"content-length", b"13"),
146 (b"content-type", b"text/plain"),
147 ],
148 response_data=b"Hello, world.",
149 ),
150 ]
151 backend = MockBackend(http_buffer=http_buffer)
152
153 async with httpcore.AsyncConnectionPool(http2=True, backend=backend) as http:
154 # We're sending a request with a standard keep-alive connection, so
155 # it will remain in the pool once we've sent the request.
156 response = await http.handle_async_request(
157 method=b"GET",
158 url=(b"https", b"example.org", None, b"/"),
159 headers=[(b"Host", b"example.org")],
160 stream=httpcore.ByteStream(b""),
161 extensions={},
162 )
163 status_code, headers, stream, extensions = response
164 body = await stream.aread()
165 assert status_code == 200
166 assert body == b"Hello, world."
167 assert await http.get_connection_info() == {
168 "https://example.org": ["HTTP/2, IDLE, 0 streams"]
169 }
170
171 # The second HTTP request will go out over the same connection.
172 response = await http.handle_async_request(
173 method=b"GET",
174 url=(b"https", b"example.org", None, b"/"),
175 headers=[(b"Host", b"example.org")],
176 stream=httpcore.ByteStream(b""),
177 extensions={},
178 )
179 status_code, headers, stream, extensions = response
180 body = await stream.aread()
181 assert status_code == 200
182 assert body == b"Hello, world."
183 assert await http.get_connection_info() == {
184 "https://example.org": ["HTTP/2, IDLE, 0 streams"]
185 }
186
187
188 @pytest.mark.trio
189 async def test_post_request() -> None:
190 bytes_generator = HTTP2BytesGenerator()
191 bytes_to_send = bytes_generator.get_server_bytes(
192 request_headers=[
193 (b":method", b"POST"),
194 (b":authority", b"www.example.com"),
195 (b":scheme", b"https"),
196 (b":path", "/"),
197 (b"content-length", b"13"),
198 ],
199 request_data=b"Hello, world.",
200 response_headers=[
201 (b":status", b"200"),
202 (b"date", b"Sat, 06 Oct 2049 12:34:56 GMT"),
203 (b"server", b"Apache"),
204 (b"content-length", b"13"),
205 (b"content-type", b"text/plain"),
206 ],
207 response_data=b"Hello, world.",
208 )
209 backend = MockBackend(http_buffer=[bytes_to_send])
210
211 async with httpcore.AsyncConnectionPool(http2=True, backend=backend) as http:
212 # We're sending a request with a standard keep-alive connection, so
213 # it will remain in the pool once we've sent the request.
214 response = await http.handle_async_request(
215 method=b"POST",
216 url=(b"https", b"example.org", None, b"/"),
217 headers=[(b"Host", b"example.org"), (b"Content-length", b"13")],
218 stream=httpcore.ByteStream(b"Hello, world."),
219 extensions={},
220 )
221 status_code, headers, stream, extensions = response
222 body = await stream.aread()
223 assert status_code == 200
224 assert body == b"Hello, world."
225 assert await http.get_connection_info() == {
226 "https://example.org": ["HTTP/2, IDLE, 0 streams"]
227 }
228
229
230 @pytest.mark.trio
231 async def test_request_with_missing_host_header() -> None:
232 backend = MockBackend(http_buffer=[])
233
234 server_config = h2.config.H2Configuration(client_side=False)
235 server_conn = h2.connection.H2Connection(config=server_config)
236 server_conn.initiate_connection()
237 backend = MockBackend(http_buffer=[server_conn.data_to_send()])
238
239 async with httpcore.AsyncConnectionPool(backend=backend) as http:
240 with pytest.raises(httpcore.LocalProtocolError) as excinfo:
241 await http.handle_async_request(
242 method=b"GET",
243 url=(b"http", b"example.org", None, b"/"),
244 headers=[],
245 stream=httpcore.ByteStream(b""),
246 extensions={},
247 )
248 assert str(excinfo.value) == "Missing mandatory Host: header"
+0
-605
tests/async_tests/test_interfaces.py less more
0 import platform
1 from typing import Tuple
2
3 import pytest
4
5 import httpcore
6 from httpcore._types import URL
7 from tests.conftest import HTTPS_SERVER_URL
8 from tests.utils import Server, lookup_async_backend
9
10
11 @pytest.fixture(params=["auto", "anyio"])
12 def backend(request):
13 return request.param
14
15
16 async def read_body(stream: httpcore.AsyncByteStream) -> bytes:
17 try:
18 body = []
19 async for chunk in stream:
20 body.append(chunk)
21 return b"".join(body)
22 finally:
23 await stream.aclose()
24
25
26 def test_must_configure_either_http1_or_http2() -> None:
27 with pytest.raises(ValueError):
28 httpcore.AsyncConnectionPool(http1=False, http2=False)
29
30
31 @pytest.mark.anyio
32 async def test_http_request(backend: str, server: Server) -> None:
33 async with httpcore.AsyncConnectionPool(backend=backend) as http:
34 status_code, headers, stream, extensions = await http.handle_async_request(
35 method=b"GET",
36 url=(b"http", *server.netloc, b"/"),
37 headers=[server.host_header],
38 stream=httpcore.ByteStream(b""),
39 extensions={},
40 )
41 await read_body(stream)
42
43 assert status_code == 200
44 reason_phrase = b"OK" if server.sends_reason else b""
45 assert extensions == {
46 "http_version": b"HTTP/1.1",
47 "reason_phrase": reason_phrase,
48 }
49 origin = (b"http", *server.netloc)
50 assert len(http._connections[origin]) == 1 # type: ignore
51
52
53 @pytest.mark.anyio
54 async def test_https_request(backend: str, https_server: Server) -> None:
55 async with httpcore.AsyncConnectionPool(backend=backend) as http:
56 status_code, headers, stream, extensions = await http.handle_async_request(
57 method=b"GET",
58 url=(b"https", *https_server.netloc, b"/"),
59 headers=[https_server.host_header],
60 stream=httpcore.ByteStream(b""),
61 extensions={},
62 )
63 await read_body(stream)
64
65 assert status_code == 200
66 reason_phrase = b"OK" if https_server.sends_reason else b""
67 assert extensions == {
68 "http_version": b"HTTP/1.1",
69 "reason_phrase": reason_phrase,
70 }
71 origin = (b"https", *https_server.netloc)
72 assert len(http._connections[origin]) == 1 # type: ignore
73
74
75 @pytest.mark.anyio
76 @pytest.mark.parametrize(
77 "url", [(b"ftp", b"example.org", 443, b"/"), (b"", b"coolsite.org", 443, b"/")]
78 )
79 async def test_request_unsupported_protocol(
80 backend: str, url: Tuple[bytes, bytes, int, bytes]
81 ) -> None:
82 async with httpcore.AsyncConnectionPool(backend=backend) as http:
83 with pytest.raises(httpcore.UnsupportedProtocol):
84 await http.handle_async_request(
85 method=b"GET",
86 url=url,
87 headers=[(b"host", b"example.org")],
88 stream=httpcore.ByteStream(b""),
89 extensions={},
90 )
91
92
93 @pytest.mark.anyio
94 async def test_http2_request(backend: str, https_server: Server) -> None:
95 async with httpcore.AsyncConnectionPool(backend=backend, http2=True) as http:
96 status_code, headers, stream, extensions = await http.handle_async_request(
97 method=b"GET",
98 url=(b"https", *https_server.netloc, b"/"),
99 headers=[https_server.host_header],
100 stream=httpcore.ByteStream(b""),
101 extensions={},
102 )
103 await read_body(stream)
104
105 assert status_code == 200
106 assert extensions == {"http_version": b"HTTP/2"}
107 origin = (b"https", *https_server.netloc)
108 assert len(http._connections[origin]) == 1 # type: ignore
109
110
111 @pytest.mark.anyio
112 async def test_closing_http_request(backend: str, server: Server) -> None:
113 async with httpcore.AsyncConnectionPool(backend=backend) as http:
114 status_code, headers, stream, extensions = await http.handle_async_request(
115 method=b"GET",
116 url=(b"http", *server.netloc, b"/"),
117 headers=[server.host_header, (b"connection", b"close")],
118 stream=httpcore.ByteStream(b""),
119 extensions={},
120 )
121 await read_body(stream)
122
123 assert status_code == 200
124 reason_phrase = b"OK" if server.sends_reason else b""
125 assert extensions == {
126 "http_version": b"HTTP/1.1",
127 "reason_phrase": reason_phrase,
128 }
129 origin = (b"http", *server.netloc)
130 assert origin not in http._connections # type: ignore
131
132
133 @pytest.mark.anyio
134 async def test_http_request_reuse_connection(backend: str, server: Server) -> None:
135 async with httpcore.AsyncConnectionPool(backend=backend) as http:
136 status_code, headers, stream, extensions = await http.handle_async_request(
137 method=b"GET",
138 url=(b"http", *server.netloc, b"/"),
139 headers=[server.host_header],
140 stream=httpcore.ByteStream(b""),
141 extensions={},
142 )
143 await read_body(stream)
144
145 assert status_code == 200
146 reason_phrase = b"OK" if server.sends_reason else b""
147 assert extensions == {
148 "http_version": b"HTTP/1.1",
149 "reason_phrase": reason_phrase,
150 }
151 origin = (b"http", *server.netloc)
152 assert len(http._connections[origin]) == 1 # type: ignore
153
154 status_code, headers, stream, extensions = await http.handle_async_request(
155 method=b"GET",
156 url=(b"http", *server.netloc, b"/"),
157 headers=[server.host_header],
158 stream=httpcore.ByteStream(b""),
159 extensions={},
160 )
161 await read_body(stream)
162
163 assert status_code == 200
164 reason_phrase = b"OK" if server.sends_reason else b""
165 assert extensions == {
166 "http_version": b"HTTP/1.1",
167 "reason_phrase": reason_phrase,
168 }
169 origin = (b"http", *server.netloc)
170 assert len(http._connections[origin]) == 1 # type: ignore
171
172
173 @pytest.mark.anyio
174 async def test_https_request_reuse_connection(
175 backend: str, https_server: Server
176 ) -> None:
177 async with httpcore.AsyncConnectionPool(backend=backend) as http:
178 status_code, headers, stream, extensions = await http.handle_async_request(
179 method=b"GET",
180 url=(b"https", *https_server.netloc, b"/"),
181 headers=[https_server.host_header],
182 stream=httpcore.ByteStream(b""),
183 extensions={},
184 )
185 await read_body(stream)
186
187 assert status_code == 200
188 reason_phrase = b"OK" if https_server.sends_reason else b""
189 assert extensions == {
190 "http_version": b"HTTP/1.1",
191 "reason_phrase": reason_phrase,
192 }
193 origin = (b"https", *https_server.netloc)
194 assert len(http._connections[origin]) == 1 # type: ignore
195
196 status_code, headers, stream, extensions = await http.handle_async_request(
197 method=b"GET",
198 url=(b"https", *https_server.netloc, b"/"),
199 headers=[https_server.host_header],
200 stream=httpcore.ByteStream(b""),
201 extensions={},
202 )
203 await read_body(stream)
204
205 assert status_code == 200
206 reason_phrase = b"OK" if https_server.sends_reason else b""
207 assert extensions == {
208 "http_version": b"HTTP/1.1",
209 "reason_phrase": reason_phrase,
210 }
211 origin = (b"https", *https_server.netloc)
212 assert len(http._connections[origin]) == 1 # type: ignore
213
214
215 @pytest.mark.anyio
216 async def test_http_request_cannot_reuse_dropped_connection(
217 backend: str, server: Server
218 ) -> None:
219 async with httpcore.AsyncConnectionPool(backend=backend) as http:
220 status_code, headers, stream, extensions = await http.handle_async_request(
221 method=b"GET",
222 url=(b"http", *server.netloc, b"/"),
223 headers=[server.host_header],
224 stream=httpcore.ByteStream(b""),
225 extensions={},
226 )
227 await read_body(stream)
228
229 assert status_code == 200
230 reason_phrase = b"OK" if server.sends_reason else b""
231 assert extensions == {
232 "http_version": b"HTTP/1.1",
233 "reason_phrase": reason_phrase,
234 }
235 origin = (b"http", *server.netloc)
236 assert len(http._connections[origin]) == 1 # type: ignore
237
238 # Mock the connection as having been dropped (see the readability sketch after this test).
239 connection = list(http._connections[origin])[0] # type: ignore
240 connection.is_socket_readable = lambda: True # type: ignore
241
242 status_code, headers, stream, extensions = await http.handle_async_request(
243 method=b"GET",
244 url=(b"http", *server.netloc, b"/"),
245 headers=[server.host_header],
246 stream=httpcore.ByteStream(b""),
247 extensions={},
248 )
249 await read_body(stream)
250
251 assert status_code == 200
252 reason_phrase = b"OK" if server.sends_reason else b""
253 assert extensions == {
254 "http_version": b"HTTP/1.1",
255 "reason_phrase": reason_phrase,
256 }
257 origin = (b"http", *server.netloc)
258 assert len(http._connections[origin]) == 1 # type: ignore
259
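The dropped-connection test above hinges on the socket readability probe that `connection.is_socket_readable` stands in for: an idle keep-alive socket should have nothing to read, so if it polls readable, either stray bytes or an EOF is pending and the pool must discard the connection rather than reuse it. A minimal sketch of such a probe, assuming a plain blocking socket (the real helper lives in `httpcore._utils` and handles more edge cases):

```python
import select
import socket

def is_socket_readable(sock: socket.socket) -> bool:
    # A zero-timeout select() returns immediately, reporting whether a
    # read would currently succeed. For an idle pooled connection,
    # "readable" means unsolicited data or a pending EOF, i.e. the
    # server has dropped the connection.
    readable, _, _ = select.select([sock], [], [], 0)
    return bool(readable)
```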
260
261 @pytest.mark.parametrize("proxy_mode", ["DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY"])
262 @pytest.mark.anyio
263 async def test_http_proxy(
264 proxy_server: URL, proxy_mode: str, backend: str, server: Server
265 ) -> None:
266 max_connections = 1
267 async with httpcore.AsyncHTTPProxy(
268 proxy_server,
269 proxy_mode=proxy_mode,
270 max_connections=max_connections,
271 backend=backend,
272 ) as http:
273 status_code, headers, stream, extensions = await http.handle_async_request(
274 method=b"GET",
275 url=(b"http", *server.netloc, b"/"),
276 headers=[server.host_header],
277 stream=httpcore.ByteStream(b""),
278 extensions={},
279 )
280 await read_body(stream)
281
282 assert status_code == 200
283 reason_phrase = b"OK" if server.sends_reason else b""
284 assert extensions == {
285 "http_version": b"HTTP/1.1",
286 "reason_phrase": reason_phrase,
287 }
288
289
290 @pytest.mark.parametrize("proxy_mode", ["DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY"])
291 @pytest.mark.parametrize("protocol,port", [(b"http", 80), (b"https", 443)])
292 @pytest.mark.trio
293 # Filter out ssl module deprecation warnings and asyncio module resource warning,
294 # convert other warnings to errors (see the illustration after this test).
295 @pytest.mark.filterwarnings("ignore:.*(SSLContext|PROTOCOL_TLS):DeprecationWarning")
296 @pytest.mark.filterwarnings("ignore::ResourceWarning:asyncio")
297 @pytest.mark.filterwarnings("error")
298 async def test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool(
299 proxy_server: URL,
300 server: Server,
301 proxy_mode: str,
302 protocol: bytes,
303 port: int,
304 ):
305 async with httpcore.AsyncHTTPProxy(proxy_server, proxy_mode=proxy_mode) as http:
306 for _ in range(100):
307 try:
308 _ = await http.handle_async_request(
309 method=b"GET",
310 url=(protocol, b"blockedhost.example.com", port, b"/"),
311 headers=[(b"host", b"blockedhost.example.com")],
312 stream=httpcore.ByteStream(b""),
313 extensions={},
314 )
315 except (httpcore.ProxyError, httpcore.RemoteProtocolError):
316 pass
317
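The `filterwarnings("error")` marker above is the actual leak detector: a socket that is garbage-collected without being closed is reported through a `ResourceWarning`, and promoting unfiltered warnings to errors turns that report into a test failure. A standalone illustration of the promotion mechanism (the warning text here is made up):

```python
import warnings

# Promote ResourceWarning to an error, as @pytest.mark.filterwarnings("error")
# does for everything the earlier "ignore" filters don't swallow.
warnings.simplefilter("error", ResourceWarning)

try:
    warnings.warn("unclosed socket (illustrative)", ResourceWarning)
except ResourceWarning as exc:
    print(f"this would fail the test: {exc}")
```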
318
319 @pytest.mark.anyio
320 async def test_http_request_local_address(backend: str, server: Server) -> None:
321 if backend == "auto" and lookup_async_backend() == "trio":
322 pytest.skip("The trio backend does not support local_address")
323
324 async with httpcore.AsyncConnectionPool(
325 backend=backend, local_address="0.0.0.0"
326 ) as http:
327 status_code, headers, stream, extensions = await http.handle_async_request(
328 method=b"GET",
329 url=(b"http", *server.netloc, b"/"),
330 headers=[server.host_header],
331 stream=httpcore.ByteStream(b""),
332 extensions={},
333 )
334 await read_body(stream)
335
336 assert status_code == 200
337 reason_phrase = b"OK" if server.sends_reason else b""
338 assert extensions == {
339 "http_version": b"HTTP/1.1",
340 "reason_phrase": reason_phrase,
341 }
342 origin = (b"http", *server.netloc)
343 assert len(http._connections[origin]) == 1 # type: ignore
344
345
346 # mitmproxy does not support forwarding HTTPS requests
347 @pytest.mark.parametrize("proxy_mode", ["DEFAULT", "TUNNEL_ONLY"])
348 @pytest.mark.parametrize("http2", [False, True])
349 @pytest.mark.anyio
350 async def test_proxy_https_requests(
351 proxy_server: URL,
352 proxy_mode: str,
353 http2: bool,
354 https_server: Server,
355 ) -> None:
356 max_connections = 1
357 async with httpcore.AsyncHTTPProxy(
358 proxy_server,
359 proxy_mode=proxy_mode,
360 max_connections=max_connections,
361 http2=http2,
362 ) as http:
363 status_code, headers, stream, extensions = await http.handle_async_request(
364 method=b"GET",
365 url=(b"https", *https_server.netloc, b"/"),
366 headers=[https_server.host_header],
367 stream=httpcore.ByteStream(b""),
368 extensions={},
369 )
370 _ = await read_body(stream)
371
372 assert status_code == 200
373 assert extensions["http_version"] == (b"HTTP/2" if http2 else b"HTTP/1.1")
374 assert extensions.get("reason_phrase", b"") == (b"" if http2 else b"OK")
375
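The parentheses in the two assertions above matter. Python's conditional expression binds more loosely than `==`, so the unparenthesized form `assert a == b if cond else c` parses as `assert (a == b) if cond else c`; when `cond` is false it asserts the bare constant `c`, which, being truthy, can never fail. A quick demonstration:

```python
http2 = False
version = b"HTTP/1.1"

# Unparenthesized: with http2=False this asserts the truthy literal
# b"HTTP/1.1", so it passes regardless of what `version` holds.
assert version == b"HTTP/2" if http2 else b"HTTP/1.1"

# Parenthesized: compares `version` against the intended expected value.
assert version == (b"HTTP/2" if http2 else b"HTTP/1.1")
```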
376
377 @pytest.mark.parametrize(
378 "http2,keepalive_expiry,expected_during_active,expected_during_idle",
379 [
380 (
381 False,
382 60.0,
383 {HTTPS_SERVER_URL: ["HTTP/1.1, ACTIVE", "HTTP/1.1, ACTIVE"]},
384 {HTTPS_SERVER_URL: ["HTTP/1.1, IDLE", "HTTP/1.1, IDLE"]},
385 ),
386 (
387 True,
388 60.0,
389 {HTTPS_SERVER_URL: ["HTTP/2, ACTIVE, 2 streams"]},
390 {HTTPS_SERVER_URL: ["HTTP/2, IDLE, 0 streams"]},
391 ),
392 (
393 False,
394 0.0,
395 {HTTPS_SERVER_URL: ["HTTP/1.1, ACTIVE", "HTTP/1.1, ACTIVE"]},
396 {},
397 ),
398 (
399 True,
400 0.0,
401 {HTTPS_SERVER_URL: ["HTTP/2, ACTIVE, 2 streams"]},
402 {},
403 ),
404 ],
405 )
406 @pytest.mark.anyio
407 async def test_connection_pool_get_connection_info(
408 http2: bool,
409 keepalive_expiry: float,
410 expected_during_active: dict,
411 expected_during_idle: dict,
412 backend: str,
413 https_server: Server,
414 ) -> None:
415 async with httpcore.AsyncConnectionPool(
416 http2=http2, keepalive_expiry=keepalive_expiry, backend=backend
417 ) as http:
418 _, _, stream_1, _ = await http.handle_async_request(
419 method=b"GET",
420 url=(b"https", *https_server.netloc, b"/"),
421 headers=[https_server.host_header],
422 stream=httpcore.ByteStream(b""),
423 extensions={},
424 )
425 _, _, stream_2, _ = await http.handle_async_request(
426 method=b"GET",
427 url=(b"https", *https_server.netloc, b"/"),
428 headers=[https_server.host_header],
429 stream=httpcore.ByteStream(b""),
430 extensions={},
431 )
432
433 try:
434 stats = await http.get_connection_info()
435 assert stats == expected_during_active
436 finally:
437 await read_body(stream_1)
438 await read_body(stream_2)
439
440 stats = await http.get_connection_info()
441 assert stats == expected_during_idle
442
443 stats = await http.get_connection_info()
444 assert stats == {}
445
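The `keepalive_expiry=0.0` rows in the parametrize table above expect an empty pool once both responses have been read: the expiry countdown starts the moment a connection goes idle, so an expiry of zero evicts it immediately. A sketch of that rule, using assumed names rather than httpcore's internal attributes:

```python
import time

def should_evict(idle_since: float, keepalive_expiry: float) -> bool:
    # Evict a connection once it has been idle for at least
    # `keepalive_expiry` seconds; with 0.0 that is immediate, which is
    # why the "during idle" expectation for those rows is {}.
    return time.monotonic() - idle_since >= keepalive_expiry

# A connection that has just gone idle under keepalive_expiry=0.0:
print(should_evict(idle_since=time.monotonic(), keepalive_expiry=0.0))  # True
```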
446
447 @pytest.mark.skipif(
448 platform.system() not in ("Linux", "Darwin"),
449 reason="Unix Domain Sockets only exist on Unix",
450 )
451 @pytest.mark.anyio
452 async def test_http_request_unix_domain_socket(
453 uds_server: Server, backend: str
454 ) -> None:
455 uds = uds_server.uds
456 async with httpcore.AsyncConnectionPool(uds=uds, backend=backend) as http:
457 status_code, headers, stream, extensions = await http.handle_async_request(
458 method=b"GET",
459 url=(b"http", b"localhost", None, b"/"),
460 headers=[(b"host", b"localhost")],
461 stream=httpcore.ByteStream(b""),
462 extensions={},
463 )
464 assert status_code == 200
465 reason_phrase = b"OK" if uds_server.sends_reason else b""
466 assert extensions == {
467 "http_version": b"HTTP/1.1",
468 "reason_phrase": reason_phrase,
469 }
470 body = await read_body(stream)
471 assert body == b"Hello, world!"
472
473
474 @pytest.mark.parametrize("max_keepalive", [1, 3, 5])
475 @pytest.mark.parametrize("connections_number", [4])
476 @pytest.mark.anyio
477 async def test_max_keepalive_connections_handled_correctly(
478 max_keepalive: int, connections_number: int, backend: str, server: Server
479 ) -> None:
480 async with httpcore.AsyncConnectionPool(
481 max_keepalive_connections=max_keepalive, keepalive_expiry=60, backend=backend
482 ) as http:
483 connections_streams = []
484 for _ in range(connections_number):
485 _, _, stream, _ = await http.handle_async_request(
486 method=b"GET",
487 url=(b"http", *server.netloc, b"/"),
488 headers=[server.host_header],
489 stream=httpcore.ByteStream(b""),
490 extensions={},
491 )
492 connections_streams.append(stream)
493
494 try:
495 for i in range(len(connections_streams)):
496 await read_body(connections_streams[i])
497 finally:
498 stats = await http.get_connection_info()
499
500 connections_in_pool = next(iter(stats.values()))
501 assert len(connections_in_pool) == min(connections_number, max_keepalive)
502
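The `min(connections_number, max_keepalive)` assertion above captures the rule under test: each concurrent request gets its own connection, but as responses are read back only up to `max_keepalive_connections` of them are kept idle, and the surplus is closed on release. A sketch of that release-time decision (assumed names, not httpcore internals):

```python
from typing import List

class Connection:
    def close(self) -> None:
        print("closing surplus connection")

def release(idle: List[Connection], conn: Connection, max_keepalive: int) -> None:
    # Keep the connection around for reuse only while the idle set is
    # below the configured cap; otherwise close it straight away.
    if len(idle) < max_keepalive:
        idle.append(conn)
    else:
        conn.close()

# Four connections released into a pool capped at one keep-alive slot:
idle: List[Connection] = []
for _ in range(4):
    release(idle, Connection(), max_keepalive=1)
print(len(idle))  # 1 == min(4, 1)
```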
503
504 @pytest.mark.anyio
505 async def test_explicit_backend_name(server: Server) -> None:
506 async with httpcore.AsyncConnectionPool(backend=lookup_async_backend()) as http:
507 status_code, headers, stream, extensions = await http.handle_async_request(
508 method=b"GET",
509 url=(b"http", *server.netloc, b"/"),
510 headers=[server.host_header],
511 stream=httpcore.ByteStream(b""),
512 extensions={},
513 )
514 await read_body(stream)
515
516 assert status_code == 200
517 reason_phrase = b"OK" if server.sends_reason else b""
518 assert extensions == {
519 "http_version": b"HTTP/1.1",
520 "reason_phrase": reason_phrase,
521 }
522 origin = (b"http", *server.netloc)
523 assert len(http._connections[origin]) == 1 # type: ignore
524
525
526 @pytest.mark.anyio
527 @pytest.mark.usefixtures("too_many_open_files_minus_one")
528 @pytest.mark.skipif(platform.system() != "Linux", reason="Only a problem on Linux")
529 async def test_broken_socket_detection_many_open_files(
530 backend: str, server: Server
531 ) -> None:
532 """
533 Regression test for: https://github.com/encode/httpcore/issues/182
534 """
535 async with httpcore.AsyncConnectionPool(backend=backend) as http:
536 # * First attempt will be successful because it grabs the last file
537 # descriptor available below the limit that select() supports on the platform.
538 # * Second attempt would have failed without a fix, due to a "filedescriptor
539 # out of range in select()" exception.
540 for _ in range(2):
541 (
542 status_code,
543 response_headers,
544 stream,
545 extensions,
546 ) = await http.handle_async_request(
547 method=b"GET",
548 url=(b"http", *server.netloc, b"/"),
549 headers=[server.host_header],
550 stream=httpcore.ByteStream(b""),
551 extensions={},
552 )
553 await read_body(stream)
554
555 assert status_code == 200
556 reason_phrase = b"OK" if server.sends_reason else b""
557 assert extensions == {
558 "http_version": b"HTTP/1.1",
559 "reason_phrase": reason_phrase,
560 }
561 origin = (b"http", *server.netloc)
562 assert len(http._connections[origin]) == 1 # type: ignore
563
564
565 @pytest.mark.anyio
566 @pytest.mark.parametrize(
567 "url",
568 [
569 pytest.param((b"http", b"localhost", 12345, b"/"), id="connection-refused"),
570 pytest.param(
571 (b"http", b"doesnotexistatall.org", None, b"/"), id="dns-resolution-failed"
572 ),
573 ],
574 )
575 async def test_cannot_connect_tcp(backend: str, url) -> None:
576 """
577 A properly wrapped error is raised when connecting to the server fails.
578 """
579 async with httpcore.AsyncConnectionPool(backend=backend) as http:
580 with pytest.raises(httpcore.ConnectError):
581 await http.handle_async_request(
582 method=b"GET",
583 url=url,
584 headers=[],
585 stream=httpcore.ByteStream(b""),
586 extensions={},
587 )
588
589
590 @pytest.mark.anyio
591 async def test_cannot_connect_uds(backend: str) -> None:
592 """
593 A properly wrapped error is raised when connecting to the UDS server fails.
594 """
595 uds = "/tmp/doesnotexist.sock"
596 async with httpcore.AsyncConnectionPool(backend=backend, uds=uds) as http:
597 with pytest.raises(httpcore.ConnectError):
598 await http.handle_async_request(
599 method=b"GET",
600 url=(b"http", b"localhost", None, b"/"),
601 headers=[],
602 stream=httpcore.ByteStream(b""),
603 extensions={},
604 )
+0
-200
tests/async_tests/test_retries.py
0 import queue
1 import time
2 from typing import Any, List, Optional
3
4 import pytest
5
6 import httpcore
7 from httpcore._backends.auto import AsyncSocketStream, AutoBackend
8 from tests.utils import Server
9
10
11 class AsyncMockBackend(AutoBackend):
12 def __init__(self) -> None:
13 super().__init__()
14 self._exceptions: queue.Queue[Optional[Exception]] = queue.Queue()
15 self._timestamps: List[float] = []
16
17 def push(self, *exceptions: Optional[Exception]) -> None:
18 for exc in exceptions:
19 self._exceptions.put(exc)
20
21 def pop_open_tcp_stream_intervals(self) -> list:
22 intervals = [b - a for a, b in zip(self._timestamps, self._timestamps[1:])]
23 self._timestamps.clear()
24 return intervals
25
26 async def open_tcp_stream(self, *args: Any, **kwargs: Any) -> AsyncSocketStream:
27 self._timestamps.append(time.time())
28 exc = None if self._exceptions.empty() else self._exceptions.get_nowait()
29 if exc is not None:
30 raise exc
31 return await super().open_tcp_stream(*args, **kwargs)
32
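`pop_open_tcp_stream_intervals` above converts the absolute connect-attempt timestamps into the gaps between consecutive attempts via the pairwise-`zip` idiom, and the retry tests below assert against those gaps:

```python
timestamps = [0.0, 0.1, 0.6, 1.6]  # hypothetical connect-attempt times

# Zip the list with itself shifted by one element to pair up neighbours.
intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
print(intervals)  # [0.1, 0.5, 1.0]
```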
33
34 async def read_body(stream: httpcore.AsyncByteStream) -> bytes:
35 try:
36 return b"".join([chunk async for chunk in stream])
37 finally:
38 await stream.aclose()
39
40
41 @pytest.mark.anyio
42 async def test_no_retries(server: Server) -> None:
43 """
44 By default, connection failures are not retried on.
45 """
46 backend = AsyncMockBackend()
47
48 async with httpcore.AsyncConnectionPool(
49 max_keepalive_connections=0, backend=backend
50 ) as http:
51 response = await http.handle_async_request(
52 method=b"GET",
53 url=(b"http", *server.netloc, b"/"),
54 headers=[server.host_header],
55 stream=httpcore.ByteStream(b""),
56 extensions={},
57 )
58 status_code, _, stream, _ = response
59 assert status_code == 200
60 await read_body(stream)
61
62 backend.push(httpcore.ConnectTimeout(), httpcore.ConnectError())
63
64 with pytest.raises(httpcore.ConnectTimeout):
65 await http.handle_async_request(
66 method=b"GET",
67 url=(b"http", *server.netloc, b"/"),
68 headers=[server.host_header],
69 stream=httpcore.ByteStream(b""),
70 extensions={},
71 )
72
73 with pytest.raises(httpcore.ConnectError):
74 await http.handle_async_request(
75 method=b"GET",
76 url=(b"http", *server.netloc, b"/"),
77 headers=[server.host_header],
78 stream=httpcore.ByteStream(b""),
79 extensions={},
80 )
81
82
83 @pytest.mark.anyio
84 async def test_retries_enabled(server: Server) -> None:
85 """
86 When retries are enabled, connection failures are retried on with
87 a fixed exponential backoff (sketched after this test).
88 """
89 backend = AsyncMockBackend()
90 retries = 10 # Large enough to not run out of retries within this test.
91
92 async with httpcore.AsyncConnectionPool(
93 retries=retries, max_keepalive_connections=0, backend=backend
94 ) as http:
95 # Standard case, no failures.
96 response = await http.handle_async_request(
97 method=b"GET",
98 url=(b"http", *server.netloc, b"/"),
99 headers=[server.host_header],
100 stream=httpcore.ByteStream(b""),
101 extensions={},
102 )
103 assert backend.pop_open_tcp_stream_intervals() == []
104 status_code, _, stream, _ = response
105 assert status_code == 200
106 await read_body(stream)
107
108 # One failure, then success.
109 backend.push(httpcore.ConnectError(), None)
110 response = await http.handle_async_request(
111 method=b"GET",
112 url=(b"http", *server.netloc, b"/"),
113 headers=[server.host_header],
114 stream=httpcore.ByteStream(b""),
115 extensions={},
116 )
117 assert backend.pop_open_tcp_stream_intervals() == [
118 pytest.approx(0, abs=5e-3), # Retry immediately.
119 ]
120 status_code, _, stream, _ = response
121 assert status_code == 200
122 await read_body(stream)
123
124 # Three failures, then success.
125 backend.push(
126 httpcore.ConnectError(),
127 httpcore.ConnectTimeout(),
128 httpcore.ConnectTimeout(),
129 None,
130 )
131 response = await http.handle_async_request(
132 method=b"GET",
133 url=(b"http", *server.netloc, b"/"),
134 headers=[server.host_header],
135 stream=httpcore.ByteStream(b""),
136 extensions={},
137 )
138 assert backend.pop_open_tcp_stream_intervals() == [
139 pytest.approx(0, abs=5e-3), # Retry immediately.
140 pytest.approx(0.5, rel=0.1), # First backoff.
141 pytest.approx(1.0, rel=0.1), # Second (increased) backoff.
142 ]
143 status_code, _, stream, _ = response
144 assert status_code == 200
145 await read_body(stream)
146
147 # Non-connect exceptions are not retried on.
148 backend.push(httpcore.ReadTimeout(), httpcore.NetworkError())
149 with pytest.raises(httpcore.ReadTimeout):
150 await http.handle_async_request(
151 method=b"GET",
152 url=(b"http", *server.netloc, b"/"),
153 headers=[server.host_header],
154 stream=httpcore.ByteStream(b""),
155 extensions={},
156 )
157 with pytest.raises(httpcore.NetworkError):
158 await http.handle_async_request(
159 method=b"GET",
160 url=(b"http", *server.netloc, b"/"),
161 headers=[server.host_header],
162 stream=httpcore.ByteStream(b""),
163 extensions={},
164 )
165
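The intervals asserted above reflect the retry schedule these tests exercise: an immediate first retry, then delays doubling from a 0.5-second base. A sketch of that schedule as a generator (an illustration of the policy, not httpcore's actual implementation):

```python
import itertools
from typing import Iterator

def retry_backoff(factor: float = 0.5) -> Iterator[float]:
    # Immediate first retry, then 0.5, 1.0, 2.0, ... seconds, matching
    # the pytest.approx intervals asserted above.
    yield 0.0
    for n in itertools.count():
        yield factor * (2 ** n)

delays = retry_backoff()
print([next(delays) for _ in range(4)])  # [0.0, 0.5, 1.0, 2.0]
```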
166
167 @pytest.mark.anyio
168 async def test_retries_exceeded(server: Server) -> None:
169 """
170 When retries are enabled and connecting fails more than the configured number
171 of retries, connect exceptions are raised.
172 """
173 backend = AsyncMockBackend()
174 retries = 1
175
176 async with httpcore.AsyncConnectionPool(
177 retries=retries, max_keepalive_connections=0, backend=backend
178 ) as http:
179 response = await http.handle_async_request(
180 method=b"GET",
181 url=(b"http", *server.netloc, b"/"),
182 headers=[server.host_header],
183 stream=httpcore.ByteStream(b""),
184 extensions={},
185 )
186 status_code, _, stream, _ = response
187 assert status_code == 200
188 await read_body(stream)
189
190 # First failure is retried on, second one isn't.
191 backend.push(httpcore.ConnectError(), httpcore.ConnectTimeout())
192 with pytest.raises(httpcore.ConnectTimeout):
193 await http.handle_async_request(
194 method=b"GET",
195 url=(b"http", *server.netloc, b"/"),
196 headers=[server.host_header],
197 stream=httpcore.ByteStream(b""),
198 extensions={},
199 )
+0
-0
tests/backend_tests/__init__.py
(Empty file)
+0
-32
tests/backend_tests/test_asyncio.py
0 from unittest.mock import MagicMock, patch
1
2 import pytest
3
4 from httpcore._backends.asyncio import SocketStream
5
6
7 class MockSocket:
8 def fileno(self):
9 return 1
10
11
12 class TestSocketStream:
13 class TestIsReadable:
14 @pytest.mark.asyncio
15 async def test_returns_true_when_transport_has_no_socket(self):
16 stream_reader = MagicMock()
17 stream_reader._transport.get_extra_info.return_value = None
18 sock_stream = SocketStream(stream_reader, MagicMock())
19
20 assert sock_stream.is_readable()
21
22 @pytest.mark.asyncio
23 async def test_returns_true_when_socket_is_readable(self):
24 stream_reader = MagicMock()
25 stream_reader._transport.get_extra_info.return_value = MockSocket()
26 sock_stream = SocketStream(stream_reader, MagicMock())
27
28 with patch(
29 "httpcore._utils.is_socket_readable", MagicMock(return_value=True)
30 ):
31 assert sock_stream.is_readable()
0 """
1 Some of our tests require branching of flow control.
2
3 We'd like to have the same kind of test for both async and sync environments,
4 and so we have functionality here that replicates Trio's `open_nursery` API,
5 but in a plain old multi-threaded context.
6
7 We don't do any smarts around cancellations, or managing exceptions from
8 children, because we don't need that for our use-case.
9 """
10 import threading
11 from types import TracebackType
12 from typing import List, Type
13
14
15 class Nursery:
16 def __init__(self) -> None:
17 self._threads: List[threading.Thread] = []
18
19 def __enter__(self) -> "Nursery":
20 return self
21
22 def __exit__(
23 self,
24 exc_type: Type[BaseException] = None,
25 exc_value: BaseException = None,
26 traceback: TracebackType = None,
27 ) -> None:
28 for thread in self._threads:
29 thread.start()
30 for thread in self._threads:
31 thread.join()
32
33 def start_soon(self, func, *args):
34 thread = threading.Thread(target=func, args=args)
35 self._threads.append(thread)
36
37
38 def open_nursery() -> Nursery:
39 return Nursery()
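A short usage example for the helper above, with a hypothetical worker function. Note the deliberate quirk: threads are started, and then joined, only in `__exit__`, so all of the spawned work runs as the `with` block closes:

```python
def worker(name: str) -> None:
    print(f"running {name}")

with open_nursery() as nursery:
    nursery.start_soon(worker, "a")
    nursery.start_soon(worker, "b")
# Both workers have started and finished by this point.
```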
+0
-187
tests/conftest.py
0 import contextlib
1 import os
2 import threading
3 import time
4 import typing
5
6 import pytest
7 import trustme
8
9 from httpcore._types import URL
10
11 from .utils import HypercornServer, LiveServer, Server, http_proxy_server
12
13 try:
14 import hypercorn
15 except ImportError: # pragma: no cover # Python 3.6
16 hypercorn = None # type: ignore
17 SERVER_HOST = "example.org"
18 SERVER_HTTP_PORT = 80
19 SERVER_HTTPS_PORT = 443
20 HTTPS_SERVER_URL = "https://example.org"
21 else:
22 SERVER_HOST = "localhost"
23 SERVER_HTTP_PORT = 8002
24 SERVER_HTTPS_PORT = 8003
25 HTTPS_SERVER_URL = f"https://localhost:{SERVER_HTTPS_PORT}"
26
27
28 @pytest.fixture(scope="session")
29 def proxy_server() -> typing.Iterator[URL]:
30 proxy_host = "127.0.0.1"
31 proxy_port = 8080
32
33 with http_proxy_server(proxy_host, proxy_port) as proxy_url:
34 yield proxy_url
35
36
37 async def app(scope: dict, receive: typing.Callable, send: typing.Callable) -> None:
38 assert scope["type"] == "http"
39 await send(
40 {
41 "type": "http.response.start",
42 "status": 200,
43 "headers": [[b"content-type", b"text/plain"]],
44 }
45 )
46 await send({"type": "http.response.body", "body": b"Hello, world!"})
47
48
49 @pytest.fixture(scope="session")
50 def uds() -> typing.Iterator[str]:
51 uds = "test_server.sock"
52 try:
53 yield uds
54 finally:
55 os.remove(uds)
56
57
58 @pytest.fixture(scope="session")
59 def uds_server(uds: str) -> typing.Iterator[Server]:
60 if hypercorn is not None:
61 server = HypercornServer(app=app, bind=f"unix:{uds}")
62 with server.serve_in_thread():
63 yield server
64 else:
65 # On Python 3.6, use Uvicorn as a fallback.
66 import uvicorn
67
68 class UvicornServer(Server, uvicorn.Server):
69 sends_reason = True
70
71 @property
72 def uds(self) -> str:
73 uds = self.config.uds
74 assert uds is not None
75 return uds
76
77 def install_signal_handlers(self) -> None:
78 pass
79
80 @contextlib.contextmanager
81 def serve_in_thread(self) -> typing.Iterator[None]:
82 thread = threading.Thread(target=self.run)
83 thread.start()
84 try:
85 while not self.started:
86 time.sleep(1e-3)
87 yield
88 finally:
89 self.should_exit = True
90 thread.join()
91
92 config = uvicorn.Config(app=app, lifespan="off", loop="asyncio", uds=uds)
93 server = UvicornServer(config=config)
94 with server.serve_in_thread():
95 yield server
96
97
98 @pytest.fixture(scope="session")
99 def server() -> typing.Iterator[Server]: # pragma: no cover
100 server: Server # Please mypy.
101
102 if hypercorn is None:
103 server = LiveServer(host=SERVER_HOST, port=SERVER_HTTP_PORT)
104 yield server
105 return
106
107 server = HypercornServer(app=app, bind=f"{SERVER_HOST}:{SERVER_HTTP_PORT}")
108 with server.serve_in_thread():
109 yield server
110
111
112 @pytest.fixture(scope="session")
113 def cert_authority() -> trustme.CA:
114 return trustme.CA()
115
116
117 @pytest.fixture(scope="session")
118 def localhost_cert(cert_authority: trustme.CA) -> trustme.LeafCert:
119 return cert_authority.issue_cert("localhost")
120
121
122 @pytest.fixture(scope="session")
123 def localhost_cert_path(localhost_cert: trustme.LeafCert) -> typing.Iterator[str]:
124 with localhost_cert.private_key_and_cert_chain_pem.tempfile() as tmp:
125 yield tmp
126
127
128 @pytest.fixture(scope="session")
129 def localhost_cert_pem_file(localhost_cert: trustme.LeafCert) -> typing.Iterator[str]:
130 with localhost_cert.cert_chain_pems[0].tempfile() as tmp:
131 yield tmp
132
133
134 @pytest.fixture(scope="session")
135 def localhost_cert_private_key_file(
136 localhost_cert: trustme.LeafCert,
137 ) -> typing.Iterator[str]:
138 with localhost_cert.private_key_pem.tempfile() as tmp:
139 yield tmp
140
141
142 @pytest.fixture(scope="session")
143 def https_server(
144 localhost_cert_pem_file: str, localhost_cert_private_key_file: str
145 ) -> typing.Iterator[Server]: # pragma: no cover
146 server: Server # Please mypy.
147
148 if hypercorn is None:
149 server = LiveServer(host=SERVER_HOST, port=SERVER_HTTPS_PORT)
150 yield server
151 return
152
153 server = HypercornServer(
154 app=app,
155 bind=f"{SERVER_HOST}:{SERVER_HTTPS_PORT}",
156 certfile=localhost_cert_pem_file,
157 keyfile=localhost_cert_private_key_file,
158 )
159 with server.serve_in_thread():
160 yield server
161
162
163 @pytest.fixture(scope="function")
164 def too_many_open_files_minus_one() -> typing.Iterator[None]:
165 # Fixture for the regression test of https://github.com/encode/httpcore/issues/182
166 # Max number of descriptors chosen according to:
167 # https://man7.org/linux/man-pages/man2/select.2.html#top_of_page
168 # "To monitor file descriptors greater than 1023, use poll or epoll instead."
169 max_num_descriptors = 1023
170
171 files = []
172
173 while True:
174 f = open("/dev/null")
175 # Leave one file descriptor available for a transport to perform
176 # a successful request.
177 if f.fileno() > max_num_descriptors - 1:
178 f.close()
179 break
180 files.append(f)
181
182 try:
183 yield
184 finally:
185 for f in files:
186 f.close()
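The 1023 cap above follows from `select()`'s fixed `FD_SETSIZE` of 1024 on Linux: CPython rejects any descriptor at or beyond that limit before even issuing the system call, which is exactly the failure mode the fixture provokes. A quick illustration (behaviour assumed for CPython on Linux):

```python
import select

try:
    # File descriptor 1024 falls outside select()'s fd_set.
    select.select([1024], [], [], 0)
except ValueError as exc:
    print(exc)  # filedescriptor out of range in select()
```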
+0
-0
tests/sync_tests/__init__.py
(Empty file)
+0
-194
tests/sync_tests/test_connection_pool.py
0 from typing import Iterator, Tuple
1
2 import pytest
3
4 import httpcore
5 from httpcore._async.base import ConnectionState
6 from httpcore._types import URL, Headers
7
8
9 class MockConnection(object):
10 def __init__(self, http_version):
11 self.origin = (b"http", b"example.org", 80)
12 self.state = ConnectionState.PENDING
13 self.is_http11 = http_version == "HTTP/1.1"
14 self.is_http2 = http_version == "HTTP/2"
15 self.stream_count = 0
16
17 def handle_request(
18 self,
19 method: bytes,
20 url: URL,
21 headers: Headers = None,
22 stream: httpcore.SyncByteStream = None,
23 extensions: dict = None,
24 ) -> Tuple[int, Headers, httpcore.SyncByteStream, dict]:
25 self.state = ConnectionState.ACTIVE
26 self.stream_count += 1
27
28 def on_close():
29 self.stream_count -= 1
30 if self.stream_count == 0:
31 self.state = ConnectionState.IDLE
32
33 def iterator() -> Iterator[bytes]:
34 yield b""
35
36 stream = httpcore.IteratorByteStream(
37 iterator=iterator(), close_func=on_close
38 )
39
40 return 200, [], stream, {}
41
42 def close(self):
43 pass
44
45 def info(self) -> str:
46 return self.state.name
47
48 def is_available(self):
49 if self.is_http11:
50 return self.state == ConnectionState.IDLE
51 else:
52 return self.state != ConnectionState.CLOSED
53
54 def should_close(self):
55 return False
56
57 def is_idle(self):
58 return self.state == ConnectionState.IDLE
59
60 def is_closed(self):
61 return False
62
63
64 class ConnectionPool(httpcore.SyncConnectionPool):
65 def __init__(self, http_version: str):
66 super().__init__()
67 self.http_version = http_version
68 assert http_version in ("HTTP/1.1", "HTTP/2")
69
70 def _create_connection(self, **kwargs):
71 return MockConnection(self.http_version)
72
73
74 def read_body(stream: httpcore.SyncByteStream) -> bytes:
75 try:
76 body = []
77 for chunk in stream:
78 body.append(chunk)
79 return b"".join(body)
80 finally:
81 stream.close()
82
83
84
85 @pytest.mark.parametrize("http_version", ["HTTP/1.1", "HTTP/2"])
86 def test_sequential_requests(http_version) -> None:
87 with ConnectionPool(http_version=http_version) as http:
88 info = http.get_connection_info()
89 assert info == {}
90
91 response = http.handle_request(
92 method=b"GET",
93 url=(b"http", b"example.org", None, b"/"),
94 headers=[],
95 stream=httpcore.ByteStream(b""),
96 extensions={},
97 )
98 status_code, headers, stream, extensions = response
99 info = http.get_connection_info()
100 assert info == {"http://example.org": ["ACTIVE"]}
101
102 read_body(stream)
103 info = http.get_connection_info()
104 assert info == {"http://example.org": ["IDLE"]}
105
106 response = http.handle_request(
107 method=b"GET",
108 url=(b"http", b"example.org", None, b"/"),
109 headers=[],
110 stream=httpcore.ByteStream(b""),
111 extensions={},
112 )
113 status_code, headers, stream, extensions = response
114 info = http.get_connection_info()
115 assert info == {"http://example.org": ["ACTIVE"]}
116
117 read_body(stream)
118 info = http.get_connection_info()
119 assert info == {"http://example.org": ["IDLE"]}
120
121
122
123 def test_concurrent_requests_h11() -> None:
124 with ConnectionPool(http_version="HTTP/1.1") as http:
125 info = http.get_connection_info()
126 assert info == {}
127
128 response_1 = http.handle_request(
129 method=b"GET",
130 url=(b"http", b"example.org", None, b"/"),
131 headers=[],
132 stream=httpcore.ByteStream(b""),
133 extensions={},
134 )
135 status_code_1, headers_1, stream_1, ext_1 = response_1
136 info = http.get_connection_info()
137 assert info == {"http://example.org": ["ACTIVE"]}
138
139 response_2 = http.handle_request(
140 method=b"GET",
141 url=(b"http", b"example.org", None, b"/"),
142 headers=[],
143 stream=httpcore.ByteStream(b""),
144 extensions={},
145 )
146 status_code_2, headers_2, stream_2, ext_2 = response_2
147 info = http.get_connection_info()
148 assert info == {"http://example.org": ["ACTIVE", "ACTIVE"]}
149
150 read_body(stream_1)
151 info = http.get_connection_info()
152 assert info == {"http://example.org": ["ACTIVE", "IDLE"]}
153
154 read_body(stream_2)
155 info = http.get_connection_info()
156 assert info == {"http://example.org": ["IDLE", "IDLE"]}
157
158
159
160 def test_concurrent_requests_h2() -> None:
161 with ConnectionPool(http_version="HTTP/2") as http:
162 info = http.get_connection_info()
163 assert info == {}
164
165 response_1 = http.handle_request(
166 method=b"GET",
167 url=(b"http", b"example.org", None, b"/"),
168 headers=[],
169 stream=httpcore.ByteStream(b""),
170 extensions={},
171 )
172 status_code_1, headers_1, stream_1, ext_1 = response_1
173 info = http.get_connection_info()
174 assert info == {"http://example.org": ["ACTIVE"]}
175
176 response_2 = http.handle_request(
177 method=b"GET",
178 url=(b"http", b"example.org", None, b"/"),
179 headers=[],
180 stream=httpcore.ByteStream(b""),
181 extensions={},
182 )
183 status_code_2, headers_2, stream_2, ext_2 = response_2
184 info = http.get_connection_info()
185 assert info == {"http://example.org": ["ACTIVE"]}
186
187 read_body(stream_1)
188 info = http.get_connection_info()
189 assert info == {"http://example.org": ["ACTIVE"]}
190
191 read_body(stream_2)
192 info = http.get_connection_info()
193 assert info == {"http://example.org": ["IDLE"]}
+0
-317
tests/sync_tests/test_http11.py
0 import collections
1
2 import pytest
3
4 import httpcore
5 from httpcore._backends.sync import SyncBackend, SyncLock, SyncSocketStream
6
7
8 class MockStream(SyncSocketStream):
9 def __init__(self, http_buffer, disconnect):
10 self.read_buffer = collections.deque(http_buffer)
11 self.disconnect = disconnect
12
13 def get_http_version(self) -> str:
14 return "HTTP/1.1"
15
16 def write(self, data, timeout):
17 pass
18
19 def read(self, n, timeout):
20 return self.read_buffer.popleft()
21
22 def close(self):
23 pass
24
25 def is_readable(self):
26 return self.disconnect
27
28
29 class MockLock(SyncLock):
30 def release(self) -> None:
31 pass
32
33 def acquire(self) -> None:
34 pass
35
36
37 class MockBackend(SyncBackend):
38 def __init__(self, http_buffer, disconnect=False):
39 self.http_buffer = http_buffer
40 self.disconnect = disconnect
41
42 def open_tcp_stream(
43 self, hostname, port, ssl_context, timeout, *, local_address
44 ):
45 return MockStream(self.http_buffer, self.disconnect)
46
47 def create_lock(self):
48 return MockLock()
49
50
51
52 def test_get_request_with_connection_keepalive() -> None:
53 backend = MockBackend(
54 http_buffer=[
55 b"HTTP/1.1 200 OK\r\n",
56 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
57 b"Server: Apache\r\n",
58 b"Content-Length: 13\r\n",
59 b"Content-Type: text/plain\r\n",
60 b"\r\n",
61 b"Hello, world.",
62 b"HTTP/1.1 200 OK\r\n",
63 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
64 b"Server: Apache\r\n",
65 b"Content-Length: 13\r\n",
66 b"Content-Type: text/plain\r\n",
67 b"\r\n",
68 b"Hello, world.",
69 ]
70 )
71
72 with httpcore.SyncConnectionPool(backend=backend) as http:
73 # We're sending a request with a standard keep-alive connection, so
74 # it will remain in the pool once we've sent the request.
75 response = http.handle_request(
76 method=b"GET",
77 url=(b"http", b"example.org", None, b"/"),
78 headers=[(b"Host", b"example.org")],
79 stream=httpcore.ByteStream(b""),
80 extensions={},
81 )
82 status_code, headers, stream, extensions = response
83 body = stream.read()
84 assert status_code == 200
85 assert body == b"Hello, world."
86 assert http.get_connection_info() == {
87 "http://example.org": ["HTTP/1.1, IDLE"]
88 }
89
90 # This second request will go out over the same connection.
91 response = http.handle_request(
92 method=b"GET",
93 url=(b"http", b"example.org", None, b"/"),
94 headers=[(b"Host", b"example.org")],
95 stream=httpcore.ByteStream(b""),
96 extensions={},
97 )
98 status_code, headers, stream, extensions = response
99 body = stream.read()
100 assert status_code == 200
101 assert body == b"Hello, world."
102 assert http.get_connection_info() == {
103 "http://example.org": ["HTTP/1.1, IDLE"]
104 }
105
106
107
108 def test_get_request_with_connection_close_header() -> None:
109 backend = MockBackend(
110 http_buffer=[
111 b"HTTP/1.1 200 OK\r\n",
112 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
113 b"Server: Apache\r\n",
114 b"Content-Length: 13\r\n",
115 b"Content-Type: text/plain\r\n",
116 b"\r\n",
117 b"Hello, world.",
118 b"", # Terminate the connection.
119 ]
120 )
121
122 with httpcore.SyncConnectionPool(backend=backend) as http:
123 # We're sending a request with 'Connection: close', so the connection
124 # does not remain in the pool once we've sent the request.
125 response = http.handle_request(
126 method=b"GET",
127 url=(b"http", b"example.org", None, b"/"),
128 headers=[(b"Host", b"example.org"), (b"Connection", b"close")],
129 stream=httpcore.ByteStream(b""),
130 extensions={},
131 )
132 status_code, headers, stream, extensions = response
133 body = stream.read()
134 assert status_code == 200
135 assert body == b"Hello, world."
136 assert http.get_connection_info() == {}
137
138 # The second request will go out over a new connection.
139 response = http.handle_request(
140 method=b"GET",
141 url=(b"http", b"example.org", None, b"/"),
142 headers=[(b"Host", b"example.org"), (b"Connection", b"close")],
143 stream=httpcore.ByteStream(b""),
144 extensions={},
145 )
146 status_code, headers, stream, extensions = response
147 body = stream.read()
148 assert status_code == 200
149 assert body == b"Hello, world."
150 assert http.get_connection_info() == {}
151
152
153
154 def test_get_request_with_socket_disconnect_between_requests() -> None:
155 backend = MockBackend(
156 http_buffer=[
157 b"HTTP/1.1 200 OK\r\n",
158 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
159 b"Server: Apache\r\n",
160 b"Content-Length: 13\r\n",
161 b"Content-Type: text/plain\r\n",
162 b"\r\n",
163 b"Hello, world.",
164 ],
165 disconnect=True,
166 )
167
168 with httpcore.SyncConnectionPool(backend=backend) as http:
169 # Send an initial request. We're using a standard keep-alive
170 # connection, so the connection remains in the pool after completion.
171 response = http.handle_request(
172 method=b"GET",
173 url=(b"http", b"example.org", None, b"/"),
174 headers=[(b"Host", b"example.org")],
175 stream=httpcore.ByteStream(b""),
176 extensions={},
177 )
178 status_code, headers, stream, extensions = response
179 body = stream.read()
180 assert status_code == 200
181 assert body == b"Hello, world."
182 assert http.get_connection_info() == {
183 "http://example.org": ["HTTP/1.1, IDLE"]
184 }
185
186 # On sending this second request, at the point of pool re-acquiry the
187 # socket indicates that it has disconnected, and we'll send the request
188 # over a new connection.
189 response = http.handle_request(
190 method=b"GET",
191 url=(b"http", b"example.org", None, b"/"),
192 headers=[(b"Host", b"example.org")],
193 stream=httpcore.ByteStream(b""),
194 extensions={},
195 )
196 status_code, headers, stream, extensions = response
197 body = stream.read()
198 assert status_code == 200
199 assert body == b"Hello, world."
200 assert http.get_connection_info() == {
201 "http://example.org": ["HTTP/1.1, IDLE"]
202 }
203
204
205
206 def test_get_request_with_unclean_close_after_first_request() -> None:
207 backend = MockBackend(
208 http_buffer=[
209 b"HTTP/1.1 200 OK\r\n",
210 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
211 b"Server: Apache\r\n",
212 b"Content-Length: 13\r\n",
213 b"Content-Type: text/plain\r\n",
214 b"\r\n",
215 b"Hello, world.",
216 b"", # Terminate the connection.
217 ],
218 )
219
220 with httpcore.SyncConnectionPool(backend=backend) as http:
221 # Send an initial request. We're using a standard keep-alive
222 # connection, so the connection remains in the pool after completion.
223 response = http.handle_request(
224 method=b"GET",
225 url=(b"http", b"example.org", None, b"/"),
226 headers=[(b"Host", b"example.org")],
227 stream=httpcore.ByteStream(b""),
228 extensions={},
229 )
230 status_code, headers, stream, extensions = response
231 body = stream.read()
232 assert status_code == 200
233 assert body == b"Hello, world."
234 assert http.get_connection_info() == {
235 "http://example.org": ["HTTP/1.1, IDLE"]
236 }
237
238 # At this point we successfully write another request, but the socket
239 # read returns `b""`, indicating a premature close.
240 with pytest.raises(httpcore.RemoteProtocolError) as excinfo:
241 http.handle_request(
242 method=b"GET",
243 url=(b"http", b"example.org", None, b"/"),
244 headers=[(b"Host", b"example.org")],
245 stream=httpcore.ByteStream(b""),
246 extensions={},
247 )
248 assert str(excinfo.value) == "Server disconnected without sending a response."
249
250
251
252 def test_request_with_missing_host_header() -> None:
253 backend = MockBackend(http_buffer=[])
254
255 with httpcore.SyncConnectionPool(backend=backend) as http:
256 with pytest.raises(httpcore.LocalProtocolError) as excinfo:
257 http.handle_request(
258 method=b"GET",
259 url=(b"http", b"example.org", None, b"/"),
260 headers=[],
261 stream=httpcore.ByteStream(b""),
262 extensions={},
263 )
264 assert str(excinfo.value) == "Missing mandatory Host: header"
265
266
267
268 def test_concurrent_get_requests() -> None:
269 backend = MockBackend(
270 http_buffer=[
271 b"HTTP/1.1 200 OK\r\n",
272 b"Date: Sat, 06 Oct 2049 12:34:56 GMT\r\n",
273 b"Server: Apache\r\n",
274 b"Content-Length: 13\r\n",
275 b"Content-Type: text/plain\r\n",
276 b"\r\n",
277 b"Hello, world.",
278 ]
279 )
280
281 with httpcore.SyncConnectionPool(backend=backend) as http:
282 # We're sending a request with a standard keep-alive connection, so
283 # it will remain in the pool once we've sent the request.
284 response_1 = http.handle_request(
285 method=b"GET",
286 url=(b"http", b"example.org", None, b"/"),
287 headers=[(b"Host", b"example.org")],
288 stream=httpcore.ByteStream(b""),
289 extensions={},
290 )
291 status_code, headers, stream_1, extensions = response_1
292 assert http.get_connection_info() == {
293 "http://example.org": ["HTTP/1.1, ACTIVE"]
294 }
295
296 response_2 = http.handle_request(
297 method=b"GET",
298 url=(b"http", b"example.org", None, b"/"),
299 headers=[(b"Host", b"example.org")],
300 stream=httpcore.ByteStream(b""),
301 extensions={},
302 )
303 status_code, headers, stream_2, extensions = response_2
304 assert http.get_connection_info() == {
305 "http://example.org": ["HTTP/1.1, ACTIVE", "HTTP/1.1, ACTIVE"]
306 }
307
308 stream_1.read()
309 assert http.get_connection_info() == {
310 "http://example.org": ["HTTP/1.1, ACTIVE", "HTTP/1.1, IDLE"]
311 }
312
313 stream_2.read()
314 assert http.get_connection_info() == {
315 "http://example.org": ["HTTP/1.1, IDLE", "HTTP/1.1, IDLE"]
316 }
+0
-249
tests/sync_tests/test_http2.py
0 import collections
1
2 import h2.config
3 import h2.connection
4 import pytest
5
6 import httpcore
7 from httpcore._backends.sync import (
8 SyncBackend,
9 SyncLock,
10 SyncSemaphore,
11 SyncSocketStream,
12 )
13
14
15 class MockStream(SyncSocketStream):
16 def __init__(self, http_buffer, disconnect):
17 self.read_buffer = collections.deque(http_buffer)
18 self.disconnect = disconnect
19
20 def get_http_version(self) -> str:
21 return "HTTP/2"
22
23 def write(self, data, timeout):
24 pass
25
26 def read(self, n, timeout):
27 return self.read_buffer.popleft()
28
29 def close(self):
30 pass
31
32 def is_readable(self):
33 return self.disconnect
34
35
36 class MockLock(SyncLock):
37 def release(self):
38 pass
39
40 def acquire(self):
41 pass
42
43
44 class MockSemaphore(SyncSemaphore):
45 def __init__(self):
46 pass
47
48 def acquire(self, timeout=None):
49 pass
50
51 def release(self):
52 pass
53
54
55 class MockBackend(SyncBackend):
56 def __init__(self, http_buffer, disconnect=False):
57 self.http_buffer = http_buffer
58 self.disconnect = disconnect
59
60 def open_tcp_stream(
61 self, hostname, port, ssl_context, timeout, *, local_address
62 ):
63 return MockStream(self.http_buffer, self.disconnect)
64
65 def create_lock(self):
66 return MockLock()
67
68 def create_semaphore(self, max_value, exc_class):
69 return MockSemaphore()
70
71
72 class HTTP2BytesGenerator:
73 def __init__(self):
74 self.client_config = h2.config.H2Configuration(client_side=True)
75 self.client_conn = h2.connection.H2Connection(config=self.client_config)
76 self.server_config = h2.config.H2Configuration(client_side=False)
77 self.server_conn = h2.connection.H2Connection(config=self.server_config)
78 self.initialized = False
79
80 def get_server_bytes(
81 self, request_headers, request_data, response_headers, response_data
82 ):
83 if not self.initialized:
84 self.client_conn.initiate_connection()
85 self.server_conn.initiate_connection()
86 self.initialized = True
87
88 # Feed the request events to the client-side state machine
89 client_stream_id = self.client_conn.get_next_available_stream_id()
90 self.client_conn.send_headers(client_stream_id, headers=request_headers)
91 self.client_conn.send_data(client_stream_id, data=request_data, end_stream=True)
92
93 # Determine the bytes that are sent out by the client side, and feed them
94 # into the server-side state machine to get it into the correct state.
95 client_bytes = self.client_conn.data_to_send()
96 events = self.server_conn.receive_data(client_bytes)
97 server_stream_id = [
98 event.stream_id
99 for event in events
100 if isinstance(event, h2.events.RequestReceived)
101 ][0]
102
103 # Feed the response events to the server-side state machine
104 self.server_conn.send_headers(server_stream_id, headers=response_headers)
105 self.server_conn.send_data(
106 server_stream_id, data=response_data, end_stream=True
107 )
108
109 return self.server_conn.data_to_send()
110
111
112
113 def test_get_request() -> None:
114 bytes_generator = HTTP2BytesGenerator()
115 http_buffer = [
116 bytes_generator.get_server_bytes(
117 request_headers=[
118 (b":method", b"GET"),
119 (b":authority", b"www.example.com"),
120 (b":scheme", b"https"),
121 (b":path", "/"),
122 ],
123 request_data=b"",
124 response_headers=[
125 (b":status", b"200"),
126 (b"date", b"Sat, 06 Oct 2049 12:34:56 GMT"),
127 (b"server", b"Apache"),
128 (b"content-length", b"13"),
129 (b"content-type", b"text/plain"),
130 ],
131 response_data=b"Hello, world.",
132 ),
133 bytes_generator.get_server_bytes(
134 request_headers=[
135 (b":method", b"GET"),
136 (b":authority", b"www.example.com"),
137 (b":scheme", b"https"),
138 (b":path", "/"),
139 ],
140 request_data=b"",
141 response_headers=[
142 (b":status", b"200"),
143 (b"date", b"Sat, 06 Oct 2049 12:34:56 GMT"),
144 (b"server", b"Apache"),
145 (b"content-length", b"13"),
146 (b"content-type", b"text/plain"),
147 ],
148 response_data=b"Hello, world.",
149 ),
150 ]
151 backend = MockBackend(http_buffer=http_buffer)
152
153 with httpcore.SyncConnectionPool(http2=True, backend=backend) as http:
154 # We're sending a request with a standard keep-alive connection, so
155 # it will remain in the pool once we've sent the request.
156 response = http.handle_request(
157 method=b"GET",
158 url=(b"https", b"example.org", None, b"/"),
159 headers=[(b"Host", b"example.org")],
160 stream=httpcore.ByteStream(b""),
161 extensions={},
162 )
163 status_code, headers, stream, extensions = response
164 body = stream.read()
165 assert status_code == 200
166 assert body == b"Hello, world."
167 assert http.get_connection_info() == {
168 "https://example.org": ["HTTP/2, IDLE, 0 streams"]
169 }
170
171 # The second HTTP request will go out over the same connection.
172 response = http.handle_request(
173 method=b"GET",
174 url=(b"https", b"example.org", None, b"/"),
175 headers=[(b"Host", b"example.org")],
176 stream=httpcore.ByteStream(b""),
177 extensions={},
178 )
179 status_code, headers, stream, extensions = response
180 body = stream.read()
181 assert status_code == 200
182 assert body == b"Hello, world."
183 assert http.get_connection_info() == {
184 "https://example.org": ["HTTP/2, IDLE, 0 streams"]
185 }
186
187
188
189 def test_post_request() -> None:
190 bytes_generator = HTTP2BytesGenerator()
191 bytes_to_send = bytes_generator.get_server_bytes(
192 request_headers=[
193 (b":method", b"POST"),
194 (b":authority", b"www.example.com"),
195 (b":scheme", b"https"),
196 (b":path", "/"),
197 (b"content-length", b"13"),
198 ],
199 request_data=b"Hello, world.",
200 response_headers=[
201 (b":status", b"200"),
202 (b"date", b"Sat, 06 Oct 2049 12:34:56 GMT"),
203 (b"server", b"Apache"),
204 (b"content-length", b"13"),
205 (b"content-type", b"text/plain"),
206 ],
207 response_data=b"Hello, world.",
208 )
209 backend = MockBackend(http_buffer=[bytes_to_send])
210
211 with httpcore.SyncConnectionPool(http2=True, backend=backend) as http:
212 # We're sending a request with a standard keep-alive connection, so
213 # it will remain in the pool once we've sent the request.
214 response = http.handle_request(
215 method=b"POST",
216 url=(b"https", b"example.org", None, b"/"),
217 headers=[(b"Host", b"example.org"), (b"Content-length", b"13")],
218 stream=httpcore.ByteStream(b"Hello, world."),
219 extensions={},
220 )
221 status_code, headers, stream, extensions = response
222 body = stream.read()
223 assert status_code == 200
224 assert body == b"Hello, world."
225 assert http.get_connection_info() == {
226 "https://example.org": ["HTTP/2, IDLE, 0 streams"]
227 }
228
229
230
231 def test_request_with_missing_host_header() -> None:
232 backend = MockBackend(http_buffer=[])
233
234 server_config = h2.config.H2Configuration(client_side=False)
235 server_conn = h2.connection.H2Connection(config=server_config)
236 server_conn.initiate_connection()
237 backend = MockBackend(http_buffer=[server_conn.data_to_send()])
238
239 with httpcore.SyncConnectionPool(backend=backend) as http:
240 with pytest.raises(httpcore.LocalProtocolError) as excinfo:
241 http.handle_request(
242 method=b"GET",
243 url=(b"http", b"example.org", None, b"/"),
244 headers=[],
245 stream=httpcore.ByteStream(b""),
246 extensions={},
247 )
248 assert str(excinfo.value) == "Missing mandatory Host: header"
+0
-605
tests/sync_tests/test_interfaces.py
0 import platform
1 from typing import Tuple
2
3 import pytest
4
5 import httpcore
6 from httpcore._types import URL
7 from tests.conftest import HTTPS_SERVER_URL
8 from tests.utils import Server, lookup_sync_backend
9
10
11 @pytest.fixture(params=["sync"])
12 def backend(request):
13 return request.param
14
15
16 def read_body(stream: httpcore.SyncByteStream) -> bytes:
17 try:
18 body = []
19 for chunk in stream:
20 body.append(chunk)
21 return b"".join(body)
22 finally:
23 stream.close()
24
25
26 def test_must_configure_either_http1_or_http2() -> None:
27 with pytest.raises(ValueError):
28 httpcore.SyncConnectionPool(http1=False, http2=False)
29
30
31
32 def test_http_request(backend: str, server: Server) -> None:
33 with httpcore.SyncConnectionPool(backend=backend) as http:
34 status_code, headers, stream, extensions = http.handle_request(
35 method=b"GET",
36 url=(b"http", *server.netloc, b"/"),
37 headers=[server.host_header],
38 stream=httpcore.ByteStream(b""),
39 extensions={},
40 )
41 read_body(stream)
42
43 assert status_code == 200
44 reason_phrase = b"OK" if server.sends_reason else b""
45 assert extensions == {
46 "http_version": b"HTTP/1.1",
47 "reason_phrase": reason_phrase,
48 }
49 origin = (b"http", *server.netloc)
50 assert len(http._connections[origin]) == 1 # type: ignore
51
52
53
54 def test_https_request(backend: str, https_server: Server) -> None:
55 with httpcore.SyncConnectionPool(backend=backend) as http:
56 status_code, headers, stream, extensions = http.handle_request(
57 method=b"GET",
58 url=(b"https", *https_server.netloc, b"/"),
59 headers=[https_server.host_header],
60 stream=httpcore.ByteStream(b""),
61 extensions={},
62 )
63 read_body(stream)
64
65 assert status_code == 200
66 reason_phrase = b"OK" if https_server.sends_reason else b""
67 assert extensions == {
68 "http_version": b"HTTP/1.1",
69 "reason_phrase": reason_phrase,
70 }
71 origin = (b"https", *https_server.netloc)
72 assert len(http._connections[origin]) == 1 # type: ignore
73
74
75
76 @pytest.mark.parametrize(
77 "url", [(b"ftp", b"example.org", 443, b"/"), (b"", b"coolsite.org", 443, b"/")]
78 )
79 def test_request_unsupported_protocol(
80 backend: str, url: Tuple[bytes, bytes, int, bytes]
81 ) -> None:
82 with httpcore.SyncConnectionPool(backend=backend) as http:
83 with pytest.raises(httpcore.UnsupportedProtocol):
84 http.handle_request(
85 method=b"GET",
86 url=url,
87 headers=[(b"host", b"example.org")],
88 stream=httpcore.ByteStream(b""),
89 extensions={},
90 )
91
92
93
94 def test_http2_request(backend: str, https_server: Server) -> None:
95 with httpcore.SyncConnectionPool(backend=backend, http2=True) as http:
96 status_code, headers, stream, extensions = http.handle_request(
97 method=b"GET",
98 url=(b"https", *https_server.netloc, b"/"),
99 headers=[https_server.host_header],
100 stream=httpcore.ByteStream(b""),
101 extensions={},
102 )
103 read_body(stream)
104
105 assert status_code == 200
106 assert extensions == {"http_version": b"HTTP/2"}
107 origin = (b"https", *https_server.netloc)
108 assert len(http._connections[origin]) == 1 # type: ignore
109
110
111
112 def test_closing_http_request(backend: str, server: Server) -> None:
113 with httpcore.SyncConnectionPool(backend=backend) as http:
114 status_code, headers, stream, extensions = http.handle_request(
115 method=b"GET",
116 url=(b"http", *server.netloc, b"/"),
117 headers=[server.host_header, (b"connection", b"close")],
118 stream=httpcore.ByteStream(b""),
119 extensions={},
120 )
121 read_body(stream)
122
123 assert status_code == 200
124 reason_phrase = b"OK" if server.sends_reason else b""
125 assert extensions == {
126 "http_version": b"HTTP/1.1",
127 "reason_phrase": reason_phrase,
128 }
129 origin = (b"http", *server.netloc)
130 assert origin not in http._connections # type: ignore
131
132
133
134 def test_http_request_reuse_connection(backend: str, server: Server) -> None:
135 with httpcore.SyncConnectionPool(backend=backend) as http:
136 status_code, headers, stream, extensions = http.handle_request(
137 method=b"GET",
138 url=(b"http", *server.netloc, b"/"),
139 headers=[server.host_header],
140 stream=httpcore.ByteStream(b""),
141 extensions={},
142 )
143 read_body(stream)
144
145 assert status_code == 200
146 reason_phrase = b"OK" if server.sends_reason else b""
147 assert extensions == {
148 "http_version": b"HTTP/1.1",
149 "reason_phrase": reason_phrase,
150 }
151 origin = (b"http", *server.netloc)
152 assert len(http._connections[origin]) == 1 # type: ignore
153
154 status_code, headers, stream, extensions = http.handle_request(
155 method=b"GET",
156 url=(b"http", *server.netloc, b"/"),
157 headers=[server.host_header],
158 stream=httpcore.ByteStream(b""),
159 extensions={},
160 )
161 read_body(stream)
162
163 assert status_code == 200
164 reason_phrase = b"OK" if server.sends_reason else b""
165 assert extensions == {
166 "http_version": b"HTTP/1.1",
167 "reason_phrase": reason_phrase,
168 }
169 origin = (b"http", *server.netloc)
170 assert len(http._connections[origin]) == 1 # type: ignore
171
172
173
174 def test_https_request_reuse_connection(
175 backend: str, https_server: Server
176 ) -> None:
177 with httpcore.SyncConnectionPool(backend=backend) as http:
178 status_code, headers, stream, extensions = http.handle_request(
179 method=b"GET",
180 url=(b"https", *https_server.netloc, b"/"),
181 headers=[https_server.host_header],
182 stream=httpcore.ByteStream(b""),
183 extensions={},
184 )
185 read_body(stream)
186
187 assert status_code == 200
188 reason_phrase = b"OK" if https_server.sends_reason else b""
189 assert extensions == {
190 "http_version": b"HTTP/1.1",
191 "reason_phrase": reason_phrase,
192 }
193 origin = (b"https", *https_server.netloc)
194 assert len(http._connections[origin]) == 1 # type: ignore
195
196 status_code, headers, stream, extensions = http.handle_request(
197 method=b"GET",
198 url=(b"https", *https_server.netloc, b"/"),
199 headers=[https_server.host_header],
200 stream=httpcore.ByteStream(b""),
201 extensions={},
202 )
203 read_body(stream)
204
205 assert status_code == 200
206 reason_phrase = b"OK" if https_server.sends_reason else b""
207 assert extensions == {
208 "http_version": b"HTTP/1.1",
209 "reason_phrase": reason_phrase,
210 }
211 origin = (b"https", *https_server.netloc)
212 assert len(http._connections[origin]) == 1 # type: ignore
213
214
215
216 def test_http_request_cannot_reuse_dropped_connection(
217 backend: str, server: Server
218 ) -> None:
219 with httpcore.SyncConnectionPool(backend=backend) as http:
220 status_code, headers, stream, extensions = http.handle_request(
221 method=b"GET",
222 url=(b"http", *server.netloc, b"/"),
223 headers=[server.host_header],
224 stream=httpcore.ByteStream(b""),
225 extensions={},
226 )
227 read_body(stream)
228
229 assert status_code == 200
230 reason_phrase = b"OK" if server.sends_reason else b""
231 assert extensions == {
232 "http_version": b"HTTP/1.1",
233 "reason_phrase": reason_phrase,
234 }
235 origin = (b"http", *server.netloc)
236 assert len(http._connections[origin]) == 1 # type: ignore
237
238 # Mock the connection as having been dropped.
239 connection = list(http._connections[origin])[0] # type: ignore
240 connection.is_socket_readable = lambda: True # type: ignore
241
242 status_code, headers, stream, extensions = http.handle_request(
243 method=b"GET",
244 url=(b"http", *server.netloc, b"/"),
245 headers=[server.host_header],
246 stream=httpcore.ByteStream(b""),
247 extensions={},
248 )
249 read_body(stream)
250
251 assert status_code == 200
252 reason_phrase = b"OK" if server.sends_reason else b""
253 assert extensions == {
254 "http_version": b"HTTP/1.1",
255 "reason_phrase": reason_phrase,
256 }
257 origin = (b"http", *server.netloc)
258 assert len(http._connections[origin]) == 1 # type: ignore
259
260
261 @pytest.mark.parametrize("proxy_mode", ["DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY"])
262
263 def test_http_proxy(
264 proxy_server: URL, proxy_mode: str, backend: str, server: Server
265 ) -> None:
266 max_connections = 1
267 with httpcore.SyncHTTPProxy(
268 proxy_server,
269 proxy_mode=proxy_mode,
270 max_connections=max_connections,
271 backend=backend,
272 ) as http:
273 status_code, headers, stream, extensions = http.handle_request(
274 method=b"GET",
275 url=(b"http", *server.netloc, b"/"),
276 headers=[server.host_header],
277 stream=httpcore.ByteStream(b""),
278 extensions={},
279 )
280 read_body(stream)
281
282 assert status_code == 200
283 reason_phrase = b"OK" if server.sends_reason else b""
284 assert extensions == {
285 "http_version": b"HTTP/1.1",
286 "reason_phrase": reason_phrase,
287 }
288
289
290 @pytest.mark.parametrize("proxy_mode", ["DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY"])
291 @pytest.mark.parametrize("protocol,port", [(b"http", 80), (b"https", 443)])
292
293 # Filter out ssl module deprecation warnings and asyncio module resource warning,
294 # convert other warnings to errors.
295 @pytest.mark.filterwarnings("ignore:.*(SSLContext|PROTOCOL_TLS):DeprecationWarning")
296 @pytest.mark.filterwarnings("ignore::ResourceWarning:asyncio")
297 @pytest.mark.filterwarnings("error")
298 def test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool(
299 proxy_server: URL,
300 server: Server,
301 proxy_mode: str,
302 protocol: bytes,
303 port: int,
304 ):
305 with httpcore.SyncHTTPProxy(proxy_server, proxy_mode=proxy_mode) as http:
306 for _ in range(100):
307 try:
308 _ = http.handle_request(
309 method=b"GET",
310 url=(protocol, b"blockedhost.example.com", port, b"/"),
311 headers=[(b"host", b"blockedhost.example.com")],
312 stream=httpcore.ByteStream(b""),
313 extensions={},
314 )
315 except (httpcore.ProxyError, httpcore.RemoteProtocolError):
316 pass
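        # If the proxy sockets leaked here, the `filterwarnings("error")` marker
        # above would turn the resulting ResourceWarning into a test failure.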
317
318
319
320 def test_http_request_local_address(backend: str, server: Server) -> None:
321 if backend == "sync" and lookup_sync_backend() == "trio":
322 pytest.skip("The trio backend does not support local_address")
323
324 with httpcore.SyncConnectionPool(
325 backend=backend, local_address="0.0.0.0"
326 ) as http:
327 status_code, headers, stream, extensions = http.handle_request(
328 method=b"GET",
329 url=(b"http", *server.netloc, b"/"),
330 headers=[server.host_header],
331 stream=httpcore.ByteStream(b""),
332 extensions={},
333 )
334 read_body(stream)
335
336 assert status_code == 200
337 reason_phrase = b"OK" if server.sends_reason else b""
338 assert extensions == {
339 "http_version": b"HTTP/1.1",
340 "reason_phrase": reason_phrase,
341 }
342 origin = (b"http", *server.netloc)
343 assert len(http._connections[origin]) == 1 # type: ignore
344
345
346 # mitmproxy does not support forwarding HTTPS requests
347 @pytest.mark.parametrize("proxy_mode", ["DEFAULT", "TUNNEL_ONLY"])
348 @pytest.mark.parametrize("http2", [False, True])
349
350 def test_proxy_https_requests(
351 proxy_server: URL,
352 proxy_mode: str,
353 http2: bool,
354 https_server: Server,
355 ) -> None:
356 max_connections = 1
357 with httpcore.SyncHTTPProxy(
358 proxy_server,
359 proxy_mode=proxy_mode,
360 max_connections=max_connections,
361 http2=http2,
362 ) as http:
363 status_code, headers, stream, extensions = http.handle_request(
364 method=b"GET",
365 url=(b"https", *https_server.netloc, b"/"),
366 headers=[https_server.host_header],
367 stream=httpcore.ByteStream(b""),
368 extensions={},
369 )
370 _ = read_body(stream)
371
372 assert status_code == 200
373 assert extensions["http_version"] == (b"HTTP/2" if http2 else b"HTTP/1.1")
374 assert extensions.get("reason_phrase", b"") == (b"OK" if not http2 and https_server.sends_reason else b"")
375
376
377 @pytest.mark.parametrize(
378 "http2,keepalive_expiry,expected_during_active,expected_during_idle",
379 [
380 (
381 False,
382 60.0,
383 {HTTPS_SERVER_URL: ["HTTP/1.1, ACTIVE", "HTTP/1.1, ACTIVE"]},
384 {HTTPS_SERVER_URL: ["HTTP/1.1, IDLE", "HTTP/1.1, IDLE"]},
385 ),
386 (
387 True,
388 60.0,
389 {HTTPS_SERVER_URL: ["HTTP/2, ACTIVE, 2 streams"]},
390 {HTTPS_SERVER_URL: ["HTTP/2, IDLE, 0 streams"]},
391 ),
392 (
393 False,
394 0.0,
395 {HTTPS_SERVER_URL: ["HTTP/1.1, ACTIVE", "HTTP/1.1, ACTIVE"]},
396 {},
397 ),
398 (
399 True,
400 0.0,
401 {HTTPS_SERVER_URL: ["HTTP/2, ACTIVE, 2 streams"]},
402 {},
403 ),
404 ],
405 )
406
407 def test_connection_pool_get_connection_info(
408 http2: bool,
409 keepalive_expiry: float,
410 expected_during_active: dict,
411 expected_during_idle: dict,
412 backend: str,
413 https_server: Server,
414 ) -> None:
415 with httpcore.SyncConnectionPool(
416 http2=http2, keepalive_expiry=keepalive_expiry, backend=backend
417 ) as http:
418 _, _, stream_1, _ = http.handle_request(
419 method=b"GET",
420 url=(b"https", *https_server.netloc, b"/"),
421 headers=[https_server.host_header],
422 stream=httpcore.ByteStream(b""),
423 extensions={},
424 )
425 _, _, stream_2, _ = http.handle_request(
426 method=b"GET",
427 url=(b"https", *https_server.netloc, b"/"),
428 headers=[https_server.host_header],
429 stream=httpcore.ByteStream(b""),
430 extensions={},
431 )
432
433 try:
434 stats = http.get_connection_info()
435 assert stats == expected_during_active
436 finally:
437 read_body(stream_1)
438 read_body(stream_2)
439
440 stats = http.get_connection_info()
441 assert stats == expected_during_idle
442
443 stats = http.get_connection_info()
444 assert stats == {}
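    # The final snapshot above is taken after leaving the `with` block, i.e.
    # after the pool has been closed.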
445
446
447 @pytest.mark.skipif(
448 platform.system() not in ("Linux", "Darwin"),
449 reason="Unix Domain Sockets only exist on Unix",
450 )
451
452 def test_http_request_unix_domain_socket(
453 uds_server: Server, backend: str
454 ) -> None:
455 uds = uds_server.uds
456 with httpcore.SyncConnectionPool(uds=uds, backend=backend) as http:
457 status_code, headers, stream, extensions = http.handle_request(
458 method=b"GET",
459 url=(b"http", b"localhost", None, b"/"),
460 headers=[(b"host", b"localhost")],
461 stream=httpcore.ByteStream(b""),
462 extensions={},
463 )
464 assert status_code == 200
465 reason_phrase = b"OK" if uds_server.sends_reason else b""
466 assert extensions == {
467 "http_version": b"HTTP/1.1",
468 "reason_phrase": reason_phrase,
469 }
470 body = read_body(stream)
471 assert body == b"Hello, world!"
472
473
474 @pytest.mark.parametrize("max_keepalive", [1, 3, 5])
475 @pytest.mark.parametrize("connections_number", [4])
476
477 def test_max_keepalive_connections_handled_correctly(
478 max_keepalive: int, connections_number: int, backend: str, server: Server
479 ) -> None:
480 with httpcore.SyncConnectionPool(
481 max_keepalive_connections=max_keepalive, keepalive_expiry=60, backend=backend
482 ) as http:
483 connections_streams = []
484 for _ in range(connections_number):
485 _, _, stream, _ = http.handle_request(
486 method=b"GET",
487 url=(b"http", *server.netloc, b"/"),
488 headers=[server.host_header],
489 stream=httpcore.ByteStream(b""),
490 extensions={},
491 )
492 connections_streams.append(stream)
493
494 try:
495 for i in range(len(connections_streams)):
496 read_body(connections_streams[i])
497 finally:
498 stats = http.get_connection_info()
499
500 connections_in_pool = next(iter(stats.values()))
501 assert len(connections_in_pool) == min(connections_number, max_keepalive)
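        # Connections above the keep-alive limit are closed once their response
        # body has been read, so at most `max_keepalive` idle connections remain.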
502
503
504
505 def test_explicit_backend_name(server: Server) -> None:
506 with httpcore.SyncConnectionPool(backend=lookup_sync_backend()) as http:
507 status_code, headers, stream, extensions = http.handle_request(
508 method=b"GET",
509 url=(b"http", *server.netloc, b"/"),
510 headers=[server.host_header],
511 stream=httpcore.ByteStream(b""),
512 extensions={},
513 )
514 read_body(stream)
515
516 assert status_code == 200
517 reason_phrase = b"OK" if server.sends_reason else b""
518 assert extensions == {
519 "http_version": b"HTTP/1.1",
520 "reason_phrase": reason_phrase,
521 }
522 origin = (b"http", *server.netloc)
523 assert len(http._connections[origin]) == 1 # type: ignore
524
525
526
527 @pytest.mark.usefixtures("too_many_open_files_minus_one")
528 @pytest.mark.skipif(platform.system() != "Linux", reason="Only a problem on Linux")
529 def test_broken_socket_detection_many_open_files(
530 backend: str, server: Server
531 ) -> None:
532 """
533 Regression test for: https://github.com/encode/httpcore/issues/182
534 """
535 with httpcore.SyncConnectionPool(backend=backend) as http:
536 # * The first attempt succeeds, because it grabs the last file
537 # descriptor still within the range that select() supports on the platform.
538 # * The second attempt would have failed without the fix, due to a
539 # "filedescriptor out of range in select()" exception.
540 for _ in range(2):
541 (
542 status_code,
543 response_headers,
544 stream,
545 extensions,
546 ) = http.handle_request(
547 method=b"GET",
548 url=(b"http", *server.netloc, b"/"),
549 headers=[server.host_header],
550 stream=httpcore.ByteStream(b""),
551 extensions={},
552 )
553 read_body(stream)
554
555 assert status_code == 200
556 reason_phrase = b"OK" if server.sends_reason else b""
557 assert extensions == {
558 "http_version": b"HTTP/1.1",
559 "reason_phrase": reason_phrase,
560 }
561 origin = (b"http", *server.netloc)
562 assert len(http._connections[origin]) == 1 # type: ignore
563
564
565
566 @pytest.mark.parametrize(
567 "url",
568 [
569 pytest.param((b"http", b"localhost", 12345, b"/"), id="connection-refused"),
570 pytest.param(
571 (b"http", b"doesnotexistatall.org", None, b"/"), id="dns-resolution-failed"
572 ),
573 ],
574 )
575 def test_cannot_connect_tcp(backend: str, url) -> None:
576 """
577 A properly wrapped error is raised when connecting to the server fails.
578 """
579 with httpcore.SyncConnectionPool(backend=backend) as http:
580 with pytest.raises(httpcore.ConnectError):
581 http.handle_request(
582 method=b"GET",
583 url=url,
584 headers=[],
585 stream=httpcore.ByteStream(b""),
586 extensions={},
587 )
588
589
590
591 def test_cannot_connect_uds(backend: str) -> None:
592 """
593 A properly wrapped error is raised when connecting to the UDS server fails.
594 """
595 uds = "/tmp/doesnotexist.sock"
596 with httpcore.SyncConnectionPool(backend=backend, uds=uds) as http:
597 with pytest.raises(httpcore.ConnectError):
598 http.handle_request(
599 method=b"GET",
600 url=(b"http", b"localhost", None, b"/"),
601 headers=[],
602 stream=httpcore.ByteStream(b""),
603 extensions={},
604 )
+0 -200 tests/sync_tests/test_retries.py
0 import queue
1 import time
2 from typing import Any, List, Optional
3
4 import pytest
5
6 import httpcore
7 from httpcore._backends.sync import SyncSocketStream, SyncBackend
8 from tests.utils import Server
9
10
11 class SyncMockBackend(SyncBackend):
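    # A mock backend that records the timestamp of every connection attempt,
    # and can be primed (via push) with exceptions to raise from successive
    # open_tcp_stream() calls; a None entry means "connect normally".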
12 def __init__(self) -> None:
13 super().__init__()
14 self._exceptions: queue.Queue[Optional[Exception]] = queue.Queue()
15 self._timestamps: List[float] = []
16
17 def push(self, *exceptions: Optional[Exception]) -> None:
18 for exc in exceptions:
19 self._exceptions.put(exc)
20
21 def pop_open_tcp_stream_intervals(self) -> list:
22 intervals = [b - a for a, b in zip(self._timestamps, self._timestamps[1:])]
23 self._timestamps.clear()
24 return intervals
25
26 def open_tcp_stream(self, *args: Any, **kwargs: Any) -> SyncSocketStream:
27 self._timestamps.append(time.time())
28 exc = None if self._exceptions.empty() else self._exceptions.get_nowait()
29 if exc is not None:
30 raise exc
31 return super().open_tcp_stream(*args, **kwargs)
32
33
34 def read_body(stream: httpcore.SyncByteStream) -> bytes:
35 try:
36 return b"".join([chunk for chunk in stream])
37 finally:
38 stream.close()
39
40
41
42 def test_no_retries(server: Server) -> None:
43 """
44 By default, connection failures are not retried.
45 """
46 backend = SyncMockBackend()
47
48 with httpcore.SyncConnectionPool(
49 max_keepalive_connections=0, backend=backend
50 ) as http:
51 response = http.handle_request(
52 method=b"GET",
53 url=(b"http", *server.netloc, b"/"),
54 headers=[server.host_header],
55 stream=httpcore.ByteStream(b""),
56 extensions={},
57 )
58 status_code, _, stream, _ = response
59 assert status_code == 200
60 read_body(stream)
61
62 backend.push(httpcore.ConnectTimeout(), httpcore.ConnectError())
63
64 with pytest.raises(httpcore.ConnectTimeout):
65 http.handle_request(
66 method=b"GET",
67 url=(b"http", *server.netloc, b"/"),
68 headers=[server.host_header],
69 stream=httpcore.ByteStream(b""),
70 extensions={},
71 )
72
73 with pytest.raises(httpcore.ConnectError):
74 http.handle_request(
75 method=b"GET",
76 url=(b"http", *server.netloc, b"/"),
77 headers=[server.host_header],
78 stream=httpcore.ByteStream(b""),
79 extensions={},
80 )
81
82
83
84 def test_retries_enabled(server: Server) -> None:
85 """
86 When retries are enabled, connection failures are retried, with an
87 exponential backoff between attempts.
88 """
89 backend = SyncMockBackend()
90 retries = 10 # Large enough to not run out of retries within this test.
91
92 with httpcore.SyncConnectionPool(
93 retries=retries, max_keepalive_connections=0, backend=backend
94 ) as http:
95 # Standard case, no failures.
96 response = http.handle_request(
97 method=b"GET",
98 url=(b"http", *server.netloc, b"/"),
99 headers=[server.host_header],
100 stream=httpcore.ByteStream(b""),
101 extensions={},
102 )
103 assert backend.pop_open_tcp_stream_intervals() == []
104 status_code, _, stream, _ = response
105 assert status_code == 200
106 read_body(stream)
107
108 # One failure, then success.
109 backend.push(httpcore.ConnectError(), None)
110 response = http.handle_request(
111 method=b"GET",
112 url=(b"http", *server.netloc, b"/"),
113 headers=[server.host_header],
114 stream=httpcore.ByteStream(b""),
115 extensions={},
116 )
117 assert backend.pop_open_tcp_stream_intervals() == [
118 pytest.approx(0, abs=5e-3), # Retry immediately.
119 ]
120 status_code, _, stream, _ = response
121 assert status_code == 200
122 read_body(stream)
123
124 # Three failures, then success.
125 backend.push(
126 httpcore.ConnectError(),
127 httpcore.ConnectTimeout(),
128 httpcore.ConnectTimeout(),
129 None,
130 )
131 response = http.handle_request(
132 method=b"GET",
133 url=(b"http", *server.netloc, b"/"),
134 headers=[server.host_header],
135 stream=httpcore.ByteStream(b""),
136 extensions={},
137 )
138 assert backend.pop_open_tcp_stream_intervals() == [
139 pytest.approx(0, abs=5e-3), # Retry immediately.
140 pytest.approx(0.5, rel=0.1), # First backoff.
141 pytest.approx(1.0, rel=0.1), # Second (increased) backoff.
142 ]
143 status_code, _, stream, _ = response
144 assert status_code == 200
145 read_body(stream)
146
147 # Non-connect exceptions are not retried on.
148 backend.push(httpcore.ReadTimeout(), httpcore.NetworkError())
149 with pytest.raises(httpcore.ReadTimeout):
150 http.handle_request(
151 method=b"GET",
152 url=(b"http", *server.netloc, b"/"),
153 headers=[server.host_header],
154 stream=httpcore.ByteStream(b""),
155 extensions={},
156 )
157 with pytest.raises(httpcore.NetworkError):
158 http.handle_request(
159 method=b"GET",
160 url=(b"http", *server.netloc, b"/"),
161 headers=[server.host_header],
162 stream=httpcore.ByteStream(b""),
163 extensions={},
164 )
165
166
167
168 def test_retries_exceeded(server: Server) -> None:
169 """
170 When retries are enabled and connecting fails more times than the configured
171 number of retries, the connect exception is raised.
172 """
173 backend = SyncMockBackend()
174 retries = 1
175
176 with httpcore.SyncConnectionPool(
177 retries=retries, max_keepalive_connections=0, backend=backend
178 ) as http:
179 response = http.handle_request(
180 method=b"GET",
181 url=(b"http", *server.netloc, b"/"),
182 headers=[server.host_header],
183 stream=httpcore.ByteStream(b""),
184 extensions={},
185 )
186 status_code, _, stream, _ = response
187 assert status_code == 200
188 read_body(stream)
189
190 # First failure is retried on, second one isn't.
191 backend.push(httpcore.ConnectError(), httpcore.ConnectTimeout())
192 with pytest.raises(httpcore.ConnectTimeout):
193 http.handle_request(
194 method=b"GET",
195 url=(b"http", *server.netloc, b"/"),
196 headers=[server.host_header],
197 stream=httpcore.ByteStream(b""),
198 extensions={},
199 )
0 import json
1
2 import httpcore
3
4
5 def test_request(httpbin):
6 response = httpcore.request("GET", httpbin.url)
7 assert response.status == 200
8
9
10 def test_stream(httpbin):
11 with httpcore.stream("GET", httpbin.url) as response:
12 assert response.status == 200
13
14
15 def test_request_with_content(httpbin):
16 url = f"{httpbin.url}/post"
17 response = httpcore.request("POST", url, content=b'{"hello":"world"}')
18 assert response.status == 200
19 assert json.loads(response.content)["json"] == {"hello": "world"}
+0 -8 tests/test_exported_members.py
0 import httpcore
1 from httpcore import __all__ as exported_members
2
3
4 def test_all_imports_are_exported() -> None:
5 assert exported_members == sorted(
6 member for member in vars(httpcore).keys() if not member.startswith("_")
7 )
+0 -21 tests/test_map_exceptions.py
0 import pytest
1
2 from httpcore._exceptions import map_exceptions
3
4
5 def test_map_single_exception() -> None:
6 with pytest.raises(TypeError):
7 with map_exceptions({ValueError: TypeError}):
8 raise ValueError("nope")
9
10
11 def test_map_multiple_exceptions() -> None:
12 with pytest.raises(ValueError):
13 with map_exceptions({IndexError: ValueError, KeyError: ValueError}):
14 raise KeyError("nope")
15
16
17 def test_unhandled_map_exception() -> None:
18 with pytest.raises(TypeError):
19 with map_exceptions({IndexError: ValueError, KeyError: ValueError}):
20 raise TypeError("nope")
0 from typing import AsyncIterator, Iterator, List
1
2 import pytest
3
4 import httpcore
5
6 # URL
7
8
9 def test_url():
10 url = httpcore.URL("https://www.example.com/")
11 assert url == httpcore.URL(
12 scheme="https", host="www.example.com", port=None, target="/"
13 )
14 assert bytes(url) == b"https://www.example.com/"
15
16
17 def test_url_with_port():
18 url = httpcore.URL("https://www.example.com:443/")
19 assert url == httpcore.URL(
20 scheme="https", host="www.example.com", port=443, target="/"
21 )
22 assert bytes(url) == b"https://www.example.com:443/"
23
24
25 def test_url_with_invalid_argument():
26 with pytest.raises(TypeError) as exc_info:
27 httpcore.URL(123) # type: ignore
28 assert str(exc_info.value) == "url must be bytes or str, but got int."
29
30
31 def test_url_cannot_include_unicode_strings():
32 """
33 URLs instantiated with strings outside of the plain ASCII range are disallowed,
34 but the explicit style allows for these ambiguous cases to be precisely expressed.
35 """
36 with pytest.raises(TypeError) as exc_info:
37 httpcore.URL("https://www.example.com/☺")
38 assert str(exc_info.value) == "url strings may not include unicode characters."
39
40 httpcore.URL(scheme=b"https", host=b"www.example.com", target="/☺".encode("utf-8"))
41
42
43 # Request
44
45
46 def test_request():
47 request = httpcore.Request("GET", "https://www.example.com/")
48 assert request.method == b"GET"
49 assert request.url == httpcore.URL("https://www.example.com/")
50 assert request.headers == []
51 assert request.extensions == {}
52 assert repr(request) == "<Request [b'GET']>"
53 assert (
54 repr(request.url)
55 == "URL(scheme=b'https', host=b'www.example.com', port=None, target=b'/')"
56 )
57 assert repr(request.stream) == "<ByteStream [0 bytes]>"
58
59
60 def test_request_with_invalid_method():
61 with pytest.raises(TypeError) as exc_info:
62 httpcore.Request(123, "https://www.example.com/") # type: ignore
63 assert str(exc_info.value) == "method must be bytes or str, but got int."
64
65
66 def test_request_with_invalid_url():
67 with pytest.raises(TypeError) as exc_info:
68 httpcore.Request("GET", 123) # type: ignore
69 assert str(exc_info.value) == "url must be a URL, bytes, or str, but got int."
70
71
72 def test_request_with_invalid_headers():
73 with pytest.raises(TypeError) as exc_info:
74 httpcore.Request("GET", "https://www.example.com/", headers=123) # type: ignore
75 assert (
76 str(exc_info.value)
77 == "headers must be a mapping or sequence of two-tuples, but got int."
78 )
79
80
81 # Response
82
83
84 def test_response():
85 response = httpcore.Response(200)
86 assert response.status == 200
87 assert response.headers == []
88 assert response.extensions == {}
89 assert repr(response) == "<Response [200]>"
90 assert repr(response.stream) == "<ByteStream [0 bytes]>"
91
92
93 # Tests for reading and streaming sync byte streams...
94
95
96 class ByteIterator:
97 def __init__(self, chunks: List[bytes]) -> None:
98 self._chunks = chunks
99
100 def __iter__(self) -> Iterator[bytes]:
101 for chunk in self._chunks:
102 yield chunk
103
104
105 def test_response_sync_read():
106 stream = ByteIterator([b"Hello, ", b"world!"])
107 response = httpcore.Response(200, content=stream)
108 assert response.read() == b"Hello, world!"
109 assert response.content == b"Hello, world!"
110
111
112 def test_response_sync_streaming():
113 stream = ByteIterator([b"Hello, ", b"world!"])
114 response = httpcore.Response(200, content=stream)
115 content = b"".join([chunk for chunk in response.iter_stream()])
116 assert content == b"Hello, world!"
117
118 # We streamed the response rather than reading it, so .content is not available.
119 with pytest.raises(RuntimeError):
120 response.content
121
122 # Once we've streamed the response, we can't access the stream again.
123 with pytest.raises(RuntimeError):
124 for _chunk in response.iter_stream():
125 pass # pragma: nocover
126
127
128 # Tests for reading and streaming async byte streams...
129
130
131 class AsyncByteIterator:
132 def __init__(self, chunks: List[bytes]) -> None:
133 self._chunks = chunks
134
135 async def __aiter__(self) -> AsyncIterator[bytes]:
136 for chunk in self._chunks:
137 yield chunk
138
139
140 @pytest.mark.trio
141 async def test_response_async_read():
142 stream = AsyncByteIterator([b"Hello, ", b"world!"])
143 response = httpcore.Response(200, content=stream)
144 assert await response.aread() == b"Hello, world!"
145 assert response.content == b"Hello, world!"
146
147
148 @pytest.mark.trio
149 async def test_response_async_streaming():
150 stream = AsyncByteIterator([b"Hello, ", b"world!"])
151 response = httpcore.Response(200, content=stream)
152 content = b"".join([chunk async for chunk in response.aiter_stream()])
153 assert content == b"Hello, world!"
154
155 # We streamed the response rather than reading it, so .content is not available.
156 with pytest.raises(RuntimeError):
157 response.content
158
159 # Once we've streamed the response, we can't access the stream again.
160 with pytest.raises(RuntimeError):
161 async for chunk in response.aiter_stream():
162 pass # pragma: nocover
+0 -49 tests/test_threadsafety.py
0 import concurrent.futures
1
2 import pytest
3
4 import httpcore
5
6 from .utils import Server
7
8
9 def read_body(stream: httpcore.SyncByteStream) -> bytes:
10 try:
11 return b"".join(chunk for chunk in stream)
12 finally:
13 stream.close()
14
15
16 @pytest.mark.parametrize(
17 "http2", [pytest.param(False, id="h11"), pytest.param(True, id="h2")]
18 )
19 def test_threadsafe_basic(server: Server, http2: bool) -> None:
20 """
21 The sync connection pool can be used to perform requests concurrently using
22 threads.
23
24 Also a regression test for: https://github.com/encode/httpx/issues/1393
25 """
26 with httpcore.SyncConnectionPool(http2=http2) as http:
27
28 def request(http: httpcore.SyncHTTPTransport) -> int:
29 status_code, headers, stream, extensions = http.handle_request(
30 method=b"GET",
31 url=(b"http", *server.netloc, b"/"),
32 headers=[server.host_header],
33 stream=httpcore.ByteStream(b""),
34 extensions={},
35 )
36 read_body(stream)
37 return status_code
38
39 with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
40 futures = [executor.submit(request, http) for _ in range(10)]
41 num_results = 0
42
43 for future in concurrent.futures.as_completed(futures):
44 status_code = future.result()
45 assert status_code == 200
46 num_results += 1
47
48 assert num_results == 10
+0 -19 tests/test_utils.py
0 import itertools
1 from typing import List
2
3 import pytest
4
5 from httpcore._utils import exponential_backoff
6
7
8 @pytest.mark.parametrize(
9 "factor, expected",
10 [
11 (0.1, [0, 0.1, 0.2, 0.4, 0.8]),
12 (0.2, [0, 0.2, 0.4, 0.8, 1.6]),
13 (0.5, [0, 0.5, 1.0, 2.0, 4.0]),
14 ],
15 )
16 def test_exponential_backoff(factor: float, expected: List[float]) -> None:
17 delays = list(itertools.islice(exponential_backoff(factor), 5))
18 assert delays == expected
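
# A generator consistent with the expected delays above could look like the
# following minimal sketch (the real `exponential_backoff` lives in
# `httpcore/_utils.py` and may differ in its details):
#
#     def exponential_backoff(factor: float) -> Iterator[float]:
#         yield 0
#         for n in itertools.count():
#             yield factor * (2 ** n)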
+0 -199 tests/utils.py
0 import contextlib
1 import functools
2 import socket
3 import subprocess
4 import tempfile
5 import threading
6 import time
7 from typing import Callable, Iterator, List, Optional, Tuple
8
9 import sniffio
10 import trio
11
12 try:
13 from hypercorn import config as hypercorn_config, trio as hypercorn_trio
14 except ImportError: # pragma: no cover # Python 3.6
15 hypercorn_config = None # type: ignore
16 hypercorn_trio = None # type: ignore
17
18
19 def lookup_async_backend():
20 return sniffio.current_async_library()
21
22
23 def lookup_sync_backend():
24 return "sync"
25
26
27 def _wait_can_connect(host: str, port: int):
28 while True:
29 try:
30 sock = socket.create_connection((host, port))
31 except ConnectionRefusedError:
32 time.sleep(0.25)
33 else:
34 sock.close()
35 break
36
37
38 class Server:
39 """
40 Base interface for servers we can test against.
41 """
42
43 @property
44 def sends_reason(self) -> bool:
45 raise NotImplementedError # pragma: no cover
46
47 @property
48 def netloc(self) -> Tuple[bytes, int]:
49 raise NotImplementedError # pragma: no cover
50
51 @property
52 def uds(self) -> str:
53 raise NotImplementedError # pragma: no cover
54
55 @property
56 def host_header(self) -> Tuple[bytes, bytes]:
57 raise NotImplementedError # pragma: no cover
58
59
60 class LiveServer(Server): # pragma: no cover # Python 3.6 only
61 """
62 A test server running on a live location.
63 """
64
65 sends_reason = True
66
67 def __init__(self, host: str, port: int) -> None:
68 self._host = host
69 self._port = port
70
71 @property
72 def netloc(self) -> Tuple[bytes, int]:
73 return (self._host.encode("ascii"), self._port)
74
75 @property
76 def host_header(self) -> Tuple[bytes, bytes]:
77 return (b"host", self._host.encode("ascii"))
78
79
80 class HypercornServer(Server): # pragma: no cover # Python 3.7+ only
81 """
82 A test server running in-process, powered by Hypercorn.
83 """
84
85 sends_reason = False
86
87 def __init__(
88 self,
89 app: Callable,
90 bind: str,
91 certfile: Optional[str] = None,
92 keyfile: Optional[str] = None,
93 ) -> None:
94 assert hypercorn_config is not None
95 self._app = app
96 self._config = hypercorn_config.Config()
97 self._config.bind = [bind]
98 self._config.certfile = certfile
99 self._config.keyfile = keyfile
100 self._config.worker_class = "asyncio"
101 self._started = False
102 self._should_exit = False
103
104 @property
105 def netloc(self) -> Tuple[bytes, int]:
106 bind = self._config.bind[0]
107 host, port = bind.split(":")
108 return (host.encode("ascii"), int(port))
109
110 @property
111 def host_header(self) -> Tuple[bytes, bytes]:
112 return (b"host", self.netloc[0])
113
114 @property
115 def uds(self) -> str:
116 bind = self._config.bind[0]
117 scheme, _, uds = bind.partition(":")
118 assert scheme == "unix"
119 return uds
120
121 def _run(self) -> None:
122 async def shutdown_trigger() -> None:
123 while not self._should_exit:
124 await trio.sleep(0.01)
125
126 serve = functools.partial(
127 hypercorn_trio.serve, shutdown_trigger=shutdown_trigger
128 )
129
130 async def main() -> None:
131 async with trio.open_nursery() as nursery:
132 await nursery.start(serve, self._app, self._config)
133 self._started = True
134
135 trio.run(main)
136
137 @contextlib.contextmanager
138 def serve_in_thread(self) -> Iterator[None]:
139 thread = threading.Thread(target=self._run)
140 thread.start()
141 try:
142 while not self._started:
143 time.sleep(1e-3)
144 yield
145 finally:
146 self._should_exit = True
147 thread.join()
148
149
150 @contextlib.contextmanager
151 def http_proxy_server(proxy_host: str, proxy_port: int):
152 """
153 This context manager launches a pproxy process like this:
154 $ pproxy -b <blocked_hosts_file> -l http://127.0.0.1:8080
155
156 It runs an HTTP proxy on 127.0.0.1:8080 and blocks access to the external
157 hosts listed in blocked_hosts_file.
158
159 Relevant pproxy docs can be found in their GitHub repo:
160 https://github.com/qwj/python-proxy
161 """
162 proc = None
163
164 with create_proxy_block_file(["blockedhost.example.com"]) as block_file_name:
165 try:
166 command = [
167 "pproxy",
168 "-b",
169 block_file_name,
170 "-l",
171 f"http://{proxy_host}:{proxy_port}/",
172 ]
173 proc = subprocess.Popen(command)
174
175 _wait_can_connect(proxy_host, proxy_port)
176
177 yield b"http", proxy_host.encode(), proxy_port, b"/"
178 finally:
179 if proc is not None:
180 proc.kill()
181
182
183 @contextlib.contextmanager
184 def create_proxy_block_file(blocked_domains: List[str]):
185 """
186 The context manager yields a pproxy block file.
187 The file contains line-delimited hostnames. We use it in the following test:
188 test_proxy_socket_does_not_leak_when_the_connection_hasnt_been_added_to_pool
189 """
190 with tempfile.NamedTemporaryFile(delete=True, mode="w+") as file:
191
192 for domain in blocked_domains:
193 file.write(domain)
194 file.write("\n")
195
196 file.flush()
197
198 yield file.name
33 import sys
44
55 SUBS = [
6 ('AsyncIteratorByteStream', 'IteratorByteStream'),
6 ('from .._compat import asynccontextmanager', 'from contextlib import contextmanager'),
7 ('from ..backends.auto import AutoBackend', 'from ..backends.sync import SyncBackend'),
8 ('import trio as concurrency', 'from tests import concurrency'),
9 ('AsyncByteStream', 'SyncByteStream'),
710 ('AsyncIterator', 'Iterator'),
811 ('AutoBackend', 'SyncBackend'),
9 ('Async([A-Z][A-Za-z0-9_]*)', r'Sync\2'),
12 ('Async([A-Z][A-Za-z0-9_]*)', r'\2'),
1013 ('async def', 'def'),
1114 ('async with', 'with'),
1215 ('async for', 'for'),
1619 ('aclose_func', 'close_func'),
1720 ('aiterator', 'iterator'),
1821 ('aread', 'read'),
22 ('asynccontextmanager', 'contextmanager'),
1923 ('__aenter__', '__enter__'),
2024 ('__aexit__', '__exit__'),
2125 ('__aiter__', '__iter__'),
2226 ('@pytest.mark.anyio', ''),
2327 ('@pytest.mark.trio', ''),
24 (r'@pytest.fixture\(params=\["auto", "anyio"\]\)',
25 '@pytest.fixture(params=["sync"])'),
26 ('lookup_async_backend', "lookup_sync_backend"),
27 ('auto', 'sync'),
28 ('AutoBackend', 'SyncBackend'),
2829 ]
2930 COMPILED_SUBS = [
3031 (re.compile(r'(^|\b)' + regex + r'($|\b)'), repl)
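# For example, under the substitutions above a line such as
#     async with AsyncConnectionPool() as pool:
# comes out as
#     with ConnectionPool() as pool: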
7778 def main():
7879 check_only = '--check' in sys.argv
7980 unasync_dir("httpcore/_async", "httpcore/_sync", check_only=check_only)
80 unasync_dir("tests/async_tests", "tests/sync_tests", check_only=check_only)
81 unasync_dir("tests/_async", "tests/_sync", check_only=check_only)
8182
8283
8384 if __name__ == '__main__':